I'm a newbie in the great world of NHibernate. I'm using version 2.0.1.GA. Here's my question. I have a table `Cars` with column `Manufacturer(nvarchar(50))` and a primary key `ID(int)`. My .NET class is: ``` public class Car { public virtual int ID { get; set; } public virtual string Manufacturer { get; set; } } ``` Now if I want to retrieve all cars made by Mercedes I have to type this: ``` using (var session = OpenSession()) { var cars = session .CreateCriteria(typeof(Car)) .Add(Restrictions.Like("Manufacturer", "Mercedes")) .List(); // ... } ``` I don't like the fact that I need to specify the property name as a string :( Is it possible to have something more refactoring-friendly, perhaps like this (it's only a suggestion)? ``` var ms = session .CreateCriteria<Car>() .Add(c => c.Manufacturer, Restrictions.Like("Mercedes")) .List(); ``` Is there anything like this in the current version (2.0.1.GA) or in a future version?
Like Google Ninja said, you can do it with NHibernate.Linq. The query would then be: `session.Linq<Car>().Where(c => c.Manufacturer == "Mercedes").ToList()` If someone ends up here and is using NH 3.0, the syntax is just a tad different (thanks to Michael Mrozek and Mike for the suggestion): `session.Query<Car>().Where(c => c.Manufacturer == "Mercedes").ToList()` I've used a binary that came bundled with [fluent-nhibernate](http://code.google.com/p/fluent-nhibernate/) that works with 2.0GA (I think; I'm not sure about the particular revision).
If you don't want to use Linq to NHibernate yet, there are a couple of alternatives for getting strongly typed Criteria queries: * <http://bugsquash.blogspot.com/2008/03/strongly-typed-nhibernate-criteria-with.html> * <http://www.kowitz.net/archive/2008/08/17/what-would-nhibernate-icriteria-look-like-in-.net-3.5.aspx>
C# Lambda expressions and NHibernate
[ "c#", "nhibernate", "lambda" ]
Consider the following example. It consists of two header files, declaring two different namespaces: ``` // a1.h #pragma once #include "a2.h" namespace a1 { const int x = 10; typedef a2::C B; } ``` and the second one is ``` // a2.h #pragma once #include "a1.h" namespace a2 { class C { public: int say() { return a1::x; } }; } ``` And a single source file, `main.cpp`: ``` #include <iostream> #include "a1.h" #include "a2.h" int main() { a2::C c; std::cout << c.say() << std::endl; } ``` This way it doesn't compile (tried GCC and MSVC). The error is that the `a1` namespace is not declared (C2653 on Windows). If you change the include order in `main.cpp` this way: ``` #include "a2.h" #include "a1.h" ``` you get a symmetric error message, i.e. the `a2` namespace is not declared. What's the problem?
You need to use a forward declaration in your header files because you have a circular reference. Something like this: ``` // a1.h #pragma once namespace a2 { class C; } namespace a1 { const int x = 10; typedef a2::C B; } ```
Just a guess, but your include reference is circular. Meaning the compiler can't figure out which header to compile first. a1 references a2 which references a1. If possible, merge/reorganize the files so that there is a non-circular chain of includes. Hope that helps!
C++ namespaces: cross-usage
[ "c++", "namespaces", "include" ]
I was wondering what everyone thinks of this. Is the code easy to follow? Or is there a better way to do this? By the way, this is how I am currently doing validation with ASP.NET MVC. I can follow it, but I am the one who wrote it. For some reason SO is removing the line breaks between the validators. ``` public override Validation<MemberCreate> ValidationRules() { var validation = new Validation<MemberCreate>(); validation.Add(x => x.Name) .LengthBetween( Config.Member.NameMinLength, Config.Member.NameMaxLength, Resources.Errors.LengthBetweenNotValid.Fmt( Resources.Titles.Name, Config.Member.NameMinLength, Config.Member.NameMaxLength)) .Characters(Resources.Errors.CharactersNotValid.Fmt(Resources.Titles.Name)); validation.Add(x => x.EmailAddress).Email( Resources.Errors.EmailNotValid.Fmt( Resources.Titles.EmailAddress)); validation.Add(x => x.VerifyEmailAddress).Equal( x => x.EmailAddress, Resources.Errors.CompareNotValid.Fmt( Resources.Titles.VerifyEmailAddress, Resources.Titles.EmailAddress)); validation.Add(x => x.PassWord).LengthGreaterThan( Config.Member.PassWordMinLength, Resources.Errors.LengthGreaterThanNotValid.Fmt( Resources.Titles.PassWord, Config.Member.PassWordMinLength)); validation.Add(x => x.VerifyPassWord).Equal( x => x.PassWord, Resources.Errors.CompareNotValid.Fmt( Resources.Titles.VerifyPassWord, Resources.Titles.PassWord)); return validation; } ```
I'll vote yes on puritanical grounds - the formatting is inconsistent and there isn't a comment in sight. Pragmatically one can follow what you're doing with a little effort though.
I'm not a C# guy by any means, but it appears straightforward. It seems to be putting a bunch of rules into a structure of some sort, and I assume it would then apply them to validate messages of some sort. An application of the Command pattern, I'd think.
Is this ASP.NET MVC Validation Code Easy to Read and Follow?
[ "c#", "asp.net-mvc" ]
Part of the development team I work with has been given the challenge of writing a server for integration with our product. We have some low-level sensor devices that provide a C SDK, and we want to share them over a network for use by people collecting data. Sounds simple, right? Someone would connect a sensor device to their machine in one part of the building and run our server, thus sharing the device(s) with the rest of the network. Then a client would connect to that server via our application and collect sensor readings from the device. I created a simple, language-agnostic network protocol, and a reference implementation in Java. The problem is creating an implementation that will work with our devices that only provide an SDK written in C. We were thinking of doing the following: 1. Create polling threads that collect and store the most recent readings from each connected device. 2. Use a multi-threaded server to spin off each incoming connection to a worker thread. 3. When a worker thread receives a request for a sensor reading, the most recent value collected by the polling thread is sent back to the client. That's a lot of threading, especially in C. So, to review, the general requirements are: * Runs on Windows XP/Vista, Linux, and OS X machines * Written in C or C++, to interact with the C SDK we have * Accepts a variable number of simultaneous connections (worker threads) * Must use threads, not forking (don't want to deal with another layer of IPC) Can anyone suggest a library and preferably some example code to get us started?
I've used [Boost.Thread](http://www.boost.org/doc/libs/1_37_0/doc/html/thread.html) & [Boost.Asio](http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio.html) to build a multi-threaded server on Windows & Linux systems. The tutorials made it easy to get started.
The best way to write such a server is not to write one, and to rearchitect your system so it is not necessary, and/or to reuse components that already exist. Because: > Someone would connect a sensor device > to their machine in one part of the > building and run our server, thus > sharing the device(s) with the rest of > the network. This also has the potential to share the entire machine with the rest of the network, if your code has a vulnerability (which it probably will, as you're writing it in C++ from scratch and inventing a new protocol). So, do it the other way around. Install a simple client on the machine that has the sensor hardware, then run it either all the time, or periodically, and have it push (post) results to a central server. The central server could even be a standard web server. Or it could be a database. (Notice that both of these have been written already - no need to reinvent the wheel ;-) Your application then works the same way you have in mind now, however it collects data from the database rather than the sensors. The part running on the machine with the sensor, however, has shrunk from a multi-threaded custom server nightmare, to a nice little single threaded command line client that only makes *outgoing* connections, and which can be run from cron (or equivalent on windows). Even if you need real time data collection (and from your description it sounds like you do not) it still may be better for the sensor collector to be a client and not a server. Let it open a long lived connection to a central collector (or a group of them) and await instructions to provide its data. edit: ceretullis and pukku's answers suggest a nice variation on this using multicast - see [this answer](https://stackoverflow.com/questions/383371/what-is-the-best-way-to-implement-a-cross-platform-multi-threaded-server-in-cc#383502) and the comments
What is the best way to implement a cross-platform, multi-threaded server in C/C++?
[ "c++", "c", "multithreading", "network-programming", "client-server" ]
Is this allowed? : ``` class A; void foo() { static A(); } ``` I get signal 11 when I try to do it, but the following works fine: ``` class A; void foo() { static A a; } ``` Thank you.
Nope. There is no such thing as an "anonymous object" in C++. There is such a thing as creating an object of type A that is immediately discarded; what you've written is an expression that constructs an A that is never assigned to a variable, much as the return value of printf is usually never assigned or used. In that code, if it worked, you'd be asking for a nameless temporary to be given static storage. Since there is no named object to refer to afterwards, the declaration is meaningless.
You can create an "anonymous" automatic variable, but not a static one. The following would create an object of class A, call the constructor, and then call the destructor immediately. ``` class A; void foo() { A(); } ``` You could get a similar effect by allocating the object on the heap or [constructing it in place](https://stackoverflow.com/questions/222557/cs-placement-new) in a preallocated location. ``` void foo() { new A(); } void foo() { static char memory[sizeof (A)]; new (memory) A(); } ``` However, in both cases the object cannot be cleaned up correctly, since a pointer is not held for a later call to delete. Even though the static memory will be released, the destructor will never be called. Anonymous objects only really make sense when used with a garbage collector.
Can an anonymous object be declared static in C++?
[ "c++", "static", "anonymous" ]
I'm using the [subprocess module](http://docs.python.org/library/subprocess.html) to start a subprocess and connect to its output stream (standard output). I want to be able to execute non-blocking reads on its standard output. Is there a way to make .readline non-blocking or to check if there is data on the stream before I invoke `.readline`? I'd like this to be portable or at least work under Windows and Linux. Here is how I do it for now (it's blocking on the `.readline` if no data is available): ``` p = subprocess.Popen('myprogram.exe', stdout = subprocess.PIPE) output_str = p.stdout.readline() ```
[`fcntl`](https://stackoverflow.com/questions/375427/non-blocking-read-on-a-stream-in-python/4025909#4025909), [`select`](https://stackoverflow.com/questions/375427/non-blocking-read-on-a-stream-in-python/375511#375511), [`asyncproc`](https://stackoverflow.com/questions/375427/non-blocking-read-on-a-stream-in-python/437888#437888) won't help in this case. A reliable way to read a stream without blocking regardless of operating system is to use [`Queue.get_nowait()`](https://docs.python.org/3/library/queue.html#queue.Queue.get_nowait): ``` import sys from subprocess import PIPE, Popen from threading import Thread try: from queue import Queue, Empty except ImportError: from Queue import Queue, Empty # python 2.x ON_POSIX = 'posix' in sys.builtin_module_names def enqueue_output(out, queue): for line in iter(out.readline, b''): queue.put(line) out.close() p = Popen(['myprogram.exe'], stdout=PIPE, bufsize=1, close_fds=ON_POSIX) q = Queue() t = Thread(target=enqueue_output, args=(p.stdout, q)) t.daemon = True # thread dies with the program t.start() # ... do other things here # read line without blocking try: line = q.get_nowait() # or q.get(timeout=.1) except Empty: print('no output yet') else: # got line # ... do something with line ```
I have often had a similar problem; Python programs I write frequently need to have the ability to execute some primary functionality while simultaneously accepting user input from the command line (stdin). Simply putting the user input handling functionality in another thread doesn't solve the problem because `readline()` blocks and has no timeout. If the primary functionality is complete and there is no longer any need to wait for further user input I typically want my program to exit, but it can't because `readline()` is still blocking in the other thread waiting for a line. A solution I have found to this problem is to make stdin a non-blocking file using the fcntl module: ``` import fcntl import os import sys # make stdin a non-blocking file fd = sys.stdin.fileno() fl = fcntl.fcntl(fd, fcntl.F_GETFL) fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK) # user input handling thread while mainThreadIsRunning: try: input = sys.stdin.readline() except: continue handleInput(input) ``` In my opinion this is a bit cleaner than using the select or signal modules to solve this problem but then again it only works on UNIX...
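The `select` approach this answer mentions looks like the following on Unix: poll the pipe's file descriptor with a timeout, and only call `readline` once data is known to be waiting. This is a minimal sketch, not the answer's own code; the child command is a stand-in for `myprogram.exe`, and note that on Windows `select` works only on sockets, not pipes, which is why the threaded Queue approach above is the portable one.

```python
import select
import subprocess
import sys

# Spawn a child whose stdout we poll; the command here is just a
# placeholder for any program that produces output on stdout.
p = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
)

# Wait up to 5 seconds for data to appear on the pipe before reading.
ready, _, _ = select.select([p.stdout], [], [], 5.0)
if ready:
    line = p.stdout.readline()  # data is waiting, so this read returns
else:
    line = b""  # nothing arrived within the timeout
p.wait()
```

Like the `fcntl` trick, this only buys a non-blocking *check*; a long line without a trailing newline could still make `readline` wait for the rest of it.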
A non-blocking read on a subprocess.PIPE in Python
[ "python", "io", "subprocess", "nonblocking" ]
I have a login screen that I force to be ssl, so like this: <https://www.foobar.com/login> then after they login, they get moved to the homepage: <https://www.foobar.com/dashbaord> However, I want to move people off of SSL once logged in (to save CPU), so just after checking that they are in fact logged in on <https://www.foobar.com/dashbaord> I move them to <http://www.foobar.com/dashbaord> Well, this always seems to wipe out the session variables, because when the page runs again, it confirms they are logged in (as all pages do), the session appears not to exist, and so it moves them to the login screen. Oddness/findings: 1. The second login always works, and happily gets me to <http://www.foobar.com/dashbaord> 2. It successfully creates a cookie on the first login 3. If I login twice, then logout, and login again, I don't need two logins (I seem to have traced this to the fact that the cookie exists). If I delete the cookie, I'm back to two logins. 4. After the second login, I can move from non-ssl to ssl and the session persists. 5. On the first login, the move to the non-ssl site wipes out the session entirely; manually moving back to the ssl site still forces me to login again. 6. The second login uses the exact same mechanism as the first, over ssl What I tried: 1. Playing with Cake's settings for security.level and session.checkagent - nothing 2. Having cake store the sessions in the db (as opposed to the file system) - nothing 3. Testing in FF, IE, Chrome on an XP machine. So I feel like this is something related to the cookie being created but not being read. Environment: 1. Debian 2. Apache 2 3. Mysql 4 4. PHP 5 5. CakePHP 6. Sessions are being saved PHP default, as files
I figured this out. Cake was automatically switching the session.cookie\_secure ini value on the fly while under SSL connections, so the cookie being created was a secure cookie, which the second (non-SSL) page wouldn't recognize. Solution: comment out this line in /cake/lib/session.php, around line 420: > ini\_set('session.cookie\_secure', 1); (Just search for that to find it, as the line number will change as releases come out.)
While the accepted answer meets the OP's desire to "move people off of SSL once logged in" - it's horribly insecure in that it exposes the user session to hijacking (See Firesheep for an easy exploit). A better compromise between the default behavior of CakePHP (which requires all pages to be served SSL after a user authenticates over SSL) and the accepted answer (which serves all authenticated pages unencrypted and exposes the authenticated cookie) is to serve pages encrypted over SSL if and only if they require authentication. An easy way to accomplish this is to maintain two session cookies - one that is served secure and holds the authentication information and another which is served insecure. A simple implementation to support such a dual-session approach will use a session\_handler to override the session.name like so: ``` if (env('HTTPS')) { ini_set('session.name', Configure::read('Session.cookie').'-SECURE'); }else{ ini_set('session.name', Configure::read('Session.cookie')); } ``` One item to keep in mind with this approach is that to link from a non-SSL page directly to a page that requires authentication will require you to explicitly link using https - since you'll need to send the session cookie containing the authentication information and the browser will only do so if the link is encrypted.
Session not saving when moving from ssl to non-ssl
[ "php", "cakephp", "session" ]
I'm trying to write JavaScript to find page elements relative to a given element by using parentNode, firstChild, nextSibling, childNodes[], and so on. Firefox messes this up by inserting text nodes between each HTML element. I've read that I can defeat this by removing all whitespace between elements, but I've tried that and it doesn't work. Is there a way to write code that works on all modern browsers? For example: ``` <div id="parent"><p id="child">Hello world</p></div> ``` In IE parent.firstChild is child, but in Firefox it's a phantom Text element.
I have a workaround. You can insert the two methods below: ``` Element.prototype.fChild = function(){ var firstChild = this.firstChild; while(firstChild != null && firstChild.nodeType === 3){ firstChild = firstChild.nextSibling; } return firstChild; } Element.prototype.nSibling = function(){ var nextSibling = this.nextSibling; while(nextSibling != null && nextSibling.nodeType === 3){ nextSibling = nextSibling.nextSibling; } return nextSibling; } ``` and you can now use: ``` document.getElementById("parent").fChild(); document.getElementById("parent").nSibling(); ``` instead of: ``` document.getElementById("parent").firstChild; document.getElementById("parent").nextSibling; ```
You have to check that the nodeType == 1. ``` if (el.nodeType === 1) { return el; } ``` I wrote a small DOM traversing class for ya (mostly copied from MooTools). Download here: <http://gist.github.com/41440> ``` DOM = function () { function get(id) { if (id && typeof id === 'string') { id = document.getElementById(id); } return id || null; } function walk(element, tag, walk, start, all) { var el = get(element)[start || walk], elements = all ? [] : null; while (el) { if (el.nodeType === 1 && (!tag || el.tagName.toLowerCase() === tag)) { if (!all) { return el; } elements.push(el); } el = el[walk]; } return elements; } return { // Get the element by its id get: get, walk: walk, // Returns the previousSibling of the Element (excluding text nodes). getPrevious: function (el, tag) { return walk(el, tag, 'previousSibling'); }, // Like getPrevious, but returns a collection of all the matched previousSiblings. getAllPrevious: function (el, tag) { return walk(el, tag, 'previousSibling', null, true); }, // As getPrevious, but tries to find the nextSibling (excluding text nodes). getNext: function (el, tag) { return walk(el, tag, 'nextSibling'); }, // Like getNext, but returns a collection of all the matched nextSiblings. getAllNext: function (el, tag) { return walk(el, tag, 'nextSibling', null, true); }, // Works as getPrevious, but tries to find the firstChild (excluding text nodes). getFirst: function (el, tag) { return walk(el, tag, 'nextSibling', 'firstChild'); }, // Works as getPrevious, but tries to find the lastChild. getLast: function (el, tag) { return walk(el, tag, 'previousSibling', 'lastChild'); }, // Works as getPrevious, but tries to find the parentNode. getParent: function (el, tag) { return walk(el, tag, 'parentNode'); }, // Like getParent, but returns a collection of all the matched parentNodes up the tree. getParents: function (el, tag) { return walk(el, tag, 'parentNode', null, true); }, // Returns all the Element's children (excluding text nodes). 
getChildren: function (el, tag) { return walk(el, tag, 'nextSibling', 'firstChild', true); }, // Removes the Element from the DOM. dispose: function (el) { el = get(el); return (el.parentNode) ? el.parentNode.removeChild(el) : el; } }; }(); // Now you can do: DOM.getFirst("parent") // first child // or DOM.getFirst("parent", "p") // first p tag child // or var el = DOM.get("parent") // get element by id DOM.getFirst(el) // first child ```
How to handle Firefox inserting text elements between tags
[ "javascript", "html", "firefox" ]
I want my web application users to download some data as an Excel file. I have the following function to send an InputStream through the response object. ``` public static void sendFile(InputStream is, HttpServletResponse response) throws IOException { BufferedInputStream in = null; try { int count; byte[] buffer = new byte[BUFFER_SIZE]; in = new BufferedInputStream(is); ServletOutputStream out = response.getOutputStream(); while(-1 != (count = in.read(buffer))) out.write(buffer, 0, count); out.flush(); } catch (IOException ioe) { System.err.println("IOException in Download::sendFile"); ioe.printStackTrace(); } finally { if (in != null) { try { in.close(); } catch (IOException ioe) { ioe.printStackTrace(); } } } } ``` I would like to transform my HSSFWorkbook object into an input stream and pass it to the previous method. ``` public InputStream generateApplicationsExcel() { HSSFWorkbook wb = new HSSFWorkbook(); // Populate the excel object return null; // TODO. return the wb as InputStream } ``` <http://poi.apache.org/apidocs/org/apache/poi/hssf/usermodel/HSSFWorkbook.html>
The problem with your question is that you are mixing OutputStreams and InputStreams. An InputStream is something you read from and an OutputStream is something you write to. This is how I write a POI object to the output stream. ``` // this part is important to let the browser know what you're sending response.setContentType("application/vnd.ms-excel"); // the next two lines make the report a downloadable file; // leave this out if you want IE to show the file in the browser window String fileName = "Blah_Report.xls"; response.setHeader("Content-Disposition", "attachment; filename=" + fileName); // get the workbook from wherever HSSFWorkbook wb = getWorkbook(); OutputStream out = response.getOutputStream(); try { wb.write(out); } catch (IOException ioe) { // if this happens there is probably no way to report the error to the user if (!response.isCommitted()) { response.setContentType("text/html"); // show response text now } } ``` If you wanted to re-use your existing code you'd have to store the POI data somewhere then turn THAT into an input stream. That'd be easily done by writing it to a ByteArrayOutputStream, then reading those bytes using a ByteArrayInputStream, but I wouldn't recommend it. Your existing method would be more useful as a generic Pipe implementation, where you can pipe the data from an InputStream to an OutputStream, but you don't need it for writing POI objects.
You can create an InputStream from the workbook by writing it to an in-memory buffer first: ``` public InputStream generateApplicationsExcel() throws IOException { HSSFWorkbook wb = new HSSFWorkbook(); // Populate the excel object ByteArrayOutputStream bos = new ByteArrayOutputStream(); wb.write(bos); return new ByteArrayInputStream(bos.toByteArray()); } ```
How can I get an Input Stream from HSSFWorkbook Object
[ "java", "apache-poi" ]
I need to make a Control which shows only an outline, and I need to place it over a control that's showing a video. If I make my Control transparent, then the video is obscured, because transparent controls are painted by their parent control and the video isn't painted by the control; it's shown using DirectShow or another library, so instead the parent control paints its BackColor. So - can I make a control that doesn't get painted *at all*, except where it's opaque? That way, the parent control wouldn't paint over the video. I know I could make the border out of four controls (or more if I want it dashed) but is it possible to do what I want using just one control? --- rslite is right - although you don't even need to go so far as to use PInvoke like his example does - the Control.Region property is entirely sufficient.
You could try to make a Region with a hole inside and set the control region with SetWindowRgn. Here is an [example](http://www.java2s.com/Code/CSharp/GUI-Windows-Form/PictureButton.htm) (I couldn't find a better one). The idea is to create two regions and subtract the inner one from the outer one. I think that should give you what you need.
I use an overridden control class for that. 1. The `CreateParams` property now indicates that the control can be transparent. 2. `InvalidateEx` is necessary to invalidate the parent's region where the control is placed 3. The automatic painting of the control's back colour has to be disabled ``` Imports System.Windows.Forms.Design Imports System.Reflection Public Class TransparentControl : Inherits Control Protected Overrides ReadOnly Property CreateParams As CreateParams Get Dim cp As CreateParams = MyBase.CreateParams() cp.ExStyle = cp.ExStyle Or 32 'WS_EX_TRANSPARENT Return cp End Get End Property Protected Sub InvalidateEx(rct As Rectangle) Me.Invalidate(rct) If IsNothing(Parent) Then Exit Sub Parent.Invalidate(New Rectangle(Me.Location, rct.Size), True) End Sub Protected Sub InvalidateEx() Me.Invalidate() If IsNothing(Parent) Then Exit Sub Parent.Invalidate(New Rectangle(Me.Location, Me.Size), True) End Sub Protected Overrides Sub OnPaintBackground(pevent As PaintEventArgs) 'MyBase.OnPaintBackground(pevent) End Sub Protected Overrides Sub OnPaint(e As PaintEventArgs) MyBase.OnPaint(e) 'draw the layout on e.Graphics End Sub End Class ``` You can extend this class to make your own control. After building, the class will appear in the toolbox. Hope this does the trick.
How do I make a genuinely transparent Control?
[ "c#", "winforms", "transparency" ]
I need to update the comments field in a table for a large list of customer\_ids. The comment needs to be updated to include the existing comment, appending some text and the password, which is in another table. I'm not quite sure how to do this. Here is some code that does this for a single customer ID. How would I go about doing this for a list of over 100 customer IDs? My first thought is to make a temp table and place all the customer IDs in there, but I'm still not sure how to code it after that, since the customer\_id is used in the query twice. ``` Update Contract SET Contract_Comment= Contract_Comment || '; 12/29/2008 Password `' || (SELECT Password FROM WLogin WHERE default_customer_id='1234' ) ||'`'|| ' reinstated per Mickey Mouse;' WHERE Customer_id='1234' ```
Well, assuming the contract table has a customer\_id, or is easily joined to a table that does have one.... ``` update contract c set contract_comment = contract_comment || '; 12/29/2008 Password ' || (select password from WLogin w where w.default_customer_id = c.customer_id) ||''|| ' reinstated per Mickey Mouse;' WHERE Customer_id in (1234, 4567) ``` Or, replace the list at the end with another subselect... WHERE Customer\_id in (select customer\_id from ...).
Todd's answer above works fine using an IN clause (or an EXISTS clause if you're storing the IDs in a temp table). I'd just enhance it as follows: ``` UPDATE contract c SET contract_comment = nvl2(contract_comment, contract_comment || '; ', '') || '12/29/2008 Password ' || NVL((SELECT PASSWORD FROM wlogin p WHERE p.default_customer_id = c.customer_id), '<NULL>') || '' || ' reinstated per Mickey Mouse' WHERE EXISTS (SELECT 'x' FROM wlogin l WHERE l.default_customer_id = c.customer_id) ``` That ensures 1) that you handle a null password and 2) that you only update rows for customers with a wlogin record. Feel free to add something like: ``` AND EXISTS (SELECT 'y' FROM temp_ids_table t WHERE t.customer_id = c.customer_id) ```
Update in Oracle
[ "sql", "database", "oracle", "sql-update" ]
I'm starting a Python project and expect to have 20 or more classes in it. As is good practice, I want to put each one in a separate file. However, the project directory quickly becomes swamped with files (or will when I do this). If I put a file to import in a folder I can no longer import it. How do I import a file from another folder, and will I need to reference the class it contains differently now that it's in a folder? Thanks in advance
Create an `__init__.py` file in your project's folder, and Python will treat that directory as a package. Modules in your package directory can then be imported using syntax like: ``` from package import module import package.module ``` Within `__init__.py`, you may create an `__all__` list that defines `from package import *` behavior: ``` # name1 and name2 will be available in calling module's namespace # when using "from package import *" syntax __all__ = ['name1', 'name2'] ``` And here is [way more information than you even want to know about packages in Python](http://www.python.org/doc/essays/packages/) Generally speaking, a good way to learn about how to organize a lot of code is to pick a popular Python package and see how they did it. I'd check out [Django](http://www.djangoproject.com/) and [Twisted](http://twistedmatrix.com/trac/), for starters.
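To make the layout concrete, here is a sketch that builds a tiny package on disk at runtime and then imports a class from it. The `shapes` and `circle` names are invented for the example; in a real project you would simply create the files directly.

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway package on disk:
#   shapes/
#       __init__.py
#       circle.py
# ("shapes" and "circle" are made-up names for illustration.)
root = tempfile.mkdtemp()
pkg = os.path.join(root, "shapes")
os.mkdir(pkg)

with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("__all__ = ['circle']\n")  # controls 'from shapes import *'

with open(os.path.join(pkg, "circle.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class Circle:
            def __init__(self, radius):
                self.radius = radius

            def area(self):
                return 3.14159 * self.radius ** 2
        """))

# Make the parent directory importable, then use normal package syntax.
sys.path.insert(0, root)
from shapes.circle import Circle

c = Circle(2)
```

The class is referenced through its module (`shapes.circle.Circle`), which answers the question about how naming changes once a file moves into a folder.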
"As is good practice I want to put them in a separate file each. " This is not actually a very good practice. You should design modules that contain closely-related classes. As a practical matter, no class actually stands completely alone. Generally classes come in clusters or groups that are logically related.
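In that spirit, a cluster of related classes can live in a single module rather than one file each. A small illustrative sketch (the module and class names are made up):

```python
# geometry.py -- one module holding a cluster of closely-related classes,
# instead of a separate file per class.
import math


class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y


class Segment:
    def __init__(self, a, b):
        self.a, self.b = a, b

    def length(self):
        # Euclidean distance between the two endpoints.
        return math.hypot(self.b.x - self.a.x, self.b.y - self.a.y)
```

A caller then imports the whole cluster at once (`from geometry import Point, Segment`) instead of chasing twenty single-class files.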
Organising my Python project
[ "python", "project-organization" ]
I'm having a little bit of trouble understanding what the problem is here. I have a bit of code that pulls records from a database using LINQ and puts them into an object which is cast into an interface. It looks a bit like this: ``` public IEnumerable<ISomeObject> query() { return from a in dc.SomeTable select new SomeObject { //Assign various members here } as ISomeObject; } ``` When I test this, I put the returned IEnumerable into a variable called results and run this line: ``` Assert.AreEqual(EXPECTED_COUNT, results.Count()); ``` When this is run, I get a System.Security.VerificationException: "Operation could destabilize the runtime." I found the solution [here](http://devlicio.us/blogs/derik_whittaker/archive/2008/11/29/quot-operation-could-destabilize-the-runtime-quot-from-casting-from-concrete-to-interfaces-with-linq.aspx), which is this: ``` var results = from a in dc.SomeTable select new SomeObject { //Assign various members here } as ISomeObject; return results.OfType<ISomeObject>(); ``` This works, but I'm having trouble understanding what's happening here. Why did I get the exception in the first place and how did the lines of code above fix it? The MSDN documentation seems to suggest that this is an issue of type safety, but I'm not seeing where the previous code was type-unsafe. **UPDATE** A little bit more information I found out. The first example works if I make the return type IQueryable. This sheds a little bit more light on *what* was going wrong, but I'm still confused about the *why*. Why didn't the compiler force me to cast the IEnumerable into an IQueryable?
I believe it is an issue of covariance or contravariance as noted by [this forum post](http://www.manning-sandbox.com/message.jspa?messageID=77137#77137). See [Covariance and Contravariance in C#, Part Two: Array Covariance](http://blogs.msdn.com/ericlippert/archive/2007/10/17/covariance-and-contravariance-in-c-part-two-array-covariance.aspx) and the rest of the [Covariance and Contravariance series](http://blogs.msdn.com/ericlippert/archive/tags/Covariance+and+Contravariance/default.aspx) at Eric Lippert's blog. Although he is dealing with Arrays in the article I linked, I believe a similar problem presents itself here. With your first example, you are returning an `IEnumerable` that could contain objects that implement an interface that is *larger* than `ISomeTable` (i.e. - you could put a Turtle into an Animals IEnumerable when that IEnumerable can only contain Giraffes). I think the reason it works when you return `IQueryable` is because that is *larger/wider* than anything you could return, so you're guaranteed that what you return you will be able to handle(?). In the second example, [OfType](http://msdn.microsoft.com/en-us/library/bb360913.aspx) is ensuring that what gets returned is an object that stores all the information necessary to return only those elements that can be cast to Giraffe. I'm pretty sure it has something to do with the issues of type safety outlined above, but as Eric Lippert says [Higher Order Functions Hurt My Brain](http://blogs.msdn.com/ericlippert/archive/2007/10/24/covariance-and-contravariance-in-c-part-five-higher-order-functions-hurt-my-brain.aspx) and I am having trouble expressing precisely why this is a co/contravariant issue.
I found this entry while looking for my own solution to "operation could destabilize the runtime". While the covariance/contra-variance advice above looks very interesting, in the end I found that I get the same error message by running my unit tests with code coverage turned on and the AllowPartiallyTrustedCallers assembly attribute set. **Removing the AllowPartiallyTrustedCallers attribute caused my tests to run fine.** I could also turn off code coverage to make them run but that was not an acceptable solution. Hopefully this helps someone else who makes it to this page trying to find a solution to this issue.
Operation could destabilize the runtime?
[ "", "c#", ".net", "linq", "linq-to-sql", ".net-3.5", "" ]
I am running into a design disagreement with a co-worker and would like people's opinion on object constructor design. In brief, which object construction method would you prefer and why? ``` public class myClass { Application m_App; public myClass(ApplicationObject app) { m_App = app; } public method DoSomething { m_App.Method1(); m_App.Object.Method(); } } ``` Or ``` public class myClass { Object m_someObject; Object2 m_someOtherObject; public myClass(Object instance, Object2 instance2) { m_someObject = instance; m_someOtherObject = instance2; } public method DoSomething { m_someObject.Method(); m_someOtherObject.Method(); } } ``` The back story is that I ran into what appears to be a fundamentally different view on constructing objects today. Currently, objects are constructed using an Application class which contains all of the current settings for the application (Event log destination, database strings, etc...) So the constructor for every object looks like: ``` public Object(Application) ``` Many classes hold the reference to this Application class individually. Inside each class, the values of the application are referenced as needed. E.g. ``` Application.ConfigurationStrings.String1 or Application.ConfigSettings.EventLog.Destination ``` Initially I thought you could use both methods. The problem is that in the bottom of the call stack you call the parameterized constructor then, higher up the stack, when the new object expects a reference to the application object to be there, we ran into a lot of null reference errors and saw the design flaw. My feeling on using an application object to set every class is that it breaks encapsulation of each object and allows the Application class to become a god class which holds information for everything. I run into problems when thinking of the downsides to this method. 
I wanted to change the objects constructor to accept only the arguments it needs so that `public object(Application)` would change to `public object(classmember1, classmember2 etc...)`. I feel currently that this makes it more testable, isolates change, and doesn't obfuscate the necessary parameters to pass. Currently, another programmer does not see the difference and I am having trouble finding examples or good reasons to change the design, and saying it's my instinct and just goes against the OO principles I know is not a compelling argument. Am I off base in my design thoughts? Does anyone have any points to add in favor of one or the other?
Hell, why not just make one giant class called "Do" and one method on it called "It" and pass the whole universe into the It method? ``` Do.It(universe) ``` Keep things as small as possible. Discrete means easier to debug when things inevitably break.
My view is that you give the class the smallest set of "stuff" it needs for it to do its job. The "Application" method is easier upfront but as you've seen already, it will lead to maintenance issues.
Instantiating objects with a Configuration class or with Parameters
[ "", "c#", ".net", "constructor", "" ]
What is the best way to print stuff from c#/.net? The question is in regard to single pages as well as to reports containing lots of pages. It would be great to get a list of the most common printing libs containing the main features and gotchas of each of them. [Update] for standard windows clients (or servers), not for web apps, please.
For reports, I use the RDLC control. For everything else, I use the inherent printing objects within .NET. **Edit** The inherent printing objects are all found in the System.Drawing.Printing namespace. When you use the PrintDialog or the PrintPreviewDialog in a WinForms (or WPF) application, it is to these objects that you're turning over control. The fundamental concept is that you're drawing to the printer. The simplest form of this is: ``` Sub MyMethod() Dim x As New PrintDocument AddHandler x.PrintPage, AddressOf printDoc_PrintPage x.Print() End Sub Sub printDoc_PrintPage(sender As Object, e As PrintPageEventArgs) Dim textToPrint As String = ".NET Printing is easy" Dim printFont As New Font("Courier New", 12) Dim leftMargin As Integer = e.MarginBounds.Left Dim topMargin As Integer = e.MarginBounds.Top e.Graphics.DrawString(textToPrint, printFont, Brushes.Black, leftMargin, topMargin) End Sub ``` What's happening here is that when my object (x) is sent the print command, it raises the "PRINT PAGE" event (which is designed to print 1 page at a time). This event then uses the Graphics attribute of the PrintPageEventArgs to draw the relevant string directly to the print spooler. [Here's one tutorial](http://www.ondotnet.com/pub/a/dotnet/2002/06/24/printing.html), and a quick Google search for ".NET printing tutorial" returns a bit over 200K results.
We used a set of third party DLLs from [PDFSharp](http://www.pdfsharp.com/PDFsharp/index.php?option=com_frontpage&Itemid=1 "PDFSharp") who in turn use DLLs from MigraDoc. I'm not privy to all the reasons that we went that direction (the decision was made by a senior developer), but I can tell you that: * It seems to be in active development. * It had most of the features we needed. * The source code is available. Although it used some patterns and conventions that I hadn't seen before, once I got on to them, it was fairly easy to make the changes. I added support for using the System.Drawing.Image directly rather than as saving files. * It is not documented well either internally or externally.
Best way to print for Windows Clients (Not Web Apps)?
[ "", "c#", ".net", "printing", "" ]
Will the following code result in a deadlock using C# on .NET? ``` class MyClass { private object lockObj = new object(); public void Foo() { lock(lockObj) { Bar(); } } public void Bar() { lock(lockObj) { // Do something } } } ```
No, not as long as you are locking on the same object. The recursive code effectively **already has the lock** and so can continue unhindered. `lock(object) {...}` is shorthand for using the [Monitor](http://msdn.microsoft.com/en-us/library/system.threading.monitor.aspx) class. As [Marc points out](https://stackoverflow.com/questions/391913/re-entrant-locks-in-c/391921#391921), `Monitor` allows *[re-entrancy](http://en.wikipedia.org/wiki/Reentrant_(subroutine))*, so repeated attempts to lock on an object **on which the current thread already has a lock** will work just fine. If you start locking on *different* objects, that's when you have to be careful. Pay particular attention to: * Always acquire locks on a given number of objects in the same sequence. * Always release locks in the **reverse** sequence to how you acquire them. If you break either of these rules you're pretty much guaranteed to get deadlock issues *at some point*. Here is one good webpage describing thread synchronisation in .NET: <http://dotnetdebug.net/2005/07/20/monitor-class-avoiding-deadlocks/> Also, lock on as few objects at a time as possible. Consider applying [coarse-grained locks](http://martinfowler.com/eaaCatalog/coarseGrainedLock.html) where possible. The idea being that if you can write your code such that there is an object graph and you can acquire locks on the root of that object graph, then do so. This means you have one lock on that root object and therefore don't have to worry so much about the sequence in which you acquire/release locks. *(One further note, your example isn't technically recursive. For it to be recursive, `Bar()` would have to call itself, typically as part of an iteration.)*
Well, `Monitor` allows re-entrancy, so you can't deadlock yourself... so no: it shouldn't do
Re-entrant locks in C#
[ "", "c#", ".net", "multithreading", "locking", "deadlock", "" ]
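The re-entrancy behaviour discussed in the chosen answer above is not C#-specific. As a cross-language sketch (Python rather than C#, since the idea belongs to the runtime, not the syntax): Python's `threading.RLock` behaves like .NET's `Monitor` and lets the owning thread re-acquire, while a plain `threading.Lock` is not re-entrant and reproduces the feared deadlock.

```python
import threading

lock = threading.RLock()  # re-entrant, like .NET's Monitor

def bar():
    with lock:            # second acquisition by the same thread succeeds
        return "did something"

def foo():
    with lock:            # first acquisition
        return bar()      # nested call re-acquires the same lock: no deadlock

result = foo()

# A plain Lock is NOT re-entrant: a second *blocking* acquire on the
# same thread would hang forever -- the deadlock the question asks about.
plain = threading.Lock()
plain.acquire()
second = plain.acquire(blocking=False)  # False: already held by this thread
plain.release()

print(result, second)  # did something False
```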
I was recently trying to update [my game](http://kentb.blogspot.com/2008/12/kentis.html) to store graphics in compressed formats (JPEG and PNG). Whilst I ended up settling on a different library, my initial attempt was to incorporate [ijg](http://www.ijg.org/) to do JPEG decompression. However, I was unable to get even the simplest console application to work and am wondering if anyone might be able to shed some light on the reasons why. Here is my code, which is linked to the *jpeg.lib* that is part of the ijg packages: ``` #include "stdafx.h" #include <stdio.h> #include <assert.h> #include <jpeglib.h> int _tmain(int argc, _TCHAR* argv[]) { struct jpeg_decompress_struct cinfo; struct jpeg_error_mgr jerr; JSAMPARRAY buffer; int row_stride; //initialize error handling cinfo.err = jpeg_std_error(&jerr); //initialize the decompression jpeg_create_decompress(&cinfo); FILE* infile; errno_t err = fopen_s(&infile, "..\\Sample.jpg", "rb"); assert(err == 0); //specify the input jpeg_stdio_src(&cinfo, infile); //read headers (void) jpeg_read_header(&cinfo, TRUE); return 0; } ``` The problem is that the call to `jpeg_read_header()` fails with an access violation: > Unhandled exception at 0x7c91b1fa > (ntdll.dll) in JPEGTest.exe: > 0xC0000005: Access violation writing > location 0x00000010. Does anyone have any ideas what I might be doing wrong?
I've just encountered the same problem (although I was trying to encode an image). Apparently, FILE\* are not portable between DLLs so you can't use any libjpeg API that takes a FILE\* as a parameter. There are several solutions, but they all come down to having to rebuild the library: * Build the library as a static lib, and link it to your application. This is what I did, and it solved my problem. * Move the source/destination handlers out of libjpeg and into your application. Then you could build libjpeg as either a static lib or a DLL, whichever suits you. I'm not sure whether this would work, but it is the suggested solution in the "install.doc" file distributed with the source code.
I agree with Hernán. This is not a good interface (I think the internal code itself is probably good), unless you really need to work low-level (and maybe not even then). I think ImageMagick is probably better. They have a "MagickWand" C interface that is more high level, not to mention that it supports many more formats. However, I was curious about libjpeg's interface, so I got a test program working to my satisfaction, based on your example program as well as [libjpeg.doc](http://download.blender.org/source/chest/blender_2.03_tree/jpeg/libjpeg.doc), the [IJG example](http://dev.w3.org/cvsweb/Amaya/libjpeg/Attic/example.c?rev=1.4), and [USING THE IJG JPEG LIBRARY](http://www.jpegcameras.com/libjpeg/libjpeg.html#toc2). Anyway, here's the code. It just prints out the dimensions, and the RGB of the first pixel of every row. I am very surprised you get an error with my code. It works fine for me, and compiles without any warnings. Can someone else test it? ``` #include <stdio.h> #include <assert.h> #include <jpeglib.h> int main(int argc, char* argv[]) { struct jpeg_decompress_struct cinfo; struct jpeg_error_mgr jerr; JSAMPARRAY buffer; int row_stride; //initialize error handling cinfo.err = jpeg_std_error(&jerr); FILE* infile; infile = fopen("Sample.jpg", "rb"); assert(infile != NULL); //initialize the decompression jpeg_create_decompress(&cinfo); //specify the input jpeg_stdio_src(&cinfo, infile); //read headers (void) jpeg_read_header(&cinfo, TRUE); jpeg_start_decompress(&cinfo); printf("width: %d, height: %d\n", cinfo.output_width, cinfo.output_height); row_stride = cinfo.output_width * cinfo.output_components; buffer = (*cinfo.mem->alloc_sarray) ((j_common_ptr) &cinfo, JPOOL_IMAGE, row_stride, 1); JSAMPLE firstRed, firstGreen, firstBlue; // first pixel of each row, recycled while (cinfo.output_scanline < cinfo.output_height) { (void)jpeg_read_scanlines(&cinfo, buffer, 1); firstRed = buffer[0][0]; firstGreen = buffer[0][1]; firstBlue = buffer[0][2]; printf("R: %d, G: %d, B: %d\n", firstRed, firstGreen, firstBlue); } jpeg_finish_decompress(&cinfo); return 0; } ```
JPEG support with ijg - getting access violation
[ "", "c++", "c", "jpeg", "ijg", "" ]
Is there a way to do the following using LINQ? ``` foreach (var c in collection) { c.PropertyToSet = value; } ``` To clarify, I want to iterate through each object in a collection and then update a property on each object. My use case is I have a bunch of comments on a blog post, and I want to iterate through each comment on a blog post and set the datetime on the blog post to be +10 hours. I could do it in SQL, but I want to keep it in the business layer.
While you can use a `ForEach` extension method, if you want to use just the framework you can do ``` collection.Select(c => {c.PropertyToSet = value; return c;}).ToList(); ``` The `ToList` is needed in order to evaluate the select immediately due to *lazy evaluation*.
``` collection.ToList().ForEach(c => c.PropertyToSet = value); ```
Update all objects in a collection using LINQ
[ "", "c#", ".net", "linq", "foreach", "" ]
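The `ToList()` caveat in the chosen answer above — a lazily evaluated `Select` with side effects does nothing until it is enumerated — has a direct analogue in Python generator expressions. A small sketch of the same pitfall (Python, not C#; `Comment` and `approve` are made-up stand-ins):

```python
class Comment:
    def __init__(self):
        self.approved = False

comments = [Comment() for _ in range(3)]

def approve(c):
    c.approved = True     # the side effect the pipeline is supposed to apply
    return c

# Lazy pipeline: nothing has executed yet, so nothing is approved.
pending = (approve(c) for c in comments)
before = any(c.approved for c in comments)   # False

# Forcing evaluation (the analogue of .ToList()) runs the side effects.
list(pending)
after = all(c.approved for c in comments)    # True

print(before, after)  # False True
```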
We are currently using SharpZipLib but since it uses the GPL we need to replace it with a commercial lib.
See my comment about #ziplib on the main post... but if you really need a commercial product (not open-source), [IP\*Works! Zip .NET](http://www.nsoftware.com/ipworks/zip/technologies.aspx?sku=izn8-a) might fit the bill. It does have TAR support according to [this](http://www.nsoftware.com/products/component/tar.aspx).
Have you looked at [DotNetZip](http://www.codeplex.com/DotNetZip)? It is not under the GPL, but rather the Microsoft Public License (Ms-PL).
What is a good commercial tar stream lib for c# and .net?
[ "", "c#", "stream", "tar", "" ]
Does anyone know a situation where a PostgreSQL HASH should be used instead of a B-TREE? It seems to me that these things are a trap. They take way more time to CREATE or maintain than a B-TREE (at least 10 times more), they also take more space (for one of my table.columns, a B-TREE takes up 240 MB, while a HASH would take 4 GB) and I seem to have understood from my googling that they do not SELECT faster than B-TREEs; yet the HASH may have recently been optimized or google was wrong. Anyway, I wanted you guys' opinions and experiences. If these HASHs are evil, people should know. Thanks Also: what about MySQL's HASHs?
Hashes are faster than B-Trees for cases where you have a known key value, especially a known unique value. Hashes should be used if the column in question is *never* intended to be scanned comparatively with `<` or `>` commands. Hashes are `O(1)` complexity, B-Trees are `O(log n)` complexity (iirc), ergo, for large tables with unique entries, fetching an `ITEM="foo"`, they will be the most efficient way of looking it up. This is *especially* practical when these unique fields are used on a join condition.
As <http://www.postgresql.org/docs/9.2/static/sql-createindex.html> points out, hash indexes are still not WAL-safe, which means they are not 100% reliable across crashes (the index has to be reconstructed after a crash, and wrong responses can occur on replicas). Check also <http://www.postgresql.org/docs/9.1/static/wal-intro.html>
PostgreSQL HASH index
[ "", "sql", "postgresql", "indexing", "database", "" ]
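The functional trade-off in the chosen answer above — a hash structure answers only equality probes, while an ordered (B-tree-like) structure also answers `<`/`>` range predicates — can be sketched in miniature with plain Python data structures (an illustration of the concept only, not PostgreSQL internals):

```python
import bisect

rows = [("carol", 3), ("alice", 1), ("bob", 2), ("dave", 4)]

# "Hash index": average O(1) equality probe, but the keys are unordered,
# so it cannot serve a range scan like  key < 'c'.
hash_index = {key: rowid for key, rowid in rows}
equality_hit = hash_index["bob"]                 # 2

# "B-tree index": keys kept sorted, O(log n) probe via binary search,
# and range predicates fall out naturally from the ordering.
btree_keys = sorted(key for key, _ in rows)      # ['alice', 'bob', 'carol', 'dave']
range_scan = btree_keys[:bisect.bisect_left(btree_keys, "c")]

print(equality_hit, range_scan)                  # 2 ['alice', 'bob']
```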
I have a file called main.py and a file called classes.py. main.py contains the application and what's happening while classes.py contains some classes. main.py has the following code **main.py** ``` import classes def addItem(text): print text myClass = classes.ExampleClass() ``` And then we have classes.py **classes.py** ``` class ExampleClass (object): def __init__(self): addItem('bob') ``` Surprisingly enough that's not the actual code I am using because I've stripped out anything that'd get in the way of you seeing what I want to do. I want to be able to call a method that's defined in main.py from a class within classes.py. How do I do this? Thanks in advance
I couldn't answer this any better than [this post by Alex Martelli](http://mail.python.org/pipermail/python-list/2000-December/059926.html). Basically any way you try to do this will lead to trouble and you are much better off refactoring the code to avoid mutual dependencies between two modules... If you have two modules A and B which depend on each other, the easiest way is to isolate a part of the code that they both depend on into a third module C, and have both of them import C.
The suggestions to refactor are good ones. If you have to leave the files as they are, then you can edit main.py to make sure that nothing is executed simply by importing the file, then import main in the function that needs it: ``` class ExampleClass (object): def __init__(self): import main main.addItem('bob') ``` This avoids the circular imports, but isn't as nice as refactoring in the first place...
Calling from a parent file in python
[ "", "python", "" ]
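The refactoring both answers above recommend — isolate the shared function into a third module that both sides import — can be demonstrated in one runnable file by building the modules by hand with `types.ModuleType` (the module name `shared` and the `'added ' + text` return value are illustrative stand-ins; in real code these would simply be three `.py` files):

```python
import sys
import types

# shared.py -- the "module C": the one piece both files need.
shared = types.ModuleType("shared")
exec("def add_item(text):\n    return 'added ' + text\n", shared.__dict__)
sys.modules["shared"] = shared

# classes.py -- now depends only on shared, never on main.
classes = types.ModuleType("classes")
exec(
    "import shared\n"
    "class ExampleClass(object):\n"
    "    def __init__(self):\n"
    "        self.result = shared.add_item('bob')\n",
    classes.__dict__,
)
sys.modules["classes"] = classes

# main.py -- imports classes (and shared if it needs it); the cycle is gone.
import classes  # resolves to the module registered above
print(classes.ExampleClass().result)   # added bob
```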
I've done lots of java web development using jsps and servlets, and I have found this approach to be straightforward and flexible. Some of the groundwork involved though - such as managing database connections - is rather tedious, and it takes a fair amount of work just to get a new web app off the ground. I'm therefore considering using a framework for the first time, but my impression of frameworks is that they're mostly used for large J2EE applications and involve a lot of complex configuration. What I'm looking for is something simple that (after the initial learning curve) will enable me to get up and running with a new web app as quickly as possible. So my question is - does it make sense to use a framework for a simple java web app? Note that I'm not asking which framework to use (if indeed a framework is recommended), as it has already been asked [here](https://stackoverflow.com/questions/116978/can-anyone-recommend-a-simple-java-web-app-framework).
It makes a lot of sense. My team has spent the better part of five years with our open source stack, and we have a "seed" project (it works like AppFuse) that we use to create all new web apps. Even the simple two-pagers: from the point of view of maintaining the app, each one looks like every other app, just smaller. The short of it is that you won't get any return on the investment right now, but you will as the project evolves and you maintain it.
If you don't use a web framework you'll usually end up writing one - poorly.
Does it make sense to use a framework for a simple java web app?
[ "", "java", "jsp", "servlets", "frameworks", "web-applications", "" ]
I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic: * <http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx> * <http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx> * <http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx> * <http://haacked.com/archive/2005/12/28/11377.aspx> These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to rollback DB operations after each test executes. That's great but... 1. Does this functionality exist somewhere in NUnit natively? 2. Has this technique been improved upon in the last 4 years? 3. Is this still the best way to test database-related code? --- Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one. Further, I want to ease this into an existing project that has no testing place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
NUnit now has a [Rollback] attribute, but I prefer to do it a different way. I use the [TransactionScope](http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx) class. There are a couple of ways to use it. ``` [Test] public void YourTest() { using (TransactionScope scope = new TransactionScope()) { // your test code here } } ``` Since you didn't tell the TransactionScope to commit it will rollback automatically. It works even if an assertion fails or some other exception is thrown. The other way is to use the [SetUp] to create the TransactionScope and [TearDown] to call Dispose on it. It cuts out some code duplication, but accomplishes the same thing. ``` [TestFixture] public class YourFixture { private TransactionScope scope; [SetUp] public void SetUp() { scope = new TransactionScope(); } [TearDown] public void TearDown() { scope.Dispose(); } [Test] public void YourTest() { // your test code here } } ``` This is as safe as the using statement in an individual test because NUnit will guarantee that TearDown is called. Having said all that I do think that tests that hit the database are not really unit tests. I still write them, but I think of them as integration tests. I still see them as providing value. One place I use them often is in testing LINQ to SQL code. I don't use the designer. I hand write the DTO's and attributes. I've been known to get it wrong. The integration tests help catch my mistake.
I just went to a .NET user group and the presenter said he used SQLite in test setup and teardown and used the in-memory option. He had to fudge the connection a little and explicitly destroy the connection, but it would give a clean DB every time. <http://houseofbilz.com/archive/2008/11/14/update-for-the-activerecord-quotmockquot-framework.aspx>
How do I test database-related code with NUnit?
[ "", "c#", "database", "unit-testing", "tdd", "nunit", "" ]
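The rollback-per-test pattern in the chosen answer above is not TransactionScope-specific; the same setup/teardown shape can be sketched with Python's standard library (`sqlite3` plus `unittest`, purely as a cross-language illustration — the `comments` table is a made-up stand-in for the blog comments in the question):

```python
import sqlite3
import unittest

class RollbackPerTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.conn = sqlite3.connect(":memory:")
        cls.conn.execute("CREATE TABLE comments (body TEXT)")
        cls.conn.commit()                 # the schema is the only thing committed

    def tearDown(self):
        self.conn.rollback()              # undo whatever the test wrote

    def test_a_insert_visible_inside_test(self):
        self.conn.execute("INSERT INTO comments VALUES ('hi')")
        n = self.conn.execute("SELECT COUNT(*) FROM comments").fetchone()[0]
        self.assertEqual(n, 1)            # visible within the open transaction

    def test_b_table_clean_for_next_test(self):
        n = self.conn.execute("SELECT COUNT(*) FROM comments").fetchone()[0]
        self.assertEqual(n, 0)            # previous test's insert was rolled back

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RollbackPerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())            # True: each test saw a clean table
```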
I'm trying to use the ASP.NET MVC Ajax.BeginForm helper but don't want to use the existing content insertion options when the call completes. Instead, I want to use a custom JavaScript function as the callback. This works, but the result I want should be returned as JSON. Unfortunately, the framework just treats the data as a string. Below is the client code. The server code simply returns a JsonResult with one field, UppercaseName. ``` <script type='text/javascript'> function onTestComplete(content) { var result = content.get_data(); alert(result.UppercaseName); } </script> <% using (Ajax.BeginForm("JsonTest", new AjaxOptions() {OnComplete = "onTestComplete" })) { %> <%= Html.TextBox("name") %><br /> <input type="submit" /> <%} %> ``` Instead of showing the uppercase result, it is instead showing undefined. content.get\_data() seems to hold the JSON, but only in string form. How do I go about converting this to an object? All of this seems a bit convoluted really. Is there a better way to get at the resulting content using Ajax.BeginForm? If it's this hard, I may skip Ajax.BeginForm entirely and just use the jQuery form library.
Try this: ``` var json_data = content.get_response().get_object(); ``` this will give you result in JSON format and you can use `json_data[0]` to get the first record
You can use `OnFailure` and `OnSuccess` instead of `OnComplete`; `OnSuccess` gives you the data as a proper JSON object. You can find the callback method signatures buried in `~/Scripts/jquery.unobtrusive-ajax.min.js` which you should load on your page. In your `Ajax.BeginForm`: ``` new AjaxOptions { OnFailure = "onTestFailure", OnSuccess = "onTestSuccess" } ``` Script block: ``` <script> //<![CDATA[ function onTestFailure(xhr, status, error) { console.log("Ajax form submission", "onTestFailure"); console.log("xhr", xhr); console.log("status", status); console.log("error", error); // TODO: make me pretty alert(error); } function onTestSuccess(data, status, xhr) { console.log("Ajax form submission", "onTestSuccess"); console.log("data", data); console.log("status", status); console.log("xhr", xhr); // Here's where you use the JSON object //doSomethingUseful(data); } //]]> </script> ``` These signatures match [`success` and `error` callbacks in $.ajax(...)](http://api.jquery.com/jQuery.ajax/), which might not be such a surprise after all. This was tested using [asp.net-mvc-3](/questions/tagged/asp.net-mvc-3 "show questions tagged 'asp.net-mvc-3'") with [jquery](/questions/tagged/jquery "show questions tagged 'jquery'") 1.6.3 and 1.7.2.
How to use Ajax.BeginForm MVC helper with JSON result?
[ "", "javascript", "asp.net-mvc", "asp.net-ajax", "" ]
I have to ship some groovy code to some users that have only java installed (no grooy, no $groovy\_home, etc). I'm trying to invoke groovy from the commandline but I'm having no luck. Here's my bat file: ``` java -classpath .;lib;bin;bin-groovy introspector.AclCollector ``` And here's my exception: ``` Exception in thread "main" java.lang.NoClassDefFoundError: groovy/lang/GroovyObject at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:621) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124) at java.net.URLClassLoader.defineClass(URLClassLoader.java:260) at java.net.URLClassLoader.access$000(URLClassLoader.java:56) at java.net.URLClassLoader$1.run(URLClassLoader.java:195) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) Caused by: java.lang.ClassNotFoundException: groovy.lang.GroovyObject at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:252) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320) ... 12 more Could not find the main class: introspector.AclCollector. Program will exit. ``` Does anyone have a clue? I have 'groovy-all-1.6-beta-1.jar' in \lib dir.
I think you need to explicitly list the groovy jar in the classpath
You have [here](http://marc.info/?l=ant-user&m=120407907827987&w=2) another example of a Groovy app called from Java (in this case, from ant, but the general idea is the same). ``` java -cp [...];%GROOVY_HOME%/embeddable/groovy-all-1.5.4.jar;[..] ``` As mentioned by frankowyer, you have the exact groovy jar explicitly listed in the classpath arguments of java. Since your clients do not have the special environment variable, just replace the `%GROOVY_HOME%/...` with the complete path to: * groovy.jar or * [groovy-all-VERSION.jar](http://groovy.codehaus.org/Ant+Task+Troubleshooting) (to minimize any conflict with other libraries)
How to invoke groovy with 'java' from command line
[ "", "java", "command-line", "groovy", "" ]
I have two classes, Foo and Bar, that have constructors like this: ``` class Foo { Foo() { // do some stuff } Foo(int arg) { // do some other stuff } } class Bar : Foo { Bar() : base() { // some third thing } } ``` Now I want to introduce a constructor for Bar that takes an int, but I want the stuff that happens in Bar() to run *as well* as the stuff from Foo(int). Something like this: ``` Bar(int arg) : Bar(), base(arg) { // some fourth thing } ``` Is there any way to do this in C#? The best I have so far is putting the work done by Bar() into a function, that also gets called by Bar(int), but this is pretty inelegant.
No, this isn't possible. If you use Reflector to examine the IL that's generated for each constructor, you'll see why -- you'd end up calling both of the constructors for the base class. In theory, the compiler could construct hidden methods to accomplish what you want, but there really isn't any advantage over you doing the same thing explicitly.
I would re-chain constructors, so they are called like ``` Bar() : this(0) Bar(int) : Foo(int) initializes Bar Foo(int) initializes Foo Foo() : this(0) ``` This is suitable if the parameterless constructors assume some kind of default value for the int parameter of the other constructor. If the constructors are unrelated, you are probably doing something wrong with your type, or maybe we need more information about what you are trying to achieve.
Calling Overridden Constructor and Base Constructor in C#
[ "", "c#", "constructor", "" ]
I have a GridView control on my page that I have defined a number of BoundFields for. Each row of the databound GridView has a CommandField (Select), for which I want to send the PostBack to a new page. Of course I could easily send the NewSelectedIndex in a QueryString, but I'd rather keep that information hidden from the user. Suggestions?
Leppie is right. The GridView has no PostbackUrl property. However, you can do what you want by using a standard control, which has a PostbackUrl property. ``` <asp:TemplateField AccessibleHeaderText="Edit"> <ItemTemplate> <asp:Button runat="server" ID="btnEdit" PostBackUrl="~/Default.aspx" OnClientClick='form1.ActivityId.value = this.Tag;' Tag='<%# Eval("ActivityId") %>' Text="Edit"/> </ItemTemplate> </asp:TemplateField> ``` In this sample code, I added a TemplateColumn to the GridView. I use a dynamically added Tag attribute for the button to pass the Id, then I use Javascript code to put the value in a hidden field, and then the button simply postsback to the page specified in the PostbackUrl property.
Use a HyperLinkField column in your GridView: ``` <asp:HyperLinkField AccessibleHeaderText="Edit" DataNavigateUrlFields="ActivityId" DataNavigateUrlFormatString="ActivityEdit.aspx?id={0}" Text="Edit Activity" /> ``` Of course, as you said, this option shows the id in the url. To hide it (although anyone can check out your javascript and see what you do) you have two options: 1. Use Javascript code to set a hidden field with the id, change the Form postback (action) address and then submit the form. You have to allow your asp.net site to allow postback from a different page 2. Allow the Command to postback to the same page that hosts the GridView and in the server-side event handler do a Server.Transfer to your page (this option presents other issues on sequential postbacks...)
Can the PostbackUrl be set for a GridView CommandField?
[ "", "c#", "asp.net", "gridview", "postback", "" ]
I am using VS 2008 to write my Javascript code ( mostly in JQuery). However the js file was not updated when I was done with the editing and pressed F5 to call out the debugger. I need to close my vs2008 and reopen it in order to see the latest changes in js file. Any idea on how to fix this?
Do a CTRL-F5 in the browser to force a hard refresh of all files the page is using.
Clear the cache in your browser. It's usually that it has a locally cached copy.
Javascript during debugging is not up to date
[ "", "javascript", "visual-studio-2008", "" ]
I've made this decorator, which results in an infinite redirect loop. The problem is this: ``` args[0].redirect(users.create_login_url(args[0].request.path)) ``` It appears to be a perfectly valid URL. So why wouldn't it properly redirect? ``` def admin_only(handler, *args): def redirect_to_login(*args, **kwargs): return args[0].redirect(users.create_login_url(args[0].request.path)) user = users.get_current_user() if user: if authorized(user): return handler(args[0]) else: logging.warning('An unauthorized user has attempted to enter an authorized page') return redirect_to_login else: return redirect_to_login ```
It seems that you aren't defining your decorator properly. A decorator is called only **once** every time you wrap a function with it; from then on the function that the decorator **returned** will be called. It seems that you (mistakenly) believe that the decorator function **itself** will be called every time. Try something like this instead: ``` def redirect_to_login(*args, **kwargs): return args[0].redirect(users.create_login_url(args[0].request.path)) def admin_only(handler): def wrapped_handler(*args, **kwargs): user = users.get_current_user() if user: if authorized(user): return handler(args[0]) else: logging.warning('An unauthorized user has attempted ' 'to enter an authorized page') return redirect_to_login(*args, **kwargs) else: return redirect_to_login(*args, **kwargs) return wrapped_handler ``` Note that in the above code, the decorator just defines a new function and returns it, and this new function itself does the relevant checks.
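The key point of the answer above, that the decorator body runs only once at decoration time while the returned wrapper runs on every call, can be demonstrated with a small, self-contained sketch (names hypothetical, unrelated to the App Engine code):

```python
calls = []

def admin_only(handler):
    # Runs ONCE, at the moment the decorated function is defined.
    calls.append("decorating")

    def wrapped_handler(*args, **kwargs):
        # Runs on EVERY invocation of the decorated function.
        calls.append("checking")
        return handler(*args, **kwargs)

    return wrapped_handler

@admin_only
def page(name):
    return "hello " + name

page("a")
page("b")
print(calls)  # ['decorating', 'checking', 'checking']
```

Putting the redirect inside `wrapped_handler`, as the answer does, is exactly what makes the check happen per request instead of once at import time.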
The problem is actually when I use ``` return args[0].redirect(users.create_logout_url(args[0].request.uri)) ``` This goes to the logout page, which then redirects to the current page. However, my logs show that the current page thinks I'm still logged in, even after the logging out is complete. This is strange, since I haven't modified anything in the app engine users API.
getting redirect loop for admin_only decorator
[ "", "python", "google-app-engine", "redirect", "decorator", "" ]
I need to know if a variable in Python is a string or a dict. Is there anything wrong with the following code? ``` if type(x) == type(str()): do_something_with_a_string(x) elif type(x) == type(dict()): do_somethting_with_a_dict(x) else: raise ValueError ``` **Update**: I accepted avisser's answer (though I will change my mind if someone explains why `isinstance` is preferred over `type(x) is`). But thanks to nakedfanatic for reminding me that it's often cleaner to use a dict (as a case statement) than an if/elif/else series. Let me elaborate on my use case. If a variable is a string, I need to put it in a list. If it's a dict, I need a list of the unique values. Here's what I came up with: ``` def value_list(x): cases = {str: lambda t: [t], dict: lambda t: list(set(t.values()))} try: return cases[type(x)](x) except KeyError: return None ``` If `isinstance` is preferred, how would you write this `value_list()` function?
What happens if somebody passes a unicode string to your function? Or a class derived from dict? Or a class implementing a dict-like interface? Following code covers first two cases. If you are using Python 2.6 you might want to use [`collections.Mapping`](https://docs.python.org/2/library/collections.html#collections.Mapping) instead of `dict` as per the [ABC PEP](http://www.python.org/dev/peps/pep-3119/). ``` def value_list(x): if isinstance(x, dict): return list(set(x.values())) elif isinstance(x, basestring): return [x] else: return None ```
`type(dict())` says "make a new dict, and then find out what its type is". It's quicker to say just `dict`. But if you want to just check type, a more idiomatic way is `isinstance(x, dict)`. Note, that `isinstance` also includes subclasses (thanks [Dustin](https://stackoverflow.com/users/39975/dustin)): ``` class D(dict): pass d = D() print("type(d) is dict", type(d) is dict) # -> False print("isinstance (d, dict)", isinstance(d, dict)) # -> True ```
What is the best (idiomatic) way to check the type of a Python variable?
[ "", "python", "types", "typechecking", "" ]
I'm looking for an algorithm (or PHP code, I suppose) to end up with the 10 lowest numbers from a group of numbers. I was thinking of making a ten item array, checking to see if the current number is lower than one of the numbers in the array, and if so, finding the highest number in the array and replacing it with the current number. However, I'm planning on finding the lowest 10 numbers from thousands, and was thinking there might be a faster way to do it. I plan on implementing this in PHP, so any native PHP functions are usable.
What you're looking for is called a **selection algorithm**. The Wikipedia page on the subject has a few subsections in the [selecting *k* smallest or largest elements](http://en.wikipedia.org/wiki/Selection_algorithm#Selecting_k_smallest_or_largest_elements) section. When the list is large enough, you can *beat* the time required for the naive "sort the whole list and choose the first 10" algorithm.
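To sketch the bounded-selection idea concretely (Python shown purely for illustration; in PHP the usual shortcut is `sort()` plus `array_slice()`, and newer versions also ship SPL heaps):

```python
import heapq
import random

random.seed(0)
numbers = [random.randint(0, 10**6) for _ in range(5000)]

# heapq.nsmallest keeps only about k candidates at a time,
# roughly O(n log k) work instead of an O(n log n) full sort.
lowest_ten = heapq.nsmallest(10, numbers)

print(lowest_ten == sorted(numbers)[:10])  # True
```

This is the same strategy the asker described by hand: maintain a ten-item structure and evict the largest candidate whenever a smaller number arrives.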
Sort the array and use the ten first/last entries. Honestly: sorting an array with a thousand entries costs less time than it takes you to blink.
Collect Lowest Numbers Algorithm
[ "", "php", "algorithm", "" ]
We were having a debate about whether enums should have uninitialized values. For example, we have ``` public enum TimeOfDayType { Morning, Afternoon, Evening } ``` or ``` public enum TimeOfDayType { None, Morning, Afternoon, Evening } ``` I think that there shouldn't be any None, but then you have to default to some valid value on initialization. But others thought there should be some indication of the uninitialized state by having another enum value such as None or NotSet. Thoughts?
Speaking of nullable types - I think they can be used to solve the problem of forcing/not forcing the initialization of an enum. Say we have ``` enum Color { Red, Blue } ``` And let's say you have a function: ``` void Draw(Color c); ``` That function says that it **requires** a valid `Color`. However, we could also have this function: ``` void Draw(Color? c); ``` That says that the function can handle not being passed a color (`null` would be passed to indicate "don't care"). Well, it's one alternative to `None` members.
I always set one of my enum literals to zero. This literal need not always be named "None" or "NotSet"; it depends on whether there is a literal that works well as a default. I set one to zero because enums (except nullable enums) are always zero-initialized in memory by the CLR, and if none of the literals is defined as zero, that memory contains an illegal value. The same applies when you use enums as flags: the zero value cannot be used in bitwise comparisons, since the result will always be zero. When you enable FxCop it checks whether you have defined a literal with the value zero. It seems to be a "good practice", since they have a rule for it.
Should enums have uninitialized values?
[ "", "c#", "enums", "" ]
So I wrote this short script (correct word?) to download the comic images from explosm.net comics because I somewhat-recently found out about it and I want to...put it on my iPhone...3G. It works fine and all. urllib2 for getting webpage html and urllib for image.retrieve() ***Why I posted this on SO: how do I optimize this code? Would REGEX (regular expressions) make it faster? Is it an internet limitation? Poor algorithm...?*** Any improvements in speed or ***general code aesthetics*** would be greatly appreciated "answers". *Thank you.* *--------------------------------CODE----------------------------------* ``` import urllib, urllib2 def LinkConvert(string_link): for eachLetter in string_link: if eachLetter == " ": string_link = string_link[:string_link.find(eachLetter)] + "%20" + string_link[string_link.find(eachLetter)+1:] return string_link start = 82 end = 1506 matchingStart = """<img alt="Cyanide and Happiness, a daily webcomic" src="http://www.explosm.net/db/files/Comics/""" matchingEnd = """></""" link = "http://www.explosm.net/comics/" for pageNum in range(start,start+7): req = urllib2.Request(link+`pageNum`) response = urllib2.urlopen(req) page = response.read() istart1 = page.find(matchingStart) iend1 = page.find(matchingEnd, istart1) newString1 = page[istart1 : iend1] istart2 = newString1.find("src=")+4 iend2 = len(newString1) final = newString1[istart2 +1 : iend2 -1] final = LinkConvert(final) try: image = urllib.URLopener() image.retrieve(final, `pageNum` + ".jpg") except: print "Uh-oh! " + `pageNum` + " was not downloaded!" print `pageNum` + " completed..." ``` *By the way, this is Python 2.5 code, not 3.0 but you bet I have all the features of PYthon 3.0 greatly studied and played around with before or right after New Year (after College Apps - YAY! ^-^)*
I would suggest using [Scrapy](http://scrapy.org/) for your page fetching and [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) for the parsing. This would make your code a lot simpler. Whether you want to change your existing code that works to these alternatives is up to you. If not, then regular expressions would probably simplify your code somewhat. I'm not sure what effect it would have on performance.
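If the asker sticks with the standard library, the hand-rolled `find()`/slice logic can collapse into one regular expression. A minimal sketch against a made-up HTML snippet (the exact attribute layout and the file path are assumptions for illustration, not verified against the live site):

```python
import re

# Hypothetical page fragment in the shape the asker's code expects.
html = ('<img alt="Cyanide and Happiness, a daily webcomic" '
        'src="http://www.explosm.net/db/files/Comics/Rob/funny.png"></a>')

# Capture everything between the quotes of the src attribute.
match = re.search(
    r'<img alt="Cyanide and Happiness, a daily webcomic" src="([^"]+)"',
    html)

if match:
    image_url = match.group(1)
    print(image_url)  # http://www.explosm.net/db/files/Comics/Rob/funny.png
```

The `([^"]+)` group stops at the closing quote, which replaces the manual `istart`/`iend` bookkeeping in the original script.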
[refactormycode](http://refactormycode.com/) may be a more appropriate web site for these "let's improve this code" type of discussions.
Download from EXPLOSM.net Comics Script [Python]
[ "", "python", "scripting", "download", "urllib", "" ]
I've been doing a lot of reading about AJAX, and wanted to know which is the better way to approach things: by using a library such as jQuery using their built-in methods or creating JavaScript without a library for AJAX?
Ajax has a lot of quirks when working with the XMLHttpRequest Object. When you start to work with it, you will not see it, but when it is out in a production environment, it will bite you in the butt. Browsers, browser version, user settings, the type of server, type of request, and much more can affect what needs to be coded. Libraries tend to solve most of the problems, but they all are not perfect. I always tell people it is great to work with a tutorial to see how the XMLHttpRequest works. After you have learned how to do it naked, work with a library that fits your needs. Eric Pascarello
Why create a library when plenty already exist? If you create a library it is going to take time and effort and you'll end up going through the same hurdles others already have. And unless your company is trying to sell an Ajax library then stay away from writing your own plumbing code. I am currently using both JQuery and Microsoft's Ajax in my site and have found that they are both feature complete with plenty of options for different ways you can set up the communication.
Ajax - Library or Plain Javascript
[ "", "javascript", "jquery", "ajax", "prototypejs", "" ]
Is there a static analysis tool for PHP source files? The binary itself can check for syntax errors, but I'm looking for something that does more, like: * unused variable assignments * arrays that are assigned into without being initialized first * and possibly code style warnings * ...
Run `php` in lint mode from the command line to validate syntax without execution: `php -l FILENAME` Higher-level static analyzers include: * [php-sat](http://www.program-transformation.org/PHP/PhpSat) - Requires <http://strategoxt.org/> * [PHP\_Depend](http://pdepend.org/) * [PHP\_CodeSniffer](http://pear.php.net/package/PHP_CodeSniffer) * [PHP Mess Detector](http://phpmd.org/) * [PHPStan](https://github.com/phpstan/phpstan) * [PHP-CS-Fixer](https://github.com/FriendsOfPHP/PHP-CS-Fixer) * [phan](https://github.com/phan/phan) Lower-level analyzers include: * [PHP\_Parser](http://pear.php.net/package/PHP_Parser) * [token\_get\_all](http://docs.php.net/manual/en/function.token-get-all.php) (primitive function) Runtime analyzers, which are more useful for some things due to PHP's dynamic nature, include: * [Xdebug](http://www.xdebug.org/) has [code coverage](http://www.xdebug.org/docs/code_coverage) and [function traces](http://www.xdebug.org/docs/execution_trace). * My [PHP Tracer Tool](http://github.com/troelskn/php-tracer-weaver/tree/master) uses a combined static/dynamic approach, building on Xdebug's function traces. The documentation libraries [phpdoc](http://www.phpdoc.org/) and [Doxygen](http://www.doxygen.org/) perform a kind of code analysis. Doxygen, for example, can be configured to render nice inheritance graphs with [Graphviz](http://www.graphviz.org/). Another option is [xhprof](http://pecl.php.net/package/xhprof), which is similar to Xdebug, but lighter, making it suitable for production servers. The tool includes a PHP-based interface.
[Online PHP lint](http://www.icosaedro.it/phplint/phplint-on-line.html) [PHPLint](http://www.icosaedro.it/phplint/) [Unitialized variables check](http://antirez.com/page/phplint.html). Link 1 and 2 already seem to do this just fine, though. I can't say I have used any of these intensively, though :)
How can I perform static code analysis in PHP?
[ "", "php", "code-analysis", "static-analysis", "" ]
Just for review, can someone quickly explain what prevents this from working (on compile): ``` private HashSet data; ... public DataObject[] getDataObjects( ) { return (DataObject[]) data.toArray(); } ``` ...and what makes this the way that DOES work: ``` public DataObject[] getDataObjects( ) { return (DataObject[]) data.toArray( new DataObject[ data.size() ] ); } ``` I'm not clear on the mechanism at work with casting (or whatever it is) that makes this so.
Because `toArray()` creates an array of Object, and you can't make `Object[]` into `DataObject[]` just by casting it. `toArray(DataObject[])` creates an array of `DataObject`. And yes, it is a shortcoming of the Collections class and the way Generics were shoehorned into Java. You'd expect that `Collection<E>.toArray()` could return an array of E, but it doesn't. Interesting thing about the `toArray(DataObject[])` call: you don't have to make the "a" array big enough, so you can call it with `toArray(new DataObject[0])` if you like. Calling it like `toArray(new DataObject[0])` is actually better if you use `.length` later to get the array length: if the array you passed in was larger than the collection and that same array object was returned, the extra slots are filled with `null` and you may face `NullPointerException`s later. I asked a question earlier about Java generics, and was pointed to this FAQ that was very helpful: <http://www.angelikalanger.com/GenericsFAQ/JavaGenericsFAQ.html>
To ensure type safety when casting an array like you intended (`DataObject[] dataArray = (DataObject[]) objectArray;`), the JVM would have to inspect every single object in the array, so it's not actually a simple operation like a type cast. I think that's why you have to pass the array instance, which the `toArray()` operation then fills.
Casting an array of Objects into an array of my intended class
[ "", "java", "collections", "casting", "object", "" ]
How do I bind a `std::ostream` to either `std::cout` or to an `std::ofstream` object, depending on a certain program condition? Although this is invalid for many reasons, I would like to achieve something that is semantically equivalent to the following: ``` std::ostream out = condition ? &std::cout : std::ofstream(filename); ``` I've seen some examples that are not exception-safe, such as one from <http://www2.roguewave.com/support/docs/sourcepro/edition9/html/stdlibug/34-2.html>: ``` int main(int argc, char *argv[]) { std::ostream* fp; //1 if (argc > 1) fp = new std::ofstream(argv[1]); //2 else fp = &std::cout; //3 *fp << "Hello world!" << std::endl; //4 if (fp!=&std::cout) delete fp; } ``` Does anyone know a better, exception-safe solution?
``` std::streambuf * buf; std::ofstream of; if(!condition) { of.open("file.txt"); buf = of.rdbuf(); } else { buf = std::cout.rdbuf(); } std::ostream out(buf); ``` That associates the underlying streambuf of either cout or the output file stream with out. After that you can write to "out" and it will end up in the right destination. If you just want everything going to `std::cout` to end up in a file instead, you can also do ``` std::ofstream file("file.txt"); std::streambuf * old = std::cout.rdbuf(file.rdbuf()); // do here output to std::cout std::cout.rdbuf(old); // restore ``` This second method has the drawback that it's not exception safe. You possibly want to write a class that does this using RAII: ``` struct opiped { opiped(std::streambuf * buf, std::ostream & os) :os(os), old_buf(os.rdbuf(buf)) { } ~opiped() { os.rdbuf(old_buf); } std::ostream& os; std::streambuf * old_buf; }; int main() { // or: std::filebuf of; // of.open("file.txt", std::ios_base::out); std::ofstream of("file.txt"); { // or: opiped raii(&of, std::cout); opiped raii(of.rdbuf(), std::cout); std::cout << "going into file" << std::endl; } std::cout << "going on screen" << std::endl; } ``` Now, whatever happens, std::cout is in clean state.
This is exception-safe: ``` void process(std::ostream &os); int main(int argc, char *argv[]) { std::ostream* fp = &cout; std::ofstream fout; if (argc > 1) { fout.open(argv[1]); fp = &fout; } process(*fp); } ``` --- Edit: Herb Sutter has addressed this in the article [Switching Streams (Guru of the Week)](http://www.gotw.ca/gotw/048.htm).
Obtain a std::ostream either from std::cout or std::ofstream(file)
[ "", "c++", "exception", "iostream", "" ]
I have a PHP application that will on occasion have to handle URLs where more than one parameter in the URL will have the same name. Is there an easy way to retrieve all the values for a given key? PHP $\_GET returns only the last value. To make this concrete, my application is an OpenURL resolver, and may get URL parameters like this: ``` ctx_ver=Z39.88-2004 &rft_id=info:oclcnum/1903126 &rft_id=http://www.biodiversitylibrary.org/bibliography/4323 &rft_val_fmt=info:ofi/fmt:kev:mtx:book &rft.genre=book &rft.btitle=At last: a Christmas in the West Indies. &rft.place=London, &rft.pub=Macmillan and co., &rft.aufirst=Charles &rft.aulast=Kingsley &rft.au=Kingsley, Charles, &rft.pages=1-352 &rft.tpages=352 &rft.date=1871 ``` (Yes, I know it's ugly, welcome to my world). Note that the key "rft\_id" appears twice: 1. `rft_id=info:oclcnum/1903126` 2. `rft_id=http://www.biodiversitylibrary.org/bibliography/4323` `$_GET` will return just `http://www.biodiversitylibrary.org/bibliography/4323`, the earlier value (`info:oclcnum/1903126`) having been overwritten. I'd like to get access to both values. Is this possible in PHP? If not, any thoughts on how to handle this problem?
Something like: ``` $query = explode('&', $_SERVER['QUERY_STRING']); $params = array(); foreach( $query as $param ) { // prevent notice on explode() if $param has no '=' if (strpos($param, '=') === false) $param .= '='; list($name, $value) = explode('=', $param, 2); $params[urldecode($name)][] = urldecode($value); } ``` gives you: ``` array( 'ctx_ver' => array('Z39.88-2004'), 'rft_id' => array('info:oclcnum/1903126', 'http://www.biodiversitylibrary.org/bibliography/4323'), 'rft_val_fmt' => array('info:ofi/fmt:kev:mtx:book'), 'rft.genre' => array('book'), 'rft.btitle' => array('At last: a Christmas in the West Indies.'), 'rft.place' => array('London'), 'rft.pub' => array('Macmillan and co.'), 'rft.aufirst' => array('Charles'), 'rft.aulast' => array('Kingsley'), 'rft.au' => array('Kingsley, Charles,'), 'rft.pages' => array('1-352'), 'rft.tpages' => array('352'), 'rft.date' => array('1871') ) ``` Since it's always possible that one URL parameter is repeated, it's better to always have arrays, instead of only for those parameters where you anticipate them.
Won't work for you as it looks like you don't control the querystring, but another valid answer: instead of parsing the querystring, you could append '[]' to the end of the name, then PHP will make an array of the items. IE: ``` someurl.php?name[]=aaa&name[]=bbb ``` will give you a $\_GET looking like: ``` array(0=>'aaa', 1=>'bbb') ```
How to get multiple parameters with same name from a URL in PHP
[ "", "php", "url", "parameters", "" ]
To the best of my knowledge, creating a dynamic Java proxy requires that one have an interface to work against for the proxy. Yet, Hibernate seems to manage its dynamic proxy generation without requiring that one write interfaces for entity classes. How does it do this? The only clue from the Hibernate documentation refers to the fact that classes must have at minimum a package-visible constructor for proxy generation. Is Hibernate doing runtime bytecode engineering with a custom classloader? The documentation suggests that this is not the case. So how do they create their proxy wrappers around the concrete entity objects? Do they just create a proxy of some trivial interface without concern for type safety and then cast it as desired?
Since Hibernate 3.3, the default bytecode provider is now Javassist rather than CGLib. [Hibernate Core Migration Guide : 3.3](https://community.jboss.org/wiki/HibernateCoreMigrationGuide33)
Hibernate uses the bytecode provider configured in `hibernate.properties`, for example: ``` hibernate.bytecode.provider=javassist ```
How does Hibernate create proxies of concrete classes?
[ "", "java", "hibernate", "proxy", "" ]
I want to provide a piece of Javascript code that will work on any website where it is included, but it always needs to get more data (or even modify data) on the server where the Javascript is hosted. I know that there are security restrictions in place for obvious reasons. Consider index.html hosted on xyz.com containing the following: ``` <script type="text/javascript" src="http://abc.com/some.js"></script> ``` Will some.js be able to use XMLHttpRequest to post data to abc.com? In other words, is abc.com implicitly trusted because we loaded Javascript from there?
> Will some.js be able to use XMLHttpRequest to post data to abc.com? In other words, is abc.com implicitly trusted because we loaded Javascript from there? No, because the script is loaded from a separate domain it will not have access... If you trust the data source then maybe JSONP would be the better option. JSONP involves dynamically adding new SCRIPT elements to the page with the SRC set to another domain, with a callback set as a parameter in the query string. For example: ``` function getJSON(URL,success){ var ud = 'json'+(Math.random()*100).toString().replace(/\./g,''); window[ud]= function(o){ success&&success(o); }; document.getElementsByTagName('body')[0].appendChild((function(){ var s = document.createElement('script'); s.type = 'text/javascript'; s.src = URL.replace('callback=?','callback='+ud); return s; })()); } getJSON('http://YOUR-DOMAIN.com/script.php?dataName=john&dataAge=99&callback=?',function(data){ var success = data.flag === 'successful'; if(success) { alert('The POST to abc.com WORKED SUCCESSFULLY'); } }); ``` So, you'll need to host your own script which could use PHP/CURL to post to the abc.com domain and then output the response in JSONP format. I'm not too great with PHP, but maybe something like this: ``` <?php /* Grab the variables */ $postURL = $_GET['posturl']; $postData['name'] = $_GET['dataName']; $postData['age'] = $_GET['dataAge']; /* Here, POST to abc.com */ /* MORE INFO: http://uk3.php.net/curl & http://www.askapache.com/htaccess/sending-post-form-data-with-php-curl.html */ /* Fake data (just for this example:) */ $postResponse = 'blahblahblah'; $postSuccess = 'successful'; /* Once you've done that, you can output a JSONP response */ /* Remember JSON format == 'JavaScript Object Notation' - e.g. {'foo':{'bar':'foo'}} */ echo $_GET['callback'] . '({'; echo "'flag':'" . $postSuccess . "',"; echo "'response':'" . $postResponse . "'})"; ?> ``` So, your server, which you have control over, will act as a medium between the client and abc.com; you'll send the response back to the client in JSON format so it can be understood and used by the JavaScript...
The easiest option for you would be to proxy the call through the server loading the javascript. So some.js would make a call to the hosting server, and that server would forward the request to abc.com. Of course, if that's not an option because you don't control the hoster, there are some alternatives, but they seem mired in cross-browser difficulties: <http://ajaxian.com/archives/how-to-make-xmlhttprequest-calls-to-another-server-in-your-domain>
Cross-site XMLHttpRequest
[ "", "javascript", "ajax", "xmlhttprequest", "xss", "" ]
I drew a little graph in paint that explains my problem: But it doesn't seem to show up when I use the `<img>` tag after posting? Graph: [![http://i44.tinypic.com/103gcbk.jpg](https://i.stack.imgur.com/DkI1T.jpg)](https://i.stack.imgur.com/DkI1T.jpg)
You need to instantiate the database outside of main(), otherwise you will just declare a local variable shadowing the global one. GameServer.cpp: ``` #include "GameSocket.h" Database db(1, 2, 3); int main() { //whatever } ```
The problem is the scope of the declaration of db. The code: ``` extern Database db; ``` really means "db is declared *globally somewhere*, just not here". The code then does not go ahead and actually declare it globally, but locally inside main(), which is not visible outside of main(). The code should look like this, in order to solve your linkage problem: ## file1.c ``` Database db; int main () { ... } ``` ## file2.c ``` extern Database db; void some_function () { ... } ```
Extern keyword and unresolved external symbols
[ "", "c++", "symbols", "extern", "" ]
I was wondering whether knockd <http://www.zeroflux.org/cgi-bin/cvstrac.cgi/knock/wiki> would be a good way to restart apache without logging in over ssh. But my programming question is whether there is a way to send TCP/UDP packets via PHP so I can knock from a webclient. I am aware that this is not the safest way of doing it, but I only want to do things like update the svn and restart apache without having any passwords in the script, as I would with ssh.
You may use fsockopen() functions... but what you are doing (and the way you are doing it) is very risky from a security standpoint.. as has been said, ssh is the way:) If you really want to restart the apache server by using remote access (non-ssh) you can create a small php-daemon that just watches for a specific file (ex: /tmp/restart.apache), and when that file appears runs exec("/etc/init.d/apache restart") (or whatever the command is for your distribution). This daemon should run as root... and the thing is that the whole security burden is on you this way; you have to make sure this cannot get arbitrarily executed... As for your portknock idea: a simple port scanner may restart your apache by mistake:) portknock is recommended to be used in conjunction with ssh auth, not directly with apache:)
Seriously, you do *not* want to do what your trying to do. You should look into calling your remote server through some sort of secure protocol, like SSH. And on the client side, have a small PHP utility application/script that executes remote SSH commands (preferably with a keyfile only based authentication mechanism).
Using knockd to do stuff | Sending TCP/UPD Requests via PHP
[ "", "php", "unix", "knockd", "" ]
[Effective Java (Second Edition)](http://java.sun.com/docs/books/effective/), Item 4, discusses using private constructors to enforce noninstantiability. Here's the code sample from the book: ``` public final class UtilityClass { private UtilityClass() { throw new AssertionError(); } } ``` However, `AssertionError` doesn't seem like the right thing to throw. Nothing is being "asserted", which is how the API defines the use of [AssertionError](http://java.sun.com/javase/6/docs/api/java/lang/AssertionError.html). Is there a different `Throwable` that's typically in this situation? Does one usually just throw a general `Exception` with a message? Or is it common to write a custom `Exception` for this? It's pretty trivial, but more than anything I guess I'm just curious about it from a style and standards perspective.
There is an assertion: "I'm asserting that this constructor will never be called". So, indeed, `AssertionError` is correct here.
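The assertion being made can be demonstrated in miniature. Here is a hypothetical Python analogue (Python raises its own AssertionError), shown only to illustrate the pattern, not as the book's code:

```python
class UtilityClass:
    """A noninstantiable holder for static helper methods."""

    def __init__(self):
        # The assertion: "this constructor is never called."
        raise AssertionError("UtilityClass must not be instantiated")

    @staticmethod
    def double(x):
        return 2 * x


print(UtilityClass.double(21))  # 42

try:
    UtilityClass()
except AssertionError as e:
    print(e)  # UtilityClass must not be instantiated
```

The static members remain fully usable; only instantiation trips the assertion, which is exactly the claim Bloch's private constructor encodes.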
I like including Bloch's comment: ``` // Suppress default constructor for noninstantiability ``` Or better yet putting it in the Error: ``` private UtilityClass() { throw new AssertionError("Suppress default constructor for noninstantiability"); } ```
What is the preferred Throwable to use in a private utility class constructor?
[ "", "java", "coding-style", "throwable", "" ]
I have an array full of random content item ids. I need to run a mysql query (id in the array goes in the WHERE clause), using each ID that's in the array, in the order that they appear in the said array. How would I do this? This will be an UPDATE query, for each individual ID in the array.
As with nearly all "How do I do SQL from within PHP" questions - You *really* should use prepared statements. It's not that hard: ``` $ids = array(2, 4, 6, 8); // prepare an SQL statement with a single parameter placeholder $sql = "UPDATE MyTable SET LastUpdated = NOW() WHERE id = ?"; $stmt = $mysqli->prepare($sql); // bind a different value to the placeholder with each execution for ($i = 0; $i < count($ids); $i++) { $stmt->bind_param("i", $ids[$i]); $stmt->execute(); echo "Updated record ID: {$ids[$i]}\n"; } // done $stmt->close(); ``` Alternatively, you can do it like this: ``` $ids = array(2, 4, 6, 8); // prepare an SQL statement with multiple parameter placeholders $params = implode(",", array_fill(0, count($ids), "?")); $sql = "UPDATE MyTable SET LastUpdated = NOW() WHERE id IN ($params)"; $stmt = $mysqli->prepare($sql); // dynamic call of mysqli_stmt::bind_param, hard-coded equivalent shown in the comments $types = str_repeat("i", count($ids)); // "iiii" $args = array_merge(array($types), $ids); // ["iiii", 2, 4, 6, 8] call_user_func_array(array($stmt, 'bind_param'), ref($args)); // $stmt->bind_param("iiii", 2, 4, 6, 8) // execute the query for all input values in one step $stmt->execute(); // done $stmt->close(); echo "Updated record IDs: " . implode(",", $ids) . "\n"; // ---------------------------------------------------------------------------------- // helper function to turn an array of values into an array of value references // necessary because mysqli_stmt::bind_param needs value references for no good reason function ref($arr) { $refs = array(); foreach ($arr as $key => $val) $refs[$key] = &$arr[$key]; return $refs; } ``` Add more parameter placeholders for other fields as you need them. **Which one to pick?** * The first variant works with a variable number of records iteratively, hitting the database multiple times. This is most useful for UPDATE and INSERT operations. * The second variant works with a variable number of records too, but it hits the database only once. This is much more efficient than the iterative approach, though obviously you can only do the same thing to all affected records. This is most useful for SELECT and DELETE operations, or when you want to UPDATE multiple records with the same data. **Why prepared statements?** * Prepared statements are a lot safer because they make SQL injection attacks impossible. This is the primary reason to use prepared statements, even if it is more work to write them. A sensible habit to get into is: Always use prepared statements, even if you think it's "not really necessary." Neglect will come and bite you (or your customers). * Re-using the same prepared statement multiple times with different parameter values is more efficient than sending multiple full SQL strings to the database, because the database only needs to compile the statement once and can re-use it as well. * Only parameter values are sent to the database on `execute()`, so less data needs to go over the wire when used repeatedly. In longer loops the execution time difference between using a prepared statement and sending plain SQL will become noticeable.
# Using the "IN" Clause Might be what you're after ``` $ids = array(2,4,6,8); $ids = implode(',', $ids); $sql="SELECT * FROM my_table WHERE id IN($ids);"; mysql_query($sql); ``` otherwise, what's wrong with ``` $ids = array(2,4,6,8); foreach($ids as $id) { $sql="SELECT * FROM my_table WHERE ID = $id;"; mysql_query($sql); } ```
I have an array of integers, how do I use each one in a mysql query (in php)?
[ "", "php", "mysql", "arrays", "" ]
I have the following problem, which I've not been able to solve successfully. Your help will be appreciated. I'm using SQL 2005, and trying to do this using a CTE. The table has the following 2 columns ``` DocNum DocEntry 1 234 2 324 2 746 3 876 3 764 4 100 4 387 ``` The expected result is as follows ``` 1 234 2 324, 746 3 876, 764 4 100, 387 ``` Thanks Rahul Jain --- Further explanation transcribed from the comments: I'm using a query like the following: ``` WITH ABC (DocNum, DocEntry) AS (SELECT DocNum, Cast(DocEntry As VARCHAR(8000)) FROM Temp5 WHERE DocNum = 1 UNION ALL SELECT a.DocNum, A.DocEntry + ', ' + B.DocEntry FROM ABC B INNER JOIN Temp5 A ON B.DocNum +1= A.DocNum WHERE A.DOCNUM > 1) SELECT * FROM ABC; ``` The result from the above query is as follows ``` 1 234 2 234, 324 2 234, 746 3 234, 746, 876 3 234, 746, 764 ``` I don't want those numbers to repeat as shown in the question.
``` SELECT DocNum, STUFF((SELECT ', ' + CAST(DocEntry AS VARCHAR(MAX)) AS [text()] FROM Temp5 b WHERE a.DocNum = b.DocNum FOR XML PATH('')), 1, 2, '') AS DocEntry FROM Temp5 a GROUP BY DocNum ``` Itzik Ben-Gan in his excellent book [T-SQL QUERYING](https://rads.stackoverflow.com/amzn/click/com/0735623139) has some specialized solutions for aggregate string concatenation. The query speaks for itself.
Here's an article that describes methods to do that: [Converting Multiple Rows into a CSV String](http://www.sqlteam.com/article/converting-multiple-rows-into-a-csv-string-set-based-method)
Help with recursive query
[ "", "sql", "sql-server", "common-table-expression", "recursive-query", "" ]
Every now and then, I bump into syntax that I've seen before, but never used. This is one of those times. Can someone explain the purpose of ":this" or ":base" following a C# constructor method? For example: ``` public MyClass(SomeArg arg) : this(new SomethingElse(), arg) { } ``` My gut feeling is that it is used to map a default argument onto another constructor method.
You're basically right. `this()` calls a constructor on the current instance, `base()` calls the base class's constructor for the current instance. They're generally used to handle constructor overloads so you can add additional options without breaking things out into a separate method.
Your gut feeling is right. The syntax is used to call overloaded constructors in the same class: ``` public class Test { public Test() : this("Called from default constructor") { } public Test(String msg) { Console.WriteLine(msg); } } ``` The following code: ``` public static void Main(String[] args) { Test t1 = new Test(); Test t2 = new Test("Called from Main function"); } ``` Outputs the following ``` Called from default constructor Called from Main function ``` Similarly, `: base(someParams)` is used to call base constructors.
: this(foo) syntax in C# constructors?
[ "", "c#", ".net", "constructor", "constructor-chaining", "" ]
Compiling these lines ``` long int sz; char tmpret[128]; //take substring of c, translate in c string, convert to int, //and multiply with 1024 sz=atoi(c.substr(0,pos).c_str())*1024; snprintf(tmpret,128,"%l",sz); ``` I get two warnings on the snprintf line: ``` warning: conversion lacks type at end of format warning: too many arguments for format ``` Why? The type is specified (long int sz, and %l in snprintf) and snprintf is given only one argument. Can anybody help me? Thanks.
Your format lacks a type because `l` is only a length modifier, not a complete conversion specification. It should be `%ld` for a long int.
`boost::lexical_cast<string>(sz)` is much nicer, anyway.
Warnings using format strings with sprintf() in C++
[ "", "c++", "types", "format", "printf", "" ]
I have just imported a WAR file from an external site, which is basically a servlet, into Eclipse IDE (the project runs on Apache-Tomcat). When I import it it has a folder called **Web App Libraries**. So here are a few of my newbie questions: 1. I am unsure what the exact purpose of this folder is. What does it do, why would you choose to have it in your project? 2. I see that it has a folder called **Imported Classes** and foobar.class files inside it - why? *(These seemed to be mirrored in the **Web Content** folder - although here you can modify the code as they are foobar.java.)* 3. There are references to foobar.jar files - these are also mirrored in the **WEB-INF/lib** folder - why? I know these are basic type questions but I'm just getting to grips with Java and website dev, so apologies if they sound a bit dumb! - BTW if anyone knows any good online resource to understand more about project file structures like this, then let me know. I just need to get to grips with this stuff asap - as the project deadline is fairly soon. Cheers. Here's a screenshot just to help you visualise: ![alt text](https://rantincsharp.files.wordpress.com/2008/12/eclipserestlet.gif)
I assume this is a screenshot from the 'Project Explorer' view. It does not display the exact folder and file structure; it adds some eye candy constructed from the project's metadata. * To see the real structure of your project, try switching to the 'Navigator' view. * During a WAR file import, Eclipse basically does two things: + Creates a new web project and copies the WAR's content to the 'WebContent' subfolder of the new project. + Based on the WAR, it constructs the project's metadata (.project and .classpath files). * The 'Web App Libraries' section displays the list of jar files that the WAR contained (in WEB-INF/lib). * 'Imported classes' (which I also see for the first time) seems to contain classes found in the imported WAR (WEB-INF/classes) for which Eclipse was not able to find any corresponding source files. To fix this, create a new Java source folder in the project and move the classes you now have in the 'firstResource' folder to it.
Web App Libraries isn't a real directory, but rather a listing of what Eclipse thinks are this project's libraries. Generally, this consists of all the jar files in WebContent/WEB-INF/lib/. Sometimes, Eclipse no longer lists them in their real directory in Eclipse's Package Explorer... but they're still there if you look with another program.
Understanding imported WAR in Eclipse and its folder structure
[ "", "java", "eclipse", "" ]
(I've tried this in MySql) I believe they're semantically equivalent. Why not identify this trivial case and speed it up?
In MySQL, truncate table causes an implicit commit and cannot be rolled back; it is like dropping and recreating the table, which is why it takes almost no time.
...just to add some detail. Calling the DELETE statement tells the database engine to generate a transaction log of all the records deleted. In the event the delete was done in error, you can restore your records. Calling the TRUNCATE statement is a blanket "all or nothing" that removes all the records with minimal per-row logging, so there is effectively nothing to restore from. It is definitely faster, but should only be done when you're sure you don't need any of the records you're going to remove.
Why 'delete from table' takes a long time when 'truncate table' takes 0 time?
[ "", "sql", "mysql", "performance", "truncate", "" ]
I want to determine whether two different child nodes within an XML document are equal or not. Two nodes should be considered equal if they have the same set of attributes and child nodes and all child nodes are equal, too (i.e. the whole sub tree should be equal). The input document might be very large (up to 60MB, more than 100,000 nodes to compare) and performance is an issue. What would be an efficient way to check for the equality of two nodes? **Example:** ``` <w:p> <w:pPr> <w:spacing w:after="120"/> </w:pPr> <w:r> <w:t>Hello</w:t> </w:r> </w:p> <w:p> <w:pPr> <w:spacing w:after="240"/> </w:pPr> <w:r> <w:t>World</w:t> </w:r> </w:p> ``` This XML snippet describes paragraphs in an OpenXML document. The algorithm would be used to determine whether a document contains a paragraph (w:p node) with the same properties (w:pPr node) as another paragraph earlier in the document. One idea I have would be to store the nodes' outer XML in a hash set (Normally I would have to get a canonical string representation first where attributes and child nodes are always sorted in the same way, but I can expect my nodes already to be in such a form). Another idea would be to create an XmlNode object for each node and write a comparer which compares all attributes and child nodes. My environment is C# (.Net 2.0); any feedback and further ideas are very welcome. Maybe somebody even has already a good solution? EDIT: Microsoft's XmlDiff API can actually do that but I was wondering whether there would be a more lightweight approach. XmlDiff seems to always produce a diffgram and to always produce a canonical node representation first, both things which I don't need. EDIT2: I finally implemented my own XmlNodeEqualityComparer based on the suggestion made here. Thanks a lot!!!! Thanks, divo
I'd recommend against rolling your own hash creation function and instead rely on the in-built `XNodeEqualityComparer`'s `GetHashCode` method. This guarantees to take account of attributes and descendant nodes when creating the result and could save you some time too. Your code would look like the following: ``` XNodeEqualityComparer comparer = new XNodeEqualityComparer(); XDocument doc = XDocument.Load("XmlFile1.xml"); Dictionary<int, XNode> nodeDictionary = new Dictionary<int, XNode>(); foreach (XNode node in doc.Elements("doc").Elements("node")) { int hash = comparer.GetHashCode(node); if (nodeDictionary.ContainsKey(hash)) { // A duplicate has been found. Execute your logic here // ... } else { nodeDictionary.Add(hash, node); } } ``` My XmlFile1.xml is: ``` <?xml version="1.0" encoding="utf-8" ?> <doc> <node att="A">Blah</node> <node att="A">Blah</node> <node att="B"> <inner>Innertext</inner> </node> <node>Blah</node> <node att="B"> <inner>Different</inner> </node> </doc> ``` `nodeDictionary` will end up containing a unique collection of Nodes and their hashes. Duplicates are detected by using the `Dictionary`'s `ContainsKey` method, passing in the hash of the node, which we generate using the `XNodeEqualityComparer`'s `GetHashCode` method. I think this should be fast enough for your needs.
What about this approach: For all `<w:pPr>` nodes in the document (I suppose there is not more than one per `<w:p>`), concatenate all relevant data (element names, attributes, values) into a string: ``` // string format is really irrelevant, so this is just a bogus example '!w:keep-with-next@value="true"!w:spacing@w:before="10"@w:after="120"' ``` Do so on alphabetical order, to account for varying document order. Build a collection using these strings as the key and the reference to the respective `<w:p>` node as the value. In the process of doing this, when you hit the point that a given key already exists in the collection, you found a paragraph with the same properties. Work with a list of nodes as the collection value, if you want to keep collecting. I can't say how well this would perform, but I guess it is not too hard to implement and find out.
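For illustration only (the question's environment is C#, and the element names here are made up), this canonical-key idea can be sketched in Python with the standard library's `xml.etree`:

```python
import xml.etree.ElementTree as ET

def node_key(elem):
    # Canonical string: tag, alphabetically sorted attributes, text,
    # then the keys of all children, recursively.
    attrs = ''.join('@%s=%s' % item for item in sorted(elem.attrib.items()))
    text = (elem.text or '').strip()
    children = ''.join(node_key(child) for child in elem)
    return '!%s%s%s%s' % (elem.tag, attrs, text, children)

doc = ET.fromstring(
    '<doc>'
    '<p><spacing after="120"/></p>'
    '<p><spacing after="240"/></p>'
    '<p><spacing after="120"/></p>'
    '</doc>')

seen = {}
for para in doc:
    key = node_key(para)
    if key in seen:
        print('duplicate of an earlier paragraph found')
    else:
        seen[key] = para
```

The dict lookup is what makes this fast: each subtree is reduced to one string once, and duplicates are detected in amortized constant time instead of pairwise comparisons.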
Efficient algorithm for comparing XML nodes
[ "", "c#", "xml", "algorithm", "comparison", "performance", "" ]
I'm using the python optparse module in my program, and I'm having trouble finding an easy way to parse an option that contains a list of values. For example: ``` --groups one,two,three. ``` I'd like to be able to access these values in a list format as `options.groups[]`. Is there an optparse option to convert comma separated values into a list? Or do I have to do this manually?
Look at [option callbacks](http://docs.python.org/2/library/optparse#option-callbacks). Your callback function can parse the value into a list using a basic `value.split(',')` (the callback receives the option's raw string value as its third argument)
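A minimal runnable version of that callback approach (the option and attribute names are just for illustration):

```python
from optparse import OptionParser

def list_callback(option, opt_str, value, parser):
    # Store the comma-separated value as a real list on the options object.
    setattr(parser.values, option.dest, value.split(','))

parser = OptionParser()
parser.add_option('--groups', dest='groups', type='string',
                  action='callback', callback=list_callback)

options, args = parser.parse_args(['--groups', 'one,two,three'])
print(options.groups)  # ['one', 'two', 'three']
```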
S.Lott's answer has already been accepted, but here's a code sample for the archives: ``` def foo_callback(option, opt, value, parser): setattr(parser.values, option.dest, value.split(',')) parser = OptionParser() parser.add_option('-f', '--foo', type='string', action='callback', callback=foo_callback) ```
Python Optparse list
[ "", "python", "optparse", "" ]
I'm new to PHP and have installed on Linux to boot (also a newbie). Anyway, PHP is working... ``` <? $myVar = "test"; echo($myVar); ?> ``` ... works just fine. But... ``` <? $dbhost = "localhost"; $dbuser = "myuser"; $dbpass = "mypass"; $dbname = "mydb"; echo($dbhost . "-" . $dbuser . "-" . $dbpass . "-" . $dbname); $conn = mysql_connect($dbhost, $dbuser, $dbpass) or die("Unable to connect to MySQL"); print $conn; mysql_close($conn); phpInfo(); ?> ``` ... does nothing. No errors, nothing. It's as if the code isn't even there. Any help?
Try to do the following: 1. First make sure [display\_errors](http://www.php.net/manual/en/errorfunc.configuration.php) is turned on in your php configuration file. Also set the level of [error\_reporting](http://www.php.net/error_reporting) to show all errors, including strict (error\_reporting = E\_ALL|E\_STRICT). After you make changes, restart your webserver. 2. Run [phpinfo()](http://www.php.net/phpinfo), and check that the mysql extension is installed and working. If it isn't make sure that you uncommented it in the php configuration file (again, remember to restart apache after each change to the configuration file). 3. At this point MySQL should be loaded and working, and you should be able to tell from the error (if it persists) what's the problem. 4. Try also [dumping](http://www.php.net/var_dump) the contents of the connection result ($conn) to see what it contains. 5. In general, I'd recommend using long php tags (<?php and not <?) since it is more portable (short tags are off by default in PHP 5 installations).
If it does nothing, doesn't that mean that it connected fine? What output do you expect out of that statement? You could try ``` error_reporting(E_ALL); $conn = mysql_connect("localhost", "myusername", "mypassword"); if(!$conn) { echo 'Unable to connect'; } else { echo 'Connected to database'; } var_dump($conn); ``` edit: Addressing the comment saying that you have a mysql query setup, if you are not seeing "success" it means something is wrong with your query. Add to the above ``` $sth = mysql_query("SELECT * FROM tablename"); if(!$sth) { echo 'unable to query: ' . mysql_error(); } else { echo 'success'; } ```
PHP ignoring mysql_connect requests
[ "", "php", "mysql", "linux", "" ]
Is there any way to have PHP automatically call a function, before a script outputs any HTTP headers? I'm looking for something like [register-shutdown-function](http://us.php.net/register-shutdown-function), but to register a function that's called **before** the output is sent, not after. I want my function to send a header, so I need something that's called earlier.
You could also trap everything with `ob_start` and then register a callback function to be used when you send the page with `ob_end_flush`. Check out the PHP manual for [OB\_START](https://www.php.net/ob_start)
I don't know if it is what you are looking for but you might want to investigate using auto\_prepend\_file in your php.ini or setting it in an .htaccess file. If you set an auto\_prepend\_file it will automatically include that file before running each script. [auto\_prepend\_file](http://www.askapache.com/php/use-phpini-to-add-http-headers-to-output.html)
Call a function before outputting headers in PHP?
[ "", "php", "" ]
I have an Access 2003 file that contains 200 queries, and I want to print out their representation in SQL. I can use Design View to look at each query and cut and paste it to a file, but that's tedious. Also, I may have to do this again on other Access files, so I definitely want to write a program to do it. Where are queries stored in an Access db? I can't find anything saying how to get at them. I'm unfamiliar with Access, so I'd appreciate any pointers. Thanks!
Procedures are what you're looking for: ``` OleDbConnection conn = new OleDbConnection(connectionString); conn.Open(); DataTable queries = conn.GetOleDbSchemaTable(OleDbSchemaGuid.Procedures, null); conn.Close(); ``` This will give you a DataTable with the following columns in it (among others): PROCEDURE\_NAME: Name of the query PROCEDURE\_DEFINITION: SQL definition So you can loop through the table like so: ``` foreach(DataRow row in queries.Rows) { // Do what you want with the values here queryName = row["PROCEDURE_NAME"].ToString(); sql = row["PROCEDURE_DEFINITION"].ToString(); } ```
you can put this together using the OleDbConnection's **GetSchema** method along with what Remou posted with regards to the ADO Schemas **oops forgot link: [MSDN](http://msdn.microsoft.com/en-us/library/system.data.oledb.oledbconnection.getschema.aspx)**
How do I list all the queries in a MS Access file using OleDB in C#?
[ "", "c#", "ms-access", "oledb", "" ]
I'm spending these holidays learning to write Qt applications. I was reading about Qt Designer just a few hours ago, which made me wonder : what do people writing real world applications in Qt use to design their GUIs? In fact, how do people design GUIs in general? I, for one, found that writing the code by hand was conceptually simpler than using Qt Designer, although for complex GUIs Designer might make sense. Large GUIs might be possible using Designer, but with time they might become very difficult to manage as complexity increases (this is just my opinion). I also downloaded the AmaroK source code to take a peek at what those guys were doing, and found many calls to addWidget() and friends, but none of those XML files created by Designer (aside: AmaroK has to be my favorite application ever on any platform). What, then, is the "right" way to create a GUI? Designer or code? Let us, for this discussion, consider the following types of GUIs : 1. Simple dialogs that just need to take input, show some result and exit. Let's assume an application that takes a YouTube URL and downloads the video to the user's hard disk. The sort of applications a newbie is likely to start out with. 2. Intermediate level GUIs like, say, a sticky notes editor with a few toolbar/menu items. Let's take xPad for example (<http://getxpad.com/>). I'd say most applications falling in the category of "utilities". 3. Very complex GUIs, like AmaroK or OpenOffice. You know 'em when you see 'em because they make your eyes bleed.
Our experience with Designer started in Qt3. **Qt3** At that point, Designer was useful mainly to generate code that you would then compile into your application. We started using it for that purpose, but as with all generated code, once you edit it you can no longer go back and regenerate it without losing your edits. We ended up just taking the generated code and doing everything by hand henceforth. **Qt4** Qt4 has improved on Designer significantly. No longer does it only generate code, but you can dynamically load in your Designer files (in xml) and [dynamically connect them to the running objects in your program](https://doc.qt.io/qt-5.7/qtuitools-index.html) -- no generated code. However, you do have to name the items in Designer and stick with those names so as not to break your code. My assessment is that it's nowhere near as useful as Interface Builder on Mac OS X, but at this point, I could see using the Designer files directly in a program. We haven't moved back to Designer since Qt3, but still use it to prototype, and debug layouts. For your problems: 1. You could probably get away with using the standard dialogs that Qt offers. [QInputDialog](https://doc.qt.io/qt-5.7/qinputdialog.html) or if you subclass QDialog, make sure to use [QButtonDialogBox](https://doc.qt.io/qt-5.7/qdialogbuttonbox.html) to make sure your buttons have the proper platform-layout. 2. You could probably do something more limited like xPad with limited Designer functionality. 3. I wouldn't think you could write something like OpenOffice solely with Designer, but maybe that's not the point. I'd use Designer as another tool, just like your text editor. Once you find the limitations, try a different tool for that new problem. I totally agree with Steve S that one advantage of Designer is that someone else who's not a programmer can do the layout.
In my experience with Qt Designer and other toolkits/UI-tools: * UI tools speed up the work. * UI tools make it easier to tweak the layout later. * UI tools make it easier/possible for non-programmers to work on the UI design. Complexity can often be dealt with in a UI tool by breaking the design into multiple UI files. Include small logical groups of components in each file and treat each group as a single widget that is used to build the complete UI. Qt Designer's concept of promoted widgets can help with this. I haven't found that the scale of the project makes any difference. Your experience may vary. The files created with UI tools (I guess you could write them by hand if you really wanted to) can often be dynamically loaded at run-time (Qt and GTK+ both provide this feature). This means that you can make layout changes and test them without recompiling. Ultimately, I think both raw code and UI tools can be effective. It probably depends a lot on the environment, the toolkit/UI-tool, and of course personal preference. I like UI tools because they get me up and running fast and allow easy changes later.
Hand Coded GUI Versus Qt Designer GUI
[ "", "c++", "qt", "user-interface", "rad", "qt-designer", "" ]
For example: `sizeof(char*)` returns 4. As does `int*`, `long long*`, everything that I've tried. Are there any exceptions to this?
The guarantee you get is that `sizeof(char) == 1`. There are no other guarantees, including no guarantee that `sizeof(int *) == sizeof(double *)`. In practice, pointers will be size 2 on a 16-bit system (if you can find one), 4 on a 32-bit system, and 8 on a 64-bit system, but there's nothing to be gained in relying on a given size.
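As an aside (not part of the original answer), a quick empirical check of the data-pointer size on whatever platform you are running, shown here with Python's ctypes purely for illustration:

```python
import ctypes

# c_void_p mirrors the platform's void* type: typically 4 bytes on
# 32-bit builds and 8 bytes on 64-bit builds.
print(ctypes.sizeof(ctypes.c_void_p))
```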
Even on a plain x86 32 bit platform, you can get a variety of pointer sizes, try this out for an example: ``` struct A {}; struct B : virtual public A {}; struct C {}; struct D : public A, public C {}; int main() { cout << "A:" << sizeof(void (A::*)()) << endl; cout << "B:" << sizeof(void (B::*)()) << endl; cout << "D:" << sizeof(void (D::*)()) << endl; } ``` Under Visual C++ 2008, I get 4, 12 and 8 for the sizes of the pointers-to-member-function. Raymond Chen talked about this [here](http://blogs.msdn.com/oldnewthing/archive/2004/02/09/70002.aspx).
Is the sizeof(some pointer) always equal to four?
[ "", "c++", "c", "pointers", "memory", "sizeof", "" ]
In practice with C++, what is [RAII](http://en.wikipedia.org/wiki/Resource_acquisition_is_initialization), what are [smart pointers](http://en.wikipedia.org/wiki/Smart_pointers), how are these implemented in a program and what are the benefits of using RAII with smart pointers?
A simple (and perhaps overused) example of RAII is a File class. Without RAII, the code might look something like this: ``` File file("/path/to/file"); // Do stuff with file file.close(); ``` In other words, we must make sure that we close the file once we've finished with it. This has two drawbacks - firstly, wherever we use File, we will have to call File::close() - if we forget to do this, we're holding onto the file longer than we need to. The second problem is what if an exception is thrown before we close the file? Java solves the second problem using a finally clause: ``` File file = new File("/path/to/file"); try { // Do stuff with file } finally { file.close(); } ``` or since Java 7, a try-with-resource statement: ``` try (File file = new File("/path/to/file")) { // Do stuff with file } ``` C++ solves both problems using RAII - that is, closing the file in the destructor of File. So long as the File object is destroyed at the right time (which it should be anyway), closing the file is taken care of for us. So, our code now looks something like: ``` File file("/path/to/file"); // Do stuff with file // No need to close it - destructor will do that for us ``` This cannot be done in Java since there's no guarantee when the object will be destroyed, so we cannot guarantee when a resource such as file will be freed. Onto smart pointers - a lot of the time, we just create objects on the stack. For instance (and stealing an example from another answer): ``` void foo() { std::string str; // Do cool things to or using str } ``` This works fine - but what if we want to return str? We could write this: ``` std::string foo() { std::string str; // Do cool things to or using str return str; } ``` So, what's wrong with that? Well, the return type is std::string - so it means we're returning by value. This means that we copy str and actually return the copy. This can be expensive, and we might want to avoid the cost of copying it.
Therefore, we might come up with the idea of returning by reference or by pointer. ``` std::string* foo() { std::string str; // Do cool things to or using str return &str; } ``` Unfortunately, this code doesn't work. We're returning a pointer to str - but str was created on the stack, so it will be destroyed once we exit foo(). In other words, by the time the caller gets the pointer, it's useless (and arguably worse than useless, since using it could cause all sorts of funky errors). So, what's the solution? We could create str on the heap using new - that way, when foo() is completed, str won't be destroyed. ``` std::string* foo() { std::string* str = new std::string(); // Do cool things to or using str return str; } ``` Of course, this solution isn't perfect either. The reason is that we've created str, but we never delete it. This might not be a problem in a very small program, but in general, we want to make sure we delete it. We could just say that the caller must delete the object once they're finished with it. The downside is that the caller has to manage memory, which adds extra complexity, and might get it wrong, leading to a memory leak i.e. not deleting the object even though it is no longer required. This is where smart pointers come in. The following example uses shared\_ptr - I suggest you look at the different types of smart pointers to learn what you actually want to use. ``` shared_ptr<std::string> foo() { shared_ptr<std::string> str(new std::string()); // Do cool things to or using str return str; } ``` Now, shared\_ptr will count the number of references to str. For instance ``` shared_ptr<std::string> str = foo(); shared_ptr<std::string> str2 = str; ``` Now there are two references to the same string. Once there are no remaining references to str, it will be deleted. As such, you no longer have to worry about deleting it yourself. Quick edit: as some of the comments have pointed out, this example isn't perfect for (at least!) two reasons.
Firstly, due to the implementation of strings, copying a string tends to be inexpensive. Secondly, due to what's known as named return value optimisation, returning by value may not be expensive since the compiler can do some cleverness to speed things up. So, let's try a different example using our File class. Let's say we want to use a file as a log. This means we want to open our file in append only mode: ``` File file("/path/to/file", File::append); // The exact semantics of this aren't really important, // just that we've got a file to be used as a log ``` Now, let's set our file as the log for a couple of other objects: ``` void setLog(const Foo & foo, const Bar & bar) { File file("/path/to/file", File::append); foo.setLogFile(file); bar.setLogFile(file); } ``` Unfortunately, this example ends horribly - file will be closed as soon as this method ends, meaning that foo and bar now have an invalid log file. We could construct file on the heap, and pass a pointer to file to both foo and bar: ``` void setLog(const Foo & foo, const Bar & bar) { File* file = new File("/path/to/file", File::append); foo.setLogFile(file); bar.setLogFile(file); } ``` But then who is responsible for deleting file? If neither deletes file, then we have both a memory and resource leak. We don't know whether foo or bar will finish with the file first, so we can't expect either to delete the file themselves. For instance, if foo deletes the file before bar has finished with it, bar now has an invalid pointer. So, as you may have guessed, we could use smart pointers to help us out. ``` void setLog(const Foo & foo, const Bar & bar) { shared_ptr<File> file(new File("/path/to/file", File::append)); foo.setLogFile(file); bar.setLogFile(file); } ``` Now, nobody needs to worry about deleting file - once both foo and bar have finished and no longer have any references to file (probably due to foo and bar being destroyed), file will automatically be deleted.
**RAII** This is a strange name for a simple but awesome concept. Better is the name **Scope Bound Resource Management** (SBRM). The idea is that often you happen to allocate resources at the beginning of a block, and need to release them at the exit of the block. Exiting the block can happen by normal flow control, by jumping out of it, and even by an exception. To cover all these cases, the code becomes more complicated and redundant. Just an example doing it without SBRM: ``` void o_really() { resource * r = allocate_resource(); try { // something, which could throw. ... } catch(...) { deallocate_resource(r); throw; } if(...) { return; } // oops, forgot to deallocate deallocate_resource(r); } ``` As you see there are many ways we can get pwned. The idea is that we encapsulate the resource management into a class. Initialization of its object acquires the resource ("Resource Acquisition Is Initialization"). At the time we exit the block (block scope), the resource is freed again. ``` struct resource_holder { resource_holder() { r = allocate_resource(); } ~resource_holder() { deallocate_resource(r); } resource * r; }; void o_really() { resource_holder r; // something, which could throw. ... if(...) { return; } } ``` That is nice if you have classes of your own which are not solely for the purpose of allocating/deallocating resources. Allocation would just be an additional concern to get their job done. But as soon as you just want to allocate/deallocate resources, the above becomes unwieldy. You have to write a wrapping class for every sort of resource you acquire. To ease that, smart pointers allow you to automate that process: ``` shared_ptr<Entry> create_entry(Parameters p) { shared_ptr<Entry> e(Entry::createEntry(p), &Entry::freeEntry); return e; } ``` Normally, smart pointers are thin wrappers around new / delete that just happen to call `delete` when the resource they own goes out of scope.
Some smart pointers, like shared\_ptr, allow you to tell them a so-called deleter, which is used instead of `delete`. That allows you, for instance, to manage window handles, regular expression resources and other arbitrary stuff, as long as you tell shared\_ptr about the right deleter. There are different smart pointers for different purposes: ### [**unique\_ptr**](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf) is a smart pointer which owns an object exclusively. It's not in boost, but it will likely appear in the next C++ Standard. It's *non-copyable* but supports *transfer-of-ownership*. Some example code (next C++): *Code:* ``` unique_ptr<plot_src> p(new plot_src); // now, p owns unique_ptr<plot_src> u(move(p)); // now, u owns, p owns nothing. unique_ptr<plot_src> v(u); // error, trying to copy u vector<unique_ptr<plot_src>> pv; pv.emplace_back(new plot_src); pv.emplace_back(new plot_src); ``` Unlike auto\_ptr, unique\_ptr can be put into a container, because containers will be able to hold non-copyable (but movable) types, like streams and unique\_ptr too. ### [**scoped\_ptr**](http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/scoped_ptr.htm) is a boost smart pointer which is neither copyable nor movable. It's the perfect thing to be used when you want to make sure pointers are deleted when going out of scope. *Code:* ``` void do_something() { scoped_ptr<pipe> sp(new pipe); // do something here... } // when going out of scope, sp will delete the pointer automatically. ``` ### [**shared\_ptr**](http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/shared_ptr.htm) is for shared ownership. Therefore, it's both copyable and movable. Multiple smart pointer instances can own the same resource. As soon as the last smart pointer owning the resource goes out of scope, the resource will be freed.
A real world example from one of my projects: *Code:* ``` shared_ptr<plot_src> p(new plot_src(&fx)); plot1->add(p)->setColor("#00FF00"); plot2->add(p)->setColor("#FF0000"); // if p now goes out of scope, the src won't be freed, as both plot1 and // plot2 both still have references. ``` As you see, the plot-source (function fx) is shared, but each one has a separate entry, on which we set the color. There is a weak\_ptr class which is used when code needs to refer to the resource owned by a smart pointer, but doesn't need to own the resource. Instead of passing a raw pointer, you should then create a weak\_ptr. It will throw an exception when it notices you try to access the resource through a weak\_ptr after no shared\_ptr owns the resource anymore.
RAII and smart pointers in C++
[ "", "c++", "smart-pointers", "raii", "" ]
Why doesn't the following work (Python 2.5.2)? ``` >>> import datetime >>> class D(datetime.date): def __init__(self, year): datetime.date.__init__(self, year, 1, 1) >>> D(2008) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: function takes exactly 3 arguments (1 given) ``` I wanted to create a class that was just like `datetime.date`, but with a different `__init__` function. Apparently my function never gets called. Instead the original `datetime.date.__init__` is called and fails because that expects 3 arguments and I am passing in one. What's going on here? And is this a clue? ``` >>> datetime.date.__init__ <slot wrapper '__init__' of 'object' objects> ``` Thanks!
Regarding several other answers, this doesn't have anything to do with dates being implemented in C per se. The `__init__` method does nothing because they are *immutable* objects, therefore the constructor (`__new__`) should do all the work. You would see the same behavior subclassing int, str, etc. ``` >>> import datetime >>> class D(datetime.date): def __new__(cls, year): return datetime.date.__new__(cls, year, 1, 1) >>> D(2008) D(2008, 1, 1) ```
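The same point generalizes to the other immutable built-ins the answer mentions. A small sketch with int (the `Meters` name is just illustrative):

```python
class Meters(int):
    """An int subclass constructed from a value in kilometres."""
    def __new__(cls, km):
        # For immutable bases, construction must happen in __new__;
        # by the time __init__ runs, the value is already fixed.
        return super().__new__(cls, km * 1000)

m = Meters(3)
assert m == 3000 and isinstance(m, int)
```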
Please read the Python reference on [*Data model*](https://docs.python.org/3/reference/datamodel.html), especially about the `__new__` [special method](https://docs.python.org/3/reference/datamodel.html#object.__new__). Excerpt from that page (my italics): > `__new__()` is intended mainly to allow subclasses of *immutable* types (like int, str, or tuple) to *customize instance creation*. It is also commonly overridden in custom metaclasses in order to customize class creation. `datetime.datetime` is also an immutable type. PS If you think that: * an object implemented in C cannot be subclassed, or * `__init__` doesn't get called for C implemented objects, only `__new__` then please try it: ``` >>> import array >>> array <module 'array' (built-in)> >>> class A(array.array): def __init__(self, *args): super(array.array, self).__init__(*args) print "init is fine for objects implemented in C" >>> a=A('c') init is fine for objects implemented in C >>> ```
Why can't I subclass datetime.date?
[ "", "python", "oop", "datetime", "subclass", "" ]
I do not want the user to be able to change the value displayed in the combobox. I have been using `Enabled = false` but it grays out the text, so it is not very readable. I want it to behave like a textbox with `ReadOnly = true`, where the text is displayed normally, but the user can't edit it. Is there is a way of accomplishing this?
The article [ComboBox-with-read-only-behavior](http://www.codeproject.com/kb/combobox/ReadOnlyComboBoxByClaudio.aspx) suggests an interesting solution: Create both a readonly textbox and a combobox in the same place. When you want readonly mode, display the textbox, when you want it to be editable, display the combobox.
Set the `DropDownStyle` property to `DropDownList` instead of `DropDown`, then handle the `TextChanged` event to prevent the user from changing the text.
How to make Combobox in winforms readonly
[ "", "c#", ".net", "winforms", "" ]
In jQuery you can get the top position relative to the parent as a number, but you cannot get the css top value as a number if it was set in `px`. Say I have the following: ``` #elem{ position:relative; top:10px; } ``` ``` <div> Bla text bla this takes up vertical space.... <div id='elem'>bla</div> </div> ``` ``` $('#elem').position().top; //Returns the number (10+(the vertical space took by the text)) $('#elem').css('top'); //Returns the string '10px' ``` But I want to have the css top property as the number `10`. How would one achieve this?
You can use the parseInt() function to convert the string to a number, e.g: ``` parseInt($('#elem').css('top')); ``` **Update:** (as suggested by Ben): You should give the radix too: ``` parseInt($('#elem').css('top'), 10); ``` Forces it to be parsed as a decimal number, otherwise strings beginning with '0' might be parsed as an octal number (might depend on the browser used).
A jQuery plugin based on M4N's answer ``` jQuery.fn.cssNumber = function(prop){ var v = parseInt(this.css(prop),10); return isNaN(v) ? 0 : v; }; ``` So then you just use this method to get number values ``` $("#logo").cssNumber("top") ```
Get css top value as number not as string?
[ "", "javascript", "jquery", "css", "" ]
Is it OK - best-practice-wise - to use the second layer to redirect the user? For example: ``` public static void ForceLogin() { HttpCookie cookie = HttpContext.Current.Request.Cookies[cookieName]; if (cookie != null) { if (Regex.IsMatch(cookie.Value, "^[0-9]+\\.[a-f0-9]+$")) { using (EibxDataContext db = new EibxDataContext()) { int count = db.Logins.Count(l => l.Password == cookie.Value); if (count == 1) { return; } } } } HttpContext.Current.Response.Redirect("~/Login.aspx"); } ``` On the last line, I use the Business/Service Logic Layer to redirect the user to the login page. Should this be done in the Presentation layer?
Absolutely not. The business logic layer should make the decision, the UI layer should do the redirect. The business layer shouldn't know anything about HttpContext nor should it be directly reading cookies. Pass the relevant information into the business layer so that the business layer can make the decision, and pass the decision out to the UI layer so that it can work on the resultant decision. Here's the reason... what if the business layer is used from a web service? How can the business layer do a redirect in that instance? Or suppose it's used with a non-web client? Redirection has no meaning in that context. If you change your UI layer, that should not affect your business logic layer, and mixing in redirects and cookie reading into the business layer will necessitate that with the proposed design.
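In pseudocode, the split being recommended - a pure decision in the business layer, the redirect in the UI layer - might look like this (Python used for illustration only; all names here are made up):

```python
import re

# Business layer: pure decision, no knowledge of HttpContext or cookies.
def is_valid_login(cookie_value, count_logins):
    """count_logins is injected data access (e.g. a DB query function)."""
    if cookie_value is None:
        return False
    if not re.match(r"^[0-9]+\.[a-f0-9]+$", cookie_value):
        return False
    return count_logins(cookie_value) == 1

# UI layer: owns the request/response and acts on the decision.
def force_login(get_cookie, redirect, count_logins):
    if not is_valid_login(get_cookie(), count_logins):
        redirect("~/Login.aspx")
```

Because the business-layer function only takes data in and hands a decision out, the same logic works unchanged from a web service or a non-web client.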
It depends on how you define your layers; for example, my "business logic" is usually logic related to the problem I am trying to solve, and knows nothing of the UI. So it can't do a redirect, as it has no access to the request/response. Personally, I'd do this at the UI layer; dealing with the raw interactions such as being gate-keeper and custodian is part of the UI layer's job for a web app. IMO. For example, via an http-module, which is (by definition) a UI-level component.
Three Layered Web Application
[ "", "c#", "redirect", "routes", "layer", "" ]
I am working on an open source C++ project, for code that compiles on Linux and Windows. I use CMake to build the code on Linux. For ease of development setup and political reasons, I must stick to Visual Studio project files/editor on Windows (I can't switch to [Code::Blocks](http://en.wikipedia.org/wiki/Code::Blocks), for example). I see instructions to generate Visual Studio files using CMake, as [here](http://www.opentissue.org/mediawiki/index.php/Using_CMake). Have you used CMake to generate Visual Studio files before? How has been your experience? Suppose I want to add a new file to my project. What is the workflow for this?
CMake is actually pretty good for this. The key part was that everyone on the Windows side had to remember to run CMake before loading the solution, and everyone on our Mac side had to remember to run it before make. The hardest part, as a Windows developer, was making sure your structural changes were in the CMakeLists.txt file and not in the solution or project files, as those changes would probably get lost - and even if not lost, would not get transferred over to the Mac side, which also needed them. The Mac guys likewise had to remember not to modify the makefile for the same reasons. It just requires a little thought and patience, but there will be mistakes at first. If you are using [continuous integration](http://en.wikipedia.org/wiki/Continuous_integration) on both sides, these will get shaken out early, and people will eventually get into the habit.
Not sure if it's directly related to the question, but I was looking for an answer for how to generate \*.sln from cmake projects. I've discovered that one can use something like this: ``` cmake -G "Visual Studio 10" ``` The example generates the needed VS 2010 files from an input CMakeLists.txt file.
Using CMake to generate Visual Studio C++ project files
[ "", "c++", "visual-studio", "build-process", "cross-platform", "cmake", "" ]
I can't seem to get it to "work". Perhaps I'm not even testing it correctly. I've got a <%= DateTime.Now.ToString() %> line in my aspx page. I've tried setting caching declaratively like this ``` <%@ OutputCache VaryByParam="SchoolId" Duration="180" Location="Server" NoStore="false" %> ``` I've also tried setting it programmatically via the Response.Cache object. The timestamp always changes. The web.config originally didn't have an outputCache section. I've added one that looks like ``` <outputCache enableOutputCache="true" enableFragmentCache="true" sendCacheControlHeader="true" omitVaryStar="false"/> ``` Always the timestamp changes. What do I try next?
ugh. The issue was a Response.Cache.SetCacheability(HttpCacheability.NoCache) in the Page\_Load of a usercontrol buried 3 levels deep from the page. I appreciate the help, though. -al
It should be pretty easy to enable. I've done it in the past by setting the OutputCache directive in my aspx. I don't think the web.config changes are necessary, as caching is usually enabled by default. Are you testing with IIS or the dev web server? Are you doing anything that would cause the web server to reset (ie, doing a build) in between your tests?
ASP.NET Caching
[ "", "c#", "asp.net", "caching", "" ]
I'm writing an algorithm in PHP to solve a given Sudoku puzzle. I've set up a somewhat object-oriented implementation with two classes: a `Square` class for each individual tile on the 9x9 board, and a `Sudoku` class, which has a matrix of `Square`s to represent the board. The implementation of the algorithm I'm using is a sort of triple-tier approach. The first step, which will solve only the most basic puzzles (but is the most efficient), is to fill in any squares which can only take a single value based on the board's initial setup, and to adjust the constraints accordingly on the rest of the unsolved squares. Usually, this process of "constant propagation" doesn't solve the board entirely, but it does solve a sizable chunk. The second tier will then kick in. This parses each unit (or 9 squares which must all have unique number assignments, e.g. a row or column) for the "possible" values of each unsolved square. This list of possible values is represented as a string in the `Square` class: ``` class Square { private $name; // 00, 01, 02, ... , 86, 87, 88 private $peers; // All squares in same row, col, and box private $number; // Assigned value (0 if not assigned) private $possibles; // String of possible numbers (1-9) public function __construct($name, $p = 0) { $this->name = $name; $this->setNumber($p); if ($p == 0) { $this->possibles = "123456789"; } } // ... other functions ``` Given a whole array of unsolved squares in a unit (as described in the second tier above), the second tier will concatenate all the strings of "possibles" into a single string. It will then search through that single string for any unique character values - values which do not repeat themselves. This will indicate that, within the unit of squares, there is only one square that can take on that particular value. My question is: for implementing this second tier, how can I parse this string of all the possible values in a unit and easily detect the unique value(s)? 
I know I could create an array indexed by the numbers 1-9, increment the value at the corresponding index by 1 for each possible value I find, and then scan the array again for any counts of 1, but this seems extremely inefficient, requiring two linear scans of an array for each unit - and in a Sudoku puzzle there are 27 units.
This is somewhat like what you have already ruled out as "extremely inefficient", but with builtin functions so it might be quite efficient: ``` $all_possibilities = "1234567891234"; $unique = array(); foreach (count_chars($all_possibilities, 1) as $c => $occurrences) { if ($occurrences == 1) $unique[] = chr($c); } print join("", $unique) . "\n"; ``` Prints: "56789"
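For comparison, the same single-pass counting idea can be sketched in Python with collections.Counter:

```python
from collections import Counter

all_possibilities = "1234567891234"
counts = Counter(all_possibilities)  # one pass over the combined string
unique = "".join(ch for ch in sorted(counts) if counts[ch] == 1)
print(unique)  # 56789
```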
Consider using a binary number to represent your "possibles" instead, because binary operations like AND, OR, XOR tend to be much faster than string operations. E.g. if "2" and "3" are possible for a square, use the binary number 000000110 to represent the possibilities for that square. Here's how you could find uniques: ``` $seenonce = 0; $seenmore = 0; foreach(all_possibles_for_this_unit as $possibles) { $seenmore |= ($possibles & $seenonce); $seenonce |= $possibles; } $seenonce ^= $seenmore; if ($seenonce) { //something was seen once - now it must be located } ``` I'm not sure if this method will actually work faster but it's worth looking into.
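The seen-once/seen-more trick above translates directly; a Python sketch (bit d-1 represents digit d):

```python
def unique_in_unit(possible_masks):
    """Return a mask of digits that appear in exactly one square's mask."""
    seen_once = seen_more = 0
    for mask in possible_masks:
        seen_more |= mask & seen_once  # bits seen at least twice
        seen_once |= mask              # bits seen at least once
    # seen_more is a subset of seen_once, so XOR strips the repeats
    return seen_once ^ seen_more

# Two unsolved squares: one can be {2, 3}, the other {1, 2}.
assert unique_in_unit([0b110, 0b011]) == 0b101  # digits 1 and 3 are unique
```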
Finding characters in a string that occur only once
[ "", "php", "algorithm", "sudoku", "" ]
Bear with me while I explain my question. Skip down to the bold heading if you already understand extended slice list indexing. In python, you can index lists using slice notation. Here's an example: ``` >>> A = list(range(10)) >>> A[0:5] [0, 1, 2, 3, 4] ``` You can also include a stride, which acts like a "step": ``` >>> A[0:5:2] [0, 2, 4] ``` The stride is also allowed to be negative, meaning the elements are retrieved in reverse order: ``` >>> A[5:0:-1] [5, 4, 3, 2, 1] ``` But wait! I wanted to see `[4, 3, 2, 1, 0]`. Oh, I see, I need to decrement the start and end indices: ``` >>> A[4:-1:-1] [] ``` What happened? It's interpreting -1 as being at the end of the array, not the beginning. I know you can achieve this as follows: ``` >>> A[4::-1] [4, 3, 2, 1, 0] ``` But you can't use this in all cases. For example, in a method that's been passed indices. ## My question is: Is there any good pythonic way of using extended slices with negative strides *and explicit start and end indices* that include the first element of a sequence? This is what I've come up with so far, but it seems unsatisfying. ``` >>> A[0:5][::-1] [4, 3, 2, 1, 0] ```
Ok, I think this is probably as good as I will get it. Thanks to Abgan for sparking the idea. This relies on the fact that None in a slice is treated as if it were a missing parameter. Anyone got anything better? ``` def getReversedList(aList, end, start, step): return aList[end:start if start!=-1 else None:step] ``` *edit: check for `start==-1`, not `0`* This is still not ideal, because you're clobbering the usual behavior of -1. It seems the problem here is two overlapping definitions of what's supposed to happen. Whoever wins takes away otherwise valid invocations looking for the other intention.
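Underlying this helper: None in a slice means "no bound here", which for a negative stride runs all the way to the front of the sequence. A quick check:

```python
a = list(range(10))

# None behaves exactly like the omitted bound:
assert a[4:None:-1] == a[4::-1] == [4, 3, 2, 1, 0]

# slice.indices() shows the effective (start, stop, step) Python computes;
# the -1 stop here is the internal "before the front" sentinel:
assert slice(4, None, -1).indices(len(a)) == (4, -1, -1)
```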
It is error-prone to change the semantics of `start` and `stop`. Use `None` or `-(len(a) + 1)` instead of `0` or `-1`. The semantics is not arbitrary. See Edsger W. Dijkstra's article ["Why numbering should start at zero"](http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html). ``` >>> a = range(10) >>> start, stop, step = 4, None, -1 ``` Or ``` >>> start, stop, step = 4, -(len(a) + 1), -1 >>> a[start:stop:step] [4, 3, 2, 1, 0] ``` Or ``` >>> s = slice(start, stop, step) >>> a[s] [4, 3, 2, 1, 0] ``` [When `s` is a sequence the negative indexes in `s[i:j:k]` are treated specially](https://docs.python.org/3/library/stdtypes.html#common-sequence-operations): > If `i` or `j` is negative, the index is relative to the end of the string: > `len(s) + i` or `len(s) + j` is substituted. But note that `-0` is still `0`. that is why `len(range(10)[4:-1:-1]) == 0` because it is equivalent to `range(10)[4:9:-1]`.
Extended slice that goes to beginning of sequence with negative stride
[ "", "python", "list", "slice", "" ]
I was reading the following article: <http://msdn.microsoft.com/en-us/magazine/cc817398.aspx> "Solving 11 Likely Problems In Your Multithreaded Code" by Joe Duffy, and it raised a question: do we need to lock a .NET Int32 when reading it in multithreaded code? I understand that if it were an Int64 on a 32-bit OS it could tear, as explained in the article. But for Int32 I imagined the following situation: ``` class Test { private int example = 0; private Object thisLock = new Object(); public void Add(int another) { lock(thisLock) { example += another; } } public int Read() { return example; } } ``` I don't see a reason to include a lock in the Read method. Do you? **Update** Based on the answers (by Jon Skeet and ctacke) I understand that the code above is still vulnerable to multiprocessor caching (each processor has its own cache, unsynchronized with the others). All three of the modifications below fix the problem: 1. Marking "int example" with the "volatile" keyword 2. Inserting a Thread.MemoryBarrier(); before the actual read of "int example" 3. Reading "int example" inside a "lock(thisLock)" And I also think that "volatile" is the most elegant solution.
Locking accomplishes two things: * It acts as a mutex, so you can make sure only one thread modifies a set of values at a time. * It provides memory barriers (acquire/release semantics) which ensures that memory writes made by one thread are visible in another. Most people understand the first point, but not the second. Suppose you used the code in the question from two different threads, with one thread calling `Add` repeatedly and another thread calling `Read`. Atomicity on its own would ensure that you only ended up reading a multiple of 8 - and if there were two threads calling `Add` your lock would ensure that you didn't "lose" any additions. However, it's quite possible that your `Read` thread would only ever read 0, even after `Add` had been called several times. Without any memory barriers, the JIT could just cache the value in a register and assume it hadn't changed between reads. The point of a memory barrier is to either make sure something is really written to main memory, or really read from main memory. Memory models can get pretty hairy, but if you follow the simple rule of taking out a lock every time you want to access shared data (for read *or* write) you'll be okay. See the [volatility/atomicity](http://www.yoda.arachsys.com/csharp/threads/volatility.shtml) part of my threading tutorial for more details.
It all depends on the context. When dealing with integral types or references you might want to use members of the **System.Threading.Interlocked** class. A typical usage like: ``` if( x == null ) x = new X(); ``` Can be replaced with a call to **Interlocked.CompareExchange()**: ``` Interlocked.CompareExchange( ref x, new X(), null); ``` Interlocked.CompareExchange() guarantees that the comparison and exchange happen as an atomic operation. Other members of the Interlocked class, such as **Add()**, **Decrement()**, **Exchange()**, **Increment()** and **Read()** all perform their respective operations atomically. Read the [documentation](http://msdn.microsoft.com/en-us/library/system.threading.interlocked.aspx) on MSDN.
We need to lock a .NET Int32 when reading it in a multithreaded code?
[ "", "c#", ".net", "multithreading", "locking", "" ]
I am working on a Windows Forms application in VS 2008, and I want to display one image over the top of another, with the top image being a gif or something with transparent parts. Basically I have a big image and I want to put a little image on top if it, so that they kinda appear as one image to the user. I've been trying to use a picturebox, but this doesn't seem to have worked, any suggestions?
I was in a similar situation a couple of days ago. You can create a transparent control to host your image. ``` using System; using System.Windows.Forms; using System.Drawing; public class TransparentControl : Control { private readonly Timer refresher; private Image _image; public TransparentControl() { SetStyle(ControlStyles.SupportsTransparentBackColor, true); BackColor = Color.Transparent; refresher = new Timer(); refresher.Tick += TimerOnTick; refresher.Interval = 50; refresher.Enabled = true; refresher.Start(); } protected override CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ExStyle |= 0x20; return cp; } } protected override void OnMove(EventArgs e) { RecreateHandle(); } protected override void OnPaint(PaintEventArgs e) { if (_image != null) { e.Graphics.DrawImage(_image, (Width / 2) - (_image.Width / 2), (Height / 2) - (_image.Height / 2)); } } protected override void OnPaintBackground(PaintEventArgs e) { //Do not paint background } //Hack public void Redraw() { RecreateHandle(); } private void TimerOnTick(object source, EventArgs e) { RecreateHandle(); refresher.Stop(); } public Image Image { get { return _image; } set { _image = value; RecreateHandle(); } } } ```
PictureBox has two image layers, BackgroundImage and Image, which you can use independently of each other, including drawing and clearing.
Transparent images with C# WinForms
[ "", "c#", ".net", "image", "transparency", "picturebox", "" ]
How do I remove items from, or add items to, a select box? I'm running jQuery, should that make the task easier. Below is an example select box. ``` <select name="selectBox" id="selectBox"> <option value="option1">option1</option> <option value="option2">option2</option> <option value="option3">option3</option> <option value="option4">option4</option> </select> ```
Remove an option: ``` $("#selectBox option[value='option1']").remove(); ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <select name="selectBox" id="selectBox"> <option value="option1">option1</option> <option value="option2">option2</option> <option value="option3">option3</option> <option value="option4">option4</option> </select> ``` Add an option: ``` $("#selectBox").append('<option value="option5">option5</option>'); ``` ``` <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <select name="selectBox" id="selectBox"> <option value="option1">option1</option> <option value="option2">option2</option> <option value="option3">option3</option> <option value="option4">option4</option> </select> ```
You can delete the selected item with this: ``` $("#selectBox option:selected").remove(); ``` This is useful if you have a list and not a dropdown.
Removing an item from a select box
[ "", "javascript", "jquery", "html", "html-select", "" ]
I'm interested in implementing autocomplete in Python. For example, as the user types in a string, I'd like to show the subset of files on disk whose names start with that string. What's an efficient algorithm for finding strings that match some condition in a large corpus (say a few hundred thousand strings)? Something like: ``` matches = [s for s in allfiles if s.startswith(input)] ``` I'd like to have the condition be flexible; eg. instead of a strict startswith, it'd be a match so long as all letters in input appears in s in the same order. What's better than the brute-force method I'm showing here?
For exact matching, generally the way to implement something like this is to store your corpus in a [trie](http://en.wikipedia.org/wiki/Trie). The idea is that you store each letter as a node in the tree, linking to the next letter in a word. Finding the matches is simply walking the tree, and showing all children of your current location. e.g. "cat", "cow" and "car" would be stored as: ```   a--t  / \ c   r  \   o--w ``` When you get a c, you start at the c node; an a will then take you to the c/a node (children "t" and "r", making cat and car your completions). Note that you'll also need to mark nodes that are complete words, to handle names that are substrings of others (e.g. "car" and "cart"). To get the desired fuzzy matching, however, you may need to make some changes.
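A runnable sketch of that trie in Python (a minimal version; a real autocompleter would add scoring, result limits, etc.):

```python
class TrieNode:
    """One letter in the tree; maps next letters to child nodes."""
    def __init__(self):
        self.children = {}
        self.is_word = False   # marks complete words ("car" vs "cart")

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """All inserted words starting with prefix, in sorted order."""
        node = self.root
        for ch in prefix:              # walk down to the prefix's node
            if ch not in node.children:
                return []
            node = node.children[ch]
        results = []
        def walk(n, acc):              # then collect every word below it
            if n.is_word:
                results.append(prefix + acc)
            for ch in sorted(n.children):
                walk(n.children[ch], acc + ch)
        walk(node, "")
        return results

t = Trie()
for w in ["cat", "cow", "car", "cart"]:
    t.insert(w)
assert t.complete("ca") == ["car", "cart", "cat"]
```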
I used [Lucene](http://pylucene.osafoundation.org/) to autocomplete a text field with more than a hundred thousand possibilities, and I perceived it as instantaneous.
Python: Finding partial string matches in a large corpus of strings
[ "", "python", "search", "" ]
For VS 2005, is there a maximum number of projects in a solution beyond which performance suffers? We now have up to 25 projects and growing. Should we be making these binary references instead, or are we breaking our application logic out into too many different projects? It has started to become a big performance issue lately.
Having too many DLL files can cost you at run-time so I would recommend that you try to minimize the amount of projects. Creating several solutions is also an option but try to make the solutions independent of each other so that you don't have to debug and implement new features across several solutions - that can be cumbersome. Take a look at this article: [Project anti-pattern: Many projects in a Visual Studio Solution File](http://www.lostechies.com/blogs/chad_myers/archive/2008/07/15/project-anti-pattern-many-projects-in-a-visual-studio-solution-file.aspx).
Is there a chain of dependency through all 25 projects? If some projects aren't dependent on others, put them in their own solution. Don't compile the whole solution if you don't have to - and one usually doesn't. Usually you can right-click the project you just modified and compile just that; VS will figure out which dependent projects need to be recompiled. Use "start without debugging" unless you are planning to hit a breakpoint. Are some DLLs stable and haven't changed in a long time? They don't have to be in your solution either. Don't search the entire solution unless you really have to. The real limitation is the human mind: how many files in one project can one deal with? Also, unless you are using NDepend to trace dependencies, putting too many classes in one project can lead to too many classes depending on other classes, making changes harder and riskier.
Most number of projects within a solution
[ "", "c#", "winforms", "projects", "" ]
Does anyone know why typedefs of class names don't work like class names for the friend declaration? ``` class A { public: }; class B : public A { public: typedef A SUPERCLASS; }; typedef A X; class C { public: friend class A; // OK friend class X; // fails friend class B::SUPERCLASS; // fails }; ```
It can't, currently. I don't know the reason yet (just looking it up, because I find it interesting). Update: you can find the reason in the first proposal to support typedef-names as friends: <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1520.pdf> . The reason is that the Standard only supported elaborated-type-specifiers. It's easy to allow only those, and say that if the entity declared as friend is not declared yet, it will be made a member of the surrounding namespace. But this means that if you want to use a template parameter, you would have to do (a class is required then, for example) ``` friend class T; ``` But that brought additional problems, and it was figured not worth the gain. Now, the paper proposes to allow additional type specifiers to be given (so that this then allows use of template parameters and typedef-names). The next C++ version (due in 2010) will be able to do it. See this updated proposal to the standard: <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1791.pdf> . It will allow not only typedef names, but also template parameters to be used as the type declared as friend.
AFAIK, in C++ typedef does not create a full-fledged synonym when used in conjunction with classes. In other words, it's not like a macro. Among the restrictions is that the synonym cannot appear after a class or struct prefix, or be used as a destructor or constructor name. You also cannot subclass the synonym. I would bet that also means you can't friend it.
Why can't I declare a friend through a typedef?
[ "", "c++", "" ]
The Python [`datetime.isocalendar()`](http://www.python.org/doc/2.5.2/lib/datetime-datetime.html) method returns a tuple `(ISO_year, ISO_week_number, ISO_weekday)` for the given `datetime` object. Is there a corresponding inverse function? If not, is there an easy way to compute a date given a year, week number and day of the week?
Python 3.8 added the [fromisocalendar()](https://docs.python.org/3/library/datetime.html#datetime.date.fromisocalendar) method: ``` >>> datetime.fromisocalendar(2011, 22, 1) datetime.datetime(2011, 5, 30, 0, 0) ``` Python 3.6 added the [`%G`, `%V` and `%u` directives](https://docs.python.org/3.6/whatsnew/3.6.html#datetime): ``` >>> datetime.strptime('2011 22 1', '%G %V %u') datetime.datetime(2011, 5, 30, 0, 0) ``` **Original answer** I recently had to solve this problem myself, and came up with this solution: ``` import datetime def iso_year_start(iso_year): "The gregorian calendar date of the first day of the given ISO year" fourth_jan = datetime.date(iso_year, 1, 4) delta = datetime.timedelta(fourth_jan.isoweekday()-1) return fourth_jan - delta def iso_to_gregorian(iso_year, iso_week, iso_day): "Gregorian calendar date for the given ISO year, week and day" year_start = iso_year_start(iso_year) return year_start + datetime.timedelta(days=iso_day-1, weeks=iso_week-1) ``` A few test cases: ``` >>> iso = datetime.date(2005, 1, 1).isocalendar() >>> iso (2004, 53, 6) >>> iso_to_gregorian(*iso) datetime.date(2005, 1, 1) >>> iso = datetime.date(2010, 1, 4).isocalendar() >>> iso (2010, 1, 1) >>> iso_to_gregorian(*iso) datetime.date(2010, 1, 4) >>> iso = datetime.date(2010, 1, 3).isocalendar() >>> iso (2009, 53, 7) >>> iso_to_gregorian(*iso) datetime.date(2010, 1, 3) ```
As of Python 3.6, you can use the new `%G`, `%u` and `%V` directives. See [issue 12006](http://bugs.python.org/issue12006) and the [updated documentation](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior): > `%G` > ISO 8601 year with century representing the year that contains the greater part of the ISO week (`%V`). > > `%u` > ISO 8601 weekday as a decimal number where 1 is Monday. > > `%V` > ISO 8601 week as a decimal number with Monday as the first day of the week. Week 01 is the week containing Jan 4. Given a string with year, weeknumber and weekday number, it is easy to parse those out to a date with: ``` from datetime import datetime datetime.strptime('2002 01 1', '%G %V %u').date() ``` or as a function with integer inputs: ``` from datetime import datetime def date_from_isoweek(iso_year, iso_weeknumber, iso_weekday): return datetime.strptime( '{:04d} {:02d} {:d}'.format(iso_year, iso_weeknumber, iso_weekday), '%G %V %u').date() ```
What's the best way to find the inverse of datetime.isocalendar()?
[ "", "python", "datetime", "" ]
I am trying to design an object model (for C#), and can't work out the best way to store the data. I'll try to use a simple example to illustrate this! I have an object "pet", which could be one of "cat", "dog" etc. So I have created a "pet" class with a "petType" enum to store this. Now this is where it gets tricky. If a "pet" is a "cat", then its "food" could be one of "fish", "milk" etc. If it is a "dog" then its "food" could be "meat", "biscuits" or something. Now should I create a big enum for "fish", "milk", "meat" and "biscuits" and somehow code it so that a "cat" cannot be assigned "food.meat"? It wouldn't really make sense for my "pet" class to have a "catfood" and "dogfood" enum, because that's not extensible and it will end up storing loads of enums that are null. Is there an elegant solution to this that I'm not seeing?
Try #2. Seems to be correct ``` interface IPet { } class Cat : IPet { public void eat(CommonFood food) { } public void eat(CatFood food) { } } class Dog : IPet { public void eat(CommonFood food) { } public void eat(DogFood food) { } } interface IFood { } abstract class CommonFood : IFood { } abstract class CatFood : IFood { } abstract class DogFood : IFood { } class Milk : CommonFood { } class Fish : CatFood { } class Meat : DogFood { } class Program { static void Main(string[] args) { Dog myDog = new Dog(); myDog.eat(new Milk()); // ok, milk is common myDog.eat(new Fish()); // error } } ```
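Python can't enforce this at compile time the way the C# overloads above do, but the same modeling - food hierarchies constraining what each pet may eat - can be checked at run time. A sketch for comparison only (not the C# design itself):

```python
class Food: pass
class CommonFood(Food): pass
class CatFood(Food): pass
class DogFood(Food): pass

class Milk(CommonFood): pass
class Fish(CatFood): pass
class Meat(DogFood): pass

class Pet:
    accepted = (CommonFood,)       # every pet eats common food
    def eat(self, food):
        if not isinstance(food, self.accepted):
            raise TypeError(f"{type(self).__name__} won't eat "
                            f"{type(food).__name__}")

class Cat(Pet):
    accepted = (CommonFood, CatFood)

class Dog(Pet):
    accepted = (CommonFood, DogFood)

Dog().eat(Milk())   # fine: milk is common
try:
    Dog().eat(Fish())   # rejected, but only at run time
except TypeError:
    pass
```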
First, ***cat*** and ***dog*** should probably be subclassed from ***pet***, assuming there are some common properties of all pets. Next, I'm not clear what you are planning to do with ***food***. As an object model does a ***pet*** hold a type of food or will there be methods such as ***eat*** that will take ***food*** as an argument?
Trying to design an object model - using enums
[ "", "c#", "oop", "enums", "" ]
FieldInfo has an IsStatic member, but PropertyInfo doesn't. I assume I'm just overlooking what I need. ``` Type type = someObject.GetType(); foreach (PropertyInfo pi in type.GetProperties()) { // umm... Not sure how to tell if this property is static } ```
To determine whether a property is static, you must obtain the MethodInfo for the get or set accessor, by calling the GetGetMethod or the GetSetMethod method, and examine its IsStatic property. <https://learn.microsoft.com/en-us/dotnet/api/system.reflection.propertyinfo>
As an actual quick and simple solution to the question asked, you can use this: ``` propertyInfo.GetAccessors(nonPublic: true)[0].IsStatic; ```
In C#, how can I tell if a property is static? (.Net CF 2.0)
[ "", "c#", "reflection", "compact-framework", "" ]
How do I determine whether a function exists within a library, or list out the functions in a compiled library?
You can use the [nm](http://unixhelp.ed.ac.uk/CGI/man-cgi?nm) command to list the symbols in static libraries; the `-C` flag demangles C++ names.

```
nm -g -C libMylib.a
```
For ELF binaries, you can use readelf: ``` readelf -sW a.out | awk '$4 == "FUNC"' | c++filt ``` `-s`: list symbols `-W`: don't cut too long names The awk command will then filter out all functions, and c++filt will unmangle them. That means it will convert them from an internal naming scheme so they are displayed in human readable form. It outputs names similar to this (taken from boost.filesystem lib): ``` 285: 0000bef0 91 FUNC WEAK DEFAULT 11 boost::exception::~exception() ``` Without c++filt, the name is displayed as `_ZN5boost9exceptionD0Ev`
How would you list the available functions etc contained within a compiled library?
[ "", "c++", "linker", "ld", "" ]
I am wondering why the C# 3.0 compiler is unable to infer the type of a method when it is passed as a parameter to a generic function when it can implicitly create a delegate for the same method. Here is an example: ``` class Test { static void foo(int x) { } static void bar<T>(Action<T> f) { } static void test() { Action<int> f = foo; // I can do this bar(f); // and then do this bar(foo); // but this does not work } } ``` I would have thought that I would be able to pass `foo` to `bar` and have the compiler infer the type of `Action<T>` from the signature of the function being passed but this does not work. However I can create an `Action<int>` from `foo` without casting so is there a legitimate reason that the compiler could not also do the same thing via type inference?
Maybe this will make it clearer: ``` public class SomeClass { static void foo(int x) { } static void foo(string s) { } static void bar<T>(Action<T> f){} static void barz(Action<int> f) { } static void test() { Action<int> f = foo; bar(f); barz(foo); bar(foo); //these help the compiler to know which types to use bar<int>(foo); bar( (int i) => foo(i)); } } ``` foo is not an action - foo is a method group. * In the assignment statement, the compiler can tell clearly which foo you're talking about, since the int type is specified. * In the barz(foo) statement, the compiler can tell which foo you're talking about, since the int type is specified. * In the bar(foo) statement, it could be any foo with a single parameter - so the compiler gives up. Edit: I've added two (more) ways to help the compiler figure out the type (ie - how to skip the inference steps). From my reading of the article in JSkeet's answer, the decision to not infer the type seems to be based on a mutual infering scenario, such as ``` static void foo<T>(T x) { } static void bar<T>(Action<T> f) { } static void test() { bar(foo); //wut's T? } ``` Since the general problem was unsolve-able, they choose to left specific problems where a solution exists as unsolved. As a consequence of this decision, you won't be adding a overload for a method and getting a whole lot of type confusion from all the callers that are used to a single member method group. I guess that's a good thing.
The reasoning is that if the type ever expands there should be no possibility of failure. i.e., if a method foo(string) is added to the type, it should never matter to existing code - as long as the contents of existing methods don't change. For that reason, even when there is only one method foo, a reference to foo (known as a method group) cannot be cast to a non-type-specific delegate, such as `Action<T>` but only to a type-specific delegate such as `Action<int>`.
C# 3.0 generic type inference - passing a delegate as a function parameter
[ "", "c#", "generics", "delegates", "c#-3.0", "type-inference", "" ]
I like the Stack Overflow comment UI a great deal and I'm looking into implementing the same thing on my own website. I looked at the code and it looks like the main tool here is [WMD](http://wmd-editor.com/), with the JQuery [TextArea Resizer](http://plugins.jquery.com/project/TextAreaResizer) playing a supporting role. WMD converts Markdown into HTML on the client side. That's pretty nice because it helps out with previewing but I run into a challenge when sending that to the server. If there's a validation error (say the user entered an invalid e-mail address on some other part of the comment form, or he didn't enter his name maybe), then the server responds by redisplaying the form with an error message and the form fields prepopulated. Only now the comment text is HTML, not Markdown, because the server never even saw the Markdown. But I would like it to be Markdown since that's what the user was entering. Any ideas here? I've considered various ideas: * Do a server-side [HTML-to-Markdown transformation](https://stackoverflow.com/questions/59557/html-to-markdown-with-java). Not that excited about this idea. Seems hokey to transform from Markdown to HTML back to Markdown again, and as a user I always find it irritating when the software reformats my text/code. * Client-side validation (to augment the server-side validation, which I would of course retain). Seems like a reasonable direction though currently I'm using reCAPTCHA on my comment forms, which means that I need to post at least the reCAPTCHA part to a server. * Lose WMD and use [MarkdownJ](http://code.google.com/p/markdownj/) to transform the Markdown to HTML on the server. I'd need to look for some other mechanism for accomplishing the preview function, which I want to keep. Ideally there'd be some way to get at the Markdown version of the text and submit that to the server in addition to the HTML, but I'm not enough of a JavaScript guy to know whether that's a real possibility. 
Any suggestions appreciated.
See this question: [Convert HTML back to Markdown for editing in wmd](https://stackoverflow.com/questions/235224/convert-html-back-to-markdown-for-editing-in-wmd) (yay for the "Related" box on the right-hand nav!).
I would send the data as markdown and then let the server convert it to html when the validations have passed. WMD has an option to specify the format of data it will send to the server. Just add ``` wmd_options = { //Markdown or HTML output: "Markdown" }; ``` Before the call to wmd
Ideas on implementing Stack Overflow-style comments
[ "", "javascript", "comments", "markdown", "wmd", "" ]
Is it possible to generate a list of all source members within an iSeries source file using SQL? Might be similar to getting table definitions from SYSTABLES and SYSCOLUMNS, but I'm unable to find anything so far.
Sadly, SQL doesn't know anything about members, so all the source-file info you could get from qsys2.syscolumns is that the files consist of three columns. You want the member info, so I suggest using Qshell (STRQSH) together with a query against qsys2.systables, since source files are specially marked there.

```
select table_schema , table_name from qsys2.systables where File_type = 'S' 
```

I whacked together a Qshell one-liner for copy-and-paste purposes:

```
db2 -S "select '/QSYS.LIB/' concat table_schema concat '.LIB/' concat table_name concat '.FILE' from qsys2.systables where File_type = 'S'" | grep '/' | xargs -n1 find >/home/myuser/myfile 
```

It writes every member it finds to the IFS file /home/myuser/myfile. You could also specify a source file member; feel free to modify it to your needs. PS: it throws errors for source files sitting directly in /QSYS.LIB, but I think you don't want those anyway. Take care! :)
More tables and views have been added to the system catalog since the other answers were presented. Now, you can get the list of members (a.k.a. "partitions" in SQL parlance) for a given file (a.k.a. table) like this: ``` SELECT TABLE_PARTITION FROM SYSPARTITIONSTAT WHERE TABLE_NAME = myfile AND TABLE_SCHEMA = mylib ``` You can also get other information from `SYSPARTITIONSTAT` such as the number of rows in each member, and timestamps for the last change, save, restore, or use.
List of source members in a file with SQL
[ "", "sql", "ibm-midrange", "" ]
I grabbed a database of the zip codes and their longitudes/latitudes, etc from [this page](http://www.populardata.com/downloads.html). It has got the following fields: > ZIP, LATITUDE, LONGITUDE, CITY, STATE, COUNTY, ZIP\_CLASS The data was in a text file but I inserted it into a MySQL table. My question now is, how can I utilise the fields above to calculate the distance between two zip codes that a user can enter on the website? Working code in PHP will be appreciated
You can also try hitting a web service to calc the distance. Let someone else do the heavy lifting. <https://www.zipcodeapi.com/API#distance>
This is mike's answer with some annotations for the **magic numbers**. It seemed to work fine for me for [some test data](http://www.ilc-usa.com/library/pdfs/tb1401.pdf): ``` function calc_distance($point1, $point2) { $radius = 3958; // Earth's radius (miles) $deg_per_rad = 57.29578; // Number of degrees/radian (for conversion) $distance = ($radius * pi() * sqrt( ($point1['lat'] - $point2['lat']) * ($point1['lat'] - $point2['lat']) + cos($point1['lat'] / $deg_per_rad) // Convert these to * cos($point2['lat'] / $deg_per_rad) // radians for cos() * ($point1['long'] - $point2['long']) * ($point1['long'] - $point2['long']) ) / 180); return $distance; // Returned using the units used for $radius. } ```
Calculating distance between zip codes in PHP
[ "", "php", "mysql", "algorithm", "math", "distance", "" ]
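A note on accuracy for the answer above: its flat-earth approximation degrades for points that are far apart, while the haversine formula stays accurate over any separation. The arithmetic is language-agnostic; it is sketched here in Python (the function name and radius constant are illustrative, mirroring the 3958-mile radius used above) and translates line-for-line to PHP:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958  # same Earth radius as the snippet above

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# One degree of longitude at the equator is roughly 69 miles
print(round(haversine_miles(0.0, 0.0, 0.0, 1.0), 1))
```

For a zip-code site, you would look up each zip's latitude/longitude in the MySQL table and pass the two pairs to this function.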
[This is for PC/Visual C++ specifically (although any other answers would be quite illuminating :))] How can you tell if a pointer comes from an object in the stack? For example: ``` int g_n = 0; void F() { int *pA = &s_n; ASSERT_IS_POINTER_ON_STACK(pA); int i = 0; int *pB = &i; ASSERT_IS_POINTER_ON_STACK(pB); } ``` so only the second assert *(pB)* should trip. I'm thinking using some inline assembly to figure out if it's within the SS segment register or something like that. Does anybody know if there's any built in functions for this, or a simple way to do this? Thanks! RC
Technically speaking, in portable C you can't know. A stack for arguments is a hardware detail that is honored on many but not all compilers. Some compilers will use registers for arguments when they can (ie, fastcall). If you are working specifically on Windows NT, you want to grab the Thread Environment Block (TEB) by calling NtCurrentTeb(). [Joe Duffy's blog](http://www.bluebytesoftware.com/blog/PermaLink,guid,eb98baaf-0837-498d-a1e7-e4e16788f912.aspx) has information on this and from it you can get the stack range. You check for pointer in range and you should be good to go.
Whatever you do, it'll be extremely platform-specific and non-portable. Assuming you're ok with that, read on. If a pointer points somewhere in the stack, it will lie between the current stack pointer `%esp` and the top of the stack. One way to get the top of the stack is to read it in at the beginning of `main()`. However, this has a few problems: - The top of the stack is actually slightly higher, since the C runtime initializes the stack before entering `main()` - In C++, global objects' constructors are called before `main()` - If your application is multithreaded, each thread has its own separate stack. In that case, you'd need a thread-local variable describing the base of the stack One way to get the current stack pointer is using inline assembly (MSVC syntax — note the destination operand comes first): ``` uint32_t GetESP(void) { uint32_t ret; __asm { mov ret, esp } return ret; } ``` Beware of inlining and optimization! The optimizer might break this code.
How to find out if a pointer is on the stack on PC/Visual C++
[ "", "c++", "c", "visual-c++", "pointers", "low-level", "" ]
I have a Stored Procedure called spGetOrders which accepts a few parameters: @startdate and @enddate. This queries an "Orders" table. One of the columns in the table is called "ClosedDate". This column will hold NULL if an order hasn't been closed or a date value if it has. I'd like to add a @Closed parameter which will take a bit value. In a simple world, I'd be able to do.. ``` select * from orders o where o.orderdate between @startdate AND @enddate and (if @Closed = 1 then o.ClosedDate IS NULL else o.ClosedDate IS NOT NULL) ``` Obviously, that's not going to work.. I'm also looking at dynamic sql which is my last resort, but starting to look like the answer.. Please help..
Try this: ``` select * from orders o where o.orderdate between @startdate AND @enddate and ((@Closed = 1 And o.ClosedDate IS NULL) Or (@Closed = 0 And o.ClosedDate IS NOT NULL)) ``` Be very careful about mixing ANDs and ORs in the where clause. When doing this, the parentheses to control the order of evaluation are VERY important.
SQL Statement: ``` SELECT * FROM orders WHERE orderdate BETWEEN @startdate AND @enddate AND (@Closed = 1 OR ClosedDate IS NOT NULL) ```
SQL Conditional Where
[ "", "sql", "conditional-statements", "where-clause", "" ]
I'm building a custom property grid that displays the properties of items in a collection. What I want to do is show only the properties in the grid that are common amongst each item. I am assuming the best way to do this would be to find the the common base class of each type in the collection and display it's properties. Is there any easier way? Can you give me a code example of the best approach to do this?
You can do this with a method that keeps checking for common base classes. I wrote up this, quickly, using the BaseClass feature of the Type class. You don't have to use an array, a list or other IEnumerable can work with small modifications to this. I tested it with: ``` static void Main(string[] args) { Console.WriteLine("Common Types: " + GetCommonBaseClass(new Type[] {typeof(OleDbCommand), typeof(OdbcCommand), typeof(SqlCommand)}).ToString()); } ``` And got the right answer of DbCommand. Here is my code. ``` static Type GetCommonBaseClass(Type[] types) { if (types.Length == 0) return (typeof(object)); else if (types.Length == 1) return (types[0]); // Copy the parameter so we can substitute base class types in the array without messing up the caller Type[] temp = new Type[types.Length]; for (int i = 0; i < types.Length; i++) { temp[i] = types[i]; } bool checkPass = false; Type tested = null; while (!checkPass) { tested = temp[0]; checkPass = true; for (int i = 1; i < temp.Length; i++) { if (tested.Equals(temp[i])) continue; else { // If the tested common basetype (current) is the indexed type's base type // then we can continue with the test by making the indexed type to be its base type if (tested.Equals(temp[i].BaseType)) { temp[i] = temp[i].BaseType; continue; } // If the tested type is the indexed type's base type, then we need to change all indexed types // before the current type (which are all identical) to be that base type and restart this loop else if (tested.BaseType.Equals(temp[i])) { for (int j = 0; j <= i - 1; j++) { temp[j] = temp[j].BaseType; } checkPass = false; break; } // The indexed type and the tested type are not related // So make everything from index 0 up to and including the current indexed type to be their base type // because the common base type must be further back else { for (int j = 0; j <= i; j++) { temp[j] = temp[j].BaseType; } checkPass = false; break; } } } // If execution has reached here and checkPass is true, we have found 
our common base type, // if checkPass is false, the process starts over with the modified types } // There's always at least object return tested; } ```
The code posted to get the most-specific common base for a set of types has some issues. In particular, it breaks when I pass typeof(object) as one of the types. I believe the following is simpler and (better) correct. ``` public static Type GetCommonBaseClass (params Type[] types) { if (types.Length == 0) return typeof(object); Type ret = types[0]; for (int i = 1; i < types.Length; ++i) { if (types[i].IsAssignableFrom(ret)) ret = types[i]; else { // This will always terminate when ret == typeof(object) while (!ret.IsAssignableFrom(types[i])) ret = ret.BaseType; } } return ret; } ``` I also tested with: ``` Type t = GetCommonBaseClass(typeof(OleDbCommand), typeof(OdbcCommand), typeof(SqlCommand)); ``` And got `typeof(DbCommand)`. And with: ``` Type t = GetCommonBaseClass(typeof(OleDbCommand), typeof(OdbcCommand), typeof(SqlCommand), typeof(Component)); ``` And got `typeof(Component)`. And with: ``` Type t = GetCommonBaseClass(typeof(OleDbCommand), typeof(OdbcCommand), typeof(SqlCommand), typeof(Component), typeof(Component).BaseType); ``` And got `typeof(MarshalByRefObject)`. And with ``` Type t = GetCommonBaseClass(typeof(OleDbCommand), typeof(OdbcCommand), typeof(SqlCommand), typeof(Component), typeof(Component).BaseType, typeof(int)); ``` And got `typeof(object)`.
Easiest way to get a common base class from a collection of types
[ "", "c#", ".net", "linq", "reflection", "" ]
I am looking to use Java to get the MD5 checksum of a file. I was really surprised but I haven't been able to find anything that shows how to get the MD5 checksum of a file. How is it done?
There's an input stream decorator, `java.security.DigestInputStream`, so that you can compute the digest while using the input stream as you normally would, instead of having to make an extra pass over the data. ``` MessageDigest md = MessageDigest.getInstance("MD5"); try (InputStream is = Files.newInputStream(Paths.get("file.txt")); DigestInputStream dis = new DigestInputStream(is, md)) { /* Read decorated stream (dis) to EOF as normal... */ } byte[] digest = md.digest(); ```
Use [DigestUtils](http://commons.apache.org/proper/commons-codec/apidocs/org/apache/commons/codec/digest/DigestUtils.html) from [Apache Commons Codec](http://commons.apache.org/codec/) library: ``` try (InputStream is = Files.newInputStream(Paths.get("file.zip"))) { String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(is); } ```
Getting a File's MD5 Checksum in Java
[ "", "java", "md5", "checksum", "" ]
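For completeness, a self-contained sketch of the simplest case — hashing a whole file in memory with `MessageDigest` and hex-encoding the result. The class and method names are made up for the example; for large files, prefer the `DigestInputStream` approach shown above so the file is streamed rather than loaded at once:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class Md5File {
    /** MD5 checksum of a file as a lowercase hex string. */
    static String md5Hex(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(file));
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("md5demo", ".txt");
        Files.write(tmp, "abc".getBytes("UTF-8"));
        // MD5("abc") is the classic RFC 1321 test vector
        System.out.println(md5Hex(tmp)); // prints 900150983cd24fb0d6963f7d28e17f72
    }
}
```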
When I write a class I always expose private fields through a public property like this: ``` private int _MyField; public int MyField { get { return _MyField; } } ``` When is it ok to just expose a public field like this: ``` public int MyField; ``` I am creating a structure called Result and my intention is to do this: ``` public Result(bool result, string message) { Result = result; Message = message; } public readonly bool Result; public readonly string Message; ``` What is the best practice? Is it ever ok to do this?
I only ever expose public fields when they're (static) constants - and even then I'd usually use a property. By "constant" I mean any readonly, immutable value, not just one which may be expressed as a "const" in C#. Even readonly *instance* variables (like Result and Message) should be encapsulated in a property in my view. See [this article](http://csharpindepth.com/Articles/Chapter8/PropertiesMatter.aspx) for more details.
> What is the best practice for using public fields? “Don’t.” See also: [Should protected attributes always be banned?](https://stackoverflow.com/questions/76194/should-protected-attributes-always-be-banned#76286) which concerns protected fields but what is said there is even more true for public ones.
What is the best practice for using public fields?
[ "", "c#", ".net", "properties", "field", "" ]
I'm looking for a fast way to turn an associative array in to a string. Typical structure would be like a URL query string but with customizable separators so I can use '`&amp;`' for xhtml links or '`&`' otherwise. My first inclination is to use `foreach` but since my method could be called many times in one request I fear it might be too slow. ``` <?php $Amp = $IsXhtml ? '&amp;' : '&'; $Parameters = array('Action' => 'ShowList', 'Page' => '2'); $QueryString = ''; foreach ($Parameters as $Key => $Value) $QueryString .= $Amp . $Key . '=' . $Value; ``` Is there a faster way?
You can use [`http_build_query()`](http://www.php.net/http_build_query) to do that. > Generates a URL-encoded query string from the associative (or indexed) array provided.
If you're not concerned about the *exact* formatting however you do want something simple but without the line breaks of `print_r` you can also use `json_encode($value)` for a quick and simple formatted output. (*note it works well on other data types too*) ``` $str = json_encode($arr); //output... [{"id":"123","name":"Ice"},{"id":"234","name":"Cake"},{"id":"345","name":"Pie"}] ```
Fastest way to implode an associative array with keys
[ "", "php", "arrays", "query-string", "associative-array", "implode", "" ]
In [another question](https://stackoverflow.com/questions/377716/javascript-automatic-gettersetters-john-resig-book), a user pointed out that the `new` keyword was dangerous to use and proposed a solution to object creation that did not use `new`. I didn't believe that was true, mostly because I've used [Prototype](https://en.wikipedia.org/wiki/Prototype_JavaScript_Framework), [Script.aculo.us](https://en.wikipedia.org/wiki/Prototype_JavaScript_Framework) and other excellent JavaScript libraries, and everyone of them used the `new` keyword. In spite of that, yesterday I was watching Douglas Crockford's talk at YUI theater and he said the exactly same thing, that he didn't use the `new` keyword anymore in his code ([Crockford on JavaScript - Act III: Function the Ultimate - 50:23 minutes](http://www.youtube.com/watch?v=ya4UHuXNygM&t=50m23s)). Is it 'bad' to use the `new` keyword? What are the advantages and disadvantages of using it?
Crockford has done a lot to popularize good JavaScript techniques. His opinionated stance on key elements of the language have sparked many useful discussions. That said, there are far too many people that take each proclamation of "bad" or "harmful" as gospel, refusing to look beyond one man's opinion. It can be a bit frustrating at times. Use of the functionality provided by the `new` keyword has several advantages over building each object from scratch: 1. [Prototype inheritance](https://stackoverflow.com/questions/186244/what-does-it-mean-that-javascript-is-a-prototype-based-language). While often looked at with a mix of suspicion and derision by those accustomed to class-based OO languages, JavaScript's native inheritance technique is a simple and surprisingly effective means of code re-use. And the `new` keyword is the canonical (and only available cross-platform) means of using it. 2. Performance. This is a side-effect of #1: if I want to add 10 methods to every object I create, I *could* just write a creation function that manually assigns each method to each new object... Or, I could assign them to the creation function's `prototype` and use `new` to stamp out new objects. Not only is this faster (no code needed for each and every method on the prototype), it avoids ballooning each object with separate properties for each method. On slower machines (or especially, slower JS interpreters) when many objects are being created this can mean a significant savings in time and memory. And yes, `new` has one crucial disadvantage, ably described by other answers: if you forget to use it, your code will break without warning. Fortunately, that disadvantage is easily mitigated - simply add a bit of code to the function itself: ``` function foo() { // if user accidentally omits the new keyword, this will // silently correct the problem... if ( !(this instanceof foo) ) return new foo(); // constructor logic follows... 
} ``` Now you can have the advantages of `new` without having to worry about problems caused by accidentally misuse. John Resig goes into detail on this technique in his [Simple "Class" Instantiation](http://ejohn.org/blog/simple-class-instantiation/) post, as well as including a means of building this behavior into your "classes" by default. Definitely worth a read... as is his upcoming book, [Secrets of the JavaScript Ninja](http://www.manning.com/resig/), which finds hidden gold in this and many other "harmful" features of the JavaScript language (the **chapter** on `with` is especially enlightening for those of us who initially dismissed this much-maligned feature as a gimmick). ## A general-purpose sanity check You could even add an assertion to the check if the thought of broken code silently working bothers you. Or, as [some](https://stackoverflow.com/users/36866/some) commented, use the check to introduce a runtime exception: ``` if ( !(this instanceof arguments.callee) ) throw new Error("Constructor called as a function"); ``` Note that this snippet is able to avoid hard-coding the constructor function name, as unlike the previous example it has no need to actually instantiate the object - therefore, it can be copied into each target function without modification. ### ES5 taketh away As [Sean McMillan](https://stackoverflow.com/questions/383402/is-javascripts-new-keyword-considered-harmful/383503?noredirect=1#comment7767833_383503), [stephenbez](https://stackoverflow.com/questions/383402/is-javascripts-new-keyword-considered-harmful/383503?noredirect=1#comment21565658_383503) and [jrh](https://stackoverflow.com/questions/383402/is-javascripts-new-keyword-considered-harmful/383503?noredirect=1#comment94711133_383503) noted, the use of `arguments.callee` is invalid in ES5's [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode). So the above pattern will throw an error if you use it in that context. 
### ES6 and an entirely harmless `new` ES6 introduces [Classes](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes) to JavaScript - no, not in the weird Java-aping way that old-school Crockford did, but in spirit much more like the light-weight way he (and others) later adopted, taking the best parts of prototypal inheritance and baking common patterns into the language itself. ...and part of that includes a safe `new`: ``` class foo { constructor() { // constructor logic that will ONLY be hit // if properly constructed via new } } // bad invocation foo(); // throws, // Uncaught TypeError: class constructors must be invoked with 'new' ``` But what if you don't *want* to use the new sugar? What if you just want to update your perfectly fine old-style prototypal code with the sort of safety checks shown above such that they keep working in strict mode? Well, as [Nick Parsons notes](https://stackoverflow.com/questions/383402/is-javascripts-new-keyword-considered-harmful/383503?noredirect=1#comment126932432_383503), ES6 provides a handy check for that as well, in the form of [`new.target`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/new.target): ``` function foo() { if ( !(new.target) ) throw new Error("Constructor called as a function"); // constructor logic follows... } ``` So whichever approach you choose, you can - with a bit of thought and good hygiene - use `new` without harm.
I have just read some parts of [Crockford](https://en.wikipedia.org/wiki/Douglas_Crockford)'s book "[JavaScript: The Good Parts](https://en.wikipedia.org/wiki/Douglas_Crockford#Bibliography)". I get the feeling that he considers everything that ever has bitten him as harmful: About switch fall through: > I never allow switch cases to fall > through to the next case. I once found > a bug in my code caused by an > unintended fall through immediately > after having made a vigorous speech > about why fall through was sometimes > useful. (page 97, ISBN > 978-0-596-51774-8) About ++ and --: > The ++ (increment) and -- (decrement) > operators have been known to > contribute to bad code by encouraging > excessive trickiness. They are second > only to faulty architecture in > enabling viruses and other security > menaces. (page 122) About new: > If you forget to include the *new* > prefix when calling a constructor > function, then *this* will not be > bound to the new object. Sadly, *this* > will be bound to the global object, so > instead of augmenting your new object, > you will be clobbering global > variables. That is really bad. There > is no compile warning, and there is no > runtime warning. (page 49) There are more, but I hope you get the picture. My answer to your question: **No, it's not harmful.** but if you forget to use it when you should you could have some problems. If you are developing in a good environment you notice that. In the 5th edition of ECMAScript there is support for [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions_and_function_scope/Strict_mode). In strict mode, `this` is no longer bound to the global object, but to `undefined`.
Is JavaScript's "new" keyword considered harmful?
[ "", "javascript", "" ]
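The `instanceof` guard from the accepted answer is easy to verify directly; a small runnable sketch (the constructor `Foo` is invented for the demo):

```javascript
// A constructor that silently corrects a missing `new`
function Foo(x) {
  if (!(this instanceof Foo)) return new Foo(x); // called without new? redo it
  this.x = x;
}

var a = new Foo(1);
var b = Foo(2); // `new` forgotten -- without the guard, `this` would be the global object

console.log(a instanceof Foo, b instanceof Foo); // true true
console.log(a.x + b.x);                          // 3
```

Swapping the guard for a `throw` (or, in modern engines, a `new.target` check) turns the same pattern into a loud failure instead of a silent fix, as discussed above.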
We use iText to generate PDFs from Java (based partly on recommendations on this site). However, embedding a copy of our logo in an image format like GIF results in it looking a bit strange as people zoom in and out. Ideally we'd like to embed the image in a vector format, such as EPS, SVG or just a PDF template. The website claims that EPS support has been dropped, that embedding a PDF or PS within a PDF can result in errors, and it doesn't even mention SVG. Our code uses the Graphics2D API rather than iText directly, but we'd be willing to break out of AWT mode and use iText itself if it achieved the result. How can this be done?
According to the [documentation](http://itextdocs.lowagie.com/tutorial/objects/images/index.php) iText supports the following image formats: JPEG, GIF, PNG, TIFF, BMP, WMF and EPS. I don't know if this might be of any help but I have successfully used [iTextSharp](http://itextsharp.sourceforge.net/) to embed vector [WMF](http://en.wikipedia.org/wiki/Windows_Metafile) image in a pdf file: C#: ``` using System; using System.IO; using iTextSharp.text; using iTextSharp.text.pdf; public class Program { public static void Main() { Document document = new Document(); using (Stream outputPdfStream = new FileStream("output.pdf", FileMode.Create, FileAccess.Write, FileShare.None)) using (Stream imageStream = new FileStream("test.wmf", FileMode.Open, FileAccess.Read, FileShare.Read)) { PdfWriter.GetInstance(document, outputPdfStream); Image wmf = Image.GetInstance(imageStream); document.Open(); document.Add(wmf); document.Close(); } } } ```
I found a couple of examples by the iText author that use the Graphics2D API and the Apache Batik library to draw the SVG in a PDF. <http://itextpdf.com/examples/iia.php?id=269> <http://itextpdf.com/examples/iia.php?id=263> For my purposes, I needed to take a string of SVG and draw that in a PDF at a certain size and location while maintaining the vector nature of the image (no rasterization). I wanted to bypass the SVG file that seems prevalent in the SAXSVGDocumentFactory.createSVGDocument() functions. I found the following post helpful for using a SVG text string instead of a flat file. <http://batik.2283329.n4.nabble.com/Parse-SVG-from-String-td3539080.html> > You have to create a StringReader from your String and pass that to the SAXSVGDocumentFactory#createDocument(String, Reader) method. The URI that you pass as the first parameter as a String will be the base document URI of the SVG document. This should only be important if your SVG references any external files. > > Best regards, > > Daniel Java Source derived from the iText examples: ``` // SVG as a text string. String svg = "<svg>...</svg>"; // Create the PDF document. // rootPath is the present working directory path. Document document = new Document(); PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream(new File(rootPath + "svg.pdf"))); document.open(); // Add paragraphs to the document... document.add(new Paragraph("Paragraph 1")); document.add(new Paragraph(" ")); // Boilerplate for drawing the SVG to the PDF. String parser = XMLResourceDescriptor.getXMLParserClassName(); SAXSVGDocumentFactory factory = new SAXSVGDocumentFactory(parser); UserAgent userAgent = new UserAgentAdapter(); DocumentLoader loader = new DocumentLoader(userAgent); BridgeContext ctx = new BridgeContext(userAgent, loader); ctx.setDynamicState(BridgeContext.DYNAMIC); GVTBuilder builder = new GVTBuilder(); PdfContentByte cb = writer.getDirectContent(); // Parse the SVG and draw it to the PDF. 
Graphics2D g2d = new PdfGraphics2D(cb, 725, 400); SVGDocument chart = factory.createSVGDocument(rootPath, new StringReader(svg)); GraphicsNode chartGfx = builder.build(ctx, chart); chartGfx.paint(g2d); g2d.dispose(); // Add paragraphs to the document... document.add(new Paragraph("Paragraph 2")); document.add(new Paragraph(" ")); document.close(); ``` Note that this will draw a SVG to the PDF you are working on. The SVG appears as a floating layer above text. I'm still working on moving/scaling it and having it rest inline with text, but hopefully that is outside the immediate scope of the question. Hope this was able to help. Cheers EDIT: I was able to implement my svg as an inline object using the following. The commented lines are for adding a quick border to check positioning. ``` SAXSVGDocumentFactory factory = new SAXSVGDocumentFactory(XMLResourceDescriptor.getXMLParserClassName()); UserAgent userAgent = new UserAgentAdapter(); DocumentLoader loader = new DocumentLoader(userAgent); BridgeContext ctx = new BridgeContext(userAgent, loader); ctx.setDynamicState(BridgeContext.DYNAMIC); GVTBuilder builder = new GVTBuilder(); SVGDocument svgDoc = factory.createSVGDocument(rootPath, new StringReader(svg)); PdfTemplate svgTempl = PdfTemplate.createTemplate(writer, Float.parseFloat(svgDoc.getDocumentElement().getAttribute("width")), Float.parseFloat(svgDoc.getDocumentElement().getAttribute("height"))); Graphics2D g2d = new PdfGraphics2D(svgTempl, svgTempl.getWidth(), svgTempl.getHeight()); GraphicsNode chartGfx = builder.build(ctx, svgDoc); chartGfx.paint(g2d); g2d.dispose(); Image svgImg = new ImgTemplate(svgTempl); svgImg.setAlignment(Image.ALIGN_CENTER); //svgImg.setBorder(Image.BOX); //svgImg.setBorderColor(new BaseColor(0xff, 0x00, 0x00)); //svgImg.setBorderWidth(1); document.add(svgImg); ```
Vector graphics in iText PDF
[ "", "java", "image", "pdf", "vector", "itext", "" ]
I've been inspired by [Modifying Microsoft Outlook contacts from Python](https://stackoverflow.com/questions/405724/modifying-microsoft-outlook-contacts-from-python) -- I'm looking to try scripting some of my more annoying Outlook uses with the `win32com` package. I'm a Linux user trapped in a Windows users' cubicle, so I don't know much about COM. I'm looking for information on whether COM allows for reflection via `win32com` or whether there's documentation on the Outlook 2007 COM objects. Any other pointers that you think will be helpful are welcome! I've found [Programming Outlook With Python](http://wiki.exchange4linux.org/e4lwiki/n-h.support.wiki/uploads/programming_outlook_with_python.pdf), but I'm using Outlook 2007 so I'd like some more information on how much of the Outlook 2000 information is still applicable. TIA!
To answer your question about documentation, here are some links that I regularly visit when developing Outlook macros. While the sites are primarily focused on development with MS technologies, most of the code can be pretty easily translated to Python once you understand how to use COM. * <http://msdn.microsoft.com/en-us/library/bb176619.aspx> * <http://www.outlookcode.com/> * Dealing with Outlook security: <http://www.outlookcode.com/article.aspx?ID=52> * Redemption: <http://www.dimastr.com/redemption/>
In general, older references to the object model are probably still valid given the attention Microsoft pays to backwards-compatibility. As for whether or not you will be able to use win32com in Python for Outlook, yes, you should be able to use that to make late-bound calls to the Outlook object model. Here is a page that describes how to do it with Excel: <http://oreilly.com/catalog/pythonwin32/chapter/ch12.html> A problem that you should be made aware of is the fact that Outlook has a security dialog that is thrown up when external programs try to access the object model and perform operations in Outlook. You are *not* going to be able to suppress this dialog. If you want to avoid the dialog, you are better off creating macros in VBA for Outlook that are loaded in a session, and putting buttons on a new CommandBar to execute them.
Python Outlook 2007 COM primer
[ "", "python", "com", "outlook", "outlook-2007", "" ]
So if my JPA query is like this: Select distinct p from Parent p left join fetch p.children order by p.someProperty I correctly get results back ordered by p.someProperty, and I correctly get my p.children collection eagerly fetched and populated. But I'd like to have my query be something like "order by p.someProperty, p.children.someChildProperty" so that the collection populated inside each parent object was sub-ordered by someChildProperty. This seems intuitive when I think in terms of the sql that is actually generated for these calls, but I guess less so when it tries to map back to hierarchical objects.
For preserving order, use [TreeSet](http://java.sun.com/javase/6/docs/api/java/util/TreeSet.html). As far as sorting a collection inside the parent is concerned, you can just do it in your code using a [Comparator](http://java.sun.com/javase/6/docs/api/java/util/Comparator.html), or try the following on the collection definition in your parent entity class. You can use this JPA annotation, ``` @javax.persistence.OrderBy(value = "fieldName") ``` or this Hibernate-specific one, ``` @org.hibernate.annotations.OrderBy(clause = "FIELD_NAME asc") ``` and you can also use this, ``` @org.hibernate.annotations.Sort(type = SortType.NATURAL) ``` or ``` @org.hibernate.annotations.Sort(type = SortType.COMPARATOR) ``` In the case of SortType.COMPARATOR, a comparator must be in place. The others might only work with String collections.
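The "sort it in code" fallback mentioned above is language-agnostic; a small illustrative sketch in Python (the record shapes and field names are invented for the example, not JPA entities):

```python
# Illustrative in-memory records, not JPA entities.
parents = [
    {"someProperty": 2, "children": [{"someChildProperty": 3}, {"someChildProperty": 1}]},
    {"someProperty": 1, "children": [{"someChildProperty": 2}, {"someChildProperty": 0}]},
]

# Order the parents, then sub-order each parent's child collection,
# mirroring "order by p.someProperty, child.someChildProperty".
parents.sort(key=lambda p: p["someProperty"])
for parent in parents:
    parent["children"].sort(key=lambda c: c["someChildProperty"])
```

The same two-step idea is what a `Comparator` (or the `@OrderBy` annotation) does for you on the Java side.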
If you are using Spring Boot and JPA, here's an example: ``` @OneToMany(mappedBy = "user") @OrderBy(value = "position desc ") private List<UserAddresses> userAddresses; ``` In the above code, position is the field name in the **UserAddresses** class and **desc** is the order. You can pass either **asc** or **desc**, as in a SQL ORDER BY.
Ordering return of Child objects in JPA query
[ "", "java", "hibernate", "jpa", "" ]
I think I shall reframe my question from Where should you use BlockingQueue implementations instead of simple Queue implementations? to **What are the advantages/disadvantages of BlockingQueue over Queue implementations, taking into consideration aspects like speed, concurrency, or other properties that vary, e.g. time to access the last element?** I have used both kinds of queues. I know that a blocking queue is normally used in concurrent applications. I was writing a simple ByteBuffer pool where I needed some placeholder for ByteBuffer objects. I needed the fastest, thread-safe queue implementation. There are even List implementations like ArrayList which have constant access time for elements. Can anyone discuss the pros and cons of BlockingQueue vs Queue vs List implementations? Currently I have used an ArrayList to hold these ByteBuffer objects. **Which data structure shall I use to hold these objects?**
A limited capacity `BlockingQueue` is also helpful if you want to throttle some sort of request. With an unbounded queue, a producer can get far ahead of the consumers. The tasks will eventually be performed (unless there are so many that they cause an `OutOfMemoryError`), but the producer may long since have given up, so the effort is wasted. In situations like these, it may be better to signal a would-be producer that the queue is full, and to give up quickly with a failure. For example, the producer might be a web request, with a user that doesn't want to wait too long, and even though it won't consume many CPU cycles while waiting, it is using up limited resources like a socket and some memory. Giving up will give the tasks that have been queued already a better chance to finish in a timely manner. --- Regarding the amended question, which I'm interpreting as, "What is a good collection for holding objects in a pool?" An unbounded [`LinkedBlockingQueue`](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/LinkedBlockingQueue.html) is a good choice for many pools. However, depending on your pool management strategy, a [`ConcurrentLinkedQueue`](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ConcurrentLinkedQueue.html) may work too. In a pooling application, a blocking "put" is not appropriate. Controlling the maximum size of the queue is the job of the pool manager—it decides when to create or destroy resources for the pool. Clients of the pool borrow and return resources from the pool. Adding a new object, or returning a previously borrowed object to the pool should be fast, non-blocking operations. So, a bounded capacity queue is not a good choice for pools. On the other hand, when retrieving an object from the pool, most applications want to wait until a resource is available. A "take" operation that blocks, at least temporarily, is much more efficient than a "busy wait"—repeatedly polling until a resource is available. 
The `LinkedBlockingQueue` is a good choice in this case. A borrower can block indefinitely with [`take`](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/LinkedBlockingQueue.html#take()), or limit the time it is willing to block with [`poll`](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/LinkedBlockingQueue.html#poll(long,%20java.util.concurrent.TimeUnit)). A less common case is when a client is not willing to block at all, but has the ability to create a resource for itself if the pool is empty. In that case, a `ConcurrentLinkedQueue` is a good choice. This is sort of a gray area where it would be nice to share a resource (e.g., memory) as much as possible, but speed is even more important. In the worst case, this degenerates to every thread having its own instance of the resource; then it would have been more efficient not to bother trying to share among threads. Both of these collections give good performance and ease of use in a concurrent application. For non-concurrent applications, an `ArrayList` is hard to beat. Even for collections that grow dynamically, the per-element overhead of a `LinkedList` allows an `ArrayList` with some empty slots to stay competitive memory-wise.
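The bounded-queue throttling idea described above is not Java-specific; a minimal sketch using Python's standard-library `queue.Queue` (illustrative only, task names invented):

```python
import queue

# A bounded queue lets a producer fail fast instead of running
# far ahead of the consumers.
q = queue.Queue(maxsize=2)

q.put_nowait("task-1")
q.put_nowait("task-2")

try:
    q.put_nowait("task-3")  # queue is full: signal the producer immediately
    rejected = False
except queue.Full:
    rejected = True

# A consumer can bound its wait, analogous to poll(timeout, unit).
item = q.get(timeout=1)
```

The producer learns immediately that the queue is full instead of piling up work, which is exactly the throttling behavior a bounded `BlockingQueue` gives you.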
You would see `BlockingQueue` in multi-threaded situations. For example, you need to pass in a `BlockingQueue` as a parameter to create a [`ThreadPoolExecutor`](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html) if you want to create one using its constructor. Depending on the type of queue you pass in, the executor could act differently.
Where should you use BlockingQueue Implementations instead of Simple Queue Implementations?
[ "", "java", "queue", "" ]
Here's a PHP example of mine. Can anyone find a shorter/easier way to do this? ``` <? foreach($posts as $post){?> <div class="<?=($c++%2==1)?'odd':NULL?>"> <?=$post?> </div> <? }?> <style> .odd{background-color:red;} </style> ``` Examples in other languages would be fun to see as well.
Fundamentally - no. That's about as easy as it gets. You might rewrite it a bit shorter/cleaner, but the idea will be the same. This is how I would write it: ``` $c = true; // Let's not forget to initialize our variables, shall we? foreach($posts as $post) echo '<div'.(($c = !$c)?' class="odd"':'').">$post</div>"; ```
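Since the question invites examples in other languages, here is the same alternation in Python using `enumerate` and the modulo operator (the post list is made up; the class name matches the PHP example):

```python
# Hypothetical post list; in the PHP version $posts comes from elsewhere.
posts = ["first", "second", "third"]

rows = []
for i, post in enumerate(posts):
    # Same test as the PHP ($c++ % 2 == 1): odd indices get the class.
    css_class = ' class="odd"' if i % 2 == 1 else ""
    rows.append("<div%s>%s</div>" % (css_class, post))

html = "\n".join(rows)
```

The idea is identical in any language: keep a counter (or use the loop index) and emit the class only when it is odd.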
If you'd like to have less in-line PHP, a great way of doing it is via JavaScript. Using jQuery, it's simply: ``` <script type="text/javascript"> $('div:odd').css('background-color', 'red'); </script> ```
Easiest way to alternate row colors in PHP/HTML?
[ "", "php", "html", "css", "colors", "" ]
**Problem** I've got a number of Dojo components on a page. When the user tries to tab from an input-like component to a grid-like component, I get a JavaScript "Can't move focus to control" error. The user base uses IE6. **Solution** The first element in the DojoX Grid layout cannot be hidden. If it is hidden, you get a JavaScript "Can't move focus to control" error. To fix this, I added a row # that displays. See below. > ``` > var gridLayout = [ > new dojox.grid.cells.RowIndex({ name: "row #", > width: 2, > styles: "text-align: right;" > }), > { > field: "ele_id", > name: "Element ID", > styles: "text-align:right;", > width:5, > hidden:"true" > }, > { > field: "ele_nm", > name: "Element Name", > styles: "text-align:left;", > width:8 > } > ]; > ```
**Solution** The first element in the DojoX Grid layout cannot be hidden. If it is hidden, you get a JavaScript "Can't move focus to control" error. To fix this, I added a row # that displays. See below. > ``` > var gridLayout = [ > new dojox.grid.cells.RowIndex({ name: "row #", > width: 2, > styles: "text-align: right;" > }), > { > field: "ele_id", > name: "Element ID", > styles: "text-align:right;", > width:5, > hidden:"true" > }, > { > field: "ele_nm", > name: "Element Name", > styles: "text-align:left;", > width:8 > } > ]; > ```
Preventing tabbing may disrupt partially sighted users who are browsing your site using a screenreader.
Why does tabbing to a DojoX grid result in a JavaScript "Can't move focus to control" error?
[ "", "javascript", "grid", "dojo", "" ]
We need to get all the instances of objects that implement a given interface - can we do that, and if so how?
I don't believe there is a way... You would have to either be able to walk the heap and examine every object there, or walk the stack of every active thread in the application process space, examining every stack reference variable on every thread... The other way (which I am guessing you can't do) is to intercept all object creation activities (using a container approach) and keep a list of all objects in your application...
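For contrast, runtimes that expose their garbage collector really can do the heap walk this answer describes. A Python sketch using the standard `gc` module (purely illustrative; the classes are invented, and .NET offers no equivalent public API):

```python
import gc

class Speaker:  # stand-in for "a given interface"
    def speak(self):
        return "hi"

class Dog(Speaker):
    pass

instances = [Dog(), Speaker(), Dog()]

# Ask the collector for every object it tracks and filter by type.
found = [obj for obj in gc.get_objects() if isinstance(obj, Speaker)]
```

This works only because CPython keeps a list of tracked objects; it is a debugging facility rather than something to rely on in production code.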
If you need instances (samples) of all types implementing a particular interface, you can go through all types, check for the interface, and create an instance if a match is found. Here's some pseudocode that looks remarkably like C# and may even compile and return what you need. If nothing else, it will point you in the correct direction: ``` public static IEnumerable<T> GetInstancesOfImplementingTypes<T>() { AppDomain app = AppDomain.CurrentDomain; Assembly[] ass = app.GetAssemblies(); Type[] types; Type targetType = typeof(T); foreach (Assembly a in ass) { types = a.GetTypes(); foreach (Type t in types) { if (t.IsInterface) continue; if (t.IsAbstract) continue; foreach (Type iface in t.GetInterfaces()) { if (!iface.Equals(targetType)) continue; yield return (T) Activator.CreateInstance(t); break; } } } } ``` Now, if you're talking about walking the heap and returning previously instantiated instances of all objects that implement a particular type, good luck with that, as this information is not stored by the .Net runtime (it can be computed by debuggers/profilers by examining the heap/stack, though). Depending on the reason why you think you need to do that, there are probably better ways of going about it.
How do I get all instances of all loaded types that implement a given interface?
[ "", "c#", "reflection", "" ]
I know that having diamond inheritance is considered bad practice. However, I have 2 cases in which I feel that diamond inheritance could fit very nicely. I want to ask, would you recommend me to use diamond inheritance in these cases, or is there another design that could be better. **Case 1:** I want to create classes that represent different kinds of "Actions" in my system. The actions are classified by several parameters: * The action can be "Read" or "Write". * The action can be with delay or without delay (It is not just 1 parameter. It changes the behavior significantly). * The action's "flow type" can be FlowA or FlowB. I intend to have the following design: ``` // abstract classes class Action { // methods relevant for all actions }; class ActionRead : public virtual Action { // methods related to reading }; class ActionWrite : public virtual Action { // methods related to writing }; class ActionWithDelay : public virtual Action { // methods related to delay definition and handling }; class ActionNoDelay : public virtual Action {/*...*/}; class ActionFlowA : public virtual Action {/*...*/}; class ActionFlowB : public virtual Action {/*...*/}; // concrete classes class ActionFlowAReadWithDelay : public ActionFlowA, public ActionRead, public ActionWithDelay { // implementation of the full flow of a read command with delay that does Flow A. }; class ActionFlowBReadWithDelay : public ActionFlowB, public ActionRead, public ActionWithDelay {/*...*/}; //... ``` Of course, I will obey that no 2 actions (inheriting from Action class) will implement the same method. **Case 2:** I implement the composite design pattern for a "Command" in my system. A command can be read, written, deleted, etc. I also want to have a sequence of commands, which can also be read, written, deleted, etc. A sequence of commands can contain other sequences of commands. 
So I have the following design: ``` class CommandAbstraction { CommandAbstraction(){}; virtual ~CommandAbstraction()=0; virtual void Read()=0; virtual void Write()=0; virtual void Restore()=0; bool IsWritten() {/*implemented*/}; // and other implemented functions }; class OneCommand : public virtual CommandAbstraction { // implement Read, Write, Restore }; class CompositeCommand : public virtual CommandAbstraction { // implement Read, Write, Restore }; ``` In addition, I have a special kind of commands, "Modern" commands. Both one command and composite command can be modern. Being "Modern" adds a certain list of properties to one command and composite command (mostly same properties for both of them). I want to be able to hold a pointer to CommandAbstraction, and initialize it (via new) according to the needed type of command. So I want to do the following design (in addition to the above): ``` class ModernCommand : public virtual CommandAbstraction { virtual ~ModernCommand()=0; void SetModernPropertyA(){/*...*/} void ExecModernSomething(){/*...*/} virtual void ModernSomethingElse()=0; }; class OneModernCommand : public OneCommand, public ModernCommand { void ModernSomethingElse() {/*...*/}; // ... few methods specific for OneModernCommand }; class CompositeModernCommand : public CompositeCommand, public ModernCommand { void ModernSomethingElse() {/*...*/}; // ... few methods specific for CompositeModernCommand }; ``` Again, I will make sure that no 2 classes inheriting from CommandAbstraction class will implement the same method. Thank you.
Inheritance is the second strongest (more coupling) relationship in C++, preceded only by friendship. If you can redesign into using only composition your code will be more loosely coupled. If you cannot, then you should consider whether all your classes should really inherit from the base. Is it due to implementation or just an interface? Will you want to use any element of the hierarchy as a base element? Or is it just the leaves of your hierarchy that are real Actions? If only leaves are actions and you are adding behavior you can consider policy-based design for this type of composition of behaviors. The idea is that different (orthogonal) behaviors can be defined in small class sets and then bundled together to provide the real complete behavior. In the example I will consider just one policy that defines whether the action is to be executed now or in the future, and the command to execute. I provide an abstract class so that different instantiations of the template can be stored (through pointers) in a container or passed to functions as arguments and get called polymorphically. ``` class ActionDelayPolicy_NoWait; class ActionBase // Only needed if you want to use polymorphically different actions { public: virtual ~ActionBase() {} virtual void run() = 0; }; template < typename Command, typename DelayPolicy = ActionDelayPolicy_NoWait > class Action : public ActionBase, public DelayPolicy, public Command { public: virtual void run() { DelayPolicy::wait(); // inherit wait from DelayPolicy Command::execute(); // inherit command to execute } }; // Real executed code can be written once (for each action to execute) class CommandSalute { public: void execute() { std::cout << "Hi!"
<< std::endl; } }; class CommandSmile { public: void execute() { std::cout << ":)" << std::endl; } }; // And waiting behaviors can be defined separately: class ActionDelayPolicy_NoWait { public: void wait() const {} }; // Note that as Action inherits from the policy, the public methods (if required) // will be publicly available at the place of instantiation class ActionDelayPolicy_WaitSeconds { public: ActionDelayPolicy_WaitSeconds() : seconds_( 0 ) {} void wait() const { sleep( seconds_ ); } void wait_period( int seconds ) { seconds_ = seconds; } int wait_period() const { return seconds_; } private: int seconds_; }; // Polymorphically execute the action void execute_action( ActionBase& action ) { action.run(); } // Now the usage: int main() { Action< CommandSalute > salute_now; execute_action( salute_now ); Action< CommandSmile, ActionDelayPolicy_WaitSeconds > smile_later; smile_later.wait_period( 100 ); // Accessible from the wait policy through inheritance execute_action( smile_later ); } ``` The use of inheritance allows public methods from the policy implementations to be accessible through the template instantiation. This disallows the use of aggregation for combining the policies as no new function members could be pushed into the class interface. In the example, the template depends on the policy having a wait() method, which is common to all waiting policies. Now waiting for a time period needs a fixed time period that is set through the wait_period() public method. In the example, the NoWait policy is just a particular example of the WaitSeconds policy with the period set to 0. This was intentional to mark that the policy interface does not need to be the same. Another waiting policy implementation could be waiting on a number of milliseconds, clock ticks, or until some external event, by providing a class that registers as a callback for the given event.
If you don't need polymorphism you can take out from the example the base class and the virtual methods altogether. While this may seem overly complex for the current example, you can decide on adding other policies to the mix. While adding new orthogonal behaviors would imply an exponential growth in the number of classes if plain inheritance is used (with polymorphism), with this approach you can just implement each different part separately and glue it together in the Action template. For example, you could make your action periodic and add an exit policy that determines when to exit the periodic loop. First options that come to mind are LoopPolicy_NRuns and LoopPolicy_TimeSpan, LoopPolicy_Until. This policy method ( exit() in my case ) is called once for each loop. The first implementation counts the number of times it has been called and exits after a fixed number (fixed by the user, as period was fixed in the example above). The second implementation would periodically run the process for a given time period, while the last one will run this process until a given time (clock). If you are still following me up to here, I would indeed make some changes. The first one is that instead of using a template parameter Command that implements a method execute() I would use functors and probably a templated constructor that takes the command to execute as parameter. The rationale is that this will make it much more extensible in combination with other libraries such as boost::bind or boost::lambda, since in that case commands could be bound at the point of instantiation to any free function, functor, or member method of a class. Now I have to go, but if you are interested I can try posting a modified version.
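The policy-composition idea above carries over to duck-typed languages, where the "template parameters" are just base classes chosen at runtime. A rough Python equivalent of the sketch (names mirror the C++ version; the waiting policy merely counts instead of sleeping, and everything here is illustrative):

```python
class NoWaitPolicy:
    def wait(self):
        pass  # execute immediately

class CountWaitPolicy:
    def __init__(self):
        self.waited = 0

    def wait(self):
        self.waited += 1  # stand-in for actually sleeping

class SaluteCommand:
    def execute(self):
        return "Hi!"

def make_action(delay_policy, command):
    # Compose a concrete action type from a delay policy and a command,
    # much like instantiating Action<Command, DelayPolicy> in the C++ sketch.
    class Action(delay_policy, command):
        def run(self):
            self.wait()
            return self.execute()
    return Action

salute_now = make_action(NoWaitPolicy, SaluteCommand)()
result = salute_now.run()

smile_later = make_action(CountWaitPolicy, SaluteCommand)()
delayed_result = smile_later.run()
```

As in the C++ version, each orthogonal behavior lives in its own small class, and the combinations are produced mechanically instead of hand-writing one class per combination.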
There's a design-quality difference between implementation-oriented diamond inheritance where implementation is inherited (risky), and subtyping-oriented inheritance where interfaces or marker-interfaces are inherited (often useful). Generally, if you can avoid the former, you're better off since somewhere down the line the exact invoked method may cause problems, and the importance of virtual bases, states, etc., starts mattering. In fact, Java wouldn't allow you to pull something like that, it supports only the interface hierarchy. I think that the "cleanest" design you can come up for this is to effectively turn all your classes in the diamond into mock-interfaces (by having no state information, and having pure virtual methods). This reduces the impact of ambiguity. And of course, you can use multiple and even diamond inheritance for this just like you would use implements in Java. Then, have a set of concrete implementations of these interfaces that can be implemented in different ways (E.g., aggregation, even inheritance). Encapsulate this framework so that external clients only get the interfaces and never interact directly with the concrete types, and make sure to thoroughly test your implementations. Of course, this is a lot of work, but if you're writing a central and reusable API, this might be your best bet.
Diamond inheritance (C++)
[ "", "c++", "oop", "inheritance", "multiple-inheritance", "diamond-problem", "" ]
In this query: ``` SELECT COUNT(*) AS UserCount, Company.* FROM Company LEFT JOIN User ON User.CompanyId = Company.Id WHERE Company.CanAccessSystem = true AND (User.CanAccessSystem IS null OR User.CanAccessSystem = true) GROUP BY Company.Id ``` I want to query a list of companies that can access a particular system as well as the number of users who can access the system inside the company. This query works for all cases except for one very important one. If a company can access the system but none of the users can, the Company disappears completely from the query (i.e.: Users.CanAccessSystem = false). In that case, I just want the UserCount = 0. Example From Companies that Can Access the System: ``` Users Company Name 1 WidgetWorks 3 WidgetCompany 0 WidgetsRUs ``` This system is on MySQL. Query Edit: fixed a typo "ON User.CompanyId = Company.Id"
The reason that your result doesn't work is because you don't have any join clause. ``` SELECT IFNULL(COUNT(User.Id), 0) AS UserCount, Company.* FROM Company LEFT JOIN User ON User.CompanyId = Company.Id AND User.CanAccessSystem = true WHERE Company.CanAccessSystem = true GROUP BY Company.Id ``` That should work. The point with a left join is that the master table-entries should always appear, however left joined entries doesn't have to. the IFNULL() is only for returning 0 since no appropriate users will render a NULL-value in this case. I'm not really sure how you handle boolean values in MySQL since it doesn't support it natively.
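The key point — putting the user filter in the join condition instead of the WHERE clause — can be checked with any SQL engine. A self-contained sketch using SQLite (the sample data is invented; SQLite uses 1/0 for true/false and its syntax differs slightly from MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Company (Id INTEGER PRIMARY KEY, Name TEXT, CanAccessSystem INTEGER);
    CREATE TABLE User (Id INTEGER PRIMARY KEY, CompanyId INTEGER, CanAccessSystem INTEGER);
    INSERT INTO Company VALUES (1, 'WidgetWorks', 1), (2, 'WidgetCompany', 1), (3, 'WidgetsRUs', 1);
    INSERT INTO User VALUES (1, 1, 1), (2, 2, 1), (3, 2, 1), (4, 2, 1), (5, 3, 0);
""")

# The user filter lives in the ON clause, so companies with no eligible
# users still survive the LEFT JOIN; COUNT(User.Id) ignores the NULLs.
rows = conn.execute("""
    SELECT Company.Name, COUNT(User.Id) AS UserCount
    FROM Company
    LEFT JOIN User
      ON User.CompanyId = Company.Id AND User.CanAccessSystem = 1
    WHERE Company.CanAccessSystem = 1
    GROUP BY Company.Id
    ORDER BY Company.Id
""").fetchall()
```

Counting `User.Id` rather than `*` is what yields 0 for the unmatched company, since `COUNT(column)` skips NULLs while `COUNT(*)` counts the padded row.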
I think I'd start with a sub-query that generates the (disjoint) union of: * A company ID and a (non-zero) count of the users that can access the system where at least one user can access the system * A company ID and a zero count when no user can access the system Assuming that `User.CanAccessSystem IS NULL` was an artefact needed to account for the LEFT JOIN, that leads to: ``` SELECT Company.ID, COUNT(*) AS UserCount FROM Company, User WHERE Company.ID = User.CompanyID AND User.CanAccessSystem = true UNION SELECT Company.ID, 0 AS UserCount FROM Company WHERE NOT EXISTS (SELECT * FROM User WHERE Company.ID = User.CompanyID AND User.CanAccessSystem = true) ``` You could filter both parts with 'AND Company.CanAccessSystem = true' and it might be beneficial if most companies cannot access the system -- or you can defer it until the final processing stage. You then need to do a straight join of this result with Company, ensuring that the filter condition for the company being able to access the system is applied somewhere along the line. Nominally, this leads to the following (untested) code: ``` SELECT UserCount, Company.* FROM Company JOIN (SELECT Company.ID AS ID, COUNT(*) AS UserCount FROM Company, User WHERE Company.ID = User.CompanyID AND User.CanAccessSystem = true UNION SELECT Company.ID AS ID, 0 AS UserCount FROM Company WHERE NOT EXISTS (SELECT * FROM User WHERE Company.ID = User.CompanyID AND User.CanAccessSystem = true) ) AS NumUsers ON Company.ID = NumUsers.ID WHERE Company.CanAccessSystem = true ```
How do I keep an original row in a SQL join from disappearing?
[ "", "sql", "" ]
I have to display a Yes/No button on some condition on my ASP.NET page. I used window.confirm() to display that, but window.confirm displays OK/Cancel buttons. Is there any way I can display Yes/No buttons on an ASP.NET page instead of OK/Cancel using JavaScript?
If you want to keep it simple, just check some of the jQuery or YUI plugins, you can get that up and running with a couple of lines of code. For example, [this one](http://www.ericmmartin.com/simplemodal/) uses jQuery and [this one](http://developer.yahoo.com/yui/examples/container/simpledialog-quickstart.html) uses YUI.
<http://bytes.com/groups/javascript/149838-yes-no-dialog-box-javascript> Read there :)
How to display Yes/No buttons on an ASP.NET page using JavaScript
[ "", "javascript", "asp.net", "button", "confirmation", "" ]
I've always been one to err on the side of preventing exception conditions by never taking an action unless I am certain that there will be no errors. I learned to program in C and this was the only way to really do things. Working with C# I frequently see more reactive programming - try to do something and handle exceptions. To me this seems like using exceptions as control statements. The first few times I saw this I dismissed it as bad practice. But over the past few months I've seen it all over the place and just have to wonder - is this accepted/efficient or just an epidemic? Update: For a bit of clarification most of the exception handling I am seeing is things like ``` try { //open file } catch { //message box for file not found } ``` or even worse ``` try { //open xml //modify xml (100+ lines of code) } catch { //message for 'unspecified error' } ``` I understand there are times when exception handling is very good to use (such as database connections) but I'm referring to the use of exceptions in place of more 'traditional' control. I asked this because I felt like this programming style was using exceptions as a crutch instead of as a recovery method and wanted to know if this was just something I'd have to learn to expect in the C# world.
You can certainly misuse exceptions is c#. In my view, you should never get an ArgumentNullException, since you should always test for null first. However, there are also many cases where you can't range check your way out of an exception. Anything that interacts with "the outside world" (connecting to a web server, database, etc) may throw an exception. Do prevent as much as possible, but you still need the ability to react to everything else.
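The split this answer draws — rule out programmer errors up front, react to environmental conditions — appears in any language. A small Python illustration (the function and its keys are hypothetical):

```python
def read_setting(settings, key):
    # Preventive: a None argument is a programming error we can rule out.
    if settings is None:
        raise ValueError("settings must not be None")

    # Reactive: a missing key is an environmental condition we recover
    # from here, instead of pre-checking at every call site.
    try:
        return settings[key]
    except KeyError:
        return "default"

value = read_setting({"timeout": 30}, "timeout")
fallback = read_setting({}, "timeout")

try:
    read_setting(None, "timeout")
    rejected_none = False
except ValueError:
    rejected_none = True
```

The argument check is the equivalent of guarding against `ArgumentNullException`, while the try/except handles the case that genuinely cannot be validated away.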
As usual, the answer is "it depends", but I subscribe to the "fail fast" philosophy in general. I prefer to use try/finally (sans catch) unless I can actually do something useful to recover from an exception in a particular block of code. Catching every possible exception isn't worth it. In general, failing fast is preferable to failing silently. If, on the other hand, you know how to recover from a particular exception, then yes, go do that. Say you have a file transfer library. It will probably throw an exception if the transfer is interrupted due to a timeout or network failure. That's reasonable. You'll be annoyed if the library just fails silently; checking for a return code is far more error-prone, and not necessarily more readable. But perhaps you have a business rule for sending a bunch of files to a server that you should make at least 3 attempts to transfer the file before giving up and asking for user intervention. In that case, the business logic should handle the exception, try to recover, then do whatever it's supposed to do when the automatic solution fails (alert the user, schedule a later attempt, or whatever). If you find code that does this: ``` try { // do something that could throw // ... } catch {} //swallow the exception ``` or: ``` catch { return null; } ``` That's **probably** broken. Sure, sometimes code that you call can throw an exception that you really don't care about. But I often see people do this just so they don't have to "handle" the exception upstream; the practice makes things harder to debug. Some people consider allowing exceptions to cascade up the chain of responsibility to be bad because you're just "hoping" someone upstream will "miraculously" know what to do. Those people are wrong. Upstream code is often the only place that **can** know what to do. Occasionally, I'll try/catch and throw a different, more appropriate exception. However, when possible, a guard clause is better, e.g.: 
`if (argument==null) throw new ArgumentNullException();` is better than allowing a NullReferenceException to propagate up the call stack, because it's clearer what went wrong. Conditions that "should never happen" or that you didn't know could happen should probably be logged (see, for example, JBoss logging), but can be swallowed before they bring down your application, at least in some cases. ETA: It is probably broken to take a specific exception and then display a general, ambiguous error message. For your second example above, that sounds bad to me. For your first, "File not found", that may be more reasonable (if you actually catch that specific exception, and not just "everything"), unless you have a better way to deal with that condition somewhere else. Modal message boxes are usually a bad "interaction design smell" to me, but that's mostly beside the point.
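The retry-before-escalating rule described above can be sketched as follows (a Python illustration of the pattern, not the original C#; the `transfer` callable and the names are hypothetical):

```python
def transfer_with_retries(transfer, path, max_attempts=3):
    """Try a transfer up to max_attempts times before escalating."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return transfer(path)
        except ConnectionError as exc:  # only catch what we can recover from
            last_error = exc
    # Automatic recovery failed: re-raise so upstream code (the only place
    # that knows how) can alert the user or schedule a later attempt.
    raise last_error
```

The point is that the catch clause is narrow (only the exception we know how to recover from) and the failure is never swallowed: once the attempts run out, it propagates.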
Preventive vs Reactive C# programming
[ "", "c#", "coding-style", "" ]
I have an application where users can upload video files of any size, and I'd like to be able to determine the height/width of a Flash video file (flv or f4v) from a PHP script so that I can size the player appropriately. I'd love a pure PHP solution, but I'd be open to shelling out to a command-line tool and parsing the output (if such a tool exists). Thanks in advance!
[ffmpeg](http://www.ffmpeg.org/) is probably your best bet; there is even a [PHP module](http://ffmpeg-php.sourceforge.net/) for it. ``` ffmpeg -i "FileName" ``` Alternatively, you could read the information from the FLV file directly by opening the file and reading the meta information.
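As a sketch of the command-line route, the dimensions can be scraped out of `ffmpeg -i` output with a regular expression (a Python illustration; the exact stream-info format varies between ffmpeg versions, so treat the pattern as an assumption):

```python
import re

def dimensions_from_ffmpeg_output(output):
    """Pull (width, height) out of ffmpeg's stream info, e.g. '320x240'."""
    match = re.search(r'Video:.*?(\d{2,5})x(\d{2,5})', output)
    if not match:
        return None
    return int(match.group(1)), int(match.group(2))
```

Note that ffmpeg prints this information to stderr, not stdout, so capture stderr when shelling out.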
If you can't use ffmpeg because you don't have control over your server or if you want a PHP solution, have a look at [getID3](http://www.getid3.org/), there's a [FLV module](http://www.getid3.org/source/module.audio-video.flv.phps) that should return a resolution.
How can I obtain the dimensions of a Flash video file from PHP code?
[ "", "php", "flash", "" ]
I have a conflict when trying to mix those plugins; I have based my script on some demos. The problem is that when I drag something inside the same list it triggers the drop event and that item is added to the end of the list, which is correct if the item is dropped in another list, but not in the same one. When I drop it in the same list I want to insert it at that position (it works if I disable the drop event). JS code: ``` $(document).ready(function() { $("#sortlist1").treeview(); $("#sortlist1").droppable({ accept: ".item", drop: function(ev, ui) { alert(ui.sender); $("#sortlist1").append($(ui.draggable)); } }); $("#sortlist2").droppable({ accept: ".item", drop: function(ev, ui) { $("#sortlist2").append($(ui.draggable)); } }); $("#sortlist3").droppable({ accept: ".item", drop: function(ev, ui) { $("#sortlist3").append($(ui.draggable)); } }); $('.sortlist').sortable({ handle : '.icono', update : function () { $('input#sortlist').val($('.sortlist').sortable('serialize')); } }); }); ``` And the HTML: ``` <ul class="sortlist treeview lista" id="sortlist1"> <li id="listItem_1" class="expandable closed item"> <div class="hitarea closed-hitarea expandable-hitarea lastExpandable-hitarea"> <img src="img/arrow_out.png" class="icono" alt="move" /> </div> numero 1<input type="checkbox" /> <ul class="sortlist" id="sublist"> <li id="sublistItem_1"><img src="img/arrow_out.png" class="icono" alt="move" />numero 1<input type="checkbox" /></li> <li id="sublistItem_2"><img src="img/arrow_out.png" class="icono" alt="move" />numero 2<input type="checkbox" /></li> </ul> </li> <li id="listItem_2" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 2<input type="checkbox" /></li> <li id="listItem_3" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 3<input type="checkbox" /></li> <li id="listItem_4" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 4<input type="checkbox" /></li> <li id="listItem_5" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 5<input type="checkbox" /></li> <li id="listItem_6" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 6<input type="checkbox" /></li> <li id="listItem_7" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 7<input type="checkbox" /></li> <li id="listItem_8" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 8<input type="checkbox" /></li> <li id="listItem_9" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 9<input type="checkbox" /></li> <li id="listItem_10" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 10<input type="checkbox" /></li> <li id="listItem_11" class="item"><img src="img/arrow_out.png" class="icono" alt="move" />numero 11<input type="checkbox" /></li> </ul> <ul class="sortlist treeview lista" id="sortlist2"> </ul> <ul class="sortlist treeview lista" id="sortlist3"> </ul> ```
You cannot mix those plugins: they process the same events, and cannot cooperate together. Either rethink your UI, or use different tools. Is it possible to do it? Yes, of course. For example, [Dojo DnD](http://docs.dojocampus.org/dojo/dnd) allows both sorting and drag-and-drop using just one component: [test\_dnd.html](http://archive.dojotoolkit.org/nightly/dojotoolkit/dojo/tests/dnd/test_dnd.html) (link to the debugging server).
You can do this, sort of. Create two links in each item to use as handles. Make the list sortable by one handle. Make the list draggable by the other handle. Now, when you grab one handle or the other, only one plugin will be activated, and events will be processed correctly.
Conflict between Drag and drop and sortable jquery plugins
[ "", "javascript", "jquery", "html", "" ]
PHP's parse\_url() has a host field, which includes the full host. I'm looking for the most reliable (and least costly) way to only return the domain and TLD. Given the examples: * <http://www.google.com/foo>, parse\_url() returns www.google.com for host * <http://www.google.co.uk/foo>, parse\_url() returns www.google.co.uk for host I am looking for only **google.com** or **google.co.uk**. I have contemplated a table of valid TLDs/suffixes and only allowing those and one word. Would you do it any other way? Does anyone know of a pre-canned valid regex for this sort of thing?
How about something like this? ``` function getDomain($url) { $pieces = parse_url($url); $domain = isset($pieces['host']) ? $pieces['host'] : ''; if (preg_match('/(?P<domain>[a-z0-9][a-z0-9\-]{1,63}\.[a-z\.]{2,6})$/i', $domain, $regs)) { return $regs['domain']; } return false; } ``` Will extract the domain name using the classic `parse_url` and then look for a valid domain without any subdomain (www being a subdomain). Won't work on things like 'localhost'. Will return false if it didn't match anything. **// Edit:** Try it out with: ``` echo getDomain('http://www.google.com/test.html') . '<br/>'; echo getDomain('https://news.google.co.uk/?id=12345') . '<br/>'; echo getDomain('http://my.subdomain.google.com/directory1/page.php?id=abc') . '<br/>'; echo getDomain('https://testing.multiple.subdomain.google.co.uk/') . '<br/>'; echo getDomain('http://nothingelsethan.com') . '<br/>'; ``` And it should return: ``` google.com google.co.uk google.com google.co.uk nothingelsethan.com ``` Of course, it won't return anything if it doesn't get through [`parse_url`](http://php.net/parse_url), so make sure it's a well-formed URL. **// Addendum:** Alnitak is right. The solution presented above will work in **most** cases but not necessarily all, and needs to be maintained to make sure, for example, that there aren't new TLDs with .morethan6characters and so on. The only reliable way of extracting the domain is to use a maintained list such as <http://publicsuffix.org/>. It's more painful at first but easier and more robust in the long term. You need to make sure you understand the pros and cons of each method and how it fits with your project.
Currently the only "right" way to do this is to use a list such as that maintained at <http://publicsuffix.org/> BTW, this question is also pretty much a duplicate of: * [Can I improve this regex check for valid domain names?](https://stackoverflow.com/questions/399932/can-this-domain-name-regular-expression-be-refactored-further) * [Get the subdomain from a URL](https://stackoverflow.com/questions/288810/get-the-subdomain-from-a-url) There are standardisation efforts at IETF looking at DNS methods of declaring whether a particular node in the DNS tree is used for "public" registrations, but they're in their early stages of development. All of the popular non-IE browsers use the publicsuffix.org list.
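The suffix-list approach boils down to a longest-suffix match: find the longest public suffix that the host ends with, then keep one extra label. A minimal sketch (Python for illustration; the hard-coded set is a tiny stand-in for the full publicsuffix.org list, which additionally has wildcard and exception rules this sketch ignores):

```python
# Tiny illustrative subset; a real implementation should load the full
# publicsuffix.org list and honour its wildcard/exception rules.
PUBLIC_SUFFIXES = {"com", "org", "uk", "co.uk", "gov.uk"}

def registrable_domain(host):
    """Return domain + public suffix, e.g. 'www.google.co.uk' -> 'google.co.uk'."""
    labels = host.lower().split(".")
    # Candidates shrink as i grows, so the first hit is the longest suffix.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in PUBLIC_SUFFIXES:
            if i == 0:
                return None  # the host itself is a public suffix
            return ".".join(labels[i - 1:])
    return None
```

This is why a regex alone can't be "right": whether `co.uk` is a suffix or a domain is data, not syntax.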
Going where PHP parse_url() doesn't - Parsing only the domain
[ "", "php", "dns", "" ]