I want to use implicit linking in my project, and nmake really wants a .def file. The problem is that this is a class, and I don't know what to write in the exports section. Could anyone point me in the right direction? The error message is the following: **NMAKE : U1073: don't know how to make 'DLLCLASS.def'** P.S.: I'm trying to build using Windows CE Platform Builder.
You can always find the decorated name for the member function by using [dumpbin](http://msdn.microsoft.com/en-us/library/c1h23y6c(VS.80).aspx) /symbols myclass.obj. In my case ``` class A { public: A( int ){} }; ``` the `dumpbin` output showed the symbol `??0A@@QAE@H@Z (public: __thiscall A::A(int))`. Putting this symbol in the .def file causes the linker to create the A::A(int) symbol in the export symbols. **BUT!** as @paercebal states in his comment: the manual entry of decorated (mangled) names is a chore - error-prone and, sadly enough, not guaranteed to be portable across compiler versions.
If I recall correctly, you can use `__declspec(dllexport)` on the *class*, and VC++ will automatically create exports for all the symbols related to the class (constructors/destructor, methods, vtable, typeinfo, etc). Microsoft has more information on this [here](http://msdn.microsoft.com/en-us/library/81h27t8c(VS.80).aspx).
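A quick sketch of what that looks like. The export macro below is a common convention I'm adding for illustration (it is not from the original answer); on non-MSVC compilers it expands to nothing, so the class builds anywhere:

```cpp
// On MSVC, putting __declspec(dllexport) on the class itself exports all of
// its members (constructors, destructor, methods, vtable) with no .def file.
// The DLLCLASS_API macro is a widely used convention, shown here as a sketch.
#if defined(_WIN32) && defined(_MSC_VER)
  #define DLLCLASS_API __declspec(dllexport)
#else
  #define DLLCLASS_API
#endif

class DLLCLASS_API A {
public:
    explicit A(int v) : value_(v) {}
    int value() const { return value_; }
private:
    int value_;
};
```

In a shared header you would typically flip the macro to `__declspec(dllimport)` for consumers of the DLL.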
Exporting DLL C++ Class, question about .def file
[ "c++", "build", "windows-ce", "nmake", "function" ]
Are there any "all-in-one" installers for PHP + MySQL on IIS? Preferably with a GUI configuration interface.
The PHP part is easy with Web Platform Installer: <http://php.iis.net/> MySQL is a breeze to install on Windows: <http://dev.mysql.com/downloads/mysql/5.1.html#win32>
I don't know of any all-in-one installers for both MySQL and PHP, but PHP itself comes with an automated installer that will attach itself to IIS - but the preferred method is still manual (the automated procedure only uses CGI). There are plenty of how-to pages on the web that give you the step-by-step procedure required to get set up (and these differ based on your version of IIS) - I suggest you use one of those instead. Some links to get you started: [PHP Documentation](http://www.php.net/manual/en/install.windows.iis.php) [Installing PHP 5 on IIS in 5 simple steps](http://www.iis-aid.com/articles/how_to_guides/installing_php_5_on_iis_in_5_simple_steps)
Easy IIS install of PHP
[ "php", "iis" ]
I've got a "Schroedinger's Cat" type of problem here -- my program (actually the test suite for my program, but a program nonetheless) is crashing, but only when built in release mode, and only when launched from the command line. Through caveman debugging (i.e., nasty printf() messages all over the place), I have determined the test method where the code is crashing, though unfortunately the actual crash seems to happen in some destructor, since the last trace messages I see are in other destructors which execute cleanly. When I attempt to run this program inside of Visual Studio, it doesn't crash. Same goes when launching from WinDbg.exe. The crash only occurs when launching from the command line. This is happening under Windows Vista, btw, and unfortunately I don't have access to an XP machine right now to test on. It would be really nice if I could get Windows to print out a stack trace, or *something* other than simply terminating the program as if it had exited cleanly. Does anyone have any advice as to how I could get some more meaningful information here and hopefully fix this bug? Edit: The problem was indeed caused by an out-of-bounds array, [which I describe more in this post](https://stackoverflow.com/questions/186237/program-only-crashes-as-release-build-how-to-debug#187966). Thanks everybody for your help in finding this problem!
In 100% of the cases I've seen or heard of, where a C or C++ program runs fine in the debugger but fails when run outside, the cause has been writing past the end of a function local array. (The debugger puts more on the stack, so you're less likely to overwrite something important.)
When I have encountered problems like this before, it has generally been due to variable initialization. In debug mode, variables and pointers typically get filled in automatically (with zero or a recognizable fill pattern), but in release mode they do not. Therefore, if you have code like this ``` int* p; .... if (p == 0) { // do stuff } ``` in debug mode the code in the if is not executed, but in release mode p contains an undefined value, which is unlikely to be 0, so the code is executed, often causing a crash. I would check your code for uninitialized variables. This can also apply to the contents of arrays.
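A minimal sketch of the fix (my own illustration, not from the original answer): initialize every local at the point of declaration, so the branch is well-defined in both debug and release builds.

```cpp
// With an explicit initializer, the nullptr test below is well-defined in
// every build configuration; without it, reading `p` before assignment is
// undefined behavior that may only *look* harmless under the debugger.
int lookup_or_default(bool have_value, int value) {
    const int* p = nullptr;   // explicit init -- never indeterminate
    if (have_value) {
        p = &value;
    }
    if (p == nullptr) {       // the "do stuff" branch from the answer
        return -1;
    }
    return *p;
}
```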
Program only crashes as release build -- how to debug?
[ "c++", "debugging" ]
I am working on a project that is built on an extended version of the default PDO and PDOStatement classes and as such it uses PDO-style named parameters instead of the "?" placeholder for variables. However, the client is now requesting that we look into using a different driver because their version of OS X Web Server apparently doesn't have the ability to install this driver easily. So is anyone aware of any other database abstraction libraries or database access drivers that I could easily extend with our current set of functionality and that take named parameters like PDO does?
PDO and ADODB both support prepared statements, but the API is not great. Most frameworks provide an abstraction to one of those. I'm using Zend\_Db from the Zend Framework and I highly recommend it. It has a select class for composing queries (which turn into prepared statements when queried against the database adapter), a table class for handling CRUD operations - which I use as a base for most of my models and much more. Check it out [here](http://framework.zend.com/manual/en/zend.db.html).
I've used PEAR::MDB2 as well as PDO. MDB2's documentation is a little shaky, but once you get it all working it's a dream to work with. <http://pear.php.net/package/MDB2>
What PHP / MySQL drivers or Database Abstraction Layers Support Prepared Statements?
[ "php", "pdo", "prepared-statement", "database-abstraction" ]
Does anyone have C# code handy for doing a ping and traceroute to a target computer? I am looking for a pure code solution, not what I'm doing now, which is invoking the ping.exe and tracert.exe program and parsing the output. I would like something more robust.
Although the Base Class Library includes [Ping](http://msdn.microsoft.com/en-us/library/system.net.networkinformation.ping.aspx), the BCL does not include any tracert functionality. However, a quick search reveals two open-source attempts, the first in C# the second in C++: * <http://www.codeproject.com/KB/IP/tracert.aspx> * <http://www.codeguru.com/Cpp/I-N/network/basicnetworkoperations/article.php/c5457/>
Given that I had to write a TraceRoute class today I figured I might as well share the source code.

```
using System.Collections.Generic;
using System.Net.NetworkInformation;
using System.Text;
using System.Net;

namespace Answer
{
    public class TraceRoute
    {
        private const string Data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";

        public static IEnumerable<IPAddress> GetTraceRoute(string hostNameOrAddress)
        {
            return GetTraceRoute(hostNameOrAddress, 1);
        }

        private static IEnumerable<IPAddress> GetTraceRoute(string hostNameOrAddress, int ttl)
        {
            Ping pinger = new Ping();
            PingOptions pingerOptions = new PingOptions(ttl, true);
            int timeout = 10000;
            byte[] buffer = Encoding.ASCII.GetBytes(Data);
            PingReply reply = pinger.Send(hostNameOrAddress, timeout, buffer, pingerOptions);

            List<IPAddress> result = new List<IPAddress>();
            if (reply.Status == IPStatus.Success)
            {
                result.Add(reply.Address);
            }
            else if (reply.Status == IPStatus.TtlExpired || reply.Status == IPStatus.TimedOut)
            {
                //add the currently returned address if an address was found with this TTL
                if (reply.Status == IPStatus.TtlExpired)
                    result.Add(reply.Address);
                //recurse to get the next address...
                IEnumerable<IPAddress> tempResult = GetTraceRoute(hostNameOrAddress, ttl + 1);
                result.AddRange(tempResult);
            }
            else
            {
                //failure
            }
            return result;
        }
    }
}
```

And a VB version for anyone that wants/needs it:

```
Public Class TraceRoute
    Private Const Data As String = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

    Public Shared Function GetTraceRoute(ByVal hostNameOrAddress As String) As IEnumerable(Of IPAddress)
        Return GetTraceRoute(hostNameOrAddress, 1)
    End Function

    Private Shared Function GetTraceRoute(ByVal hostNameOrAddress As String, ByVal ttl As Integer) As IEnumerable(Of IPAddress)
        Dim pinger As Ping = New Ping
        Dim pingerOptions As PingOptions = New PingOptions(ttl, True)
        Dim timeout As Integer = 10000
        Dim buffer() As Byte = Encoding.ASCII.GetBytes(Data)
        Dim reply As PingReply = pinger.Send(hostNameOrAddress, timeout, buffer, pingerOptions)

        Dim result As List(Of IPAddress) = New List(Of IPAddress)
        If reply.Status = IPStatus.Success Then
            result.Add(reply.Address)
        ElseIf reply.Status = IPStatus.TtlExpired Then
            'add the currently returned address
            result.Add(reply.Address)
            'recurse to get the next address...
            Dim tempResult As IEnumerable(Of IPAddress) = GetTraceRoute(hostNameOrAddress, ttl + 1)
            result.AddRange(tempResult)
        Else
            'failure
        End If
        Return result
    End Function
End Class
```
TraceRoute and Ping in C#
[ "c#", ".net", "networking", "ping", "traceroute" ]
I have a C# method which accepts a Predicate<Foo> and returns a list of matching items... ``` public static List<Foo> FindAll( Predicate<Foo> filter ) { ... } ``` The filter will often be one of a common set... ``` public static class FooPredicates { public static readonly Predicate<Foo> IsEligible = ( foo => ...) ... } ``` ...but may be an anonymous delegate. I'd now like to have this method cache its results in the ASP.NET cache, so repeated calls with the same delegate just return the cached result. For this, I need to create a cache key from the delegate. Will Delegate.GetHashCode() produce sensible results for this purpose? Is there some other member of Delegate that I should look at? Would you do this another way entirely?
To perform your caching task, you can follow the other suggestions and create a Dictionary<Predicate<Foo>,List<Foo>> (static for global use, or a member field otherwise) that caches the results. Before actually executing the Predicate<Foo>, you would need to check if the result already exists in the dictionary. The general name for this deterministic function caching is Memoization - and it's awesome :) Ever since C# 3.0 added lambdas and the swag of Func/Action delegates, adding Memoization to C# is quite easy. Wes Dyer has a [great post](http://blogs.msdn.com/wesdyer/archive/2007/01/26/function-memoization.aspx) that brings the concept to C# with some great examples. If you want me to show you how to do this, let me know... otherwise, Wes' post should be adequate. In answer to your query about delegate hash codes: if two delegates are the same, d1.GetHashCode() should equal d2.GetHashCode(), but I'm not 100% sure about this. You can check this quickly by giving Memoization a go and adding a WriteLine into your FindAll method. If this ends up not being true, another option is to use Linq.Expression<Predicate<Foo>> as a parameter. If the expressions are not closures, then expressions that do the same thing should be equal. Let me know how this goes, I'm interested to know the answer about delegate.Equals.
Delegate equality looks at each invocation in the invocation list, testing for equality of the method to be invoked, and the target of the method. The method is a simple piece of the cache key, but the target of the method (the instance to call it on - assuming an instance method) could be impossible to cache in a serializable way. In particular, for anonymous functions which capture state, it will be an instance of a nested class created to capture that state. If this is all in memory, just keeping the delegate itself as the hash key will be okay - although it may mean that some objects which clients would expect to be garbage collected hang around. If you need to serialize this to a database, it gets hairier. Could you make your method accept a cache key (e.g. a string) as well? (That's assuming an in-memory cache is inadequate.)
Caching delegate results
[ "c#", ".net", "asp.net", "delegates", "memoization" ]
I'm having trouble wrapping my head around this. Right now I have some models that look kind of like this: ``` class Review(models.Model): ...fields... overall_score = models.FloatField(blank=True) class Score(models.Model): review = models.ForeignKey(Review) question = models.TextField() grade = models.IntegerField() ``` A Review has several "scores", and the overall\_score is the average of the scores. When a review or a score is saved, I need to recalculate the overall\_score average. Right now I'm using an overridden save method. Would there be any benefits to using Django's signal dispatcher?
Save/delete signals are generally favourable in situations where you need to make changes which aren't completely specific to the model in question, or could be applied to models which have something in common, or could be configured for use across models. One common task in overridden `save` methods is automated generation of slugs from some text field in a model. That's an example of something which, if you needed to implement it for a number of models, would benefit from using a `pre_save` signal, where the signal handler could take the name of the slug field and the name of the field to generate the slug from. Once you have something like that in place, any enhanced functionality you put in place will also apply to all models - e.g. looking up the slug you're about to add for the type of model in question, to ensure uniqueness. Reusable applications often benefit from the use of signals - if the functionality they provide can be applied to any model, they generally (unless it's unavoidable) won't want users to have to directly modify their models in order to benefit from it. With [django-mptt](https://github.com/django-mptt/django-mptt/), for example, I used the `pre_save` signal to manage a set of fields which describe a tree structure for the model which is about to be created or updated and the `pre_delete` signal to remove tree structure details for the object being deleted and its entire sub-tree of objects before it and they are deleted. Due to the use of signals, users don't have to add or modify `save` or `delete` methods on their models to have this management done for them, they just have to let django-mptt know which models they want it to manage.
You asked: *Would there be any benefits to using Django's signal dispatcher?* I found this in the django docs: > Overridden model methods are not called on bulk operations > > Note that the delete() method for an object is not necessarily called > when deleting objects in bulk using a QuerySet or as a result of a > cascading delete. To ensure customized delete logic gets executed, you > can use pre\_delete and/or post\_delete signals. > > Unfortunately, there isn’t a workaround when creating or updating > objects in bulk, since none of save(), pre\_save, and post\_save are > called. From: [Overriding predefined model methods](https://docs.djangoproject.com/en/dev/topics/db/models/#overriding-predefined-model-methods)
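As a rough, framework-free sketch of what the dispatcher buys you (this toy implementation is mine, not Django's API, though the `connect`/`send`/`post_save` names mirror it): the model's `save()` merely announces the event, and the averaging logic lives in a separately registered handler that the model knows nothing about.

```python
from collections import defaultdict

# Toy signal dispatcher: handlers register for a (signal, sender) pair
# and are invoked whenever that pair is dispatched.
_receivers = defaultdict(list)

def connect(signal, sender, handler):
    _receivers[(signal, sender)].append(handler)

def send(signal, sender, instance):
    for handler in _receivers[(signal, sender)]:
        handler(instance)

class Review:
    def __init__(self):
        self.scores = []
        self.overall_score = None

class Score:
    def __init__(self, review, grade):
        self.review = review
        self.grade = grade

    def save(self):
        if self not in self.review.scores:
            self.review.scores.append(self)
        # The model only announces the event -- no averaging logic here.
        send("post_save", Score, self)

def recalc_average(score):
    review = score.review
    review.overall_score = sum(s.grade for s in review.scores) / len(review.scores)

# Wiring lives outside the models, which is the decoupling signals provide.
connect("post_save", Score, recalc_average)
```

In real Django you would use `django.db.models.signals.post_save` with an `@receiver` decorator instead of this hand-rolled dispatcher.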
Django signals vs. overriding save method
[ "python", "django", "django-models", "django-signals" ]
My Java application has a loading task which requires two server calls that can be parallelized. So I launch a thread t1 (doing **task1**) and a thread t2 (for **task2**). I then want to perform a specific task, **task3**, when both other tasks (1 & 2) are over. Naturally I can't tell which of **task1** and **task2** will finish first... What would be for you the simplest (and safest) way to code this? Thank you for your help.
You've got several options: 1. If task3 is on a separate thread AND the task1 and task2 threads are exclusive to their tasks (no thread pooling) and finish when the task finishes, you can use {T1.join(); T2.join();} to wait for both threads. Pros: easy. Cons: the situation is rarely that simple. 2. If task3 is on a separate thread, you could use a java.util.concurrent.CountDownLatch shared between all threads. Task3 will wait on the latch while task1 and task2 each decrease it. Pros: quite easy, oblivious to the environment. Cons: requires T3 to be created before it's really needed. 3. If task3 should only be created AFTER task1 and task2 are finished (no separate thread for it until after task1 and task2 finish), you'd have to build something a bit more complex. I'd recommend either creating your own ExecutorService that takes a condition in addition to the future and only executes the future when the condition changes, or creating a management service that will check conditions and submit given futures based on these conditions. Mind, this is off the top of my head; there might be simpler solutions. Pros: resource-friendly. Cons: complex.
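Option 2 in code form: a minimal sketch where the two server calls are stubbed out and the latch gates task3 (class and method names are illustrative).

```java
import java.util.concurrent.CountDownLatch;

public class LatchExample {
    // Returns true once both worker tasks have signalled the latch,
    // i.e. once it is safe to run task3.
    static boolean runTasksThenTask3() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2); // one count per task

        Runnable serverCall = () -> {
            // ... perform the real server call here (stubbed out) ...
            latch.countDown();          // task done: decrease the latch
        };

        new Thread(serverCall).start(); // task1
        new Thread(serverCall).start(); // task2

        latch.await();                  // blocks until the count reaches zero
        return true;                    // task3 may start now
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task3 may start: " + runTasksThenTask3());
    }
}
```

Unlike `join()`, this works even if the tasks run on pooled threads, since the latch counts task completions rather than thread exits.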
You can [`join`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#join()) both `t1` and `t2` (in either order), then run task3 after the join.
Thread synchronization question (in Java)
[ "java", "multithreading", "concurrency", "synchronization" ]
Apparently ;-) the standard containers provide some form of guarantees. What type of guarantees and what exactly are the differences between the different types of container? Working from [the SGI page](http://www.sgi.com/tech/stl/) (about [STL](http://en.wikipedia.org/wiki/Standard_Template_Library)) I have come up with this: ``` Container Types: ================ Container: Forward Container Reverse Container Random Access Container Sequence Front Insert Sequence Back Insert Sequence Associative Container Simple Associative Container Pair Associative Container Sorted Associative Container Multiple Associative Container Container Types mapped to Standard Containers ============================================= std::vector: Sequence Back Sequence Forward/Reverse/Random Container std::deque: Sequence Front/Back Sequence Forward/Reverse/Random Container std::list: Sequence Front/Back Sequence Forward/Reverse Container std::set: Sorted/Simple/Unique Associative Container Forward Container std::map: Sorted/Pair/Unique Associative Container Forward Container std::multiset: Sorted/Simple/Multiple Associative Container Forward Container std::multimap: Sorted/Pair/Multiple Associative Container Forward Container Container Guarantees: ===================== Simp or For Rev Rand Front Back Assoc Sort Mult Cont: Cont: Cont Cont: Sequ: Sequ: Sequ: Cont: Cont: Cont: Copy Const: O(n) Fill Const: O(n) begin() O(1) end() O(1) rbegin() O(1) rend() O(1) front() O(1) push_front() O(1) pop_front() O(1) push_back() O(1) pop_back() O(1) Insert() O(ln(n)) Insert: fill O(n) Insert: range O(n) O(kln(n)+n) size() O(1) swap() O(1) erase key O(ln(n)) erase element O(1) erase range O(ln(n)+S) count() O(log(n)+k) find() O(ln(n)) equal range O(ln(n)) Lower Bound/Upper Bound O(ln(n)) Equality O(n) InEquality O(n) Element Access O(1) ```
I found the nice resource *[Standard C++ Containers](http://www.cs.northwestern.edu/~riesbeck/programming/c++/stl-summary.html)*. Probably this is what you are all looking for.

**VECTOR**

**Constructors**

```
vector<T> v;              Make an empty vector. O(1)
vector<T> v(n);           Make a vector with N elements. O(n)
vector<T> v(n, value);    Make a vector with N elements, initialized to value. O(n)
vector<T> v(begin, end);  Make a vector and copy the elements from begin to end. O(n)
```

**Accessors**

```
v[i]          Return (or set) the I'th element. O(1)
v.at(i)       Return (or set) the I'th element, with bounds checking. O(1)
v.size()      Return current number of elements. O(1)
v.empty()     Return true if vector is empty. O(1)
v.begin()     Return random access iterator to start. O(1)
v.end()       Return random access iterator to end. O(1)
v.front()     Return the first element. O(1)
v.back()      Return the last element. O(1)
v.capacity()  Return maximum number of elements. O(1)
```

**Modifiers**

```
v.push_back(value)         Add value to end. O(1) (amortized)
v.insert(iterator, value)  Insert value at the position indexed by iterator. O(n)
v.pop_back()               Remove value from end. O(1)
v.assign(begin, end)       Clear the container and copy in the elements from begin to end. O(n)
v.erase(iterator)          Erase value indexed by iterator. O(n)
v.erase(begin, end)        Erase the elements from begin to end. O(n)
```

For other containers, refer to the page.
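The amortized-O(1) `push_back` guarantee can be observed directly. This is a small experiment of my own (not from the linked page): count how many times the buffer actually moves while inserting n elements; geometric capacity growth keeps the count logarithmic.

```cpp
#include <vector>
#include <cstddef>

// Counts reallocations during n push_back calls by watching the data
// pointer: it only changes when the vector moves to a bigger buffer.
int count_reallocations(std::size_t n) {
    std::vector<int> v;
    int reallocs = 0;
    const int* prev = nullptr;
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.data() != prev) {   // pointer moved => a reallocation happened
            ++reallocs;
            prev = v.data();
        }
    }
    return reallocs;
}
```

For 100,000 insertions a typical implementation reallocates only a few dozen times, which is why the per-insertion cost averages out to O(1).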
I'm not aware of anything like a single table that lets you compare all of them at one glance (I'm not sure such a table would even be feasible). Of course the ISO standard document enumerates the complexity requirements in detail, sometimes in various rather readable tables, other times in less readable bullet points for each specific method. Also, the STL library reference at <http://www.cplusplus.com/reference/stl/> provides the complexity requirements where appropriate.
What are the complexity guarantees of the standard containers?
[ "c++", "stl", "containers", "big-o" ]
I'm working on a control to tie together the view from one ListView to another so that when the master ListView is scrolled, the child ListView view is updated to match. So far I've been able to get the child ListViews to update their view when the master scrollbar buttons are clicked. The problem is that when clicking and dragging the ScrollBar itself, the child ListViews are not updated. I've looked at the messages being sent using Spy++ and the correct messages are getting sent. Here is my current code:

```
public partial class LinkedListViewControl : ListView
{
    [DllImport("User32.dll")]
    private static extern bool SendMessage(IntPtr hwnd, UInt32 msg, IntPtr wParam, IntPtr lParam);

    [DllImport("User32.dll")]
    private static extern bool ShowScrollBar(IntPtr hwnd, int wBar, bool bShow);

    [DllImport("user32.dll")]
    private static extern int SetScrollPos(IntPtr hWnd, int wBar, int nPos, bool bRedraw);

    private const int WM_HSCROLL = 0x114;
    private const int SB_HORZ = 0;
    private const int SB_VERT = 1;
    private const int SB_CTL = 2;
    private const int SB_BOTH = 3;
    private const int SB_THUMBPOSITION = 4;
    private const int SB_THUMBTRACK = 5;
    private const int SB_ENDSCROLL = 8;

    public LinkedListViewControl()
    {
        InitializeComponent();
    }

    private readonly List<ListView> _linkedListViews = new List<ListView>();

    public void AddLinkedView(ListView listView)
    {
        if (!_linkedListViews.Contains(listView))
        {
            _linkedListViews.Add(listView);
            HideScrollBar(listView);
        }
    }

    public bool RemoveLinkedView(ListView listView)
    {
        return _linkedListViews.Remove(listView);
    }

    private void HideScrollBar(ListView listView)
    {
        //Make sure the list view is scrollable
        listView.Scrollable = true;
        //Then hide the scroll bar
        ShowScrollBar(listView.Handle, SB_BOTH, false);
    }

    protected override void WndProc(ref Message msg)
    {
        if (_linkedListViews.Count > 0)
        {
            //Look for WM_HSCROLL messages
            if (msg.Msg == WM_HSCROLL)
            {
                foreach (ListView view in _linkedListViews)
                {
                    SendMessage(view.Handle, WM_HSCROLL, msg.WParam, IntPtr.Zero);
                }
            }
        }
    }
}
```

Based on [this post](http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=3111420&SiteID=1) on the MS Tech Forums I tried to capture and process the SB\_THUMBTRACK event:

```
protected override void WndProc(ref Message msg)
{
    if (_linkedListViews.Count > 0)
    {
        //Look for WM_HSCROLL messages
        if (msg.Msg == WM_HSCROLL)
        {
            Int16 hi = (Int16)((int)msg.WParam >> 16);
            Int16 lo = (Int16)msg.WParam;
            foreach (ListView view in _linkedListViews)
            {
                if (lo == SB_THUMBTRACK)
                {
                    SetScrollPos(view.Handle, SB_HORZ, hi, true);
                    int wParam = 4 + 0x10000 * hi;
                    SendMessage(view.Handle, WM_HSCROLL, (IntPtr)(wParam), IntPtr.Zero);
                }
                else
                {
                    SendMessage(view.Handle, WM_HSCROLL, msg.WParam, IntPtr.Zero);
                }
            }
        }
    }

    // Pass message to default handler.
    base.WndProc(ref msg);
}
```

This will update the location of the child ListView ScrollBar but does not change the actual view in the child. So my questions are:

1. Is it possible to update the child ListViews when the master ListView ScrollBar is dragged?
2. If so, how?
I wanted to do the same thing, and after searching around I found your code here, which helped, but of course didn't solve the problem. But after playing around with it, I have found a solution. The key came when I realized that since the scroll buttons work, you can use that to make the slider work. In other words, when the SB\_THUMBTRACK event comes in, I issue repeated SB\_LINELEFT and SB\_LINERIGHT events until my child ListView gets close to where the master is. Yes, this isn't perfect, but it works close enough. In my case, my master ListView is called "reportView", while my child ListView is called "summaryView". Here's my pertinent code:

```
public class MyListView : ListView
{
    public event ScrollEventHandler HScrollEvent;

    protected override void WndProc(ref System.Windows.Forms.Message msg)
    {
        if (msg.Msg == WM_HSCROLL && HScrollEvent != null)
            HScrollEvent(this, new ScrollEventArgs(ScrollEventType.ThumbTrack, (int)msg.WParam));
        base.WndProc(ref msg);
    }
}
```

And then the event handler itself:

```
reportView.HScrollEvent += new ScrollEventHandler((sender, e) =>
{
    if ((ushort)e.NewValue != SB_THUMBTRACK)
        SendMessage(summaryView.Handle, WM_HSCROLL, (IntPtr)e.NewValue, IntPtr.Zero);
    else
    {
        int newPos = e.NewValue >> 16;
        int oldPos = GetScrollPos(reportView.Handle, SB_HORZ);
        int pos = GetScrollPos(summaryView.Handle, SB_HORZ);
        int lst;
        if (pos != newPos)
            if (pos < newPos && oldPos < newPos)
                do
                {
                    lst = pos;
                    SendMessage(summaryView.Handle, WM_HSCROLL, (IntPtr)SB_LINERIGHT, IntPtr.Zero);
                } while ((pos = GetScrollPos(summaryView.Handle, SB_HORZ)) < newPos && pos != lst);
            else if (pos > newPos && oldPos > newPos)
                do
                {
                    lst = pos;
                    SendMessage(summaryView.Handle, WM_HSCROLL, (IntPtr)SB_LINELEFT, IntPtr.Zero);
                } while ((pos = GetScrollPos(summaryView.Handle, SB_HORZ)) > newPos && pos != lst);
    }
});
```

Sorry about the odd formatting of the while loops there, but that's how I prefer to code things like that. The next problem was getting rid of the scroll bars in the child ListView. I noticed you had a method called HideScrollBar. This didn't really work for me. I found that a better solution in my case was leaving the scroll bar there, but "covering" it up instead. I do this with the column header as well. I just slide my child control up under the master control to cover the column header. And then I stretch the child to fall out of the panel that contains it. And then to provide a bit of a border along the edge of my containing panel, I throw in a control to cover the visible bottom edge of my child ListView. It ends up looking rather nice. I also added an event handler to sync changing column widths, as in:

```
reportView.ColumnWidthChanging += new ColumnWidthChangingEventHandler((sender, e) =>
{
    summaryView.Columns[e.ColumnIndex].Width = e.NewWidth;
});
```

While this all seems a bit of a kludge, it works for me.
This is conjecture just to get the mental juices flowing, so take it as you will: in the scroll handler for the master list, can you call the scroll handler for the child list (passing the sender and event args from the master)? Add this to your Form load: ``` masterList.Scroll += new ScrollEventHandler(this.masterList_scroll); ``` Which references this: ``` private void masterList_scroll(Object sender, System.ScrollEventArgs e) { childList_scroll(sender, e); } private void childList_scroll(Object sender, System.ScrollEventArgs e) { childList.value = e.NewValue; } ```
Synchronized ListViews in .NET
[ "c#", ".net", "winforms", "winapi", "listview" ]
I'm wondering what the best practices are for storing a relational data structure in XML. Particularly, I am wondering about best practices for enforcing node order. For example, say I have three objects: `School`, `Course`, and `Student`, which are defined as follows:

```
class School
{
    List<Course> Courses;
    List<Student> Students;
}

class Course
{
    string Number;
    string Description;
}

class Student
{
    string Name;
    List<Course> EnrolledIn;
}
```

I would store such a data structure in XML like so:

```
<School>
  <Courses>
    <Course Number="ENGL 101" Description="English I" />
    <Course Number="CHEM 102" Description="General Inorganic Chemistry" />
    <Course Number="MATH 103" Description="Trigonometry" />
  </Courses>
  <Students>
    <Student Name="Jack">
      <EnrolledIn>
        <Course Number="CHEM 102" />
        <Course Number="MATH 103" />
      </EnrolledIn>
    </Student>
    <Student Name="Jill">
      <EnrolledIn>
        <Course Number="ENGL 101" />
        <Course Number="MATH 103" />
      </EnrolledIn>
    </Student>
  </Students>
</School>
```

With the XML ordered this way, I can parse `Courses` first. Then, when I parse `Students`, I can look up each `Course` listed in `EnrolledIn` (by its `Number`) in the `School.Courses` list. This gives me an object reference to add to the `EnrolledIn` list in `Student`. If `Students`, however, comes *before* `Courses`, such a lookup to get an object reference is not possible, since `School.Courses` has not yet been populated. So what are the best practices for storing relational data in XML?

- Should I enforce that `Courses` must always come before `Students`?
- Should I tolerate any ordering and create a stub `Course` object whenever I encounter one I have not yet seen? (To be expanded when the definition of the `Course` is eventually reached later.)
- Is there some other way I should be persisting/loading my objects to/from XML?

(I am currently implementing `Save` and `Load` methods on all my business objects and doing all this manually using `System.Xml.XmlDocument` and its associated classes.) I am used to working with relational data out of SQL, but this is my first experience trying to store a non-trivial relational data structure in XML. Any advice you can provide as to how I should proceed would be greatly appreciated.
Don't think in SQL or relational when working with XML, because there are no order constraints. You can however query using XPath to any portion of the XML document at any time. You want the courses first, then "//Courses/Course". You want the students enrollments next, then "//Students/Student/EnrolledIn/Course". The bottom line being... just because XML is stored in a file, don't get caught thinking all your accesses are serial. --- I posted a separate question, ["Can XPath do a foreign key lookup across two subtrees of an XML?"](https://stackoverflow.com/questions/142010/can-xpath-do-a-foreign-key-lookup-across-two-subtrees-of-an-xml), in order to clarify my position. The solution shows how you can use XPath to make relational queries against XML data.
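To make the order-independence concrete, here is a rough two-pass sketch using Python's ElementTree (Python chosen purely for brevity; the same approach works with `System.Xml`, and the function name is mine): index the course definitions first, then resolve every `EnrolledIn` reference against the index, regardless of which subtree appears first in the file.

```python
import xml.etree.ElementTree as ET

def load_school(xml_text):
    root = ET.fromstring(xml_text)
    # Pass 1: index every course *definition* (the elements that carry a
    # Description attribute), wherever they sit in the document.
    courses = {c.get("Number"): c.get("Description")
               for c in root.iter("Course")
               if c.get("Description") is not None}
    # Pass 2: resolve each student's EnrolledIn references against the
    # index -- works whether <Courses> came before or after <Students>.
    students = {}
    for s in root.iter("Student"):
        students[s.get("Name")] = [courses[c.get("Number")]
                                   for c in s.iter("Course")]
    return courses, students
```

Because both passes run over the already-parsed tree, no ordering constraint and no stub objects are needed.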
While you can specify the order of child elements using an <xsd:sequence>, by requiring child objects to come in a specific order you make your system less flexible (i.e., harder to update using Notepad). The best thing to do is to parse out all your data, then perform what actions you need to do. Don't act during the parse. --- Obviously, the design of the XML and the data behind it precludes serializing a single POCO to XML. You need to control the serialization and deserialization logic in order to unhook and re-hook objects together. I'd suggest creating a custom serializer that builds the xml representation of this object graph. It can thereby control not only the order of serialization, but also handle situations where nodes aren't in the expected order. You could do other things such as adding custom attributes to use for linking objects together which don't exist as public properties on the objects being serialized. Creating the xml would be as simple as iterating over your objects a few times, building up collections of XElements with the expected representation of the objects as xml. When you're done you can stitch them together into an XDocument and grab the xml from it. You can make multiple passes over the xml on the reverse side to re-create your object graph and restore all references.
Storing Relational Data in XML
[ "", "c#", "xml", "" ]
How can I make my window not have a title bar but appear in the task bar with some descriptive text? If you set the Form's .Text property then .net gives it a title bar, which I don't want. ``` this.ControlBox = false; this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.FixedDialog; this.MaximizeBox = false; this.MinimizeBox = false; this.ShowInTaskbar = true; this.Text = "My title for task bar"; ``` I've found a partial solution, to override CreateParams: ``` protected override System.Windows.Forms.CreateParams CreateParams { get { System.Windows.Forms.CreateParams cp = base.CreateParams; cp.Style &= ~0x00C00000; // WS_CAPTION return cp; } } ``` However this causes my window to be resized as if they have a title bar, ie it's taller than it should be. Is there any good solution to this?
One approach to look into might be to set the `FormBorderStyle` property of your `Form` to `None` (instead of `FixedDialog`). The drawback to this approach is that you lose the borders of your window as well as the Titlebar. A result of this is that you lose the form repositioning/resizing logic that you normally get "for free" with Windows Forms; you would need to deal with this by implementing your own form move/resize logic in the form's MouseDown and MouseMove event handlers. I would also be interested to hear about better solutions.
In my case I have a Form with `FormBorderStyle = FormBorderStyle.SizableToolWindow` and the following `CreateParams` override did the trick (i.e. I now have a form without caption and without additional margin for the title, but it keeps the title in the task bar): ``` protected override System.Windows.Forms.CreateParams CreateParams { get { var parms = base.CreateParams; parms.Style &= ~0x00C00000; // remove WS_CAPTION parms.Style |= 0x00040000; // include WS_SIZEBOX return parms; } } ```
How to make a window have taskbar text but no title bar
[ "", "c#", ".net", "winforms", "" ]
What Javascript libraries can you recommend for syntax highlighting <code> blocks in HTML? (One suggestion per answer please).
StackOverflow uses the [Prettify](https://github.com/google/code-prettify) library.
I recently developed one called rainbow. The main design goal was to make the core library really small and make it really easy for developers to extend. See <http://rainbowco.de>.
Syntax highlighting code with Javascript
[ "", "javascript", "html", "syntax", "syntax-highlighting", "" ]
I personally like the *exclusive or*, `^`, operator when it makes sense in the context of boolean checks because of its conciseness. I much prefer to write ``` if (boolean1 ^ boolean2) { //do it } ``` than ``` if((boolean1 && !boolean2) || (boolean2 && !boolean1)) { //do it } ``` but I often get confused looks from other experienced Java developers (not just the newbies), and sometimes comments about how it should only be used for bitwise operations. I'm curious as to the best practices regarding the usage of the `^` operator.
You can simply use `!=` instead.
I think you've answered your own question - if you get strange looks from people, it's probably safer to go with the more explicit option. If you need to comment it, then you're probably better off replacing it with the more verbose version and not making people ask the question in the first place.
Is it good practice to use the xor operator for boolean checks?
[ "", "java", "conditional-statements", "bitwise-operators", "logical-operators", "xor", "" ]
I have an *index.php* file which has to process many different file types. How do I guess the filetype based on the `REQUEST_URI`? If I request `http://site/image.jpg`, and all requests redirect through *index.php*, which looks like this ``` <?php include('/www/site'.$_SERVER['REQUEST_URI']); ?> ``` How would I make that work correctly? Should I test based on the extension of the file requested, or is there a way to get the filetype?
If you are sure you're only ever working with images, you can check out the [exif\_imagetype()](https://www.php.net/manual/en/function.exif-imagetype.php) PHP function, which attempts to return the image MIME type. If you don't mind external dependencies, you can also check out the excellent [getID3](http://getid3.sourceforge.net) library which can determine the MIME type of many different file types. Lastly, you can check out the [mime\_content\_type()](https://www.php.net/manual/en/function.mime-content-type.php) function - but it has been deprecated for the [Fileinfo](https://www.php.net/manual/en/ref.fileinfo.php) PECL extension.
*mime\_content\_type()* is deprecated, so you won't be able to count on it working in the future. There is a "fileinfo" PECL extension, but I haven't heard good things about it. If you are running on a Unix-like server, you can do the following, which has worked fine for me:

```
$file = escapeshellarg($filename);
$mime = shell_exec("file -bi " . $file);
```

`$filename` should probably include the absolute path.
How do I find the MIME type of a file with PHP?
[ "", "php", "filesystems", "http-headers", "content-type", "" ]
This is the way I read a file:

```
public static string readFile(string path)
{
    StringBuilder stringFromFile = new StringBuilder();
    StreamReader SR;
    string S;
    SR = File.OpenText(path);
    S = SR.ReadLine();
    while (S != null)
    {
        stringFromFile.Append(SR.ReadLine());
    }
    SR.Close();
    return stringFromFile.ToString();
}
```

The problem is it takes so long (the .txt file is about 2.5 megs). Took over 5 minutes. Is there a better way?

**Solution taken**

```
public static string readFile(string path)
{
    return File.ReadAllText(path);
}
```

Took less than 1 second... :)
Leaving aside the horrible variable names and the lack of a using statement (you won't close the file if there are any exceptions) that should be okay, and *certainly* shouldn't take 5 minutes to read 2.5 megs. Where does the file live? Is it on a flaky network share? By the way, the only difference between what you're doing and using File.ReadAllText is that you're losing line breaks. Is this deliberate? How long does ReadAllText take?
```
S = SR.ReadLine();
while (S != null)
{
    stringFromFile.Append(SR.ReadLine());
}
```

Of note here, `S` is never reassigned after that initial `ReadLine()`, so the `S != null` condition never becomes false once you enter the while loop (and the first line read is never appended). Try:

```
S = SR.ReadLine();
while (S != null)
{
    stringFromFile.Append(S);
    S = SR.ReadLine();
}
```

or use one of the other answers. If you need to remove newlines, use `string.Replace(Environment.NewLine, "")`
Reading from file not fast enough, how would I speed it up?
[ "", "c#", ".net", "optimization", ".net-2.0", "io", "" ]
As a test engineer, I often engage in exploratory testing. When I encounter a bug, it is not always immediately clear which steps led to the problem. So, obviously, I have to find the required steps to reproduce the problem. Depending on the situation, this can take a large amount of time. What would be really helpful is a logging tool that keeps track of mouse and keyboard actions and possibly also stores the properties of components that have been clicked (the AUT is written in Java). I could simply not find a suitable tool, but maybe my search approach is erroneous. Could anyone point me to the right direction?
You may want to check out [BB Test Assistant](http://www.bbsoftware.co.uk/BBTestAssistant.aspx). I saw a very early version of this and was impressed. I've also heard good things from people who have used it since then.
[This question](https://stackoverflow.com/questions/7440/what-do-you-use-to-unit-test-your-web-ui) lists tools that can be used to test web applications. Some of the answers may be useful. One example is Selenium - a Mozilla plugin that records your actions and can replay them later.
Are there tools that log user actions to help reproduce bugs?
[ "", "java", "testing", "logging", "manual-testing", "exploratory", "" ]
What is currying? How can currying be done in C++? Please also explain the binders in the STL.
In short, currying takes a function `f(x, y)` and given a fixed `Y`, gives a new function `g(x)` where

```
g(x) == f(x, Y)
```

This new function may be called in situations where only one argument is supplied, and passes the call on to the original `f` function with the fixed `Y` argument.

The binders in the STL allow you to do this for C++ functions. For example:

```
#include <algorithm>
#include <functional>
#include <iostream>
#include <iterator>
#include <vector>

using namespace std;

// declare a binary function object
class adder: public binary_function<int, int, int> {
public:
    int operator()(int x, int y) const
    {
        return x + y;
    }
};

int main()
{
    // initialise some sample data
    vector<int> a, b;
    a.push_back(1);
    a.push_back(2);
    a.push_back(3);

    // here we declare a function object f and try it out
    adder f;
    cout << "f(2, 3) = " << f(2, 3) << endl;

    // transform() expects a function with one argument, so we use
    // bind2nd to make a new function based on f, that takes one
    // argument and adds 5 to it
    transform(a.begin(), a.end(), back_inserter(b), bind2nd(f, 5));

    // output b to see what we got
    cout << "b = [" << endl;
    for (vector<int>::iterator i = b.begin(); i != b.end(); ++i) {
        cout << "  " << *i << endl;
    }
    cout << "]" << endl;

    return 0;
}
```
# 1. What is currying? Currying simply means a transformation of a function of several arguments to a function of a single argument. This is most easily illustrated using an example: Take a function `f` that accepts three arguments: ``` int f(int a,std::string b,float c) { // do something with a, b, and c return 0; } ``` If we want to call `f`, we have to provide all of its arguments `f(1,"some string",19.7f)`. Then a curried version of `f`, let's call it `curried_f=curry(f)` only expects a single argument, that corresponds to the first argument of `f`, namely the argument `a`. Additionally, `f(1,"some string",19.7f)` can also be written using the curried version as `curried_f(1)("some string")(19.7f)`. The return value of `curried_f(1)` on the other hand is just another function, that handles the next argument of `f`. In the end, we end up with a function or callable `curried_f` that fulfills the following equality: ``` curried_f(first_arg)(second_arg)...(last_arg) == f(first_arg,second_arg,...,last_arg). ``` # 2. How can currying be achieved in C++? The following is a little bit more complicated, but works very well for me (using c++11)... It also allows currying of arbitrary degree like so: `auto curried=curry(f)(arg1)(arg2)(arg3)` and later `auto result=curried(arg4)(arg5)`. 
Here it goes: ``` #include <functional> namespace _dtl { template <typename FUNCTION> struct _curry; // specialization for functions with a single argument template <typename R,typename T> struct _curry<std::function<R(T)>> { using type = std::function<R(T)>; const type result; _curry(type fun) : result(fun) {} }; // recursive specialization for functions with more arguments template <typename R,typename T,typename...Ts> struct _curry<std::function<R(T,Ts...)>> { using remaining_type = typename _curry<std::function<R(Ts...)> >::type; using type = std::function<remaining_type(T)>; const type result; _curry(std::function<R(T,Ts...)> fun) : result ( [=](const T& t) { return _curry<std::function<R(Ts...)>>( [=](const Ts&...ts){ return fun(t, ts...); } ).result; } ) {} }; } template <typename R,typename...Ts> auto curry(const std::function<R(Ts...)> fun) -> typename _dtl::_curry<std::function<R(Ts...)>>::type { return _dtl::_curry<std::function<R(Ts...)>>(fun).result; } template <typename R,typename...Ts> auto curry(R(* const fun)(Ts...)) -> typename _dtl::_curry<std::function<R(Ts...)>>::type { return _dtl::_curry<std::function<R(Ts...)>>(fun).result; } #include <iostream> void f(std::string a,std::string b,std::string c) { std::cout << a << b << c; } int main() { curry(f)("Hello ")("functional ")("world!"); return 0; } ``` [View output](http://ideone.com/CQ8ELv) OK, as Samer commented, I should add some explanations as to how this works. The actual implementation is done in the `_dtl::_curry`, while the template functions `curry` are only convenience wrappers. The implementation is recursive over the arguments of the `std::function` template argument `FUNCTION`. For a function with only a single argument, the result is identical to the original function. 
``` _curry(std::function<R(T,Ts...)> fun) : result ( [=](const T& t) { return _curry<std::function<R(Ts...)>>( [=](const Ts&...ts){ return fun(t, ts...); } ).result; } ) {} ``` Here the tricky thing: For a function with more arguments, we return a lambda whose argument is bound to the first argument to the call to `fun`. Finally, the remaining currying for the remaining `N-1` arguments is delegated to the implementation of `_curry<Ts...>` with one less template argument. # Update for c++14 / 17: A new idea to approach the problem of currying just came to me... With the introduction of `if constexpr` into c++17 (and with the help of `void_t` to determine if a function is fully curried), things seem to get a lot easier: ``` template< class, class = std::void_t<> > struct needs_unapply : std::true_type { }; template< class T > struct needs_unapply<T, std::void_t<decltype(std::declval<T>()())>> : std::false_type { }; template <typename F> auto curry(F&& f) { /// Check if f() is a valid function call. If not we need /// to curry at least one argument: if constexpr (needs_unapply<decltype(f)>::value) { return [=](auto&& x) { return curry( [=](auto&&...xs) -> decltype(f(x,xs...)) { return f(x,xs...); } ); }; } else { /// If 'f()' is a valid call, just call it, we are done. return f(); } } int main() { auto f = [](auto a, auto b, auto c, auto d) { return a * b * c * d; }; return curry(f)(1)(2)(3)(4); } ``` See code in action on [here](https://godbolt.org/g/wrli32). With a similar approach, [here](https://godbolt.org/g/mp8OD1) is how to curry functions with arbitrary number of arguments. 
The same idea seems to work out also in C++14, if we exchange the `constexpr if` with a template selection depending on the test `needs_unapply<decltype(f)>::value`: ``` template <typename F> auto curry(F&& f); template <bool> struct curry_on; template <> struct curry_on<false> { template <typename F> static auto apply(F&& f) { return f(); } }; template <> struct curry_on<true> { template <typename F> static auto apply(F&& f) { return [=](auto&& x) { return curry( [=](auto&&...xs) -> decltype(f(x,xs...)) { return f(x,xs...); } ); }; } }; template <typename F> auto curry(F&& f) { return curry_on<needs_unapply<decltype(f)>::value>::template apply(f); } ``` If you want to force our curry function to inline, we can apply compiler specific macros defined like this. ``` #if defined(_MSC_VER) #define FORCE_INLINE __forceinline #elif defined(__GNUC__) || defined(__GNUG__) #define FORCE_INLINE __attribute__((always_inline)) #else // Clang or else #define FORCE_INLINE inline #endif ```
How can currying be done in C++?
[ "", "c++", "stl", "functional-programming", "currying", "binders", "" ]
I've got a php script. Most of the time the script returns html, which is working fine, but on one occasion (parameter ?Format=XML) the script returns XML instead of HTML. Is there any way to change the returned mime type of the php output on the fly from text/html to text/xml or application/xml?
``` header('Content-type: application/xml'); ``` More information available at the PHP documentation for [`header()`](http://php.net/manual/en/function.header.php)
Set the `Content-Type` header: ``` header('Content-Type: text/xml'); ``` Though you should probably use "application/xml" instead.
change mime type of output in php
[ "", "php", "xml", "mime-types", "" ]
Say that I write an article or document about a certain topic, but the content is meant for readers with certain prior knowledge about the topic. To help people who don't have the "required" background information, I would like to add a note to the top of the page with an explanation and possibly a link to some reference material. Here's an example: > **Using The Best Product in the World to Create World Peace** > > *Note: This article assumes you are already familiar with The Best Product in the World. To learn more about The Best Product in the World, please see the official web site.* > > The Best Product in the World ... Now, I don't want the note to show up in Google search engine results, only the title and the content that follows the note. Is there any way I can achieve this? Also, is it possible to do this without direct control over the entire HTML file and/or HTTP response, i.e. on blog hosted by a third party, like [Wordpress.com](http://www.wordpress.com)? **Update** Unfortunately, both the JavaScript solution and the HTML meta tag approach does not work on hosted Wordpress.com blogs, since they don't allow JavaScript in posts and they don't provide access to edit the HTML meta tags directly.
You can build that portion of the content dynamically using Javascript. For example: ``` <html> <body> <div id="dynContent"> </div> Rest of the content here. </body> <script language='javascript' type='text/javascript'> var dyn = document.getElementById('dynContent'); dyn.innerHTML = "Put the dynamic content here"; </script> </html> ``` If you're really stuck, you can just go old school and reference an image that has your text as part of it. It's not particularly "accessibility-friendly" though.
If you can use an iframe, then place the content on a static HTML page and use the meta tag in its head to tell the search engines to ignore it. Since it's a separate page, Google etc. should ignore it.

The meta tag:

```
<meta name="robots" content="noindex, nofollow">
```
How can I hide content in a HTML file from search engines?
[ "", "javascript", "search-engine", "" ]
What is the best method for displaying major/minor versions in a C# console application?

The `System.Windows.Forms` namespace includes an `Application.ProductVersion` property that can be used to display the name/version information set via the Visual Studio project properties (Assembly Information). As such, here is my current mechanism:

```
Console.WriteLine("{0} ({1})", System.Windows.Forms.Application.ProductName, System.Windows.Forms.Application.ProductVersion);
```

Why is this part of `Forms`? Is this appropriate for a Console application?
``` Assembly.GetExecutingAssembly().GetName().Version ``` Also, you can still use the class, you just have to reference the containing assembly. It's no biggie.
`Assembly.GetExecutingAssembly().GetName().Version` is not the same as `Application.ProductVersion` (but may be good enough depending on your environment).

As can be seen with Lutz Roeder's Reflector, `Application.ProductVersion` first attempts to use the **AssemblyInformationalVersion** attribute from `Assembly.GetEntryAssembly()` if it's present, and if `GetEntryAssembly()` is not null. Otherwise it uses the file version of the executable file.

I don't see any reason not to use `Application.ProductVersion` in a console application.
Method for Application Version on a Console Utility App
[ "", "c#", ".net", "console", "" ]
Let's say I have a model like this ``` class Foo(db.Model): id = db.StringProperty() bar = db.StringProperty() baz = db.StringProperty() ``` And I'm going a GqlQuery like this ``` foos = db.GqlQuery("SELECT * FROM Foo") ``` **I want to take the results of the GqlQuery and turn into some sort of JSON string that I can manipulate from different languages.** --- Here's how I'm doing it now 1. Add a method to the **Foo** class that converts it into a dictionary ``` def toDict(self): return { 'id': self.id, 'bar': self.bar, 'baz': self'baz } ``` 2. Loop through the GqlQuery results and manually add each Foo instance to a dictionary ``` fooDict = {} for foo in foos: fooDict[foo.id] = foo.toDict() return simplejson.dumps(fooDict) ``` --- My approach above works but it feels kind of gross. Is there a cleaner, more "Pythonic" way to handle this? **The end format doesn't have to be exactly what I've done above. It just has to be something that converts nicely to JSON so I can deal with it from Javascript/PHP/whatever.**
Take a look at [google.appengine.api.datastore](http://code.google.com/p/googleappengine/source/browse/trunk/google/appengine/api/datastore.py). It's the lower level datastore API that google.appengine.ext.db builds on, and it returns Entity objects, which subclass dict. You can query it using GQL with [google.appengine.ext.gql](http://code.google.com/p/googleappengine/source/browse/trunk/google/appengine/ext/gql/__init__.py), or (my personal preference) use the Query class, which avoids the need for you to construct text strings for the GQL parser to parse. The Query class in api.datastore behaves exactly like the one [documented here](http://code.google.com/appengine/docs/datastore/queryclass.html) (but returns the lower level Entity objects instead of Model instances). As an example, your query above can be reformulated as "datastore.Query("Foo").all()".
I can't do too much better than that, but here are a couple of ideas:

```
class Foo:
    id = db.StringProperty()
    # etc.
    json_attrs = 'id bar baz'.split()

# Depending on how easy it is to identify string properties, there
# might also be a way to assign json_attrs programmatically after the
# definition of Foo, like this
Foo.json_attrs = [attr for attr in dir(Foo)
                  if isStringProperty(getattr(Foo, attr))]

fooDict = dict((foo.id, dict((attr, getattr(foo, attr))
                             for attr in Foo.json_attrs))
               for foo in foos)
```
Turning a GqlQuery result set into a python dictionary
[ "", "python", "google-app-engine", "gqlquery", "" ]
I seem to only be able to write to the Apache error log via stderr. Anyone know of a more structured logging architecture that I could use from my python web project, like commons?
There isn't any built in support for mod\_python logging to Apache currently. If you really want to work within the Apache logs you can check out this thread (make sure you get the second version of the posted code, rather than the first): * <http://www.dojoforum.com/node/13239> * <http://www.modpython.org/pipermail/mod_python/2005-October/019295.html> If you're just looking to use a more structured logging system, the Python standard logging module referred to by Blair is very feature complete. Aside from the Python.org docs Blair linked, here's a more in-depth look at the module's features from onLamp: * <http://www.onlamp.com/pub/a/python/2005/06/02/logging.html> And for a quickie example usage: * <http://hackmap.blogspot.com/2007/06/note-to-self-using-python-logging.html>
This must have changed in the past four years. If you come across this question and want to do this, then you can do it through the request object, i.e.:

```
def handler(req):
    req.log_error('Hello apache')
```
How do I write to a log from mod_python under apache?
[ "", "python", "apache", "logging", "" ]
I'm looking for some good references for learning how to model 2d physics in games. I am **not** looking for a library to do it for me - I want to think and learn, not blindly use someone else's work. I've done a good bit of Googling, and while I've found a few tutorials on GameDev, etc., I find their tutorials hard to understand because they are either written poorly, or assume a level of mathematical understanding that I don't yet possess. For specifics - I'm looking for how to model a top-down 2d game, sort of like a tank combat game - and I want to accurately model (among other things) acceleration and speed, heat buildup of 'components,' collisions between models and level boundaries, and missile-type weapons. Websites, recommended books, blogs, code examples - all are welcome if they will aid understanding. I'm considering using C# and F# to build my game, so code examples in either of those languages would be great - but don't let language stop you from posting a good link. =) **Edit**: I don't mean that I don't understand math - it's more the case that I don't know what I need to know in order to understand the systems involved, and don't really know how to find the resources that will teach me in an understandable way.
Here are some resources I assembled a few years ago. Of note is the Verlet Integration. I am also including links to some open source and commercial physics engines I found at that time. There is a stackoverflow article on this subject here: [2d game physics?](https://stackoverflow.com/questions/98628/2d-game-physics) **Physics Methods** * [Verlet Integration](http://en.wikipedia.org/wiki/Verlet_integration) (Wikipedia Article) * [Advanced Character Physics](http://www.gamasutra.com/resource_guide/20030121/jacobson_01.shtml) (Great article! Includes movement, collisions, joints, and other constraints.) **Books** * ["Game Physics Engine Development"](https://rads.stackoverflow.com/amzn/click/com/012369471X), Ian Millington -- I own this book and highly recommend it. The book builds a physics engine in C++ from scratch. The Author starts with basic particle physics and then adds "laws of motion", constraints, rigid-body physics and on and on. He includes well documented source code all the way through. **Physics Engines** * [Tokamak](http://www.tokamakphysics.com) (Open source physics API) * [APE](http://www.cove.org/ape/index.htm) (Actionscript Physics Engine) * [FLADE](http://www.cove.org/flade/) (Flash Dynamics Engine) * [Fisix Engine](http://www.fisixengine.com) (another Flash Actionscript engine) * [Simple Physics Engine](http://spehome.com) (commercial)
[![](https://i.stack.imgur.com/dGkA7.gif)](https://i.stack.imgur.com/dGkA7.gif) (source: [oreilly.com](http://web.archive.org/web/20070225050059/http://www.oreilly.com:80/catalog/covers/0596000065_cat.gif)) [Physics for Game Developers](http://oreilly.com/catalog/9780596000066/) by O'Reilly
Resources for 2d game physics
[ "", "c#", "f#", "2d", "physics", "" ]
If I have a class that needs to implement an interface but one or more of the methods on that interface don't make sense in the context of this particular class, what should I do? For example, lets say I'm implementing an adapter pattern where I want to create a wrapper class that implements [java.util.Map](http://java.sun.com/javase/6/docs/api/java/util/Map.html) by wrapping some immutable object and exposing it's data as key/value pairs. In this case the methods put and putAll don't make sense as I have no way to modify the underlying object. So the question is what should those methods do?
Any method that cannot be implemented according to the semantics of the interface should throw an [UnsupportedOperationException](http://java.sun.com/javase/6/docs/api/java/lang/UnsupportedOperationException.html).
That depends on your business case. 2 options: * Do nothing. * Throw an [`UnsupportedOperationException`](http://java.sun.com/javase/6/docs/api/java/lang/UnsupportedOperationException.html). Use whichever makes more sense. If you do nothing, you are not obeying the contract of the interface. However, throwing a runtime exception can wreak havoc on the calling code. Thus, the decision will have to be made based on how you will be using the class. Another option would be to use a simpler or different interface, if possible. Do note that the Java library goes the exception route in the specific case of [read-only](http://java.sun.com/j2se/1.4.2/docs/api/java/util/Collections.html#unmodifiableList(java.util.List)) collections. --- It was noted below that UnsupportedOperationException is a part of the java collections framework. If your situation is outside of collections, and the semantics bother you, you can roll your own `NotImplementedException`, or if you are already using commons-lang, you could use [theirs](http://commons.apache.org/lang/api/org/apache/commons/lang/NotImplementedException.html).
Correct behavior for interface methods that can't be implemented
[ "", "java", "" ]
I'm trying to create an access control system. Here's a stripped down example of what the table I'm trying to control access to looks like: ``` things table: id group_id name 1 1 thing 1 2 1 thing 2 3 1 thing 3 4 1 thing 4 5 2 thing 5 ``` And the access control table looks like this: ``` access table: user_id type object_id access 1 group 1 50 1 thing 1 10 1 thing 2 100 ``` Access can be granted either by specifying the id of the 'thing' directly, or granted for an entire group of things by specifying a group id. In the above example, user 1 has been granted an access level of 50 to group 1, which should apply unless there are any other rules granting more specific access to an individual thing. I need a query that returns a list of things (ids only is okay) along with the access level for a specific user. So using the example above I'd want something like this for user id 1: ``` desired result: thing_id access 1 10 2 100 3 50 (things 3 and 4 have no specific access rule, 4 50 so this '50' is from the group rule) 5 (thing 5 has no rules at all, so although I still want it in the output, there's no access level for it) ``` The closest I can come up with is this: ``` SELECT * FROM things LEFT JOIN access ON user_id = 1 AND ( (access.type = 'group' AND access.object_id = things.group_id) OR (access.type = 'thing' AND access.object_id = things.id) ) ``` But that returns multiple rows, when I only want one for each row in the 'things' table. I'm not sure how to get down to a single row for each 'thing', or how to prioritise 'thing' rules over 'group' rules. If it helps, the database I'm using is PostgreSQL. Please feel free to leave a comment if there's any information I've missed out. Thanks in advance!
I don't know the Postgres SQL dialect, but maybe something like:

```
select
   things.*,
   coalesce
   (
      ( select access from access
        where user_id = 1
        and type = 'thing'
        and object_id = things.id ),
      ( select access from access
        where user_id = 1
        and type = 'group'
        and object_id = things.group_id )
   ) as access
from things
```

Incidentally, I don't like the design. I would prefer the access table to be split into two:

```
thing_access (user_id, thing_id, access)
group_access (user_id, group_id, access)
```

My query then becomes:

```
select
   things.*,
   coalesce
   (
      ( select access from thing_access
        where user_id = 1
        and thing_id = things.id ),
      ( select access from group_access
        where user_id = 1
        and group_id = things.group_id )
   ) as access
from things
```

I prefer this because foreign keys can now be used in the access tables.
I just read a paper last night on this. It has some ideas on how to do this. If you can't use the link on the title try using Google Scholar on [Limiting Disclosure in Hippocratic Databases.](http://portal.acm.org/citation.cfm?id=1316701)
How can I do access control via an SQL table?
[ "", "sql", "postgresql", "access-control", "" ]
.NET 3.5 doesn't support tuples. Too bad, but I'm not sure whether a future version of .NET will support tuples or not.
I've just read this article from the MSDN Magazine: [Building Tuple](http://msdn.microsoft.com/en-us/magazine/dd942829.aspx)

Here are excerpts:

> The upcoming 4.0 release of Microsoft .NET Framework introduces a new type called System.Tuple. System.Tuple is a fixed-size collection of heterogeneously typed data. Like an array, a tuple has a fixed size that can't be changed once it has been created. Unlike an array, each element in a tuple may be a different type, and a tuple is able to guarantee strong typing for each element. There is already one example of a tuple floating around the Microsoft .NET Framework, in the System.Collections.Generic namespace: KeyValuePair. While KeyValuePair can be thought of as the same as Tuple, since they are both types that hold two things, KeyValuePair feels different from Tuple because it evokes a relationship between the two values it stores (and with good reason, as it supports the Dictionary class).
>
> Furthermore, tuples can be arbitrarily sized, whereas KeyValuePair holds only two things: a key and a value.

---

While some languages like F# have special syntax for tuples, you can use the new common tuple type from any language. Revisiting the first example, we can see that while useful, tuples can be overly verbose in languages without syntax for a tuple:

```
class Program
{
    static void Main(string[] args)
    {
        Tuple<string, int> t = new Tuple<string, int>("Hello", 4);
        PrintStringAndInt(t.Item1, t.Item2);
    }
    static void PrintStringAndInt(string s, int i)
    {
        Console.WriteLine("{0} {1}", s, i);
    }
}
```

Using the var keyword from C# 3.0, we can remove the type signature on the tuple variable, which allows for somewhat more readable code.

```
var t = new Tuple<string, int>("Hello", 4);
```

We've also added some factory methods to a static Tuple class which makes it easier to build tuples in a language that supports type inference, like C#.

```
var t = Tuple.Create("Hello", 4);
```
``` #region tuples public class Tuple<T> { public Tuple(T first) { First = first; } public T First { get; set; } } public class Tuple<T, T2> : Tuple<T> { public Tuple(T first, T2 second) : base(first) { Second = second; } public T2 Second { get; set; } } public class Tuple<T, T2, T3> : Tuple<T, T2> { public Tuple(T first, T2 second, T3 third) : base(first, second) { Third = third; } public T3 Third { get; set; } } public class Tuple<T, T2, T3, T4> : Tuple<T, T2, T3> { public Tuple(T first, T2 second, T3 third, T4 fourth) : base(first, second, third) { Fourth = fourth; } public T4 Fourth { get; set; } } #endregion ``` And to make declarations prettier: ``` public static class Tuple { //Allows Tuple.New(1, "2") instead of new Tuple<int, string>(1, "2") public static Tuple<T1, T2> New<T1, T2>(T1 t1, T2 t2) { return new Tuple<T1, T2>(t1, t2); } //etc... } ```
Will a future version of .NET support tuples in C#?
[ "c#", ".net", "tuples", "language-features" ]
## Problem Language: C# 2.0 or later --- I would like to register context handlers to create menus when the user right-clicks certain files (in my case \*.eic). What is the procedure to register, unregister (clean up) and handle events (clicks) from these menus? I have a clue it's something to do with the Windows registry, but considering how much stuff there is in .NET, I wouldn't be surprised if there are handy methods to do this cleanly and easily. Code snippets, website references, comments are all good. Please toss them at me. ## Update --- Obviously there is a slight problem creating context menus in managed languages, as several users have commented. Is there any other preferred way of achieving the same behaviour, or should I spend time looking into these workarounds? I don't mind doing that at all, I'm glad people have put effort into making this possible - but I still want to know if there is a "proper/clean" way of achieving this.
Resist writing Shell Extensions in managed languages - there are a multitude of things that could go bang if you pursue this route. Have a browse through this thread for more details. It contains links to do it if you really want, and sage advice on why it can be done, but shouldn't. <http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/1428326d-7950-42b4-ad94-8e962124043e/> You're back to unmanaged C/C++ as your only real tools here.
This is not a good idea because of potential dependency issues between different versions of the .NET Framework. Your shell extension could be expecting one version, while a different version may have already been loaded by the application that's currently running. [This thread](http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/1428326d-7950-42b4-ad94-8e962124043e/) contains a good summary of the situation.
How can I create my custom Shell Context Handlers for Windows?
[ "c#", ".net", "windows-shell" ]
I have a submission table that is very simple: userId, submissionGuid I want to select the username (simple inner join to get it) of all the users who have more than 10 submissions in the table. I would do this with embedded queries and a group by to count submissions... but is there a better way of doing it (without embedded queries)? Thanks!
This is the simplest way, I believe: ``` select userId from submission group by userId having count(submissionGuid) > 10 ```
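The query above can be exercised end-to-end against an in-memory SQLite database (a sketch; the table and column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE submission (userId INTEGER, submissionGuid TEXT)")

# user 1 makes 11 submissions, user 2 only 3
rows = [(1, f"guid-a{n}") for n in range(11)] + [(2, f"guid-b{n}") for n in range(3)]
conn.executemany("INSERT INTO submission VALUES (?, ?)", rows)

# GROUP BY collapses the rows per user; HAVING filters on the aggregate,
# which a plain WHERE clause cannot do.
heavy_users = conn.execute(
    "SELECT userId FROM submission GROUP BY userId HAVING COUNT(submissionGuid) > 10"
).fetchall()
print(heavy_users)  # [(1,)]
```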
``` select userId, count(*) from submissions having count(*) > 10 group by userId ```
How to select all users who made more than 10 submissions
[ "sql" ]
I want to be able to compare an image taken from a webcam to an image stored on my computer. The library doesn't need to be one hundred percent accurate as it won't be used in anything mission-critical (e.g. a police investigation); I just want something OK I can work with. I have tried a demonstration project for [Image Recognition from CodeProject](http://www.codeproject.com/KB/cs/BackPropagationNeuralNet.aspx), and it only works with small images / doesn't work at all when I compare the exact same image at 120x90 pixels (this is not classified as OK :P ). Has there been any success with image recognition before? If so, would you be able to provide a link to a library I could use in either C# or VB.NET?
You could try this: <http://code.google.com/p/aforge/> It includes a comparison analysis that will give you a score. There are many other great imaging features of all types included as well. ``` // The class also can be used to get similarity level between two image of the same size, which can be useful to get information about how different/similar are images: // Create template matching algorithm's instance // Use zero similarity to make sure algorithm will provide anything ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0); // Compare two images TemplateMatch[] matchings = tm.ProcessImage( image1, image2 ); // Check similarity level if (matchings[0].Similarity > 0.95) { // Do something with quite similar images } ```
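The `Similarity` value in the snippet above is, at heart, a normalized pixel comparison. A minimal pure-Python sketch of that idea (not AForge's actual algorithm) for two equal-size 8-bit grayscale buffers:

```python
def similarity(img_a, img_b):
    """Mean per-pixel agreement of two equal-size 8-bit grayscale images,
    as a score in [0.0, 1.0] where 1.0 means identical."""
    if len(img_a) != len(img_b):
        raise ValueError("images must be the same size")
    total_diff = sum(abs(a - b) for a, b in zip(img_a, img_b))
    return 1.0 - total_diff / (255.0 * len(img_a))

img = [10, 200, 30, 40]
print(similarity(img, img))                       # 1.0
print(similarity(img, [10, 200, 30, 41]) > 0.95)  # True
```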
You can exactly use [EmguCV](http://www.emgu.com/wiki/index.php/Main_Page) for .NET.
Are there any OK image recognition libraries for .NET?
[ "c#", ".net", "vb.net", "image", "image-recognition" ]
Someone asked me how familiar I am with VC++ and how familiar I am with C++. What is the difference?
C++ is the actual language, VC++ is Microsoft's Visual C++, an IDE for C++ development. From [stason.org](http://stason.org/TULARC/webmaster/lang/c-cpp-faq/20-What-is-the-difference-between-C-and-Visual-C.html): > C++ is the programming language, Visual C++ is Microsoft's implementation of it. > > When people talk about learning Visual C++, it usually has more to do with learning how to use the programming environment, and how to use the Microsoft Foundation Classes (MFCs) for Windows rather than any language issues. Visual C++ can and will compile straight C and C++.
C++ is the name of a programming language that has been approved by international standards committees (ANSI/ISO). VC++ (“Visual C++”) is the product name of Microsoft's C++ compiler and sometimes, but not officially, the IDE it's shipped with (“Visual Studio”).
What is the difference between "VC++" and "C++"?
[ "c++", "visual-c++" ]
What's the best/easiest-to-integrate templating system for PHP, and what are the benefits of using one? I currently don't use one at all, and am thinking that it might help to separate content from presentation a little more.
# [Smarty](http://www.smarty.net) --- I've found it to be fast, easy to use, and easy to install (even in a shared-hosting environment). It also doesn't require that you use validating XHTML which is handy sometimes (although I think the template engines that **do** require valid XHTML are probably faster.) It's really nice to have your content in one place and code somewhere else. Plus you're not limited to just HTML. I've used Smarty to generate **XML** and even **SQL**.
PHP is a pretty good templating language by itself. Most leading PHP frameworks don't use a separate templating language for just this reason. Just make sure you use (something resembling) MVC, and don't do any data access or business logic inside your view files. I've also used Smarty extensively, but find it has little advantage compared to straight PHP, except for forcing you to keep your view dumb. It might also look a little better to designers, but the downside is less flexibility for you, the template implementor. Things like looping over triply-nested arrays become harder than with straight PHP.
What's a good templating system for PHP?
[ "php", "templates" ]
What is the purpose of the `Using` block in C#? How is it different from a local variable?
If the type implements IDisposable, it automatically disposes that type. Given: ``` public class SomeDisposableType : IDisposable { ...implmentation details... } ``` These are equivalent: ``` SomeDisposableType t = new SomeDisposableType(); try { OperateOnType(t); } finally { if (t != null) { ((IDisposable)t).Dispose(); } } ``` ``` using (SomeDisposableType u = new SomeDisposableType()) { OperateOnType(u); } ``` The second is easier to read and maintain. --- Since C# 8 there is a [new syntax for `using`](https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#using-declarations) that may make for more readable code: ``` using var x = new SomeDisposableType(); ``` It doesn't have a `{ }` block of its own and the scope of the using is from the point of declaration to the end of the block it is declared in. It means you can avoid stuff like: ``` string x = null; using(var someReader = ...) { x = someReader.Read(); } ``` And have this: ``` using var someReader = ...; string x = someReader.Read(); ```
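For comparison, Python offers the same guarantee through context managers: `__exit__` plays the role of `Dispose()` and runs even when the block throws. A minimal sketch:

```python
class SomeDisposable:
    """Analogous to a C# IDisposable used in a using block."""

    def __init__(self):
        self.disposed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.disposed = True   # cleanup runs even if the block raised
        return False           # don't swallow the exception

d = SomeDisposable()
try:
    with d:
        raise RuntimeError("boom")
except RuntimeError:
    pass
print(d.disposed)  # True -- cleanup happened despite the exception
```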
`Using` calls `Dispose()` after the `using`-block is left, even if the code throws an exception. So you usually use `using` for classes that require cleaning up after them, like IO. So, this using block: ``` using (MyClass mine = new MyClass()) { mine.Action(); } ``` would do the same as: ``` MyClass mine = new MyClass(); try { mine.Action(); } finally { if (mine != null) mine.Dispose(); } ``` Using `using` is way shorter and easier to read.
What is the C# Using block and why should I use it?
[ "c#", ".net", "syntax", "using", "using-statement" ]
I've been trying to track this one down for literally a month now without any success. I have this piece of code on a car advertising website which basically allows thumbnails to rotate in search results, given that a car has multiple pictures. You can see it in action at the following: > [`http://www.abcavendre.com/4506691919/`](http://www.abcavendre.com/4506691919/ "Inventaire") It is built on the [mootools 1.2](http://www.mootools.net/ "Mootools") framework. The problem is that this script, under Firefox 3, consumes a rather large amount of memory over time when a page is full of those rotating pictures, such as this inventory page: > [`http://www.abcavendre.com/Vitrine/Israel_Huttman/`](http://www.abcavendre.com/Vitrine/Israel_Huttman/ "Inventaire") You can see the source of the script in question here: > [`http://www.abcavendre.com/scripts/showcase_small.js`](http://www.abcavendre.com/scripts/showcase_small.js "Javascript Source") Any ideas as to what is causing the memory leak? The weird thing is this code behaves properly under IE7.
A way to track memory leaks in Firefox is with the [Leak Monitor Addon](https://addons.mozilla.org/de/firefox/addon/2490). It shows memory leaks in JavaScript (including extension scripts). Bear in mind that the plugin will sometimes show leaked objects that will get cleaned up later by garbage collection. If that is the case, the plugin will launch a new window showing you the new status.
Try nulling the `elements` array at the end of the initialize function ``` ... if (ads.length > 0) { this.imagesFx = new Fx.Elements(elements, { wait: false, duration: 1000 }); this.moveNext.periodical(2500, this); } elements = null; //Add THIS! } ```
How do I track and debug JavaScript memory leaks in Firefox?
[ "javascript", "memory-leaks", "mootools" ]
When I add a breakpoint and hit F5 to run in the debugger (I am using my debug build), a dialog pops up telling me my web.config file does not have debug=true in it (which it does), and I get 2 choices: a) run without the debugger, or b) let Visual Studio update my web.config file. If I choose b) the web.config is updated badly and nothing will work. If I choose a) then the site appears and I can test it out, but no debugging. I am an experienced developer, but I have never used Visual Studio and ASP.NET for web development before, so I am feeling rather frustrated by all the walls it is putting up to prevent me working. So far I have not been able to use the debugger. Is there something totally obvious that I am missing? What would you check if it was happening to you?
Open web.config manually and make sure the following line is in there ``` <compilation defaultLanguage="c#" debug="true" /> ``` Now you should be able to debug from VS. If this does not work, I suggest that you recreate the project. EDIT: perhaps, from what you say, it could be that web.config is screwed up, e.g. contains invalid XML, a missing closing tag for some element, etc.
In your project do a solution wide search for 'debug=' and if only one shows up then do a folder search. I've had it happen where there are multiple config files and this can cause problems.
Why can't I debug my asp.net web app
[ "c#", "asp.net", "iis", "visual-c++" ]
I'm currently turning an array of pixel values (originally created with a java.awt.image.PixelGrabber object) into an Image object using the following code: ``` public Image getImageFromArray(int[] pixels, int width, int height) { MemoryImageSource mis = new MemoryImageSource(width, height, pixels, 0, width); Toolkit tk = Toolkit.getDefaultToolkit(); return tk.createImage(mis); } ``` *Is it possible to achieve the same result using classes from the ImageIO package(s) so I don't have to use the AWT Toolkit?* Toolkit.getDefaultToolkit() does not seem to be 100% reliable and will sometimes throw an AWTError, whereas the ImageIO classes should always be available, which is why I'm interested in changing my method.
You can create the image without using ImageIO. Just create a BufferedImage using an image type matching the contents of the pixel array. ``` public static Image getImageFromArray(int[] pixels, int width, int height) { BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); // getRaster() returns the live raster; getData() returns a copy, // so writes to it would never reach the image WritableRaster raster = image.getRaster(); raster.setPixels(0, 0, width, height, pixels); return image; } ``` When working with the PixelGrabber, don't forget to extract the RGBA info from the pixel array before calling `getImageFromArray`. There's an example of this in the [handlepixel method](http://java.sun.com/javase/6/docs/api/java/awt/image/PixelGrabber.html) in the PixelGrabber javadoc. Once you do that, make sure to set the image type in the BufferedImage constructor to `BufferedImage.TYPE_INT_ARGB`.
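The RGBA extraction the answer refers to (PixelGrabber's `handlepixel` example) is plain bit-shifting on packed 32-bit ARGB ints. The channel unpacking can be sketched in Python:

```python
def unpack_argb(pixel):
    """Split one packed 32-bit ARGB int (PixelGrabber's default layout) into channels."""
    alpha = (pixel >> 24) & 0xFF
    red = (pixel >> 16) & 0xFF
    green = (pixel >> 8) & 0xFF
    blue = pixel & 0xFF
    return alpha, red, green, blue

print(unpack_argb(0xFF102030))  # (255, 16, 32, 48)
```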
Using the raster I got an `ArrayIndexOutOfBoundsException` even when I created the `BufferedImage` with `TYPE_INT_ARGB`. However, using the `setRGB(...)` method of `BufferedImage` worked for me.
Turn an array of pixels into an Image object with Java's ImageIO?
[ "java", "image", "awt", "toolkit", "javax.imageio" ]
Since Python is a dynamic, interpreted language, you don't have to compile your code before running it. Hence, it's very easy to simply write your code, run it, see what problems occur, and fix them. Using hotkeys or macros can make this incredibly quick. So, because it's so easy to immediately see the output of your program and any errors that may occur, I haven't used a debugger tool yet. What situations may call for using a real debugger vs. the method I currently use? I'd like to know before I get into a situation and get frustrated because I don't know how to fix the problem.
I use pdb for basic Python debugging. Some of the situations I use it in are: * When you have a loop iterating over 100,000 entries and want to break at a specific point, it becomes really helpful (conditional breaks). * Tracing the control flow of someone else's code. * It's always better to use a debugger than to litter the code with prints. * Normally there can be more than one point of failure resulting in a bug, and not all of them are obvious at first look. So you look in the obvious places, and if nothing is wrong there, you move ahead and add some more prints... a debugger can save you time here, since you don't need to add the print and run again.
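The first bullet, breaking at a specific point deep inside a large loop, can be sketched like this (names are illustrative; flip `DEBUG` to `True` to actually drop into the debugger):

```python
import pdb

DEBUG = False  # set to True to stop at the interesting iteration

def process(entries):
    total = 0
    for i, value in enumerate(entries):
        if DEBUG and i == 99_999:
            pdb.set_trace()  # conditional break: only pauses at the entry you care about
        total += value
    return total

print(process(range(10)))  # 45
```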
In 30 years of programming I've used a debugger exactly 4 times. All four times were to read the `core` file produced from a C program crashing to locate the traceback information that's buried in there. I don't think debuggers help much, even in compiled languages. Many people like debuggers, there are some reasons for using them, I'm sure, or people wouldn't lavish such love and care on them. Here's the point -- **software is knowledge capture**. Yes, it does have to run. More importantly, however, software has **meaning**. This is not an indictment of *your* use of a debugger. However, I find that the folks who rely on debugging will sometimes produce really odd-looking code and won't have a good justification for what it **means**. They can only say "it may be a hack, but it works." My suggestion on debuggers is "don't bother". "But, what if I'm totally stumped?" you ask, "should I learn the debugger then?" Totally stumped by what? The language? Python's too simple for utter befuddlement. Some library? Perhaps. Here's what you do -- with or without a debugger. 1. You have the source, read it. 2. You write small tests to exercise the library. Using the interactive shell, if possible. [All the really good libraries seem to show their features using the interactive Python mode -- I strive for this level of tight, clear simplicity.] 3. You have the source, add print functions.
When to use the Python debugger
[ "python", "debugging" ]
I need to get the color at a particular coordinate from a texture. There are 2 ways I can do this: by getting and looking at the raw PNG data, or by sampling my generated OpenGL texture. Is it possible to sample an OpenGL texture to get the color (RGBA) at a given UV or XY coord? If so, how?
Off the top of my head, your options are 1. Fetch the entire texture using glGetTexImage() and check the texel you're interested in. 2. Draw the texel you're interested in (eg. by rendering a GL\_POINTS primitive), then grab the pixel where you rendered it from the framebuffer by using glReadPixels. 3. Keep a copy of the texture image handy and leave OpenGL out of it. Options 1 and 2 are horribly inefficient (although you could speed 2 up somewhat by using pixel-buffer-objects and doing the copy asynchronously). So my favourite by FAR is option 3. **Edit:** If you have the `GL_APPLE_client_storage` extension (ie. you're on a Mac or iPhone) then that's option 4 which is the winner by a long way.
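Option 3, keeping a CPU-side copy, reduces sampling to an array lookup once the UV coordinate is mapped to a texel index. A minimal Python sketch (nearest-texel, no filtering; assumes a row-major list of RGBA tuples):

```python
def sample_nearest(texels, width, height, u, v):
    """Nearest-texel lookup into a CPU-side texture copy; u and v are in [0, 1]."""
    x = min(int(u * width), width - 1)    # clamp so u == 1.0 stays in range
    y = min(int(v * height), height - 1)
    return texels[y * width + x]

# 2x2 RGBA texture kept alongside the GL texture
tex = [(255, 0, 0, 255), (0, 255, 0, 255),
       (0, 0, 255, 255), (255, 255, 255, 255)]
print(sample_nearest(tex, 2, 2, 0.9, 0.1))  # (0, 255, 0, 255)
```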
The most efficient way I've found to do it is to access the texture data (you should have your PNG decoded to make into a texture anyway) and interpolate between the texels yourself. Assuming your texcoords are [0,1], multiply texwidth*u and texheight*v and then use that to find the position on the texture. If they're whole numbers, just use the pixel directly, otherwise use the int parts to find the bordering pixels and interpolate between them based on the fractional parts. Here's some HLSL-like pseudocode for it. Should be fairly clear: ``` float3 sample(float2 coord, texture tex) { float x = tex.w * coord.x; // Get X coord in texture int ix = (int) x; // Get X coord as whole number float y = tex.h * coord.y; int iy = (int) y; float3 x1 = getTexel(ix, iy); // Get top-left pixel float3 x2 = getTexel(ix+1, iy); // Get top-right pixel float3 y1 = getTexel(ix, iy+1); // Get bottom-left pixel float3 y2 = getTexel(ix+1, iy+1); // Get bottom-right pixel float3 top = interpolate(x1, x2, frac(x)); // Interpolate between top two pixels based on the fractional part of the X coord float3 bottom = interpolate(y1, y2, frac(x)); // Interpolate between bottom two pixels return interpolate(top, bottom, frac(y)); // Interpolate between top and bottom based on fractional Y coord } ```
Texture Sampling in Open GL
[ "c++", "opengl" ]
Has anyone had luck with removing large amount of issues from a jira database instead of using the frontend? Deleting 60000 issues with the bulktools is not really feasible. Last time I tried it, the jira went nuts because of its own way of doing indexes.
We got gutsy and did a truncate on the jiraissues table, and then used the rebuild index feature on the frontend. It looks like it's working!
How about doing a backup to xml, editing the xml, and reimporting?
Using SQL for cleaning up JIRA database
[ "sql", "jira" ]
What is the best way to secure an intranet website developed using `PHP` from outside attacks?
That's a stunningly thought-provoking question, and I'm surprised that you haven't received better answers. ## Summary Everything you would do for an external-facing application, and then some. ## Thought Process If I'm understanding you correctly, then you are asking a question which *very* few developers are asking themselves. Most companies have poor defence in depth, and once an attacker is in, he's in. Clearly you want to take it up a level. So, what kind of attack are we thinking about? If I'm the attacker and I'm attacking your intranet application, then I must have got access to your network somehow. This may not be as difficult as it sounds - I might try spearphishing (targeting email to individuals in your organisation, containing either malware attachments or links to sites which install malware) to get a trojan installed on an internal machine. Once I've done this (and got control of an internal PC), I'll try all the same attacks I would try against any internet application. However, that's not the end of the story. I've got more options: if I've got one of your user's PCs, then I might well be able to use a keylogger to gather usernames and passwords, as well as watching all your email for names and phone numbers. Armed with these, I may be able to log into your application directly. I may even learn an admin username/password. Even if I don't, a list of names and phone numbers along with a feel for company lingo gives me a decent shot at socially engineering my way into wider access within your company. ## Recommendations * First and foremost, before all technical solutions: **TRAIN YOUR USERS IN SECURITY** The common answers to securing a web app: * Use multi-factor authentication + e.g. username/password and some kind of pseudo-random number gadget. * Sanitise all your input. + to protect against cross-site scripting and SQL injection. * Use SSL (otherwise known as HTTPS).
+ this is a pain to set up (EDIT: actually that's improving), but it makes for much better security. * Adhere to the principles of "Segregation of Duties" and "Least Privilege" + In other words, by ensuring that all users have only the permissions they need to do their jobs (and nobody else's jobs) you make sure they have the absolute minimum ability to do damage.
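The "sanitise all your input" point is mostly about never splicing user input into SQL strings. The standard defence is parameterized queries, sketched here in Python with SQLite (PHP's prepared statements work the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# The placeholder makes the driver treat the input as data, never as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the attack string matches no real user
```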
If it is on an internal network, why is it even possible to get to the app from the outside? Firewall rules should be in place at the very least.
PHP - Security what is best way?
[ "php", "security" ]
What are the pros and cons of using [Criteria](http://docs.jboss.org/hibernate/orm/4.1/manual/en-US/html/ch17.html) or [HQL](http://docs.jboss.org/hibernate/orm/4.1/manual/en-US/html/ch16.html)? The Criteria API is a nice object-oriented way to express queries in Hibernate, but sometimes Criteria Queries are more difficult to understand/build than HQL. When do you use Criteria and when HQL? What do you prefer in which use cases? Or is it just a matter of taste?
I mostly prefer Criteria Queries for dynamic queries. For example it is much easier to add some ordering dynamically or leave some parts (e.g. restrictions) out depending on some parameter. On the other hand I'm using HQL for static and complex queries, because it's much easier to understand/read HQL. Also, HQL is a bit more powerful, I think, e.g. for different join types.
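The advantage for dynamic queries is that restrictions and ordering can be attached conditionally instead of concatenating ad-hoc strings. A sketch of that idea in Python (illustrative only, not the Hibernate Criteria API; table and column names are invented):

```python
def build_query(filters, order_by=None):
    """Assemble a SELECT with only the restrictions that were actually supplied."""
    sql = "SELECT * FROM orders"
    params = []
    if filters:
        clauses = []
        for column, value in filters.items():
            clauses.append(f"{column} = ?")
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    if order_by:
        sql += f" ORDER BY {order_by}"
    return sql, params

print(build_query({"status": "open"}, order_by="created"))
# ('SELECT * FROM orders WHERE status = ? ORDER BY created', ['open'])
```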
There is a difference in terms of performance between HQL and Criteria queries: every time you fire a query using a Criteria query, it creates a new alias for the table name, which is not reflected in the queried-statement cache of any DB. This leads to the overhead of compiling the generated SQL, taking more time to execute. Regarding fetching strategies ([http://www.hibernate.org/315.html](http://www.hibernate.org/315.html)) > * Criteria respects the laziness settings in your mappings and guarantees that what you want loaded is loaded. This means one Criteria query might result in several SQL immediate SELECT statements to fetch the subgraph with all non-lazy mapped associations and collections. If you want to change the "how" and even the "what", use setFetchMode() to enable or disable outer join fetching for a particular collection or association. Criteria queries also completely respect the fetching strategy (join vs select vs subselect). > * HQL respects the laziness settings in your mappings and guarantees that what you want loaded is loaded. This means one HQL query might result in several SQL immediate SELECT statements to fetch the subgraph with all non-lazy mapped associations and collections. If you want to change the "how" and even the "what", use LEFT JOIN FETCH to enable outer-join fetching for a particular collection or nullable many-to-one or one-to-one association, or JOIN FETCH to enable inner join fetching for a non-nullable many-to-one or one-to-one association. HQL queries do not respect any fetch="join" defined in the mapping document.
JPA and Hibernate - Criteria vs. JPQL or HQL
[ "java", "hibernate", "hql", "criteria", "hibernate-criteria" ]
I need to enumerate all classes in a package and add them to a List. The non-dynamic version for a single class goes like this: ``` List allClasses = new ArrayList(); allClasses.add(String.class); ``` How can I do this dynamically to add all classes in a package and all its subpackages? --- ***Update:*** Having read the early answers, it's absolutely true that I'm trying to solve another secondary problem, so let me state it. And I know this is possible since other tools do it. See new question [here](https://stackoverflow.com/questions/176913/how-can-i-run-all-unit-tests-except-those-ending-in-integrationtest-in-my-intel). ***Update:*** Reading this again, I can see how it's being misread. I'm looking to enumerate all of MY PROJECT'S classes from the file system after compilation.
**UPDATE 1 (2012)** OK, I've finally gotten around to cleaning up the code snippet below. I stuck it into its own GitHub project and even added tests. <https://github.com/ddopson/java-class-enumerator> **UPDATE 2 (2016)** For an even more robust and feature-rich classpath scanner, see <https://github.com/classgraph/classgraph>. I'd recommend first reading my code snippet to gain a high-level understanding, then using lukehutch's tool for production purposes. **Original Post (2010)** Strictly speaking, it isn't possible to list the classes in a *package*. This is because a package is really nothing more than a namespace (e.g. com.epicapplications.foo.bar), and any jar file in the classpath could potentially add classes into a package. Even worse, the classloader will load classes on demand, and part of the classpath might be on the other side of a network connection. It is possible to solve a more restrictive problem, e.g. all classes in a JAR file, or all classes that a JAR file defines within a particular package. This is the more common scenario anyway. Unfortunately, there isn't any framework code to make this task easy. You have to scan the filesystem in a manner similar to how the ClassLoader would look for class definitions. There are a lot of samples on the web for class files in plain old directories. Most of us these days work with JAR files. To get things working with JAR files, try this...
``` private static ArrayList<Class<?>> getClassesForPackage(Package pkg) { String pkgname = pkg.getName(); ArrayList<Class<?>> classes = new ArrayList<Class<?>>(); // Get a File object for the package File directory = null; String fullPath; String relPath = pkgname.replace('.', '/'); System.out.println("ClassDiscovery: Package: " + pkgname + " becomes Path:" + relPath); URL resource = ClassLoader.getSystemClassLoader().getResource(relPath); System.out.println("ClassDiscovery: Resource = " + resource); if (resource == null) { throw new RuntimeException("No resource for " + relPath); } fullPath = resource.getFile(); System.out.println("ClassDiscovery: FullPath = " + resource); try { directory = new File(resource.toURI()); } catch (URISyntaxException e) { throw new RuntimeException(pkgname + " (" + resource + ") does not appear to be a valid URL / URI. Strange, since we got it from the system...", e); } catch (IllegalArgumentException e) { directory = null; } System.out.println("ClassDiscovery: Directory = " + directory); if (directory != null && directory.exists()) { // Get the list of the files contained in the package String[] files = directory.list(); for (int i = 0; i < files.length; i++) { // we are only interested in .class files if (files[i].endsWith(".class")) { // removes the .class extension String className = pkgname + '.' 
+ files[i].substring(0, files[i].length() - 6); System.out.println("ClassDiscovery: className = " + className); try { classes.add(Class.forName(className)); } catch (ClassNotFoundException e) { throw new RuntimeException("ClassNotFoundException loading " + className); } } } } else { try { String jarPath = fullPath.replaceFirst("[.]jar[!].*", ".jar").replaceFirst("file:", ""); JarFile jarFile = new JarFile(jarPath); Enumeration<JarEntry> entries = jarFile.entries(); while(entries.hasMoreElements()) { JarEntry entry = entries.nextElement(); String entryName = entry.getName(); if(entryName.startsWith(relPath) && entryName.length() > (relPath.length() + "/".length())) { System.out.println("ClassDiscovery: JarEntry: " + entryName); String className = entryName.replace('/', '.').replace('\\', '.').replace(".class", ""); System.out.println("ClassDiscovery: className = " + className); try { classes.add(Class.forName(className)); } catch (ClassNotFoundException e) { throw new RuntimeException("ClassNotFoundException loading " + className); } } } } catch (IOException e) { throw new RuntimeException(pkgname + " (" + directory + ") does not appear to be a valid package", e); } } return classes; } ```
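For comparison, Python has the same "a package is just a namespace" problem, and its standard library handles the directory-scanning part with `pkgutil`. A short sketch that lists the modules of a stdlib package:

```python
import pkgutil
import email  # a stdlib package, used here purely as a scan target

# iter_modules walks the package's __path__ the way a loader would,
# yielding one ModuleInfo per discoverable submodule.
names = sorted(info.name for info in pkgutil.iter_modules(email.__path__))
print("message" in names, "parser" in names)  # True True
```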
The most robust mechanism for listing all classes in a given package is currently [ClassGraph](https://github.com/classgraph/classgraph/wiki/Code-examples), because it handles the [widest possible array of classpath specification mechanisms](https://github.com/classgraph/classgraph/wiki/Classpath-Specification-Mechanisms), including the new JPMS module system. (I am the author.) ``` List<String> classNames; try (ScanResult scanResult = new ClassGraph().whitelistPackages("my.package") .enableClassInfo().scan()) { classNames = scanResult.getAllClasses().getNames(); } ```
How can I enumerate all classes in a package and add them to a List?
[ "java" ]
I am using the jQuery library to implement drag and drop. How do I get at the element that is being dragged when it is dropped? I want to get the id of the image inside the div. The following element is dragged: ``` <div class="block"> <asp:Image ID="Image9" AlternateText="10/12/2008 - Retina" Width=81 Height=84 ImageUrl="~/uploads/ImageModifier/retina.jpg" runat=server /> </div> ``` I have the standard dropped function from their example: ``` $(".drop").droppable({ accept: ".block", activeClass: 'droppable-active', hoverClass: 'droppable-hover', drop: function(ev, ui) { } }); ``` I have tried various `ui.id` etc. which doesn't seem to work.
Is it not the ui.draggable? If you go here (in Firefox, and assuming you have Firebug) and look in the Firebug console you'll see I am doing a console.dir of the ui.draggable object, which is the div being dragged <http://jsbin.com/ixizi> Therefore the code you need in the drop function is ``` drop: function(ev, ui) { //to get the id //ui.draggable.attr('id') or ui.draggable.get(0).id or ui.draggable[0].id console.dir(ui.draggable) } ```
``` $(ui.draggable).attr("id") ``` ...
jQuery drag and drop - how to get at element being dragged
[ "javascript", "jquery", "drag-and-drop" ]
I got this bad feeling about how I insert larger amounts of HTML. Let's assume we've got: `var html="<table>..<a-lot-of-other-tags />..</table>"` and I want to put this into `$("#mydiv")` previously I did something like `var html_obj = $(html);` `$("#mydiv").append(html_obj);` Is it correct that jQuery is parsing `html` to create DOM objects? Well, this is what I read somewhere **(UPDATE:** I meant that I have read that jQuery parses the html to create the whole DOM tree by hand - it's nonsense, right?!**)**, so I changed my code: `$("#mydiv").attr("innerHTML", $("#mydiv").attr("innerHTML") + html);` Feels faster, is it? And is it correct that this is equivalent to: `document.getElementById("mydiv").innerHTML += html` ? Or is jQuery doing some additional expensive stuff in the background? Would love to learn alternatives as well.
innerHTML is remarkably fast, and in many cases you will get the best results just setting that (I would just use append). **However, if there is much already in "mydiv" then you are forcing the browser to parse and render all of that content again (everything that was there before, plus all of your new content).** You can avoid this by appending a document fragment onto "mydiv" instead: ``` var frag = document.createDocumentFragment(); frag.innerHTML = html; $("#mydiv").append(frag); ``` In this way, only your new content gets parsed (unavoidable) and the existing content does not. EDIT: My bad... I've discovered that innerHTML isn't well supported on document fragments. You can use the same technique with any node type. For your example, you could create the root table node and insert the innerHTML into that: ``` var frag = document.createElement('table'); frag.innerHTML = tableInnerHtml; $("#mydiv").append(frag); ```
Try the following: ``` $("#mydiv").append(html); ``` The other answers, including the accepted answer, are **slower** by **2-10x**: [jsperf](http://jsperf.com/jquery-append-html/1). The accepted answer does not work in **IE 6, 7, and 8** because you can't set `innerHTML` of a `<table>` element, due to a bug in IE: [jsbin](http://jsbin.com/akoje4/5/).
jquery: fastest DOM insertion?
[ "", "javascript", "jquery", "dom", "" ]
I've seen this is various codebases, and wanted to know if this generally frowned upon or not. For example: ``` public class MyClass { public int Id; public MyClass() { Id = new Database().GetIdFor(typeof(MyClass)); } } ```
There are several reasons this is not generally considered good design, some of which, such as making unit testing difficult and complicating error handling, have already been mentioned. The main reason I would choose not to do so is that your object and the data access layer are now very tightly coupled, which means that any use of that object outside of its original design requires significant rework. As an example, what if you came across a case where you needed to use that object without any values assigned, for instance to persist a new instance of that class? You now either have to overload the constructor and then make sure all of your other logic handles this new case, or inherit and override. If the object and the data access were decoupled, then you could create an instance and simply not hydrate it. Or if you have a different project that uses the same entities but a different persistence layer, then the objects are reusable. Having said that, I have taken the easier path of coupling in projects in the past :)
Well.. I wouldn't. But then again my approach usually involves the class NOT being responsible for retrieving its own data.
Is it OK to put a database initialization call in a C# constructor?
[ "", "c#", "database", "constructor", "" ]
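The decoupling argument above can be sketched in code (here in Java rather than C#; the `IdSource` interface and its method are hypothetical names, not from any library): injecting the id source lets the class be constructed without any database, e.g. with a test double.

```java
// Hedged sketch of constructor injection: the entity no longer builds its
// own data-access object, so it can be created without a database at all.
interface IdSource {
    int nextIdFor(Class<?> type);
}

class MyClass {
    final int id;

    MyClass(IdSource source) {           // the dependency is injected
        this.id = source.nextIdFor(MyClass.class);
    }

    MyClass(int id) {                    // usable with no persistence layer at all
        this.id = id;
    }
}

public class Demo {
    public static void main(String[] args) {
        IdSource fake = type -> 42;      // test double standing in for a real Database
        System.out.println(new MyClass(fake).id);
        System.out.println(new MyClass(7).id);
    }
}
```

In a unit test you pass the fake; in production you pass an adapter over the real data access layer.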
I have a checkstyle suppression filter setup (e.g. ignore magic numbers in unit test code). The suppression xml file resides in the same folder as the checkstyle xml file. However, where this file actually is varies: on my windows dev box it is in d:\dev\shared\checkstyle\config on the Linux CI server it will be in /root/repo/shared/checkstyle/config on another developers box it could be anywhere (they check out their svn repo to). The only "consistent" thing is that the suppression file is always in the same folder as the checkstyle xml file. I cannot work out how to ensure that this file is always consistently picked up. Also I don't know why checkstyle does not support embedded suppression within the checkstyle xml file. any help?
I had this same problem with the Checkstyle suppression configuration when I was going back and forth between Linux and Windows. Here's how I solved it in my Ant-based build system: Basically, I inject the proper, platform-specific directory value into the main Checkstyle configuration file by configuring a Checkstyle properties file with an Ant build script. My main Checkstyle configuration file has a `SuppressionFilter` module declaration as shown below. The value of the `checkstyle-suppressions-file` property comes from a Checkstyle properties file: ``` <module name="SuppressionFilter"> <property name="file" value="${checkstyle-suppressions-file}"/> </module> ``` The Checkstyle properties file is not static, it is generated by an Ant build script from a properties file template called `template-checkstyle.properties`. Here's what the template looks like for the suppressions file property: ``` checkstyle-suppressions-file=@SCM_DIR@/checkstyle_suppressions.xml ``` My Ant build script copies this file to a file named `checkstyle.properties`. The copy has the special token replaced with the proper value of the directory in which the suppressions file is found: ``` <copy file="${scm.dir}/template-checkstyle.properties" tofile="${scm.dir}/checkstyle.properties"> <filterset> <filter token="SCM_DIR" value="${scm.dir.unix}"/> </filterset> </copy> ``` Now, where does the value of `scm.dir.unix` come from? Well, it's *derived* from a property of my build, read on. You'll need to specify such a value with the directory values that you mentioned. Note that there is one slightly non-obvious issue concerning the way in which you specify this directory. I say that the `scm.dir.unix` value is derived from a build property because I observed that the main Checkstyle configuration file cannot contain backslashes, i.e. Windows path separator characters, in the value of the `file` property of the `SuppressionFilter` module. 
For example, specifying something like `C:\foo\bar\baz` leads to a Checkstyle error message saying that `C:foobarbaz` cannot be found. I work around this by "converting" the `scm.dir` directory build property to a "unix" format with Ant's `pathconvert` task: ``` <pathconvert targetos="unix" property="scm.dir.unix"> <path location="${scm.dir}"/> </pathconvert> ``` Then I call the `checkstyle` Ant task like this: ``` <checkstyle config="${scm.dir}/checkstyle_checks.xml" properties="${scm.dir}/checkstyle.properties"> <!-- details elided --> </checkstyle> ``` The call to the `checkstyle` task injects the key/value pairs contained in the `checkstyle.properties` file into the main Checkstyle configuration. If you like, you can see the full scripts [here](http://virtualteamtls.svn.sourceforge.net/viewvc/virtualteamtls/trunk/scm/) Hope this helps
In eclipse I put the following which did not require me to add any additional properties: ``` <module name="SuppressionFilter"> <property name="file" value="${samedir}/suppressions.xml"/> </module> ```
checkstyle + suppression filters
[ "", "java", "code-analysis", "static-analysis", "checkstyle", "" ]
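For reference, a minimal suppressions file for the "ignore magic numbers in unit test code" case might look like this (the `files` pattern is an assumption about your test source layout). This is the file that the `SuppressionFilter`'s `file` property must point to; suppressions cannot be embedded in the main configuration, which is why the path indirection discussed above is needed:

```xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Puppy Crawl//DTD Suppressions 1.1//EN"
    "http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
    <!-- ignore magic numbers in unit test code -->
    <suppress checks="MagicNumber" files=".*[/\\]test[/\\].*\.java"/>
</suppressions>
```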
One of the major advantages with Javascript is said to be that it is a prototype based language. But what does it mean that Javascript is prototype based, and why is that an advantage?
**Prototypal inheritance** is a form of object-oriented **code reuse**. Javascript is one of the only [mainstream] object-oriented languages to use prototypal inheritance. Almost all other object-oriented languages are classical. In **classical inheritance**, the programmer writes a class, which defines an object. Multiple objects can be instantiated from the same class, so you have code in one place which describes several objects in your program. Classes can then be organized into a hierarchy, furthering code reuse. More general code is stored in a higher-level class, from which lower level classes inherit. This means that an object is sharing code with other objects of the same class, as well as with its parent classes. In the **prototypal inheritance** form, objects **inherit directly** from other objects. All of the business about classes goes away. If you want an object, you just write an object. But code reuse is still a valuable thing, so objects are allowed to be linked together in a hierarchy. In javascript, every object has a secret link to the object which created it, forming a chain. When an object is asked for a property that it does not have, its parent object will be asked... continually up the chain until the property is found or until the root object is reached. Each function in JavaScript (which are objects themselves) actually has a member called "prototype", which is responsible for providing values when an object is asked for them. Having this member allows the constructor mechanism (by which objects are constructed from functions) to work. Adding a property to the prototype of a function object will make it available to the constructed object, as well as to all of the objects which inherit from it. **Advantages** There may not be a hard and fast rule as to why prototypal inheritance is an advantageous form of code-reuse. Code reuse itself is advantageous, and prototypal inheritance is a sensible way of going about it. 
You might argue that prototypal inheritance is a fairly **simple model** of code reuse, and that code can be heavily reused in **direct ways**. But classical languages are certainly able to accomplish this as well. **Sidenote:** *@Andrew Hedges* makes a good point, that there are actually [many](http://en.wikipedia.org/wiki/Prototype-based_programming#Languages) prototypal languages. It's worth noting that these others exist, but also worth noting that none of them are anything close to mainstream. NewtonScript seemed to have some traction for a while, but died with its platform. It's also possible to extend some modern languages in ways which add prototypal capabilities.
A prototype-based language, does not make the distinction of classes vs objects: it simply has objects. A prototype-based language has the notion of a prototypical object, an object used as a template from which to get the initial properties for a new object. Any object can specify its own properties, either when you create it or at run time. In addition, **any object can be associated as the prototype for another object**, allowing the second object to share the first object's properties.
What does it mean that Javascript is a prototype based language?
[ "", "javascript", "oop", "inheritance", "prototype-programming", "" ]
If I am passing an object to a method, why should I use the ref keyword? Isn't this the default behaviour anyway? For example: ``` class Program { static void Main(string[] args) { TestRef t = new TestRef(); t.Something = "Foo"; DoSomething(t); Console.WriteLine(t.Something); } static public void DoSomething(TestRef t) { t.Something = "Bar"; } } public class TestRef { public string Something { get; set; } } ``` The output is "Bar" which means that the object was passed as a reference.
Pass a `ref` if you want to change what the object is: ``` TestRef t = new TestRef(); t.Something = "Foo"; DoSomething(ref t); void DoSomething(ref TestRef x) { x = new TestRef(); x.Something = "Not just a changed TestRef, but a completely different TestRef object"; } ``` After calling DoSomething, `t` does not refer to the original `new TestRef`, but refers to a completely different object. This may be useful too if you want to change the value of an immutable object, e.g. a `string`. You cannot change the value of a `string` once it has been created. But by using a `ref`, you could create a function that changes the string for another one that has a different value. It is not a good idea to use `ref` unless it is needed. Using `ref` gives the method freedom to change the argument for something else, callers of the method will need to be coded to ensure they handle this possibility. Also, when the parameter type is an object, then object variables always act as references to the object. This means that when the `ref` keyword is used you've got a reference to a reference. This allows you to do things as described in the example given above. But, when the parameter type is a primitive value (e.g. `int`), then if this parameter is assigned to within the method, the value of the argument that was passed in will be changed after the method returns: ``` int v = 1; Change(ref v); Debug.Assert(v == 5); WillNotChange(v); Debug.Assert(v == 5); // Note: v doesn't become 10 void Change(ref int x) { x = 5; } void WillNotChange(int x) { x = 10; } ```
You need to distinguish between "passing a reference by value", and "passing a parameter/argument by reference". I've written a [reasonably long article on the subject](https://web.archive.org/web/20190303121828/http://jonskeet.uk/csharp/parameters.html) to avoid having to write carefully each time this comes up on newsgroups.
Why use the 'ref' keyword when passing an object?
[ "", "c#", ".net", "pass-by-reference", "ref", "" ]
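The distinction is easier to see in a language-neutral analog. This Python sketch mirrors the C# example above: mutating the object through a parameter is visible to the caller, but rebinding the parameter (the thing C# `ref` would expose to the caller) is not:

```python
class TestRef:
    def __init__(self):
        self.something = "Foo"

def do_something(t):
    t.something = "Bar"      # mutates the object both names refer to

def reassign(t):
    t = TestRef()            # rebinds only the local name, like C# without `ref`
    t.something = "Baz"

obj = TestRef()
do_something(obj)
print(obj.something)         # Bar: the shared object was mutated

reassign(obj)
print(obj.something)         # still Bar: the caller's reference was untouched
```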
I am looking for the best way to test if a website is alive from a C# application. ### Background My application consists of a *Winforms UI*, a backend *WCF service* and a *website* to publish content to the UI and other consumers. To prevent the situation where the UI starts up and fails to work properly because of a missing WCF service or website being down I have added an app startup check to ensure that all everything is alive. The application is being written in C#, .NET 3.5, Visual Studio 2008 ### Current Solution Currently I am making a web request to a test page on the website that will inturn test the web site and then display a result. ``` WebRequest request = WebRequest.Create("http://localhost/myContentSite/test.aspx"); WebResponse response = request.GetResponse(); ``` I am assuming that if there are no exceptions thown during this call then all is well and the UI can start. ### Question Is this the simplest, right way or is there some other sneaky call that I don't know about in C# or a better way to do it.
``` HttpWebResponse response = (HttpWebResponse)request.GetResponse(); if (response == null || response.StatusCode != HttpStatusCode.OK) ``` As @Yanga mentioned, HttpClient is probably the more common way to do this now. ``` HttpClient client = new HttpClient(); var checkingResponse = await client.GetAsync(url); if (!checkingResponse.IsSuccessStatusCode) { return false; } ```
While using WebResponse, please make sure that you close the response stream (i.e., call `.Close()`), else it can hang after repeated executions. E.g. ``` HttpWebRequest req = (HttpWebRequest)WebRequest.Create(sURL); HttpWebResponse response = (HttpWebResponse)req.GetResponse(); // your code here response.Close(); ```
Test if a website is alive from a C# application
[ "", "c#", "webrequest", "" ]
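For comparison, the same liveness probe sketched in Python (hedged: it treats any non-2xx status or any network error as "not alive", and the timeout value is arbitrary):

```python
# Request the page; a non-2xx status or a network error means "site down".
from urllib.request import urlopen

def is_alive(url, timeout=5):
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:          # URLError, timeouts, refused connections, ...
        return False

# Port 1 is almost certainly closed, so this fails fast with "refused".
print(is_alive("http://127.0.0.1:1/"))
```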
I've read all the advice on const-correctness in C++ and that it is important (in part) because it helps the compiler to optimize your code. What I've never seen is a good explanation on how the compiler uses this information to optimize the code, not even the good books go on explaining what happens behind the curtains. For example, how does the compiler optimize a method that is declared const vs one that isn't but should be. What happens when you introduce mutable variables? Do they affect these optimizations of const methods?
Let's disregard methods and look only at const objects; the compiler has much more opportunity for optimization here. If an object is declared const, then (ISO/IEC 14882:2003 7.1.5.1(4)): > Except that any class member declared > mutable (7.1.1) can be modified, any > attempt to modify a const object > during its lifetime (3.8) results in > undefined behavior. Let's disregard objects that may have mutable members - the compiler is free to assume that the object will not be modified, therefore it can produce significant optimizations. These optimizations can include things like: * incorporating the object's value directly into the machine's instruction opcodes * complete elimination of code that can never be reached because the const object is used in a conditional expression that is known at compile time * loop unrolling if the const object is controlling the number of iterations of a loop Note that this stuff applies only if the actual object is const - it does not apply to objects that are accessed through const pointers or references, because those access paths can lead to objects that are not const (it's even well-defined to change objects through const pointers/references as long as the actual object is non-const and you cast away the constness of the access path to the object). In practice, I don't think there are compilers out there that perform any significant optimizations for all kinds of const objects. But for objects that are primitive types (ints, chars, etc.) I think that compilers can be quite aggressive in optimizing the use of those items.
I think that the const keyword was primarily introduced for compilation checking of the program semantic, not for optimization. Herb Sutter, in the [GotW #81 article](http://www.gotw.ca/gotw/081.htm), explains very well why the compiler can't optimize anything when passing parameters by const reference, or when declaring const return value. The reason is that the compiler has no way to be sure that the object referenced won't be changed, even if declared const : one could use a const\_cast, or some other code can have a non-const reference on the same object. However, quoting Herb Sutter's article : > There is [only] one case where saying > "const" can really mean something, and > that is when objects are made const at > the point they are defined. In that > case, the compiler can often > successfully put such "really const" > objects into read-only memory[...]. There is a lot more in this article, so I encourage you reading it: you'll have a better understanding of constant optimization after that.
Constants and compiler optimization in C++
[ "", "c++", "optimization", "compiler-construction", "compiler-optimization", "const-correctness", "" ]
I am using Apache [HttpClient](http://hc.apache.org/httpclient-3.x/) and would like to communicate HTTP errors (400 Bad Request, 404 Not Found, 500 Server Error, etc.) via the Java exception mechanism to the calling code. Is there an exception in the Java standard library or in a widely used library that would be appropriate to use or to subclass for this purpose? The alternative is to check status return codes. This appears to be the HttpClient design philosophy, but since these errors are truly exceptional in my app, I would like to have the stack trace and other nice exception things set up for me when they happen.
If it's not an Exception in HttpClient design philosophy, but an Exception in your code, then create your own Exception classes. ( As a subclass of org.apache.commons.httpclient.HttpException )
### Quick answer In Spring you have **exactly** what you want: * [HttpClientErrorException](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/client/HttpClientErrorException.html) - Exception thrown when an HTTP 4xx is received. * [HttpServerErrorException](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/client/HttpServerErrorException.html) - Exception thrown when an HTTP 5xx is received. ### And a recommended practice Minimally, you should differentiate exceptions related to business logic (e.g., insufficient balance, email address is not valid) from *other* exceptions (e.g., server not available, unsupported media type, SQLException). In our REST API, we have a library for Java clients that parses responses and throws only three different exceptions: * **400, 401, 403, 404, 409, 422**: throw `MyBusinessException`, which contains a message that can be shown to the end user. The message comes in the response body (exception handling on the service side), but if not present we have a default message specific to each status code. * **405, 412, 415**: throw `HttpClientErrorException` with a message that is specific to each status code. * **other 4xx codes**: throw `HttpClientErrorException` with a generic message. * **5xx codes**: throw `HttpServerErrorException` with a generic message. *All these exceptions are unchecked.*
What Java exception class to use for HTTP errors?
[ "", "java", "http", "exception", "" ]
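The bucketing practice described above can be sketched as a small hierarchy of unchecked exceptions (all class names here are hypothetical, not part of any library):

```java
class HttpResponseException extends RuntimeException {
    private final int statusCode;

    HttpResponseException(int statusCode, String message) {
        super(message);
        this.statusCode = statusCode;
    }

    int getStatusCode() { return statusCode; }
}

class MyBusinessException extends HttpResponseException {
    MyBusinessException(int status, String msg) { super(status, msg); }
}

class ServerSideException extends HttpResponseException {
    ServerSideException(int status, String msg) { super(status, msg); }
}

public class StatusCheck {
    // Map raw status codes to the exception buckets; callers get stack traces for free.
    static void raiseFor(int status) {
        if (status == 400 || status == 404 || status == 409)
            throw new MyBusinessException(status, "business error " + status);
        if (status >= 500)
            throw new ServerSideException(status, "server error " + status);
    }

    public static void main(String[] args) {
        try {
            raiseFor(404);
        } catch (MyBusinessException e) {
            System.out.println(e.getMessage());
        }
        try {
            raiseFor(503);
        } catch (ServerSideException e) {
            System.out.println(e.getMessage());
        }
    }
}
```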
I wonder what the time complexity of the `pop` method of `list` objects is in Python (in CPython particulary). Also does the value of `N` for `list.pop(N)` affect the complexity?
`Pop()` for the last element ought to be O(1) since you only need to return the element referred to by the last element in the array and update the index of the last element. I would expect `pop()` for an arbitrary element to be O(N) and require on average N/2 operations since you would need to move any elements beyond the element you are removing one position up in the array of pointers.
Yes, it is O(1) to pop the *last* element of a Python list, and O(N) to pop an *arbitrary* element (since the whole rest of the list has to be shifted). Here's a great article on how Python lists are stored and manipulated: <http://effbot.org/zone/python-list.htm>
What is the time complexity of popping elements from list in Python?
[ "", "python", "list", "performance", "" ]
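The claims above can be checked behaviorally (for measured confirmation, `timeit` works too); and if you routinely remove from the front, note `collections.deque`:

```python
from collections import deque

lst = [10, 20, 30, 40]

last = lst.pop()      # O(1): nothing else moves
first = lst.pop(0)    # O(n): every remaining element shifts left one slot

print(last, first, lst)

# If front removal is a hot path, a deque gives O(1) popleft():
d = deque([1, 2, 3])
front = d.popleft()
print(front, list(d))
```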
I have a python module that makes use of a huge dictionary global variable, currently I put the computation code in the top section, every first time import or reload of the module takes more then one minute which is totally unacceptable. How can I save the computation result somewhere so that the next import/reload doesn't have to compute it? I tried cPickle, but loading the dictionary variable from a file(1.3M) takes approximately the same time as computation. To give more information about my problem, ``` FD = FreqDist(word for word in brown.words()) # this line of code takes 1 min ```
Just to clarify: the code in the body of a module is *not* executed every time the module is imported - it is run only once, after which future imports find the already created module, rather than recreating it. Take a look at sys.modules to see the list of cached modules. However, if your problem is the time it takes for the first import after the program is run, you'll probably need to use some other method than a python dict. Probably best would be to use an on-disk form, for instance a sqlite database, one of the dbm modules. For a minimal change in your interface, the shelve module may be your best option - this puts a pretty transparent interface between the dbm modules that makes them act like an arbitrary python dict, allowing any picklable value to be stored. Here's an example: ``` # Create dict with a million items: import shelve d = shelve.open('path/to/my_persistant_dict') d.update(('key%d' % x, x) for x in xrange(1000000)) d.close() ``` Then in the next process, use it. There should be no large delay, as lookups are only performed for the key requested on the on-disk form, so everything doesn't have to get loaded into memory: ``` >>> d = shelve.open('path/to/my_persistant_dict') >>> print d['key99999'] 99999 ``` It's a bit slower than a real dict, and it **will** still take a long time to load if you do something that requires all the keys (eg. try to print it), but may solve your problem.
Calculate your global var on the first use. ``` class Proxy: @property def global_name(self): # calculate your global var here, enable cache if needed ... _proxy_object = Proxy() GLOBAL_NAME = _proxy_object.global_name ``` Or better yet, access the necessary data via a special data object. ``` class Data: GLOBAL_NAME = property(...) data = Data() ``` Example: ``` from some_module import data print(data.GLOBAL_NAME) ``` See [Django settings](http://docs.djangoproject.com/en/dev/topics/settings/).
How to avoid computation every time a python module is reloaded
[ "", "python", "nltk", "" ]
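A minimal, runnable version of the shelve idea from the accepted answer (the filename is arbitrary): the expensive computation runs once to populate the shelf, and later opens read only the keys you ask for instead of unpickling everything:

```python
import shelve

# First run: populate the shelf once (a tiny stand-in for the expensive FreqDist).
with shelve.open('freq_cache') as d:
    if 'word5' not in d:
        d.update(('word%d' % i, i) for i in range(1000))

# Later runs (or later imports): look up only what you need; the rest
# of the data stays on disk.
with shelve.open('freq_cache') as d:
    value = d['word5']

print(value)
```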
This was an interview question. Given Visual Studio 2008 and an icon saved as a .PNG file, they required the image as an embedded resource and to be used as the icon within the title bar of a form. I'm looking for what would have been the model answer to this question: both (working!) code and any Visual Studio tricks. (The model answer is one that should get me the job if I meet it next time around.) Specifically I don't know how to load the image once it is an embedded resource, nor how to get it as the icon for the title bar. As a part solution, ignoring the embedded bit, I copied the resource to the output directory and tried the following: ``` public partial class Form1 : Form { public Form1() { InitializeComponent(); this.Icon = new Icon("Resources\\IconImage.png"); } } ``` This failed with the error "Argument 'picture' must be a picture that can be used as a Icon." I'm presuming that the .PNG file actually needed to be a .ICO, but I couldn't see how to make the conversion. Is this presumption correct or is there a different issue?
Fire up VS and start a new Windows Application. Open the properties sheet and add the .png file as a resource (in this example: glider.png). From here on, you can access the resource as a Bitmap via WindowsFormsApplication10.Properties.Resources.glider Code for using it as an application icon: ``` public Form1() { InitializeComponent(); Bitmap bmp = WindowsFormsApplication10.Properties.Resources.glider; this.Icon = Icon.FromHandle(bmp.GetHicon()); } ```
`Icon.FromHandle` will cause problems with a PNG, because PNGs have more than one bit of transparency. This type of issue can be solved with a library like [IconLib](http://www.codeproject.com/KB/cs/IconLib.aspx "IconLib"). Chances are they didn't know how to do it and they were trying to squeeze the answer out of potential employees. Furthermore, setting the icon of the form from a PNG is an unnecessary performance hit, it should have been an ICO in the first place.
Got .PNG file. Want embeddded icon resource displayed as icon on form title bar
[ "", "c#", ".net", "winforms", "" ]
Anyone know a simple way using Java calendar to subtract X days from a date? I have not been able to find any function which allows me to directly subtract X days from a date in Java. Can someone point me to the right direction?
Taken from [the docs here](http://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html#add%28int,%20int%29): > Adds or subtracts the specified amount of time to the given calendar field, based on the calendar's rules. For example, to subtract 5 days from the current time of the calendar, you can achieve it by calling: > > ``` > Calendar calendar = Calendar.getInstance(); // this would default to now > calendar.add(Calendar.DAY_OF_MONTH, -5); > ```
You could use the `add` method and pass it a negative number. However, you could also write a simpler method that doesn't use the `Calendar` class such as the following ``` public static void addDays(Date d, int days) { d.setTime( d.getTime() + (long)days*1000*60*60*24 ); } ``` This gets the timestamp value of the date (milliseconds since the epoch) and adds the proper number of milliseconds. You could pass a negative integer for the days parameter to do subtraction. This would be simpler than the "proper" calendar solution: ``` public static void addDays(Date d, int days) { Calendar c = Calendar.getInstance(); c.setTime(d); c.add(Calendar.DATE, days); d.setTime( c.getTime().getTime() ); } ``` Note that both of these solutions change the `Date` object passed as a parameter rather than returning a completely new `Date`. Either function could be easily changed to do it the other way if desired.
How to subtract X days from a date using Java calendar?
[ "", "java", "calendar", "subtraction", "" ]
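Putting the two suggestions together in runnable form (on Java 8+, `java.time.LocalDate.minusDays` is the clearer option):

```java
import java.time.LocalDate;
import java.util.Calendar;

public class DateMath {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        cal.clear();
        cal.set(2008, Calendar.OCTOBER, 15);
        cal.add(Calendar.DAY_OF_MONTH, -5);   // subtracting = adding a negative amount
        System.out.println(cal.get(Calendar.DAY_OF_MONTH));

        // Modern alternative: immutable, no month-is-zero-based surprises.
        System.out.println(LocalDate.of(2008, 10, 15).minusDays(5));
    }
}
```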
Provided I have admin access, I need a way to manage (Create, modify, remove) local accounts in a remote machine from an ASP.NET client. I'm clueless in how to approach this. Is WMI a possibility (System.Management namespace)? Any pointers?
Give this a try: ``` DirectoryEntry directoryEntry = new DirectoryEntry("WinNT://ComputerName,computer", "AdminUN", "AdminPW"); DirectoryEntry user = directoryEntry.Children.Add("username", "user"); user.Invoke("SetPassword", new object[] { "password" }); user.CommitChanges(); ``` If you do need to go the Active Directory route, you can change the directoryEntry path string to something like this: LDAP://CN=ComputerName,DC=MySample,DC=com
I used System.DirectoryServices to get data from users in an Active Directory (LDAP). I don't know if that's the kind of thing you're looking for. Hope it helps.
Creating local users on remote windows server using c#
[ "", "c#", ".net", "windows", "" ]
Sample code that shows how to create threads using MFC declares the thread function as both static and `__cdecl`. Why is the latter required? Boost threads don't bother with this convention, so is it just an anachronism? For example (MFC): ``` static __cdecl UINT MyFunc(LPVOID pParam) { ... } CWinThread* pThread = AfxBeginThread(MyFunc, ...); ``` Whereas Boost: ``` static void func() { ... } boost::thread t; t.create(&func); ``` (the code samples might not be 100% correct as I am nowhere near an IDE). What is the point of \_\_cdecl? How does it help when creating threads?
\_\_cdecl tells the compiler to use the C calling convention (as opposed to stdcall, fastcall, or whatever other calling convention your compiler supports). Note that VC++ actually defaults to \_\_cdecl for free functions, while the Win32 API itself uses \_\_stdcall. The calling convention affects things such as how arguments are pushed onto the stack (or into registers, in the case of fastcall) and who pops arguments off the stack (caller or callee). In the case of Boost, I believe it uses template specialization to figure out the appropriate function type and calling convention.
Look at the prototype for [`AfxBeginThread()`](http://msdn.microsoft.com/en-us/library/s3w9x78e(VS.80).aspx): ``` CWinThread* AfxBeginThread( AFX_THREADPROC pfnThreadProc, LPVOID pParam, int nPriority = THREAD_PRIORITY_NORMAL, UINT nStackSize = 0, DWORD dwCreateFlags = 0, LPSECURITY_ATTRIBUTES lpSecurityAttrs = NULL ); ``` `AFX_THREADPROC` is a typedef for `UINT(AFX_CDECL*)(LPVOID)`. When you pass a function to `AfxBeginThread()`, it must match that prototype, including the calling convention. The MSDN pages on [`__cdecl`](http://msdn.microsoft.com/en-us/library/zkwh89ks.aspx) and [`__stdcall`](http://msdn.microsoft.com/en-us/library/zxk0tw93.aspx) (as well as `__fastcall` and `__thiscall`) explain the pros and cons of each calling convention. The [`boost::thread`](http://www.boost.org/doc/libs/1_36_0/doc/html/thread/thread_management.html#thread.thread_management.thread) constructor uses templates to allow you to pass a function pointer or callable function object, so it doesn't have the same restrictions as MFC.
Why do thread functions need to be declared as '__cdecl'?
[ "", "c++", "multithreading", "mfc", "boost", "" ]
I have a python script that analyzes a set of error messages and checks whether each message matches a certain pattern (regular expression) in order to group these messages. For example "file x does not exist" and "file y does not exist" would match "file .\* does not exist" and be counted as two occurrences of the "file not found" category. As the number of patterns and categories is growing, I'd like to put these "regular expression/display string" pairs in a configuration file, basically a dictionary serialization of some sort. I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...). Do you have any idea of what could be a good way of accomplishing this? Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code.
You have two decent options: 1. Python standard config file format using [ConfigParser](http://docs.python.org/lib/module-ConfigParser.html "ConfigParser") 2. [YAML](http://www.yaml.org/ "YAML") using a library like [PyYAML](http://pyyaml.org/ "PyYAML") The standard Python configuration files look like INI files with `[sections]` and `key : value` or `key = value` pairs. The advantages to this format are: * No third-party libraries necessary * Simple, familiar file format. YAML is different in that it is designed to be a human friendly data serialization format rather than specifically designed for configuration. It is very readable and gives you a couple different ways to represent the same data. For your problem, you could create a YAML file that looks like this: ``` file .* does not exist : file not found user .* not found : authorization error ``` Or like this: ``` { file .* does not exist: file not found, user .* not found: authorization error } ``` Using PyYAML couldn't be simpler: ``` import yaml errors = yaml.load(open('my.yaml')) ``` At this point `errors` is a Python dictionary with the expected format. YAML is capable of representing more than dictionaries: if you prefer a list of pairs, use this format: ``` - - file .* does not exist - file not found - - user .* not found - authorization error ``` Or ``` [ [file .* does not exist, file not found], [user .* not found, authorization error]] ``` Which will produce a list of lists when `yaml.load` is called. One advantage of YAML is that you could use it to export your existing, hard-coded data out to a file to create the initial version, rather than cut/paste plus a bunch of find/replace to get the data into the right format. The YAML format will take a little more time to get familiar with, but using PyYAML is even simpler than using ConfigParser with the advantage is that you have more options regarding how your data is represented using YAML. 
Either one sounds like it will fit your current needs: ConfigParser will be easier to start with, while YAML gives you more flexibility in the future if your needs expand. Best of luck!
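If you go the ConfigParser route, here is a minimal sketch of reading the pattern/display-string pairs (the section name and file layout below are my own invention for illustration, not from the answer above):

```python
from configparser import ConfigParser  # spelled "ConfigParser" (the module) on Python 2
from io import StringIO

# Hypothetical INI layout: one option per error category,
# mapping a regex pattern to its display string.
ini_text = """
[patterns]
file .* does not exist : file not found
user .* not found : authorization error
"""

parser = ConfigParser()
parser.read_file(StringIO(ini_text))  # with a real file: parser.read("errors.ini")
errors = dict(parser.items("patterns"))
print(errors)
```

Note that ConfigParser lower-cases option names by default (via `optionxform`), which happens to be harmless for these all-lowercase patterns.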
I sometimes just write a python module (i.e. file) called `config.py` or something with following contents: ``` config = { 'name': 'hello', 'see?': 'world' } ``` this can then be 'read' like so: ``` from config import config config['name'] config['see?'] ``` easy.
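Given the question's update (no external file with arbitrary code is allowed), a safer variant of this idea is to keep nothing but a dict literal in the file and parse it with `ast.literal_eval` — a sketch of my own, not part of the answer above:

```python
import ast

# The config file would hold only a Python dict literal, like the text below.
# literal_eval evaluates literals only, so no arbitrary code can run.
text = """{
    'file .* does not exist': 'file not found',
    'user .* not found': 'authorization error',
}"""

config = ast.literal_eval(text)
print(config['file .* does not exist'])
```

With a real file you would pass `ast.literal_eval(open('errors.cfg').read())` instead of the inline string.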
Configuration file with list of key-value pairs in python
[ "", "python", "configuration", "serialization", "" ]
I'm trying to catch a ClassCastException when deserializing an object from XML. So, ``` try { restoredItem = (T) decoder.readObject(); } catch (ClassCastException e){ //don't need to crash at this point, //just let the user know that a wrong file has been passed. } ``` And yet this won't work, as the exception doesn't get caught. What would you suggest?
The code in the question should give you an unchecked cast warning. Listen to -Xlint. All the compiler knows about T is its bounds, which it probably doesn't have (other than explicitly extending Object and a super of the null type). So effectively the cast at runtime is (Object) - not very useful. What you can do is pass in an instance of the Class of the parameterised type (assuming it isn't generic). ``` class MyReader<T> { private final Class<T> clazz; MyReader(Class<T> clazz) { if (clazz == null) { throw new NullPointerException(); } this.clazz = clazz; } public T restore(String from) { ... try { restoredItem = clazz.cast(decoder.readObject()); ... return restoredItem; } catch (ClassCastException exc) { ... } } } ``` Or as a generic method: ``` public <T> T restore(Class<T> clazz, String from) { ... try { restoredItem = clazz.cast(decoder.readObject()); ... ```
There will not be any ClassCastException, except when your T has some base: ``` public class GenericsTest { public static void main(String[] args) { System.out.println(cast(Integer.valueOf(0))); System.out.println(GenericsTest.<Long> cast(Integer.valueOf(0))); System.out.println(GenericsTest.<Long> cast("Hallo")); System.out.println(castBaseNumber(Integer.valueOf(0))); System.out.println(GenericsTest.<Long> castBaseNumber(Integer.valueOf(0))); System.out.println(GenericsTest.<Long> castBaseNumber("Hallo")); } private static <T extends Number> T castBaseNumber(Object o) { T t = (T)o; return t; } private static <T> T cast(Object o) { T t = (T)o; return t; } } ``` In the above example, there will be no ClassCastException in the first 5 calls to cast and castBaseNumber. Only the 6th call throws a ClassCastException, because the compiler effectively translates the cast() to return (Object) o and the castBaseNumber() to return (Number)o;. When you write ``` Long l = GenericsTest.<Long> cast("Hallo"); ``` you would get a ClassCastException, but not within the cast method — at the assignment to l. Therefore I do think your "T" is not just "T", but "T extends Something". So you could check: ``` Object o = decoder.readObject(); if (o instanceof Something) restoredItem = (T) o; else // Error handling ``` But this will still lead to an error later, when you use your class. ``` public Reader<T extends Number>{...} Long l = new Reader<Long>("file.xml").getValue(); // there might be the ClassCastException ``` For this case only Tom's advice might help.
How do I catch ClassCastException?
[ "", "java", "exception", "" ]
My master page contains a list as shown here. What I'd like to do, though, is add the `class="active"` attribute to the list item (`li`) that's currently active, but I have no idea how to do this. I know that the code goes in the aspx page's page\_load event, but I have no idea how to access the li I need to add the attribute to. Please enlighten me. Many thanks. ``` <div id="menu"> <ul id="nav"> <li class="forcePadding"><img src="css/site-style-images/menu_corner_right.jpg" /></li> <li id="screenshots"><a href="screenshots.aspx" title="Screenshots">Screenshots</a></li> <li id="future"><a href="future.aspx" title="Future">Future</a></li> <li id="news"><a href="news.aspx" title="News">News</a></li> <li id="download"><a href="download.aspx" title="Download">Download</a></li> <li id="home"><a href="index.aspx" title="Home">Home</a></li> <li class="forcePadding"><img src="css/site-style-images/menu_corner_left.jpg" /></li> </ul> </div> ```
In order to access these controls from the server-side, you need to make them runat="server" ``` <ul id="nav" runat="server"> <li class="forcePadding"><img src="css/site-style-images/menu_corner_right.jpg" /></li> <li id="screenshots"><a href="screenshots.aspx" title="Screenshots">Screenshots</a></li> <li id="future"><a href="future.aspx" title="Future">Future</a></li> <li id="news"><a href="news.aspx" title="News">News</a></li> <li id="download"><a href="download.aspx" title="Download">Download</a></li> <li id="home"><a href="index.aspx" title="Home">Home</a></li> <li class="forcePadding"><img src="css/site-style-images/menu_corner_left.jpg" /></li> </ul> ``` in the code-behind: ``` foreach(Control ctrl in nav.Controls) { if(ctrl is HtmlAnchor) // note: the <a> tags must also be runat="server" to show up here { string url = ((HtmlAnchor)ctrl).Href; if(url == GetCurrentPage()) // <-- you'd need to write that ((HtmlGenericControl)ctrl.Parent).Attributes.Add("class", "active"); } } ```
The code below can be used to find a named control anywhere within the control hierarchy: ``` public static Control FindControlRecursive(Control rootControl, string id) { if (rootControl != null) { if (rootControl.ID == id) { return rootControl; } for (int i = 0; i < rootControl.Controls.Count; i++) { Control child; if ((child = FindControlRecursive(rootControl.Controls[i], id)) != null) { return child; } } } return null; } ``` So you could do something like: ``` Control foundControl= FindControlRecursive(Page.Master, "theIdOfTheControlYouWantToFind"); ((HtmlControl)foundControl).Attributes.Add("class", "active"); ``` Forgot to mention previously, that you do need runat="server" on any control you want to be able to find in this way =)
C# - How to change HTML elements attributes
[ "", "c#", "html", "asp.net", "" ]
I need a way to bind POJO objects to an external entity, that could be XML, YAML, structured text or anything easy to write and maintain in order to create mock data for unit testing and TDD. Below are some libraries I tried, but the main problem with them is that I am stuck (for at least 3 more months) on Java 1.4. I'd like any insights on what I could use instead, with as low overhead and upfront setup (like using Schemas or DTDs, for instance) as possible and without complex XML. Here are the libraries I really like (but that apparently don't work with 1.4 or don't support constructors - you gotta have setters): **RE-JAXB (or Really Easy Java XML Bindings)** [<http://jvalentino.blogspot.com/2008/07/in-response-to-easiest-java-xml-binding.html>](http://jvalentino.blogspot.com/2008/07/in-response-to-easiest-java-xml-binding.html) [<http://sourceforge.net/projects/rejaxb/>](http://sourceforge.net/projects/rejaxb/) Seamlessly binds this: ``` <item> <title>Astronauts' Dirty Laundry</title> <link>http://liftoff.msfc.nasa.gov/news/2003/news-laundry.asp</link> <description>Compared to earlier spacecraft, the International Space Station has many luxuries, but laundry facilities are not one of them. Instead, astronauts have other options.</description> <pubDate>Tue, 20 May 2003 08:56:02 GMT</pubDate> <guid>http://liftoff.msfc.nasa.gov/2003/05/20.html#item570</guid> </item> ``` To this: ``` @ClassXmlNodeName("item") public class Item { private String title; private String link; private String description; private String pubDate; private String guid; //getters and setters go here... 
} ``` Using: ``` Rss rss = new Rss(); XmlBinderFactory.newInstance().bind(rss, new File("Rss2Test.xml")); ``` Problem: It relies on annotations, so no good for Java 1.4 **jYaml** [<http://jyaml.sourceforge.net/>](http://jyaml.sourceforge.net/) Seamlessly binds this: ``` --- !user name: Felipe Coury password: felipe modules: - !module id: 1 name: Main Menu admin: !user name: Admin password: password ``` To this: ``` public class User { private String name; private String password; private List modules; } public class Module { private int id; private String name; private User admin; } ``` Using: ``` YamlReader reader = new YamlReader(new FileReader("example.yaml")); reader.getConfig().setClassTag("user", User.class); reader.getConfig().setClassTag("module", Module.class); User user = (User) reader.read(User.class); ``` Problem: It won't work with constructors (so no good for immutable objects). I'd have to either change my objects or write custom code por handling the YAML parsing. Remember that I would like to avoid - as much as I can - writing data descriptors, I'd like something that "just works". Do you have any suggestions?
If the objects to be populated are simple beans, it may be a good idea to look at Apache Commons' BeanUtils class. The populate() method might suit the described cases. Generally, dependency injection frameworks like Spring can be very useful, but that might not be the answer for the current problem. For input in the form of XML, JiBX might be a good alternative; so would JAXB 1.0.
Just use XStream (for XML, or you could give JSON a try). But... Man, I just can't help thinking that putting the test data outside the unit test itself will lead you to unreadable tests. You will need to look at two files when reading a test case, and you will lose refactoring tools (when changing a property's name). Jay Fields can explain it better than me: <http://blog.jayfields.com/2007/06/testing-inline-setup.html> Kind Regards
What libs can I use to bind POJOs to external files for TDD without much overhead?
[ "", "java", "binding", "pojo", "" ]
In Java, I have a subclass `Vertex` of the Java3D class `Point3f`. Now `Point3f` computes `equals()` based on the values of its coordinates, but for my `Vertex` class I want to be stricter: two vertices are only equal if they are the same object. So far, so good: ``` class Vertex extends Point3f { // ... public boolean equals(Object other) { return this == other; } } ``` I know this violates the contract of `equals()`, but since I'll only compare vertices to other vertices this is not a problem. Now, to be able to put vertices into a `HashMap`, the `hashCode()` method must return results consistent with `equals()`. It currently does that, but probably bases its return value on the fields of the `Point3f`, and therefore will give hash collisions for different `Vertex` objects with the same coordinates. Therefore I would like to base the `hashCode()` on the object's address, instead of computing it from the `Vertex`'s fields. I know that the `Object` class does this, but I cannot call its `hashCode()` method because `Point3f` overrides it. So, actually my question is twofold: * Should I even want such a shallow `equals()`? * If yes, then, how do I get the object's address to compute the hash code from? Edit: I just thought of something... I could generate a random `int` value on object creation, and use that for the hash code. Is that a good idea? Why (not)?
Either use System.identityHashCode() or use an IdentityHashMap.
[`System.identityHashCode()`](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/System.html#identityHashCode(java.lang.Object)) returns the same hash code for the given object as would be returned by the default method `hashCode()`, whether or not the given object's class overrides `hashCode()`.
How to compute the hashCode() from the object's address?
[ "", "java", "hash", "equals", "hashcode", "" ]
In a recent project the "lead" developer designed a database schema where "larger" tables would be split across two separate databases with a view on the main database which would union the two separate database-tables together. The main database is what the application was driven off of so these tables looked and felt like ordinary tables (except some quirky things around updating). This seemed like a HUGE performance problem. We do see problems with performance around these tables but nothing to make him change his mind about his design. Just wondering what is the best way to do this, or if it is even worth doing?
I don't think that you are really going to gain anything by partitioning the table across multiple databases in a single server. All you have essentially done there is increased the overhead in working with the "table" in the first place by having several instances (i.e. open in two different DBs) of it under a single SQL Server instance. How large of a dataset do you have? I have a client with a 6 million row table in SQL Server that contains 2 years' worth of sales data. They use it transactionally and for reporting without any noticeable speed problems. Tuning the indexes and choosing the correct clustered index is crucial to performance, of course. If your dataset is really large and you are looking to partition, you will get more bang for your buck partitioning the table across physical servers.
Partitioning is not something to be undertaken lightly as there can be many subtle performance implications. My first question is are you referring simply to placing larger table objects in separate filegroups (on separate spindles) or are you referring to data partitioning inside of a table object? I suspect that the situation described is an attempt to have the physical storage of certain large tables on different spindles from the rest of the tables. In this case, adding the extra overhead of separate databases, losing any ability to enforce referential integrity across databases, and the security implications of enabling cross-database ownership chaining does not provide any benefit over using multiple filegroups within a single database. If, as is quite possible, the separate databases you refer to in your question are not even stored on separate spindles but are all stored on the same spindle then you negate even the slight performance benefit you could have gained by physically separating your disk activity and have received absolutely no benefit. I would suggest instead of using additional databases to hold large tables you look into the Filegroup topic in SQL Server Books Online or for a quick review see this [article:](http://www.mssqltips.com/tip.asp?tip=1112) If you are interested in data partitioning (including partitioning into multiple file groups) then I recommend reading articles by Kimberly Tripp, who gave an excellent presentation at the time SQL Server 2005 came out about the improvements available there. A good place to start is this [whitepaper](http://www.sqlskills.com/resources/Whitepapers/Partitioning%20in%20SQL%20Server%202005%20Beta%20II.htm)
What is the best way to partition large tables in SQL Server?
[ "", "sql", "sql-server", "partitioning", "" ]
The `JPA` (Java Persistence API) specification has 2 different ways to specify entity composite keys: `@IdClass` and `@EmbeddedId`. I'm using both annotations on my mapped entities, but it turns out to be a big mess for people who aren't very familiar with `JPA`. I want to adopt only one way to specify composite keys. Which one is really the best? Why?
I consider that `@EmbeddedId` is probably more verbose because with `@IdClass` you cannot access the entire primary key object using any field access operator. Using `@EmbeddedId` you can do this: ``` @Embeddable class EmployeeId { name, dateOfBirth } @Entity class Employee { @EmbeddedId EmployeeId employeeId; ... } ``` This gives a clear notion of the fields that make up the composite key because they are all aggregated in a class that is accessed through a field access operator. Another difference between `@IdClass` and `@EmbeddedId` is when it comes to writing HQL: With `@IdClass` you write: ``` select e.name from Employee e ``` and with `@EmbeddedId` you have to write: ``` select e.employeeId.name from Employee e ``` You have to write more text for the same query. Some may argue that this differs from a more natural language like the one promoted by `IdClass`. But most of the time, understanding right from the query that a given field is part of the composite key is of invaluable help.
There are three strategies to use a compound primary key: * Mark it as `@Embeddable` and add to your entity class a normal property for it, marked with `@Id`. * Add to your entity class a normal property for it, marked with `@EmbeddedId`. * Add properties to your entity class for all of its fields, mark them with `@Id`,and mark your entity class with `@IdClass`, supplying the class of your primary key class. The use of `@Id` with a class marked as `@Embeddable` is the most natural approach. The `@Embeddable` tag can be used for non-primary key embeddable values anyway. It allows you to treat the compound primary key as a single property, and it permits the reuse of the `@Embeddable` class in other tables. The next most natural approach is the use of the `@EmbeddedId` tag. Here, the primary key class cannot be used in other tables since it is not an `@Embeddable` entity, but it does allow us to treat the key as a single attribute of some class. Finally, the use of the `@IdClass` and `@Id` annotations allows us to map the compound primary key class using properties of the entity itself corresponding to the names of the properties in the primary key class. **The names must correspond** (there is no mechanism for overriding this), and the primary key class must honor the same obligations as with the other two techniques. The only advantage to this approach is its ability to “hide” the use of the primary key class from the interface of the enclosing entity. The `@IdClass` annotation takes a value parameter of Class type, which must be the class to be used as the compound primary key. **The fields that correspond to the properties of the primary key class to be used must all be annotated with `@Id`.** Reference: <http://www.apress.com/us/book/9781430228509>
Which annotation should I use: @IdClass or @EmbeddedId
[ "", "java", "jpa", "annotations", "java-persistence-api", "" ]
I'm looking for a log viewer with similar capabilities to Chainsaw, with which I can tail Glassfish log files over, for instance, SSH/SCP. Does anyone know if such a tool exists?
From [other post](https://stackoverflow.com/questions/186968/map-ssh-drive-in-windows): 1. Configure putty to write out a log file from the connection. 2. Do the log analysis against the putty log file.
OtrosLogViewer uses VFS file systems and can load logs from remote servers (ftp, sftp, smb, http). It supports java.util.logging.XMLFormatter and java.util.logging.SimpleFormatter. See: <http://code.google.com/p/otroslogviewer/>
Remote log viewer for Java/Glassfish log files
[ "", "java", "logging", "" ]
I have been learning about the basics of C# but haven't come across a good explanation of what this is: ``` var l = new List<string>(); ``` I don't know what the `<string>` is doing or if it's the `List` that is doing the magic. I have also seen objects been thrown within the `< >` tags. Can someone explain this to me with examples, please?
That is the generics syntax for C#. The basic concept is that it allows you to use a Type placeholder and substitute the actual real type in at compile time. For example, the old way: ``` ArrayList foos = new ArrayList(); foos.Add("Test"); ``` worked by making ArrayList store a list of System.Objects (the base type for all things .NET). So, when adding or retrieving an object from the list, the CLR would have to cast it to object; basically what really happens is this: ``` foos.Add("Test" as System.Object); string s = foos[0] as String; ``` This causes a performance penalty from the casting, and it's also unsafe because I can do this: ``` ArrayList listOfStrings = new ArrayList(); listOfStrings.Add(1); listOfStrings.Add("Test"); ``` This will compile just fine, even though I put an integer in listOfStrings. Generics changed all of this; now using generics I can declare what Type my collection expects: ``` List<int> listOfIntegers = new List<int>(); List<String> listOfStrings = new List<String>(); listOfIntegers.Add(1); // Fine. listOfIntegers.Add("test"); // Compile-time error. ``` This provides compile-time type safety, and avoids expensive casting operations. The way you leverage this is pretty simple, though there are some advanced edge cases. The basic concept is to make your class type agnostic by using a type placeholder, for example, if I wanted to create a generic "Add Two Things" class. ``` public class Adder<T> { // note: C# won't actually compile + on an unconstrained T; this is only to illustrate the idea public T AddTwoThings(T t1, T t2) { return t1 + t2; } } Adder<String> stringAdder = new Adder<String>(); Console.WriteLine(stringAdder.AddTwoThings("Test", "123")); Adder<int> intAdder = new Adder<int>(); Console.WriteLine(intAdder.AddTwoThings(2, 2)); ``` For a much more detailed explanation of generics, I can't recommend enough the book CLR via C#.
It's *generics* - it's a form of type parameterisation. In your example, it's making l refer to a list of strings - the list will only ever contain strings: the compiler treats it (pretty much) as if everywhere that the API docs mention "T" it actually says "string". So, you can only add strings to it, and if you use the indexer you don't need to cast to string, etc. To be honest, giving generics detailed coverage on an online forum is pretty much impossible. (In C# in Depth, I take nearly 50 pages talking about generics.) However, armed with the name of the feature, you should be in a much better position to find out more. The MSDN ["Introduction to C# Generics"](http://msdn.microsoft.com/en-us/library/ms379564(VS.80).aspx) is probably a good starting point. Asking specific questions about generics on SO is likely to yield good results - I just don't think it can really be covered properly in one question/answer.
What is the "< >" syntax within C#
[ "", "c#", ".net", "" ]
What are the most common browser compatibility issues across the major desktop browsers? No dups please. Up-vote problems you've run into. I'm hoping for the list to self-sort. "IE sux" is not a pitfall, but a call for down-vote. [Edit] Yes, I know it's a poll - I'm not posting answers in this to gather points - I'm actually interested in knowing what people typically run into.
[Quirksmode](http://quirksmode.org/) has a comprehensive list of a lot of the differences requiring attention !-) -- but he is, like most other sites and bloggers on the net, focused in his own way, and that results in some minor or major bugs and inconsistencies ...
Transparent PNGs in Internet Explorer 6, especially because the common, JavaScript-less workaround of using the AlphaImageLoader can have the side effect of [locking up IE6](http://blogs.cozi.com/tech/2008/03/transparent-png.html).
Typical pitfalls of cross-browser compatibility
[ "", "javascript", "html", "css", "cross-browser", "" ]
In many C/C++ macros I'm seeing the code of the macro wrapped in what seems like a meaningless `do while` loop. Here are examples. ``` #define FOO(X) do { f(X); g(X); } while (0) #define FOO(X) if (1) { f(X); g(X); } else ``` I can't see what the `do while` is doing. Why not just write this without it? ``` #define FOO(X) f(X); g(X) ```
The `do ... while` and `if ... else` are there to make it so that a semicolon after your macro always means the same thing. Let's say you had something like your second macro. ``` #define BAR(X) f(X); g(X) ``` Now if you were to use `BAR(X);` in an `if ... else` statement, where the bodies of the if statement were not wrapped in curly brackets, you'd get a bad surprise. ``` if (corge) BAR(corge); else gralt(); ``` The above code would expand into ``` if (corge) f(corge); g(corge); else gralt(); ``` which is syntactically incorrect, as the else is no longer associated with the if. It doesn't help to wrap things in curly braces within the macro, because a semicolon after the braces is syntactically incorrect. ``` if (corge) {f(corge); g(corge);}; else gralt(); ``` There are two ways of fixing the problem. The first is to use a comma to sequence statements within the macro without robbing it of its ability to act like an expression. ``` #define BAR(X) f(X), g(X) ``` The above version of `BAR` expands the above code into what follows, which is syntactically correct. ``` if (corge) f(corge), g(corge); else gralt(); ``` This doesn't work if instead of `f(X)` you have a more complicated body of code that needs to go in its own block, say for example to declare local variables. In the most general case the solution is to use something like `do ... while` to cause the macro to be a single statement that takes a semicolon without confusion. ``` #define BAR(X) do { \ int i = f(X); \ if (i > 4) g(i); \ } while (0) ``` You don't have to use `do ... while`, you could cook up something with `if ... else` as well, although when `if ... else` expands inside of an `if ... else` it leads to a "[dangling else](http://en.wikipedia.org/wiki/Dangling_else)", which could make an existing dangling else problem even harder to find, as in the following code. 
``` if (corge) if (1) { f(corge); g(corge); } else; else gralt(); ``` The point is to use up the semicolon in contexts where a dangling semicolon is erroneous. Of course, it could (and probably should) be argued at this point that it would be better to declare `BAR` as an actual function, not a macro. In summary, the `do ... while` is there to work around the shortcomings of the C preprocessor. When those C style guides tell you to lay off the C preprocessor, this is the kind of thing they're worried about.
Macros are copy/pasted pieces of text the pre-processor will put in the genuine code; the macro's author hopes the replacement will produce valid code. There are three good "tips" to succeed in that: ## Help the macro behave like genuine code Normal code is usually ended by a semi-colon. Should the user view code not needing one... ``` doSomething(1) ; DO_SOMETHING_ELSE(2) // <== Hey? What's this? doSomethingElseAgain(3) ; ``` This means the user expects the compiler to produce an error if the semi-colon is absent. But the real real good reason is that at some time, the macro's author will perhaps need to replace the macro with a genuine function (perhaps inlined). So the macro should **really** behave like one. So we should have a macro needing semi-colon. ## Produce a valid code As shown in jfm3's answer, sometimes the macro contains more than one instruction. And if the macro is used inside a if statement, this will be problematic: ``` if(bIsOk) MY_MACRO(42) ; ``` This macro could be expanded as: ``` #define MY_MACRO(x) f(x) ; g(x) if(bIsOk) f(42) ; g(42) ; // was MY_MACRO(42) ; ``` The `g` function will be executed regardless of the value of `bIsOk`. This means that we must have to add a scope to the macro: ``` #define MY_MACRO(x) { f(x) ; g(x) ; } if(bIsOk) { f(42) ; g(42) ; } ; // was MY_MACRO(42) ; ``` ## Produce a valid code 2 If the macro is something like: ``` #define MY_MACRO(x) int i = x + 1 ; f(i) ; ``` We could have another problem in the following code: ``` void doSomething() { int i = 25 ; MY_MACRO(32) ; } ``` Because it would expand as: ``` void doSomething() { int i = 25 ; int i = 32 + 1 ; f(i) ; ; // was MY_MACRO(32) ; } ``` This code won't compile, of course. So, again, the solution is using a scope: ``` #define MY_MACRO(x) { int i = x + 1 ; f(i) ; } void doSomething() { int i = 25 ; { int i = 32 + 1 ; f(i) ; } ; // was MY_MACRO(32) ; } ``` The code behaves correctly again. ## Combining semi-colon + scope effects? 
There is one C/C++ idiom that produces this effect: The do/while loop: ``` do { // code } while(false) ; ``` The do/while can create a scope, thus encapsulating the macro's code, and needs a semi-colon in the end, thus expanding into code needing one. The bonus? The C++ compiler will optimize away the do/while loop, as the fact its post-condition is false is known at compile time. This means that a macro like: ``` #define MY_MACRO(x) \ do \ { \ const int i = x + 1 ; \ f(i) ; g(i) ; \ } \ while(false) void doSomething(bool bIsOk) { int i = 25 ; if(bIsOk) MY_MACRO(42) ; // Etc. } ``` will expand correctly as ``` void doSomething(bool bIsOk) { int i = 25 ; if(bIsOk) do { const int i = 42 + 1 ; // was MY_MACRO(42) ; f(i) ; g(i) ; } while(false) ; // Etc. } ``` and is then compiled and optimized away as ``` void doSomething(bool bIsOk) { int i = 25 ; if(bIsOk) { f(43) ; g(43) ; } // Etc. } ```
Why use apparently meaningless do-while and if-else statements in macros?
[ "", "c++", "c", "c-preprocessor", "c++-faq", "" ]
I have this script: ``` select name,create_date,modify_date from sys.procedures order by modify_date desc ``` I can see what procedures were modified lately. I will add a "where modify\_date >= " And I'd like to use some system stored procedure, that will generate me : drop + create scripts for the (let's say 5 matching) stored procedures Can i do this somehow? thanks --- ok. i have the final version: <http://swooshcode.blogspot.com/2008/10/generate-stored-procedures-scripts-for.html> you guys helped a lot thanks
This ain't pretty, but it works. Run the output from it manually or execute it with sp\_executesql. ``` SELECT OBJECT_DEFINITION(object_id), 'drop procedure [' + name + ']' FROM sys.procedures WHERE modify_date >= @date ``` You will have to worry about lost rights as well.
No cursor necessary (modify as desired for schemas, etc): ``` DECLARE @dt AS datetime SET @dt = '10/1/2008' DECLARE @sql AS varchar(max) SELECT @sql = COALESCE(@sql, '') + '-- ' + o.name + CHAR(13) + CHAR(10) + 'DROP PROCEDURE ' + o.name + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10) + m.definition + CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10) FROM sys.sql_modules AS m INNER JOIN sys.objects AS o ON m.object_id = o.object_id INNER JOIN sys.procedures AS p ON m.object_id = p.object_id WHERE p.modify_date >= @dt PRINT @sql -- or EXEC (@sql) ```
Generate Multiple and Filtered Drop + Create Stored Procedures
[ "", "sql", "sql-server", "stored-procedures", "system-stored-procedures", "" ]
What good profilers do you know? What is a good way to measure and tweak the performance of a C++ MFC application? Is analysis of algorithms really necessary? <http://en.wikipedia.org/wiki/Algorithm_analysis>
I strongly recommend [AQTime](http://www.automatedqa.com) if you are staying on the Windows platform. It comes with a load of profilers, including static code analysis, and works with most important Windows compilers and systems, including Visual C++, .NET, Delphi, Borland C++, Intel C++ and even gcc. And it integrates into Visual Studio, but can also be used standalone. I love it.
If you're (still) using Visual C++ 6.0, I suggest using the built-in profiler. For more recent versions you could try [Compuware DevPartner Performance Analysis Community Edition](http://www.compuware.com/media.asp?cid=701000000004ej0).
How to measure performance in a C++ (MFC) application?
[ "", "c++", "performance", "optimization", "mfc", "" ]
I've been working on a very simple crud generator for pylons. I came up with something that inspects ``` SomeClass._sa_class_manager.mapper.c ``` Is it ok to inspect this (or to call methods begining with underscore)? I always kind of assumed this is legal though frowned upon as it relies heavily on the internal structure of a class/object. But hey, since python does not really have interfaces in the Java sense maybe it is OK.
It is intentional (in Python) that there are no "private" scopes. It is a convention that anything that starts with an underscore ideally should not be used, and hence you may not complain if its behavior or definition changes in a future version.
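To illustrate the convention (the class and attribute names below are made up for the example):

```python
class Mapper:
    def __init__(self):
        self.columns = ["id", "name"]  # public: part of the supported interface
        self._cache = {}               # leading underscore: internal by convention only

m = Mapper()
# The language does not stop you -- the attribute is perfectly accessible...
print(m._cache)
# ...the underscore is just a signal that it may change or vanish in a new version.
```

(Name mangling only kicks in for double leading underscores like `__cache`; a single underscore is purely a convention.)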
In general, this usually indicates that the method is effectively internal, rather than part of the documented interface, and should not be relied on. Future versions of the library are free to rename or remove such methods, so if you care about future compatibility without having to rewrite, avoid doing it.
Is it OK to inspect properties beginning with underscore?
[ "", "python", "sqlalchemy", "pylons", "" ]
I have scripts calling other script files but I need to get the filepath of the file that is currently running within the process. For example, let's say I have three files. Using [execfile](http://docs.python.org/library/functions.html#execfile): * `script_1.py` calls `script_2.py`. * In turn, `script_2.py` calls `script_3.py`. How can I get the file name and path of **`script_3.py`**, *from code within `script_3.py`*, without having to pass that information as arguments from `script_2.py`? (Executing `os.getcwd()` returns the original starting script's filepath not the current file's.)
p1.py: ``` execfile("p2.py") ``` p2.py: ``` import inspect, os print (inspect.getfile(inspect.currentframe())) # script filename (usually with path) print (os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))) # script directory ```
``` __file__ ``` as others have said. You may also want to use [os.path.realpath](https://docs.python.org/3/library/os.path.html#os.path.realpath) to eliminate symlinks: ``` import os os.path.realpath(__file__) ```
How do I get the path and name of the python file that is currently executing?
[ "", "python", "file", "reflection", "scripting", "filesystems", "" ]
Where can I find a free, very quick, and reliable implementation of FFT in C#? That can be used in a product? Or are there any restrictions?
[AForge.net](http://code.google.com/p/aforge/) is a free (open-source) library with Fast Fourier Transform support. (See Sources/Imaging/[ComplexImage.cs](https://github.com/andrewkirillov/AForge.NET/blob/master/Sources/Imaging/ComplexImage.cs) for usage, Sources/Math/[FourierTransform.cs](https://github.com/andrewkirillov/AForge.NET/blob/master/Sources/Math/FourierTransform.cs) for implemenation)
The guy that did AForge did a fairly good job but it's not commercial quality. It's great to learn from but you can tell he was learning too, so he has some pretty serious mistakes like assuming the size of an image instead of using the correct bits per pixel. I'm not knocking the guy; I respect the heck out of him for learning all that and showing us how to do it. I think he's a Ph.D. now, or at least he's about to be, so he's really smart; it's just not a commercially usable library. The Math.Net library has its own weirdness when working with Fourier transforms and complex images/numbers. Like, if I'm not mistaken, it outputs the Fourier transform in human-viewable format, which is nice if you want to look at a picture of the transform but not so good when you are expecting the data to be in a certain format (the normal format). I could be mistaken about that, but I just remember there was some weirdness, so I actually went to the original code they used for the Fourier stuff and it worked much better. (ExocortexDSP v1.2 <http://www.exocortex.org/dsp/>) Math.net also had some other funkiness I didn't like when dealing with the data from the FFT; I can't remember what it was, I just know it was much easier to get what I wanted out of the ExoCortex DSP library. I'm not a mathematician or engineer though; to those guys it might make perfect sense. So! I use the FFT code yanked from ExoCortex, which Math.Net is based on, without anything else and it works great. And finally, I know it's not C#, but I've started looking at using FFTW (<http://www.fftw.org/>). And this guy already made a C# wrapper, so I was going to check it out but haven't actually used it yet. (<http://www.sdss.jhu.edu/~tamas/bytes/fftwcsharp.html>) OH! I don't know if you are doing this for school or work, but either way there is a GREAT free lecture series given by a Stanford professor on iTunes University.
<https://podcasts.apple.com/us/podcast/the-fourier-transforms-and-its-applications/id384232849>
An implementation of the fast Fourier transform (FFT) in C#
[ "", "c#", "signal-processing", "fft", "" ]
Does anybody know a way with JavaScript or CSS to basically grey out a certain part of a form/div in HTML? I have a '*User Profile*' form where I want to disable part of it for a '*Non-Premium*' member, but want the user to see what is behind the form and place a '*Call to Action*' on top of it. Does anybody know an easy way to do this either via CSS or JavaScript? Edit: I will make sure that the form doesn't work on the server side, so CSS or JavaScript will suffice.
You might try the jQuery [BlockUI](http://malsup.com/jquery/block/) plugin. It's quite flexible and is very easy to use, if you don't mind the dependency on jQuery. It supports [element-level](http://malsup.com/jquery/block/#element) blocking as well as an overlay message, which seems to be what you need. The code to use it is as simple as: ``` $('div.profileform').block({ message: '<h1>Premium Users only</h1>' }); ``` You should also keep in mind that you may still need some sort of server-side protection to make sure that Non-Premium users can't use your form, since it'll be easy for people to access the form elements if they use something like Firebug.
Add this to your HTML: ``` <div id="darkLayer" class="darkClass" style="display:none"></div> ``` And this to your CSS: ``` .darkClass { background-color: white; filter:alpha(opacity=50); /* IE */ opacity: 0.5; /* Safari, Opera */ -moz-opacity:0.50; /* FireFox */ z-index: 20; height: 100%; width: 100%; background-repeat:no-repeat; background-position:center; position:absolute; top: 0px; left: 0px; } ``` And finally this to turn it off and on with JavaScript: ``` function dimOff() { document.getElementById("darkLayer").style.display = "none"; } function dimOn() { document.getElementById("darkLayer").style.display = ""; } ``` Change the dimensions of the darkClass to suit your purposes.
CSS/JavaScript Use Div to grey out section of page
[ "", "javascript", "css", "" ]
I'm slowly moving all of my LAMP websites from mysql\_ functions to PDO functions and I've hit my first brick wall. I don't know how to loop through results with a parameter. I am fine with the following: ``` foreach ($database->query("SELECT * FROM widgets") as $results) { echo $results["widget_name"]; } ``` However if I want to do something like this: ``` foreach ($database->query("SELECT * FROM widgets WHERE something='something else'") as $results) { echo $results["widget_name"]; } ``` Obviously the 'something else' will be dynamic.
Here is an example for using PDO to connect to a DB, to tell it to throw Exceptions instead of php errors (will help with your debugging), and using parameterised statements instead of substituting dynamic values into the query yourself (highly recommended): ``` // connect to PDO $pdo = new PDO("mysql:host=localhost;dbname=test", "user", "password"); // the following tells PDO we want it to throw Exceptions for every error. // this is far more useful than the default mode of throwing php errors $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // prepare the statement. the placeholders allow PDO to handle substituting // the values, which also prevents SQL injection $stmt = $pdo->prepare("SELECT * FROM product WHERE productTypeId=:productTypeId AND brand=:brand"); // bind the parameters $stmt->bindValue(":productTypeId", 6); $stmt->bindValue(":brand", "Slurm"); // initialise an array for the results $products = array(); $stmt->execute(); while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { $products[] = $row; } ```
According to the [PHP documentation](https://www.php.net/manual/en/pdo.query.php), you should be able to do the following: ``` $sql = "SELECT * FROM widgets WHERE something='something else'"; foreach ($database->query($sql) as $row) { echo $row["widget_name"]; } ```
How do I loop through a MySQL query via PDO in PHP?
[ "", "php", "mysql", "pdo", "" ]
I need to set a dependency property on a control (Slider.Value) in my code without it triggering a ValueChanged event (since I set the value, not the user). What is the best way to handle this situation in WPF? For clarification, what I am trying to do is hook up WPF sliders to a WinForms User Control. Currently in my app I have a ValueChanged event handler that passes the slider value to the WinForms control via a method call. The WinForms control (which is actually a wrapper around a native OpenGL window) needs to be able to dynamically change the slider based on its internal calculations. To do this I use an abstraction of a slider (ISlider), I instantiate a WPF-flavor of that slider in my app, and pass a handle to its base to the WinForms control via a .NET property on the WinForms User Control. All of this is currently working, it's just that when the internal logic decides the slider needs to change, it calls ISlider::SetPos(), which then changes the WPF slider, which then triggers a ValueChanged event on the slider, and the handler for that event extracts the slider's position and passes it in to the WinForms control which originated the event in the first place. The suggestions by ligaz and Alan Le both seem like they should work, but I'm just not sure I'm going about this in the best way.
Here's a simple workaround/hack. Add a boolean to keep track of whether you changed the setting, let's say "IsChangedByMe". When you change the dependency property in code, set the bool to true. In the ValueChanged event, if IsChangedByMe is true, don't do anything.
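The same guard-flag idea, sketched as a plain Python class rather than WPF/C# (all names here are invented for illustration):

```python
class GuardedSlider:
    """Plain-language sketch of the IsChangedByMe guard flag."""
    def __init__(self):
        self.value = 0
        self.handled = []            # values seen by the "event handler"
        self._changed_by_me = False  # the guard flag

    def _on_value_changed(self):
        if self._changed_by_me:
            return                   # programmatic change: ignore it
        self.handled.append(self.value)

    def set_from_user(self, v):
        self.value = v
        self._on_value_changed()

    def set_from_code(self, v):
        self._changed_by_me = True   # raise the flag around the assignment
        try:
            self.value = v
            self._on_value_changed()
        finally:
            self._changed_by_me = False

s = GuardedSlider()
s.set_from_user(5)   # handler runs
s.set_from_code(10)  # handler suppressed
print(s.handled)     # [5]
```

In the WPF case the flag would be checked at the top of the ValueChanged handler and set (then cleared) around the programmatic assignment.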
Are you sure you really want to do that? If there's a piece of UI databound to that property and you allow for changing the value without triggering the ValueChanged event you'd quickly end up with your UI no longer synchronized with the data. In the case of your slider, imagine the user places it at 75%. Now your code changes it to 10% in the background but suppresses the change notification. It still looks on the screen like it's at 75% (since it wasn't told it changed), but it's being used as 10% by the code that's running. Sounds like a recipe for confusion.
Setting WPF dependency property without triggering events
[ "", "c#", "wpf", "events", "dependency-properties", "" ]
I've got a Java client that needs to access a remote database. The goal is to hide database credentials from the user and not hardcode any credentials within the code. Therefore, the database access will probably have to be on the server side. I'm restricted to using Ibatis as a data abstraction framework. Apart from that I have JBoss running on the webserver, allowing me to use data sources. How would you design the remote database access and data serialization/deserialization: would you prefer web services or some kind of data stream over a socket? How would you realize either of the two?
Build a Service Layer and expose it over RMI - possibly as EJB3 stateless session beans as you have JBoss, possibly as pure RMI. I wouldn't bother with web services unless you have a specific need. RMI will take care of serialisation for you. Your service layer needs to expose a method to authenticate users using their credentials entered on startup of the Swing app. All calls for data go through the service layer. No SQL exists in the Swing app. There are other benefits to this arrangement besides just hiding the database credentials. Not only do you end up with a layered architecture, but you gain efficiencies from sharing prepared statements amongst all your clients by having a single data source on the server.
As has already been said, you have to connect to a server which handles the database connection. There is no way to effectively prevent someone with 30 minutes of effort from breaking your security otherwise. If the clients are connecting somewhat locally, within an intranet, using EJBs on your appserver is probably the best choice... though you probably want stateless session beans, I wouldn't necessarily discount message-driven beans. For longer distances, where the traffic is coming from the outside, I would use web services over HTTPS. In any event, most appservers have mechanisms to expose their EJBs as web services with a WSDL, and there are about a hundred utilities to generate clients to call the web service from a WSDL (Axis's wsdl2java works well enough).
JAVA Swing client, Data Access to Remote Database; Ibatis
[ "", "java", "jboss", "data-access-layer", "ibatis", "data-access", "" ]
Unfortunately, sometimes the only way to debug a program is by going through its long log files. I searched for a decent log viewer for a while now, and haven't found a real solution. The only program that seemed to be most appropriate was [Chainsaw](http://logging.apache.org/chainsaw/2.x/download.html) with its Socket connector but after a few short uses the program proved to be buggy and unresponsive at best. For my purposes, a log viewer should at least be able to mark log levels (for example with different colors) and perform easy filtering based on packages and free-text. Is there any other (free) log viewer? I'm looking for anything that could work well with log4j.
You didn't mention an OS, so I'll mention this though it is only on Windows. **Bare Metal Software makes a product called [BareTail](http://www.baremetalsoft.com/baretail/)** that has a nice interface and works well. They have a free version with a startup nag screen, a licensed version with no nag, and a pro version with additional features. **It has configurable highlighting based on matching lines against keywords.** They also have a BareGrep product too, which provides similar grep capabilities. Both are excellent and very stable and better than anything I've seen on Windows. I liked them so much I bought the bundle with both pro versions for $50.
Just wanted to say that I've finally found a tool that I can get along with just fine... It's called LogExpert (see <http://www.log-expert.de/>) and is free. Besides the usual tail function, it also has a filter and a search function - two crucial things that are missing from BareTail. And if you happen to want to customize the way it parses columns further, it's dead simple. Just implement an interface in .NET and you're done (and I'm a Java/Flex programmer...)
Java Log Viewer
[ "", "java", "log4j", "viewer", "" ]
Recently I have started playing with jQuery, and have been following a couple of tutorials. Now I feel slightly competent with using it (it's pretty easy), and I thought it would be cool if I were able to make a 'console' on my webpage (as in, you press the ` key like you do in [FPS](http://en.wiktionary.org/wiki/first-person_shooter) games, etc.), and then have it Ajax itself back to the server in order to do stuff. I originally thought the best way would be to just get the text inside the textarea and then split it. Or should I use the keyup event, convert the keycode returned to an ASCII character, append the character to a string and send the string to the server (then empty the string)? I couldn't find any information on getting text from a textarea; all I got was keyup information. Also, how can I convert the keycode returned to an ASCII character?
Why would you want to convert key strokes to text? Add a button that sends the text inside the textarea to the server when clicked. You can get the text using the value attribute as the poster before has pointed out, or using jQuery's API: ``` $('input#mybutton').click(function() { var text = $('textarea#mytextarea').val(); //send to server and process response }); ```
While it is often the text() function you use elsewhere (e.g. in divs etc.), for a textarea it is val(). get: ``` $('#myTextBox').val(); ``` set: ``` $('#myTextBox').val('new value'); ```
jQuery get textarea text
[ "", "javascript", "jquery", "ajax", "" ]
I'm trying to generate some code at runtime where I put in some boiler-plate stuff and the user is allowed to enter the actual working code. My boiler-plate code looks something like this: ``` using System; public class ClassName { public double TheFunction(double input) { // user entered code here } } ``` Ideally, I think I want to use string.Format to insert the user code and create a unique class name, but I get an exception on the format string unless it looks like this: ``` string formatString = @" using System; public class ClassName {0} public double TheFunction(double input) {0} {2} {1} {1}"; ``` Then I call string.Format like this: ``` string entireClass = string.Format(formatString, "{", "}", userInput); ``` This is fine and I can deal with the ugliness of using {0} and {1} in the format string in place of my curly braces except that now my user input cannot use curly braces either. Is there a way to either escape the curly braces in my format string, or a good way to turn the curly braces in the user code into {0}'s and {1}'s? BTW, I know that this kind of thing is a security problem waiting to happen, but this is a Windows Forms app that's for internal use on systems that are not connected to the net so the risk is acceptable in this situation.
Escape them by doubling them up: ``` string s = String.Format("{{ hello to all }}"); Console.WriteLine(s); //prints '{ hello to all }' ``` From <http://msdn.microsoft.com/en-us/netframework/aa569608.aspx#Question1>
"{{" and "}}"
How do you put { and } in a format string
[ "", "c#", ".net", "string", "formatting", "" ]
I have a list of Date objects, and a target Date. I want to find the date in the list that's nearest to the target date, but only dates that are before the target date. Example: 2008-10-1 2008-10-2 2008-10-4 With a target date of 2008-10-3, I want to get 2008-10-2 What is the best way to do it?
Sietse de Kaper's solution assumes a *reverse*-sorted list, definitely not the most natural thing to have around. The natural sort order in Java is the ascending natural ordering (see the Collections.sort <http://java.sun.com/j2se/1.5.0/docs/api/java/util/Collections.html#sort(java.util.List)> documentation). From your example,

```
target date = 2008-10-03
list = 2008-10-01
2008-10-02
2008-10-04
```

If another developer uses your method with a naive approach he would get 2008-10-01, which is not what was expected.

- Don't make assumptions as to the ordering of the list.
- If you have to for performance reasons, try to follow the most natural convention (sorted ascending).
- If you really have to follow another convention, you really should document the hell out of it.

```
private Date getDateNearest(List<Date> dates, Date targetDate){
    Date returnDate = targetDate;
    for (Date date : dates) {
        // if the current iteration's date is "before" the target date
        if (date.compareTo(targetDate) <= 0) {
            // if the current iteration's date is "after" the current return date
            if (date.compareTo(returnDate) > 0){
                returnDate = date;
            }
        }
    }
    return returnDate;
}
```

edit - I also like the TreeSet answer, but I think it might be slightly slower, as it is equivalent to sorting the data and then looking it up: n log(n) for sorting, and then the documentation implies log(n) for access, so that would be n log(n) + log(n) vs n
``` private Date getDateNearest(List<Date> dates, Date targetDate){ return new TreeSet<Date>(dates).lower(targetDate); } ``` Doesn't require a pre-sorted list; TreeSet fixes that. It'll return null if it can't find one though, so you will have to modify it if that's a problem. Not sure of the efficiency either :P
Best way to find date nearest to target in a list of dates?
[ "", "java", "date", "" ]
Ok, this is a bit of a cheeky question. I want to build a simple text editor (using my own text mode screen handling). I just want a good example of data structures that can be used to represent the text buffer, and some simple examples of char/text insertion/deletion. I can handle all the rest of the code myself (file i/o, console i/o etc). A link to a nice simple editor source would be great (C or C++).
This is 2008. Don't write a text editor; you're reinventing fire. Still here? I'm not sure if this applies or what platforms you plan to support, but the [Neatpad series of tutorials](http://www.catch22.net/tuts/neatpad/) is a great place to start thinking about writing a text editor. They focus on Win32 as the basic platform, but many of the lessons learned will apply anywhere.
I used to work for a company whose main product was a text editor. While I mainly worked on the scripting language for it, the internal design of the editor itself was naturally a major topic of discussion. It seemed like it broke down into two general trains of thought. One was that you stored each line by itself, and then linked them together in a linked list or other overall data structure that you were happy with. The advantage was that any line-oriented editing actions (such as deleting an entire line, or moving a line block within a file) were beyond trivial to implement and therefore lightning fast. The downside was that loading and saving the file took a bit more work, because you'd have to traverse the entire file and build these data structures. The other train of thought at that time was to try to keep hunks of text together regardless of line breaks when they hadn't been changed, breaking them up only as required by editing. The advantage was that an unedited hunk of the file could be blasted out to a file very easily. So simple edits where you load a file, change one line, and save the file, were super fast. The disadvantage was that line-oriented or column-block operations were very time consuming to execute, because you would have to parse through these hunks of text and move a lot of data around. We always stuck with the line-oriented design, for whatever that is worth, and our product was considered one of the fastest editors at the time.
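The line-oriented design described above can be sketched in a few lines; this is an illustrative Python toy (not anyone's actual editor code), storing one list entry per line:

```python
class LineBuffer:
    """Toy line-oriented text buffer: one list entry per line."""
    def __init__(self, text=""):
        self.lines = text.split("\n")   # "loading" means splitting the whole file

    def insert_line(self, index, line):
        self.lines.insert(index, line)  # trivial to write, fast in practice

    def delete_line(self, index):
        del self.lines[index]           # deleting a whole line is one operation

    def text(self):
        return "\n".join(self.lines)    # "saving" means joining everything back

buf = LineBuffer("one\ntwo\nthree")
buf.delete_line(1)
buf.insert_line(1, "TWO")
print(buf.text())  # one / TWO / three on separate lines
```

Line insertion and deletion are single list operations, while loading and saving touch the whole text, which is exactly the trade-off described.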
Editing a text buffer
[ "", "c++", "c", "open-source", "editor", "" ]
I tried: ``` DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(f); Node mapNode = getMapNode(doc); System.out.print("\r\n elementName "+ mapNode.getNodeName());//This works fine. Element e = (Element) mapNode; //This is where the error occurs //it seems to work on my machine, but not on the server. e.setAttribute("objectId", "OBJ123"); ``` But this throws a java.lang.ClassCastException error on the line that casts it to Element. **mapNode is a valid node.** I already have it printing out its name. I think maybe this code does not work in Java 1.4. What I really need is an alternative to using Element. I tried doing ``` NamedNodeMap atts = mapNode.getAttributes(); Attr att = doc.createAttribute("objId"); att.setValue(docId); atts.setNamedItem(att); ``` But getAttributes() returns null on the server, even though the node is not null and I am using the same document locally as on the server. It can print out the getNodeName(); it's just that the getAttributes() does not work.
I was using a different DTD file on the server. That was causing the issue.
Might the first child be a whitespace only text node or suchlike? Try: ``` System.out.println(doc.getFirstChild().getClass().getName()); ``` EDIT: Just looked it up in my own code, you need: ``` doc.getDocumentElement().getChildNodes(); ``` Or: ``` NodeList nodes = doc.getElementsByTagName("MyTag"); ```
How to add an attribute to an XML node in Java 1.4
[ "", "java", "xml", "" ]
Where X is any programming language (C#, Javascript, Lisp, Perl, Ruby, Scheme, etc) which supports some flavour of closures. Some limitations are mentioned in the [Closures in Python](http://ivan.truemesh.com/archives/000411.html) (compared to Ruby's closures), but the article is old and many limitations do not exist in modern Python any more. Seeing a code example for a concrete limitation would be great. **Related questions**: * [Can you explain closures (as they relate to Python)?](https://stackoverflow.com/questions/13857/can-you-explain-closures-as-they-relate-to-python) * [What is a ‘Closure’?](https://stackoverflow.com/questions/36636/what-is-a-closure) * [How does a javascript closure work ?](https://stackoverflow.com/questions/111102/how-does-a-javascript-closure-work)
The most important limitation, currently, is that you cannot assign to an outer-scope variable. In other words, closures are read-only: ``` >>> def outer(x): ... def inner_reads(): ... # Will return outer's 'x'. ... return x ... def inner_writes(y): ... # Will assign to a local 'x', not the outer 'x' ... x = y ... def inner_error(y): ... # Will produce an error: 'x' is local because of the assignment, ... # but we use it before it is assigned to. ... tmp = x ... x = y ... return tmp ... return inner_reads, inner_writes, inner_error ... >>> inner_reads, inner_writes, inner_error = outer(5) >>> inner_reads() 5 >>> inner_writes(10) >>> inner_reads() 5 >>> inner_error(10) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 11, in inner_error UnboundLocalError: local variable 'x' referenced before assignment ``` A name that gets assigned to in a local scope (a function) is always local, unless declared otherwise. While there is the 'global' declaration to declare a variable global even when it is assigned to, there is no such declaration for enclosed variables -- yet. In Python 3.0, there is (will be) the 'nonlocal' declaration that does just that. You can work around this limitation in the mean time by using a mutable container type: ``` >>> def outer(x): ... x = [x] ... def inner_reads(): ... # Will return outer's x's first (and only) element. ... return x[0] ... def inner_writes(y): ... # Will look up outer's x, then mutate it. ... x[0] = y ... def inner_error(y): ... # Will now work, because 'x' is not assigned to, just referenced. ... tmp = x[0] ... x[0] = y ... return tmp ... return inner_reads, inner_writes, inner_error ... >>> inner_reads, inner_writes, inner_error = outer(5) >>> inner_reads() 5 >>> inner_writes(10) >>> inner_reads() 10 >>> inner_error(15) 10 >>> inner_reads() 15 ```
The only difficulty I've seen people encounter with Python's in particular is when they try to mix non-functional features like variable reassignment with closures, and are surprised when this doesn't work: ``` def outer (): x = 1 def inner (): print x x = 2 return inner outer () () ``` Usually just pointing out that a function has its own local variables is enough to deter such silliness.
What limitations have closures in Python compared to language X closures?
[ "", "python", "closures", "" ]
> **Possible Duplicate:** > [C#: How to enumerate an enum?](https://stackoverflow.com/questions/105372/c-how-to-enumerate-an-enum) The subject says all. I want to use that to add the values of an enum in a combobox. Thanks vIceBerg
``` string[] names = Enum.GetNames (typeof(MyEnum)); ``` Then just populate the dropdown with the array
I know others have already answered with a correct answer; however, if you're wanting to use the enumerations in a combo box, you may want to go the extra yard and associate strings to the enum so that you can provide more detail in the displayed string (such as spaces between words or display strings using casing that doesn't match your coding standards). This blog entry may be useful - [Associating Strings with enums in c#](http://blog.spontaneouspublicity.com/2008/01/17/associating-strings-with-enums-in-c/) ``` public enum States { California, [Description("New Mexico")] NewMexico, [Description("New York")] NewYork, [Description("South Carolina")] SouthCarolina, Tennessee, Washington } ``` As a bonus, he also supplied a utility method for enumerating the enumeration that I've now updated with Jon Skeet's comments ``` public static IEnumerable<T> EnumToList<T>() where T : struct { Type enumType = typeof(T); // Can't use generic type constraints on value types, // so have to do check like this if (enumType.BaseType != typeof(Enum)) throw new ArgumentException("T must be of type System.Enum"); Array enumValArray = Enum.GetValues(enumType); List<T> enumValList = new List<T>(); foreach (T val in enumValArray) { enumValList.Add(val); } return enumValList; } ``` Jon also pointed out that in C# 3.0 it can be simplified to something like this (which is now getting so light-weight that I'd imagine you could just do it in-line): ``` public static IEnumerable<T> EnumToList<T>() where T : struct { return Enum.GetValues(typeof(T)).Cast<T>(); } // Using above method statesComboBox.Items = EnumToList<States>(); // Inline statesComboBox.Items = Enum.GetValues(typeof(States)).Cast<States>(); ```
Is there a way to iterate through all enum values?
[ "", "c#", "enumeration", "" ]
Currently, primary keys in our system are 10 digits long, just over the limit for Java Integers. I want to avoid any maintenance problems down the road caused by numeric overflow in these keys, but at the same time I do not want to sacrifice much system performance to store infinitely large numbers that I will never need. How do you handle managing the size of a primary key? Am I better off sticking with Java integers, for the performance benefit over the larger Long, and increasing the size when needed, or should I bite the bullet, go with Java Long for most of my PKs, and never have to worry about overflowing the sequence size?
I've always gone with long keys (number(18,0) in the database) because they simply remove the possibility of this situation arising in pretty much all cases (extreme data-hoarding-style applications aside). Having the same data type across all tables for the key means you can share that field across all of your model objects in a parent class, as well as having consistent code for your SQL getters, and so on.
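To make the headroom concrete, the plain arithmetic (shown here as a quick Python sketch) works out like this:

```python
INT_MAX = 2**31 - 1    # Java int:  2,147,483,647 (10 digits, barely)
LONG_MAX = 2**63 - 1   # Java long: 9,223,372,036,854,775,807 (19 digits)

print(len(str(INT_MAX)))     # 10 -- a 10-digit key can already overflow an int
print(len(str(LONG_MAX)))    # 19 -- so number(18,0) fits a long with room to spare
print(10**10 - 1 > INT_MAX)  # True: the largest 10-digit key exceeds int range
```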
It seems like the answer depends on how likely you are to overflow the Java integers with your data. And there's no way to know that without some idea of what your data is.
What is the proper size for a sequence-generated primary key?
[ "", "java", "database", "integer", "primary-key", "long-integer", "" ]
I'd like all queries like ``` http://mysite.com/something/otherthing?foo=bar&x=y ``` to be rewritten as ``` http://mysite.com/something/otherthing.php?foo=bar&x=y ``` In other words, just make the .php extension optional, universally.
I would do it this way. Basically, if the file doesn't exist, try adding .php to it. ``` RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)$ $1.php [QSA,L] ```
This works: ``` RewriteCond %{QUERY_STRING} ^.+$ RewriteRule ^/?([^/\.]+)$ /$1.php [L] ``` The idea is to make sure there's a query string (question mark plus stuff) and if so check if the stuff before the question mark has no extension and if so, append .php.
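To sanity-check what that regular expression itself matches (this exercises only the pattern, not Apache's full per-directory matching semantics; the helper function is made up for illustration):

```python
import re

# The same pattern the rule uses: an optional leading slash, then one
# path segment containing neither "/" nor "."
pattern = re.compile(r"^/?([^/\.]+)$")

def rewritten(path):
    m = pattern.match(path)
    return "/%s.php" % m.group(1) if m else None

print(rewritten("/otherthing"))      # /otherthing.php
print(rewritten("/otherthing.php"))  # None: already has a dot, left alone
print(rewritten("/a/b"))             # None: contains a slash, left alone
```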
Rewrite all queries to not need the .php extension using a mod_rewrite RewriteRule
[ "", "php", "url", "mod-rewrite", "friendly-url", "" ]
I have a class that has a generic type "G". In my class model I have ``` public class DetailElement : ElementDefinition ``` Let's say I have a method like this ``` public void DoSomething<G>(G generic) where G : ElementDefinition { if (generic is DetailElement) { ((DetailElement)generic).DescEN = "Hello people"; //line 1 ////// ElementDefinition element = generic; ((DetailElement)element).DescEN = "Hello again"; //line 3 ////// (generic as DetailElement).DescEN = "Howdy"; //line 5 } else { //do other stuff } } ``` Compiler reports one error in line 1: ``` Cannot convert type 'G' to 'DetailElement' ``` But line 3 works fine. I can work around this issue by doing the code written in line 5. **What I would like to know is why the compiler reports the error in line 1 and not in line 3, given that, as far as I know, they are identical.** edit: I am afraid I might be missing some important piece of the framework logic edit2: Although solutions for the compiler error are important, my question is about why the compiler reports an error on line 1 and not in line 3.
If `G` were constrained to be a `DetailElement` (`where G : DetailElement`) then you could go ahead and cast `G` to `DetailElement`, i.e., "`(DetailElement) generic`". But because `G` might be another subclass of `ElementDefinition` other than `DetailElement` at run-time, the compiler won't allow the cast at compile-time, where the type is unknown and unverifiable. **In line 3 the type you cast from *is* known to be an `ElementDefinition`, so all you're doing is an ordinary cast from a known class type**. The compiler doesn't know if it will be a successful cast at run-time, but it will trust you there. The compiler is not so trusting for generics. The *`as`* operator in line 5 might also return null and the compiler doesn't statically check the type to see if it's safe in that case. You can use `as` with *any* type, not just ones that are compatible with `ElementDefinition`. From *[Can I Cast to and from Generic Type Parameters?](http://msdn.microsoft.com/en-us/library/aa479858.aspx#bestpractices_topic5 "Generics FAQ: Best Practices")* on MSDN: > The compiler will only let you implicitly cast generic type parameters to object, or to constraint-specified types. > > Such implicit casting is of course type safe, because any incompatibility is discovered at compile-time.
> > The compiler will let you explicitly cast generic type parameters to any interface, but not to a class: > > ``` > interface ISomeInterface {...} > class SomeClass {...} > class MyClass<T> > { > void SomeMethod(T t) > { > ISomeInterface obj1 = (ISomeInterface)t;//Compiles > SomeClass obj2 = (SomeClass)t; //Does not compile > } > } > ``` > > However, you can force a cast from a generic type parameter to any other type using a temporary object variable > > ``` > void SomeMethod<T>(T t) > { object temp = t; > MyOtherClass obj = (MyOtherClass)temp; > } > ``` > > Needless to say, such explicit casting is dangerous because it may throw an exception at run-time if the concrete type used instead of the generic type parameter does not derive from the type you explicitly cast to. > > Instead of risking a casting exception, a better approach is to use the `is` or `as` operators. The `is` operator returns true if the generic type parameter is of the queried type, and `as` will perform a cast if the types are compatible, and will return null otherwise. > > ``` > public void SomeMethod(T t) > { > if(t is int) {...} > > string str = t as string; > if(str != null) {...} > } > ```
Shouldn't your where clause be "where G : DetailElement"? In the code you've written, a DetailElement is an ElementDefinition, but an ElementDefinition is not necessarily a DetailElement. So the implicit conversion is illegal. Are there other types of ElementDefinition that you might pass into this method? If so, they'll throw an exception when you try to cast them into DetailElement instances. EDIT: Okay, so now that you've changed your code listing, I can see that you're checking the type to make sure it really is a DetailElement before entering that block of code. Unfortunately, the fact of the matter is that the compiler won't let you cast the type parameter straight down to `DetailElement`, even if you've already checked the types yourself. I think you really ought to use the "as" keyword at the beginning of your block: ``` DetailElement detail = generic as DetailElement; if (detail == null) { // process other types of ElementDefinition } else { // process DetailElement objects } ``` Better yet, why not use polymorphism to allow each kind of ElementDefinition to define its own DoSomething method, and let the CLR take care of type-checking and method invocation for you?
Compiler fails converting a constrained generic type
[ "", "c#", "generics", "casting", "constraints", "" ]
Everything I read about better PHP coding practices keeps saying don't use `require_once` because of speed. Why is this? What is the proper/better way to do the same thing as `require_once`? If it matters, I'm using PHP 5.
`require_once` and `include_once` both require that the system keeps a log of what's already been included/required. Every `*_once` call means checking that log. So there's definitely *some* extra work being done there but enough to hurt the speed of the whole app? ... I really doubt it... Not unless you're on *really* old hardware or doing it a *lot*. If you *are* doing thousands of `*_once`, you could do the work yourself in a lighter fashion. For simple apps, just making sure you've only included it once *should* suffice, but if you're still getting redefine errors, you could use something like this: ``` if (!defined('MyIncludeName')) { require('MyIncludeName'); define('MyIncludeName', 1); } ``` I'll personally stick with the `*_once` statements but on a silly million-pass benchmark, you can see a difference between the two: ``` php hhvm if defined 0.18587779998779 0.046600103378296 require_once 1.2219581604004 3.2908599376678 ``` 10-100× slower with `require_once` and it's curious that `require_once` is seemingly slower in `hhvm`. Again, this is only relevant to your code if you're running `*_once` thousands of times. --- ``` <?php // test.php $LIMIT = 1000000; $start = microtime(true); for ($i=0; $i<$LIMIT; $i++) if (!defined('include.php')) { require('include.php'); define('include.php', 1); } $mid = microtime(true); for ($i=0; $i<$LIMIT; $i++) require_once('include.php'); $end = microtime(true); printf("if defined\t%s\nrequire_once\t%s\n", $mid-$start, $end-$mid); ``` --- ``` <?php // include.php // do nothing. ```
This thread makes me cringe, because there's already been a "solution posted", and it's, for all intents and purposes, wrong. Let's enumerate: 1. Defines are **really** expensive in PHP. You can [look it up](http://bugs.php.net/bug.php?id=40165) or test it yourself, but the only efficient way of defining a global constant in PHP is via an extension. (Class constants are actually pretty decent performance wise, but this is a moot point, because of 2) 2. If you are using `require_once()` appropriately, that is, for inclusion of classes, you don't even need a define; just check if `class_exists('Classname')`. If the file you are including contains code, i.e. you're using it in the procedural fashion, there is absolutely no reason that `require_once()` should be necessary for you; each time you include the file you presume to be making a subroutine call. So for a while, a lot of people did use the `class_exists()` method for their inclusions. I don't like it because it's fugly, but they had good reason to: `require_once()` was pretty inefficient before some of the more recent versions of PHP. But that's been fixed, and it is my contention that the extra bytecode you'd have to compile for the conditional, and the extra method call, would by far outweigh any internal hashtable check. Now, an admission: this stuff is tough to test for, because it accounts for so little of the execution time. Here is the question you should be thinking about: includes, as a general rule, are expensive in PHP, because every time the interpreter hits one it has to switch back into parse mode, generate the opcodes, and then jump back. If you have 100+ includes, this will definitely have a performance impact. The reason why using or not using require\_once is such an important question is because it makes life difficult for opcode caches. 
An [explanation for this](http://t3.dotgnu.info/blog/php/demystifying-autofilter.html) can be found here, but what this boils down to is that: * If during parse time, you know exactly what include files you will need for the entire life of the request, `require()` those at the very beginning and the opcode cache will handle everything else for you. * If you are not running an opcode cache, you're in a hard place. Inlining all of your includes into one file (don't do this during development, only in production) can certainly help parse time, but it's a pain to do, and also, you need to know exactly what you'll be including during the request. * Autoload is very convenient, but slow, for the reason that the autoload logic has to be run every time an include is done. In practice, I've found that autoloading several specialized files for one request does not cause too much of a problem, but you should not be autoloading all of the files you will need. * If you have maybe 10 includes (this is a *very* back of the envelope calculation), all this wanking is not worth it: just optimize your database queries or something.
Why is require_once so bad to use?
[ "", "php", "performance", "require-once", "" ]
I have a HTML select list with quite a few (1000+) names. I have a javascript in place which will select the first matching name if someone starts typing. This matching looks at the start of the item: ``` var optionsLength = dropdownlist.options.length; for (var n=0; n < optionsLength; n++) { var optionText = dropdownlist.options[n].text; if (optionText.indexOf(dropdownlist.keypressBuffer,0) == 0) { dropdownlist.selectedIndex = n; return false; } } ``` The customer would like to have a suggest or autofilter: typing part of a name should 'find' all names containing that part. I've seen a few Google Suggest like options, most using Ajax, but I'd like a pure javascript option, since the select list is already loaded anyway. Pointers anyone?
use this filter script <http://www.barelyfitz.com/projects/filterlist/>
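If pulling in an external script is overkill, the core of such a filter can be sketched in a few lines of plain JavaScript. The helper below is hypothetical (it is not part of the linked script): it takes the option labels as an array of strings and returns the ones containing the typed text, case-insensitively; wiring it to the actual `<select>` element is left to the page.

```javascript
// Hypothetical helper: given the option labels and the text typed so far,
// return every label that contains the text (case-insensitive).
function filterOptions(labels, typed) {
  var needle = typed.toLowerCase();
  var matches = [];
  for (var i = 0; i < labels.length; i++) {
    if (labels[i].toLowerCase().indexOf(needle) !== -1) {
      matches.push(labels[i]);
    }
  }
  return matches;
}
```

In the page you would collect `dropdownlist.options[n].text` into the array, call the helper on each keypress, and rebuild (or select within) the list from the matches.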
Change ``` if (optionText.indexOf(dropdownlist.keypressBuffer,0) == 0) ``` to ``` if (optionText.indexOf(dropdownlist.keypressBuffer) >= 0) ``` to find `dropdownlist.keypressBuffer` anywhere in the `optionText` (note `>= 0`, since a match at the very start of the text returns index 0).
How can I auto filter a HTML selectlist?
[ "", "javascript", "asp.net", "autosuggest", "autofilter", "" ]
I am attempting to write an application that uses libCurl to post soap requests to a secure web service. This Windows application is built against libCurl version 7.19.0 which, in turn, is built against openssl-0.9.8i. The pertinent curl related code follows: > ``` > FILE *input_file = fopen(current->post_file_name.c_str(), "rb"); > FILE *output_file = fopen(current->results_file_name.c_str(), "wb"); > if(input_file && output_file) > { > struct curl_slist *header_opts = 0; > CURLcode rcd; > > header_opts = curl_slist_append(header_opts, "Content-Type: application/soap+xml; charset=utf8"); > curl_easy_reset(curl_handle); > curl_easy_setopt(curl_handle, CURLOPT_NOPROGRESS, 1); > curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, output_file); > curl_easy_setopt(curl_handle, CURLOPT_READDATA, input_file); > curl_easy_setopt(curl_handle, CURLOPT_URL, fs_service_url); > curl_easy_setopt(curl_handle, CURLOPT_POST, 1); > curl_easy_setopt(curl_handle, CURLOPT_HTTPHEADER, header_opts); > rcd = curl_easy_perform(curl_handle); > if(rcd != 0) > { > current->curl_result = rcd; > current->curl_error = curl_easy_strerror(rcd); > } > curl_slist_free_all(header_opts); > } > ``` When I attempt to execute the URL, curl returns an CURLE\_OUT\_OF\_MEMORY error which appears to be related to a failure to allocate an SSL context. Has anyone else encountered this problem before?
After further investigation, I found that this error was due to a failure to initialise the openSSL library by calling SSL\_library\_init().
I had the same problem, just thought I'd add the note that rather than calling the OpenSsl export SSL\_library\_init directly it can be fixed by adding the flag CURL\_GLOBAL\_SSL to [curl\_global\_init](http://curl.haxx.se/libcurl/c/curl_global_init.html)
"CURLE_OUT_OF_MEMORY" error when posting via https
[ "", "c++", "curl", "https", "openssl", "" ]
Which is the best timer approach for a C# console batch application that has to process as follows: 1. Connect to datasources 2. process batch until timeout occurs or processing complete. "Do something with datasources" 3. stop console app gracefully. related question: [How do you add a timer to a C# console application](https://stackoverflow.com/questions/186084/how-do-you-add-a-timer-to-a-c-console-application)
Sorry for this being an entire console app... but here's a complete console app that will get you started. Again, I apologize for so much code, but everyone else seems to be giving a "oh, all you have to do is do it" answer :) ``` using System; using System.Collections.Generic; using System.Threading; namespace ConsoleApplication1 { class Program { static List<RunningProcess> runningProcesses = new List<RunningProcess>(); static void Main(string[] args) { Console.WriteLine("Starting..."); for (int i = 0; i < 100; i++) { DoSomethingOrTimeOut(30); } bool isSomethingRunning; do { isSomethingRunning = false; foreach (RunningProcess proc in runningProcesses) { // If this process is running... if (proc.ProcessThread.ThreadState == ThreadState.Running) { isSomethingRunning = true; // see if it needs to timeout... if (DateTime.Now.Subtract(proc.StartTime).TotalSeconds > proc.TimeOutInSeconds) { proc.ProcessThread.Abort(); } } } Thread.Sleep(100); // yield briefly instead of spinning at 100% CPU } while (isSomethingRunning); Console.WriteLine("Done!"); Console.ReadLine(); } static void DoSomethingOrTimeOut(int timeout) { runningProcesses.Add(new RunningProcess { StartTime = DateTime.Now, TimeOutInSeconds = timeout, ProcessThread = new Thread(new ThreadStart(delegate { // do task here... })), }); runningProcesses[runningProcesses.Count - 1].ProcessThread.Start(); } } class RunningProcess { public int TimeOutInSeconds { get; set; } public DateTime StartTime { get; set; } public Thread ProcessThread { get; set; } } } ```
It depends on how accurate you want your stopping time to be. If your tasks in the batch are reasonably quick and you don't need to be very accurate, then I would try to make it single threaded: ``` DateTime runUntil = DateTime.Now.Add(timeout); foreach (Task task in tasks) { if (DateTime.Now >= runUntil) { throw new MyException("Timeout"); } Process(task); } ``` Otherwise you need to go multithreaded, which is always more difficult, because you need to figure out how to terminate your task in the middle without causing side effects. You could use the Timer from System.Timers: <http://msdn.microsoft.com/en-us/library/system.timers.timer(VS.71).aspx> or Thread.Sleep. When the time-out event occurs you can terminate the thread that does the actual processing, clean up and end the process.
Best Timer approach in C# console batch application
[ "", "c#", "batch-file", "console", "timer", "" ]
In the Google C++ Style Guide, there's a section on [Operator Overloading](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Operator_Overloading) that has a curious statement: > Overloading also has surprising > ramifications. For instance, you can't > forward declare classes that overload > `operator&`. This seems incorrect, and I haven't been able to find any code that causes GCC to have a problem with it. Does anyone know what that statement is referring to?
5.3.1 of the Standard has "The address of an object of incomplete type can be taken, but if the complete type of that object is a class type that declares operator&() as a member function, then the behavior is undefined (and no diagnostic is required)." I didn't know this either, but as another poster has pointed out, it's easy to see how it could cause a compiler to generate incorrect code.
I hadn't heard of it either, but this gives potentially confusing results for the same code before and after the overload: ``` #include <iostream> class Foo; void bar (Foo& foo) { std::cout << &foo << std::endl; } class Foo { public: bool operator & () { return true; } }; void baz (Foo& foo) { std::cout << &foo << std::endl; } int main () { Foo foo; bar(foo); baz(foo); return 0; } ``` output: ``` 0x7fff092c55df 1 ``` Though there are other reasons why you wouldn't do that anyway - overloading address-of doesn't play nicely with stl or much generic code.
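As a side illustration (mine, not taken from either answer) of why overloading `operator&` is hazardous: once it is overloaded, plain `&x` no longer yields the object's address. Since C++11, `std::addressof` bypasses the overload. The `Foo` below is deliberately pathological.

```cpp
#include <cassert>
#include <memory>  // std::addressof (C++11)

struct Foo {
    int value = 42;
    // Pathological overload: address-of no longer returns the real address.
    Foo* operator&() { return nullptr; }
};

// True when &f is hijacked by the overload while std::addressof still
// reaches the built-in address-of operator.
bool overload_hijacks_address_of() {
    Foo f;
    return (&f == nullptr) && (std::addressof(f) != nullptr);
}
```

This is also why generic code (the STL included) can misbehave on such types: it often writes `&x` and silently gets whatever the overload returns.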
"You can't forward declare classes that overload operator&"?
[ "", "c++", "gcc", "" ]
I have a few asynchronous tasks running and I need to wait until at least one of them is finished (in the future I'll probably need to wait until M out of N tasks are finished). Currently they are presented as Future, so I need something like ``` /** * Blocks current thread until one of specified futures is done and returns it. */ public static <T> Future<T> waitForAny(Collection<Future<T>> futures) throws AllFuturesFailedException ``` Is there anything like this? Or anything similar, not necessarily for Future. Currently I loop through the collection of futures, check if one is finished, then sleep for some time and check again. This does not look like the best solution, because if I sleep for a long period then an unwanted delay is added, and if I sleep for a short period then it can affect performance. I could try using ``` new CountDownLatch(1) ``` and decrease the countdown when a task is complete and do ``` countdown.await() ``` , but I found it possible only if I control Future creation. It is possible, but requires system redesign, because currently the logic of task creation (sending a Callable to ExecutorService) is separated from the decision to wait for which Future. I could also override ``` <T> RunnableFuture<T> AbstractExecutorService.newTaskFor(Callable<T> callable) ``` and create a custom implementation of RunnableFuture with the ability to attach a listener to be notified when the task is finished, then attach such a listener to the needed tasks and use CountDownLatch, but that means I have to override newTaskFor for every ExecutorService I use - and potentially there will be implementations which do not extend AbstractExecutorService. I could also try wrapping the given ExecutorService for the same purpose, but then I have to decorate all methods producing Futures. All these solutions may work but seem very unnatural. It looks like I'm missing something simple, like ``` WaitHandle.WaitAny(WaitHandle[] waitHandles) ``` in c#. Are there any well known solutions for this kind of problem? 
UPDATE: Originally I did not have access to Future creation at all, so there was no elegant solution. After redesigning the system I got access to Future creation and was able to add countDownLatch.countdown() to the execution process; then I can countDownLatch.await() and everything works fine. Thanks for the other answers; I did not know about ExecutorCompletionService and it indeed can be helpful in similar tasks, but in this particular case it could not be used because some Futures are created without any executor - the actual task is sent to another server via the network, completes remotely and a completion notification is received.
As far as I know, Java has no analogous structure to the `WaitHandle.WaitAny` method. It seems to me that this could be achieved through a "WaitableFuture" decorator, for example built on `FutureTask`, whose `done()` hook runs when the task completes: ``` public class WaitableFuture<T> extends FutureTask<T> { private final CountDownLatch countDownLatch; public WaitableFuture(Callable<T> callable, CountDownLatch countDownLatch) { super(callable); this.countDownLatch = countDownLatch; } @Override protected void done() { this.countDownLatch.countDown(); } } ``` Though this would only work if it can be inserted before the execution code, since otherwise the task would not pass through the `done()` hook. But I really see no way of doing this without polling if you cannot somehow gain control of the Future object before execution. Or if the future always runs in its own thread, and you can somehow get that thread. Then you could spawn a new thread to join each other thread, then handle the waiting mechanism after the join returns... This would be really ugly and would induce a lot of overhead though. And if some Future objects don't finish, you could have a lot of blocked threads depending on dead threads. If you're not careful, this could leak memory and system resources. ``` /** * Extremely ugly way of implementing WaitHandle.WaitAny for Thread.Join(). */ public static void joinAny(Collection<Thread> threads, int numberToWaitFor) throws InterruptedException { CountDownLatch countDownLatch = new CountDownLatch(numberToWaitFor); for (Thread thread : threads) { new Thread(new JoinThreadHelper(thread, countDownLatch)).start(); } countDownLatch.await(); } class JoinThreadHelper implements Runnable { Thread thread; CountDownLatch countDownLatch; JoinThreadHelper(Thread thread, CountDownLatch countDownLatch) { this.thread = thread; this.countDownLatch = countDownLatch; } public void run() { try { this.thread.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } this.countDownLatch.countDown(); } } ```
simple, check out [ExecutorCompletionService](http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ExecutorCompletionService.html).
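To make that concrete, here is a minimal sketch (the names are mine, not from the question) of `waitForAny` built on `ExecutorCompletionService`: `take()` blocks until whichever submitted task finishes first.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WaitForAnyDemo {
    // Blocks until the first submitted task completes and returns its result.
    static <T> T waitForAny(ExecutorService pool, List<Callable<T>> tasks)
            throws InterruptedException, ExecutionException {
        CompletionService<T> ecs = new ExecutorCompletionService<>(pool);
        for (Callable<T> task : tasks) {
            ecs.submit(task);
        }
        return ecs.take().get(); // first finished Future, whichever it is
    }

    public static void main(String[] args) {
        try {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            Callable<String> slow = () -> { Thread.sleep(500); return "slow"; };
            Callable<String> fast = () -> "fast";
            String winner = waitForAny(pool, Arrays.asList(slow, fast));
            if (!"fast".equals(winner)) {
                throw new IllegalStateException("expected the fast task to finish first");
            }
            System.out.println("first finished: " + winner);
            pool.shutdownNow();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Calling `take()` M times instead of once gives "wait until M of N are done".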
Wait until any of Future<T> is done
[ "", "java", "multithreading", "concurrency", "" ]
Many database systems don't allow comments or descriptions of tables and fields, so how do you go about documenting the purpose of a table/field apart from the obvious of having good naming conventions? (Let's assume for now that "excellent" table and field names are not enough to document the full meaning of every table, field and relationship in the database.) I know many people use UML diagrams to visualize the database, but I have rarely—if ever—seen a UML diagram including field comments. However, I have good experience with using comments inside `.sql` files. The downside to this approach is that it requires the `.sql` files to be manually kept up-to-date as the database structure changes over time—but if you do, you can also have it under version control. Some other techniques I have seen are a separate document describing the database structure and relationships, and manually maintained comments inside ORM code or other database-mapping code. How have you solved this problem in the past? What methods exist and what are the various pros and cons associated with them? How would you like this solved in "a perfect world"? **Update** As others have pointed out, most of the popular SQL engines do in fact allow comments, which is great. Oddly enough, people don't seem to be using these features much. At least not on the projects I have been involved with in the past.
MySQL [allows](http://dev.mysql.com/doc/refman/5.0/en/create-table.html) comments on tables and columns. PostgreSQL [does](http://www.postgresql.org/docs/7.4/interactive/sql-comment.html) as well. From other answers, Oracle and MSSQL have comments too. For me, a combination of a UML diagram for a quick refresher on field names, types, and constraints, and an external document (TeX, but could be any format) with extended description of everything database-related - special values, field comments, access notes, whatever - works best.
Late one but hopefully useful… Here is a process we used when developing relatively large database (around 100 tables and around 350 objects in total) * Developers were required to use extended properties to add details to all objects. * Admins rejected any DDL that didn’t have extended properties * Third party tool was used to automatically generate visual documentation via command line interface every day. We used [ApexSQL Doc](http://www.apexsql.com/sql_tools_doc.aspx) and it worked just fine but I also successfully used SQL Doc from Red Gate in other company. This process ensured that we have all objects documented and document up to date. The difficult thing though was getting developers to write good comments consistently ;)
How do you document your database structure?
[ "", "sql", "database", "documentation", "relational", "" ]
Ok, let's see if I can make this make sense. I have a program written that parses an Excel file and it works just fine. I use the following to get into the file: ``` string FileToConvert = Server.MapPath(".") + "\\App_Data\\CP-ARFJN-FLAG.XLS"; string connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + FileToConvert + ";Extended Properties=Excel 8.0;"; OleDbConnection connection = new OleDbConnection(connectionString); connection.Open(); //this next line assumes that the file is in default Excel format with Sheet1 as the first sheet name, adjust accordingly OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM [CP-ARFJN-FLAG$]", connection); ``` and this works just fine. But when I try it on the actual file (it is supplied to me by another program) I get this error: ``` System.Data.OleDb.OleDbException: External table is not in the expected format. at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection) at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.OleDb.OleDbConnection.Open() at wetglobe.Page_Load(Object sender, EventArgs e) ``` BUT, this is where I think the problem lies. If I take that file, and save it with my local Excel, first I get this popup: > CP-ARFJN-FLAG.XLS may contain features > that are not compatible with Text (Tab > delimited). Do you want to keep the > workbook in this format? > > * To Keep this format, which leaves > out any incompatible features, click > Yes. 
> * To preserve the features, click No. Then save a copy in the latest Excel format. > * To see what might be lost, click Help. If I click No and then save it as the current Excel format, the program will then work fine. So I am assuming this is saved in some crazy old Excel format? I suppose my questions would be: * How can I tell what Excel version saved this? * How can I parse it in its current state? * -or- Can I programmatically save it as a newer version? I hope that is clear... Thank you.
It sounds like the XLS file generated by your third-party app may not really be in Excel format - it might actually be a tab-delimited text file with an .xls extension. Try opening it with a text editor and see. If it is tab delimited, you can ditch the OleDB adapter and open/parse it as a standard text file.
If the format of the generated file is likely to change in future (perhaps when you upgrade the third-party app), you may prefer to use the [Office Primary Interop Assemblies](http://www.microsoft.com/downloads/details.aspx?familyid=59DAEBAA-BED4-4282-A28C-B864D8BFA513&displaylang=en). These will load any version or format of file produced by Excel. The downside is that you'll need Office installed on the server.
Problem parsing an XLS file with C#
[ "", "c#", "excel", "" ]
Do you know what may cause memory leaks in JavaScript? I am interested in browsers: IE 7, FireFox 3, Safari 3
There is a nice article about [**JavaScript and memory leaks**](http://www.javascriptkit.com/javatutors/closuresleak/index.shtml). It is not specific to one browser; rather, it describes the whole problem area of memory leaks and JavaScript. * [JavaScript and memory leaks](http://www.javascriptkit.com/javatutors/closuresleak/index.shtml) * [Introducing the closure](http://www.javascriptkit.com/javatutors/closuresleak/index2.shtml) * [More leakage patterns](http://www.javascriptkit.com/javatutors/closuresleak/index3.shtml) * [Conclusion](http://www.javascriptkit.com/javatutors/closuresleak/index4.shtml) I think it is a better approach to be as browser-agnostic as possible instead of optimizing for a few browsers when developing a website for the public.
Here is a classic memory leak in IE:- ``` function body_onload() { var elem = document.getElementById('someElementId'); // do stuff with elem elem.onclick = function() { //Some code that doesn't need the elem variable } } ``` After this code has run there is a circular reference because an element has a function assigned to its onclick event which references a scope object which in turn holds a reference to the element. someElement->onclick->function-scope->elem->someElement In IE, DOM elements are COM-based reference-counted objects that the JavaScript GC can't clean up. The addition of a final line in the above code would clean it up:- ``` var elem = null; ```
Do you know what may cause memory leaks in JavaScript?
[ "", "javascript", "memory-leaks", "" ]
I frequently run into problems of this form and haven't found a good solution yet: Assume we have two database tables representing an e-commerce system. ``` userData (userId, name, ...) orderData (orderId, userId, orderType, createDate, ...) ``` For all users in the system, select their user information, their most recent order information with type = '1', and their most recent order information with type = '2'. I want to do this in one query. Here is an example result: ``` (userId, name, ..., orderId1, orderType1, createDate1, ..., orderId2, orderType2, createDate2, ...) (101, 'Bob', ..., 472, '1', '4/25/2008', ..., 382, '2', '3/2/2008', ...) ```
This should work, you'll have to adjust the table / column names: ``` select ud.name, order1.order_id, order1.order_type, order1.create_date, order2.order_id, order2.order_type, order2.create_date from user_data ud, order_data order1, order_data order2 where ud.user_id = order1.user_id and ud.user_id = order2.user_id and order1.order_id = (select max(order_id) from order_data od1 where od1.user_id = ud.user_id and od1.order_type = 'Type1') and order2.order_id = (select max(order_id) from order_data od2 where od2.user_id = ud.user_id and od2.order_type = 'Type2') ``` Denormalizing your data might also be a good idea. This type of thing will be fairly expensive to do. So you might add a `last_order_date` to your userData.
I have provided three different approaches for solving this problem: 1. Using Pivots 2. Using Case Statements 3. Using inline queries in the where clause All of the solutions assume we are determining the "most recent" order based on the `orderId` column. Using the `createDate` column would add complexity due to timestamp collisions and seriously hinder performance since `createDate` is probably not part of the indexed key. I have only tested these queries using MS SQL Server 2005, so I have no idea if they will work on your server. Solutions (1) and (2) perform almost identically. In fact, they both result in the same number of reads from the database. Solution (3) is **not** the preferred approach when working with large data sets. It consistently makes hundreds of logical reads more than (1) and (2). When filtering for one specific user, approach (3) is comparable to the other methods. In the single user case, a drop in the cpu time helps to counter the significantly higher number of reads; however, as the disk drive becomes busier and cache misses occur, this slight advantage will disappear. ## Conclusion For the presented scenario, use the pivot approach if it is supported by your DBMS. It requires less code than the case statement and simplifies adding order types in the future. Please note, in some cases, PIVOT is not flexible enough and characteristic value functions using case statements are the way to go. 
## Code Approach (1) using PIVOT: ``` select ud.userId, ud.fullname, od1.orderId as orderId1, od1.createDate as createDate1, od1.orderType as orderType1, od2.orderId as orderId2, od2.createDate as createDate2, od2.orderType as orderType2 from userData ud inner join ( select userId, [1] as typeOne, [2] as typeTwo from (select userId, orderType, orderId from orderData) as orders PIVOT ( max(orderId) FOR orderType in ([1], [2]) ) as LatestOrders) as LatestOrders on LatestOrders.userId = ud.userId inner join orderData od1 on od1.orderId = LatestOrders.typeOne inner join orderData od2 on od2.orderId = LatestOrders.typeTwo ``` Approach (2) using Case Statements: ``` select ud.userId, ud.fullname, od1.orderId as orderId1, od1.createDate as createDate1, od1.orderType as orderType1, od2.orderId as orderId2, od2.createDate as createDate2, od2.orderType as orderType2 from userData ud -- assuming not all users will have orders use outer join inner join ( select od.userId, -- can be null if no orders for type max (case when orderType = 1 then ORDERID else null end) as maxTypeOneOrderId, -- can be null if no orders for type max (case when orderType = 2 then ORDERID else null end) as maxTypeTwoOrderId from orderData od group by userId) as maxOrderKeys on maxOrderKeys.userId = ud.userId inner join orderData od1 on od1.ORDERID = maxTypeOneOrderId inner join orderData od2 on OD2.ORDERID = maxTypeTwoOrderId ``` Approach (3) using inline queries in the where clause (based on Steve K.'s response): ``` select ud.userId,ud.fullname, order1.orderId, order1.orderType, order1.createDate, order2.orderId, order2.orderType, order2.createDate from userData ud, orderData order1, orderData order2 where ud.userId = order1.userId and ud.userId = order2.userId and order1.orderId = (select max(orderId) from orderData od1 where od1.userId = ud.userId and od1.orderType = 1) and order2.orderId = (select max(orderId) from orderData od2 where od2.userId = ud.userId and od2.orderType = 2) ``` Script to 
generate tables and 1000 users with 100 orders each: ``` CREATE TABLE [dbo].[orderData]( [orderId] [int] IDENTITY(1,1) NOT NULL, [createDate] [datetime] NOT NULL, [orderType] [tinyint] NOT NULL, [userId] [int] NOT NULL ) CREATE TABLE [dbo].[userData]( [userId] [int] IDENTITY(1,1) NOT NULL, [fullname] [nvarchar](50) NOT NULL ) -- Create 1000 users with 100 order each declare @userId int declare @usersAdded int set @usersAdded = 0 while @usersAdded < 1000 begin insert into userData (fullname) values ('Mario' + ltrim(str(@usersAdded))) set @userId = @@identity declare @orderSetsAdded int set @orderSetsAdded = 0 while @orderSetsAdded < 10 begin insert into orderData (userId, createDate, orderType) values ( @userId, '01-06-08', 1) insert into orderData (userId, createDate, orderType) values ( @userId, '01-02-08', 1) insert into orderData (userId, createDate, orderType) values ( @userId, '01-08-08', 1) insert into orderData (userId, createDate, orderType) values ( @userId, '01-09-08', 1) insert into orderData (userId, createDate, orderType) values ( @userId, '01-01-08', 1) insert into orderData (userId, createDate, orderType) values ( @userId, '01-06-06', 2) insert into orderData (userId, createDate, orderType) values ( @userId, '01-02-02', 2) insert into orderData (userId, createDate, orderType) values ( @userId, '01-08-09', 2) insert into orderData (userId, createDate, orderType) values ( @userId, '01-09-01', 2) insert into orderData (userId, createDate, orderType) values ( @userId, '01-01-04', 2) set @orderSetsAdded = @orderSetsAdded + 1 end set @usersAdded = @usersAdded + 1 end ``` Small snippet for testing query performance on MS SQL Server in addition to SQL Profiler: ``` -- Uncomment these to clear some caches --DBCC DROPCLEANBUFFERS --DBCC FREEPROCCACHE set statistics io on set statistics time on -- INSERT TEST QUERY HERE set statistics time off set statistics io off ```
How to join the newest rows from a table?
[ "", "sql", "aggregate", "pivot", "" ]
How do I create a subdomain like `http://user.mywebsite.example`? Do I have to access `.htaccess` somehow? Is it actually possible to create it via pure PHP code, or do I need to use some external server-side language? To those who answered: Well, then, should I ask my hosting provider whether they offer some sort of DNS access?
You're looking to create a custom **A record**. I'm pretty sure that you can use wildcards when specifying A records which would let you do something like this: ``` *.mywebsite.example IN A 127.0.0.1 ``` *`127.0.0.1` would be the IP address of your webserver. The method of actually adding the record will depend on your host.* --- Then you need to configure your web-server to serve all subdomains. * Nginx: `server_name .mywebsite.example` * Apache: `ServerAlias *.mywebsite.example` Regarding .htaccess, you don't really need any rewrite rules. The `HTTP_HOST` header is available in PHP as well, so you can get it already, like ``` $username = strtok($_SERVER['HTTP_HOST'], "."); ``` --- If you don't have access to DNS/web-server config, doing it like `http://mywebsite.example/user` would be a lot easier to set up if it's an option.
The feature you are after is called [Wildcard Subdomains](http://www.google.com.au/search?sourceid=chrome&ie=UTF-8&q=wildcard+subdomains). It lets you avoid setting up DNS for each subdomain, using Apache rewrites for the redirection instead. You can find a nice tutorial [here](http://steinsoft.net/static/archive/2014/steinsoft.net/index5d37.html?site=Programming/Articles/apachewildcarddomain), but there are thousands of tutorials out there. Here is the necessary code from that tutorial: ``` <VirtualHost 111.22.33.55> DocumentRoot /www/subdomain ServerName www.domain.example ServerAlias *.domain.example </VirtualHost> ``` However, as it requires the use of VirtualHosts, it must be set in the server's `httpd.conf` file instead of a local `.htaccess`.
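Once wildcard DNS and the web-server alias are in place, the application still has to map the Host header to a user, as the accepted answer's `strtok()` one-liner does in PHP. Here is a hedged Python sketch of the same idea; the helper name and the validation rules are illustrative, not part of either answer:

```python
# Hypothetical helper: pull the per-user subdomain off the Host header.
# Assumes wildcard DNS / vhost routing already sends *.mywebsite.example here.
def username_from_host(host, base_domain="mywebsite.example"):
    host = host.split(":")[0].lower()          # strip any :port suffix
    if host == base_domain or not host.endswith("." + base_domain):
        return None                            # bare domain or foreign host
    sub = host[: -len(base_domain) - 1]
    return sub if "." not in sub else None     # reject nested subdomains

print(username_from_host("alice.mywebsite.example"))     # alice
print(username_from_host("mywebsite.example"))           # None
print(username_from_host("a.b.mywebsite.example:8080"))  # None
```

Unlike the bare `strtok()` call, this rejects hosts that are not under your domain at all, which matters if the server answers for more than one name.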
How to let PHP to create subdomain automatically for each user?
[ "", "php", "subdomain", "" ]
``` Foo* set = new Foo[100]; // ... delete [] set; ``` You don't pass the array's boundaries to `delete[]`. But where is that information stored? Is it standardised?
When you allocate memory on the heap, your allocator keeps track of how much memory it handed out. That bookkeeping is usually stored in a "head" segment just before the memory you get back, so when it's time to free the block, the deallocator knows exactly how much memory to free. For arrays of objects with non-trivial destructors, the compiler typically stores the element count in a similar hidden header, so `delete[]` also knows how many destructors to invoke.
One of the approaches compilers take is to allocate a little more memory and store the element count in a hidden head element. Example of how it could be done: here ``` int* i = new int[4]; ``` the compiler will allocate `sizeof(int)*5` bytes, ``` int *temp = malloc(sizeof(int)*5) ``` store 4 in the first `sizeof(int)` bytes, ``` *temp = 4; ``` and set `i`: ``` i = temp + 1; ``` So `i` points to an array of 4 elements, not 5. And deletion ``` delete[] i; ``` will be processed in the following way:

```
int *temp = i - 1;
int numbers_of_element = *temp; // = 4
// call the destructors for numbers_of_element elements
// (stored at temp + 1, temp + 2, ..., temp + 4), if needed
free(temp);
```
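The hidden-header scheme both answers describe can be modelled in a few lines of Python. This is purely illustrative: real compilers do this in raw memory as part of `new[]`/`delete[]`, and the function names here are hypothetical.

```python
import struct

HEADER = struct.calcsize("N")  # room for one size_t-style count

# Toy model of a heap: new_array() stashes the element count just before
# the block it hands back; delete_array() reads it back to learn how many
# "destructors" to run.
heap = {}

def new_array(n, elem_size):
    block = bytearray(HEADER + n * elem_size)
    struct.pack_into("N", block, 0, n)     # store the count in the header
    addr = id(block)                       # stand-in for the raw address
    heap[addr] = block
    return addr + HEADER                   # caller sees only the payload

def delete_array(ptr):
    block = heap.pop(ptr - HEADER)         # step back to the header
    (count,) = struct.unpack_from("N", block, 0)
    return count                           # "run" this many destructors

p = new_array(4, elem_size=8)
print(delete_array(p))  # 4
```

This is exactly why `delete[]` needs no size argument, and also why mismatching `new[]` with plain `delete` is undefined behaviour: plain `delete` does not step back to read the count.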
How does delete[] "know" the size of the operand array?
[ "", "c++", "" ]