If for example I create an array of pixels, like so: ``` int *getPixels() { int *pixels = new int[10]; pixels[0] = 1; pixels[1] = 0; pixels[2] = 1; // etc... return pixels; } glTexImage2D(..., getPixels()); ``` Does glTexImage2D use that reference or copy the pixels into its own memory? If the answer is the former, then should I do the following? ``` int *p = getPixels(); glTexImage2D(..., p); /* Changed to delete[], because plain delete would only free the first element! */ delete[] p; ```
From this quote in the [man page](http://www.opengl.org/documentation/specs/man_pages/hardcopy/GL/html/gl/teximage2d.html), it sounds like glTexImage2D allocates its own memory. That would make sense: ideally, the OpenGL API sends the data off to be stored on the graphics card itself (where the driver/implementation permits). > In GL version 1.1 or greater, pixels may be a null pointer. > In this case texture memory is allocated to accommodate a texture > of width width and height height. You can then download subtextures to > initialize this texture memory. The image is undefined if the user tries > to apply an uninitialized portion of the texture > image to a primitive. So yes, I'd imagine there is no harm in freeing the memory once you've generated your texture.
Yes, after the call to `glTexImage2D()` returns, it is safe to discard the data you passed to it. In fact, if you don't free it you'll have a memory leak, as in this code: ``` int *getPixels() { int *pixels = new int[10]; pixels[0] = 1; pixels[1] = 0; pixels[2] = 1; // etc... return pixels; } glTexImage2D(..., getPixels()); ``` You pass the pointer to the buffer into the call, but then the pointer is lost and the memory leaks. What you should do is store it and delete it after the call returns: ``` int *pbuf = getPixels(); glTexImage2D(..., pbuf); delete[] pbuf; ``` Alternatively, if the texture has a constant size, you can pass a pointer to an array on the stack: ``` { int buf[10]; // fill buf... glTexImage2D(..., buf); } ``` Finally, if you don't want to worry about raw pointers and arrays, you can use the STL: ``` vector<int> buf; fillPixels(buf); glTexImage2D(..., &buf[0]); ``` (Note that `&buf[0]` is used rather than `buf.begin()`, since an iterator isn't guaranteed to be a raw pointer.)
What happens to pixels after passing them into glTexImage2D()?
[ "", "c++", "opengl", "glteximage2d", "" ]
I've written a C++ matrix template class. It's parameterized by its dimensions and by its datatype: ``` template<int NRows, int NCols, typename T> struct Mat { typedef Mat<NRows, NCols, T> MyType; typedef T value_type; typedef const T *const_iterator; typedef T *iterator; enum { NumRows = NRows }; enum { NumCols = NCols }; T m_data[NRows * NCols]; // ... Lots of operations and functions ... }; // Some common typedefs typedef Mat<2, 1, int> Vec2i; typedef Mat<2, 1, float> Vec2f; typedef Mat<3, 1, float> Vec3f; typedef Mat<4, 1, unsigned char> Vec4ub; typedef Mat<4, 4, float> Mat44f; typedef Vec3f Vector; typedef Vec4ub Color; typedef Vec2f LonLat; ``` It's a fantastic little class that handles lots of vector arithmetic and linear algebra. Also, it is able to interface easily with both low level code (OpenGL) and high level code (PID and Kalman filters). Its design seems sound to me, but that's not what this question is about. This class has one dreadful flaw: **it is a dread to use in the debugger**. Because it is a struct that contains an array, there is always that one annoying level of indirection when examining variables in Xcode (and I assume in Visual Studio). This is further aggravated by the fact that Xcode sometimes fails to follow chains of pointers correctly and I can't examine the values at all. This happens if I have, say, a `vector` of these objects. I realize that I could perhaps customize the debugger to somehow work with my type. But I was wondering if there is perhaps **a different way to store the data of my Matrix so that it has better "debugability"**? This is more of a "use your imagination" type question than one looking for a best-practices design. I'm just curious what alternative class layouts could be used while still maintaining the generic nature of the code.
XCode allows you to create [custom data formatters](http://developer.apple.com/documentation/DeveloperTools/Conceptual/XcodeDebugging/600-Viewing_Variables_and_Memory/variables_and_memory.html#//apple_ref/doc/uid/TP40007057-CH9-SW24) to format the data in the debugger in any way you'd want.
These kinds of matrix classes are always tricky to debug. Unlike Rick, I prefer the data structured the way you have it (i.e., T m\_data[NRows\*NCols]), but you may want to add some methods to make testing a little easier; for instance, a method that prints the matrix and a method to look up a datum at a specific row and column can make your life easier: ``` void printMat() const; void printMatToFile( const char *fileName ) const; T &get(int row, int col); ``` I usually use the gdb debugger, which allows you to call a method while debugging. I don't know if your debugger supports this, so you may want to try using g++/gdb for testing, or some other debugger that supports function calls while debugging.
How to make this Matrix class easier to use in the debugger
[ "", "c++", "xcode", "debugging", "math", "" ]
Just that. I found a similar question here: [c# console, Console.Clear problem](https://stackoverflow.com/questions/377927/c-console-console-clear-problem), but it didn't answer the question. Updates: Console.Clear() throws an IOException ("The handle is invalid"). The app is a WPF app. Writing to the console, however, is no problem at all, and neither is reading from it.
`Console.Clear()` works in a console application. When calling `Console.Clear()` in an ASP.NET web site project or in a Windows Forms application, you'll get the IOException. What kind of application do you have? Update: I'm not sure if this will help, but as you can read in [this forum thread](http://www.eggheadcafe.com/conversation.aspx?messageid=31733751&threadid=31733751), `Console.Clear()` throws an IOException if the console output is being redirected. Maybe this is the case for WPF applications? The article describes how to check whether the console is being redirected.
Try ``` Console.Clear(); ``` **EDIT** Are you trying this method in a non-console application? If so, that would explain the error. Other types of applications (ASP.NET projects, WinForms, etc.) don't actually create a console for writing, so Clear has nothing to operate on and throws an exception.
How to clear the console in C#/.NET?
[ "", "c#", "wpf", "console", "" ]
I have used Ruby on Rails with ActiveRecord, so I am quite used to switching between Production and Development databases. I am wondering how people implement the Development/Production database difference in ASP.NET MVC (preferably with ADO.NET Entity). I tried to do it by creating two entity data sets with the same name in different namespaces, so I would only need to switch the used namespace in my database code. Unfortunately, this didn't work because the entity data sets do not allow equal names...
The way I do it (with a web app) is to have separate versions of the Web.config file for each of Development (web.config), QA (web-qa.config), and Production (web-prod.config) installs. The config files have different connection strings pointing to the corresponding instances of the database. The default connection string points to the SQL server instance on my local machine which isn't available to the network. As part of the publishing step, I manually delete the web.config file and rename the appropriate QA/Prod configuration to web.config. This hooks up that instance of the app to the correct instance of the database.
The easiest way (though it's still not as easy as in Rails) will be to include connection strings into [App.config](http://msdn.microsoft.com/en-us/library/aa903313(VS.71).aspx).
How to implement Development and Production databases in ASP.NET MVC with ADO.NET Entity?
[ "", "c#", "asp.net-mvc", "database", "ado.net", "" ]
I need to create a regular expression that allows a string to contain any number of: * alphanumeric characters * spaces * ( * ) * & * . No other characters are permitted. I used [RegexBuddy](http://www.regexbuddy.com/) to construct the following regex, which works correctly when I test it within RegexBuddy: ``` \w* *\(*\)*&*\.* ``` Then I used RegexBuddy's "Use" feature to convert this into Java code, but it doesn't appear to work correctly using a simple test program: ``` public class RegexTest { public static void main(String[] args) { String test = "(AT) & (T)."; // Should be valid System.out.println("Test string matches: " + test.matches("\\w* *\\(*\\)*&*\\.*")); // Outputs false } } ``` I must admit that I have a bit of a blind spot when it comes to regular expressions. Can anyone explain why it doesn't work, please?
That regular expression tests for any number of alphanumeric characters, followed by any number of spaces, followed by any number of open parens, followed by any number of close parens, followed by any number of ampersands, followed by any number of periods, in that exact order. What you want is... ``` test.matches("[\\w \\(\\)&\\.]*") ``` As mentioned by mmyers, this allows the empty string. If you do not want to allow the empty string... ``` test.matches("[\\w \\(\\)&\\.]+") ``` Though that will also allow a string that is only spaces, or only periods, etc. If you want to ensure at least one alphanumeric character... ``` test.matches("[\\w \\(\\)&\\.]*\\w+[\\w \\(\\)&\\.]*") ``` So you understand what the regular expression is saying... anything within the square brackets ("[]") indicates a set of characters. So, where "a\*" means 0 or more a's, "[abc]\*" means 0 or more characters, each of which is an a, b, or c.
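To make the difference concrete between a sequence of quantified tokens and a single character class, here is a quick illustration using Python's `re` module (the patterns mirror the Java ones above; Python's regex syntax is close enough for this purpose):

```python
import re

# A sequence of quantified groups: word chars, THEN spaces, THEN open
# parens, and so on. Each group may be empty, but the order is fixed,
# so a mixed string like "(AT) & (T)." cannot match end-to-end.
sequence = re.compile(r"\w* *\(*\)*&*\.*$")

# A character class: any mix of the allowed characters, in any order.
char_class = re.compile(r"[\w ()&.]*$")

print(bool(sequence.match("(AT) & (T).")))     # False
print(bool(char_class.match("(AT) & (T).")))   # True
print(bool(char_class.match("not allowed!")))  # False (the '!')
```

Note that inside a character class, `(`, `)`, and `.` lose their special meaning, so they don't strictly need escaping; the backslashes in the Java version above are harmless but optional.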
Maybe I'm misunderstanding your description, but aren't you essentially defining a class of characters without an order rather than a specific sequence? Shouldn't your regexp have a structure of [xxxx]+, where xxxx are the actual characters you want ?
Why doesn't this Java regular expression work?
[ "", "java", "regex", "regexbuddy", "" ]
Does any good multi-select dropdown list with checkboxes (web control) exist for ASP.NET? Thanks a lot.
You could use the `System.Web.UI.WebControls.CheckBoxList` control or use the `System.Web.UI.WebControls.ListBox` control with the `SelectionMode` property set to `Multiple`.
[**jQuery Dropdown Check List**](http://code.google.com/p/dropdown-check-list/) can be used to transform a regular multiple select html element into a dropdown checkbox list, it works on client so can be used with any server side technology: [![alt text](https://i.stack.imgur.com/CPNOF.png)](https://i.stack.imgur.com/CPNOF.png) (source: [googlecode.com](http://dropdown-check-list.googlecode.com/svn/trunk/doc/demo.png))
Multi-select dropdown list in ASP.NET
[ "", "c#", "asp.net", "web-controls", "" ]
I'm having trouble running an `INSERT` statement where there's an autonumber as the PK field. I have an auto-incrementing `long` as the primary key, and then 4 fields of type `double`; yet Access (using ADO) seems to want five values for the insert statement. ``` INSERT INTO [MY_TABLE] VALUES (1.0, 2.0, 3.0, 4.0); >> Error: Number of query values and destination fields are not the same. INSERT INTO [MY_TABLE] VALUES (1, 1.0, 2.0, 3.0, 4.0); >> Success!! ``` How do I use autonumbering to actually autonumber?
If you do not want to provide values for all columns that exist in your table, you have to specify the columns you want to insert into. (Which is logical; otherwise how should Access, or any other DB, know which columns you're providing values for?) So, what you have to do is this: ``` INSERT INTO MyTable ( Column2, Column3, Column4 ) VALUES ( 1, 2, 3 ) ``` Also, be sure to omit the primary key column (which is the autonumber field). Access will then set it to the next value by itself. You can then retrieve the primary-key value of the newly inserted record by executing a ``` SELECT @@IDENTITY ``` statement on the same connection.
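The same pattern is easy to demonstrate with SQLite from Python (SQLite standing in for Access here; the table layout is made up to match the question, one auto key plus four doubles):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MyTable (
    Id INTEGER PRIMARY KEY AUTOINCREMENT,
    A REAL, B REAL, C REAL, D REAL)""")

# Name the columns and omit the autonumber; the database fills it in.
cur = conn.execute(
    "INSERT INTO MyTable (A, B, C, D) VALUES (?, ?, ?, ?)",
    (1.0, 2.0, 3.0, 4.0))

# The generated key, analogous to Access's SELECT @@IDENTITY.
print(cur.lastrowid)  # 1
```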
Mention the column names in your query, since you are providing only 4 values while the table has 5 columns. The database needs to know which column each value is for.
Using Autonumbering in Access - INSERT statements
[ "", "sql", "ms-access", "insert", "ado", "autonumber", "" ]
This code generates "AttributeError: 'Popen' object has no attribute 'fileno'" when run with Python 2.5.1. Code: ``` def get_blame(filename): proc = [] proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE)) proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE) proc.append(Popen(['tr', r"'\040'", r"';'"], stdin=proc[-1]), stdout=PIPE) proc.append(Popen(['cut', r"-d", r"\;", '-f', '3'], stdin=proc[-1]), stdout=PIPE) return proc[-1].stdout.read() ``` Stack: ``` function walk_folder in blame.py at line 55 print_file(os.path.join(os.getcwd(), filename), path) function print_file in blame.py at line 34 users = get_blame(filename) function get_blame in blame.py at line 20 proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE) function __init__ in subprocess.py at line 533 (p2cread, p2cwrite, function _get_handles in subprocess.py at line 830 p2cread = stdin.fileno() ``` This code should work; the Python docs describe [this usage](http://docs.python.org/library/subprocess.html#subprocess-replacements).
Three things. First, your ()'s are wrong. Second, the result of `subprocess.Popen()` is a process object, not a file. ``` proc = [] proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE)) proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1]), stdout=PIPE) ``` The value of `proc[-1]` isn't a file; it's the process that contains the file. ``` proc.append(Popen(['tr', '-s', r"'\040'"], stdin=proc[-1].stdout, stdout=PIPE)) ``` Third, don't do all that `tr` and `cut` junk in the shell; few things could be slower. Write the `tr` and `cut` processing in Python instead, which is faster and simpler.
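For illustration, the whole `tr -s | tr | cut` stage just squeezes runs of spaces and picks out one field, which `str.split()` does in a single step (the sample blame output below is made up; real `svn blame` output is revision, author, then the line text):

```python
# Made-up sample of `svn blame` output: revision, author, line text.
blame_output = """\
  1234   alice    first line of the file
  5678   bob      second line
"""

# str.split() with no arguments strips leading whitespace and squeezes
# runs of spaces, replacing the whole tr/tr/cut pipeline in one call.
authors = [line.split()[1] for line in blame_output.splitlines() if line.strip()]
print(authors)  # ['alice', 'bob']
```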
There are a few weird things in the script. * Why are you storing each process in a list? Wouldn't it be much more readable to simply use variables? Removing all the `.append()`s reveals a syntax error: several times you have passed stdout=PIPE to the `append` arguments instead of to Popen: ``` proc.append(Popen(...), stdout=PIPE) ``` So a straight rewrite (still with errors I'll mention in a second) would become... ``` def get_blame(filename): blame = Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE) tr1 = Popen(['tr', '-s', r"'\040'"], stdin=blame, stdout=PIPE) tr2 = Popen(['tr', r"'\040'", r"';'"], stdin=tr1, stdout=PIPE) cut = Popen(['cut', r"-d", r"\;", '-f', '3'], stdin=tr2, stdout=PIPE) return cut.stdout.read() ``` * On each subsequent command, you have passed the Popen object, *not* that process's `stdout`. From the ["Replacing shell pipeline"](http://docs.python.org/library/subprocess.html#replacing-shell-pipeline) section of the subprocess docs, you do... ``` p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) ``` ...whereas you were doing the equivalent of `stdin=p1`. The `tr1 =` line (in the above rewritten code) would become... ``` tr1 = Popen(['tr', '-s', r"'\040'"], stdin=blame.stdout, stdout=PIPE) ``` * You do not need to escape commands/arguments with subprocess, as subprocess does not run the command in any shell (unless you specify `shell=True`). See the [Security](http://docs.python.org/library/subprocess.html#security) section of the subprocess docs. Instead of... ``` proc.append(Popen(['svn', 'blame', shellquote(filename)], stdout=PIPE)) ``` ...you can safely do... ``` Popen(['svn', 'blame', filename], stdout=PIPE) ``` * As S.Lott suggested, don't use subprocess to do text manipulations that are more easily done in Python (the tr/cut commands).
For one, tr/cut etc aren't hugely portable (different versions have different arguments), also they are quite hard to read (I've no idea what the tr's and cut are doing) If I were to rewrite the command, I would probably do something like.. ``` def get_blame(filename): blame = Popen(['svn', 'blame', filename], stdout=PIPE) output = blame.communicate()[0] # preferred to blame.stdout.read() # process commands output: ret = [] for line in output.split("\n"): split_line = line.strip().split(" ") if len(split_line) > 2: rev = split_line[0] author = split_line[1] line = " ".join(split_line[2:]) ret.append({'rev':rev, 'author':author, 'line':line}) return ret ```
Python subprocess "object has no attribute 'fileno'" error
[ "", "python", "pipe", "subprocess", "" ]
This `HyperLink` syntax is not working to pass parameters to a small PopUp window: ``` <asp:HyperLink ID="HyperLink2" runat="server" Text="Manage Related Items" NavigateUrl='<%# "editRelatedItems.aspx?" + "ProductSID=" + Eval("ProductSID") + "&CollectionTypeID=" + Eval("CollectionTypeID")+ "&ProductTypeID=" + Eval("ProductTypeID") %>' onclick="window.open('editRelatedItems.aspx?','name','height=550, width=790,toolbar=no,directories=no,status=no, menubar=no,scrollbars=yes,resizable=no'); return false;) target="_blank" /> ``` Looks like the `<asp:HyperLink>` tag does not take the `"onclick"`. Any ideas on how to get a pop up to fire that can get these parameters? I'm using C#, so perhaps there is a way to build the `NavigateURL` string in the code behind? Thanks for any insight you may have.
``` <asp:HyperLink ID="HyperLink2" runat="server" Text="Manage Related Items" NavigateUrl="#" onClick='<%# "return openRelatedItems(" + Eval("ProductSID") + ", " + Eval("CollectionTypeID") + ", " + Eval("ProductTypeID") + ");" %>' /> ``` That should work, paired with a JavaScript method that actually opens the window (this also keeps the quoting manageable): ``` <script type="text/javascript"> function openRelatedItems(productSID, collectionTypeID, productTypeID) { window.open('editRelatedItems.aspx?ProductSID=' + productSID + '&CollectionTypeID=' + collectionTypeID + '&ProductTypeID=' + productTypeID, 'name', 'height=550,width=790,toolbar=no,directories=no,status=no,menubar=no,scrollbars=yes,resizable=no'); return false; } </script> ```
From the server side code you can do ``` HyperLink2.Attributes.Add("OnClick", "yourstuffhere"); ``` that will allow you to specify the OnClick items. You could also set the `NavigateUrl` to be something like "javascript: window.open..." if you wanted, again, can be done from the code behind. *also* Given that a Hyperlink is just a standard run of the mill anchor tag, if you are doing all of this processing in the code behind, it might just be easier to build the whole link yourself to save some hassle.
Create PopUp with <asp:HyperLink> that is passing parameters
[ "", "asp.net", "javascript", "parameters", "" ]
I'm somewhat new to c#, more accustomed to scripting languages. I like the idea of 'using', you instantiate an object, and then you operate within its scope as long as you need it, then you let it dispose of itself when it's done its purpose. But, it's not natural for me. When people show me examples using it, I recognize it as a good tool for the job, but it never occurs to me to solve problems with it in my own programming. How can I recognize good places to use `using` and how do I use it in conjunction with try-catch blocks. Do they go inside the block, or do you usually want to enclose a using statement within a try block?
I rarely write try/catch blocks - most exceptions get thrown up to (near) the top of the stack. If I *do* need a try/catch block, I'm not sure I'm particularly consistent between putting it inside the `using` statement vs outside. It really depends on whether you want the resource to be disposed before or after your exception handling code is run. If you're asking about *when* you should be writing `using` statements - any time you "own" an object which implements `IDisposable` (either directly or indirectly through inheritance) and control its lifetime. That's usually an object which uses an unmanaged resource like a file handle or network connection. It's not always hugely obvious, but you learn through experience. Almost *anything* to do with IO will be disposable, and Windows handles (for fonts etc) are similar.
`using` can only be used with types that implement `IDisposable`; it guarantees that the `Dispose()` method will be called even if an error occurs. This code: ``` using (MyDisposableType x = new MyDisposableType()) { // use x } ``` is equivalent to this: ``` MyDisposableType x = new MyDisposableType(); try { // use x } finally { x.Dispose(); } ```
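The same acquire/guaranteed-release shape exists in other languages, which can help build intuition for when to reach for it; Python's `with` statement, for example, expands to the same try/finally pattern (the `Resource` class below is a toy stand-in, purely for analogy):

```python
class Resource:
    """Stand-in for a disposable type (hypothetical)."""
    def __init__(self):
        self.disposed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs even if the body raised, like Dispose() in the finally block.
        self.disposed = True
        return False  # don't swallow exceptions

r = Resource()
try:
    with r:
        raise ValueError("failure inside the block")
except ValueError:
    pass

print(r.disposed)  # True: cleanup ran despite the exception
```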
How do I know the best place to use 'using'?
[ "", "c#", "using-statement", "" ]
I'm relatively new to the Java world and I have a problem which I don't understand. I have a class (to compute the Fibonacci sequence): ``` class Fib { public static int f(int x){ if ( x < 2 ) return 1; else return f(x-1) + f(x-2); } } ``` The task now is to start f(x-1) and f(x-2) each in a separate thread, once by extending the Thread class and once by implementing Runnable. As you probably know, it's an exercise from my prof. I know how to start a thread in Java and I know how this whole thread thing theoretically works, but I can't find a solution for starting separate threads in this recursive function. What has to be done in the run function? Probably ``` public void run(){ //int foo=start f(this.x-1) //int bar=start f(this.x-2) //return foo+bar? } ``` And how can I pass x into my runnable function? Is x passed into the object at creation? ``` class Fib ...{ int x; public ... run ... public ... f(x).... } ``` In the main method: ``` (new Fib(x)).start(); ``` Or am I on a totally wrong path?
For this to work, you need 1) a way to pass the number into the new thread, 2) to start the thread, 3) to wait for the thread to finish, and 4) a way to get the result back from the thread. You can pass in the number through the constructor. You can have a public data member called "answer" to contain the result of the computation. Starting the thread can be done with the `start()` method, and the `join()` method waits for the thread to complete. The following example demonstrates this. That should be a good starting point; from here you can abstract away some of the messiness to get a better API as desired. ``` public class Fib extends Thread { private int x; public int answer; public Fib(int x) { this.x = x; } public void run() { if( x <= 2 ) answer = 1; else { try { Fib f1 = new Fib(x-1); Fib f2 = new Fib(x-2); f1.start(); f2.start(); f1.join(); f2.join(); answer = f1.answer + f2.answer; } catch(InterruptedException ex) { } } } public static void main(String[] args) throws Exception { try { Fib f = new Fib( Integer.parseInt(args[0]) ); f.start(); f.join(); System.out.println(f.answer); } catch(Exception e) { System.out.println("usage: java Fib NUMBER"); } } } ```
Using threads is usually intended to improve performance. However, each thread adds overhead, and if the task performed is small, there can be much more overhead than actual work done. Additionally, most PCs can only handle about 1000 threads and will hang if you have many more than 10K threads. In your case, fib(20) will generate on the order of 6765 threads, fib(30) around 832K, fib(40) around 102M, and fib(50) over 12 billion. I hope you can see this is not scalable. However, using a different approach you can calculate fib(1000000) in under one minute. ``` import java.math.BigInteger; /* 250000th fib # is: 36356117010939561826426 .... 10243516470957309231046875 Time to compute: 3.466557 seconds. 1000000th fib # is: 1953282128707757731632 .... 93411568996526838242546875 Time to compute: 58.1 seconds. */ public class Main { public static void main(String... args) { int place = args.length > 0 ? Integer.parseInt(args[0]) : 250 * 1000; long start = System.nanoTime(); BigInteger fibNumber = fib(place); long time = System.nanoTime() - start; System.out.println(place + "th fib # is: " + fibNumber); System.out.printf("Time to compute: %5.1f seconds.%n", time / 1.0e9); } private static BigInteger fib(int place) { BigInteger a = new BigInteger("0"); BigInteger b = new BigInteger("1"); while (place-- > 1) { BigInteger t = b; b = a.add(b); a = t; } return b; } } ```
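The iterative loop above is easy to sanity-check in a language with built-in big integers; this Python sketch mirrors the Java version, using the same fib(1) = fib(2) = 1 convention:

```python
def fib(n):
    # Iterative: n - 1 additions, no threads, no exponential recursion.
    a, b = 0, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print(fib(10))  # 55
print(fib(50))  # 12586269025, already too big for a 32-bit int
```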
Using threads and recursion in Java to calculate Fibonacci numbers
[ "", "java", "multithreading", "recursion", "" ]
I'm kind of lost on how to approach this problem. I'd like to write a GUI, ideally using Tkinter with Python, but I initially started with Qt and found that the problem either extends to all GUI frameworks or stems from my limited understanding. The data in this case is coming from a named pipe, and I'd like to display whatever comes through the pipe in a textbox. I've tried having one thread listen on the pipe and another create the GUI, but in both cases one thread always seems to hang or the GUI never gets created. Any suggestions?
Here is the way I would do it (on windows): ``` import wx, wx.lib.newevent, threading import win32event, win32pipe, win32file, pywintypes, winerror NewMessage, EVT_NEW_MESSAGE = wx.lib.newevent.NewEvent() class MessageNotifier(threading.Thread): pipe_name = r"\\.\pipe\named_pipe_demo" def __init__(self, frame): threading.Thread.__init__(self) self.frame = frame def run(self): open_mode = win32pipe.PIPE_ACCESS_DUPLEX | win32file.FILE_FLAG_OVERLAPPED pipe_mode = win32pipe.PIPE_TYPE_MESSAGE sa = pywintypes.SECURITY_ATTRIBUTES() sa.SetSecurityDescriptorDacl(1, None, 0) pipe_handle = win32pipe.CreateNamedPipe( self.pipe_name, open_mode, pipe_mode, win32pipe.PIPE_UNLIMITED_INSTANCES, 0, 0, 6000, sa ) overlapped = pywintypes.OVERLAPPED() overlapped.hEvent = win32event.CreateEvent(None, 0, 0, None) while 1: try: hr = win32pipe.ConnectNamedPipe(pipe_handle, overlapped) except: # Error connecting pipe pipe_handle.Close() break if hr == winerror.ERROR_PIPE_CONNECTED: # Client is fast, and already connected - signal event win32event.SetEvent(overlapped.hEvent) rc = win32event.WaitForSingleObject( overlapped.hEvent, win32event.INFINITE ) if rc == win32event.WAIT_OBJECT_0: try: hr, data = win32file.ReadFile(pipe_handle, 64) win32file.WriteFile(pipe_handle, "ok") win32pipe.DisconnectNamedPipe(pipe_handle) wx.PostEvent(self.frame, NewMessage(data=data)) except win32file.error: continue class Messages(wx.Frame): def __init__(self): wx.Frame.__init__(self, None) self.messages = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY) self.Bind(EVT_NEW_MESSAGE, self.On_Update) def On_Update(self, event): self.messages.Value += "\n" + event.data app = wx.PySimpleApp() app.TopWindow = Messages() app.TopWindow.Show() MessageNotifier(app.TopWindow).start() app.MainLoop() ``` Test it by sending some data with: ``` import win32pipe print win32pipe.CallNamedPipe(r"\\.\pipe\named_pipe_demo", "Hello", 64, 0) ``` (you also get a response in this case)
When I did something like this I used a separate thread listening on the pipe. The thread had a pointer/handle back to the GUI so it could send the data to be displayed. I suppose you could do it in the GUI's update/event loop, but you'd have to make sure it's doing non-blocking reads on the pipe. I did it in a separate thread because I had to do lots of processing on the data that came through. Oh and when you're doing the displaying, make sure you do it in non-trivial "chunks" at a time. It's very easy to max out the message queue (on Windows at least) that's sending the update commands to the textbox.
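The pattern described above, a worker thread that never touches the GUI and only hands data to the main thread, can be sketched with a `queue.Queue`; in Tkinter the polling step would live in an `after()` callback instead of the loop at the bottom (GUI parts are omitted so the sketch stays self-contained):

```python
import queue
import threading

incoming = queue.Queue()

def reader():
    # Stand-in for the thread blocking on the named pipe.
    for chunk in ["line 1", "line 2", "line 3"]:
        incoming.put(chunk)

t = threading.Thread(target=reader)
t.start()
t.join()  # in a real GUI the reader keeps running; joined here for the demo

# What the GUI thread's periodic poll would do, e.g. from Tkinter's after():
received = []
while True:
    try:
        received.append(incoming.get_nowait())
    except queue.Empty:
        break

print(received)  # ['line 1', 'line 2', 'line 3']
```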
Showing data in a GUI where the data comes from an outside source
[ "", "python", "user-interface", "named-pipes", "" ]
A delegate is a function pointer: it points to a function that meets the criteria (parameters and return type). This raises the question (for me, anyway): what function will the delegate point to if there is more than one method with exactly the same return type and parameter types? Is it the function which appears first in the class? Thanks
The exact method is specified when you create the Delegate. ``` public delegate void MyDelegate(); private void Delegate_Handler() { } void Init() { MyDelegate x = new MyDelegate(this.Delegate_Handler); } ```
As Henk says, the method is specified when you create the delegate. Now, it's possible for more than one method to meet the requirements, for two reasons: * Delegates are variant, e.g. you can use a method with an `Object` parameter to create an `Action<string>` * You can overload methods by making them generic, e.g. ``` static void Foo() {} static void Foo<T>(){} static void Foo<T1, T2>(){} ``` The rules get quite complicated, but they're laid down in section 6.6 of the C# 3.0 spec. Note that inheritance makes things tricky too.
What function will a delegate point to if there is more than one method which meets the delegate criteria?
[ "", "c#", "" ]
I have two XmlDocuments. Something like: ``` <document1> <inner /> </document1> ``` and ``` <document2> <stuff/> </document2> ``` I want to put document2 inside of the inner node of document1 so that I end up with a single docement containing: ``` <document1> <inner> <document2> <stuff/> </document2> </inner> </document1> ```
Here's the code... ``` XmlDocument document1, document2; // Load the documents... XmlElement xmlInner = (XmlElement)document1.SelectSingleNode("/document1/inner"); xmlInner.AppendChild(document1.ImportNode(document2.DocumentElement, true)); ```
You can, but effectively a copy will be created. You have to use `XmlNode node = document1.ImportNode(document2.DocumentElement, true)`, find the target node, and add `node` as a child element. Example on MSDN: <http://msdn.microsoft.com/en-us/library/system.xml.xmldocument.importnode.aspx>
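For comparison, the same merge in Python's ElementTree needs no import step, because elements there are not owned by a document the way XmlDocument nodes are:

```python
import xml.etree.ElementTree as ET

doc1 = ET.fromstring("<document1><inner /></document1>")
doc2 = ET.fromstring("<document2><stuff /></document2>")

# Hang doc2's root underneath doc1's <inner> element.
doc1.find("inner").append(doc2)

print(ET.tostring(doc1, encoding="unicode"))
```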
Can I add one XmlDocument within a node of another XmlDocument in C#?
[ "", "c#", "xml", "" ]
Should C# methods that *can* be static be static? We were discussing this today and I'm kind of on the fence. Imagine you have a long method that you refactor a few lines out of. The new method probably takes a few local variables from the parent method and returns a value. This means it *could* be static. The question is: *should* it be static? It's not static by design or choice, simply by its nature in that it doesn't reference any instance values.
It depends. There are really 2 types of static methods: 1. Methods that are static because they CAN be 2. Methods that are static because they HAVE to be In a small to medium size code base you can really treat the two methods interchangeably. If you have a method that is in the first category (can-be-static), and you need to change it to access class state, it's relatively straightforward to figure out if it's possible to turn the static method into an instance method. In a large code base, however, the sheer number of call sites might make searching to see if it's possible to convert a static method to a non static one too costly. Many times people will see the number of calls, and say "ok... I better not change this method, but instead create a new one that does what I need". That can result in either: 1. A lot of code duplication 2. An explosion in the number of method arguments Both of those things are bad. So, my advice would be that if you have a code base over 200K LOC, I would only make methods static if they are must-be-static methods. The refactoring from non-static to static is relatively easy (just add a keyword), so if you want to make a can-be-static into an actual static later (when you need its functionality outside of an instance) then you can. However, the inverse refactoring, turning a can-be-static into an instance method, is MUCH more expensive. With large code bases it's better to err on the side of ease of extension, rather than on the side of ideological purity. So, for big projects don't make things static unless you need them to be. For small projects, just do whatever you like best.
I would *not* make it a *public* static member of *that class*. The reason is that making it public static is saying something about the class' type: not only that "this type knows how to do this behavior", but also "it is the responsibility of this type to perform this behavior." And odds are the behavior no longer has any real relationship with the larger type. That doesn't mean I wouldn't make it static at all, though. Ask yourself this: could the new method logically belong elsewhere? If you can answer "yes" to that, you probably do want to make it static (and move it as well). Even if that's not true, you could still make it static. Just don't mark it `public`. As a matter of convenience, you could at least mark it `internal`. This typically avoids needing to move the method if you don't have easy access to a more appropriate type, but still leaves it accessible where needed in a way that it won't show up as part of the public interface to users of your class.
Should C# methods that *can* be static be static?
[ "", "c#", "static", "methods", "" ]
I had the following C++ code, where the argument to my constructor in the declaration had different constness than the definition of the constructor. ``` //testClass.hpp class testClass { public: testClass(const int *x); }; //testClass.cpp testClass::testClass(const int * const x) {} ``` I was able to compile this with no warnings using g++, should this code compile or at least give some warnings? It turns out that the built-in C++ compiler on 64 bit solaris gave me a linker error, which is how I noticed that there was an issue. What is the rule on matching arguments in this case? Is it up to compilers?
In cases like this, the const specifier is allowed to be omitted from the *declaration* because it doesn't change anything for the caller. It matters only in the context of the implementation details. So that's why it is on the *definition* but not the *declaration*. Example: ``` //Both f and g have the same signature void f(int x); void g(const int x); void f(const int x)//this is allowed { } void g(const int x) { } ``` Anyone calling f won't care that you are going to treat it as const because it is your own copy of the variable. With int \* const x, it is the same: it is your copy of the pointer. Whether you can point to something else doesn't matter to the caller. If you omitted the first const, though, in const int \* const, then that would make a difference, because it matters to the caller whether you change the data it is pointing to. **Reference: *The C++ Standard*, 8.3.5 para 3:** > "Any cv-qualifier modifying a > parameter type is deleted ... Such > cv-qualifiers affect only the > definition of the parameter within the > body of the function; they do not > affect the function type"
Think of it as the same difference between ``` //testClass.hpp class testClass { public: testClass(const int x); }; //testClass.cpp testClass::testClass(int x) {} ``` Which also compiles. You can't overload based on the const-ness of a pass-by-value parameter. Imagine this case: ``` void f(int x) { } void f(const int x) { } // Can't compile both of these. int main() { f(7); // Which gets called? } ``` From the standard: > Parameter declarations that differ > only in the presence or absence of > const and/or volatile are equivalent. > That is, the const and volatile > type-specifiers for each parameter > type are ignored when determining > which function is being declared, > defined, or called. [Example: ``` typedef const int cInt; int f (int); int f (const int); // redeclaration of f(int) int f (int) { ... } // definition of f(int) int f (cInt) { ... } // error: redefinition of f(int) ``` > —end example] Only the const and > volatile type-specifiers at the > outermost level of the parameter type > specification are ignored in this > fashion; const and volatile > type-specifiers buried within a > parameter type specification are > significant and can be used to > distinguish overloaded function > declarations.112) In particular, > for any type T, “pointer to T,” > “pointer to const T,” and “pointer to > volatile T” are considered distinct > parameter types, as are “reference to > T,” “reference to const T,” and > “reference to volatile T.”
Mismatch between constructor definition and declaration
[ "", "c++", "g++", "solaris", "" ]
On a scheduled interval I need to have one WCF service call another WCF service asynchronously. Scheduling a call to a WCF service I have worked out. What I think I need, and have read about here on Stack Overflow, is that it is necessary to prepare or change the code of your WCF services so they are able to handle an async call to them. If so, what would a simple example of that look like? (Maybe a before and after example.) Also, is it still necessary in .NET 3.5? Second, I am using a proxy from the WCF service doing the call to the next WCF service, and need a sample of an async call to a WCF service if it looks any different than what is typical with BeginInvoke and EndInvoke in typical async examples. I would believe it if I am completely off on my question, and would appreciate any correction to establish a better question as well.
Set the [IsOneWay](http://msdn.microsoft.com/en-us/library/system.servicemodel.operationcontractattribute.isoneway(v=vs.110).aspx) property of the OperationContract attribute to true on the WCF method that you are calling. This tells WCF that the call only matters in one direction, and the client won't hang around for the method to finish executing. Even when calling BeginInvoke, your client code will still hang out waiting for the server method to finish executing, but it will do it on a threadpool thread. ``` [ServiceContract] interface IWCFContract { [OperationContract(IsOneWay = true)] void CallMe() } ``` The other way to do what you want is to have the WCF service spin its work off onto a background thread and return immediately.
Be sure to carefully test the way a OneWay WCF call performs. I've seen it stall when you reach X number of simultaneous calls, as if WCF actually does wait for the call to end. A safer solution is to have the "target" code return control ASAP: Instead of letting it process the call fully, make it only put the data into a queue and return. Have another thread poll that queue and work on the data asynchronously. And be sure to apply a thread safety mechanism to avoid clashes between the two threads working on that queue.
Need sample fire and forget async call to WCF service
[ "", "c#", "wcf", "asynchronous", "" ]
Is there any way to redirect the output sound back to recording input through C# without the use of any cable to do it? Thanks
I don't know whether there's a way to do this programmatically for just your app (there might be ways to hook into the media pipeline to do this, but how to do that is beyond my ken). But if you don't need programmatic access, there are separate tools that can record any Windows audio output. I've heard good things about [Total Recorder](http://www.highcriteria.com/).
Most audio mixer drivers (the sound card/peripheral has a built-in software controlled audio mixer) have the ability to route the speaker output to a recording channel. I don't know if the sound library in C# supports this natively, but you might check out DirectSound. This feature is very useful for echoes and other sound loopback features that are becoming common in some audio software, so the hardware should be able to manage it, but you may have to dig into obscure DLLs if you can't find it in DirectSound or similar audio libraries.
Redirecting sound in Windows Vista/XP
[ "", "c#", "audio", "windows-vista", "windows-xp", "" ]
I've seen [this](https://stackoverflow.com/questions/159456/pivot-table-and-concatenate-columns-sql-problem#159803), so I know how to create a pivot table with a dynamically generated set of fields. My problem now is that I'd like to get the results into a temporary table. I know that in order to get the result set into a temp table from an **EXEC** statement you need to predefine the temp table. In the case of a dynamically generated pivot table, there is no way to know the fields beforehand. The only way I can think of to get this type of functionality is to create a permanent table using dynamic SQL. Is there a better way?
you could do this: ``` -- add 'loopback' linkedserver if exists (select * from master..sysservers where srvname = 'loopback') exec sp_dropserver 'loopback' go exec sp_addlinkedserver @server = N'loopback', @srvproduct = N'', @provider = N'SQLOLEDB', @datasrc = @@servername go declare @myDynamicSQL varchar(max) select @myDynamicSQL = 'exec sp_who' exec(' select * into #t from openquery(loopback, ''' + @myDynamicSQL + '''); select * from #t ') ``` EDIT: addded dynamic sql to accept params to openquery
Ran in to this issue today, and posted on my [blog](https://knarfalingus.net/2016/02/09/database/mssql/dynamic-sql-pivot-into-temp-table/). Short description of solution, is to create a temporary table with one column, and then ALTER it dynamically using sp\_executesql. Then you can insert the results of the dynamic PIVOT into it. Working example below. ``` CREATE TABLE #Manufacturers ( ManufacturerID INT PRIMARY KEY, Name VARCHAR(128) ) INSERT INTO #Manufacturers (ManufacturerID, Name) VALUES (1,'Dell') INSERT INTO #Manufacturers (ManufacturerID, Name) VALUES (2,'Lenovo') INSERT INTO #Manufacturers (ManufacturerID, Name) VALUES (3,'HP') CREATE TABLE #Years (YearID INT, Description VARCHAR(128)) GO INSERT INTO #Years (YearID, Description) VALUES (1, '2014') INSERT INTO #Years (YearID, Description) VALUES (2, '2015') GO CREATE TABLE #Sales (ManufacturerID INT, YearID INT,Revenue MONEY) GO INSERT INTO #Sales (ManufacturerID, YearID, Revenue) VALUES(1,2,59000000000) INSERT INTO #Sales (ManufacturerID, YearID, Revenue) VALUES(2,2,46000000000) INSERT INTO #Sales (ManufacturerID, YearID, Revenue) VALUES(3,2,111500000000) INSERT INTO #Sales (ManufacturerID, YearID, Revenue) VALUES(1,1,55000000000) INSERT INTO #Sales (ManufacturerID, YearID, Revenue) VALUES(2,1,42000000000) INSERT INTO #Sales (ManufacturerID, YearID, Revenue) VALUES(3,1,101500000000) GO DECLARE @SQL AS NVARCHAR(MAX) DECLARE @PivotColumnName AS NVARCHAR(MAX) DECLARE @TempTableColumnName AS NVARCHAR(MAX) DECLARE @AlterTempTable AS NVARCHAR(MAX) --get delimited column names for various SQL statements below SELECT -- column names for pivot @PivotColumnName= ISNULL(@PivotColumnName + N',',N'') + QUOTENAME(CONVERT(NVARCHAR(10),YearID)), -- column names for insert into temp table @TempTableColumnName = ISNULL(@TempTableColumnName + N',',N'') + QUOTENAME('Y' + CONVERT(NVARCHAR(10),YearID)), -- column names for alteration of temp table @AlterTempTable = ISNULL(@AlterTempTable + N',',N'') + QUOTENAME('Y' + 
CONVERT(NVARCHAR(10),YearID)) + ' MONEY' FROM (SELECT DISTINCT [YearID] FROM #Sales) AS Sales CREATE TABLE #Pivot ( ManufacturerID INT ) -- Thats it! Because the following step will flesh it out. SET @SQL = 'ALTER TABLE #Pivot ADD ' + @AlterTempTable EXEC sp_executesql @SQL --execute the dynamic PIVOT query into the temp table SET @SQL = N' INSERT INTO #Pivot (ManufacturerID, ' + @TempTableColumnName + ') SELECT ManufacturerID, ' + @PivotColumnName + ' FROM #Sales S PIVOT(SUM(Revenue) FOR S.YearID IN (' + @PivotColumnName + ')) AS PivotTable' EXEC sp_executesql @SQL SELECT M.Name, P.* FROM #Manufacturers M INNER JOIN #Pivot P ON M.ManufacturerID = P.ManufacturerID ```
Getting a Dynamically-Generated Pivot-Table into a Temp Table
[ "", "sql", "sql-server", "database", "pivot", "" ]
We use grep, cut, sort, uniq, and join at the command line all the time to do data analysis. They work great, although there are shortcomings. For example, you have to give column numbers to each tool. We often have wide files (many columns) and a column header that gives column names. In fact, our files look a lot like SQL tables. I'm sure there is a driver (ODBC?) that will operate on delimited text files, and some query engine that will use that driver, so we could just use SQL queries on our text files. Since doing analysis is usually ad hoc, it would have to be minimal setup to query new files (just use the files I specify in this directory) rather than declaring particular tables in some config. Practically speaking, what's the easiest? That is, the SQL engine and driver that is easiest to set up and use to apply against text files?
Riffing off someone else's suggestion, here is a Python script for sqlite3. A little verbose, but it works. I don't like having to completely copy the file to drop the header line, but I don't know how else to convince sqlite3's .import to skip it. I could create INSERT statements, but that seems just as bad if not worse. Sample invocation: ``` $ sql.py --file foo --sql "select count(*) from data" ``` The code: ``` #!/usr/bin/env python """Run a SQL statement on a text file""" import os import sys import getopt import tempfile import re class Usage(Exception): def __init__(self, msg): self.msg = msg def runCmd(cmd): if os.system(cmd): print "Error running " + cmd sys.exit(1) # TODO(dan): Return actual exit code def usage(): print >>sys.stderr, "Usage: sql.py --file file --sql sql" def main(argv=None): if argv is None: argv = sys.argv try: try: opts, args = getopt.getopt(argv[1:], "h", ["help", "file=", "sql="]) except getopt.error, msg: raise Usage(msg) except Usage, err: print >>sys.stderr, err.msg print >>sys.stderr, "for help use --help" return 2 filename = None sql = None for o, a in opts: if o in ("-h", "--help"): usage() return 0 elif o in ("--file"): filename = a elif o in ("--sql"): sql = a else: print "Found unexpected option " + o if not filename: print >>sys.stderr, "Must give --file" sys.exit(1) if not sql: print >>sys.stderr, "Must give --sql" sys.exit(1) # Get the first line of the file to make a CREATE statement # # Copy the rest of the lines into a new file (datafile) so that # sqlite3 can import data without header. If sqlite3 could skip # the first line with .import, this copy would be unnecessary. 
foo = open(filename) datafile = tempfile.NamedTemporaryFile() first = True for line in foo.readlines(): if first: headers = line.rstrip().split() first = False else: print >>datafile, line, datafile.flush() #print datafile.name #runCmd("cat %s" % datafile.name) # Create columns with NUMERIC affinity so that if they are numbers, # SQL queries will treat them as such. create_statement = "CREATE TABLE data (" + ",".join( map(lambda x: "`%s` NUMERIC" % x, headers)) + ");" cmdfile = tempfile.NamedTemporaryFile() #print cmdfile.name print >>cmdfile,create_statement print >>cmdfile,".separator ' '" print >>cmdfile,".import '" + datafile.name + "' data" print >>cmdfile, sql + ";" cmdfile.flush() #runCmd("cat %s" % cmdfile.name) runCmd("cat %s | sqlite3" % cmdfile.name) if __name__ == "__main__": sys.exit(main()) ```
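The script above shells out to the sqlite3 command-line binary and copies the file to skip the header. As a sketch of the same idea (not part of the original answer), Python's built-in sqlite3 module can do the whole thing in-process, with no temp files; the NUMERIC column affinity mirrors the script's CREATE statement, and the whitespace-delimited-with-header format is assumed:

```python
import sqlite3

def query_file(lines, sql):
    """Load whitespace-delimited lines (first line is the header) into
    an in-memory SQLite table named 'data', then run the given SQL."""
    header, *rows = [line.split() for line in lines if line.strip()]
    con = sqlite3.connect(":memory:")
    # NUMERIC affinity so numeric-looking strings compare as numbers.
    columns = ", ".join('"%s" NUMERIC' % c for c in header)
    con.execute("CREATE TABLE data (%s)" % columns)
    placeholders = ", ".join("?" for _ in header)
    con.executemany("INSERT INTO data VALUES (%s)" % placeholders, rows)
    return con.execute(sql).fetchall()

# A tiny two-column "file":
sample = ["name value", "a 1", "b 2", "a 3"]
print(query_file(sample, "SELECT COUNT(*) FROM data"))  # [(3,)]
```

In a real script the `lines` argument would come from `open(filename)`.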
q - Run SQL directly on CSV or TSV files: <https://github.com/harelba/q>
SQL query engine for text files on Linux?
[ "", "sql", "command-line", "text", "" ]
One of my favorite things about owning a USB flash storage device is hauling around a bunch of useful tools with me. I'd like to write some tools, and make them work well in this kind of environment. I know C# best, and I'm productive in it, so I could get a windows forms application up in no time that way. But what considerations should I account for in making a portable app? A few I can think of, but don't know answers to: 1) Language portability - Ok, I know that any machine I use it on will require a .NET runtime be installed. But as I only use a few windows machines regularly, this shouldn't be a problem. I could use another language to code it, but then I lose out on productivity especially in regards to an easy forms designer. Are there any other problems with running a .NET app from a flash drive? 2) Read/Write Cycles - In C#, how do I make sure that my application isn't writing unnecessarily to the drive? Do I always have control of writes, or are there any "hidden writes" that I need to account for? 3) Open question: are there any other issues relating to portable applications I should be aware of, or perhaps suggestions to other languages with good IDEs that would get me a similar level of productivity but better portability?
* 1) There shouldn't be any problems running a .NET app from a flash drive. * 2) You should have control of most writes. Be sure you write to temp or some other location on the hard drive, and not on the flash drive. But write cycles shouldn't be a problem - even with moderate to heavy usage most flash drives have a lifetime of years. * 3) Just treat it like it's any app that has xcopy-style deployment, and try to account for your app gracefully failing if some dependency is not on the box.
If you want to use COM objects, use registration-free COM and include the COM objects with your program.
Writing USB Drive Portable Applications in C#
[ "", "c#", "usb", "portability", "" ]
Is it possible to get the path to my .class file containing my main function from within main?
``` URL main = Main.class.getResource("Main.class"); if (!"file".equalsIgnoreCase(main.getProtocol())) throw new IllegalStateException("Main class is not stored in a file."); File path = new File(main.getPath()); ``` Note that most class files are assembled into JAR files so this won't work in every case (hence the `IllegalStateException`). However, you can locate the JAR that contains the class with this technique, and you can get the content of the class file by substituting a call to `getResourceAsStream()` in place of `getResource()`, and that will work whether the class is on the file system or in a JAR.
According to <http://www.cs.caltech.edu/courses/cs11/material/java/donnie/java-main.html>, no. However, I suggest reading [$0 (Program Name) in Java? Discover main class?](https://stackoverflow.com/questions/41894/0-program-name-in-java-discover-main-class) , which at least gives you the main class .
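A related technique (not from either answer above, and with the same JAR caveat) asks the class's protection domain for its code source, which yields the JAR file or class-root directory the class was loaded from rather than the individual .class file:

```java
// Prints the location (directory or JAR) a class was loaded from.
// Note: getCodeSource() may return null for bootstrap classes such
// as java.lang.String.
public class WhereAmI {
    public static String codeLocation(Class<?> cls) {
        java.security.CodeSource src =
                cls.getProtectionDomain().getCodeSource();
        return src == null ? null : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(codeLocation(WhereAmI.class));
    }
}
```

From the code-source URL you can then build a `java.io.File` and append the package path to reach the .class file itself, when it is not inside a JAR.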
Getting directory path to .class file containing main
[ "", "java", "file", "path", "" ]
I would like to create a delegate and a method that can be used to call any number of Web services that my application requires: Example: ``` public DateCheckResponseGetDate(DateCheckRequest requestParameters) { delegate object WebMethodToCall(object methodObject); WebMethodToCall getTheDate = new WebMethodToCall(WebServices.GetTheDate); return (DateCheckResponse)CallWebMethod(getTheDate , requestParameters); } public TimeCheckResponse GetTime(TimeCheckRequest requestParameters) { delegate object WebMethodToCall(object methodObject); WebMethodToCall getTheTime = new WebMethodToCall(WebServices.GetTheTime); return (TimeCheckResponse)CallWebMethod(getTheTime, requestParameters); } private object CallWebMethod(WebMethodToCall method, object methodObject) { return method(methodObject); } ``` But, unfortunately, when I try to compile, I get these errors: > No overload for 'GetTheDate' matches delegate 'WebMethodToCall' > No overload for 'GetTheTime' matches delegate 'WebMethodToCall' It seems like the delegate should work. WebServices.GetTheDate and WebServices.GetTheTime both take a single parameter (DateCheckRequest and TimeCheckRequest, respectively) and both return a value. So doesn't the delegate match the signature of the two web methods? (both accept and return types derived from object). Is it possible to use the object type to make a very reusable delegate in .NET 2.0?
I suggest you use a generic delegate such as `Func<T, TResult>`: ``` public DateCheckResponseGetDate(DateCheckRequest requestParameters) { // Split declaration/assignment just to avoid wrapping Func<DateCheckRequest, DateCheckResponse> method; method = WebServices.GetTheDate; return CallWebMethod(method, requestParameters); } ``` You'd then make `CallWebMethod` generic too: ``` public TResponse CallWebMethod<TRequest, TResponse> (Func<TRequest, TResponse> method, TRequest request) { // Whatever you do in here. } ```
I'd suggest you change your code to something like: ``` public DateCheckResponse GetDate(DateCheckRequest requestParameters) { Func<DateCheckRequest, DateCheckResponse> getTheDate = new Func<DateCheckRequest, DateCheckResponse>(WebServices.GetTheDate); return CallWebMethod(getTheDate, requestParameters); } //DEFINE CallWebMethod ADEQUATELY! public U CallWebMethod<T,U> (Func<T,U> webMethod, T arg) { return webMethod(arg); } ``` This way you can avoid all of the ugly downcasting :)
Can a C# delegate use the object type to be more generic?
[ "", "c#", "functional-programming", "delegates", "type-conversion", "" ]
I can successfully run a new process using ShellExecuteEx, but if I access the HasExited property, a Win32Exception is thrown. What is this exception and how do I check if the process is alive? ``` ShellExecuteEx(ref info); clientProcessId = NativeMethods.GetProcessId(info.hProcess); Process clientProcess = Process.GetProcessById((int)clientProcessId); if (clientProcess.HasExited) //<---- throws Win32Exception, but the process is alive! I can see its window { //run new one } ``` Thanks
For info, did you set fMask to `SEE_MASK_NOCLOSEPROCESS`, so that hProcess is valid? Also, why are you P/Invoke'ing to ShellExecuteEx? Why not use Process.Start with a ProcessStartInfo, which will handle ShellExecuteEx for you? EDIT: Since you are doing runas, you only get SYNCHRONIZE access on the handle, not `PROCESS_QUERY_INFORMATION` access, hence GetExitCodeProcess fails, which results in HasExited throwing a Win32 exception. As a workaround, you could P/Invoke WaitForSingleObject with a timeout of zero to see if the process has exited.
I faced this problem today. There is another solution: the WaitForExit function runs fine in another user's environment. So if you do a runas to elevate the process's privileges, you can query the process status with this function.
Why did hasExited throw 'System.ComponentModel.Win32Exception'?
[ "", "c#", "process", "" ]
I am trying to change the class of an element using JavaScript. So far I'm doing: ``` var link = document.getElementById("play_link"); link.className = "play_button"; ``` ***edit***: here is the actual code that replaces the class name. In the HTML: ``` <a href="#play_link_<%=i%>" id="play_link_<%=i%>" onclick="changeCurrentTo(<%=i%>);return false;" class="play_button"></a> ``` In the JavaScript: ``` function changeCurrentTo(id){ activatePlayButton(current_track); current_track = id; inactivatePlayButton(current_track); } function inactivatePlayButton(id){ document.getElementById("recording_"+id).style.backgroundColor="#F7F2D1"; var link = document.getElementById("play_link_"+id); link.className="stop_button"; link.onclick = function(){stopPlaying();return false;}; } function activatePlayButton(id){ document.getElementById("recording_"+id).style.backgroundColor=""; var link = document.getElementById("play_link_"+id); link.className = "play_button"; var temp = id; link.onclick = function(){changeCurrentTo(temp);return false;}; } ``` with ``` .play_button{ background:url(/images/small_play_button.png) no-repeat; width:25px; height:24px; display:block; } ``` the old class is ``` .stop_button{ background:url(/images/small_stop_button.png) no-repeat; width:25px; height:24px; display:block; } ``` The context is a music player. When you click the play button (triangle) it turns into a stop button (square) and the click handler is replaced. The problem is that the class gets changed, but in IE6 and 7 the new background (here /images/small_play_button.png) does not display right away. Sometimes it doesn't display at all. Sometimes it doesn't display, but if I shake the mouse a bit then it appears. It works perfectly in FF, Chrome, Opera and Safari, so it's an IE bug. I know it's hard to tell right away from only this information, but any pointers and directions would be helpful. Thanks!
You should create *one* image with a width of `50px` and a height of `24px` where you have both the play part and the stop part. Then you just adjust the background position like this: ``` .button { background-image: url(/images/small_buttons.png); background-repeat: no-repeat; width: 25px; height: 24px; display: block; } .play_button { background-position: left top; } .stop_button { background-position: right top; } ``` Then you load "both images" at the same time, and no delay will happen when you change which part of the image gets displayed. Note that I have made a new CSS class so that you don't need to repeat your CSS for different buttons. You now need to apply two classes on your element. Example: ``` <div class="button play_button"></div> ```
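On the JavaScript side, the sprite approach only requires swapping the modifier class while the shared `button` class stays put. A minimal sketch (the class names come from the example above; the plain object stands in for a real DOM element):

```javascript
// Swap a sprite-backed button between its play and stop states by
// replacing only the modifier class; the shared "button" class stays.
function toggleButton(el) {
  if (el.className.indexOf("play_button") !== -1) {
    el.className = el.className.replace("play_button", "stop_button");
  } else {
    el.className = el.className.replace("stop_button", "play_button");
  }
}

// Minimal stand-in for a DOM element, for demonstration:
var el = { className: "button play_button" };
toggleButton(el);
console.log(el.className); // "button stop_button"
```

Because both sprite halves live in one already-loaded image, IE has nothing new to fetch when the class changes.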
You need to use setAttribute in your two functions. Try this out: `link.setAttribute((document.all ? "className" : "class"), "play_button");` `link.setAttribute((document.all ? "className" : "class"), "stop_button");`
IE6/7 And ClassName (JS/HTML)
[ "", "javascript", "html", "internet-explorer", "xhtml", "classname", "" ]
I overloaded the `[]` operator in my class. Is there a nicer way to call this function from within my class other than `(*this)[i]`?
Add a function `at(size_t i)` and use this function. **EDIT**: If you are actively using the STL, avoid a semantic inconsistency: in `std::vector`, `operator[]` does not check whether the index is valid, but `at(..)` checks and can throw a `std::out_of_range` exception. In a project that uses the STL heavily, similar behavior will be expected from your class. Maybe this name is not the best one for this function, then.
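A sketch of that suggestion, using a hypothetical fixed-size class: `at()` simply forwards to the overloaded operator, and adds the range check that users of `std::vector::at` would expect:

```cpp
#include <cstddef>
#include <stdexcept>

class IntBuffer {
public:
    static const std::size_t size = 10;

    // Unchecked access, like std::vector::operator[].
    int& operator[](std::size_t i) { return data_[i]; }

    // Checked access: nicer to call internally than (*this)[i],
    // and consistent with std::vector::at.
    int& at(std::size_t i) {
        if (i >= size) throw std::out_of_range("IntBuffer::at");
        return (*this)[i];
    }

private:
    int data_[size];
};
```

Inside member functions you can then write `at(i)` instead of `(*this)[i]`.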
Well, you could use `operator[](i)`.
(*this)[i] after overloading [] operator?
[ "", "c++", "syntax", "" ]
Here's my script: ``` #!/usr/bin/python import smtplib msg = 'Hello world.' server = smtplib.SMTP('smtp.gmail.com',587) #port 465 or 587 server.ehlo() server.starttls() server.ehlo() server.login('myname@gmail.com','mypass') server.sendmail('myname@gmail.com','somename@somewhere.com',msg) server.close() ``` I'm just trying to send an email from my gmail account. The script uses starttls because of gmail's requirement. I've tried this on two web hosts, 1and1 and webfaction. 1and1 gives me a 'connection refused' error and webfaction reports no error but just doesn't send the email. I can't see anything wrong with the script, so I'm thinking it might be related to the web hosts. Any thoughts and comments would be much appreciated. EDIT: I turned on debug mode. From the output, it looks like it sent the message successfully...I just never receive it. ``` send: 'ehlo web65.webfaction.com\r\n' reply: '250-mx.google.com at your service, [174.133.21.84]\r\n' reply: '250-SIZE 35651584\r\n' reply: '250-8BITMIME\r\n' reply: '250-STARTTLS\r\n' reply: '250-ENHANCEDSTATUSCODES\r\n' reply: '250 PIPELINING\r\n' reply: retcode (250); Msg: mx.google.com at your service, [174.133.21.84] SIZE 35651584 8BITMIME STARTTLS ENHANCEDSTATUSCODES PIPELINING send: 'STARTTLS\r\n' reply: '220 2.0.0 Ready to start TLS\r\n' reply: retcode (220); Msg: 2.0.0 Ready to start TLS send: 'ehlo web65.webfaction.com\r\n' reply: '250-mx.google.com at your service, [174.133.21.84]\r\n' reply: '250-SIZE 35651584\r\n' reply: '250-8BITMIME\r\n' reply: '250-AUTH LOGIN PLAIN\r\n' reply: '250-ENHANCEDSTATUSCODES\r\n' reply: '250 PIPELINING\r\n' reply: retcode (250); Msg: mx.google.com at your service, [174.133.21.84] SIZE 35651584 8BITMIME AUTH LOGIN PLAIN ENHANCEDSTATUSCODES PIPELINING send: 'AUTH PLAIN *****\r\n' reply: '235 2.7.0 Accepted\r\n' reply: retcode (235); Msg: 2.7.0 Accepted send: 'mail FROM:<myname@gmail.com> size=12\r\n' reply: '250 2.1.0 OK 4sm652580yxq.48\r\n' reply: retcode (250); Msg: 2.1.0 OK 
4sm652580yxq.48 send: 'rcpt TO:<myname@gmail.com>\r\n' reply: '250 2.1.5 OK 4sm652580yxq.48\r\n' reply: retcode (250); Msg: 2.1.5 OK 4sm652580yxq.48 send: 'data\r\n' reply: '354 Go ahead 4sm652580yxq.48\r\n' reply: retcode (354); Msg: Go ahead 4sm652580yxq.48 data: (354, 'Go ahead 4sm652580yxq.48') send: 'Hello world.\r\n.\r\n' reply: '250 2.0.0 OK 1240421143 4sm652580yxq.48\r\n' reply: retcode (250); Msg: 2.0.0 OK 1240421143 4sm652580yxq.48 data: (250, '2.0.0 OK 1240421143 4sm652580yxq.48') ```
I think that the GMail SMTP server does a reverse DNS lookup on the IP address that you connect from, and refuses the connection if no domain can be found. This is to avoid spammer from using their SMTP server as an open relay.
Some self-promotion here, but I feel on a valid ground. You would literally only need this code to do exactly what you wrote: ``` import yagmail yag = yagmail.SMTP('myname@gmail.com') yag.send('somename@somewhere.com', subject = None, contents = 'Hello') ``` Or a one liner: ``` yagmail.SMTP('myname@gmail.com').send('somename@somewhere.com', None, 'Hello world.') ``` What is nice is that I propose to use keyring to store your password, so you never have a risk of people seeing your password in your script. You can set this up by running once in your interpreter: ``` import yagmail yagmail.register("my@gmail.com", "mypassword") ``` and exit. Then you can just use: ``` import yagmail yagmail.SMTP("my@gmail.com") # without password ``` If you add .yagmail with "my@gmail.com" in your home dir, then you can just do: `yagmail.SMTP()`, but that's rather pointless by now. Warning: If you get serious about sending a lot of messages, better set up OAuth2, yagmail can help with that. ``` yagmail.SMTP("my@gmail.com", oauth2_file="/path/to/save/creds.json") ``` The first time ran, it will guide you through the process of getting OAuth2 credentials and store them in the file so that next time you don't need to do anything with it. Do you suspect someone found your credentials? They'll have limited permissions, but you better invalidate their credentials through gmail. For the package/installation please look at [git](https://github.com/kootenpv/yagmail) or [readthedocs](https://yagmail.readthedocs.io), available for both Python 2 and 3.
smtplib and gmail - python script problems
[ "", "python", "smtp", "gmail", "smtplib", "" ]
I have a list of strings that should be unique. I want to be able to check for duplicates quickly. Specifically, I'd like to be able to take the original list and produce a new list containing any repeated items. I don't care how many times the items are repeated so it doesn't have to have a word twice if there are two duplicates. Unfortunately, I can't think of a way to do this that wouldn't be clunky. Any suggestions? EDIT: Thanks for the answers and I thought I'd make a clarification. I'm not concerned with having a list of uniques for it's own sake. I'm generating the list based off of text files and I want to know what the duplicates are so I can go in the text files and remove them if any show up.
This code should work: ``` duplicates = set() found = set() for item in source: if item in found: duplicates.add(item) else: found.add(item) ```
`groupby` from [itertools](http://docs.python.org/library/itertools.html) will probably be useful here: ``` from itertools import groupby duplicated=[k for (k,g) in groupby(sorted(l)) if len(list(g)) > 1] ``` Basically you use it to find elements that appear more than once... NB. the call to `sorted` is needed, as `groupby` only works properly if the input is sorted.
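`collections.Counter` (added in Python 2.7/3.1) gives a third option that avoids the sort entirely — a sketch, not from either answer:

```python
from collections import Counter

def duplicates(items):
    """Return each item that appears more than once, listed once."""
    return [item for item, count in Counter(items).items() if count > 1]

words = ["apple", "pear", "apple", "plum", "pear"]
print(sorted(duplicates(words)))  # ['apple', 'pear']
```

This runs in linear time, so it also suits the use case of scanning large text files for repeats.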
In Python, how do I take a list and reduce it to a list of duplicates?
[ "", "python", "" ]
We have a database currently sitting on 15000 RPM drives that is simply a logging database and we want to move it to 10000 RPM drives. While we can easily detach the database, move the files and reattach, that would cause a minor outage that we're trying to avoid. So we're considering using `DBCC ShrinkFile with EMPTYFILE`. We'll create a data and a transaction file on the 10000 RPM drive slightly larger than the existing files on the 15000 RPM drive and then execute the `DBCC ShrinkFile with EMPTYFILE` to migrate the data. What kind of impact will that have?
I've tried this and had mixed luck. I've had instances where the file couldn't be emptied because it was the primary file in the primary filegroup, but I've also had instances where it's worked completely fine. It does hold huge locks in the database while it's working, though. If you're trying to do it on a live production system that's got end user queries running, forget it. They're going to have problems because it'll take a while.
Why not use log shipping? Create the new database on the 10,000 RPM disks. Set up log shipping from the DB on the 15K RPM drives to the DB on the 10K RPM drives. When both DBs are in sync, stop log shipping and switch over to the database on the 10K RPM drives.
Performance Impact of Empty file by migrating the data to other files in the same filegroup
[ "", "sql", "sql-server-2005", "sqlperformance", "" ]
Please explain what is meant by tuples in SQL? Thanks.
Most of the answers here are on the right track. However, a **row is not a tuple**. **Tuples**`*` are *unordered* sets of known values with names. Thus, the following tuples are the same thing (I'm using an imaginary tuple syntax since a relational tuple is largely a theoretical construct):

```
(x=1, y=2, z=3)
(z=3, y=2, x=1)
(y=2, z=3, x=1)
```

...assuming of course that x, y, and z are all integers. Also note that there is no such thing as a "duplicate" tuple. Thus, not only are the above equal, they're the *same thing*. Lastly, tuples can only contain known values (thus, no nulls). A **row**`**` is an ordered set of known or unknown values with names (although they may be omitted). Therefore, the following comparisons return false in SQL:

```
(1, 2, 3) = (3, 2, 1)
(3, 1, 2) = (2, 1, 3)
```

Note that there are ways to "fake it" though. For example, consider this `INSERT` statement:

```
INSERT INTO point VALUES (1, 2, 3)
```

Assuming that x is first, y is second, and z is third, this query may be rewritten like this:

```
INSERT INTO point (x, y, z) VALUES (1, 2, 3)
```

Or this:

```
INSERT INTO point (y, z, x) VALUES (2, 3, 1)
```

...but all we're really doing is changing the ordering rather than removing it. And also note that there may be unknown values as well. Thus, you may have rows with unknown values:

```
(1, 2, NULL) = (1, 2, NULL)
```

...but note that this comparison will always yield `UNKNOWN`. After all, how can you know whether two unknown values are equal? And lastly, rows may be duplicated. In other words, `(1, 2)` and `(1, 2)` may compare to be equal, but that doesn't necessarily mean that they're the same thing. If this is a subject that interests you, I'd highly recommend reading [SQL and Relational Theory: How to Write Accurate SQL Code](https://rads.stackoverflow.com/amzn/click/com/0596523068) by CJ Date.

`*` Note that I'm talking about tuples as they exist in the relational model, which is a bit different from mathematics in general.

`**` And just in case you're wondering, just about everything in SQL is a row or table. Therefore, `(1, 2)` is a row, while `VALUES (1, 2)` is a table (with one row). **UPDATE**: I've expanded a little bit on this answer in a blog post [here](http://jasonmbaker.wordpress.com/2009/07/05/the-relational-model-of-tuples-relations-rows-and-tables/).
It's a shortened "`N-tuple`" (like in `quadruple`, `quintuple` etc.). It's a row of a rowset taken as a whole. If you issue:

```
SELECT col1, col2
FROM mytable
```

, the whole result will be a `ROWSET`, and each pair of `col1, col2` will be a `tuple`. Some databases can work with a tuple as a whole. Like, you can do this:

```
SELECT col1, col2
FROM mytable
WHERE (col1, col2) =
    (
    SELECT col3, col4
    FROM othertable
    )
```

, which checks that a whole `tuple` from one `rowset` matches a whole `tuple` from another `rowset`.
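The unordered-with-names versus ordered distinction drawn above can be mimicked in Python, purely as an analogy (these are Python objects, not relational constructs):

```python
# Relational tuple: unordered, named, known values -> closest Python analogue is a dict
t1 = {"x": 1, "y": 2, "z": 3}
t2 = {"z": 3, "y": 2, "x": 1}
assert t1 == t2  # the "same tuple": the order of the named values is irrelevant

# SQL row: ordered values -> closest Python analogue is a tuple
r1 = (1, 2, 3)
r2 = (3, 2, 1)
assert r1 != r2  # different rows: position carries the meaning
```

The analogy breaks down for nulls (Python `None` compares equal to itself, while SQL `NULL = NULL` yields `UNKNOWN`), which is exactly the point the answer makes about rows with unknown values.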
What does the term "Tuple" Mean in Relational Databases?
[ "", "sql", "terminology", "rdbms", "definition", "tuples", "" ]
I'm trying to get this function to work, which does a request for parameter `url` then sends the responseText to `callback`, which is a function. It seems that it only gets to `readyState 1` (thanks to the Firebug commands). Here it is:

```
function Request(url, callback){
    if (window.XMLHttpRequest) { // Mozilla, Safari, ...
        httpRequest = new XMLHttpRequest();
    }
    else if (window.ActiveXObject) { // IE
        httpRequest = new ActiveXObject("Microsoft.XMLHTTP");
    }
    else {
        return false;
    }
    httpRequest.onreadystatechange = function(){
        console.log(httpRequest.readyState);
        if (httpRequest.readyState == 4) {
            callback(httpRequest.responseText);
        }
    };
    console.log(httpRequest, url);
    httpRequest.open('GET', url, true);
    httpRequest.send(null);
}
```
I worked around this problem by assigning an onload event instead of onreadystatechange:

```
function Request(url, callback){
    if (window.XMLHttpRequest) { // Mozilla, Safari, ...
        httpRequest = new XMLHttpRequest();
    }
    else if (window.ActiveXObject) { // IE
        httpRequest = new ActiveXObject("Microsoft.XMLHTTP");
    }
    else {
        return false;
    }
    var readyStateChange = function(){
        console.log(httpRequest.readyState);
        if (httpRequest.readyState == 4) {
            callback(httpRequest.responseText);
        }
    };
    // isFirefox and firefoxVersion are assumed to be defined elsewhere
    if (isFirefox && firefoxVersion > 3) {
        httpRequest.onload = readyStateChange;
    }
    else {
        httpRequest.onreadystatechange = readyStateChange;
    }
    console.log(httpRequest, url);
    httpRequest.open('GET', url, true);
    httpRequest.send(null);
}
```
Check that the URL in question does actually respond by visiting it directly in the browser. Test with a different browser to see if you get the same result. Use some form of HTTP monitor to watch the client to server conversation (my favorite is [Fiddler](http://www.fiddlertool.com/fiddler)).
Ajax won't get past readyState 1, why?
[ "", "javascript", "ajax", "readystate", "" ]
I have to insert some records in a table in a legacy database and, since it's used by other ancient systems, changing the table is not a solution. The problem is that the target table has an int primary key but no identity specification. So I have to find the next available ID and use that:

```
select @id=ISNULL(max(recid)+1,1) from subscriber
```

However, I want to prevent other applications from inserting into the table while I'm doing this so that we don't have any problems. I tried this:

```
begin transaction
declare @id as int
select @id=ISNULL(max(recid)+1,1) from subscriber WITH (HOLDLOCK, TABLOCK)
select @id
WAITFOR DELAY '00:00:01'
insert into subscriber (recid) values (@id)
commit transaction
select * from subscriber
```

in two different windows in SQL Management Studio and the one transaction is always killed as a deadlock victim. I also tried `SET TRANSACTION ISOLATION LEVEL SERIALIZABLE` first with the same result... Any good suggestions for how I can ensure that I get the next id and use that without risking that someone else (or me!) is getting hosed? Sorry for not mentioning this earlier, but this is a SQL 2000 server so I can't use things like FOR UPDATE and OUTPUT.

## **UPDATE**: This is the solution that worked for me:

```
BEGIN TRANSACTION
DECLARE @id int
SELECT @id=recid FROM identities WITH (UPDLOCK, ROWLOCK) WHERE table_name = 'subscriber'
waitfor delay '00:00:06'
INSERT INTO subscriber (recid) values (@id)
UPDATE identities SET recid=recid+1 WHERE table_name = 'subscriber'
COMMIT transaction
select * from subscriber
```

The WAITFOR is so that I can have multiple connections and start the query several times to provoke concurrency. Thanks to Quassnoi for the answer and to all you other guys who contributed! Awesome!
Create another table:

```
t_identity (id INT NOT NULL PRIMARY KEY CHECK (id = 1), value INT NOT NULL)
```

with a single row, lock this row, and increment `value` by one each time you need an `IDENTITY`. To lock, increment, and return the new value in a single statement, use:

```
UPDATE t_identity
SET value = value + 1
OUTPUT INSERTED.value
```

If you don't want to update, just lock, then issue:

```
SELECT value
FROM t_identity WITH (UPDLOCK, ROWLOCK)
```

This will lock the table until the end of the transaction. If you always first lock `t_identity` before messing with `ancient_table`, you will never get a deadlock.
Add another table with an identity column and use this new table and column to select/generate your identity values for the old table. **Update**: Depending on the frequency of INSERTs (and the number of existing rows ***e***) you could seed your new IDENTITY values at ***e+x*** where ***x*** is sufficiently large. This would avoid conflict with the legacy inserts. A sad solution, an imperfect one for sure, but something to think about?
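Both answers boil down to the same pattern: serialize access to a counter, increment it, and hand out the value. An in-process Python sketch of that pattern using a mutex (an analogy for illustration only; it does not replace the SQL Server locking hints above):

```python
import threading

class IdAllocator:
    """In-process analogue of the t_identity counter row: lock, increment, return."""
    def __init__(self, start=1):
        self._next = start
        self._lock = threading.Lock()

    def next_id(self):
        with self._lock:  # plays the role of UPDLOCK on the counter row
            value = self._next
            self._next += 1
            return value

alloc = IdAllocator()
ids = []

def worker():
    for _ in range(1000):
        ids.append(alloc.next_id())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert len(ids) == len(set(ids)) == 4000  # no duplicate ids handed out
```

The key property, mirrored by the accepted SQL solution, is that read-and-increment happens atomically under one lock, so two concurrent callers can never see the same value.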
Best way to get the next id number without "identity"
[ "", "sql", "sql-server", "transactions", "locking", "deadlock", "" ]
I have a Set class (This is J2ME, so I have limited access to the standard API; just to explain my apparent wheel-reinvention). I am using my Set class to create constant sets of things in classes and subclasses. It sort of looks like this...

```
class ParentClass {
    protected final static Set THE_SET = new Set() {{
        add("one");
        add("two");
        add("three");
    }};
}

class SubClass extends ParentClass {
    protected final static Set THE_SET = new Set() {{
        add("four");
        add("five");
        add("six");
        union(ParentClass.THE_SET); /* [1] */
    }};
}
```

All looks fine, except the line at [1] causes a null pointer exception. Presumably this means that the static initialiser in the subclass is being run before that of the parent class. This surprised me because I'd have thought it would run the static blocks in any new imports first, before running any in the instantiated subclass. Am I right in this assumption? Is there any way to control or work around this behaviour?

**Update:** Things are even stranger. I tried this instead (note the 'new ParentClass()' line):

```
class ParentClass {
    public ParentClass() {
        System.out.println(THE_SET);
    }

    protected final static Set THE_SET = new Set() {{
        add("one");
        add("two");
        add("three");
    }};
}

class SubClass extends ParentClass {
    protected final static Set THE_SET = new Set() {{
        System.out.println("a");
        new ParentClass();
        System.out.println("b");
        add("four");
        System.out.println("c");
        add("five");
        System.out.println("d");
        add("six");
        System.out.println("e");
        union(ParentClass.THE_SET); /* [1] */
        System.out.println("f");
    }};
}
```

And the output is strange:

```
a
["one", "two", "three"]
b
c
d
e
Exception in thread "main" java.lang.ExceptionInInitializerError
Caused by: java.lang.NullPointerException
```

So ParentClass is initialised, but the subclass doesn't have access to it in its static initializer.
Is this what you are trying to accomplish? Or do you need a local implementation of the Set interface?

```
class ParentClass {
    protected final static Set THE_SET;
    static {
        THE_SET = new HashSet();
        THE_SET.add("one");
        THE_SET.add("two");
        THE_SET.add("three");
    }
}

class SubClass extends ParentClass {
    protected final static Set THE_SECOND_SET;
    static {
        THE_SECOND_SET = new HashSet();
        THE_SECOND_SET.add("four");
        THE_SECOND_SET.add("five");
        THE_SECOND_SET.add("six");
        union(ParentClass.THE_SET); /* [1] */
    }
}
```
There is no guarantee for static initializer order among classes. Within a class, they run in the order of the source code. If you think about it, there really couldn't be an order *among* classes, because you don't control when the classes are loaded either; you might dynamically load a class, or the JVM might optimize the load order.
Can I guarantee the order in which static initializers are run in Java?
[ "", "java", "inheritance", "static", "initialization", "static-initializer", "" ]
What is the quickest way to determine which members of an enum are not being used?
If you're using [ReSharper](http://www.jetbrains.com/resharper/), click on the enum to check, hit Alt+F7 (Shift+F12 if you're using VS shortcuts), and it'll give you a list of every place it's used in your entire solution.
Comment the enum members out one by one and see if your code compiles. If compilation breaks, the member is used. Note, though, that this method only catches compile-time use of your enums; they could still be used at runtime.
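If neither an IDE inspection nor comment-out-and-recompile appeals, a crude textual scan can approximate the compile-time check. This is a hypothetical helper, not a real tool, and like the comment-out trick it misses reflection and other runtime use:

```python
import re

def unused_members(enum_members, source_text):
    """Report members whose name never appears as a whole word in the
    given source text (the enum declaration itself should be excluded)."""
    return [m for m in enum_members
            if not re.search(r"\b{}\b".format(re.escape(m)), source_text)]

source = "var s = Status.Active; if (s == Status.Closed) { }"
print(unused_members(["Active", "Closed", "Archived"], source))  # -> ['Archived']
```

In practice you would feed it the concatenated project sources minus the enum definition file; a compiler-backed tool is still the reliable option.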
Finding unused enum members
[ "", "c#", "enums", "" ]
I am currently using a third-party component to handle telnet connections in .NET. I want it to be synchronous, where I send a command to the receiving telnet server and then get the response back as text or a byte array. The only problem is that the component is not set up to do that. The component allows me to send commands to the server, but the response is returned via a function handler. So in essence, I need a way to pause the application while the handler does its processing. Here is an example of how I plan to get around that issue:

```
static void Main(string[] args)
{
    Telnet telCon = new Telnet();
    telCon.OnDataIn += new Telnet.OnDataInHandler(HandleDataIn);
    telCon.Connect(remoteHostStr);
    while (true) ;
}

public static void HandleDataIn(object sender, TelnetDataInEventArgs e)
{
    string responseStr = e.Text;
    if (responseStr.Contains("Username:"))
    {
        ((Telnet)sender).Send(System.Text.ASCIIEncoding.ASCII.GetBytes(username));
    }
    else if (responseStr.Contains("Password:"))
    {
        ((Telnet)sender).Send(System.Text.ASCIIEncoding.ASCII.GetBytes(password));
    }
}
```

The solution above will not work since the `while` will always run, but I will probably build a future version that uses some sort of global variable to track whether the loop still needs to run. However, everything I have been taught about programming says this is very dirty. Can anyone think of another way around my dilemma? Thanks, Chris
Here is an example of using a ManualResetEvent to suspend execution (and delay program end) until your event handler says it's finished.

```
static ManualResetEvent finishGate;

static void Main(string[] args)
{
    finishGate = new ManualResetEvent(false); // initial state unsignaled
    Telnet telCon = new Telnet();
    telCon.OnDataIn += new Telnet.OnDataInHandler(HandleDataIn);
    telCon.Connect(remoteHostStr);
    finishGate.WaitOne(); // waits until the gate is signaled
}

public static void HandleDataIn(object sender, TelnetDataInEventArgs e)
{
    // handle event
    if (processingComplete)
        finishGate.Set(); // signals the gate
}
```

The WaitOne() method of ManualResetEvent also has overloads that accept a timespan or number of milliseconds. It returns bool: true if it was signaled, false if it timed out. If you put that in a loop, you could have your main thread wake up every 30 seconds and perform some housekeeping tasks, but still have an instantaneous response when the gate is signaled.
Your while loop: ``` while(true) ; ``` will drive CPU usage to 100% (well, 100% of 1 core on a multicore machine) and leave it there, permanently. This will starve other processes of CPU power, and may prevent the Telnet component from working *at all* because you've bypassed the message pump. There are better ways, but without more information on what you're doing, it will be hard to advise you. To begin, do you want a WindowsForms/WPF/Console application? [And *please*, use comments to answer, not Answers.]
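The gate idea is not specific to .NET; Python's `threading.Event` shows the same pattern and, unlike the `while (true);` loop, blocks without burning CPU (an analogy for comparison, not the telnet component's API):

```python
import threading
import time

finish_gate = threading.Event()
results = []

def handler():
    """Stands in for the OnDataIn callback."""
    time.sleep(0.1)  # pretend the telnet exchange takes a while
    results.append("processing complete")
    finish_gate.set()  # signal the gate

threading.Thread(target=handler).start()

# The main thread parks here instead of spinning at 100% CPU.
signalled = finish_gate.wait(timeout=5)  # True if set, False on timeout
assert signalled and results == ["processing complete"]
```

As with `WaitOne`, the timeout form lets the waiting thread wake periodically for housekeeping while still reacting instantly when the gate is set.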
Creating a Loop to Pause a Script While a Callback Function Operates
[ "", "c#", ".net", "event-handling", "" ]
I want to disable the hover event on a particular list when another event occurs.
You can use the [`unbind`](http://docs.jquery.com/Events/unbind) function to remove those events. ``` $('#theListId').unbind('mouseenter').unbind('mouseleave'); ```
You could also use: ``` $('#item').unbind('mouseenter mouseleave'); ``` Which is same as: ``` $('#theListId').unbind('mouseenter').unbind('mouseleave'); ```
How do I disable a jQuery function after some other event?
[ "", "javascript", "jquery", "jquery-events", "" ]
I'm a bit stuck with my code:

```
def setVenueImage(img):
    img = images.Image(img.read())
    x, y = photo_utils.getIdealResolution(img.width, img.height)
    img.resize(x, y)
    img.execute_transforms()
    venue_obj = getVenueSingletonObject()
    if venue_obj is None:
        venue_obj = Venue(images = [img])
    else:
        venue_obj.images.append(img)
    db.put(venue_obj)
```

I'm using Django with App Engine - so img.read() works fine. In fact all of this code works fine up until I try to store img into the database. My model expects a Blob, so when I put in the image as img, it throws a fit, and I get:

> BadValueError at /admin/venue/ Items in the images list must all be Blob instances

Ok, so an Image must not be a Blob, but then how do I make it a Blob? Blobs take in a byte string, but how do I make my image a byte string? I haven't seen anywhere in the docs where they *actually* use Image objects, so I'm not sure how this is all supposed to work, but I do want to use Image objects to resize my image (I know you can do it in PIL, but I'd like to know how to do it with Google's Image class). Thanks for any pointers :)
This will probably work:

```
def setVenueImage(img):
    img = images.Image(img.read())
    x, y = photo_utils.getIdealResolution(img.width, img.height)
    img.resize(x, y)
    img_bytes = img.execute_transforms() # Converts to PNG
    venue_obj = getVenueSingletonObject()
    if venue_obj is None:
        venue_obj = Venue(images = [img_bytes])
    else:
        venue_obj.images.append(img_bytes)
    db.put(venue_obj)
```

I'm assuming that Venue.images is a ListProperty(db.Blob), correct? This is probably the wrong thing to do. Define a VenueImage model with a simple blob property and store its key in the Venue. If you put the images in there directly you'll hit the 1MB row limit on the datastore.
I'm not happy with this solution as it doesn't convert an Image object to a blob, but it will do for the time being:

```
def setVenueImage(img):
    original = img.read()
    img = images.Image(original)
    x, y = photo_utils.getIdealResolution(img.width, img.height)
    img = images.resize(original, x, y)
    venue_obj = getVenueSingletonObject()
    if venue_obj is None:
        venue_obj = Venue(images = [db.Blob(img)])
    else:
        venue_obj.images.append(db.Blob(img))
    db.put(venue_obj)
```
How do you store an app engine Image object in the db?
[ "", "python", "google-app-engine", "image", "" ]
What's the best practice to store database credentials in a CMS? Now I declare them in my Database singleton class: ``` $this->credentials = array("hostname"=>"hostname", "username"=>"username","password"=>"password", "database"=>"database"); ``` But it's not so intuitive to search where to change them and also I'm planning to make install file for cms later. Where and how do you store your connection preferences?
You can use a singleton class, as you mentioned, or something simpler. In all my config.inc.php files I have a standard associative array:

```
$config['Main_Database'] = '';
$config['db_user'] = '';
$config['db_pass'] = '';
$config['db_host'] = '';
```

The concept is the same and you're on the right track. Make it something that, as a human, makes sense to you. If someone has access to your server you're screwed anyway, so it's not a big deal in terms of what is more secure. As for the install file, I've seen many apps open the config file, adjust a few specific parts via the code and then actually re-write the file back to the server (rather than "store a setting"). It achieves the same result but done through a wizard as opposed to manually.
CakePHP uses a config file called database.php (in /app/config/), in which a DATABASE\_CONFIG class is declared:

```
class DATABASE_CONFIG {

    var $default = array(
        'driver' => 'mysql',
        'persistent' => false,
        'host' => 'localhost',
        'login' => 'user',
        'password' => 'password',
        'database' => 'database_name',
        'prefix' => '',
    );

    var $test = array(
        'driver' => 'mysql',
        'persistent' => false,
        'host' => 'localhost',
        'login' => 'user',
        'password' => 'password',
        'database' => 'test_database_name',
        'prefix' => '',
    );
}
```

This creates a single place where the user can set all their database configurations while the intuitive directory structure makes it obvious where database configurations are stored. And you can also specify multiple configurations for production, development, and testing purposes, which are easy to switch between.
How do you store database credentials in a typical CMS built on PHP?
[ "", "php", "content-management-system", "database-permissions", "" ]
Can someone please explain how to deal with out-of-order packets? I'm using a raw socket to capture packets, and parse them as they come, but some of them come in the wrong order, for example:

```
Id...........Flags
16390 : (PSH, ACK)
16535 : (PSH, ACK)
16638 : (ACK)
16640 : (PSH, ACK)
16639 : (ACK)
16695 : (PSH, ACK)
```

Packets with IDs 16390, 16535, 16695 are separate packets and can be processed freely. Packets with IDs 16638, 16640, 16639 are a sequence of packets and should be put in ascending order before parsing. To make it worse, packets with the PSH flag sometimes come first, so I just pass them along to the parser, and then the packet that precedes them comes and the parser just discards it as corrupted. Is there any way to deal with it?
TCP guarantees order, so I will just assume you are talking about IP. One thing you could try is putting the packets in a min-heap and then waiting until the next packet ID number you want is available. As for the PSH packets, those are supposed to be received as soon as possible without a restriction on ordering, so it's up to you to decide how long you want to wait to see if you'll receive an earlier PSH packet.
TCP segments will not be out of order because the next one will not be sent until you ACK the previous one.

> TCP numbers the segments that it sends to a particular destination port sequentially, so that if they arrive out of order, the TCP entity can reorder them.

This happens on a transport layer below TCP so any TCP connections would never "see" this happen. In terms of TCP they are always in order. So if you see them out of order then you are not working on the TCP transport layer; you're at a lower level. Also, FYI...

* TCP data is a "segment"
* IP data is a "datagram"
* Network-level is a "packet"

**Edit:** The link you provided will give you a stream of IP datagrams, so you would have to handle the TCP stream on your own. I'm not going to pretend like it's easy and try to explain that here.
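The min-heap buffering suggested in the first answer can be sketched as follows, in Python for brevity. It assumes ids within a run are consecutive, which matches the 16638/16639/16640 run in the question; the standalone packets would be handled separately, and this is an illustration of the idea rather than the questioner's capture code:

```python
import heapq

def release_in_order(packet_ids, first_expected):
    """Buffer out-of-order ids in a min-heap; release runs that are
    contiguous with the next expected id."""
    heap, released, expected = [], [], first_expected
    for pid in packet_ids:
        heapq.heappush(heap, pid)
        # Drain the heap while its smallest id is the one we are waiting for.
        while heap and heap[0] == expected:
            released.append(heapq.heappop(heap))
            expected += 1
    return released

print(release_in_order([16638, 16640, 16639], 16638))  # -> [16638, 16639, 16640]
```

A real implementation would also need a timeout for ids that never arrive, as the first answer notes for early PSH packets.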
C#: How to deal with out-of-order TCP packets?
[ "", "c#", "sockets", "" ]
Right now I'm having trouble with `Guid`s. I certainly remember that throughout the code this implicit conversion works in some places and not in others. Until now I have failed to see the pattern. How does the compiler decide when it cannot convert? I mean, the type method `Guid.ToString()` is present; isn't it called whenever this transformation is needed? Can someone please tell me under what circumstances this transformation is done automatically and when I have to call `myInstance.ToString()` explicitly?
In short, when there is an implicit or explicit conversion operator defined:

```
class WithImplicit {
    public static implicit operator string(WithImplicit x) {
        return x.ToString();
    }
}
class WithExplicit {
    public static explicit operator string(WithExplicit x) {
        return x.ToString();
    }
}
class WithNone { }

class Program {
    static void Main() {
        var imp = new WithImplicit();
        var exp = new WithExplicit();
        var none = new WithNone();
        string s1 = imp;
        string s2 = (string)exp;
        string s3 = none.ToString();
    }
}
```
The only place where you effectively don't need to call ToString() yourself is when concatenating strings:

```
Guid g;
int i;
string s = "Hello " + g + ' ' + i;
```

Then there are some situations where the call is made by the .NET Framework, such as in [String.Format()](http://msdn.microsoft.com/en-us/library/system.string.format.aspx). Other than that, the compiler will only convert a type if it is known to be compatible (e.g. base class or implementing an interface or via an explicitly coded conversion operator). When you use a cast and the compiler knows that the types cannot be compatible (e.g. not in the same inheritance line, and not interfaces), it will also say that it cannot convert it. The same goes for generic type parameters.
Cannot implicitly convert type 'X' to 'string' - when and how it decides that it "cannot"?
[ "", "c#", "type-conversion", "tostring", "" ]
I would like to authenticate usernames and passwords for my application on a Windows operating system with any directory service. For example it could be Microsoft Active Directory, Novell eDirectory, or SunOne. I already know how to do this natively for Microsoft Active Directory with C#. (I totally gave up on using ADSI and creating a low-level COM component.) The way I'm attempting to authenticate with Novell eDirectory is that I have installed the Mono project. Inside the Mono project they provide you with Novell.Directory.ldap.dll. The code looks somewhat the same as for Microsoft Active Directory. (<http://www.novell.com/coolsolutions/feature/11204.html>) For SunOne, I have been told to use the same code as Active Directory, but the LDAP connection string is a little different. (<http://forums.asp.net/t/354314.aspx>) (<http://technet.microsoft.com/en-us/library/cc720649.aspx>) To complicate my project, most customers use a "service account", which means I need to bind with an administrative username and password before I can authenticate a regular username and password. My question is in two parts. 1) From what I have explained above, is this the correct direction I should be going to authenticate against each individual directory service? 2) I feel that I do not need to do any of this code at all. I also feel the stipulation of using a service account is not important at all. If all I care about is authenticating a username and password on a Windows machine, why do I even need to use LDAP? I mean, think about it. When you log in to your machine in the morning, you do not have to provide a service account just to log in. I can easily authenticate a username and password at a DOS prompt by using the runas feature and I will be denied or not and could parse the text file. I'm sure there are other ways I could pass a username and password to the Windows operating system that I am on that will tell me if a username and password is valid for the domain that it is on.

Am I right? If so, what suggested ways do you guys have?

Michael Evanchik www.MikeEvanchik.com
All this can be done with System.DirectoryServices.Protocols. If you create an LdapConnection to the directory you can use the service account to bind with, and then make a subsequent bind to authenticate the credentials. The service account is generally used to limit access to the authentication mechanism of the server. This way no random person on the street can try to auth with your LDAP server. Also, do you expect that each user will provide their distinguished name when logging in? With Active Directory, only the sAMAccountName is required, yet other providers like eDirectory and SunONE require the distinguished name for authentication. To perform this type of authentication, you would need to use the service account that is provided to authenticate to the server, perform a search for a user with the given username, and grab that users distinguished name. You can then authenticate using that distinguished name and the password that was provided. This will work for all LDAP systems, with the exception of Active Directory which will be happy with just the sAMAccountName.
I'm not sure I entirely understand the question, but in some situations I've found it easy to authenticate a user by simply doing a search for their account and using their credentials as the username and password. A successful query means everything provided was correct; not finding the account means something was wrong.

```
//use the user's credentials for the query
DirectoryEntry root = new DirectoryEntry("LDAP://dc=domain,dc=com", loginUser, loginPassword);

//query for the username provided
DirectorySearcher searcher = new DirectorySearcher(root, "(sAMAccountName=" + loginUser + ")");

//a success means the password was right
bool success = false;
try
{
    searcher.FindOne();
    success = true;
}
catch
{
    success = false;
}
```

Probably not "best practice", but it might get around the issue you are having...
How to Authenticate LDAP in .NET
[ "", "c#", "ldap", "novell", "" ]
I am executing this query:

```
SELECT voterfile_county.Name, voterfile_precienct.PREC_ID, voterfile_precienct.Name,
    COUNT((SELECT voterfile_voter.ID
           FROM voterfile_voter
           JOIN voterfile_household
           WHERE voterfile_voter.House_ID = voterfile_household.ID
             AND voterfile_household.Precnum = voterfile_precienct.PREC_ID)) AS Voters
FROM voterfile_precienct
JOIN voterfile_county
WHERE voterfile_precienct.County_ID = voterfile_County.ID;
```

I am trying to make it return something like this:

```
County_Name    Prec_ID    Prec_Name    Voters (count of # of voters in that precienct)
```

However, I am getting the error:

> #1242 - Subquery returns more than 1 row.

I have tried placing the `COUNT` statement in the subquery but I get an invalid syntax error.
You can try it without the subquery, with a simple GROUP BY:

```
SELECT voterfile_county.Name, voterfile_precienct.PREC_ID, voterfile_precienct.Name,
       count(voterfile_voter.ID)
FROM voterfile_county
JOIN voterfile_precienct ON voterfile_precienct.County_ID = voterfile_County.ID
JOIN voterfile_household ON voterfile_household.Precnum = voterfile_precienct.PREC_ID
JOIN voterfile_voter ON voterfile_voter.House_ID = voterfile_household.ID
GROUP BY voterfile_county.Name, voterfile_precienct.PREC_ID, voterfile_precienct.Name
```

When you use GROUP BY, any column that you are not grouping on must have an aggregate clause (e.g. SUM or COUNT). So in this case you have to group on the county name, precienct id and precienct name.
If you get error no. 1242, "Subquery returns more than one row", try putting ANY before your subquery. E.g. this query returns the error:

```
SELECT * FROM t1 WHERE column1 = (SELECT column1 FROM t2);
```

This is a good query:

```
SELECT * FROM t1 WHERE column1 = ANY (SELECT column1 FROM t2);
```
MySQL subquery returns more than one row
[ "", "sql", "mysql", "" ]
Say you have an object which, for the sake of example, we will call the ScoreHotChicksEngine. And say that the ScoreHotChicksEngine's constructor is expecting to be passed an IDataReader containing property values pertaining to, apparently, Scoring Hot Chicks for Lonely Geeks.

```
ScoreChicksEngine(IDataReader reader);
```

Ok, here's what I would like to gather input on... As a developer, would you find it more useful to assume that the reader must be read before being passed into the ScoreChicksEngine

```
IDataReader reader = command.ExecuteReader();
reader.Read();
ScoreChicksEngine SCE = new ScoreChicksEngine(reader);
```

or would you assume that the engine itself would call that function and possibly deal with the empty values?

```
IDataReader reader = command.ExecuteReader();
ScoreChicksEngine SCE = new ScoreChicksEngine(reader);
if (SCE.HasReaderData()) doSomething();
```
I'd choose the first method. The second method violates single responsibility principle. I'd also declare the input parameter of the constructor as `IDataRecord` and not `IDataReader`. Basically the SCE class constructs itself based on a single record and doesn't care about a set of records out there.
Shouldn't you be thinking of decoupling the data collection from the algorithm and use a bridging solution in between (an iterator adapter, say)? Just my $0.02. The implication of such a design is that the algorithm is responsible for making calls to read the data on an as-required basis via the adapter. The adapter hides the collection and any particular facets thereof not germane to the problem being solved.
Completely Arbitrary C# Question
[ "", "c#", "architecture", "" ]
I'm looking at making a logging class which has members like Info, Error etc. that can configurably output to the console, to a file, or to nowhere. For efficiency, I would like to avoid the overhead of formatting messages that are going to be thrown away (i.e. info messages when not running in verbose mode). If I implement a custom std::streambuf that outputs to nowhere, I imagine that the std::ostream layer will still do all the formatting. Can anyone suggest a way to have a truly "null" std::ostream that avoids doing any work at all on the parameters passed to it with `<<`?
To prevent the `operator<<()` invocations from doing formatting, you should know the stream type at compile time. This can be done either with macros or with templates. My template solution follows.

```
class NullStream {
public:
    void open(const char*) { /* no-op */ }
    template<typename TPrintable>
    NullStream& operator<<(TPrintable const&) { return *this; } /* no-op */
};

template<class TErrorStream> // add TInfoStream etc
class Logger {
public:
    TErrorStream& errorStream() { return m_errorStream; }
private:
    TErrorStream m_errorStream;
};

// usage
int main() {
    Logger<std::ofstream> normal_logger; // does real output
    normal_logger.errorStream().open("out.txt");
    normal_logger.errorStream() << "My age is " << 19;

    Logger<NullStream> null_logger; // does zero output with zero overhead
    null_logger.errorStream().open("out.txt"); // no-op
    null_logger.errorStream() << "My age is " << 19; // no-op
}
```

Since you have to do this at compile-time, it is of course quite inflexible. For example, you cannot decide the logging level at runtime from a configuration file.
A swift google came up with this example which may be of use. I offer no guarantees, except that it compiles and runs :-) ``` #include <streambuf> #include <ostream> template <class cT, class traits = std::char_traits<cT> > class basic_nullbuf: public std::basic_streambuf<cT, traits> { typename traits::int_type overflow(typename traits::int_type c) { return traits::not_eof(c); // indicate success } }; template <class cT, class traits = std::char_traits<cT> > class basic_onullstream: public std::basic_ostream<cT, traits> { public: basic_onullstream(): std::basic_ios<cT, traits>(&m_sbuf), std::basic_ostream<cT, traits>(&m_sbuf) { this->init(&m_sbuf); // this-> is needed: init comes from a dependent base } private: basic_nullbuf<cT, traits> m_sbuf; }; typedef basic_onullstream<char> onullstream; typedef basic_onullstream<wchar_t> wonullstream; int main() { onullstream os; os << 666; } ```
Implementing a no-op std::ostream
[ "", "c++", "logging", "debugging", "" ]
I am trying to create a very simple Swing UI that logs information onto the screen via a JTextArea as processing occurs in the background. When the user clicks a button, I want each call to: ``` textArea.append(someString + "\n"); ``` to immediately show up in the UI. At the moment, the JTextArea does not show all log information until the processing has completed after clicking the button. How can I get it to refresh dynamically?
I ran into the same issue with my application. I had a "Run" button in my application that performed some actions and outputted the results to a JTextArea. I had to call the method from a Thread. Here is what I did. I have several radio buttons of actions that can be done, and then one "Run" button to execute that particular action. I have an action called Validate. So when I check that radio button and click the "Run" button, it calls the method validate(). So I first placed this method into an inner class that implemented Runnable ``` class ValidateThread implements Runnable { public void run() { validate(); } } ``` I then called this thread in the ActionListener of the "Run" button like so ``` runButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent ae) { // Some code checked on some radio buttons if(radioButton.isSelected()) { if(radioButton.getText().equals("VALIDATE")) { Runnable runnable = new ValidateThread(); Thread thread = new Thread(runnable); thread.start(); } } } }); ``` Voila! The output is now sent to the JTextArea. Now, you will notice that the JTextArea will not scroll down with the text. So you need to set the caret position, like ``` textArea.setCaretPosition(textArea.getText().length() - 1); ``` Now when the data is added to the JTextArea, it will always scroll down.
Try this: ``` jTextArea.update(jTextArea.getGraphics()); ```
Dynamically refresh JTextArea as processing occurs?
[ "", "java", "multithreading", "swing", "" ]
What are the best resources for a C#/.NET 2.0 developer for learning .NET 3.5? I'm struggling to learn ASP.NET MVC and I feel that a lot of my stumbling blocks have to do with not ever having explicitly studied 3.5. Note: I know there are already these questions, but they both seem to focus on ASP.NET * [Learning C#, ASP.NET 3.5 - what order should I learn in / what to skip?](https://stackoverflow.com/questions/282086/learning-c-asp-net-3-5-what-order-should-i-learn-in-what-to-skip) * <https://stackoverflow.com/questions/276229/best-book-on-aspnet-35-c-programming>
You could try browsing MSDN from [What's New in the .NET Framework Version 3.5](http://msdn.microsoft.com/en-us/library/bb332048.aspx).
MVC has really changed the game on how everything works if you come from traditional ASP.NET Web Forms. I would recommend the following books: * [ASP.NET MVC 1.0 Website Programming](https://rads.stackoverflow.com/amzn/click/com/0470410957) * [Professional ASP.NET MVC 1.0](https://rads.stackoverflow.com/amzn/click/com/0470384611) * [ASP.NET 3.5 Website Programming](https://rads.stackoverflow.com/amzn/click/com/0470187581)
Resources to learn .NET 3.5
[ "", "c#", ".net", ".net-3.5", "c#-3.0", "" ]
I have a set of controls with attached commands and logic that are constantly reused in the same way. I decided to create a user control that holds all the common controls and logic. However I also need the control to be able to hold content that can be named. I tried the following: ``` <UserControl.ContentTemplate> <DataTemplate> <Button>a reused button</Button> <ContentPresenter Content="{TemplateBinding Content}"/> <Button>a reused button</Button> </DataTemplate> </UserControl.ContentTemplate> ``` However it seems any content placed inside the user control cannot be named. For example if I use the control in the following way: ``` <lib:UserControl1> <Button Name="buttonName">content</Button> </lib:UserControl1> ``` I receive the following error: > Cannot set Name attribute value 'buttonName' > on element 'Button'. 'Button' is > under the scope of element > 'UserControl1', which already had a > name registered when it was defined in > another scope. If I remove the buttonName, then it compiles, however I need to be able to name the content. How can I achieve this?
It seems this is not possible when XAML is used. Custom controls seem to be overkill when I actually have all the controls I need, but just need to group them together with a small bit of logic and allow named content. The solution on [JD's blog](http://blog.bluecog.co.nz/archives/2007/08/27/wpf-cannot-set-name-attribute/) as mackenir suggests, seems to have the best compromise. A way to extend JD's solution to allow controls to still be defined in XAML could be as follows: ``` protected override void OnInitialized(EventArgs e) { base.OnInitialized(e); var grid = new Grid(); var content = new ContentPresenter { Content = Content }; var userControl = new UserControlDefinedInXAML(); userControl.aStackPanel.Children.Add(content); grid.Children.Add(userControl); Content = grid; } ``` In my example above I have created a user control called UserControlDefinedInXAML which is defined like any normal user control, using XAML. In my UserControlDefinedInXAML I have a StackPanel called aStackPanel within which I want my named content to appear.
The answer is to not use a UserControl to do it. Create a class that extends **ContentControl** ``` public class MyFunkyControl : ContentControl { public static readonly DependencyProperty HeadingProperty = DependencyProperty.Register("Heading", typeof(string), typeof(MyFunkyControl), new PropertyMetadata(HeadingChanged)); private static void HeadingChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { ((MyFunkyControl) d).Heading = e.NewValue as string; } public string Heading { get; set; } } ``` then use a style to specify the contents ``` <Style TargetType="control:MyFunkyControl"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="control:MyFunkyControl"> <Grid> <ContentControl Content="{TemplateBinding Content}"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> ``` and finally - use it ``` <control:MyFunkyControl Heading="Some heading!"> <Label Name="WithAName">Some cool content</Label> </control:MyFunkyControl> ```
How to create a WPF UserControl with NAMED content
[ "", "c#", "wpf", "xaml", "user-controls", "controls", "" ]
I have 2 columns: "string" "int" I set both as primary keys for unique combination. Would searching based on the string column be poorer than normalizing further the string column and having 2 int columns instead?
Do you expect to have lots of duplicate strings? If no, then normalizing will yield nothing. If yes, then indexing becomes less effective the more duplication you have. (because all items with the same index must be retrieved) If your database engine supports it, you could simply set the string as a clustered index, in which case similar strings will be stored next to each other. And if you already have an index on both columns, and are comparing both of them, you are receiving the full benefit of the index.
If you actually need the string value, then the answer is no. Comparing a string in the DB is less expensive than comparing an int, then looking up the string from the int. As for duplication of strings, the more duplication you have, then the less storage you would need if they were denormalized. It would still be slower though. If you're looking for performance over storage, then stick with the string,int PK.
Is normalization necessary on this 2 column table?
[ "", "sql", "database", "database-design", "" ]
I'm a fairly new C++ programmer and I would like to hear the arguments for and against naming parameters within the class declaration. --- Here's an example: *Student.h* ``` #ifndef STUDENT_H_ #define STUDENT_H_ #include <string> using namespace std; class Student { private: string name; unsigned int age; float height, GPA; public: Student(string, unsigned int, float, float); void setAge(unsigned int); }; #endif /*STUDENT_H_*/ ``` vs. ``` #ifndef STUDENT_H_ #define STUDENT_H_ #include <string> class Student { private: string name; unsigned int age; float height, GPA; public: Student(string name, unsigned int age, float height, float GPA); void setAge(unsigned int age); }; #endif /*STUDENT_H_*/ ``` *Student.cpp* ``` #include "Student.h" Student::Student( string name, unsigned int age, float height, float GPA) : name(name), age(age), height(height), GPA(GPA) {} void Student::setAge(unsigned int age) { this -> age = age; } ``` --- I cannot decide. On the one hand, I feel that it is redundant to name the variables in both the declaration (*.h*) and the definition (*.cpp*). Especially since you have to worry about updating the names in both places so that they match. On the other hand, without names, it can often be confusing to determine what variables the parameters correspond to just by looking at the declaration. So, what are your thoughts?
It is much better to use the parameter names in the declaration, and use good parameter names. This way, they serve as function documentation. Otherwise, you will have to write additional comments in your header, and it is always better to use good parameter/variable names than to use comments. Exception: when a function must have a certain signature for external reasons, but the parameters are not actually used. In this case, you should not name them in the implementation either.
Put the names in both places, clarity is the reward you get for the task of maintaining the signatures in two places.
C++ Style Convention: Parameter Names within Class Declaration
[ "", "c++", "coding-style", "parameters", "naming-conventions", "" ]
I recently created an interface layer to distinguish the DataAccessProvider from our Business logic layer. With this approach we can change our choice of DataAccessProvider whenever we want by changing the values in the Web/App.Config. (more details can be given if needed). Anyway, to do this we use reflection to accomplish our DataProvider class on which we can work. ``` /// <summary> /// The constructor will create a new provider with the use of reflection. /// If the assembly could not be loaded an AssemblyNotFoundException will be thrown. /// </summary> public DataAccessProviderFactory() { string providerName = ConfigurationManager.AppSettings["DataProvider"]; string providerFactoryName = ConfigurationManager.AppSettings["DataProviderFactory"]; try { activeProvider = Assembly.Load(providerName); activeDataProviderFactory = (IDataProviderFactory)activeProvider.CreateInstance(providerFactoryName); } catch { throw new AssemblyNotFoundException(); } } ``` But now I'm wondering how slow reflection is?
In most cases: more than fast enough. For example, if you are using this to create a DAL wrapper object, the time taken to create the object via reflection will be **minuscule** compared to the time it needs to connect to a network. So optimising this would be a waste of time. If you are using reflection in a tight loop, there are tricks to improve it: * generics (using a wrapper `where T : new()` and `MakeGenericType`) * `Delegate.CreateDelegate` (to a typed delegate; doesn't work for constructors) * `Reflection.Emit` - hardcore * `Expression` (like `Delegate.CreateDelegate`, but more flexible, and works for constructors) But for your purposes, `CreateInstance` is perfectly fine. Stick with that, and keep things simple. --- Edit: while the point about relative performance remains, and while the most important thing, "measure it", remains, I should clarify some of the above. Sometimes... it **does** matter. Measure first. However, if you find it *is* too slow, you might want to look at something like [FastMember](http://nuget.org/packages/FastMember), which does all the `Reflection.Emit` code quietly in the background, to give you a nice easy API; for example: ``` var accessor = TypeAccessor.Create(type); List<object> results = new List<object>(); foreach(var row in rows) { object obj = accessor.CreateNew(); foreach(var col in cols) { accessor[obj, col.Name] = col.Value; } results.Add(obj); } ``` which is simple, but will be very fast. In the specific example I mention about a DAL wrapper—if you are doing this lots, consider something like [dapper](http://nuget.org/packages/dapper), which again does all the `Reflection.Emit` code in the background to give you the fastest possible but easy to use API: ``` int id = 12345; var orders = connection.Query<Order>( "select top 10 * from Orders where CustomerId = @id order by Id desc", new { id }).ToList(); ```
It's slower compared to non-reflective code. The important thing is not whether it's slow, but whether it's slow **where it counts**. For instance, if you instantiate objects using reflection in a web environment where expected concurrency can rise up to 10K, it will be slow. Anyway, it's good not to be concerned about performance in advance. If things turn out to be slow, you can always speed them up, provided you designed things correctly so that the parts you expect might need optimisation in the future are localised. You can check this famous article if you need a speed-up: [Dynamic... But Fast: The Tale of Three Monkeys, A Wolf and the DynamicMethod and ILGenerator Classes](http://www.codeproject.com/Articles/19513/Dynamic-But-Fast-The-Tale-of-Three-Monkeys-A-Wolf)
How slow is Reflection
[ "", "c#", "performance", "reflection", "assemblies", "" ]
I am pulling a value via JavaScript from a textbox. If the textbox is empty, it returns `NaN`. I want to return an empty string if it's null, empty, etc. What check do I do? `if(NAN = tb.value)`?
Hm, something is fishy here. In what browser does an empty textbox return NaN? I've never seen that happen, and I cannot reproduce it. The value of a text box is, in fact a string. An empty text box returns an empty string! Oh, and to check if something is NaN, you should use: ``` if (isNaN(tb.value)) { ... } ``` Note: The `isNaN()`-function returns `true` for anything that cannot be parsed as a number, except for empty strings. That means it's a good check for numeric input (much easier than regexes): ``` if (tb.value != "" && !isNaN(tb.value)) { // It's a number numValue = parseFloat(tb.value); } ```
You can also do it this way: ``` var number = +input.value; if (input.value === "" || number != number) { // not a number } ``` NaN is equal to nothing, not even itself. If you don't like using + to convert from String to Number, use the normal parseInt, but remember to always give a base ``` var number = parseInt(input.value, 10) ``` otherwise "08" becomes 0 because JavaScript thinks it's an octal number.
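Incidentally, "NaN is equal to nothing, not even itself" is IEEE-754 behaviour rather than a JavaScript quirk, so the `number != number` test carries over to other languages. For illustration, in Python:

```python
import math

nan = float("nan")

# NaN is the only value that compares unequal to itself, which is why
# `x != x` is a valid (if cryptic) NaN test in any IEEE-754 language.
is_nan_weird = (nan != nan)          # True: NaN != NaN
is_normal_sane = (42.0 == 42.0)      # True: ordinary numbers equal themselves

# Most languages also offer an explicit predicate, like isNaN() in JS:
explicit_check = math.isnan(nan)     # True
```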
Getting an integer value from a textbox, how to check if it's NaN or null etc?
[ "", "javascript", "numbers", "nan", "" ]
The following code fails with a 400 bad request exception. My network connection is good and I can go to the site but I cannot get this uri with HttpWebRequest. ``` private void button3_Click(object sender, EventArgs e) { WebRequest req = HttpWebRequest.Create(@"http://www.youtube.com/"); try { //returns a 400 bad request... Any ideas??? WebResponse response = req.GetResponse(); } catch (WebException ex) { Log(ex.Message); } } ```
First, cast the WebRequest to an HttpWebRequest like this: ``` HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(@"http://www.youtube.com/"); ``` Then, add this line of code: ``` req.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"; ```
Set **UserAgent** and **Referer** in your **HttpWebRequest**: ``` var request = (HttpWebRequest)WebRequest.Create(@"http://www.youtube.com/"); request.Referer = "http://www.youtube.com/"; // optional request.UserAgent = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; WOW64; " + "Trident/4.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; " + ".NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30618; " + "InfoPath.2; OfficeLiveConnector.1.3; OfficeLivePatch.0.0)"; try { var response = (HttpWebResponse)request.GetResponse(); using (var reader = new StreamReader(response.GetResponseStream())) { var html = reader.ReadToEnd(); } } catch (WebException ex) { Log(ex); } ```
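As an aside, the underlying fix is not .NET-specific: many sites answer 400 or 403 when a request carries no browser-like User-Agent. A hedged sketch of the same header fix in Python's `urllib.request` (the header value is just an example string, and no network call is made here):

```python
from urllib.request import Request

# Build the request with an explicit User-Agent header, analogous to
# setting HttpWebRequest.UserAgent in the C# answers above.
req = Request(
    "http://www.youtube.com/",
    headers={"User-Agent": "Mozilla/5.0 (compatible; demo)"},
)
```

Note that `urllib` normalizes header names with `str.capitalize()`, so the stored key is `User-agent`.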
Why Does my HttpWebRequest Return 400 Bad request?
[ "", "c#", "httpwebrequest", "" ]
Somewhere on the net, on a blog, I read a sentence along the lines of: "The DataGridView is something like a Boeing 777, but what good is it when I do not know how to fly?" Before I go deep into creating my projects, I want to know whether there is an alternative to the DataGridView in C#, something like what jQuery is to the web APIs. The qualities I am looking for are that it is simple to use, if possible freeware, and that it looks smooth and modern. Best regards Admir
If you want to fly one person across town you can learn to fly a Piper Cub. If you want to fly hundreds across an ocean you need to learn to fly a 777. There are many ways of displaying data in WinForms applications; which one is best for you depends on what you're trying to do. If you want to show a fixed, non-editable, non-scrollable list of data you can use a DataList; simple and limited. If you want to add scrolling you can output the data to a scrolling textbox. If you want to handle a scrolling grid of multiple rows with multiple, resizable, editable columns you will need to move to a DataGrid or DataGridView. There are many alternatives to DataGridViews; simpler controls provide fewer features and more functional controls are more complex.
I have implemented the datagridview extensively as an unbound control in a windows forms project using Visual Studio 2008. Looking back, implementing this control has consumed a large amount of time, mostly because it is loaded with bugs and peculiar behavior that should have been fixed rather than just 'documented'. A good data grid control is essential to any application of substance. I would recommend looking for a third party alternative from a company that is more interested in getting it right rather than just getting something out there.
alternative to DataGridView in Win API
[ "", "c#", "winforms", "winapi", "" ]
Is there some internal difference between the C# syntactic sugar way of making properties: ``` public string FirstName { get; set; } ``` and just making public variables like this: ``` public string LastName; ``` I assume the first way is preferred and the second to be avoided. However, I often see this type of readonly property being used which is a form of the second type above: ``` public readonly string InternalCode; ``` Is this a best-practice way to create readonly property? ``` using System; namespace TestProps { class Program { static void Main(string[] args) { Customer customer = new Customer(); customer.FirstName = "Jim"; customer.LastName = "Smith"; customer.Show(); } } class Customer { public string FirstName { get; set; } //prefered public string LastName; //avoid public readonly string InternalCode; //??? public Customer() { InternalCode = "234729834723984"; } public void Show() { Console.WriteLine("{0}, {1} ({2})", LastName, FirstName, InternalCode); Console.ReadLine(); } } } ```
Since he didn't answer (yet) and no one else referenced this yet: There is a great article on this topic by Jon Skeet amending his book *C# in depth* (give credits to Jon): > [**Why Properties Matter**](http://csharpindepth.com/Articles/Chapter8/PropertiesMatter.aspx)
Using a property provides an interface which is more resistant to change in the future. Let's say some time in the future, a decision is made to add a prefix to the internal code. Using a public readonly variable exposes your internal structure and you will have a hard time adding the prefix to every line you used the internal variable of the class. Using a Property, you can just write the following ``` public string InternalCode { get { return _prefix + _internalCode; } } ``` and you're done!
Is it OK to use a public variable in C# if it is readonly?
[ "", "c#", "properties", "" ]
Is it possible to overload ++ operators in Python?
There is no `++` operator in Python (nor '--'). Incrementing is usually done with the `+=` operator instead.
Nope, it is not possible to overload the unary ++ operator, because it is not an operator at all in Python. Only (a subset of) the operators that are allowed by the Python syntax (those operators that already have one or more uses in the language) may be overloaded. [These](http://docs.python.org/reference/lexical_analysis.html#operators) are valid Python operators, and [this page](http://docs.python.org/library/operator.html) lists the methods that you can define to overload them (the ones with two leading and trailing underscores). Instead of i++ as commonly used in other languages, in Python one writes i += 1. In Python the + sign needs an operand to its right. It *may* also have an operand to its left, in which case it will be interpreted as a binary instead of a unary operator. +5, ++5, ..., ++++++5 are all valid Python expressions (all evaluating to 5), as are 7 + 5, 7 ++ 5, ..., 7 ++++++++ 5 (all evaluating to 7 + (+...+5) = 12). 5+ is *not* valid Python. See also [this question](https://stackoverflow.com/questions/470139/why-does-12-3-in-python). **Alternative idea**: Depending on what you actually wanted to use the ++ operator for, you may want to consider overloading the [unary (prefix) plus operator](http://docs.python.org/reference/datamodel.html#object.__pos__). Note, though, that this may lead to some odd-looking code. Other people looking at your code would probably assume it's a no-op and be confused.
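To make the `i += 1` idiom concrete for user-defined types: augmented assignment *is* overloadable, via `__iadd__`, which is the closest Python gets to a customizable "increment". A minimal sketch (the `Counter` class here is purely illustrative and unrelated to `collections.Counter`):

```python
class Counter:
    """A toy counter whose += is overloaded via __iadd__."""

    def __init__(self, value=0):
        self.value = value

    def __iadd__(self, other):
        # Mutate in place and return self, as mutable types conventionally do;
        # Python then rebinds the name to the returned object.
        self.value += other
        return self

c = Counter()
c += 1   # the idiomatic replacement for c++
c += 1
```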
Is it possible to overload ++ operators in Python?
[ "", "python", "operator-overloading", "" ]
PHP's print\_r is useful for printing out arrays and dictionary collections. Does ASP.NET have this function built in?
What you are looking for is called "pretty printing". Google ".NET pprint" and you might have some luck. But really, use the debugger.
You can achieve it by JavaScriptSerializer ``` var json = new JavaScriptSerializer().Serialize(yourObject); Response.Write("yourObject:" + json + "<br/>"); ```
ASP.NET equivalent to PHP Print_r Function?
[ "", "php", "asp.net", "" ]
Is there a flag for gcc such that conversions from a long to a short will generate a warning about a possible loss of data? I'm working on a C++ application that is compiled for both Visual Studio (2005) and GCC 4.2 (for Mac OS X). The warnings that Visual Studio prints out follow this pattern: ``` : warning C4244: 'argument' : conversion from 'long' to 'short', possible loss of data ``` I've tried -Wconversion, but that isn't quite what I'm looking for. The only thing I've been able to find so far is an experimental flag, -Wcoercion, which is associated with GCC 4.3 (which I'm not sure if we want to invest in quite yet). **April 22, 2009 @ 11:00 EST Edit:**To clarify, I want to see that warning. We have code where we want to know when a data loss would occur. If I have the code: ``` unsigned long value1 = LONG_MAX; std::cout << "value1: " << value1 << std::endl; unsigned short value2 = value1; std::cout << "value2: " << value2 << std::endl; ``` I get this expected result: **value1: 2147483647   value2: 65535** In our code, we have special asserts put in place that perform the coercion and warn us if the executed-code would result in a loss of data. We found the places in our large code base using Visual Studio's warnings. Is there any way we can generate these warnings in gcc 4.2?
This feature is not supported in GCC 4.2, but it has been added in GCC 4.3. [Wiki page explaining it](http://gcc.gnu.org/wiki/NewWconversion). Thanks to schnaader and Evan Teran for providing the links that led me there.
Use -Wconversion. You seem to need this even if you already specify -Wall. It definitely works in gcc4.3. If it wasn't fixed by version 4.2, you'll have to upgrade to get it. Example warning: ``` warning: conversion to 'short int' from 'int' may alter its value ```
Is there a gcc 4.2 warning similar to Visual Studio's regarding possible loss of data?
[ "", "c++", "gcc", "visual-studio-2005", "" ]
If I have a string like: ``` <p>&nbsp;</p> <p></p> <p class="a"><br /></p> <p class="b">&nbsp;</p> <p>blah blah blah this is some real content</p> <p>&nbsp;</p> <p></p> <p class="a"><br /></p> ``` How can I turn it into just: ``` <p>blah blah blah this is some real content</p> ``` The regex needs to pick up `&nbsp;`s and spaces.
``` $result = preg_replace('#<p[^>]*>(\s|&nbsp;?)*</p>#', '', $input); ``` This doesn't catch literal nbsp characters in the output, but that's very rare to see. Since you're dealing with HTML, if this is user-input I might suggest using HTML Purifier, which will also deal with XSS vulnerabilities. The configuration setting you want there to remove empty p tags is %AutoFormat.RemoveEmpty.
This regex will work against your example: ``` <p[^>]*>(?:\s+|(?:&nbsp;)+|(?:<br\s*/?>)+)*</p> ```
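For illustration outside PHP, the same empty-paragraph stripping can be sketched with Python's `re` module (with the usual caveat that regexes over HTML are fragile, and a real HTML cleaner is safer):

```python
import re

html = (
    '<p>&nbsp;</p>\n<p></p>\n<p class="a"><br /></p>\n'
    '<p>blah blah blah this is some real content</p>\n<p>&nbsp;</p>'
)

# Remove <p> elements whose body is only whitespace, &nbsp; entities,
# or <br> tags -- mirroring the preg_replace pattern above.
pattern = r'<p[^>]*>(?:\s|&nbsp;|<br\s*/?>)*</p>\n?'
cleaned = re.sub(pattern, "", html)
```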
Remove useless paragraph tags from string
[ "", "php", "regex", "" ]
I'm creating a web site on my local computer, using SQL Server 2005 Management Studio. I need to copy all the data to a destination server. The destination server is also SQL Server 2005. My problems are: 1. when I use Import/Export Data from Management Studio, it only copies tables. 2. when I use backup and restore, the tables and stored procedures show up like this: myuser.aspnet\_application myuser.aspnet\_Membership ... etc. I need them to be created like this: dbo.aspnet\_application dbo.aspnet\_Membership How can I copy the stored procedures and views to the destination server?
If this is a one-time job you can script them all easily. Open SQL Management studio, and browse to the Stored Procedures node for your database. Open the Object Explorer if it isn't open already (click F7) and select all stored procedures you want to copy. Right click the list and select Script Stored Procedure as -> Drop and Create -> To new query window. This will give you a script that drops the procedures if they exist and then creates them. If you get the myuser schema or you get use [databasename] statements in your script you can turn these of by doing the following: Select Tools -> Options in the menus. Navigate to SQL Server Query Explorer -> Scripting and set the following to false: "Script USE " and "Schema qualify object names". The script you get can be run on your new database and should create all the stored procedures you need.
In SQL Server Management Studio navigate to your database. Right click it and select "Tasks" -> "Generate Scripts" "Next" Select your database from the list "Next" Select "Stored Procedures" "Next" "Select All" "Next" "Script to new Query Window" "Next" "Finish" Give it a while. Then when complete, at the very top of the script put "use (yourdatabase)" Execute the use statement. Execute the whole script.
how to transfer stored procedures between SQL Server 2005 databases
[ "", "sql", "sql-server-2005", "deployment", "" ]
My python project has a C++ component which is compiled and distributed as a .pyd file inside a Python egg. I've noticed that it seems to be incompatible with only some of our brand new 64 bit Windows servers. We have 4 (allegedly) identically provisioned machines - each of them runs Windows 2003 Server 64 bit edition, but 2 of these machines do not allow me to call functions in the egg. After some experimentation I was able to find a recipe for [producing a reproducible error](http://pastebin.com/m6d6aef7e). The problem seems to occur when Python tries to import the pyd file. I copied the pyd to a temp folder and ran Python.exe from that location; incidentally, we are still using the 32 bit edition of Python 2.4.4 since none of our libraries have been ported to the 64 bit architecture yet. Next I try to import my module (called pyccalyon). The first time I try this I get an error message: ``` "ImportError: DLL load failed: The specified module could not be found" ``` The next time I try this the Python interpreter crashes out: no stacktrace at all! Naturally you are suspecting my PYD - the odd thing about this is that it's already in use on thousands of PCs and tens of other servers, many of which are identically spec'd 64 bit machines. The project is continuously tested both in development and after release, so if this thing were so tinder-box unstable we'd have known about it a very long time ago. This component is considered to be stable code, so it's surprising that it's breaking so spectacularly. Any suggestions as to what I can do to debug this troublesome library? Crazy ideas welcome at this point because we've exhausted all the sensible ones. Thanks! **Update 0**: Okay, using Process Monitor I was able to compare one 64 bit server that fails with another that works just fine. I found that the breakage seems to occur due to a missing DLL, SysWOW64/mscoree.dll - any idea what this component is and where I can get it?
I can refer this back to our IT provisioning people who can install stuff.
You could try something like [Process Monitor](http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx), to watch what DLLs it tries to load. I'd assume that one of the other DLLs it relies on can't be found. Edit: It looks like you've already managed to get some useful info out of it, but I'll clarify how you could reduce the deluge of information that procmon produces. Use the filter function to specify the command line (in this case, require that the command line contains python). This will show you messages only from the process you're interested in. Then you can filter out all success results, so you can see which DLL it's looking for. Obviously there are lots of other things you can filter on, but this is how I've got results in the past. It's a really handy tool for working out what's going on in situations like this. (Tools like depends or DependencyWalker are also good for finding out what DLLs a library relies on - they give the static information while procmon will show you the dynamic view. Both of them can be useful.)
Have you tried checking which DLLs that PYD links? You can do that for example with either with [Dependency Walker](http://www.dependencywalker.com/) or VS's depends.exe.
Why does this pyd file not import on some computers?
[ "", "python", "windows", "" ]
I was reading 'Dive Into Python' and in the chapter on classes it gives this example: ``` class FileInfo(UserDict): "store file metadata" def __init__(self, filename=None): UserDict.__init__(self) self["name"] = filename ``` The author then says that if you want to override the `__init__` method, you must explicitly call the parent `__init__` with the correct parameters. 1. What if that `FileInfo` class had more than one ancestor class? * Do I have to explicitly call all of the ancestor classes' `__init__` methods? 2. Also, do I have to do this to any other method I want to override?
The book is a bit dated with respect to subclass-superclass calling. It's also a little dated with respect to subclassing built-in classes. It looks like this nowadays: ``` class FileInfo(dict): """store file metadata""" def __init__(self, filename=None): super(FileInfo, self).__init__() self["name"] = filename ``` Note the following: 1. We can directly subclass built-in classes, like `dict`, `list`, `tuple`, etc. 2. The `super` function handles tracking down this class's superclasses and calling functions in them appropriately.
In each class that you need to inherit from, you can run a loop over each class that needs to be initialized when the child class is instantiated. An example that can be copied might be better understood...

```
class Female_Grandparent:
    def __init__(self):
        self.grandma_name = 'Grandma'

class Male_Grandparent:
    def __init__(self):
        self.grandpa_name = 'Grandpa'

class Parent(Female_Grandparent, Male_Grandparent):
    def __init__(self):
        Female_Grandparent.__init__(self)
        Male_Grandparent.__init__(self)

        self.parent_name = 'Parent Class'

class Child(Parent):
    def __init__(self):
        Parent.__init__(self)
#---------------------------------------------------------------------------------------#
        for cls in Parent.__bases__: # This block grabs the classes of the child
             cls.__init__(self)      # class (which is named 'Parent' in this case),
                                     # and iterates through them, initiating each one.
                                     # The result is that each parent, of each child,
                                     # is automatically handled upon initiation of the
                                     # dependent class. WOOT WOOT! :D
#---------------------------------------------------------------------------------------#

g = Female_Grandparent()
print g.grandma_name

p = Parent()
print p.grandma_name

child = Child()
print child.grandma_name
```
Inheritance and Overriding __init__ in python
[ "", "python", "overriding", "superclass", "" ]
I've read the documentation on egg entry points in Pylons and on the Peak pages, and I still don't really understand. Could someone explain them to me?
An "entry point" is typically a function (or other callable function-like object) that a developer or user of your Python package might want to use, though a non-callable object can be supplied as an entry point as well (as correctly pointed out in the comments!). The most popular kind of entry point is the `console_scripts` entry point, which points to a function that you want made available as a command-line tool to whoever installs your package. This goes into your `setup.py` script like: ``` entry_points={ 'console_scripts': [ 'cursive = cursive.tools.cmd:cursive_command', ], }, ``` I have a package I've just deployed called `cursive.tools`, and I wanted it to make available a "cursive" command that someone could run from the command line, like: ``` $ cursive --help usage: cursive ... ``` The way to do this is define a function, like maybe a `cursive_command` function in the file `cursive/tools/cmd.py` that looks like: ``` def cursive_command(): args = sys.argv[1:] if len(args) < 1: print "usage: ..." ``` and so forth; it should assume that it's been called from the command line, parse the arguments that the user has provided, and ... well, do whatever the command is designed to do. Install the [`docutils`](http://pypi.python.org/pypi/docutils/) package for a great example of entry-point use: it will install something like a half-dozen useful commands for converting Python documentation to other formats.
[EntryPoints](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#entry-points "EntryPoints") provide a persistent, filesystem-based object name registration and name-based direct object import mechanism (implemented by the [setuptools](http://pypi.python.org/pypi/setuptools) package). They associate names of Python objects with free-form identifiers. So any other code using the same Python installation and knowing the identifier can access an object with the associated name, no matter where the object is defined. The **associated names can be any names existing in a Python module**; for example name of a class, function or variable. The entry point mechanism does not care what the name refers to, as long as it is importable. As an example, let's use (the name of) a function, and an imaginary python module with a fully-qualified name 'myns.mypkg.mymodule': ``` def the_function(): "function whose name is 'the_function', in 'mymodule' module" print "hello from the_function" ``` Entry points are registered via an entry points declaration in setup.py. To register the\_function under entrypoint called 'my\_ep\_func': ``` entry_points = { 'my_ep_group_id': [ 'my_ep_func = myns.mypkg.mymodule:the_function' ] }, ``` As the example shows, entry points are grouped; there's corresponding API to look up all entry points belonging to a group (example below). Upon a package installation (ie. running 'python setup.py install'), the above declaration is parsed by setuptools. It then writes the parsed information in special file. 
After that, the [pkg\_resources API](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#api-reference) (part of setuptools) can be used to look up the entry point and access the object(s) with the associated name(s): ``` import pkg_resources named_objects = {} for ep in pkg_resources.iter_entry_points(group='my_ep_group_id'): named_objects.update({ep.name: ep.load()}) ``` Here, setuptools reads the entry point information that was written in those special files. It finds the entry point, imports the module (myns.mypkg.mymodule), and retrieves the\_function defined there, upon the call to ep.load(). Calling the\_function would then be simple: ``` >>> named_objects['my_ep_func']() hello from the_function ``` Thus, while perhaps a bit difficult to grasp at first, the entry point mechanism is actually quite simple to use. It provides a useful tool for pluggable Python software development.
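Stripped of the packaging machinery, an entry-point specifier is just a string of the form `name = module:attr`, and "loading" it means importing the module and fetching the attribute. A minimal sketch of that resolution step, independent of setuptools (the `jdumps` name is made up; `json:dumps` is a real stdlib target used purely for illustration):

```python
import importlib

def resolve_entry_point(spec):
    """Resolve a 'name = module:attr' entry-point specifier to (name, object)."""
    name, _, target = (part.strip() for part in spec.partition("="))
    module_name, _, attr = target.partition(":")
    obj = importlib.import_module(module_name)
    if attr:
        # The attribute part may be dotted, e.g. 'SomeClass.method'.
        for piece in attr.split("."):
            obj = getattr(obj, piece)
    return name, obj

# Load a stdlib function exactly the way ep.load() would.
name, func = resolve_entry_point("jdumps = json:dumps")
```

This is roughly what `ep.load()` does for you, plus the bookkeeping of finding the installed package's metadata files.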
Explain Python entry points?
[ "", "python", "setuptools", "" ]
What are the best websites to learn entry level Javascript?
Screw w3schools ... seriously. Head to the yahoo developer network, and learn real js. <http://developer.yahoo.com/yui/theater/> And learn JS from the Crockford videos. THAT will save you time.
[www.w3schools.com](http://www.w3schools.com) has a lot of information on Javascript and DOM. They even have an online tutorials/examples that allow you to modify the example and re-run it to try out your new Javascript skills. **Note:** Not everything there is mistake-free. It's pretty useful, nonetheless. For an alternative opinion, see <http://w3fools.com>.
Best website to learn entry level Javascript?
[ "", "javascript", "" ]
It doesn't appear to do it by default, and I don't see any switch for it either. This is PHPUnit 2.3.5, and PHP 5.2.0-8.
You want to upgrade to a more recent version of PHPUnit, have a look at <http://www.phpunit.de/>.
You can install an error handler that escalates PHP notices (and other errors) into exceptions, which PHPUnit will then report as failures:

```
// Convert every reported PHP error, including E_NOTICE, into an ErrorException.
function exception_error_handler($errno, $errstr, $errfile, $errline ) {
    throw new ErrorException($errstr, 0, $errno, $errfile, $errline);
}
set_error_handler("exception_error_handler");
```
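The same escalate-to-exception trick exists in other test stacks. In Python terms, for instance, warnings can be promoted to errors so a test run fails on them — a rough analog of the PHP handler above (`run_strict` is a made-up helper name):

```python
import warnings

def run_strict(func, *args, **kwargs):
    """Run func with all warnings promoted to exceptions, as a test harness might."""
    with warnings.catch_warnings():
        warnings.simplefilter("error")   # any warning issued inside now raises
        return func(*args, **kwargs)
```

Code that merely warned before now raises, which turns silent diagnostics into hard test failures.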
Can I make PHPUnit fail if the code throws a notice?
[ "", "php", "phpunit", "" ]
I've got a table that has rows that are unique except for one value in one column (let's call it 'Name'). Another column is 'Date' which is the date it was added to the database. What I want to do is find the duplicate values in 'Name', and then delete the ones with the oldest dates in 'Date', leaving the most recent one. Seems like a relatively easy query, but I know very little about SQL apart from simple queries. Any ideas?
Find duplicates and delete oldest one ![alt text](https://farm4.static.flickr.com/3578/3384377552_e72356d8c0_o.png) Here is the Code ``` create table #Product ( ID int identity(1, 1) primary key, Name varchar(800), DateAdded datetime default getdate() ) insert #Product(Name) select 'Chocolate' insert #Product(Name,DateAdded) select 'Candy', GETDATE() + 1 insert #Product(Name,DateAdded) select 'Chocolate', GETDATE() + 5 select * from #Product ;with Ranked as ( select ID, dense_rank() over (partition by Name order by DateAdded desc) as DupeCount from #Product P ) delete R from Ranked R where R.DupeCount > 1 select * from #Product ```
In T-SQL you name the alias in the DELETE clause:

```
DELETE a1
FROM YourTable a1
WHERE EXISTS (SELECT *
              FROM YourTable a2
              WHERE a2.Name = a1.Name
                AND a2.Date > a1.Date)
```
How can I find duplicate entries and delete the oldest ones in SQL?
[ "", "sql", "sql-server", "" ]
I've got a table with two columns(among others): id and created\_in\_variant and a stored procedure that calculates the created\_in\_variant value basing on the id. I'd like to do something like this: ``` UPDATE [dbo].[alerts] SET [created_in_variant] = find_root [id] ``` Is there a nice way to do it?
You may want to look at using a *scalar-valued function* (a kind of *user-defined function*) instead of a stored procedure for this type of problem. **EDIT**: Here is some information concerning SVFs: [Click](http://www.sqlteam.com/article/user-defined-functions) **EDIT 2**: Here is some more information from [15 Seconds](http://www.15seconds.com/Issue/000817.htm)
Change your proc into a UDF and you basically call it exactly as you got ``` UPDATE [dbo].[alerts] SET [created_in_variant] = dbo.find_root([id]) ```
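The scalar-UDF-inside-an-UPDATE pattern can be demonstrated with Python's sqlite3, where a Python function is registered as a SQL function and then called per row, much like the T-SQL call above (names mirror the question, but `find_root`'s body is a toy stand-in, not the real lookup logic):

```python
import sqlite3

def find_root(alert_id):
    # Stand-in for the real created_in_variant calculation.
    return alert_id * 100

conn = sqlite3.connect(":memory:")
conn.create_function("find_root", 1, find_root)
conn.execute("CREATE TABLE alerts (id INTEGER PRIMARY KEY, created_in_variant INTEGER)")
conn.executemany("INSERT INTO alerts (id) VALUES (?)", [(1,), (2,), (3,)])

# The scalar function is now callable from SQL, one call per updated row.
conn.execute("UPDATE alerts SET created_in_variant = find_root(id)")
rows = conn.execute("SELECT id, created_in_variant FROM alerts ORDER BY id").fetchall()
```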
Using stored procedures results in update statement
[ "", "sql", "sql-server", "t-sql", "sql-update", "" ]
Is it possible to limit the number of properties that an attribute is applied to in a particular class?
At compile time, no. At runtime you could validate this via a static initialiser which throws if the invariant is violated; though this would be considered very poor style, it would be safe in the sense that no code could execute while the invariant doesn't hold. If you think about the extensibility inherent in .NET, even if you could verify this at compile time, imagine: compile dll A with ``` public class Foo { public int Property1 {get;} } ``` compile dll B referencing A.dll with class ``` public class Bar { [OnlyOneAllowedOnAnyPropertiesPerClass] public int Property2 {get;} } ``` then you recompile A.dll with ``` public class Foo { [OnlyOneAllowedOnAnyPropertiesPerClass] public int Property1 {get;} } ``` and attempt to run this new A.dll with the old B.dll (they are binary compatible in all other respects, so this is fine). Clearly the runtime would have to expend considerable effort sanity-checking this, not to mention that B might not be loaded for some time, suddenly making either one or both of A and B 'illegal'. Therefore you should not expect this to ever be functionality available in the framework.
I do not think it is possible at compile time. But you could add some code to the instance constructors or to a static constructor that checks this via reflection at run time.
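The runtime-validation idea both answers describe can be sketched in Python, where a class-creation hook plays the role of the static-constructor check (the decorator and hook names here are made up for illustration):

```python
def only_one_allowed(func):
    """Mark a member so that each class may carry at most one such marker."""
    func._only_one_allowed = True
    return func

class ValidatedBase:
    def __init_subclass__(cls, **kwargs):
        # Runs once when each subclass is defined, like a static initialiser.
        super().__init_subclass__(**kwargs)
        marked = [name for name, value in vars(cls).items()
                  if getattr(value, "_only_one_allowed", False)]
        if len(marked) > 1:
            raise TypeError("at most one marked member allowed, got: "
                            + ", ".join(sorted(marked)))

class Ok(ValidatedBase):
    @only_one_allowed
    def a(self):
        return 1
```

Defining a subclass with two marked members raises immediately, before any instance can exist — the "no code runs while the invariant is broken" property mentioned above.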
C# Attribute Limit
[ "", "c#", ".net-2.0", "attributes", "" ]
For our company intranet, built in PHP and MySQL, I want to add an area where employees can post a short profile of themselves - a couple of paragraphs of text and an image. What's the best way to convert this kind of plain text input to HTML paragraphs, bulleted lists, links, etc? **Clarification**: This content will be displayed in a modal that comes up when you click on the employee's desk on an office map, so a separate wiki or something like that will not work.
You could implement some very simple wiki-like markup, although "simple" here could vary; consider using something like [Markdown](http://daringfireball.net/projects/markdown/) to speed up the process. If you have a not-very-technical audience, you might try simply integrating a "rich text editor" type control onto the page, using something like [FCKeditor](http://www.fckeditor.net/) or [TinyMCE](http://tinymce.moxiecode.com/), from which you can remove/disable superfluous controls. This provides your users with a more familiar, WYSIWYG-style behaviour. You can perform some simple whitelisting to restrict the subset of HTML allowed through, in any case.
There's a lot to be said for going with the simplest, most intuitive solution. Stackoverflow is a programmer community, so people are pretty comfortable with this "Markdown" markup. But to make it more accessible to the average person who isn't used to bothering with markup of any type, the most basic solution may be best, so there is an absolute minimum of difference between what they see as they type it in, and what it turns out to be on the site. Something along the lines of ``` $html = "<p>" . preg_replace('/\r?\n\s*\r?\n/', "</p><p>", htmlspecialchars($text)) . "</p>"; ``` It doesn't have any formatting except that a blank line creates a paragraph break, but depending on the audience, for example if it's a general audience, you may find that this is a better idea. If you give non-technical people a markup syntax for formatting, you may find that it either confuses them and they use it incorrectly and mess something up, needing help, or you may find that they go overboard on the formatting, using as many text styles as they can. In my role I maintain a MediaWiki wiki as an intranet for this company, and have noticed that in general, while having a wiki is a great idea, its markup language is just not suitable for a general non-technical audience, many of whom prefer to write things up in MS Word, because that is familiar to them, and then upload the Word document to the wiki instead. The other option is a WYSIWYG editing control within the wiki software. It needs to be good, and as close to "actual" WYSIWYG as possible. It shouldn't be possible to get something looking OK in the preview which is messed up after filtering it server-side, because someone will find a way to do that. How to allow non-technical people to create links between pages is another matter which you may find to be complicated.
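The escape-then-split approach generalizes beyond PHP; here is the same logic sketched in Python, with `html.escape` standing in for `htmlspecialchars` (the function name is made up):

```python
import html
import re

def text_to_paragraphs(text):
    """Escape the text, then turn blank-line-separated chunks into <p> blocks."""
    escaped = html.escape(text)
    chunks = [c.strip() for c in re.split(r"\r?\n\s*\r?\n", escaped)]
    return "".join("<p>%s</p>" % c for c in chunks if c)

out = text_to_paragraphs("Hello <world>\n\nSecond para")
```

Escaping *before* splitting matters: any `<` or `&` the user typed is neutralized, and only the `<p>` tags you emit yourself survive.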
Convert plain text input to HTML
[ "", "php", "mysql", "html", "user-interface", "" ]
Sorry for the possibly misleading title of this post. Couln't really think of anything better at the moment. Anyway, I have a site set up that allows users to search our customer database. I have a separate section of this site listed under a "reports" directory, which is used to generate billing reports, subscriber counts, etc. Apparently our users are confused about having to type in the reports folder on the url: <http://maindomain/reports/>. I'm wondering if there is a way to create a new domain that points to that specific folder. In other words, users go to <http://reportsdomain/> which would be an alias for <http://maindomain/reports/>. The solutions that I've considered are as follows: 1. Create a new site in IIS that points to the reports folder (which is a sub folder in the other site... seems like a bad practice since then two seperate IIS sites will be doing caching for that site, maintaining session/application state, etc). 2. Build a IIS module to do URL rewriting Of those two options, I'd probably go with 2 before 1. Any other ways to do this that I'm not thinking of? Thanks in advance for your help. Respectfully, Chris
If you're okay with the user typing in the main URL and then being forwarded to the new URL, you just set up the <http://maindomain> site in IIS and then on the "Home Directory" page, set it to forward to <http://maindomain/reports/>. While this isn't as clean as the URL re-writing option, it's painless to enable, and it seems like this might be the option you're looking for.
* **Option 2** is preferable, avoiding the extra overhead of having two sites. [Using the Microsoft URL Rewrite Module for IIS 7.0](http://www.15seconds.com/issue/081205.htm)
Pointing a Domain to a IIS Site's Sub Folder
[ "", "c#", ".net", "iis", "" ]
I'm building a notification framework and for that I'm serializing and deserializing a basic class, from which all the classes I want to send will derive. The problem is that the code compiles, but when I actually try to serialize this basic class I get an error saying > System.Runtime.Serialization.SerializationException: Type 'Xxx.DataContracts.WQAllocationUpdate' in Assembly 'Xxx.DataContract, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable. Here is the code : ``` public class WCallUpdate : NotificationData { private string m_from = ""; [DataMember] public string From { get { return m_from; } set { m_from = value; } } private WCall m_wCall = new WCall(); [DataMember] public WCall Call { get { return m_wCall; } set { m_wCall = value; } } } ``` The `DataContract` for the Notification is: ``` /// <summary> /// Basic class used in the notification service /// </summary> [DataContract] public class NotificationData { } /// <summary> /// Enum containing all the events used in the application /// </summary> [DataContract] public enum NotificationTypeKey { [EnumMember] Default = 0, [EnumMember] IWorkQueueServiceAttributionAddedEvent = 1, [EnumMember] IWorkQueueServiceAttributionUpdatedEvent = 2, [EnumMember] IWorkQueueServiceAttributionRemovedEvent = 3, } ``` The code used to serialize the data is: ``` #region Create Message /// <summary> /// Creates a memoryStream from a notificationData /// note: we insert also the notificationTypeKey at the beginning of the /// stream in order to treat the memoryStream correctly on the client side /// </summary> /// <param name="notificationTypeKey"></param> /// <param name="notificationData"></param> /// <returns></returns> public MemoryStream CreateMessage(NotificationTypeKey notificationTypeKey, NotificationData notificationData) { MemoryStream stream = new MemoryStream(); BinaryFormatter formatter = new BinaryFormatter(); try { formatter.Serialize(stream, notificationTypeKey); 
formatter.Serialize(stream, notificationData); } catch (Exception ex) { Logger.Exception(ex); } return stream; } #endregion ``` When I try to create a message: ``` WCallUpdate m_wCallUpdate = new WCallUpdate(); NotificationTypeKey m_notificationTypeKey = NotificationTypeKey.Default; CreateMessage(m_notificationTypeKey, m_wCallUpdate); ``` I got the following error: ``` System.Runtime.Serialization.SerializationException: Type 'Xxx.DataContracts.WCall' in Assembly 'Xxx.DataContract, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable. at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(RuntimeType type) at System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context) at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo() at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter) at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.Serialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter) at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write(WriteObjectInfo objectInfo, NameInfo memberNameInfo, NameInfo typeNameInfo) at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, Header[] inHeaders, __BinaryWriter serWriter, Boolean fCheck) at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Header[] headers, Boolean fCheck) at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph) at
Xxx.Notification.NotificationMessageFactory.CreateMessage(NotificationTypeKey notificationTypeKey, NotificationData notificationData) in Xxx.Notification\NotificationCenter.cs:line 36 ``` If I put the Serializable flag before the `DataContract` one does not solve the problem. --- thank you for the fast answer. Sorry that i forgot to put the code of the NotificationData (edited in the main post) I tried putting the Serializable attribute to both class without success :( ``` #region NotificationData /// <summary> /// Basic class used in the notification service /// </summary> [Serializable] [DataContract] public class NotificationData { } #endregion ``` and ``` [Serializable] public class WCallUpdate : NotificationData { private string m_from = ""; [DataMember] public string From { get { return m_from; } set { m_from = value; } } private WCall m_wCall = new WCall(); [DataMember] public WCall Call { get { return m_wCall; } set { m_wCall = value; } } } ``` \*\*Edit: \*\* Mea culpa afterall :) You were both right. I forgot to spread the `[Serializable]` Attribute to all the child class. After updating and compiling, i got no longer the exception. thank you both for your correct answers :) --- @Marc Gravel: Actually i thought about what you are suggesting, and created the following DataContractSerializer, but I'm not sure this will work? As my classes use other classes? the big problem with the DataContractSerializer is that you need to specify the type of the object you want to serialize, and as my class uses other class as private fields, that might cause a problem right? ``` #region DataContractSerializer /// <summary> /// Creates a Data Contract Serializer for the provided type. The type must be marked with /// the data contract attribute to be serialized successfully. 
/// </summary> /// <typeparam name="T">The type to be serialized</typeparam> /// <returns>A data contract serializer</returns> public static DataContractSerializer CreateDataContractSerializer<T>() where T : class { DataContractSerializer serializer = new DataContractSerializer(typeof(T)); return serializer; } #endregion ```
Put `[Serializable]` at the top of the class. Note that `Serializable` isn't necessarily inherited either, AFAIK — meaning even if the base class has `[Serializable]`, you still need it on the descendant class.
I'm very confused why you're using `BinaryFormatter` with a data-contract. It would be normal to use `DataContractSerializer` here... the logic is then similar to using `[Serializable]`, except you need `[DataContract`], and it serializes the nominated (`[DataMember]`) members, rather than the fields which `BinaryFormatter` works with. Actually, for numerous reasons ([such as brittleness](http://marcgravell.blogspot.com/2009/03/obfuscation-serialization-and.html)) I would suggest switching to `DataContractSerializer`, especially as that seems to be your intention. Or if you want a more compact binary form, [protobuf-net](http://code.google.com/p/protobuf-net/) may be useful (plus is portable between platforms, too). As an aside - you don't need the `[DataContract]` on `enum`s - it does no harm, but doesn't do a lot either.
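The pattern in the question's `CreateMessage` — a type key followed by a payload, back to back in one stream — can be mimicked with Python's pickle, which (unlike `BinaryFormatter`) needs no per-class opt-in marker. A sketch, with made-up names:

```python
import io
import pickle

def create_message(type_key, payload):
    """Write the discriminator, then the payload, into one binary stream."""
    stream = io.BytesIO()
    pickle.dump(type_key, stream)   # first the type key...
    pickle.dump(payload, stream)    # ...then the payload, immediately after
    stream.seek(0)
    return stream

def read_message(stream):
    # Two sequential loads mirror the two sequential dumps.
    return pickle.load(stream), pickle.load(stream)

key, data = read_message(create_message(3, {"from": "alice", "call": "123"}))
```

The reader pulls the key first and can then dispatch on it before touching the payload — the same shape as reading the `NotificationTypeKey` ahead of the `NotificationData`.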
Serialization in C# with a derived class
[ "", "c#", "serialization", "derived-class", "" ]
I am trying to make an RSS feed with ASP.NET, SQL and XML. I am getting an error

> Compiler Error Message: CS0103: The
> name 'MyConnString' does not exist in
> the current context" on line 22
> "SqlConnection objConnection = new
> SqlConnection("MyConnString");

My web.config contains ``` <connectionStrings> <add name="MyConnString" connectionString="…" providerName="System.Data.SqlClient" /> </connectionStrings> ``` Here is the code ``` <%@ Page Language="C#" MasterPageFile="ContentMasterPage.master" Debug="true" %> <%@ Import Namespace="System.Xml"%> <%@ Import Namespace="System.Data" %> <%@ Import Namespace="System.Data.SqlClient" %> <script runat="server"> void Page_load(object sender, System.EventArgs e) { Response.Clear(); Response.ContentType = "text/xml"; XmlTextWriter objX = new XmlTextWriter(Response.OutputStream, Encoding.UTF8); objX.WriteStartDocument(); objX.WriteStartElement("rss"); objX.WriteAttributeString("version", "2.0"); objX.WriteElementString("title", "News"); objX.WriteElementString("link", "http://news.ca/news.aspx"); objX.WriteElementString("description", "The latest headlines"); objX.WriteElementString("copyright", "(c)2009, News Club, All rights reserved."); objX.WriteElementString("ttl", "5"); SqlConnection objConnection = new SqlConnection("MyConnString"); objConnection.Open(); string sql = "SELECT TOP 5 Title, Description, ArticleID, DatePulished FROM articles ORDER BY DatePublished DESC"; SqlCommand objCommand = new SqlCommand(sql, objConnection); SqlDataReader objReader = objCommand.ExecuteReader(); while (objReader.Read()) { objX.WriteStartElement("item"); objX.WriteElementString("title", objReader.GetString(0)); objX.WriteElementString("description", objReader.GetString(1)); objX.WriteElementString("link", ("http://news.ca/GetArticle.aspx?id=" + objReader.GetInt32(2).ToString())); objX.WriteElementString("pubDate", objReader.GetDateTime(3).ToString("R")); objX.WriteEndElement(); } objReader.Close(); objConnection.Close(); 
objX.WriteEndElement(); objX.WriteEndElement(); objX.WriteEndDocument(); objX.Flush(); objX.Close(); Response.End(); } </script> ```
``` SqlConnection objConnection = new SqlConnection("MyConnString"); ``` This is the line that the compiler is having a difficult time with. Did you replace your actual connection string with "MyConnString", or is MyConnString supposed to be a string value on the page? I have a similar line of code in my app: ``` SqlConnection oConn = new SqlConnection(_connString); ``` You can also get this value from the Web.Config ``` SqlConnection oConn = new SqlConnection(ConfigurationManager.ConnectionStrings["MyConnString"].ConnectionString); ```
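The underlying fix — resolve the *name* to the configured *value* instead of passing the name itself — is the same in any config system. A sketch in Python terms with `configparser` (the section layout and connection-string value are illustrative):

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[connectionStrings]
MyConnString = Server=.;Database=News;Trusted_Connection=True
""")

# Wrong: using the name as if it were the value.
wrong = "MyConnString"

# Right: looking the name up to get the configured value.
right = cfg["connectionStrings"]["MyConnString"]
```

Passing `wrong` to a connection constructor would fail at connect time (or, in C#, where `MyConnString` is an undeclared identifier, at compile time), while `right` carries the actual connection parameters.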
`SqlConnection` takes the connection string itself, not the name of an entry in the `web.config`. Or double-check your `web.config`.
ConnectionString Configuration Error
[ "", "c#", ".net", "asp.net", "configuration", "" ]
If Request.ServerVariables["HTTP\_X\_FORWARDED\_FOR"] returns multiple ip's, which one do I take and how would I do it in c#? It is my understanding that if it is blank or null, then the client computer is not going through a proxy and I can just get their ip from Request.ServerVariables["REMOTE\_ADDR"]. Is this a correct statement? By "which one do I take", I mean do I take the first IP in the list or the last IP and is all I have to do is just split it into an array and take the one I want. I am not really sure how HTTP\_X\_FORWARDED\_FOR works.
According to [this](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For), the format of X-Forwarded-For HTTP header is: ``` X-Forwarded-For: client1, proxy1, proxy2, ... ``` So the IP address of the client that you want should be the first one in the list
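Putting that rule into code: take the first entry of the header when it is present and non-empty, otherwise fall back to the remote address. A sketch (this only handles the parsing; it deliberately says nothing about whether the header can be trusted):

```python
def client_ip(forwarded_for, remote_addr):
    """First X-Forwarded-For entry if present, otherwise REMOTE_ADDR."""
    if forwarded_for:
        # Header format: "client1, proxy1, proxy2, ..." - the client is first.
        first = forwarded_for.split(",")[0].strip()
        if first:
            return first
    return remote_addr

ip = client_ip("203.0.113.7, 10.0.0.1, 10.0.0.2", "10.0.0.2")
```

An empty or missing header means the request (apparently) did not pass through a forwarding proxy, so `REMOTE_ADDR` is the client.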
A further note on the reliability subject: Anyone can forge HTTP\_X\_FORWARDED\_FOR by using a tool such as the Firefox plugin "Tamper Data" or their own local proxy (e.g. Privoxy). This means that the entire string might be fake, and REMOTE\_ADDR is the actual original host. It might also mean that the first "client1" address is faked, and then the client connected through a proxy, resulting in proxy1 being the client's IP address and REMOTE\_ADDR being the single proxy used. If you are looking to deny access based on IP, I would suggest checking every IP address in the XFF header as well as REMOTE\_ADDR. If you're looking to grant access based on the region of an IP, I'd suggest allowing access only if XFF is blank and the IP is from the proper area. As Mastermind already noted, however, there are proxies which will hide the chain of proxies. For instance, the Tor network will make a request appear as if it came from the final proxy machine, rather than the original IP. Anonymizing proxies will often claim they are forwarding for the same IP as reported in REMOTE\_ADDR. IP based filtering is generally a pretty crude, last-resort mechanism of access control.
How do I get the correct IP from HTTP_X_FORWARDED_FOR if it contains multiple IP Addresses?
[ "", "c#", "" ]
Suppose that ThreadA and ThreadB both call `WaitOne()` in that order on the same AutoResetEvent. When the event is set, why does ThreadB get released instead of ThreadA? I ran a test to find out what happens when you set an AutoResetEvent on which multiple threads are waiting: ``` private static void Test() { // two threads - waiting for the same autoreset event // start it unset i.e. closed i.e. anything calling WaitOne() will block AutoResetEvent autoEvent = new AutoResetEvent(false); Thread thread1 = new Thread(new ThreadStart(WriteSomeMessageToTheConsole)); thread1.Start(); // this will now block until we set the event Thread thread2 = new Thread(new ThreadStart(WriteSomeOtherMessageToTheConsole)); thread2.Start(); // this will now also block until we set the event // simulate some other stuff Console.WriteLine("Doing stuff..."); Thread.Sleep(5000); Console.WriteLine("Stuff done."); // set the event - I thought this would mean both waiting threads are allowed to continue // BUT thread2 runs and thread1 stays blocked indefinitely // So I guess I was wrong and that Set only releases one thread in WaitOne()? // And why thread2 first? autoEvent.Set(); } ``` The code is of course not useful, it's just a simple example.
IIRC, which thread is released by an auto-reset event is unspecified. As everyone else mentioned, you want a manual reset event if you want to broadcast a condition. If you want to release an exact number (say exactly 3 of n), then you probably want to use a semaphore. If you really want to dig into why the order might be different than you would expect, take a look at "Windows Internals" or anything that Mark Russinovich has written. Chances are that he explains the wait order on executive resources somewhere.
From MSDN > MSDN on ManualResetEvent: "Threads > that call WaitOne on the > ManualResetEvent will block, awaiting > the signal. When the controlling > thread completes the activity, it > calls Set to signal that the waiting > threads can proceed. > **All waiting threads are released.** > > But for AutoResetEvent, MSDN says: > "Calling Set signals AutoResetEvent to > release a waiting thread. > AutoResetEvent remains signaled until > **a single waiting thread is released**, and then automatically > returns to the nonsignaled state. If > no threads are waiting, the state > remains signaled indefinitely. "
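The release-exactly-one semantics both answers describe map naturally onto a counting semaphore. As a sketch in Python terms: `threading.Event` is the manual-reset analog (all waiters released), while `Semaphore(0)` behaves like an auto-reset event (each release wakes exactly one waiter, and which one is unspecified):

```python
import threading

auto_event = threading.Semaphore(0)   # non-signaled: acquire() would block

auto_event.release()                  # "Set": hands out exactly one permit

first = auto_event.acquire(timeout=0.1)    # one waiter gets through...
second = auto_event.acquire(timeout=0.1)   # ...the next times out, event is "reset"
```

The single permit is consumed by the first `acquire`, so the second returns `False` on timeout — exactly the one-thread-released behavior the question observed.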
Why does the second thread get released before the first one, when they both called WaitOne() and were released by an AutoResetEvent?
[ "", "c#", ".net", "multithreading", "resetevent", "" ]
I'm using the [PHPExcel](http://www.codeplex.com/PHPExcel) lib, which seems to do a great job, but in my approach it doesn't seem to be very fast. I need to generate an Excel file with a lot of sheets, data, formulas and styles (bold, color, border), and it takes a lot of computing resources and time. I think that my approach is not so good. What is the right order of steps when generating this kind of file (Excel) using PHP code and data from a MySQL DB? 1) generate all the sheets with no format/style and no formulas, then add the data, formulas and styles 2) generate sheet by sheet with data, adding formulas, adding style 3) ... Is it better to have all the needed data from the DB in a multidimensional array rather than querying the DB each time I write to a new sheet? I'm using a dual-core machine with 4GB of RAM, and it takes me about a minute if Apache doesn't crash.
1. Have MySQL do the calculations: I once optimized a report where the original developer used loops to calculate sums rather than the SQL SUM function; it was so badly written it exceeded the time limit for PHP. 2. Make sure your calculation SQL is optimized enough. 3. Calculate, then generate the content.
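Point 1 — pushing the arithmetic into SQL instead of the application loop — looks like this in miniature, with sqlite3 standing in for MySQL (table name and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?)", [(10.0,), (2.5,), (7.5,)])

# Slow pattern: fetch every row and sum in the application loop.
loop_total = sum(row[0] for row in conn.execute("SELECT amount FROM sales"))

# Fast pattern: one aggregate query, one row transferred back.
(sql_total,) = conn.execute("SELECT SUM(amount) FROM sales").fetchone()
```

Both compute the same number, but the aggregate form moves one row over the wire instead of the whole table, which is where the report in answer 1 was losing its time.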
I would suggest profiling your PHP code to find out where the most time is spent. You can find [a step-by-step tutorial about using XDebug to profile PHP code](http://hype-free.blogspot.com/2008/06/profiling-php-with-xdebug.html) on my blog. Then try to optimize the parts which consume the most time / are called the most frequently. One of the best optimizations you can do is to replace PHP methods by built-in functions if possible, because they are orders of magnitude faster.
multidimensional PHP array or multiple queries to mysql DB
[ "", "php", "mysql", "multidimensional-array", "" ]
I'm trying to write a validation to check that an Object instance can be cast to a variable Type. I have a Type instance for the type of object they need to provide. But the Type can vary. This is basically what I want to do. ``` Object obj = new object(); Type typ = typeof(string); //just a sample, really typ is a variable if(obj is typ) //this is wrong "is" does not work like this { //do something } ``` The type object itself has the IsSubClassOf, and IsInstanceOfType methods. But what I really want to check is if **obj** is either an instance of **typ** or any class derived from **typ**. Seems like a simple question, but I can't seem to figure it out.
How about this:

```
MyObject myObject = new MyObject();
Type type = myObject.GetType();

if(typeof(YourBaseObject).IsAssignableFrom(type))
{
    //Do your casting.
    YourBaseObject baseobject = (YourBaseObject)myObject;
}
```

This tells you if that object can be cast to that certain type.
I think you need to restate your conditions, because if `obj` is an instance of `Derived`, it will also be an instance of `Base`. And `typ.IsInstanceOfType(obj)` will return true.

```
class Base { }
class Derived : Base { }

object obj = new Derived();
Type typ = typeof(Base);

typ.IsInstanceOfType(obj);           // = true
typ.IsAssignableFrom(obj.GetType()); // = true
```
How to tell if an instance is of a certain Type or any derived types
[ "", "c#", "casting", "types", "" ]
We need to print a report periodically from a Windows service; we use .NET 2.0. We have modules that produce this report as a PDF or as an HTML file. PDF would be better because we have better control over the look of the report. We also have the username, password and the name of the printer selected for the task. I searched and researched several options on how to print PDF files from a Windows service, namely using Acrobat Reader from the command line, and couldn't make it work. Acrobat Reader seems to be unreliable and difficult to use, and we would also need to guarantee it is installed on our clients' machines. Do you have a solution for this, perhaps in a third-party component? Thanks
We found this hidden pearl of open source library called [pdfprint#](http://sourceforge.net/projects/pdfprn/) that does exactly what we needed. It seems it's based on [XPDF](http://foolabs.com/xpdf/) which is an open source library in C++. The author wrote in the forum that "The printer *has* to be post-script capable, the library sends raw postscript to the printer." and I wonder (and fear) how big of a problem this will be... Thanks all for your help!
You may or may not have seen the question I asked about this [here](https://stackoverflow.com/questions/233842/programmatically-printing-in-adobe-reader-9-using-net-interop) which may give you some hints if you are forced to use Acrobat after all. Otherwise this commercial third party component will do what you want - [abcpdf](http://websupergoo.com/abcpdf-11.htm). It's thread-safe BTW. P.S. You'll need the professional version because only the pro version supports rendering.
Printing a report from a windows service
[ "", "c#", "pdf", "windows-services", "printing", "reporting", "" ]
**Background:** Trevor is working with a PHP implementation of a standard algorithm: take a main set of default name-value pairs, and update those name-value pairs, but only for those name-value pairs where a valid update value actually exists. **Problem:** by default, PHP array\_merge works like this ... it will overwrite a non-blank value with a blank value.

```
$aamain = Array('firstname'=>'peter','age'=>'32','nation'=>'');
$update = Array('firstname' => '','lastname' => 'griffin', 'age' => '33','nation'=>'usa');
print_r(array_merge($aamain,$update));
/*
Array
(
    [firstname] =>        // <-- update set this to blank, NOT COOL!
    [age] => 33           // <-- update set this to 33, thats cool
    [lastname] => griffin // <-- update added this key-value pair, thats cool
    [nation] => usa       // <-- update filled in a blank, thats cool.
)
*/
```

**Question:** What's the fewest-lines-of-code way to do array\_merge where blank values never overwrite already-existing values?

```
print_r(array_coolmerge($aamain,$update));
/*
Array
(
    [firstname] => peter // <-- don't blank out a value if one already exists!
    [age] => 33
    [lastname] => griffin
    [nation] => usa
)
*/
```

**UPDATE:** 2016-06-17T11:51:54 the question was updated with clarifying context and a rename of variables.
``` array_replace_recursive($array, $array2); ``` This is the solution.
Well, if you want a "clever" way to do it, here it is, but it may not be as readable as simply doing a loop.

```
$merged = array_merge(array_filter($foo, 'strval'), array_filter($bar, 'strval'));
```

edit: or using the `+` array union operator, which never overwrites keys that already exist on the left-hand side.
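The blank-values-never-overwrite merge the answers aim at can also be sketched language-neutrally; here is a minimal Python version (function and variable names are illustrative, not part of the original PHP, and it treats `""` and `None` as blank):

```python
def cool_merge(main, update):
    """Merge update into main, but keep main's value wherever
    update's value is blank (empty string or None)."""
    merged = dict(main)
    for key, value in update.items():
        if value not in ("", None):   # only non-blank values overwrite
            merged[key] = value
        elif key not in merged:       # still add brand-new (blank) keys
            merged[key] = value
    return merged

main = {"firstname": "peter", "age": "32", "nation": ""}
update = {"firstname": "", "lastname": "griffin", "age": "33", "nation": "usa"}
result = cool_merge(main, update)
# firstname stays "peter"; age and nation are updated; lastname is added
```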
php array_merge without erasing values?
[ "", "php", "arrays", "merge", "" ]
I have the following code: ``` $("#Table1 tbody").children().each(function(e){ $(this).bind('click', function(){ // Do something here }, false) }); ``` The Table1 html table has 2 columns; one for Names and one for a `<button>` element. When I click on a table row, it works fine. When I click on the button, the button code fires; however, so does the row code. How can I filter the selector so the button doesn't trigger the parent element's click event?
This is what you want. It's [stopPropagation](http://docs.jquery.com/Events/jQuery.Event#event.stopPropagation.28.29) that will stop the event from bubbling up to the parents.

```
<table>
 <tr>
  <td>The TD: <input type="button" id="anotherThing" value="dothis"></td>
 </tr>
</table>
<div id="results">
 Results:
</div>
<script>
 $(function() {
  $('#anotherThing').click(function(event) {
   $('#results').append('button clicked<br>');
   event.stopPropagation();
  });

  $('td').click(function() {
   $('#results').append('td clicked<br>');
  });
 });
</script>
```

Here's a link to an example of it working as well: <http://jsbin.com/uyuwi>

You can tinker with it at: <http://jsbin.com/uyuwi/edit>
You could also do something like this: ``` $('#Table1 tr').bind('click', function(ev) { return rowClick($(this), ev); }); //Bind the tr click $('#Table1 input').bind('click', function(ev) { return buttonClick($(this), ev); }) //Bind the button clicks function rowClick(item, ev) { alert(item.attr('id')); return true; } function buttonClick(item, ev) { alert(item.attr('id')); ev.stopPropagation(); return true; } ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <table id="Table1"> <tbody> <tr id="tr1"> <td> The TD: <input type="button" id="button1" value="dothis" /> </td> </tr> <tr id="tr2"> <td> The TD: <input type="button" id="Button2" value="dothis" /> </td> </tr> </tbody> </table> ```
jQuery Children selector question
[ "", "javascript", "jquery", "" ]
In addition to the explanation, what does the $ mean in javascript? Here is the code: ``` var ZebraTable = { bgcolor: '', classname: '', stripe: function(el) { if (!$(el)) return; var rows = $(el).getElementsByTagName('tr'); for (var i=1,len=rows.length;i<len;i++) { if (i % 2 == 0) rows[i].className = 'alt'; Event.add(rows[i],'mouseover',function() { ZebraTable.mouseover(this); }); Event.add(rows[i],'mouseout',function() { ZebraTable.mouseout(this); }); } }, mouseover: function(row) { this.bgcolor = row.style.backgroundColor; this.classname = row.className; addClassName(row,'over'); }, mouseout: function(row) { removeClassName(row,'over'); addClassName(row,this.classname); row.style.backgroundColor = this.bgcolor; } } window.onload = function() { ZebraTable.stripe('mytable'); } ``` Here is a link to where I got the code and you can view a demo on the page. It does not appear to be using any framework. I was actually going through a JQuery tutorial that took this code and used JQuery on it to do the table striping. Here is the link: <http://v3.thewatchmakerproject.com/journal/309/stripe-your-tables-the-oo-way>
> Can someone explain the following > javascript code? ``` //Shorthand for document.getElementById function $(id) { return document.getElementById(id); } var ZebraTable = { bgcolor: '', classname: '', stripe: function(el) { //if the el cannot be found, return if (!$(el)) return; //get all the <tr> elements of the table var rows = $(el).getElementsByTagName('tr'); //for each <tr> element for (var i=1,len=rows.length;i<len;i++) { //for every second row, set the className of the <tr> element to 'alt' if (i % 2 == 0) rows[i].className = 'alt'; //add a mouseOver event to change the row className when rolling over the <tr> element Event.add(rows[i],'mouseover',function() { ZebraTable.mouseover(this); }); //add a mouseOut event to revert the row className when rolling out of the <tr> element Event.add(rows[i],'mouseout',function() { ZebraTable.mouseout(this); }); } }, //the <tr> mouse over function mouseover: function(row) { //save the row's old background color in the ZebraTable.bgcolor variable this.bgcolor = row.style.backgroundColor; //save the row's className in the ZebraTable.classname variable this.classname = row.className; //add the 'over' class to the className property //addClassName is some other function that handles this addClassName(row,'over'); }, mouseout: function(row) { //remove the 'over' class form the className of the row removeClassName(row,'over'); //add the previous className that was stored in the ZebraTable.classname variable addClassName(row,this.classname); //set the background color back to the value that was stored in the ZebraTable.bgcolor variable row.style.backgroundColor = this.bgcolor; } } window.onload = function() { //once the page is loaded, "stripe" the "mytable" element ZebraTable.stripe('mytable'); } ```
The $ doesn't mean anything in Javascript, but it's a valid function name and several libraries use it as their all-encompassing function, for example [Prototype](http://www.prototypejs.org/) and [jQuery](http://jquery.com/)
Can someone explain the following javascript code?
[ "", "javascript", "jquery", "" ]
I know I have done this before but I am getting my constructor order of execution in a twist I think.... ``` public class Class1 { Class2 _class2; public Class1() { _class2 = new Class2(this); } } public class Class2 { Class1 _parent; //corrected typo public Class2(Class1 parent) { _parent = parent; } } ``` trouble is that parent always ends up null. What's the proper way to do this? (maybe I can blame my slowness on having a cold..) EDITED TO CORRECT THE TYPO (which isn't the problem in the real code!)
This should, technically, work, provided you change Class2 to include `this.parent = parent;` However, I don't recommend this. I would, instead, recommend lazy initializing your class2 instance inside class1. Depending on what all is done in the constructor of Class2, you can potentially lead yourself into nasty situations. Making a Class2 property on class1 and lazy initializing it would cause Class2 to be constructed after Class1's constructor is completed, not during it's construction, which is most likely less error prone if your classes get more complicated.
You may have mistyped the code, but I think you want this definition for Class2 (notice the *this* qualifier in your Class2 constructor): ``` public class Class2 { Class1 parent; public Class2(Class1 parent) { this.parent = parent; } } ```
pass a reference to 'this' in the constructor
[ "", "c#", "constructor", "" ]
I am using a mysqli prepared query with PHP; the code is:

```
$retreiveQuery = 'SELECT username, firstname, lastname FROM USERS WHERE username = ?';
if ($getRecords = $con->prepare($retreiveQuery)) {
    $getRecords->bind_param("s", $username);
    $getRecords->execute();
    $getRecords->bind_result($username, $firstname, $lastname);
    echo "<h1>".$username."</h1>
        <p><strong>First Name: </strong>".$firstname."
        <p><strong>Surname: </strong>".$lastname."";
} else {
    print_r($con->error);
}
```

This is quite strange: username is the only field which is displayed. Why would the other columns not be returned/bound? If I run

```
SELECT username, firstname, lastname FROM USERS WHERE username = 'test';
```

directly against the database, all fields are displayed, and all contain valid data.
You have to call ``` $getRecords->fetch(); ``` after `bind_result()` to actually get the record. Username was being output because you had already set it, to use as an input parameter.
What happens if you use a different variable name for output of username? Like this: SELECT username as user\_name, firstname, lastname FROM USERS WHERE username = 'test'; and then $getRecords->bind\_result($user\_name, $firstname, $lastname); I wonder if it's getting confused by using the same name for both input and output?
displaying the contents of a mysql query, only first field works
[ "", "php", "mysql", "mysqli", "prepared-statement", "" ]
I have a page that is taking 37 seconds to load. While it is loading it pegs MySQL's CPU usage through the roof. I did not write the code for this page and it is rather convoluted so the reason for the bottleneck is not readily apparent to me. I profiled it (using kcachegrind) and find that the bulk of the time on the page is spent doing MySQL queries (90% of the time is spent in 25 different mysql\_query calls). The queries take the form of the following with the tag\_id changing on each of the 25 different calls: ``` SELECT * FROM tbl_news WHERE news_id IN (select news_id from tbl_tag_relations WHERE tag_id = 20) ``` Each query is taking around 0.8 seconds to complete with a few longer delays thrown in for good measure... thus the 37 seconds to completely load the page. My question is, is it the way the query is formatted with that nested select that is causing the problem? Or could it be any one of a million other things? Any advice on how to approach tackling this slowness is appreciated. Running EXPLAIN on the query gives me this (but I'm not clear on the impact of these results... the NULL on primary key looks like it would be bad, yes? The number of results returned seems high to me as well as only a handful of results are returned in the end): ``` 1 PRIMARY tbl_news ALL NULL NULL NULL NULL 1318 Using where 2 DEPENDENT SUBQUERY tbl_tag_relations ref FK_tbl_tag_tags_1 FK_tbl_tag_tags_1 4 const 179 Using where ```
I've addressed this point in [Database Development Mistakes Made by App Developers](https://stackoverflow.com/questions/621884/database-development-mistakes-made-by-appdevelopers/621891#621891). Basically, favour joins over aggregation. IN isn't aggregation as such, but the same principle applies. A good optimizer will make these two queries equivalent in performance:

```
SELECT * FROM tbl_news
WHERE news_id IN (select news_id from tbl_tag_relations WHERE tag_id = 20)
```

and

```
SELECT tn.*
FROM tbl_news tn
JOIN tbl_tag_relations ttr ON ttr.news_id = tn.news_id
WHERE ttr.tag_id = 20
```

as I believe Oracle and SQL Server both do, but MySQL doesn't. The second version is basically instantaneous. With hundreds of thousands of rows I did a test on my machine and got the first version to sub-second performance by adding appropriate indexes. The join version with indexes is basically instantaneous, but even without indexes it performs OK.

By the way, the above syntax is the one you should prefer for doing joins. It's clearer than putting them in the `WHERE` clause (as others have suggested), and it can do certain things in an ANSI SQL way with left outer joins that WHERE conditions can't.

So I would add indexes on the following:

* tbl\_news (news\_id)
* tbl\_tag\_relations (news\_id)
* tbl\_tag\_relations (tag\_id)

and the query will execute almost instantaneously.

Lastly, don't use \* to select all the columns you want. Name them explicitly. You'll get into less trouble as you add columns later.
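The two queries should return identical rows, and that equivalence can be checked with a quick sketch (SQLite via Python here, with invented sample data; the performance claims themselves concern MySQL's optimizer and are not reproduced by this):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl_news (news_id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE tbl_tag_relations (news_id INTEGER, tag_id INTEGER);
    INSERT INTO tbl_news VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO tbl_tag_relations VALUES (1, 20), (3, 20), (2, 7);
""")

# The original IN-subquery form...
in_rows = conn.execute("""
    SELECT news_id FROM tbl_news
    WHERE news_id IN (SELECT news_id FROM tbl_tag_relations WHERE tag_id = 20)
    ORDER BY news_id
""").fetchall()

# ...and the rewritten JOIN form return the same rows.
join_rows = conn.execute("""
    SELECT tn.news_id
    FROM tbl_news tn
    JOIN tbl_tag_relations ttr ON ttr.news_id = tn.news_id
    WHERE ttr.tag_id = 20
    ORDER BY tn.news_id
""").fetchall()
```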
The SQL Query itself is definitely your bottleneck. The query has a sub-query in it, which is the IN(...) portion of the code. This is essentially running two queries at once. You can likely halve (or more!) your SQL times with a JOIN (similar to what d03boy mentions above) or a more targeted SQL query. An example might be: ``` SELECT * FROM tbl_news, tbl_tag_relations WHERE tbl_tag_relations.tag_id = 20 AND tbl_news.news_id = tbl_tag_relations.news_id ``` To help SQL run faster you also want to try to avoid using SELECT \*, and only select the information you need; also put a limiting statement at the end. eg: ``` SELECT news_title, news_body ... LIMIT 5; ``` You also will want to look into the database schema itself. Make sure you are indexing all of the commonly referred to columns so that the queries will run faster. In this case, you probably want to check your news\_id and tag\_id fields. Finally, you will want to take a look at the PHP code and see if you can make one single all-encompassing SQL query instead of iterating through several seperate queries. If you post more code we can help with that, and it will probably be the single greatest time savings for your posted problem. :)
Optimizing a PHP page: MySQL bottleneck
[ "", "php", "optimization", "mysql", "" ]
I have an MS Access database that contains translated sentences in source-target pairs (a translation memory for fellow users of CAT tools). Somewhat annoyingly, source and target are not stored in separate columns, but in rows linked by ID, like this: ``` +---+----+--------------+ |id |lang| text | +---+----+--------------+ 1 a lang a text 1 b lang b text 2 a more a text... 2 b more b text... +---+----+--------------+ ``` What SQL could I use to turn that into a table such as: ``` +---+--------------+--------------+ |id | lang A | lang B | +---+--------------+--------------+ 1 lang a text lang b text 2 more a text... more b text... ``` Performance doesn't matter here, since would I only need to do this once in a while, and the db isn't huge (only a few thousand rows).
A crosstab query should suit. ``` TRANSFORM First([Text]) AS LangText SELECT ID, First([Text]) FROM Table GROUP BY ID PIVOT lang ``` Further information: <http://allenbrowne.com/ser-67.html>
You need a self-join: ``` SELECT t1.id, t1.text AS lang_a, t2.text AS lang_b FROM lang_table AS t1 INNER JOIN lang_table AS t2 ON (t1.id = t2.id) WHERE t1.lang = 'a' AND t2.lang = 'b' ```
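The self-join version can be tried outside Access; here is a minimal sketch using SQLite from Python with the question's sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lang_table (id INTEGER, lang TEXT, text TEXT);
    INSERT INTO lang_table VALUES
        (1, 'a', 'lang a text'), (1, 'b', 'lang b text'),
        (2, 'a', 'more a text...'), (2, 'b', 'more b text...');
""")

# Join the table to itself: t1 supplies the lang-a row, t2 the lang-b row.
rows = conn.execute("""
    SELECT t1.id, t1.text AS lang_a, t2.text AS lang_b
    FROM lang_table AS t1
    INNER JOIN lang_table AS t2 ON t1.id = t2.id
    WHERE t1.lang = 'a' AND t2.lang = 'b'
    ORDER BY t1.id
""").fetchall()
```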
SQL to transpose row pairs to columns in MS ACCESS database
[ "", "sql", "ms-access", "rows", "transpose", "" ]
I have a `char**` that I frequently need to insert into or perform a lookup. It is very tedious to `realloc()`, `malloc()` the array and insert strings. Is there any standard way that I can add strings to or do lookups in a `char**`? I guess I'm looking for something like string, but using `char**`'s instead.
If you're frequently inserting into this structure, you shouldn't be using a `char**` at all; an array isn't a suitable data structure for these kinds of operations. Consider a `std::vector<string>` or something similar if possible.
Sounds like you want to use something like an STL List or [Boost Array](http://www.boost.org/doc/libs/1_38_0/doc/html/array.html) of char\*. Note that STL Vectors are not preferable if you need to insert or remove elements from your array.
How can I easily work with a char**?
[ "", "c++", "char", "standard-library", "" ]
How do I view the compiler-generated mangled names for overloaded functions in C++? I'm using VC9 but answers for other compilers are welcome too. **Edit:** I find all the answers useful here. Accepting the one I liked best.
You could look in the map file. Assuming you have map file generation turned on.
You can see the decorated function names by using [Dependency Walker](http://www.dependencywalker.com/). Open any DLL\EXE in dependency walker and in right pane you can see a list of decorated function names.
View Compiler Mangled Names in C++
[ "", "c++", "compiler-construction", "name-mangling", "visual-c++-2008", "" ]
Suppose you have a table RULES with 3 columns A, B, and C. As data enters the system, I want to know if any row of the RULES table matches my data, with the condition that if the corresponding column in the RULES table is null, all data matches. The obvious SQL is:

```
SELECT * FROM RULES
WHERE (A = :a OR A IS NULL)
AND (B = :b OR B IS NULL)
AND (C = :c OR C IS NULL)
```

So if I have rules:

```
RULE    A     B     C
1       50    NULL  NULL
2       51    xyz   NULL
3       51    NULL  123
4       NULL  xyz   456
```

An input of (50, xyz, 456) will match rules 1 and 4.

**Question:** Is there a better way to do this? With only 3 fields this is no problem. But the actual table will have 15 columns and I worry about how well that SQL scales.

**Speculation:** An alternative SQL statement I came up with involved adding an extra column to the table with a count of how many fields are not null. (So in the example, this column's value for rules 1-4 is 1, 2, 2 and 2 respectively.) With this "col\_count" column, the select could be:

```
SELECT * FROM RULES
WHERE (CASE WHEN A = :a THEN 1 ELSE 0 END)
    + (CASE WHEN B = :b THEN 1 ELSE 0 END)
    + (CASE WHEN C = :c THEN 1 ELSE 0 END)
    = COL_COUNT
```

Unfortunately, I don't have enough sample data to find out which of these approaches would perform better. Before I start creating random rules, I thought I'd ask here whether there was a better approach.

**Note:** Data mining techniques and column constraints are not feasible here. The data must be checked as it enters the system so it can be flagged pass/fail immediately. And the users control the addition or removal of rules, so I can't convert the rules into column constraints or other data definition statements. One last thing: in the end I need a list of all the rules that the data fails to pass. The solution cannot abort at the first failure. Thanks.
The first query you provided is perfect. I really doubt that adding the column you were speaking of would give you any more speed, since the NOT NULL property of every entry is checked anyway, since every comparison to NULL yields false. So I would guess that `x=y` is expanded to `x IS NOT NULL AND x=y` internally. Maybe someone else can clarify that. All other optimizations I can think of would involve precalculation or caching. You can create [temporary] tables matching certain rules or add further columns holding matching rules.
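A runnable sketch of the original query against the question's sample rules (SQLite from Python here; names lower-cased, data copied from the question) confirms that input (50, 'xyz', 456) matches rules 1 and 4:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE rules (rule INTEGER, a INTEGER, b TEXT, c INTEGER);
    INSERT INTO rules VALUES
        (1, 50, NULL, NULL),
        (2, 51, 'xyz', NULL),
        (3, 51, NULL, 123),
        (4, NULL, 'xyz', 456);
""")

# NULL rule columns act as wildcards via the `OR ... IS NULL` branches;
# a plain `a = :a` alone would exclude NULL rows, since any comparison
# with NULL is neither true nor false in SQL.
matches = conn.execute("""
    SELECT rule FROM rules
    WHERE (a = :a OR a IS NULL)
      AND (b = :b OR b IS NULL)
      AND (c = :c OR c IS NULL)
    ORDER BY rule
""", {"a": 50, "b": "xyz", "c": 456}).fetchall()
```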
Are there too many rows/rules? If it's not the case (that's subjective, but say less than 10,000), you could create indexes for all columns. That would increase speed significantly and indexes won't take much space. If you don't plan to make a huge table of rules, then I bet your approach is ok provided you index all columns.
Is there a good way to check rules against n columns?
[ "", "sql", "rule-engine", "" ]
I'm writing a library of extension methods for String and DateTime utility functions in C#. Can you please help me out by suggesting useful utility functions for String and DateTime that could be part of it? With your suggestions I can make it more cohesive and collective. Thanks!
```
public static bool IsNullOrEmpty(this string value){
   return string.IsNullOrEmpty(value);
}

public static string Reverse(this string value) {
    if (!string.IsNullOrEmpty(value)) {
        char[] chars = value.ToCharArray();
        Array.Reverse(chars);
        value = new string(chars);
    }
    return value;
}

public static string ToTitleCase(this string value) {
    return CultureInfo.CurrentCulture.TextInfo.ToTitleCase(value);
}

public static string ToTitleCaseInvariant(this string value) {
    return CultureInfo.InvariantCulture.TextInfo.ToTitleCase(value);
}
```

Trivial, but slightly nicer to call.
What about methods that don't specifically *extend* string or DateTime, but rather *target* or return a string or DateTime? Then you could build some `int` and `TimeSpan` methods as well, so you can write fluent interfaces like: ``` DateTime yesterday = 1.Days().Ago(); ``` . ``` public static TimeSpan Days(this int value) { return new TimeSpan(value, 0, 0, 0); } public static TimeSpan Hours(this int value) { return new TimeSpan(value, 0, 0); } public static TimeSpan Minutes(this int value) { return new TimeSpan(0, value, 0); } //... ``` . ``` public static DateTime Ago(this TimeSpan value) { return DateTime.Now.Add(value.Negate()); } public static DateTime FromNow(this TimeSpan value) { return DateTime.Now.Add(value); } ```
Suggestions For String and DateTime utility functions' Library using Extension Methods
[ "", "c#", "c#-3.0", "extension-methods", "" ]
I am working on a future project that will get information into a private website not from an interface but from email. After some research: [posterous.com](http://posterous.com/) is doing something interesting, posting to a personal blog from email. [flickr](http://www.flickr.com/) is doing the same thing with photos attached to an email, posting them on the server to show them off. My question: where on the net can I find a proof of concept or an already-made script that will

1. read a POP email account at a specific interval (every 5 minutes)
2. extract the subject and content of the mail
3. get the attached photos
4. save that specific information to a file on the server

All of that can be done with PHP, but my qualifications are not good enough to do it myself; I can adapt a script for sure, though!
Looking around quickly I found two scripts that hold promise: <http://www.weberdev.com/get_example-4015.html> and <http://www.nerdydork.com/download-pop3imap-email-attachments-with-php.html> It looks like both could be modified to suit your needs.
If you look at something like the Zend Framework's Zend\_Mail libraries - <http://framework.zend.com/manual/en/zend.mail.read.html> - you'd be in good shape. I've been using it for retrieving email for other purposes and it gives you a nice little object with all the headers in an array. Use the data you need and then dump the rest.
php and email interaction
[ "", "php", "email", "" ]
It's been a while since I've done any website with with Java, and am wondering what framework options are out there for Google App Engine. What framework would you suggest for someone who has no real preference? I like Ruby On Rails, and am getting into Django, and like that as well. Professionally I'm a ASP.NET developer so I have the most experience with that, but I'm looking to expand into other technologies, and patterns. It would be nice to have more experience with MVC. thanks, Mark
The [Spring Framework](http://www.springsource.org "Spring Framework") works, although you have to make sure commons-logging isn't called commons-logging-1.1.1.jar (as I had it in maven conventions, Google provides a jar with this same name and there are classloading issues as a result). So, Spring WebMVC is confirmed to work - which raises the possibility that its sister project [Spring Webflow](http://www.springsource.org/webflow) will work - though I can't say I really like where Webflow 2 completely diverged from Webflow 1. Also, I have yet to find a framework that really encompasses the notion of "saving and continuing" well - users often like to do that, and Webflow 2 really tries to make programming that as difficult as possible if you use its persistence context inside the flows themselves.
[Wicket](http://wicket.apache.org/) works on App Engine, you just have to make a few [tweaks to the configuration](http://www.danwalmsley.com/2009/04/08/apache-wicket-on-google-app-engine-for-java/).
What Java framework would you use with Google App Engine?
[ "", "java", "google-app-engine", "frameworks", "" ]
I use a custom 404 page in a PHP application. On that 404 page, I check for certain 'known' older pages that no longer exist, and then redirect user to newer or most relevant current page. Works great. Now I am faced with a series of old images that have been removed, and am **looking for a way to redirect the images to a new image** (all inside of the php code if possible). I have hunted around briefly and came up empty. Any way to do this? Here is a sample of my code: ``` <?php //-- grab info regarding bad request -- $root = $_SERVER['DOCUMENT_ROOT']; $page = parse_url($_SERVER['REQUEST_URI']); $page = $page['path']; $referer = $_SERVER['HTTP_REFERER']; $host = $_SERVER['REMOTE_HOST']; //-- try to redirect old pages/files ---------------------- // $page = urlencode($page); if ( stristr($page, "some_old_file.zip") ) { // Example file redirect echo "<META HTTP-EQUIV=\"refresh\" CONTENT=\"0;URL=http://www.site.com/the/new/file.zip\">"; } elseif ( stristr($page, "some_old_page.php") ) { // example webpage redirect echo "<META HTTP-EQUIV=\"refresh\" CONTENT=\"0;URL=http://www.site.com/the/new/page.php\">"; } elseif ( stristr($page, "some_old_image.jpg") ) { // not sure how to do this ... // ... // ... // ... // ... // ... // not sure how to do this ... } else { // everything else - direct to custom 404 search page echo "<META HTTP-EQUIV=\"refresh\" CONTENT=\"0;URL=http://www.site/com/the/custom/404_help.php?page={$page}\">"; } // // ------------------------------------------------------- ?> ```
instead of outputting a meta refresh, use a Location header

```
header("Location: /path/to/image.jpg");
```

PHP terminates the header line for you, so no trailing newline is needed.

Note: headers must be sent before any other output
You will need a server-side redirect. Look at the PHP function "header" and HTTP status code 301 (moved permanently). You'll find a ton of ready-made 5-liner solutions. If I were you, I'd use this method for HTML content too, to inform the search engines about the new location of the same content.
404 Error Page ~ Redirect an Image - PHP
[ "", "php", "apache", "redirect", "" ]
I have a model that has an ordering field under its Meta class. When I perform a query and get back a QuerySet for the model it is in the order specified. However if I have instances of this model that are in a list and execute the sort method on the list the order is different from the one I want. Is there a way to sort a list of instances of a model such that the order is equal to that specified in the model definition?
Not automatically, but with a bit of work, yes. You need to define a comparator function (or `__cmp__` method on the model class) that can compare two model instances according to the relevant attribute. For instance:

```
class Dated(models.Model):
    ...
    created = models.DateTimeField(default=datetime.now)

    class Meta:
        ordering = ('created',)

    def __cmp__(self, other):
        try:
            return cmp(self.created, other.created)
        except AttributeError:
            return cmp(self.created, other)
```
The answer to your question is varying degrees of yes, with some manual requirements. If by `list` you mean a `queryset` that has been formed by some complicated query, then, sure:

```
queryset.order_by(*ClassName.Meta.ordering)
```

or

```
queryset.order_by(*instance._meta.ordering)
```

(`ordering` is a tuple, so it needs to be unpacked into separate arguments), or

```
queryset.order_by("fieldname") #If you like being manual
```

If you're not working with a queryset, then of course you can still sort, the same way anyone sorts complex objects in python:

* Comparators
* Specifying keys
* Decorate/Sort/Undecorate

[See the python wiki for a detailed explanation of all three.](http://wiki.python.org/moin/HowTo/Sorting)
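For the non-queryset case, a framework-free sketch of the key-based approach (plain Python; the `Dated` class below is a stand-in, not a real Django model): given a Meta-style `ordering` tuple, apply one stable sort per field, right to left, honouring `-` prefixes for descending order.

```python
from operator import attrgetter

def sort_by_ordering(instances, ordering):
    """Sort model-like objects by an ordering tuple such as
    ('-created', 'name'), mimicking a queryset's default order."""
    result = list(instances)
    # Sort by each field right-to-left so the first field ends up
    # primary; Python's sort is stable, so earlier passes survive ties.
    for field in reversed(ordering):
        reverse = field.startswith("-")
        result.sort(key=attrgetter(field.lstrip("-")), reverse=reverse)
    return result

class Dated:  # hypothetical model stand-in
    def __init__(self, created, name):
        self.created, self.name = created, name

items = [Dated(2, "b"), Dated(1, "a"), Dated(2, "a")]
ordered = sort_by_ordering(items, ("-created", "name"))
# newest first; ties on `created` broken by name ascending
```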
How do you order lists in the same way QuerySets are ordered in Django?
[ "", "python", "django", "django-models", "" ]
I have a C++ Windows application that leaks memory per transaction. Using perfmon I can see the private bytes increase with every transaction; the memory usage is flat while the application is idle. Following previous answers on Stack Overflow I used umdh from the Microsoft debugging tools to track down one memory leak. However there are still more leaks, and the results of umdh don't match up with my perfmon results. First, umdh still reports this leak; the stack trace is:

```
+   36192 ( 2082056 - 2045864)    251 allocs    BackTraceCB
+       4 (   251 -   247)    BackTraceCB allocations

    ntdll!RtlAllocateHeapSlowly+00000041
    ntdll!RtlAllocateHeap+00000E9F
    MSVCR80!malloc+0000007A
```

This is no use, as the first call is malloc; it doesn't say what called it. I have my doubts about this leak, as it is reported both when the application is processing transactions and when it is idle. But I can clearly see that no memory is leaking when it is idle. And the memory leaks reported when processing transactions are not proportional to the transactions processed, as perfmon reports. umdh does not show any other leaks, although I know there is at least one more not shown. I have just learned from searching the net that a Windows application can have multiple heaps.

* Could it be that umdh only reports memory usage from one of these heaps? e.g. the default or CRT heap?
* How can I track memory usage in other heaps?
* And how do I find out what dlls / modules are using the other heaps?

Any pointers to tracking down this problem would be gratefully received, as I am running out of options.
Sorry to answer my own question, but I finally tracked the issue down to how I used Orbix. It seems that the Orbix libraries use their own heap on the Windows platform. This means that most memory leak detection does not work for leaks in Orbix; I tried BoundsChecker and umdh.exe. To isolate this issue I found some code that would dump the memory of each heap in the application: <http://www.abstraction.net/content/articles/analyzing%20the%20heaps%20of%20a%20win32%20process.htm> I used this to dump the heap usage before and after each transaction, then after every 500 transactions; this indicated that the same heap was growing each time. Then I listed the address of each entry in this heap. Examining the memory in these areas I found that they contained Orbix marshalling data. With this information I finally found some object references that were not being cleaned up.
For me, on occasions where umdh failed, another free MS tool called [LeakDiag](http://thetweaker.wordpress.com/2009/04/09/native-memory-leaks-part-1-leakdiag/) succeeded. It allows interception of far more allocator types than umdh, including what it calls an 'MPHeap allocator', which [I suspect](http://msdn.microsoft.com/en-us/library/aa230637.aspx) might have been of use to you. If you've got a spare minute, I'm curious whether that might indeed have helped.
How to track memory leaks with umdh.exe in all heaps?
[ "c++", "memory-leaks" ]
table1 has 3 columns:

```
Id  UserName  SubmittedDate
1   Joe       1/1/2006
2   Joe       1/1/2007
3   Nat       1/1/2008
4   Pat       1/1/2009
```

I want to return this:

```
Id  UserName
2   Joe
3   Nat
4   Pat
```

I just want one record for Joe, the most recent one. How do I write this query? Thanks
``` SELECT MAX(ID), UserName FROM table GROUP BY UserName ``` Note this assumes that higher ID means later. It doesn't query directly on the Submitted Date field. For that, use Quassnoi's much more complex one :)
``` SELECT Id, UserName, SubmittedDate FROM ( SELECT Id, UserName, SubmittedDate, ROW_NUMBER() OVER (PARTITION BY UserName ORDER BY SubmittedDate DESC) rn FROM table1 ) q WHERE rn = 1 ```
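As a quick sanity check of the GROUP BY approach, here is a sketch that rebuilds the question's table in an in-memory SQLite database using Python's sqlite3 module (the dates are rewritten in ISO form purely for illustration; the query itself relies only on Id):

```python
import sqlite3

# Rebuild table1 from the question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (Id INTEGER, UserName TEXT, SubmittedDate TEXT)")
conn.executemany(
    "INSERT INTO table1 VALUES (?, ?, ?)",
    [(1, "Joe", "2006-01-01"), (2, "Joe", "2007-01-01"),
     (3, "Nat", "2008-01-01"), (4, "Pat", "2009-01-01")],
)

# The GROUP BY approach: assumes a higher Id always means a later submission.
rows = conn.execute(
    "SELECT MAX(Id) AS Id, UserName FROM table1 "
    "GROUP BY UserName ORDER BY UserName"
).fetchall()
print(rows)  # [(2, 'Joe'), (3, 'Nat'), (4, 'Pat')]
```

The ROW_NUMBER() version gives the same result here, and keeps working even when Id order and SubmittedDate order disagree.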
How to write this SQL Statement
[ "sql", "sql-server" ]
I was trying to "measure" stack depth. Why doesn't the following program print anything?

```
class Program
{
    private static int Depth = 0;

    static void A(object o)
    {
        Depth++;
        A(o);
    }

    static void B(object o, bool e)
    {
        Console.WriteLine(Depth);
    }

    static void Main(string[] args)
    {
        RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(A, B, null);
    }
}
```

Some answers simply include a quote from MSDN, like "Starting with the .NET Framework version 2.0, a StackOverflowException object cannot be caught by a try-catch block and the corresponding process is terminated by default." Believe me, sometimes (when there is enough stack space) it can be caught; the following prints some number just fine:

```
class Program
{
    private static int depth = 0;

    static void A(object o)
    {
        depth++;
        if (Environment.StackTrace.Length > 8000)
            throw new StackOverflowException("Catch me if you can.");
        A(o);
    }

    static void B(object o, bool e)
    {
        Console.WriteLine(depth);
    }

    static void Main(string[] args)
    {
        RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(A, B, null);
    }
}
```
If you want to catch it, load it into another process (that calls-back to yours via remoting) and lets the miscreant code execute there. The other process may terminate, and you *could* get a neat SOE popping out the end of the pipe on your side - without the adverse effects of the rather inconvenient exception. Note that a separate AppDomain in the same process won't cut it. If you want to get the stack trace from an exception the following code will do you great justice: ``` class Program { static void Main(string[] args) { try { Recurse(0); } catch (Exception ex) { StackTrace st = new StackTrace(ex); // Go wild. Console.WriteLine(st.FrameCount); } Console.ReadLine(); } static void Recurse(int counter) { if (counter >= 100) throw new Exception(); Recurse(++counter); } } ```
Regarding your edit, I don't think user code throwing a StackOverflowException is the same as the CLR throwing it. There's a bit of discussion about it [here](http://bytes.com/groups/net-c/224083-help-me-understand-stackoverflowexception). > Jeffrey's (Richter, author of [Applied Microsoft® .NET Framework Programming](https://rads.stackoverflow.com/amzn/click/com/0735614229) ) comment applies to real stack overflows, i.e. to stack overflows > that occur if your code contains an infinite recursion, something like: > > void MyMethod() { MyMethod(); } > > If you throw the > StackOverflowException yourself, it > will be handled like any other > exception, and Jeffrey's comment does > not apply. > > Also, Jeffrey's comment says: "if the > stack overflow occurs WITHIN THE CLR > ITSELF ...". So, if the .NET VM can > detect the stack overflow "cleanly", > i.e. without running ITSELF into a > stack overflow, then you should get a > StackOverflowException and your catch > and finally blocks should execute as > usual. But, in the tragic case where > the VM ITSELF runs into a stack > overflow, you won't be as lucky: the > VM will not propagate a > StackOverflowException (but crash in > some other weird way) and your catch > and finally blocks won't execute. > > Morale is: be careful with infinite > recursion because you don't have a > 100% guarantee that the VM will detect > and signal them cleanly! > > Bruno. Clarified who "Jeffrey" was since the OP was citing that book.
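As an aside, the contrast with CPython is instructive: there, exceeding the recursion limit raises an ordinary, catchable RecursionError long before the native stack is exhausted, so the same "measure the depth" experiment is uneventful. A sketch (function names are made up):

```python
import sys

def measure_depth():
    """Count how many nested calls succeed before RecursionError."""
    depth = 0

    def recurse():
        nonlocal depth
        depth += 1
        recurse()

    try:
        recurse()
    except RecursionError:  # catchable, unlike the CLR's StackOverflowException
        pass
    return depth

sys.setrecursionlimit(500)  # keep the experiment small
print(measure_depth())      # a bit under 500: frames already on the stack count too
```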
Why ExecuteCodeWithGuaranteedCleanup doesn't work?
[ "c#" ]
Is it possible to have something like: ``` list1 = ... currentValue = 0 list2 = [currentValue += i, i for i in list1] ``` I tried that, but it didn't work. What's the proper syntax to write this? EDIT: the print statement was an example. Actually I am incrementing a value outside the loop.
Statements *cannot* go inside of expressions in Python; it was a complication that was deliberately designed out of the language. For this problem, try using a complication that **did** make it into the language: generators. Watch: ``` def total_and_item(sequence): total = 0 for i in sequence: total += i yield (total, i) list2 = list(total_and_item(list1)) ``` The generator keeps a running tally of the items seen so far, and prefixes it to each item, just like it looks like you example tries to do. Of course, a straightforward loop might be even simpler, that creates an empty list at the top and just calls append() a lot! :-)
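If the goal really is just running totals paired with the items, note that the pattern is also available off the shelf: itertools.accumulate produces the running sums, and zip pairs them back up with the items (shown here with a made-up input list standing in for the question's list1):

```python
from itertools import accumulate

list1 = [3, 1, 4, 1, 5]  # example input
list2 = list(zip(accumulate(list1), list1))
print(list2)  # [(3, 3), (4, 1), (8, 4), (9, 1), (14, 5)]
```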
I'm not quite sure what you're trying to do, but it's probably something like:

```
list2 = [(i, i*2, i) for i in list1]
print list2
```

The body of a list comprehension has to be a single expression, but you could always make it a function call:

```
def foo(i):
    print i
    print i * 2
    return i

list2 = [foo(i) for i in list1]
```
Multiple statements in list comprehensions in Python?
[ "python", "list-comprehension" ]
I have a list box where I want to limit the number of selected items to MaxSelection. The desired behaviour is that once MaxSelection items are selected, any further selections are ignored. (Thus this question is different from "[limit selections in a listbox in vb.net](https://stackoverflow.com/questions/541883/limit-selections-in-a-listbox-in-vb-net)".) I have an event handler for the SelectedIndexChanged event of the list box that attempts to accomplish this. If the user uses Ctrl-click to select the (MaxSelection+1)th item, the selection is reverted to the previous selection. The problem is when the user selects an item and then Shift-clicks an item that is MaxSelection+1 items further down the list. In this case, more than one SelectedIndexChanged event is raised: one for the Shift-click, which selects the item that was Shift-clicked, and one to select all the items between the original selection and the Shift-clicked selection. The first of these events allows the user to select the Shift-clicked item (which is technically correct); then the second event reverts the selection to the selection as it was after the first event (which will be the originally selected item and the Shift-clicked item). What is desired is that the code would revert the selection to the selection before the first event (which is only the originally selected item). Is there any way to retain the selection before the Shift-click?
Thanks, Rob. Here's the SelectedIndexChanged event handler:

```
void ChildSelectionChanged(object sender, EventArgs e)
{
    ListBox listBox = sender as ListBox;

    //If the number of selected items is greater than the number the user is allowed to select
    if ((this.MaxSelection != null) && (listBox.SelectedItems.Count > this.MaxSelection))
    {
        //Prevent this method from running while reverting the selection
        listBox.SelectedIndexChanged -= ChildSelectionChanged;

        //Revert the selection to the previous selection
        try
        {
            for (int index = 0; index < listBox.Items.Count; index++)
            {
                if (listBox.SelectedIndices.Contains(index) && !this.previousSelection.Contains(index))
                {
                    listBox.SetSelected(index, false);
                }
            }
        }
        finally
        {
            //Re-enable this method as an event handler for the selection change event
            listBox.SelectedIndexChanged += ChildSelectionChanged;
        }
    }
    else
    {
        //Store the current selection
        this.previousSelection.Clear();
        foreach (int selectedIndex in listBox.SelectedIndices)
        {
            this.previousSelection.Add(selectedIndex);
        }

        //Let any interested code know the selection has changed.
        //(We do not do this in the case where the selection would put
        //the selected count above max since we revert the selection;
        //there is no net effect in that case.)
        RaiseSelectionChangedEvent();
    }
}
```
Some 3rd-party components have cancelable events such as BeforeSelectedIndexChanged. But when using the MS default component, I think that your approach is basically what you need. You could also store the selection in other events (such as MouseDown or KeyDown) which are known to be triggered before a change.
Thanks to Lucero's insight that I could put the code to store the selection in another event, I was able to create a solution using MouseUp. As stated in the comments to Lucero's answer, MouseDown fires after the SelectedValueChanged event, so I had to use MouseUp instead. Here is the code:

```
/// <summary>
/// Handle the ListBox's SelectedValueChanged event, revert the selection if there are too many selected
/// </summary>
/// <param name="sender">the sending object</param>
/// <param name="e">the event args</param>
void ChildSelectionChanged(object sender, EventArgs e)
{
    ListBox listBox = sender as ListBox;

    //If the number of selected items is greater than the number the user is allowed to select
    if ((this.MaxSelection != null) && (listBox.SelectedItems.Count > this.MaxSelection))
    {
        //Prevent this method from running while reverting the selection
        listBox.SelectedIndexChanged -= ChildSelectionChanged;

        //Revert the selection to the previously stored selection
        try
        {
            for (int index = 0; index < listBox.Items.Count; index++)
            {
                if (listBox.SelectedIndices.Contains(index) && !this.previousSelection.Contains(index))
                {
                    listBox.SetSelected(index, false);
                }
            }
        }
        catch (ArgumentOutOfRangeException) { }
        catch (InvalidOperationException) { }
        finally
        {
            //Re-enable this method as an event handler for the selection change event
            listBox.SelectedIndexChanged += ChildSelectionChanged;
        }
    }
    else
    {
        RaiseSelectionChangedEvent();
    }
}

/// <summary>
/// Handle the ListBox's MouseUp event, store the selection state.
/// </summary>
/// <param name="sender">the sending object</param>
/// <param name="e">the event args</param>
/// <remarks>This method saves the state of selection of the list box into a class member. 
/// This is used by the SelectedValueChanged handler such that when the user selects more /// items than they are allowed to, it will revert the selection to the state saved here /// in this MouseUp handler, which is the state of the selection at the end of the previous /// mouse click. /// We have to use the MouseUp event since: /// a) the SelectedValueChanged event is called multiple times when a Shift-click is made; /// the first time it fires the item that was Shift-clicked is selected, the next time it /// fires, the rest of the items intended by the Shift-click are selected. Thus using the /// SelectedValueChanged handler to store the selection state would fail in the following /// scenario: /// i) the user is allowed to select 2 items max /// ii) the user clicks Line1 /// iii) the SelectedValueChanged fires, the max has not been exceeded, selection stored /// let's call it Selection_A which contains Line1 /// iii) the user Shift-clicks and item 2 lines down from the first selection called Line3 /// iv) the SelectedValueChanged fires, the selection shows that only Line1 and Line3 are /// selected, hence the max has not been exceeded, selection stored let's call it /// Selection_B which contains Line1, Line3 /// v) the SelectedValueChanged fires again, this time Line1, Line2, and Line3 are selected, /// hence the max has been exceeded so we revert to the previously stored selection /// which is Selection_B, what we wanted was to revert to Selection_A /// b) the MouseDown event fires after the first SelectedValueChanged event, hence saving the /// state in MouseDown also stores the state at the wrong time.</remarks> private void valuesListBox_MouseUp(object sender, MouseEventArgs e) { if (this.MaxSelection == null) { return; } ListBox listBox = sender as ListBox; //Store the current selection this.previousSelection.Clear(); foreach (int selectedIndex in listBox.SelectedIndices) { this.previousSelection.Add(selectedIndex); } } ```
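Stripped of the WinForms event plumbing, the revert logic in both handlers reduces to a small, testable rule: snapshot the last valid selection, and when a change pushes the count over the limit, keep only the indices that survive from that snapshot. A language-neutral sketch in Python (names are invented):

```python
def constrain_selection(previous, current, max_selection):
    """Return the selection to keep: `current` if it fits within the
    limit, otherwise only the indices of `current` that were already
    in `previous` (i.e. revert the newly added ones)."""
    if len(current) <= max_selection:
        return set(current)
    prev = set(previous)
    return {i for i in current if i in prev}

prev = {0}                                 # user had item 0 selected
cur = {0, 1, 2}                            # Shift-click selected items 0..2
print(constrain_selection(prev, cur, 2))   # {0}
```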
If more than X items are selected in a ListBox, revert to the previous selection
[ "c#", "listbox", "multi-select" ]
I'm writing some C++ code that manipulates a bunch of vectors that are changing in size and are thus being reallocated constantly. I would like to get a "pointer" into these vectors that remains valid even after reallocation of the vector. More specifically, I just want these "pointers" to remember which vector they point into and the index to which they point. When I dereference them using the standard (*ptr) syntax, I just want them to do the obvious lookup. Obviously, actual pointers will not be valid after reallocation, and my understanding is that iterators aren't valid after reallocation either. Note also that I don't care if elements are inserted before my objects, so these "pointers" really don't have to remember anything but a vector and an index. Now, I could easily write such a class myself. Has anyone (Boost? STL?) done it for me already? **Edit**: The answers don't address my question. *I asked whether this functionality exists in any standard library.* I take the responses as a "no"?
Try a `std::pair<vector*, int>`, as neither the position of the vector nor the index of the element changes. Or, as a class:

```
#include <vector>
using std::vector;

template<class T>
class VectorElementPointer
{
    vector<T>& vectorref;
    typename vector<T>::size_type index;
public:
    VectorElementPointer(vector<T>& vref, typename vector<T>::size_type index)
        : vectorref(vref), index(index) {}
    T& operator*() const { return vectorref[index]; }
    T* operator->() const { return &vectorref[index]; }
};
```

This is the easiest solution that comes to my mind, as neither the STL nor Boost contains anything to do it more easily.
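For what it's worth, the same (container, index) trick is language-agnostic; here is the equivalent idea as a quick Python sketch (class name invented), where the handle survives the list growing underneath it:

```python
class ElementHandle:
    """Remembers a container and an index rather than a raw reference,
    so it stays valid even if the container grows (reallocates)."""
    def __init__(self, container, index):
        self._container = container
        self._index = index

    def get(self):
        return self._container[self._index]

    def set(self, value):
        self._container[self._index] = value

v = [10, 20, 30]
h = ElementHandle(v, 1)
v.extend(range(1000))   # force growth; in C++ this could reallocate the buffer
h.set(99)
print(h.get())  # 99
```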
[An article on persistent iterators](http://www.ddj.com/cpp/184403596), complete with [implementation](http://www.ddj.com/showArticle.jhtml?documentID=cuj9901getov&pgno=2).
In C++, how can I get a pointer into a vector?
[ "c++", "pointers", "vector" ]
I'm trying to detect when an iframe and its content have loaded, but I'm not having much luck. My application takes some input in text fields in the parent window and updates the iframe to provide a 'live preview'. I started with the following code (YUI) to detect when the iframe load event occurs:

```
$E.on('preview-pane', 'load', function() {
    previewBody = $('preview-pane').contentWindow.document.getElementsByTagName('body')[0];
});
```

'preview-pane' is the ID of my iframe and I'm using YUI to attach the event handler. However, trying to access the body in my callback (upon iframe load) fails, I think because the iframe loads before the event handler is ready. This code works if I delay the iframe loading by making the PHP script that generates it sleep. Basically, I'm asking: what is the correct approach across browsers to detect when the iframe has loaded and its document is ready?
> to detect when the iframe has loaded and its document is ready? It's ideal if you can get the iframe to tell you itself from a script inside the frame. For example it could call a parent function directly to tell it it's ready. Care is always required with cross-frame code execution as things can happen in an order you don't expect. Another alternative is to set ‘var isready= true;’ in its own scope, and have the parent script sniff for ‘contentWindow.isready’ (and add the onload handler if not). If for some reason it's not practical to have the iframe document co-operate, you've got the traditional load-race problem, namely that even if the elements are right next to each other: ``` <img id="x" ... /> <script type="text/javascript"> document.getElementById('x').onload= function() { ... }; </script> ``` there is no guarantee that the item won't already have loaded by the time the script executes. The ways out of load-races are: 1. on IE, you can use the ‘readyState’ property to see if something's already loaded; 2. if having the item available only with JavaScript enabled is acceptable, you can create it dynamically, setting the ‘onload’ event function before setting source and appending to the page. In this case it cannot be loaded before the callback is set; 3. the old-school way of including it in the markup: `<img onload="callback(this)" ... />` Inline ‘onsomething’ handlers in HTML are almost always the wrong thing and to be avoided, but in this case sometimes it's the least bad option.
See [this](http://sonspring.com/journal/jquery-iframe-sizing) blog post. It uses jQuery, but it should help you even if you are not using it. Basically you add this to your `document.ready()` ``` $('iframe').load(function() { RunAfterIFrameLoaded(); }); ```
Detecting when Iframe content has loaded (Cross browser)
[ "javascript", "events", "cross-browser" ]
I have a manager that holds connections to the server. I keep the connections alive and I want my threads to request connections as needed. My question is: how do I track objects automatically? I would like it to work similarly to a scoped pointer: I request a connection, and when my object goes out of scope it tells the manager it is no longer in use. I won't be passing it around as a pointer. I'll be doing something like

```
{
    Obj = Man.GetObj();
    //some loop
    Obj.DoSomething()
}
//auto tell man that obj is no longer in use
```
You could create your own wrapper object that implements `IDisposable`. In the `Dispose()` method, signal the manager that you're no longer using the object. You can then write your statement like...

```
using (Obj obj = Man.GetObj())
{
    obj.DoSomething();
}
```

The `using` block automatically calls the `Dispose()` method at the close of the scope.
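The same shape exists in other languages, which can make the mechanics easier to see; in Python, for instance, `with`/`__exit__` play the roles of `using`/`Dispose()`. A sketch with invented names:

```python
class PooledConnection:
    def __init__(self, manager):
        self._manager = manager

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self._manager.release(self)  # tell the manager we're done
        return False                 # don't swallow exceptions

class Manager:
    def __init__(self):
        self.in_use = 0

    def get_obj(self):
        self.in_use += 1
        return PooledConnection(self)

    def release(self, conn):
        self.in_use -= 1

man = Manager()
with man.get_obj() as obj:
    assert man.in_use == 1   # held while in scope
print(man.in_use)  # 0, released automatically at the end of the block
```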
Look into the using statement. [MSDN](http://msdn.microsoft.com/en-us/library/yh598w02(VS.80).aspx)
How should I track objects in use in C#?
[ "c#" ]
C# 2005. I am using an installer class, as I need a custom action that will read in an XML config file. So when the user double-clicks the setup, there will be a config.xml file in the same folder. I would like to read in this config.xml file in the install event. The folder will contain setup.exe and config.xml. However, it can't find the config.xml file, as it is looking for it under C:\windows\system32. I found this out by using Application.StartupPath. However, the setup folder that contains the setup and config.xml is located on my desktop. I thought it would look for it in the current location, which would be the setup folder. The difficulty is that my setup folder will be downloaded from the Internet, and any user could download it and unzip it anywhere on their computer. Here is my code snippet for the install event. Many thanks for any advice.

```
protected override void OnAfterInstall(System.Collections.IDictionary savedState)
{
    DataTable dt = new DataTable();
    MessageBox.Show(Application.StartupPath.ToString());
    dt.ReadXml("config.xml");
    MessageBox.Show(base.Context.Parameters["CAT_TargetDir"].ToString());
}
```
I managed to solve this. The answer was just to write a simple bat file that does all the work for me. The bat file then calls the setup and installs the application. Thanks,
Assuming you're talking about an MSI, isn't that the [OriginalDatabase](http://msdn.microsoft.com/en-us/library/aa370562(VS.85).aspx) property?
Getting location of setup.exe in custom action
[ "c#", "installation", "custom-action" ]