I've written a simple entry form that compiles information into a database, and I'm having trouble with my required fields... I use a form to process the data via the POST method. If the user neglects to fill in a required field, I would like to bring them back to the original form with their previous fields already entered. How do I pass info back into an input field? Is there an easy way to do this without any crazy scripts? I started with sessions, but I realized I have no clue how to put the stored info from the session back into the input field so the user doesn't have to retype all of their info... Also, would a cookie be better for this than a session? Thanks guys, Arthur
[jQuery Validation Plug-in](http://bassistance.de/jquery-plugins/jquery-plugin-validation/)
When you post a form, all those variables are submitted into a global array named $\_POST['input\_name'] and are available on the page posted to. A lot of the time, if I'm doing it fairly quickly, I just make the value of those input fields equal what was posted. For example, let's say we have a required username field but the form didn't validate for some reason and posted back to itself; we don't want the user to have to enter it again: ``` <input type="text" name="username" value="<?php echo isset($_POST['username']) ? htmlspecialchars($_POST['username']) : ''; ?>" /> ``` (The isset() check avoids an undefined-index notice on first load, and htmlspecialchars() stops the echoed value from being interpreted as HTML.) Of course, when they first load the page the value will be empty, but if for some reason the form posts back, that "username" field will already contain the entered information. Even better is JavaScript validation, as the form doesn't have to post back, but this will do the job just fine!
Passing submitted form info back into input field?
[ "php", "html", "cookies", "session" ]
I'm writing a program in C# for Windows 7 that works fine... But now I've built a setup that copies the program files into "C:\Program Files". There are a lot of problems when the program is in that folder: 1) If I cancel an OpenFileDialog I get an exception. 2) My program doesn't write files into the AppData folder anymore. 3) The program can't open internal files in its own directory because of permissions. I don't know what I can do... Can someone help me? **EDIT:** Maybe you didn't understand my problem. I wrote a program that works fine in C:\myprogram. I made an installer that copies the files into the C:\Program Files directory; it's the same when I copy the files into that directory manually. * My program **only opens** files in its **own directory** * My program **opens and writes** files in the **AppData folder** * My program can open files like .txt in a RichTextBox; that's where the OpenFileDialog is used. These 3 points don't work! If my program is in the Program Files folder it can't open a file like C:\Program Files\myprogram\xsl\test.xsl and can't write a file into the AppData folder. If I install my program into C:\lalala it works!
OK - I found the answer: I have to use ``` FileStream fs = File.OpenRead(tmpfile) ``` instead of ``` FileStream fs = new FileStream(tmpfile, FileMode.Open); ``` File.OpenRead() opens the file read-only, whereas the FileStream constructor with just FileMode.Open defaults to FileAccess.ReadWrite - and requesting write access under C:\Program Files fails for a non-elevated process.
To copy files into Program Files or any privileged location, the process must run elevated as an administrator. Since you are talking about "copying" files and an "OpenFileDialog", it sounds like you are running a .NET process to do the copying rather than a Windows Installer. Usually this should be done by an installer rather than your app. Otherwise, your app needs to set requireAdministrator in its manifest, or elevate just for that particular action. For more info, you should read up on UAC. As a start, I suggest you read [UAC: The Definitive Guide](http://www.codeproject.com/KB/vista-security/UAC__The_Definitive_Guide.aspx) on CodeProject.
C# Win7: UnauthorizedAccessException
[ "c#", "windows-7" ]
I've got two tables and I've added a foreign key constraint. Kewl - works great. Now, is it possible to further constrain that relationship against some data in the parent table? Basically, I have animals in the parent table, and I wish the child table to only contain data where the parent rows are... um... mammals. e.g. ``` Animals ^^^^^^^ AnimalId INT PK NOT NULL IDENTITY AnimalType TINYINT NOT NULL -- 1: Mammal, 2: Reptile, etc. Name Mammals ^^^^^^^ AnimalId INT PK FK NOT NULL NumberOfMammaryGlands TINYINT NOT NULL ``` So, I wish to make sure that the AnimalId can only be of type Animals.AnimalType = 1. Is this possible?? I don't want to allow someone to insert info against a reptile in the child table... Cheers :) ### Edit: I thought I had to use a [Check Constraint](http://msdn.microsoft.com/en-us/library/ms188258.aspx) (confirmed below by my first two answers - cheers!), but I wasn't sure how (e.g. the SQL syntax to refer to the Animals table). ### Update: Alex has a very good post (below) that benchmarks some of the suggestions... a very good read!
I ran a small benchmark - in this case the approach with a UDF runs almost 100 times slower. ### The overhead of an FK in CPU time = 375 ms - 297 ms = 78 ms ### The overhead of an UDF in CPU time = 7750 ms - 297 ms = 7453 ms Here's the Sql code... -- set up an auxiliary table Numbers with 128K rows: ``` CREATE TABLE dbo.Numbers(n INT NOT NULL PRIMARY KEY) GO DECLARE @i INT; SET @i = 1; INSERT INTO dbo.Numbers(n) SELECT 1; WHILE @i<128000 BEGIN INSERT INTO dbo.Numbers(n) SELECT n + @i FROM dbo.Numbers; SET @i = @i * 2; END; GO ``` -- the tables ``` CREATE TABLE dbo.Animals (AnimalId INT NOT NULL IDENTITY PRIMARY KEY, AnimalType TINYINT NOT NULL, -- 1: Mammal, 2:Reptile, etc.. Name VARCHAR(30)) GO ALTER TABLE dbo.Animals ADD CONSTRAINT UNQ_Animals UNIQUE(AnimalId, AnimalType) GO CREATE FUNCTION dbo.GetAnimalType(@AnimalId INT) RETURNS TINYINT AS BEGIN DECLARE @ret TINYINT; SELECT @ret = AnimalType FROM dbo.Animals WHERE AnimalId = @AnimalId; RETURN @ret; END GO CREATE TABLE dbo.Mammals (AnimalId INT NOT NULL PRIMARY KEY, SomeOtherStuff VARCHAR(10), CONSTRAINT Chk_AnimalType_Mammal CHECK(dbo.GetAnimalType(AnimalId)=1) ); GO ``` --- populating with UDF: ``` INSERT INTO dbo.Animals (AnimalType, Name) SELECT 1, 'some name' FROM dbo.Numbers; GO SET STATISTICS IO ON SET STATISTICS TIME ON GO INSERT INTO dbo.Mammals (AnimalId,SomeOtherStuff) SELECT n, 'some info' FROM dbo.Numbers; ``` results are: ``` SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 2 ms. Table 'Mammals'. Scan count 0, logical reads 272135, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'Numbers'. Scan count 1, logical reads 441, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 7750 ms, elapsed time = 7830 ms. 
(131072 row(s) affected) ``` --- populating with FK: ``` CREATE TABLE dbo.Mammals2 (AnimalId INT NOT NULL PRIMARY KEY, AnimalType TINYINT NOT NULL, SomeOtherStuff VARCHAR(10), CONSTRAINT Chk_Mammals2_AnimalType_Mammal CHECK(AnimalType=1), CONSTRAINT FK_Mammals_Animals FOREIGN KEY(AnimalId, AnimalType) REFERENCES dbo.Animals(AnimalId, AnimalType) ); INSERT INTO dbo.Mammals2 (AnimalId,AnimalType,SomeOtherStuff) SELECT n, 1, 'some info' FROM dbo.Numbers; ``` results are: ``` SQL Server parse and compile time: CPU time = 93 ms, elapsed time = 100 ms. Table 'Animals'. Scan count 1, logical reads 132, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'Mammals2'. Scan count 0, logical reads 275381, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'Numbers'. Scan count 1, logical reads 441, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 375 ms, elapsed time = 383 ms. ``` -- populating without any integrity: ``` CREATE TABLE dbo.Mammals3 (AnimalId INT NOT NULL PRIMARY KEY, SomeOtherStuff VARCHAR(10) ); INSERT INTO dbo.Mammals3 (AnimalId,SomeOtherStuff) SELECT n, 'some info' FROM dbo.Numbers; ``` results are: SQL Server parse and compile time: CPU time = 1 ms, elapsed time = 1 ms. ``` SQL Server Execution Times: CPU time = 0 ms, elapsed time = 66 ms. Table 'Mammals3'. Scan count 0, logical reads 272135, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'Numbers'. Scan count 1, logical reads 441, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 297 ms, elapsed time = 303 ms. 
(131072 row(s) affected) ```
1. Add a unique constraint on Animals(AnimalId, AnimalType). 2. Add AnimalType to Mammals, with a check constraint ensuring it is always 1. 3. Have the FK refer to (AnimalId, AnimalType).
Is it possible to add a logic Constraint to a Foreign Key?
[ "sql", "t-sql", "foreign-keys", "constraints" ]
I'm building an application (a side project which is likely to enlist the help of the stackoverflow community on more than one occasion) which will need to open a variety of file types (i.e. open Word documents in Word, not natively in my application). I've been playing with some code for looking up the default application for the file type in the registry and passing this to Process.Start(). There seem to be two issues with this approach: 1) The application name is quoted in some instances, and not in others. 2) Process.Start() requires that the application path and its arguments are passed separately (i.e. Process.Start("notepad.exe", @"C:\myfile.txt"); rather than Process.Start(@"notepad.exe C:\myfile.txt");). This means when I retrieve the path from the registry, I have to split it (after determining whether to split on quotes or spaces) to work out which part is the application path and which parts are arguments, then pass those separately to Process.Start(). The alternative seems to be to just pass the filename, as in Process.Start(@"C:\myfile.txt"), but I think this only works if the application is in the Path environment variable. Which way is better? In the case of the registry, is there a common solution for the argument parsing? Thanks for any and all help! **Update:** I guess the short answer is 'No.' It seems like I was really going the overkill route, and passing just the filename will work whenever there's an associated value in the registry. I.e. anything I find in the registry myself, Process.Start() already knows how to do. I did discover that when I try this with a "new" filetype, I get a Win32Exception stating "No application is associated with the specified file for this operation." Fredrik Mörk mentions in a comment that this doesn't occur for him in Vista. What's the proper way to handle this?
If the extension is registered to be opened with a certain application, it doesn't need to be in the PATH in order to run.
The application does not need to be in the PATH if you only specify the filename. The following code worked fine for me: ``` System.Diagnostics.Process.Start(@"C:\Users\Dan\Desktop\minors.pdf"); ```
Is it worth it to lookup the default application in the registry when opening a file from a C# application?
[ "c#", "registry", "file-type" ]
I am working on an application, and when I press a key on the keyboard I want to capture that key (or string), including the source application's name, using C#. I want to store keystrokes together with the source application: for example, if Notepad is open and I type "this is a pen" in it, I want to store that text in a file. I have a list view with 3 columns (application name, application path, window caption) showing which program is open. The code below works as a console application, but I want to do the same in a Windows Forms application. ```
using System;
using System.Diagnostics;
using System.Windows.Forms;
using System.Runtime.InteropServices;

class InterceptKeys
{
    private const int WH_KEYBOARD_LL = 13;
    private const int WM_KEYDOWN = 0x0100;
    private static LowLevelKeyboardProc _proc = HookCallback;
    private static IntPtr _hookID = IntPtr.Zero;

    public static void Main()
    {
        _hookID = SetHook(_proc);
        Application.Run();
        UnhookWindowsHookEx(_hookID);
    }

    private static IntPtr SetHook(LowLevelKeyboardProc proc)
    {
        using (Process curProcess = Process.GetCurrentProcess())
        using (ProcessModule curModule = curProcess.MainModule)
        {
            return SetWindowsHookEx(WH_KEYBOARD_LL, proc,
                GetModuleHandle(curModule.ModuleName), 0);
        }
    }

    private delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);
            Console.WriteLine((Keys)vkCode);
        }
        return CallNextHookEx(_hookID, nCode, wParam, lParam);
    }

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    private static extern IntPtr GetModuleHandle(string lpModuleName);
}
```
I don't think this is best done in C#, primarily because you will need to delve deeply into the Windows API, which defeats the basic premise of .NET being platform independent. As Anton already stated, you'll need to use Windows hooks and process the WH\_KEYBOARD hook type.
You probably want to look at [windows hooks](http://msdn.microsoft.com/en-us/library/ms997537.aspx). Implementing what you want isn't a trivial task, because in order to get all keystrokes from all windows in the system you must get inside the window message processing mechanism of the operating system. That said, I wouldn't suggest implementing a Windows hook that runs under the CLR. This could be disastrous for the whole OS if you make a mistake, and I'm not sure whether it is even possible. That means either digging into C/C++ to write a global hook DLL that intercepts all keystrokes and doing interop with that DLL, or finding a way to solve your problem without capturing keystrokes at all.
How can I capture keystrokes, including the source application's name, using C#?
[ "c#" ]
I am planning to sell some JavaScript code that I have written. Given the current state of browsers, it's quite possible to write complex code. I think I will face a couple of problems/have a couple of questions: 1. JavaScript, being client-side, can be easily **copied** as soon as I show someone a demo 2. Are there any companies selling JavaScript code? Not dual licensed like ExtJS. 3. Should I obfuscate my code? Should I hard-code the website on which it will run and pack it, etc.? 4. How do I go about this? Thank you for your time.
To be honest, with any software, no method of protection is 100% safe from misuse. Think, for example, of even large-scale commercial pieces of software, such as Photoshop, Windows or OS X. All have methods in place to try to prevent people from misusing or pirating them, and for the average user this is fine: it stops people from simply copying the software and distributing it illegally. However, if people *really* want to use software illegally, they'll find a way - they may reverse-engineer it, and then create keygens or remove the anti-piracy mechanisms completely, for example. Of course, being written in a scripting language, applications using JavaScript are more susceptible to misuse since, as you pointed out, the JavaScript runs client-side and anyone can view it quite easily. However, for many of your customers this shouldn't be a problem, and it is quite common nowadays for companies to sell software of this nature commercially, using JavaScript as the main method of implementation. There may be a few people who try to misuse your software, but as I pointed out above, this occurs in all walks of software development, and all you can do is your best to prevent it. As you suggested, JavaScript obfuscation is a good way to make the source less readable (with some limitations - for example, an obfuscator may strip some legitimate lines of code it believes to be unnecessary), but at the end of the day you just have to remember that *most* people are likely to use your software legitimately, and that over-protecting it will only annoy your legitimate users in order to deter a small minority who are likely to work around any mechanisms anyway.
Companies that normally sell JavaScript components are really selling support for those components. Guaranteed bug fixes, prompt response to questions, etc.
Selling JavaScript Code
[ "javascript" ]
I have the following code ``` var input = $('#el_id'); wrapper = $(document.createElement('div')); wrapper.addClass('small'); wrapper.css('width',200); input.wrap(wrapper); alert(input.children().length) ``` I get 0 for the children length. What I want is: ``` <div class="small" style="width: 200px;"> <input type="text" id="el_id" /> </div> ``` But I want the small div to be dynamically generated. Any idea why this isn't working? **UPDATE** It turns out my issue lies on another line of code. I want to assign the wrapper to a variable after I wrap and: ``` block = input.wrap("<div class='"+container_class+"' style='width: "+wrap_width+"px;'></div>"); ``` does not work. It returns the input which makes sense. But how can I return the wrapper? Thanks!
I believe you are doing it right. You just need to save the results of the wrap function to a variable: **Edit:** Updated the code to get parent(): ``` var wrapped = input.wrap(wrapper).parent(); alert(wrapped.children().length); ```
From the [documentation](http://docs.jquery.com/Manipulation/wrap#elem), it appears that wrap() only accepts a DOM element or a string. You are passing it a jQuery object, which may not work. I'd try this: ``` $('#el_id').wrap('<div class="small" style="width: 200px;"></div>'); ```
How do I wrap a page element with a dynamically generated element using jQuery?
[ "javascript", "jquery", "dom-manipulation" ]
I am trying to write a Python function which returns the same moon phase value as in the game NetHack. This is found in [hacklib.c](http://nethack.wikia.com/wiki/Source:Hacklib.c#phase_of_the_moon). I have tried to simply copy the corresponding function from the NetHack code but I don't believe I am getting the correct results. The function which I have written is `phase_of_the_moon()`. The functions `position()` and `phase()`, I found on the net, and I am using them as an indication of the success of my function. They are very accurate and give results which approximately match the nethack.alt.org server (see <http://alt.org/nethack/moon/pom.txt>). What I am after however is an exact replication of the original NetHack function, idiosyncrasies intact. I would expect my function and the 'control' function to give the same moon phase at least, but currently they do not and I'm not sure why! Here is the NetHack code: ``` /* * moon period = 29.53058 days ~= 30, year = 365.2422 days * days moon phase advances on first day of year compared to preceding year * = 365.2422 - 12*29.53058 ~= 11 * years in Metonic cycle (time until same phases fall on the same days of * the month) = 18.6 ~= 19 * moon phase on first day of year (epact) ~= (11*(year%19) + 29) % 30 * (29 as initial condition) * current phase in days = first day phase + days elapsed in year * 6 moons ~= 177 days * 177 ~= 8 reported phases * 22 * + 11/22 for rounding */ int phase_of_the_moon() /* 0-7, with 0: new, 4: full */ { register struct tm *lt = getlt(); register int epact, diy, goldn; diy = lt->tm_yday; goldn = (lt->tm_year % 19) + 1; epact = (11 * goldn + 18) % 30; if ((epact == 25 && goldn > 11) || epact == 24) epact++; return( (((((diy + epact) * 6) + 11) % 177) / 22) & 7 ); } ``` Here is the `getlt()` function (also in hacklib.c): ``` static struct tm * getlt() { time_t date; #if defined(BSD) && !defined(POSIX_TYPES) (void) time((long *)(&date)); #else (void) time(&date); #endif #if (defined(ULTRIX) && 
!(defined(ULTRIX_PROTO) || defined(NHSTDC))) || (defined(BSD) && !defined(POSIX_TYPES)) return(localtime((long *)(&date))); #else return(localtime(&date)); #endif } ``` Here is my Python code: ``` from datetime import date def phase_of_the_moon(): lt = date.today() diy = (lt - date(lt.year, 1, 1)).days goldn = ((lt.year - 1900) % 19) + 1 epact = (11 * goldn + 18) % 30; if ((epact == 25 and goldn > 11) or epact == 24): epact += 1 return ( (((((diy + epact) * 6) + 11) % 177) / 22) & 7 ) import math, decimal, datetime dec = decimal.Decimal def position(now=None): if now is None: now = datetime.datetime.now() diff = now - datetime.datetime(2001, 1, 1) days = dec(diff.days) + (dec(diff.seconds) / dec(86400)) lunations = dec("0.20439731") + (days * dec("0.03386319269")) return lunations % dec(1) def phase(pos): index = (pos * dec(8)) + dec("0.5") index = math.floor(index) return { 0: "New Moon", 1: "Waxing Crescent", 2: "First Quarter", 3: "Waxing Gibbous", 4: "Full Moon", 5: "Waning Gibbous", 6: "Last Quarter", 7: "Waning Crescent" }[int(index) & 7] def phase2(pos): return { 0: "New Moon", 1: "Waxing Crescent", 2: "First Quarter", 3: "Waxing Gibbous", 4: "Full Moon", 5: "Waning Gibbous", 6: "Last Quarter", 7: "Waning Crescent" }[int(pos)] def main(): ## Correct output pos = position() phasename = phase(pos) roundedpos = round(float(pos), 3) print "%s (%s)" % (phasename, roundedpos) ## My output print "%s (%s)" % (phase2(phase_of_the_moon()), phase_of_the_moon()) if __name__=="__main__": main() ```
The code as written is largely untestable - and you need to make it testable. So, you need the C code to be: ``` int phase_of_the_moon() /* 0-7, with 0: new, 4: full */ { register struct tm *lt = getlt(); return testable_potm(lt); } static int testable_potm(const struct tm *lt) { register int epact, diy, goldn; diy = lt->tm_yday; goldn = (lt->tm_year % 19) + 1; epact = (11 * goldn + 18) % 30; if ((epact == 25 && goldn > 11) || epact == 24) epact++; return( (((((diy + epact) * 6) + 11) % 177) / 22) & 7 ); } ``` Now you can run tests with multiple values of time. The alternative way to do this is to fake `getlt()` instead. You then need parallel changes in your Python code. Then you create a file of `time_t` values which can be read by both Python and C, and then converted into an appropriate structure (via `localtime()` in C). Then you can see where things are deviating.
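Following that advice, the parallel change on the Python side is to make the function take an explicit date instead of always calling `date.today()`. A minimal sketch (Python 3 syntax; the key points are that C's `tm_year` is years since 1900 and C's `tm_yday` is zero-based, whereas Python's `timetuple().tm_yday` is one-based):

```python
from datetime import date

def phase_of_the_moon(d=None):
    """Replicate NetHack's phase_of_the_moon(): 0 = new, 4 = full."""
    if d is None:
        d = date.today()
    diy = d.timetuple().tm_yday - 1      # C tm_yday is zero-based, Python's is one-based
    goldn = ((d.year - 1900) % 19) + 1   # C tm_year is years since 1900
    epact = (11 * goldn + 18) % 30
    if (epact == 25 and goldn > 11) or epact == 24:
        epact += 1
    return (((((diy + epact) * 6) + 11) % 177) // 22) & 7
```

For example, 2009-04-21 (a few days before a new moon) gives 7, "Waning Crescent", which agrees with the control functions in the question.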
**Edit:** Turns out both of the "problems" I spotted here were based on a misunderstanding of the `tm` struct. I'll leave the answer intact for the sake of the discussion in the comments, but save your votes for someone who might actually be correct. ;-) --- Caveat: I'm not terribly familiar with C time constructs; I'm mostly going off the field documentation supplied for `strftime`. I see two "bugs" in your port. First, I believe `tm_year` is intended to be the year without century, not the year minus 1900, so, `goldn` should be `((lt.year % 100) % 19) + 1`. Secondly, your calculation for `diy` is zero-based, whereas `tm_yday` appears (again, from the docs) to be one-based. However, I'm not certain about the latter, as fixing just the `goldn` line gives a correct result (at least for today), where as fixing both gives the wrong answer: ``` >>> def phase_of_the_moon(): lt = date.today() diy = (lt - date(lt.year, 1, 1)).days goldn = ((lt.year % 100) % 19) + 1 epact = (11 * goldn + 18) % 30 if ((epact == 25 and goldn > 11) or epact == 24): epact += 1 return ( (((((diy + epact) * 6) + 11) % 177) / 22) & 7 ) >>> phase_of_the_moon(): 3 ``` Again, this is mostly guesswork. Please be kind. :-)
How to port this NetHack function to Python?
[ "python", "c", "time", "porting", "nethack" ]
What is the most cross-platform way of removing bad path characters (e.g. "\" or ":" on Windows) in Python? ### Solution Because there seems to be no ideal solution, I decided to be relatively restrictive and used the following code: ``` def remove(value, deletechars): for c in deletechars: value = value.replace(c, '') return value print remove(filename, '\/:*?"<>|') ```
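In Python 3, the same character deletion can be written with `str.translate` instead of an explicit loop, using the same conservative blacklist as the solution above (the function name here is made up for illustration; as the answers below point out, no single blacklist is truly exhaustive across filesystems):

```python
BAD_CHARS = '\\/:*?"<>|'  # conservative Windows blacklist from the question
_DELETE_TABLE = str.maketrans('', '', BAD_CHARS)

def strip_bad_path_chars(filename):
    """Delete every character in BAD_CHARS from filename."""
    return filename.translate(_DELETE_TABLE)
```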
Unfortunately, the set of acceptable characters varies by OS *and* by filesystem. * [Windows](http://msdn.microsoft.com/en-us/library/aa365247.aspx): > + Use almost any character in the current code page for a name, including Unicode characters and characters in the extended character set (128–255), except for the following: > - The following reserved characters are not allowed: > < > : " / \ | ? \* > - Characters whose integer representations are in the range from zero through 31 are not allowed. > - Any other character that the target file system does not allow. The list of accepted characters can vary depending on the OS and locale of the machine that first formatted the filesystem. .NET has [GetInvalidFileNameChars](http://msdn.microsoft.com/en-us/library/system.io.path.getinvalidfilenamechars.aspx) and [GetInvalidPathChars](http://msdn.microsoft.com/en-us/library/system.io.path.getinvalidpathchars.aspx), but I don't know how to call those from Python. * Mac OS: NUL is always excluded, "/" is excluded from POSIX layer, ":" excluded from Apple APIs + HFS+: any sequence of non-excluded characters that is representable by UTF-16 in the Unicode 2.0 spec + HFS: any sequence of non-excluded characters representable in MacRoman (default) or other encodings, depending on the machine that created the filesystem + UFS: same as HFS+ * Linux: + native (UNIX-like) filesystems: any byte sequence excluding NUL and "/" + FAT, NTFS, other non-native filesystems: varies Your best bet is probably to either be overly-conservative on all platforms, or to just try creating the file name and handle errors.
I think the safest approach here is to just replace any suspicious characters. So, I think you can just replace (or get rid of) anything that isn't alphanumeric, -, \_, a space, or a period. And here's how you do that: ``` import re re.sub(r'[^\w_. -]', '_', filename) ``` The above replaces every character that's not a letter, digit, `'_'`, `'-'`, `'.'` or space with an `'_'`. So, if you're looking at an entire path, you'll want to add os.sep to the list of approved characters as well. Here's some sample output: ``` In [27]: re.sub(r'[^\w\-_\. ]', '_', r'some\*-file._n\\ame') Out[27]: 'some__-file._n__ame' ```
How to remove bad path characters in Python?
[ "python", "path", "illegal-characters" ]
We developed an application that uses the Excel interop libraries (Microsoft.Office.Interop.Excel) to read some Excel files. When a problem occurs in the application, the Application.ThreadException event is handled so the resources are released (Excel is closed, etc.). The problem is that when we use the VS debugger and stop execution (because the process breaks on an exception or a breakpoint - there are lots of reasons why we'd do that), the resources are not released and Excel stays open. And of course, next time the application is launched... it crashes because there are locks on the file. So I'm looking for a way to force the release of the Excel objects, even when stopped with the debugger. Any suggestions?
You can use the DTE (VisualStudio Automation Model) to write a macro that will be invoked when a stop debug happens, below is a snippet of the idea. ``` Private Sub DebuggerEvents_OnEnterBreakMode( ByVal Reason As EnvDTE.dbgEventReason, ByRef ExecutionAction As EnvDTE.dbgExecutionAction) Handles DebuggerEvents.OnEnterBreakMode If (Reason = dbgEventReason.dbgEventReasonStopDebugging) Then // DO YOUR CLEAN UP CODE HERE End If End Sub ```
Unfortunately there isn't a way to do this. The stop button in visual studio kills the process, so it doesn't have any chance to clean up. As a possible way round your problem (although not a very good one), you could write a cleanup routine and execute it manually from the immediate window before stopping the app. [Edit: Ignore me. This answer is wrong. Shay Erlichmen has come up with a much better solution using a macro]
Visual Studio : executing clean up code when debugging stops
[ "c#", "visual-studio", "debugging" ]
I am trying to save myself a bit of typing by writing the following code, but it seems I can't do this: ``` class lgrAdminObject(admin.ModelAdmin): fields = ["title","owner"] list_display = ["title","origin","approved", "sendToFrames"] class Photos(lgrAdminObject): fields.extend(["albums"]) ``` why doesn't that work? Also since they're not functions, I can't do the super trick ``` fields = super(Photos, self).fields fields.extend(["albums"]) ```
Inheritance applies *after* the class's body executes. In the class body, you can use `lgrAdminObject.fields` -- you sure you want to alter the superclass's attribute rather than making a copy of it first, though? Seems peculiar... I'd start with a copy: ``` class Photos(lgrAdminObject): fields = list(lgrAdminObject.fields) ``` before continuing with alterations.
Have you tried this? ``` fields = lgrAdminObject.fields + ["albums"] ``` You need to create a new class attribute, not extend the one from the parent class.
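Both suggestions can be checked with plain classes (a minimal sketch; `Base`, `PhotosA` and `PhotosB` are stand-ins for the admin classes, no Django required):

```python
class Base:
    fields = ["title", "owner"]

# Copy the parent's list first, then extend the copy:
class PhotosA(Base):
    fields = list(Base.fields)
    fields.extend(["albums"])

# Or build the new list in a single expression:
class PhotosB(Base):
    fields = Base.fields + ["albums"]
```

Either way the parent's `fields` list is left untouched, which is the pitfall of calling `extend` on the inherited list directly.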
Python class attribute inheritance
[ "python", "django", "class", "inheritance" ]
In the [current C++0x draft](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2857.pdf) I've noticed they introduced some new explicit keywords to highlight expected behaviors (great move!). Examples: [defaulted/deleted](http://en.wikipedia.org/wiki/C%2B%2B0x#Defaulting.2Fdeleting_of_standard_functions_on_C.2B.2B_objects) functions (*= default* and *= delete*), the new [nullptr](http://en.wikipedia.org/wiki/C%2B%2B0x#Null_pointer_constant) constant, the [explicit](http://en.wikipedia.org/wiki/C%2B%2B0x#Explicit_conversion_operators) keyword usable also for conversion operators, ... So I expected to see also a *= pure* syntax for pure virtual functions. Instead the ugly (IMHO, of course) *= 0* thing still exists. Ok, I can use a *#define pure 0* (and sometimes I do that), but I think coherency/consistency should be definitely a goal for a standard. Moreover I know it's just a sort of ultra-pedantic request, but *= 0* was indeed one of my least favorite part of C++ (euphemism)... My questions: * I know, the new standard is feature-complete, but is it still possible to ask for this small pedantic addition, even just as a "required macro" thing? * if the answer is positive, how? (any committee member around?) * am I just a bit too pedantic (or wrong) for asking this addition? what do you think about the current syntax of pure virtual functions?
That's not a small pedantic change. Introducing a new keyword is one of the **biggest** changes you can ask for. It is something they try to avoid almost at any cost. Think of all the code that uses the word "pure", which would break. In general, their guideline is to only add things to the language that *could not be done before*. A `pure` keyword wouldn't enable anything new (unlike the nullptr keyword, which enables better type checking, for example), so expect it to have a very low priority. Keep in mind that anything they do is basically maintenance work. The #1 goal is to avoid breaking the language (or existing code that uses it). Any features that are added on are only added if it can be done without breaking backward compatibility. However, the committee is more or less an open forum. Browse around their [website](http://www.open-std.org/jtc1/sc22/wg21/docs/contacts), and you should be able to find a few email addresses. OR use the `comp.std.c++` newsgroup. I believe their meetings are open as well, so you could just gatecrash the next one. ;)
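For reference, the syntaxes under discussion can be put side by side (a minimal sketch; the `Shape`/`Square` hierarchy is purely hypothetical):

```cpp
#include <cassert>

// C++11 lets several "expected behaviours" be spelled out explicitly,
// but pure virtual functions keep the pre-C++11 "= 0" marker.
struct Shape {
    Shape() = default;               // C++11: explicitly defaulted
    Shape(const Shape&) = delete;    // C++11: explicitly deleted
    virtual ~Shape() = default;
    virtual double area() const = 0; // still "= 0", not "= pure"
};

struct Square : Shape {
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; } // C++11 override
    double side;
};
```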
The thing you hate *most* about C++ is "`= 0;`"??? Have you ever *used* this language? There's plenty of other things you could be better spending your hate on. `<Flame retardant>`I have used C++ for more than 10 years. For me it's still the language of choice whenever I need to do some computational heavy lifting.`</Flame retardant>`
How to ask for a small addition? (syntax of pure virtual functions)
[ "", "c++", "standards", "c++11", "" ]
The idea behind this code is that it uses an extension method to instantiate an instance of a class if that class is set to null. The code is simple enough, but does not work (it compiles and runs, but at the end the object reference is still `null`). Can anyone suggest why? The following code uses a simple class `SomeClass` that has a single `string` property.

```
class SomeClass
{
    public string SomeProperty { get; set; }
}

static class ExtensionMethods
{
    public static void InitialiseObjectIfNull<T>(this T obj) where T : class, new()
    {
        if (obj == null)
            obj = new T();
    }
}

class Program
{
    static void Main(string[] args)
    {
        SomeClass someClass = null;
        someClass.InitialiseObjectIfNull();
        // someClass is still null - but why??
    }
}
```

(The discussion about whether or not this is an appropriate use of an extension method should be considered outside the scope of the question! I am interested in understanding *why* this approach does not work)

## Edit

On closer inspection this question is less about extension methods, and more about what is going on when you pass a reference type with or without the `ref` keyword. The following function will cause a passed `obj` to be initialised for the caller:

```
static void InitialiseObjectIfNull<T>(ref T obj) where T : class, new()
{
    if (obj == null)
        obj = new T();
}

InitialiseObjectIfNull<SomeClass>(ref someClass);
// someClass will be initialised
```

The following function will not cause a passed `obj` to be initialised for the caller:

```
static void InitialiseObjectIfNull<T>(T obj) where T : class, new()
{
    if (obj == null)
        obj = new T();
}

InitialiseObjectIfNull<SomeClass>(someClass);
// someClass will not be initialised
```

But...we're dealing with a reference type here, so what does the CLR do with the `new`d `obj` if the `ref` keyword is not used? Presumably it just gets garbage collected...

## Edit2

OK, we're going back to basics here. 
Consider the code below: ``` class Program { static void SetProperty(SomeClass someClass) { someClass.SomeProperty = "Bar"; } static void Main(string[] args) { SomeClass someClass = new SomeClass { SomeProperty = "Foo" }; SetProperty(someClass); // someClass.SomeProperty now equals "Bar" } } ``` Note: `someClass` was not passed using the `ref` keyword, but the value of its property was still changed for the caller. This is what I would expect to see. However, change the `SetProperty` function as follows: ``` static void SetProperty(SomeClass someClass) { someClass = new SomeClass { SomeProperty = "Bar" }; } ``` ...and the caller will not see any change to `someClass`.
`this T obj` would need to be a `ref` argument. Since that's not allowed ([MSDN](http://blogs.msdn.com/sreekarc/archive/2007/04/25/extension-methods.aspx) or C# In Depth p258, Skeet), you can't. I would suggest you not push this idea too far, though. It seems like a confusing application of extension methods, especially when the alternative is still a short one line.
You would need to return the value; extension methods aren't `ref` on the `this` argument: ``` static class ExtensionMethods { public static T InitialiseObjectIfNull<T>(this T obj) where T : class, new() { return obj ?? new T(); } } ``` Then you'd need: ``` someClass = someClass.InitialiseObjectIfNull(); ``` Personally, I expect it would be simpler to do it directly... with either: ``` if(someClass == null) someClass = new SomeClass(); ``` or ``` someClass = someClass ?? new SomeClass(); ```
Instantiating an instance of a class using an extension method
[ "", "c#", "extension-methods", "" ]
I'm using this in Access 2003. Pardon me if this is in the wrong spot, but I hoped someone could help. I have a table with multiple records that have a text-represented date field. The date is in a format like: "06/01/2009" I need to select all the fields from the table but only the 6 oldest rows that fall in a range for each group of:

```
COUPONS.DocType, COUPONS.PayTo, COUPONS.ContactName, COUPONS.ContactNumber, 
COUPONS.DocFooter, COUPONS.PQBName, COUPONS.LetterDate, COUPONS.RetireeFirstName, 
COUPONS.RetireeLastName, COUPONS.Address1, COUPONS.Address2, COUPONS.City, 
COUPONS.State, COUPONS.ZIP, COUPONS.PQBSSN, COUPONS.EmployerCode 
ordered by the COUPONS.DateDue.
```

Like: select only records with a date range 01/01/2009 - 12/01/2009, and of those only select the 6 oldest entries. I have monkeyed with this for a bit and am having no luck. I know this is pretty basic, but I just can't seem to make this work. Here is the SQL select I use to get the date from the table now.

```
SELECT COUPONS.DocType, COUPONS.PayTo, COUPONS.ContactName, COUPONS.ContactNumber, 
COUPONS.DocFooter, COUPONS.PQBName, COUPONS.LetterDate, COUPONS.RetireeFirstName, 
COUPONS.RetireeLastName, COUPONS.Address1, COUPONS.Address2, COUPONS.City, COUPONS.State, 
COUPONS.ZIP, COUPONS.PQBSSN, COUPONS.EmployerCode, COUPONS.AmountDue, COUPONS.DateDue, 
Right([DateDue],4)+Left([DateDue],2)+Mid([datedue],4,2) AS SORTDATE
FROM COUPONS
ORDER BY COUPONS.DocType, COUPONS.PayTo, COUPONS.ContactName, COUPONS.ContactNumber, 
COUPONS.DocFooter, COUPONS.PQBName, COUPONS.LetterDate, 
Right([DateDue],4)+Left([DateDue],2)+Mid([datedue],4,2);
```
I think I understand your problem - let me give you a solution that doesn't get into dealing with your date issue - there are a number of solutions to that above. Given this data: ``` PQBSSN DATE PQBNAME 1 1/1/2009 A 1 1/2/2009 A 1 1/3/2009 A 1 1/4/2009 Z 1 1/5/2009 Z 1 1/6/2009 Z 2 1/1/2009 B 2 1/2/2009 B 2 1/3/2009 B 2 1/4/2009 B 2 1/5/2009 B 2 1/6/2009 B 3 1/1/2009 C 3 1/2/2009 C 3 1/3/2009 C 3 1/4/2009 C 3 1/5/2009 C 3 1/6/2009 C SELECT C1.PQBSSN, C1.PQBNAME, C3.Date FROM [SELECT DISTINCT CA.PQBSSN, CA.PQBNAME FROM COUPONS AS CA]. AS C1, [SELECT DISTINCT CB.DATE FROM COUPONS AS CB]. AS C3 WHERE C3.DATE IN (SELECT TOP 2 C2.DATE FROM COUPONS AS C2 WHERE C2.PQBSSN = C1.PQBSSN ORDER BY C2.DATE); ``` The breakdown: The CA select gives the unique rows of non-date information The CB select gives all the dates in the table The "WHERE C3.DATE" select gives you the dates that apply to each matching group. You need to put checks in the WHERE of this select for every independent field if there isn't a unique key for the grouping rows. This Gives: ``` PQBSS PQBNAME Date 1 A 1/1/2009 1 Z 1/1/2009 2 B 1/1/2009 3 C 1/1/2009 1 A 1/2/2009 1 Z 1/2/2009 2 B 1/2/2009 3 C 1/2/2009 ``` I know this is a simplified version of your table, but I think it achieves your ends.
If you have control over the database but MUST use a text-based date, store your dates using the ODBC canonical format: ``` yyyy-mm-dd // if there's no time element yyyy-mm-dd HH:MM:ss // if time is needed as well ``` This has a few distinct advantages: * World-friendly, for users who aren't in the US and may think mm-dd-yyyy means dd-mm-yyyy * Sorts by date naturally, so normal < and > operators work just fine (and those operations are doing a textual comparison, they never actually convert the text to a date). * Your business layer will likely be able to read dates in this format correctly without adjusting your code at all * If you have fields that don't have actual dates, this won't generate a CONVERT() error like many of the suggestions already posted. (For instance, if you are also dealing with dirty values like "Next Tuesday" or "N/A" that you can't clean out of the database.) Converting your existing date data is a simple exercise of UPDATE with RIGHT(), LEFT(), etc., assuming your current date data is in a consistent format. Once your data is stored in a format that can be queried more readily, it's a simple problem: ``` SELECT TOP 6 * FROM mytable WHERE mydate BETWEEN startdate AND enddate ORDER BY mydate DESC ``` As for your grouping problem, I don't understand the question well enough to propose an answer. But getting your date data stored in the most efficient text format will help sort everything else out. Ok, I'm going to take a stab at your grouping problem: ``` SELECT DISTINCT DueDate, DocType, PayTo, ContactName, ContactNumber, [...other fields...] FROM coupons c1 WHERE CDate(c1.DueDate) BETWEEN '01/01/2000' AND '01/01/2009' /* Here's where the "grouping" happens--actually just filtering out the others */ AND (SELECT COUNT(*) FROM coupons c2 WHERE CDATE(c1.DueDate) >= CDATE(c2.DueDate) AND c2.DocType=c1.DocType AND c2.ContactName=c1.ContactName AND c2.ContactNumber=c1.ContactNumber [...test the other fields...] 
) <= 6 ``` I can't remember the ins and outs of Jet SQL enough to know if this subquery will work, but I think it will.
SQL Select Question
[ "", "sql", "ms-access", "" ]
In my 3d editor application, I want to be able to click on and move a scene's entities (meshes, lights, cameras, etc). To do that, I imagine that I need to get the current mouse position relative to my Direct3d view and somehow give it to my ViewModel. How would I go about doing that, considering that my app is built with the MVVM pattern? Additional Info: The Direct3d renderer is from a C++ DLL and we're using an HwndHost to place it inside our WPF app. It has a WndProc which I imagine I could use to handle mouse messages there but I'd rather avoid it if at all possible. Thanks!
The best solution I found is to use an attached behavior. In the control that serves as the placeholder to the HwndHost I set an attached property, like ``` b:MouseBehavior.LeftClick="{Binding DoSomeActionCommand}" ``` where MouseBehavior receives the control and sets mouse events on it. It works well for me.
[InputManager.Current.PrimaryMouseDevice is probably your best bet.](http://msdn.microsoft.com/en-us/library/system.windows.input.mousedevice_members.aspx)
Building a MVVM 3D Editor Application -> Getting Mouse Position?
[ "", "c#", "wpf", "xaml", "mvvm", "direct3d", "" ]
This is supposedly a very easy question, but I just can't seem to find the right solution. There is a string in the format: ``` A:B=C;D:E=F;G:E=H;... ``` whereas A, B and C are alphanumeric (and may be lower as well as upper case). A and B are of length 1+, C may be empty. I figured I'd have to use something along the lines of ``` ((?<A>.+):(?<B>.+)=(?<C>.*);)* ``` but I don't see how to make it match in a lazy way. I'm also not sure how to obtain the results of the respective matches so I'd appreciate a little code sample. I'd be glad if you could give me a hint.
You might use `\w` to match alphanumeric characters rather than `.`, which matches everything. Then, you might try to capture one match at a time: ``` (?<A>\w+):(?<B>\w+)=(?<C>\w*); ``` Here's a small example: ``` Regex regex = new Regex("(?<A>\\w+):(?<B>\\w+)=(?<C>\\w*);"); string test = "A:B=C;D:E=F;G:E=H"; // get all matches MatchCollection mc = regex.Matches(test); foreach (Match m in mc) { Console.WriteLine("A = {0}", m.Groups["A"].Value); Console.WriteLine("B = {0}", m.Groups["B"].Value); Console.WriteLine("C = {0}", m.Groups["C"].Value); } ``` **note**: there are several tools that allow you to experiment with regular expressions and also provide some sort of help; I personally like [Expresso](http://www.ultrapico.com/Expresso.htm) - try it out, it will be very useful for learning.
Is regex a requirement? Since the string has a very structured, well, structure, it is easy to parse it without regex: ``` string input = "A:B=C;D:EF=G;E:H=;I:JK=L"; string[] elements = input.Split(new[] { ';' }); List<string[]> parts = new List<string[]>(); foreach (string element in elements) { parts.Add(element.Split(new[] { ':', '=' })); } // result output foreach (string[] list in parts) { Console.WriteLine("{0}:{1}={2}", list[0], list[1], list[2]); } ``` The output will be: ``` A:B=C D:EF=G E:H= I:JK=L ```
C# How to split (A:B=C)* using regex?
[ "", "c#", "regex", "lazy-evaluation", "" ]
What is the best way to parse (get a DOM tree of) a HTML result of XmlHttpRequest in Firefox? EDIT: I do *not* have the DOM tree, I want to acquire it. XmlHttpRequest's "responseXML" works only when the result is actual XML, so I have only responseText to work with. ~~The innerHTML hack doesn't seem to work with a complete HTML document (in <html></html>).~~ - turns out it works fine.
**`innerHTML`** should work just fine, e.g. ``` // This would be after the Ajax request: var myHTML = XHR.responseText; var tempDiv = document.createElement('div'); tempDiv.innerHTML = myHTML.replace(/<script(.|\s)*?\/script>/g, ''); // tempDiv now has a DOM structure: tempDiv.childNodes; tempDiv.getElementsByTagName('a'); // etc. etc. ```
At least for newer Firefox versions, an easier way is or will soon be available. <https://developer.mozilla.org/en/HTML_in_XMLHttpRequest> indicates that starting from FF11 it will be possible to ask for a DOM directly from the XHR by setting the `responseType` attribute to `"document"`. At that point, the HTML will be parsed and the DOM stuck into `responseXML` as for an XML document.
How to parse HTML from JavaScript in Firefox?
[ "", "javascript", "ajax", "firefox", "dom", "" ]
Imagine 2 tables in a relational database, e.g. Person and Billing. There is a (non-mandatory) OneToOne association defined between these entities, and they share the Person primary key (i.e. PERSON\_ID is defined in both Person and Billing, and it is a foreign key in the latter). When doing a select on Person via a named query such as: ``` from Person p where p.id = :id ``` Hibernate/JPA generates two select queries, one on the Person table and another on the Billing table. The example above is very simple and would not cause any performance issues, given the query returns only one result. Now, imagine that `Person` has `n` OneToOne relationships (all non-mandatory) with other entities (all sharing the `Person` primary key). Correct me if I'm wrong, but running a `select` query on Person, returning `r` rows, would result in `(n+1)*r` selects being generated by Hibernate, even if the associations are *lazy*. Is there a workaround for this potential performance disaster (other than not using a shared primary key at all)? Thank you for all your ideas.
> Imagine 2 tables in a relational database, e.g. Person and Billing. There is a (non-mandatory) OneToOne association defined between these entities, Lazy fetching is conceptually not possible for non-mandatory OneToOne by default, Hibernate has to hit the database to know if the association is `null` or not. More details from this old wiki page: > ### [Some explanations on lazy loading (one-to-one)](https://community.jboss.org/wiki/Someexplanationsonlazyloadingone-to-one) > > [...] > > Now consider our class B has > one-to-one association to C > > ``` > class B { > private C cee; > > public C getCee() { > return cee; > } > > public void setCee(C cee) { > this.cee = cee; > } > } > > class C { > // Not important really > } > ``` > > Right after loading B, you may call > `getCee()` to obtain C. But look, > `getCee()` is a method of YOUR class > and Hibernate has no control over it. > Hibernate does not know when someone > is going to call `getCee()`. That > means Hibernate must put an > appropriate value into "`cee`" > property at the moment it loads B from > database. If proxy is enabled for > `C`, Hibernate can put a C-proxy > object which is not loaded yet, but > will be loaded when someone uses it. > This gives lazy loading for > `one-to-one`. > > But now imagine your `B` object may or > may not have associated `C` > (`constrained="false"`). What should > `getCee()` return when specific `B` > does not have `C`? Null. But remember, > Hibernate must set correct value of > "cee" at the moment it set `B` > (because it does no know when someone > will call `getCee()`). Proxy does not > help here because proxy itself in > already non-null object. > > So the resume: **if your B->C mapping > is mandatory (`constrained=true`), > Hibernate will use proxy for C > resulting in lazy initialization. But > if you allow B without C, Hibernate > just HAS TO check presence of C at the > moment it loads B. 
But a SELECT to > check presence is just inefficient > because the same SELECT may not just > check presence, but load entire > object. So lazy loading goes away**. So, not possible... by default. > Is there a workaround for this potential performance disaster (other than not using a shared primary key at all)? Thank you for all your ideas. The problem is not the shared primary key, with or without shared primary key, you'll get it, the problem is the **nullable** OneToOne. **First option**: use bytecode instrumentation (see references to the documentation below) and *no-proxy* fetching: ``` @OneToOne( fetch = FetchType.LAZY ) @org.hibernate.annotations.LazyToOne(org.hibernate.annotations.LazyToOneOption.NO_PROXY) ``` **Second option**: Use a fake `ManyToOne(fetch=FetchType.LAZY)`. That's probably the most simple solution (and to my knowledge, the recommended one). But I didn't test this with a shared PK though. **Third option**: Eager load the Billing using a `join fetch`. ### Related question * [Making a OneToOne-relation lazy](https://stackoverflow.com/questions/1444227/making-a-onetoone-relation-lazy) ### References * Hibernate Reference Guide + [19.1.3. Single-ended association proxies](http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-fetching-proxies) + [19.1.7. Using lazy property fetching](http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html#performance-fetching-lazyproperties) * Old Hibernate FAQ + [How do I set up a 1-to-1 relationship as lazy?](http://web.archive.org/web/20071021233649/http://www.hibernate.org/117.html#A18) * Hibernate Wiki + [Some explanations on lazy loading (one-to-one)](https://community.jboss.org/wiki/Someexplanationsonlazyloadingone-to-one)
**Stay away from hibernate's OneToOne mapping** It is very broken and dangerous. You are one minor bug away from a database corruption problem. <http://opensource.atlassian.com/projects/hibernate/browse/HHH-2128>
OneToOne relationship with shared primary key generates n+1 selects; any workaround?
[ "", "java", "hibernate", "jpa", "" ]
Does anyone know if there is a way I can insert values into a C# Dictionary when I create it? I can, but don't want to, do `dict.Add(int, "string")` for each item if there is something more efficient like: ``` Dictionary<int, string>(){(0, "string"),(1,"string2"),(2,"string3")}; ```
There's whole page about how to do that here: <http://msdn.microsoft.com/en-us/library/bb531208.aspx> Example: > In the following code example, a `Dictionary<TKey, TValue>` is > initialized with instances of type `StudentName`: ``` var students = new Dictionary<int, StudentName>() { { 111, new StudentName {FirstName="Sachin", LastName="Karnik", ID=211}}, { 112, new StudentName {FirstName="Dina", LastName="Salimzianova", ID=317}}, { 113, new StudentName {FirstName="Andy", LastName="Ruth", ID=198}} }; ```
``` Dictionary<int, string> dictionary = new Dictionary<int, string> { { 0, "string" }, { 1, "string2" }, { 2, "string3" } }; ```
How to insert values into C# Dictionary on instantiation?
[ "", "c#", "dictionary", "" ]
I can't seem to figure out how this is happening. Here's an example of the file that I'm attempting to bulk insert into SQL server 2005:

```
***A NICE HEADER HERE***
0000001234|SSNV|00013893-03JUN09
0000005678|ABCD|00013893-03JUN09
0000009112|0000|00013893-03JUN09
0000009112|0000|00013893-03JUN09
```

Here's my bulk insert statement:

```
BULK INSERT sometable
FROM 'E:\filefromabove.txt'
WITH 
(
    FIRSTROW = 2,
    FIELDTERMINATOR= '|',
    ROWTERMINATOR = '\n' 
)
```

But, for some reason the only output I can get is:

```
0000005678|ABCD|00013893-03JUN09
0000009112|0000|00013893-03JUN09
0000009112|0000|00013893-03JUN09
```

The first record always gets skipped, unless I remove the header altogether and don't use the FIRSTROW parameter. How is this possible? Thanks in advance!
I don't think you can skip rows in a different format with `BULK INSERT`/`BCP`. When I run this:

```
TRUNCATE TABLE so1029384

BULK INSERT so1029384
FROM 'C:\Data\test\so1029384.txt'
WITH
(
    --FIRSTROW = 2,
    FIELDTERMINATOR= '|',
    ROWTERMINATOR = '\n'
)

SELECT * FROM so1029384
```

I get:

```
col1                                               col2                                               col3
-------------------------------------------------- -------------------------------------------------- --------------------------------------------------
***A NICE HEADER HERE***
0000001234                                         SSNV                                               00013893-03JUN09
0000005678                                         ABCD                                               00013893-03JUN09
0000009112                                         0000                                               00013893-03JUN09
0000009112                                         0000                                               00013893-03JUN09
```

It looks like it requires the '|' even in the header data, because it reads up to that into the first column - swallowing up a newline into the first column. Obviously if you include a field terminator parameter, it expects that every row **MUST** have one. You could strip the row with a pre-processing step. Another possibility is to select only complete rows, then process them (excluding the header). Or use a tool which can handle this, like SSIS.
Maybe check that the header has the same line-ending as the actual data rows (as specified in `ROWTERMINATOR`)? Update: from [MSDN](http://msdn.microsoft.com/en-us/library/ms188365.aspx): > The FIRSTROW attribute is not intended > to skip column headers. Skipping > headers is not supported by the BULK > INSERT statement. When skipping rows, > the SQL Server Database Engine looks > only at the field terminators, and > does not validate the data in the > fields of skipped rows.
SQL Bulk Insert with FIRSTROW parameter skips the following line
[ "", "sql", "sql-server-2005", "bulkinsert", "" ]
I had to figure out a way to ask this that wasn't subjective, so this is specifically for Microsoft's coding style. In the ASP.NET MVC source, code files look like this: ``` // Copyright info namespace System.Web.Mvc { using System; // blah blah // ... } ``` Note that 'using System' lines up nicely with the namespace. If I was to apply this style to my company's code, should I put 'using' statements for my company's namespaces directly below as well (so that it lines up)? When I put 'using' declarations at the top, I usually start with .NET namespaces first, so that's why I'm unsure. For example, should I do this: ``` namespace MyCompany.MyProduct.Something { using System; using MyCompany.MyProduct.SomethingElse; } ``` or this: ``` namespace MyCompany.MyProduct.Something { using MyCompany.MyProduct.SomethingElse; using System; } ``` I'm tempted toward the latter.
There is no single Microsoft style, although there have been attempts to consolidate their standardizations. That being said, [StyleCop](http://code.msdn.microsoft.com/sourceanalysis) forces all System namespaces to be listed first...
Microsoft StyleCop dictates using System.\* first then your custom library namespace (i.e. the first option).
Microsoft coding style question
[ "", "c#", "coding-style", "" ]
Is there a way to call a program (Python script) from a local HTML page? I have a YUI-colorpicker on that page and need to send its value to a microcontroller via rs232. (There is other stuff than the picker, so I can't code an application instead of an HTML page.) Later, this will migrate to a server, but I need a fast and easy solution now. Thanks.
I see now that Daff mentioned the simple HTTP server, but I made an example on how you'd solve your problem (using `BaseHTTPServer`): ``` import BaseHTTPServer HOST_NAME = 'localhost' PORT_NUMBER = 1337 class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler): def do_GET(s): s.send_response(200) s.send_header('Content-Type', 'text/html') s.end_headers() # Get parameters in query. params = {} index = s.path.rfind('?') if index >= 0: parts = s.path[index + 1:].split('&') for p in parts: try: a, b = p.split('=', 2) params[a] = b except: params[p] = '' # !!! # Check if there is a color parameter and send to controller... if 'color' in params: print 'Send something to controller...' # !!! s.wfile.write('<pre>%s</pre>' % params) if __name__ == '__main__': server_class = BaseHTTPServer.HTTPServer httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler) try: httpd.serve_forever() except KeyboardInterrupt: pass httpd.server_close() ``` Now, from your JavaScript, you'd call `http://localhost:1337/?color=ffaabb`
Python has a small built-in web server. If you've already got Python talking to the RS232 port, you might want to read [here](http://fragments.turtlemeat.com/pythonwebserver.php) on how to set up a very simple and basic webserver. An even easier one can look like [this](http://effbot.org/librarybook/simplehttpserver.htm):

```
import SimpleHTTPServer
import SocketServer

port = 8000
Handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer(("", port), Handler)
httpd.serve_forever()
```

Try to separate your source as well as possible, so that you won't have too much trouble moving it to a production-ready, Python-capable webserver.
Call program from within a browser without using a webserver
[ "", "python", "html", "browser", "" ]
In the following C++ code, I realised that `gcount()` was returning a larger number than I wanted, because `getline()` consumes the final newline character but doesn't send it to the input stream. What I still don't understand is the program's output, though. For input "Test\n", why do I get " est\n"? How come my mistake affects the *first* character of the string rather than adding unwanted rubbish onto the end? And how come the program's output is at odds with the way the string looks in the debugger ("Test\n", as I'd expect)? ``` #include <fstream> #include <vector> #include <string> #include <iostream> using namespace std; int main() { const int bufferSize = 1024; ifstream input( "test.txt", ios::in | ios::binary ); vector<char> vecBuffer( bufferSize ); input.getline( &vecBuffer[0], bufferSize ); string strResult( vecBuffer.begin(), vecBuffer.begin() + input.gcount() ); cout << strResult << "\n"; return 0; } ```
I've also duplicated this result, Windows Vista, Visual Studio 2005 SP2. When I figure out what the heck is happening, I'll update this post.

**edit**: Okay, there we go. The problem (and the different results people are getting) comes from the \r. What happens is you call `input.getline` and put the result in vecBuffer. The getline function strips off the \n, but leaves the \r in place. You then transfer the vecBuffer to a string variable, but use the gcount function from input, meaning you will get one char too many, because the input variable still contains the \n, and the vecBuffer does not. The resulting strResult is:

```
- strResult "Test"
  [0] 84 'T' char
  [1] 101 'e' char
  [2] 115 's' char
  [3] 116 't' char
  [4] 13 '␍' char
  [5] 0 char
```

So then "Test" is printed, followed by a carriage return (puts the cursor back at the start of the line), a null character (overwriting the T), and finally the \n, which correctly puts the cursor on the new line.

So you either have to strip out the \r, or write a function that gets the string length directly from vecBuffer, checking for null characters.
I've duplicated Tommy's problem on a Windows XP Pro Service Pack 2 system with the code compiled using Visual Studio 2005 SP2 (actually, it says "Version 8.0.50727.879"), built as a console project. If my test.txt file contains just "Test" and a CR, the program spits out " est" (note the leading space) when run. If I had to take a wild guess, I'd say that this version of the implementation has a bug where it is treating the Windows newline character like it should be treated in Unix (as a "go to the front of the same line" character), and then it wipes out the first character to hold part of the next prompt or something.

---

**Update:** After playing with it a bit, I'm positive that is what is going on. If you look at strResult in the debugger, you will see that it copied over a decimal 13 value at the end. That's CR ('\r'), which on Windows is half of the CRLF line ending, and which on its own means "return to the beginning of the line". If I instead change your constructor to read:

```
string strResult( vecBuffer.begin(), vecBuffer.begin() + input.gcount() - 1 );
```

...(so that the CR isn't copied) then it prints out "Test" like you'd expect.
Why is the beginning of my string disappearing?
[ "", "c++", "string", "vector", "cout", "" ]
I have an XML document as follows: ``` <Database> <SMS> <Number>"+447528349828"</Number> <Date>"09/06/24</Date> <Time>13:35:01"</Time> <Message>"Stop"</Message> </SMS> <SMS> <Number>"+447528349828"</Number> <Date>"09/06/24</Date> <Time>13:35:01"</Time> <Message>"Stop"</Message> </SMS> </Database> ``` I am trying to check whether the number child node of the parent SMS node exists in the document (for validation purposes to avoid inserting duplicate data). Any advice on a potential solution? EDIT: The element will be compared to an input string. For example if(inputNumber == xmlDocNumber){ //Don't Insert New Element }
I'll suggest a slightly different tack to using `Count()` - use `Any()`. The advantage is that Any() can stop as soon as it gets any matches at all: ``` var smsWithNoNumber = main.Descendants("SMS") .Where(x => !x.Elements("Number").Any()); ``` In this case it won't make much odds, but in cases where `Count()` might have to count a million hits just to tell you that there was at least one, it's a useful trick to know. I'd say it's also a clearer indicator of what you mean.
Assuming that you have your number in some canonicalized form and your XML is loaded into an `XmlDocument` or some such, the simplest non-LINQ way to do it is with an XPath query: ``` string pattern = String.Format("/Database/SMS/Number[. = '{0}']", number); if (myDoc.SelectSingleNode(pattern) != null) { // number already exists in document } ```
C# Linq to XML check if element exists
[ "", "c#", "xml", "linq", "linq-to-xml", "" ]
I've searched around and haven't really found a clear answer as to when you'd want to use `.First` and when you'd want to use `.FirstOrDefault` with LINQ. * When would you want to use `.First`? Only when you'd want to catch the exception if no results where returned? ``` var result = List.Where(x => x == "foo").First(); ``` * And when would you want to use `.FirstOrDefault`? When you'd always want the default type if no result? ``` var result = List.Where(x => x == "foo").FirstOrDefault(); ``` * And for that matter, what about Take? ``` var result = List.Where(x => x == "foo").Take(1); ```
I would use `First()` when I know or expect the sequence to have at least one element. In other words, when it is an exceptional occurrence that the sequence is empty. Use `FirstOrDefault()` when you know that you will need to check whether there was an element or not. In other words, when it is legal for the sequence to be empty. You should not rely on exception handling for the check. (It is bad practice and might hurt performance). Finally, the difference between `First()` and `Take(1)` is that `First()` returns the element itself, while `Take(1)` returns a sequence of elements that contains exactly one element.
`.First` will throw an exception when there are no results. `.FirstOrDefault` won't, it will simply return either null (reference types) or the default value of the value type. (e.g like `0` for an int.) The question here is not when you want the default type, but more: Are you willing to handle an exception or handle a default value? Since exceptions should be exceptional, `FirstOrDefault` is preferred when you're not sure if you're going to get results out of your query. When logically the data should be there, exception handling can be considered. `Skip()` and `Take()` are normally used when setting up paging in results. (Like showing the first 10 results, and the next 10 on the next page, etc.)
When to use .First and when to use .FirstOrDefault with LINQ?
[ "", "c#", ".net", "linq", "" ]
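The same three-way distinction exists outside C#. Purely as a comparison (this is Python, not LINQ, so the mapping is loose), the built-in `next()` raises when the sequence is empty, takes an optional default to mirror `FirstOrDefault()`, and `itertools.islice` plays the role of `Take(1)`:

```python
import itertools

items = ["foo", "bar", "foo"]

# Like First(): raises StopIteration if nothing matches.
first = next(x for x in items if x == "foo")
assert first == "foo"

# Like FirstOrDefault(): returns the supplied default (here None) on no match.
missing = next((x for x in items if x == "qux"), None)
assert missing is None

# Like Take(1): a lazy sequence holding at most one element.
taken = list(itertools.islice((x for x in items if x == "foo"), 1))
assert taken == ["foo"]
```

As in the accepted answer, the exception-raising form is for the case where an empty sequence is genuinely exceptional; the default form is for when emptiness is a normal, expected outcome.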
My C++ project is growing larger. We are also moving to using cmake for building now. I want to divide the application into libraries so that they can be linked for testing, preparing the application package, etc. Right now I would divide my code into libraries as follows: * core * GUI * utilities (these are used by core and other components) * io (xml parsing/outputing using print functions of classes in core) * tests (unit tests) * simulator (tests the core) An alternative would be to divide based on the directory structure - one library for each directory. But from my past experience it leads to too many libraries and then library dependencies become tough to handle during linking. Are there any best practices in this regard?
Sit down with a piece of paper and decide your library architecture. The libraries should be designed as a set of levels. * A library on level A (the base) should have dependencies only on system libraries, and only if it must on other libraries on level A. * A library on level B can have dependencies on libraries at level A and system libraries, and only if it must on other libraries on level B. * etc. Each library should represent a complete job at its particular level. Things at a lower level generally have smaller jobs, but lots of them. A library at a higher level should represent a complete task, i.e. don't have a lib for windows objects and a lib for events. At this level the job is handling all interaction with a window (this includes how it interacts with events). You seem to have identified some reasonable functional groups. The only one that seems a bit suspicious is io. If you truly have some generic IO routines that provide real functionality, fine. But if it is just grouping the IO for a bunch of different objects then I would scrap that (it all depends on the usage). So the next step is to identify the relationships between them. As for using directory structures: usually everything in one directory will be present within the same library, but that does not exclude the possibility of other directories also being present. I would avoid putting half the classes in a directory in libA and the other half in libB, etc.
You should have a read of Large-Scale C++ Software Design by John Lakos. You may not be able to read it before you start your work, but you should put this book on your list. Otherwise Martin York's advice is sound. One more thing though: I would recommend picking up a tool like doxygen that can give you dependency diagrams of your code base. If you're bothering to do this type of restructuring, you should rid yourself of circular dependencies between your libraries. Lakos describes a lot of ways to cut dependencies - some obvious, some less so.
Dividing C++ Application into Libraries
[ "", "c++", "" ]
What happens if you save a reference to the current object during the finalize call? For example: ``` class foo { ... public void finalize() { bar.REFERENCE = this; } } ``` Is the object garbage-collected, or not? What happens when you try to access `bar.REFERENCE` later?
The object is not garbage collected. This is known as "object resurrection". You must be careful with that: once the finalizer is called, the GC won't call it again. On some environments like .NET you can re-register the finalizer, but I'm not sure about Java.
If you absolutely must resurrect objects, this [JavaWorld](http://www.javaworld.com/javaworld/jw-06-1998/jw-06-techniques.html) article suggests creating a fresh instance rather than resurrecting the instance being finalized because if the instance being finalized becomes eligible for collection again it will simply be collected (the finalizer won't be run again).
Reference to object during finalize
[ "", "java", "garbage-collection", "finalizer", "" ]
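The run-at-most-once rule the accepted answer describes for the JVM has a close analogue in Python, which makes it easy to demonstrate. This sketch assumes CPython 3.4+, where PEP 442 guarantees `__del__` is invoked at most once per object:

```python
import gc

log = []
keeper = []  # plays the role of bar.REFERENCE in the question

class Resurrect:
    def __del__(self):
        log.append("finalized")
        keeper.append(self)  # resurrection: save a reference to self

obj = Resurrect()
del obj                      # finalizer runs and resurrects the object
assert log == ["finalized"]
assert len(keeper) == 1

obj = keeper.pop()
del obj                      # dropped again: collected, finalizer NOT re-run
gc.collect()
assert log == ["finalized"]  # still only one finalization
```

Accessing the saved reference between the two deletions works normally; the object is fully alive, exactly as described for `bar.REFERENCE` above.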
I'm sending attachments using the System.Net.Mail.SmtpClient in C#. The attachment names are the same as the name of the file I pass into the attachment constructor ``` myMail.Attachments.Add(new Attachment(attachmentFileName)); ``` How would I go about setting a "nice" name for the attachment? The names I currently have are basically numeric IDs indicating which occurrence of a report is attached. My users are looking for something more friendly like "results.xls".
See [Attachment.Name Property](http://msdn.microsoft.com/en-us/library/system.net.mail.attachment.name.aspx): > Gets or sets the MIME content type name value in the content type associated with this attachment. You can set the [Attachment](http://msdn.microsoft.com/en-us/library/system.net.mail.attachment.aspx) `.Name` property to anything you like. In the example inside the last link, you could have : ``` // Create the file attachment for this e-mail message. Attachment data = new Attachment(file, MediaTypeNames.Application.Octet); data.Name = "VeryNiceName.dat"; //(not in original example) ... message.Attachments.Add(data); ```
Save the attachment to the temp folder with the desired name then attach it. Don't forget to delete it after you send the email so the temp folder doesn't grow too large. **Edit**: The accepted answer is much better but I'll leave this here for others that go down the same thought path and see how wrong it is.
How to set a nice name for an e-mail (System.Net.Mail.SmtpClient) attachment
[ "", "c#", "smtpclient", "email-attachments", "" ]
I came across an interesting question today where I have two methods that, at a quick glance, both do the same thing. That is return an IEnumerable of Foo objects. I have defined them below as List1 and List2: ``` public class Foo { public int ID { get; set; } public bool Enabled { get; set;} } public static class Data { public static IEnumerable<Foo> List1 { get { return new List<Foo> { new Foo {ID = 1, Enabled = true}, new Foo {ID = 2, Enabled = true}, new Foo {ID = 3, Enabled = true} }; } } public static IEnumerable<Foo> List2 { get { yield return new Foo {ID = 1, Enabled = true}; yield return new Foo {ID = 2, Enabled = true}; yield return new Foo {ID = 3, Enabled = true}; } } } ``` Now consider the following tests: ``` IEnumerable<Foo> listOne = Data.List1; listOne.Where(item => item.ID.Equals(2)).First().Enabled = false; Assert.AreEqual(false, listOne.ElementAt(1).Enabled); Assert.AreEqual(false, listOne.ToList()[1].Enabled); IEnumerable<Foo> listTwo = Data.List2; listTwo.Where(item => item.ID.Equals(2)).First().Enabled = false; Assert.AreEqual(false, listTwo.ElementAt(1).Enabled); Assert.AreEqual(false, listTwo.ToList()[1].Enabled); ``` These two methods seem to do the "same" thing. Why do the second assertions in the test code fail? Why is listTwo's second "Foo" item not getting set to false when it is in listOne? NOTE: I'm after an explanation of why this is allowed to happen and what the differences in the two are. Not how to fix the second assertion as I know that if I add a ToList call to List2 it will work.
The first block of code builds the items once and returns a list with the items. The second block of code builds those items each time the IEnumerable is walked through. This means that the second and third line of the first block operate on the same object instance. The second block's second and third line operate on *different* instances of Foo (new instances are created as you iterate through). The best way to see this would be to set breakpoints in the methods and run this code under the debugger. The first version will only hit the breakpoint once. The second version will hit it twice, once during the .Where() call, and once during the .ElementAt call. (edit: with the modified code, it will also hit the breakpoint a third time, during the ToList() call.) The thing to remember here is that an iterator method (ie. it uses yield return) will be run *every* time the enumerator is iterated through, not just when the initial return value is constructed.
Those are definitely *not* the same thing. The first builds and returns a list the moment you call it, and you can cast it back to list and list-y things with it if you want, including add or remove items, and once you've put the results in a variable you're acting on that single set of results. Calling the function would produce another set of results, but re-using the result of a single call acts on the same objects. The second builds an IEnumerable. You can enumerate it, but you can't treat it as a list without first calling `.ToList()` on it. In fact, calling the method doesn't do *anything* until you actually iterate over it. Consider: ``` var fooList = Data.List2().Where(f => f.ID > 1); // NO foo objects have been created yet. foreach (var foo in fooList) { // a new Foo object is created, but NOT until it's actually used here Console.WriteLine(foo.Enabled.ToString()); } ``` Note that the code above will create the first (unused) Foo instance, but not until entering the foreach loop. So the items aren't actually created until called for. But that means every time you call for them, you're building a new set of items.
Why does LINQ treat two methods that do the "same" thing differently?
[ "", "c#", "linq", "" ]
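The trap is not specific to C# iterator methods. As a rough Python analogue (a list comprehension standing in for `List1`, a generator function for `List2`), each fresh enumeration of the lazy version rebuilds the items, so a mutation made during one pass is invisible on the next:

```python
def list1():
    # eager: the items are built once and the same objects are returned
    return [{"id": i, "enabled": True} for i in (1, 2, 3)]

def list2():
    # lazy: the items are rebuilt every time the generator is iterated
    for i in (1, 2, 3):
        yield {"id": i, "enabled": True}

eager = list1()
next(x for x in eager if x["id"] == 2)["enabled"] = False
assert [x["enabled"] for x in eager] == [True, False, True]

# mutate an item produced by one enumeration of the lazy sequence...
next(x for x in list2() if x["id"] == 2)["enabled"] = False
# ...and a second enumeration produces brand-new items: the change is gone
assert [x["enabled"] for x in list2()] == [True, True, True]
```

Note the minor mismatch in the analogy: a Python generator object is single-use, so "re-enumerating" means calling `list2()` again, whereas in C# re-enumerating the same `IEnumerable` re-runs the iterator method.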
I have a C++ application that can be simplified to something like this: ``` class AbstractWidget { public: virtual ~AbstractWidget() {} virtual void foo() {} virtual void bar() {} // (other virtual methods) }; class WidgetCollection { private: vector<AbstractWidget*> widgets; public: void addWidget(AbstractWidget* widget) { widgets.push_back(widget); } void fooAll() { for (unsigned int i = 0; i < widgets.size(); i++) { widgets[i]->foo(); } } void barAll() { for (unsigned int i = 0; i < widgets.size(); i++) { widgets[i]->bar(); } } // (other *All() methods) }; ``` My application is performance-critical. There are typically thousands of widgets in the collection. The classes derived from `AbstractWidget` (of which there are dozens) typically leave many of the virtual functions not overridden. The ones that are overridden typically have very fast implementations. Given this, I feel I can optimize my system with some clever meta-programming. The goal is to leverage function inlining and to avoid virtual function calls, while keeping the code managable. I've looked into the Curiously Recurring Template Pattern (see [here](http://bjacob.livejournal.com/5800.html) for description). This seems to *almost* do what I want, but not quite. Is there any way to make the CRTP work for me here? Or, is there any other clever solution anyone can think of?
CRTP or compile-time polymorphism is for when you know all of your types at compile time. As long as you're using `addWidget` to collect a list of widgets at runtime and as long as `fooAll` and `barAll` then have to handle members of that homogeneous list of widgets at runtime, you have to be able to handle different types at runtime. So for the problem you've presented, I think you're stuck using runtime polymorphism. A standard answer, of course, is to verify that the performance of runtime polymorphism is a problem before you try to avoid it... If you really need to avoid runtime polymorphism, then one of the following solutions may work. **Option 1: Use a compile-time collection of widgets** If your WidgetCollection's members are known at compile time, then you can very easily use templates. ``` template<typename F> void WidgetCollection(F functor) { functor(widgetA); functor(widgetB); functor(widgetC); } // Make Foo a functor that's specialized as needed, then... void FooAll() { WidgetCollection(Foo); } ``` **Option 2: Replace runtime polymorphism with free functions** ``` class AbstractWidget { public: virtual ~AbstractWidget() {} // (other virtual methods) }; class WidgetCollection { private: vector<AbstractWidget*> defaultFooableWidgets; vector<AbstractWidget*> customFooableWidgets1; vector<AbstractWidget*> customFooableWidgets2; public: void addWidget(AbstractWidget* widget) { // decide which FooableWidgets list to push widget onto } void fooAll() { for (unsigned int i = 0; i < defaultFooableWidgets.size(); i++) { defaultFoo(defaultFooableWidgets[i]); } for (unsigned int i = 0; i < customFooableWidgets1.size(); i++) { customFoo1(customFooableWidgets1[i]); } for (unsigned int i = 0; i < customFooableWidgets2.size(); i++) { customFoo2(customFooableWidgets2[i]); } } }; ``` Ugly, and really not OO. 
Templates could help with this by reducing the need to list every special case; try something like the following (completely untested), but you're back to no inlining in this case. ``` class AbstractWidget { public: virtual ~AbstractWidget() {} }; class WidgetCollection { private: map<void (*)(AbstractWidget*), vector<AbstractWidget*> > fooWidgets; public: template<typename T> void addWidget(T* widget) { fooWidgets[TemplateSpecializationFunctionGivingWhichFooToUse<T>()].push_back(widget); } void fooAll() { for (map<void (*)(AbstractWidget*), vector<AbstractWidget*> >::const_iterator i = fooWidgets.begin(); i != fooWidgets.end(); i++) { for (unsigned int j = 0; j < i->second.size(); j++) { (*i->first)(i->second[j]); } } } }; ``` **Option 3: Eliminate OO** OO is useful because it helps manage complexity and because it helps maintain stability in the face of change. For the circumstances you seem to be describing - thousands of widgets, whose behavior generally doesn't change, and whose member methods are very simple - you may not have much complexity or change to manage. If that's the case, then you may not need OO. This solution is the same as runtime polymorphism, except that it requires that you maintain a static list of "virtual" methods and known subclasses (which is not OO) and it lets you replace virtual function calls with a jump table to inlined functions. ``` class AbstractWidget { public: enum WidgetType { CONCRETE_1, CONCRETE_2 }; WidgetType type; }; class WidgetCollection { private: vector<AbstractWidget*> mWidgets; public: void addWidget(AbstractWidget* widget) { mWidgets.push_back(widget); } void fooAll() { for (unsigned int i = 0; i < mWidgets.size(); i++) { switch(mWidgets[i]->type) { // insert handling (such as calls to inline free functions) here } } } }; ```
Simulated dynamic binding (there are other uses of CRTP) is for when the *base class* thinks of itself as being polymorphic, but *clients* only actually care about one particular derived class. So for instance you might have classes representing an interface into some platform-specific functionality, and any given platform will only ever need one implementation. The point of the pattern is to templatize the base class, so that even though there are multiple derived classes, the base class knows at compile time which one is in use. It doesn't help you when you genuinely need runtime polymorphism, such as for example when you have a container of `AbstractWidget*`, each element can be one of several derived classes, and you have to iterate over them. In CRTP (or any template code), `base<derived1>` and `base<derived2>` are unrelated classes. Hence so are `derived1` and `derived2`. There's no dynamic polymorphism between them unless they have another common base class, but then you're back where you started with virtual calls. You might get some speedup by replacing your vector with several vectors: one for each of the derived classes that you know about, and one generic one for when you add new derived classes later and don't update the container. Then addWidget does some (slow) `typeid` checking or a virtual call to the widget, to add the widget to the correct container, and maybe has some overloads for when the caller knows the runtime class. Be careful not to accidentally add a subclass of `WidgetIKnowAbout` to the `WidgetIKnowAbout*` vector. `fooAll` and `barAll` can loop over each container in turn making (fast) calls to non-virtual `fooImpl` and `barImpl` functions that will then be inlined. They then loop over the hopefully much smaller `AbstractWidget*` vector, calling the virtual `foo` or `bar` functions. 
It's a bit messy and not pure-OO, but if almost all your widgets belong to classes that your container knows about, then you might see a performance increase. Note that if most widgets belong to classes that your container cannot possibly know about (because they're in different libraries, for example), then you can't possibly have inlining (unless your dynamic linker can inline. Mine can't). You could drop the virtual call overhead by messing about with member function pointers, but the gain would almost certainly be negligible or even negative. Most of the overhead of a virtual call is in the call itself, not the virtual lookup, and calls through function pointers will not be inlined. Look at it another way: if the code is to be inlined, that means the actual machine code has to be different for the different types. This means you need either multiple loops, or a loop with a switch in it, because the machine code clearly can't change in ROM on each pass through the loop, according to the type of some pointer pulled out of a collection. Well, I guess maybe the object could contain some asm code that the loop copies into RAM, marks executable, and jumps into. But that's not a C++ member function. And it can't be done portably. And it probably wouldn't even be fast, what with the copying and the icache invalidation. Which is why virtual calls exist...
Can I use the Curiously Recurring Template Pattern here (C++)?
[ "", "c++", "templates", "metaprogramming", "virtual", "crtp", "" ]
## Edit 1 I believe my problem stems from the following. The function that fills the dropdown portion sets the Display Member to the CountyName. Then when I try to set the SelectedText or EditValue, as has been suggested, that function only returns the CountyID, which it tries to match to something in the DropDown list DisplayMember. I need it to match it to something in the ValueMember list. Using the following I got it to work but it is a HACK and I'd greatly appreciate finding a real solution. ``` lkuResidenceCounty.ItemIndex = Convert.ToInt32(row["ResidencyCountyID"].ToString()); ``` --- ## Original Post I have a lookup box (DevExpress) on a member form that I fill the possible values in from the DB with this code --> ``` lkuResidenceCounty.Properties.DataSource = ConnectBLL.BLL.Person.CountyList(); lkuResidenceCounty.Properties.PopulateColumns(); lkuResidenceCounty.Properties.DisplayMember = "CountyName"; lkuResidenceCounty.Properties.ValueMember = "CountyID"; lkuResidenceCounty.Properties.Columns[0].Visible = false; lkuResidenceCounty.Properties.Columns[2].Visible = false; lkuResidenceCounty.Properties.Columns[3].Visible = false; ``` This works just fine as the CountyName is displayed as expected. However, when I try to load an existing member's value for this field using the below, which is part of a function that takes a row from the DataSet --> ``` lkuResidenceCounty.Properties.ValueMember = row["ResidencyCountyID"].ToString(); ``` I get a blank box. I have stepped through the code and the correct ID is being returned for the member. Unfortunately the stored procedure to fill the dropdown options pulls from a Maintenance Table with the columns "CountyName" & "CountyID". So that is correct. Unfortunately, the stored procedure to load a specific person's current county pulls from the Person Table where there is a column called "ResidencyCountyID". It is so named because there is also a "ResponsibilityCountyID" column. 
I need a way for them both to coexist, any solutions? Thanks!
DisplayMember and ValueMember are used to populate the control with the list of selectable values. To set the selected value of a populated LookUpEdit control, set its [EditValue](http://www.devexpress.com/Help/?document=XtraEditors/clsDevExpressXtraEditorsLookUpEdittopic.htm&levelup=true) property: ``` lkuResidenceCounty.EditValue = row["ResidencyCountyID"].ToString(); ``` In response to your edit: According to the documentation: > The currently selected row determines values for the editor's edit value and display text. The value for BaseEdit.EditValue is obtained from the RepositoryItemLookUpEditBase.ValueMember field, while the text to display in the edit box is obtained from the RepositoryItemLookUpEditBase.DisplayMember field of the selected row. > > When you change BaseEdit.EditValue, > the editor locates and selects the row > whose > RepositoryItemLookUpEditBase.ValueMember > field contains the new value. The text > in the edit box is changed to reflect > the newly selected row. I don't use these controls but it sounds to me that it shouldn't be working as you described. I think the ToString() is the problem because EditValue accepts an object so it's probably expecting an int for the value. Try: ``` lkuResidenceCounty.EditValue = (int)row["ResidencyCountyID"]; ```
Why is it that you need the same LookUpEdit to have two value members? Is it being used standalone or in a grid? If standalone, you could swap out the two repository editors depending on the current row. But, are there more than 2 possible values for the ValueMember? That would also complicate things. UPDATE Looking at your edit, I think I understand what's going on a little more. So, you don't wish to change your ValueMember (which refers to a data column), but rather to change the value of the editor? If so, then you should definitely use EditValue (not SelectedText, which I don't believe is meant to be set), and assign it to row["value\_field"] like so: ``` lkuResidenceCounty.EditValue = row["ResidencyCountyID"]; ``` What happens when you do that?
Different Value Member, same control
[ "", "c#", ".net", "winforms", "data-binding", "devexpress", "" ]
I don't like to use XAML. I prefer to code everything in C#, but I think that I am doing things wrong. In which cases is it better to use XAML, and when do you use C#? What is your experience?
Creating an entire window in C# can be a mess of code. The best thing about WPF is that XAML allows you to separate your design from your logic, making for much easier-to-read code. I'll use C# when I need to create dynamic controls, but I tend to keep my general design, static storyboards, styles, datatemplates, etc. in XAML.
Check out [this video](https://youtu.be/BRxnZahCPFQ) on MVVM in WPF. If you want wrap your head around how to organize a WPF application vis-a-vis what goes in XAML, code behind and other abstractions, this is a great place to start.
XAML or C# code-behind
[ "", "c#", "wpf", "xaml", "code-behind", "" ]
I want to know whether it is a good idea to catch an exception based on a unique index in SQL, in Java. I want to catch an exception like 'duplicate entry for 1-0'; if so, handle the exception, otherwise insert into the database table as normal.
I say you don't do that, for two reasons: * the error messages are a bit unclear: *ERROR 1062 (23000): Duplicate entry 'xxx' for key 1*. Are you always 100% sure which key is 1? * it locks you in to a specific database vendor. I find it simpler to, *transactionally*: * check for the row's existence; * throw an exception if the row already exists; * insert the new row. --- **Performance issues**: I say *measure twice, cut once*. Profile the usage for your specific use case. Off the top of my head I would say that performance will not be an issue except for heavy db usage scenarios. The reason is that once you perform a `SELECT` over that specific row, its data will be placed in the database caches and *immediately* used for the insertion check done on the index for the `INSERT` statement. Keeping in mind that this access is backed by an index also leads to the conclusion that performance will not be an issue. But, as always, do measure.
I don't see why not. It's probably more efficient than running a query before the insert. It's probably better to catch the exception's error code rather than recognising the error message, though.
mysql unique index used as exception handling method in java
[ "", "java", "mysql", "exception", "indexing", "unique", "" ]
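For illustration only (this sketch uses Python and SQLite rather than Java and MySQL, so the vendor-specific error caveat from the accepted answer still applies), the catch-the-duplicate approach looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO person (email) VALUES ('a@example.com')")

try:
    # second insert violates the unique index on email
    conn.execute("INSERT INTO person (email) VALUES ('a@example.com')")
    duplicate = False
except sqlite3.IntegrityError:
    duplicate = True  # handle the duplicate here instead of pre-checking

assert duplicate
assert conn.execute("SELECT COUNT(*) FROM person").fetchone()[0] == 1
```

Note that the driver surfaces a typed exception here, which sidesteps the "which key is 1?" message-parsing problem; in JDBC the rough equivalent would be a constraint-violation SQLException, but the details vary by driver.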
Do you know any decent framework for creating Swing screens? I need to extend my application for implementors who will have to add their own screens. Possible options: 1. Netbeans platform with Matisse editor 2. Bean builder (very old and unsupported) 3. Eclipse VEP (beta) Any other option? --- **Update:** I am not looking for a design-time tool to build my screens. I am looking for an add-on or a platform to allow implementors (users of my application) to build custom screens for different clients. The screens have to be saved in some format (say xml) and will be displayed by another (view-only) application. Because of that, solutions like Matisse4MyEclipse don't fit, because I can't bundle their proprietary plugin with my application. --- **Update 2:** I am asking for a framework, not a design-time tool. I want some kind of component to embed in my application, so that users will be able to create screens by themselves.
[instantiations' WindowBuilder](http://www.instantiations.com/windowbuilder/index.html?id=1) - a commercial product
I have used the [JGoodies Form API](http://www.jgoodies.com/downloads/libraries.html) in the past, and I liked the simplicity of it. (I dislike GUI builders...) I guess it could be made configurable using Spring.
Java GUI creation framework
[ "", "java", "user-interface", "swing", "matisse", "" ]
Trying to remove all of the files in a certain directory gives me the following error: > OSError: [Errno 2] No such file or directory: '/home/me/test/\*' The code I'm running is: ``` import os test = "/home/me/test/*" os.remove(test) ```
`os.remove()` does not work on a directory, and `os.rmdir()` will only work on an empty directory. And Python won't automatically expand "/home/me/test/\*" like some shells do. You can use `shutil.rmtree()` on the directory to do this, however. ``` import shutil shutil.rmtree('/home/me/test') ``` Be careful, as it removes the files *and the sub-directories* as well.
os.remove doesn't resolve unix-style patterns. If you are on a unix-like system you can: ``` os.system('rm '+test) ``` Else you can: ``` import glob, os test = '/path/*' r = glob.glob(test) for i in r: os.remove(i) ```
Remove all files in a directory
[ "", "python", "unix", "file-management", "" ]
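If the goal is to empty the directory while keeping the directory itself (which `shutil.rmtree` does not do), a small sketch that avoids shell patterns entirely; it builds a throwaway temp directory so it is safe to run as-is:

```python
import os
import tempfile

# create a throwaway directory with a couple of files in it
d = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    open(os.path.join(d, name), "w").close()

# remove every plain file, but leave the directory (and any subdirs) alone
for entry in os.listdir(d):
    path = os.path.join(d, entry)
    if os.path.isfile(path):
        os.remove(path)

assert os.listdir(d) == []
os.rmdir(d)  # the directory is now empty, so rmdir succeeds
```

For the question's path, replace the `tempfile` setup with `d = "/home/me/test"`.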
> **Possible Duplicate:** > [Get current URL with JavaScript?](https://stackoverflow.com/questions/1034621) How do you get the address of the page you are on in JavaScript? For example, if I had a script at `somesite.com/javascript/home.html` and I want to find out the request address (`somesite.com/javascript/home.html`), how do I get this information in JavaScript?
You need to use: `document.location` or `window.location` You can read more [here](https://developer.mozilla.org/en/DOM/document.location). Or there is a little more explanation over [there](http://www.comptechdoc.org/independent/web/cgi/javamanual/javalocation.html). To clarify: **Originally Posted by Mozilla Developer Center** > document.location was originally a > read-only property, although Gecko > browsers allow you to assign to it as > well. For cross-browser safety, use > window.location instead.
``` window.location.href; ``` or ``` location.href; ``` `window` is the global object, so `location.href` will be identical to `window.location.href` and NOT `document.location.href` (as long as there's no enclosing function or `with` statement which shadows the property)
Request address in JavaScript
[ "", "javascript", "" ]
I need to pull a specific substring from a string of the form: ``` foo=abc;bar=def;baz=ghi ``` For example, how would I get the value of "bar" from that string?
You can use [charindex](http://msdn.microsoft.com/en-us/library/ms186323.aspx) and [substring](http://msdn.microsoft.com/en-us/library/ms187748.aspx). For example, to search for the value of "baz": ``` declare @str varchar(128) set @str = 'foo=abc;bar=def;baz=ghi' -- Make sure @str starts and ends with a ; set @str = ';' + @str + ';' select substring(@str, charindex(';baz=',@str) + len(';baz='), charindex('=',@str,charindex(';baz=',@str)) - charindex(';baz=',@str) - 1) ``` Or for the value of "foo" at the start of the string: ``` select substring(@str, charindex(';foo=',@str) + len(';foo='), charindex('=',@str,charindex(';foo=',@str)) - charindex(';foo=',@str) - 1) ``` Here's a UDF to accomplish this (more readable version inspired by BlackTigerX's answer): ``` create function dbo.FindValueInString( @search varchar(256), @name varchar(30)) returns varchar(30) as begin declare @name_start int declare @name_length int declare @value_start int declare @value_end int set @search = ';' + @search set @name_start = charindex(';' + @name + '=',@search) if @name_start = 0 return NULL set @name_length = len(';' + @name + '=') set @value_start = @name_start + @name_length set @value_end = charindex(';', @search, @value_start) return substring(@search, @value_start, @value_end - @value_start) end ``` As you can see, this isn't easy in Sql Server :) Better do this in the client language, or normalize your database so the substrings go in their own columns.
I have a generalized solution that works for this problem: ``` CREATE FUNCTION [dbo].[fn_StringBetween] ( @BaseString varchar(max), @StringDelim1 varchar(max), @StringDelim2 varchar(max) ) RETURNS varchar(max) AS BEGIN DECLARE @at1 int DECLARE @at2 int DECLARE @rtrn varchar(max) SET @at1 = CHARINDEX(@StringDelim1, @BaseString) IF @at1 > 0 BEGIN SET @rtrn = SUBSTRING(@BaseString, @at1 + LEN(@StringDelim1), LEN(@BaseString) - @at1) SET @at2 = CHARINDEX(@StringDelim2, @rtrn) IF @at2 > 0 SET @rtrn = LEFT(@rtrn, @at2 - 1) END RETURN @rtrn END ``` so if you run (just wrap your original string to be searched with ';' at beginning and end): ``` PRINT dbo.fn_StringBetween(';foo=abc;bar=def;baz=ghi;', ';bar=', ';') ``` you will get 'def' returned.
Find a specific substring using Transact-SQL
[ "", "sql", "sql-server", "" ]
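The accepted answer's closing advice, to do this in the client language instead, deserves a quick sketch; in Python, for example, the whole `name=value;...` string becomes a lookup table in one line:

```python
s = "foo=abc;bar=def;baz=ghi"

# split on ';', then split each piece on the first '=' only
pairs = dict(part.split("=", 1) for part in s.split(";"))

assert pairs["bar"] == "def"
assert pairs.get("qux") is None  # missing names are easy to handle too
```

The equivalent in C#, Java, etc. is a few lines at most, which is why normalizing the column (or parsing client-side) tends to beat string surgery in T-SQL.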
In C#, what data type should I use to represent monetary amounts? Decimal? Float? Double? I want to take in consideration: precision, rounding, etc.
Use [`System.Decimal`](http://msdn.microsoft.com/en-us/library/system.decimal.aspx): > The Decimal value type represents > decimal numbers ranging from positive > 79,228,162,514,264,337,593,543,950,335 > to negative > 79,228,162,514,264,337,593,543,950,335. > ***The Decimal value type is appropriate > for financial calculations requiring > large numbers of significant integral > and fractional digits and no round-off > errors.*** The Decimal type does not > eliminate the need for rounding. > Rather, it minimizes errors due to > rounding. Neither [`System.Single` (`float`)](http://msdn.microsoft.com/en-us/library/system.single.aspx) nor [`System.Double` (`double`)](http://msdn.microsoft.com/en-us/library/system.double.aspx) are ~~precise enough~~ capable of representing high-precision floating point numbers without rounding errors.
Use decimal and money in the DB if you're using SQL.
What data type should I use to represent money in C#?
[ "", "c#", "" ]
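The reasoning is not C#-specific; here is a quick Python demonstration of the rounding behavior in question, with `decimal.Decimal` standing in for `System.Decimal`:

```python
from decimal import Decimal

# binary floating point cannot represent most decimal fractions exactly
assert 0.1 + 0.2 != 0.3

# a decimal type keeps exact decimal digits, which is what money code needs
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# construct from strings, not floats, or the binary error comes along for the ride
assert Decimal(0.1) != Decimal("0.1")
```

The same "construct from an exact source" caveat applies in C#: a `decimal` literal (`0.1m`) is exact, while converting from `double` imports whatever error the `double` already carried.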
I'm currently working on a project where we have a lot of dependencies. I would like to compile all the referenced dll's into the .exe much like you would do with embedded resources. I have tried [ILMerge](http://research.microsoft.com/en-us/people/mbarnett/ilmerge.aspx) but it can't handle .xaml resources. So my question is: Is there a way to merge a WPF project with multiple dependencies into a single .exe?
[.NET reactor](http://www.eziriz.com/dotnet_reactor.htm) has the feature of merging the assemblies, and it's not very expensive.
<http://www.digitallycreated.net/Blog/61/combining-multiple-assemblies-into-a-single-exe-for-a-wpf-application> This worked like a charm for me :) and its completely free. Adding code in case the blog ever disappears. ## 1) Add this to your `.csproj` file: ``` <Target Name="AfterResolveReferences"> <ItemGroup> <EmbeddedResource Include="@(ReferenceCopyLocalPaths)" Condition="'%(ReferenceCopyLocalPaths.Extension)' == '.dll'"> <LogicalName>%(ReferenceCopyLocalPaths.DestinationSubDirectory)%(ReferenceCopyLocalPaths.Filename)%(ReferenceCopyLocalPaths.Extension)</LogicalName> </EmbeddedResource> </ItemGroup> </Target> ``` ## 2) Make your Main `Program.cs` look like this: ``` [STAThreadAttribute] public static void Main() { AppDomain.CurrentDomain.AssemblyResolve += OnResolveAssembly; App.Main(); } ``` ## 3) Add the `OnResolveAssembly` method: ``` private static Assembly OnResolveAssembly(object sender, ResolveEventArgs args) { Assembly executingAssembly = Assembly.GetExecutingAssembly(); AssemblyName assemblyName = new AssemblyName(args.Name); var path = assemblyName.Name + ".dll"; if (assemblyName.CultureInfo.Equals(CultureInfo.InvariantCulture) == false) path = String.Format(@"{0}\{1}", assemblyName.CultureInfo, path); using (Stream stream = executingAssembly.GetManifestResourceStream(path)) { if (stream == null) return null; var assemblyRawBytes = new byte[stream.Length]; stream.Read(assemblyRawBytes, 0, assemblyRawBytes.Length); return Assembly.Load(assemblyRawBytes); } } ```
Merging dlls into a single .exe with wpf
[ "c#", "wpf", "dll", "ilmerge" ]
For example, the string is "my name is xyz(25)" and I want to get 25 into a variable through PHP.
use preg\_match to do it: ``` $string = "xyz(25)"; preg_match("/.*\((\d*)\)/", $string, $matches); print_r($matches); ``` will return ``` Array ( [0] => xyz(25) [1] => 25 ) ```
Use [regular expressions](http://www.webcheatsheet.com/php/regular_expressions.php).
taking the value contained in the brackets using php
[ "php" ]
I am working with numbers a lot when editing a particular type of file, and it's mostly tedious work. The file has a format like this: ``` damagebase = 8.834 "abc_foo.odf" 3.77 "def_bar.odf" 3.77 "ghi_baz.odf" 3.77 "jkl_blah.odf" 4.05 ... ``` What would you recommend for writing a script that parses this and lets me programmatically change each number? Language: i use C#, some F# (noob), and Lua. If you suggest regexes, could you provide specific ones as i am not familiar with them?
You can match runs of non-whitespace and punt to [Double.Parse](http://msdn.microsoft.com/en-us/library/fd84bdyt.aspx): ``` int multiplier = 3; string input = "damagebase = 8.834\n" + " \"abc.odf\" 3.77\n" + " \"def.odf\" 3.77\n" + " \"ghi.odf\" .77\n" + " \"jkl.odf\" -4.05\n" + " \"mno.odf\" 5\n"; Regex r = new Regex(@"^(\w+)\s*=\s*(\S+)" + @"(?:\s+""([^""]+)""\s+(\S+))+", RegexOptions.Compiled | RegexOptions.Multiline); Match m = r.Match(input); if (m.Success) { double header = Double.Parse(m.Groups[2].Value); Console.WriteLine("{0} = {1}", m.Groups[1].Value, header * multiplier); CaptureCollection files = m.Groups[3].Captures; CaptureCollection nums = m.Groups[4].Captures; for (int i = 0; i < files.Count; i++) { double val = Double.Parse(nums[i].Value); Console.WriteLine(@" ""{0}"" {1}", files[i].Value, val * multiplier); } } else Console.WriteLine("no match"); ``` Running it gives ``` damagebase = 26.502 "abc.odf" 11.31 "def.odf" 11.31 "ghi.odf" 2.31 "jkl.odf" -12.15 "mno.odf" 15 ```
Perl is pretty good for stuff like this. Here's a perl script that will do what you want. ``` #!/usr/bin/env perl $multiplier = 2.0; while (<>) { $n = /=/ ? 2 : 1; @tokens = split; $tokens[$n] *= $multiplier; print "\t" if not /=/; print join(' ', @tokens) . "\n"; } ``` Usage: ``` ./file.pl input_file > output_file ```
Script to Parse and Change Numbers
[ "c#", "scripting", "numbers", "fileparsing" ]
I have an existing project that uses `@Override` on methods that override *interface* methods, rather than superclass methods. I cannot alter this in code, but I would like Eclpse to stop complaining about the annotation, as I can still build with Maven. How would I go about disabling this error? **Note: Due to project requirements, I need to compile for Java 1.5.**
Using the `@Override` annotation on methods that implement those declared by an interface is only valid from Java 6 onward. It's an error in Java 5. Make sure that your IDE projects are setup to use a Java 6 JRE, and that the "source compatibility" is set to 1.6 or greater: 1. Open the Window > Preferences dialog 2. Browse to Java > Compiler. 3. There, set the "Compiler compliance level" to 1.6. Remember that Eclipse can override these global settings for a specific project, so check those too. --- *Update:* The error under Java 5 isn't just with Eclipse; using `javac` directly from the command line will give you the same error. *It is not valid Java 5 source code.* However, you can specify the `-target 1.5` option to JDK 6's `javac`, which will produce a Java 5 version class file from the Java 6 source code.
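To make the version boundary concrete, here is a minimal sketch (the interface and method names are made up): compiled with `-source 1.6` it is accepted, while a Java 5 compiler rejects the annotated method.

```java
interface Greeter {
    String greet();
}

public class Hello implements Greeter {
    @Override            // implements an *interface* method: legal only from Java 6 on
    public String greet() {
        return "hello";
    }

    public static void main(String[] args) {
        System.out.println(new Hello().greet());
    }
}
```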
Do as follows: Project -> Properties -> java compiler -> * Enable project specific settings - 'yes' * Compiler compliance - 1.6 * generated class files and source compatibility - 1.5
Why does Eclipse complain about @Override on interface methods?
[ "java", "eclipse", "interface", "annotations", "syntax-error" ]
Well, I'm coding the OnPaint event for my own control and it is very nescessary for me to make it pixel-accurate. I've got a little problem with borders of rectangles. See picture: *removed dead ImageShack link* These two rectangles were drawn with the same location and size parameters, but using different size of the pen. See what happend? When border became larger it has eaten the free space before the rectangle (on the left). I wonder if there is some kind of property which makes border be drawn inside of the rectangle, so that the distance to rectangle will always be the same. Thanks.
You can do this by specifying [PenAlignment](http://msdn.microsoft.com/en-us/library/z62ath7a.aspx) ``` Pen pen = new Pen(Color.Black, 2); pen.Alignment = PenAlignment.Inset; //<-- this g.DrawRectangle(pen, rect); ```
If you want the outer bounds of the rectangle to be constrained in all directions you will need to recalculate it in relation to the pen width: ``` private void DrawRectangle(Graphics g, Rectangle rect, float penWidth) { using (Pen pen = new Pen(SystemColors.ControlDark, penWidth)) { float shrinkAmount = pen.Width / 2; g.DrawRectangle( pen, rect.X + shrinkAmount, // move half a pen-width to the right rect.Y + shrinkAmount, // move half a pen-width to the down rect.Width - penWidth, // shrink width with one pen-width rect.Height - penWidth); // shrink height with one pen-width } } ```
Border in DrawRectangle
[ "c#", "graphics" ]
I'm currently doing a project in C# with a lot of rendering, and throughout almost all the classes there's a constant value of the type integer being used for scaling of the rendering. I know I could define this constant in one place as a normal variable and then pass it around, but this seemes really cumbersome. When is it acceptable to use static variables in C#? The easiest solution to my problem would be to create a class containing the static variable that all the other classes could reference - would that be bad design?
Not bad design at all. In fact, having a Common or Utility namespace and class that exposes static methods and static values centralizes these values in one place so you can ensure that every module in you application is using the appropriate values. It's low cohesion, but acceptable for the benefit. I see no problem with it.
**How** constant is the value? `static` is fine for things that are `readonly`, but you can quickly get into a mess if it *isn't* `readonly` - especially if you have multiple threads. The scaling factor doesn't sound like a hard constant to me - i.e. it isn't: ``` public const double ScaleFactor = 1; ``` I wouldn't hesitate to use a static variable for something I load once and leave alone. Other than that, I'd probably encapsulate (in your case) some kind of `RenderContext` with this value and any other utility methods - and pass the `RenderContext` between methods; this can also help you abstract away from the underlying implementation if you need to unit test, etc. As you find you need more properties (and you inevitably will), you just extend the `RenderContext` class - nothing else changes. --- (edit) Also - consider the future: will you ever be doing more than one render at once? Since we all have lots of cores now, etc... static is good if all the threads share a value. There is `[ThreadStatic]`, but that is a bit messy by comparison.
When to use static variables?
[ "c#", "global-variables" ]
I'm having a tricky debugging issue, perhaps due to my lack of understanding about how c++ manages memory. The code is too long to post, but the essential setup is as follows: ``` global_var = 0; int main() { for(i = 0; i < N; ++i) { ClassA a; new ClassB(a); // seems to be problem! } } ``` For some `N`, `global_var` gets corrupted (is no longer 0). There is nothing in the constructors of ClassA or ClassB that mess with `global_var`. Replacing `new ClassB(a)` with `ClassB b(a)` seems to solve the problem, although this doesn't allow me to do what I want (create a `boost::ptr_vector` with the `new ClassB(a)` instances). Any ideas on what might be going wrong? **Update:** I'm really doing something like: ``` global_var = 0; int main() { boost::ptr_vector<ClassB> myobjects; for(i = 0; i < N; ++i) { ClassA a; myobjects.push_back(new ClassB(a)); // seems to be problem! } } ``` Both create problems. But why is this a problem? Should I be doing something else to put a bunch of objects into a queue? I'm using `myobjects` it as the basis of a Command Pattern. **Update** `classB' looks like: ``` class ClassB { public: ClassB() {} ClassB(ClassA a) : a_(a) {} private: ClassA a_; } ``` ClassA is just a simple list initialization as well (in real life). Problem? **Update** I believe this may have something to do with the fact that global\_var is actually a complex matrix type and there may be issues with the way it allocates memory.
After much exploration, this behavior turned out to be due to a bug in the underlying class of global\_var. There was a subtle bug in the way global and static memory allocation was being done.
Memory in C++ is not automatically freed by default. The result of ``` new ClassB(a); // seems to be problem! ``` is being created on the heap. The address of the memory is not assigned to anything. Therefore one can't manually delete it like ``` ClassB* b = new ClassB(a); // creates a B, returns an address, // and stores the address in b. delete b; // deletes memory at address "b" ``` The failure to delete the memory may cause the heap to overflow into other regions of memory in your program, causing your global variable to be overwritten. This could happen without assigning the result of new to a variable, so long as its not deleted.
C++ overflow with new keyword debugging
[ "c++", "debugging", "pointers" ]
``` DB Table: id int(6) message char(5) ``` I have to add a record (message) to the DB table. In case of duplicate message(this message already exists with different id) I want to delete (or inactivate somehow) the both of the messages and get their ID's in reply. Is it possible to perform with only one query? Any performance tips ?... P.S. I use PostgreSQL. The main my problem I worried about, is a need to use locks when performing this with two or more queries... Many thanks!
If you really want to worry about locking do this. 1. UPDATE table SET status='INACTIVE' WHERE id = 'key'; If this succeeds, there was a duplicate. * INSERT the additional inactive record. Do whatever else you want with your duplicates. If this fails, there was no duplicate. * INSERT the new active record. 2. Commit. This seizes an exclusive lock right away. The alternatives aren't quite as nice. * Start with an INSERT and check for duplicates doesn't seize a lock until you start updating. It's not clear if this is a problem or not. * Start with a SELECT would need to add a LOCK TABLE to assure that the select held the row found so it could be updated. If no row is found, the insert will work fine. If you have multiple concurrent writers and two writers could attempt access at the same time, you may not be able to tolerate row-level locking. Consider this. 1. Process A does a LOCK ROW and a SELECT but finds no row. 2. Process B does a LOCK ROW and a SELECT but finds no row. 3. Process A does an INSERT and a COMMIT. 4. Process B does an INSERT and a COMMIT. You now have duplicate active records. Multiple concurrent insert/update transactions will only work with table-level locking. Yes, it's a potential slow-down. Three rules: (1) Keep your transactions as short as possible, (2) release the locks as quickly as possible, (3) handle deadlocks by retrying.
You could write a procedure with both of those commands in it, but it may make more sense to use an insert trigger to check for duplicates (or a nightly job, if it's not time-sensitive).
Insert/Delete performance
[ "python", "database", "performance", "database-design" ]
Is there a way to periodically run a Unix command (`ps` in my case) in Java? The loop I wrote: ``` while( this.check ) { try { ProcessBuilder pb = new ProcessBuilder("ps"); Process proc; System.out.println(" * * Running `ps` * * "); byte[] buffer; String input; proc = pb.start(); BufferedInputStream osInput = new BufferedInputStream(proc.getInputStream()); //prints 0 every time after the first System.out.println(osInput.available()); buffer = new byte[osInput.available()]; osInput.read(buffer); input = new String(buffer); for( String line : input.split("\n")) { if( line.equals("") ) continue; this.handlePS(line); } proc.destroy(); try { Thread.sleep(10000); } catch (InterruptedException ie) { ie.printStackTrace(); } } catch (IOException ioe) { ioe.printStackTrace(); } } } ``` doesn't work. It runs perfectly fine the first time, but there are 0 bytes available from the input stream every time after that. I'd try the `watch` command, but this Solaris box doesn't have that. I can't use a cron job since I need to know if the PID is there in the Java Application. Any ideas? Thanks in advance. EDIT: cannot use cron job EDIT: I'm making a new `Thread` of the same type (PS) after it concludes, so I am definitely making a new ProcessBuilder every time. EDIT: I put the loop that didnt work back in since it caused confusion.
I'm not certain where the loop is, but you will need to create a new `Proc` object (and thus a new `InputStream`) each time through the loop. Otherwise you will always be looking at the result of the first call. The javadocs for `ProcessBuilder` indicate that you do not need to create one of those each time. There may also be a race condition where the input stream is not yet ready when you call `available()`. You should look at making certain that the input stream has reached EOF (which will happen with ps, although not with, say, top) before printing the results. You are also not handling encoding properly, although I don't know what kind of encoding the output of "ps" is (outside of ASCII). Since "ps" is probably ASCII this is reasonably safe, but may not be for other commands (and for other input streams).
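A sketch of that fix: a fresh `Process` per poll, reading to EOF with a `BufferedReader` instead of trusting `available()` (class name invented; the 10-second sleep is shortened so the sketch finishes quickly, and only two polls are run instead of an endless loop).

```java
import java.io.*;

public class PsPoller {
    // Reads the full stdout of a process until EOF — what available() does not guarantee.
    static String readAll(InputStream in) throws IOException {
        BufferedReader r = new BufferedReader(new InputStreamReader(in));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = r.readLine()) != null) sb.append(line).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 2; i++) {                        // two polls for the sketch
            Process proc = new ProcessBuilder("ps").start(); // fresh Process each pass
            String out = readAll(proc.getInputStream());
            proc.waitFor();                                  // reap child, collect status
            System.out.println("poll " + i + ": " + out.split("\n").length + " lines");
            Thread.sleep(100);                               // shortened from 10 seconds
        }
    }
}
```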
In addition to Kathy's answer, you should also gather stdout and stderr in separate threads for each invocation. Otherwise the process will block waiting for you to read this data. See [this answer](https://stackoverflow.com/questions/882772/capturing-stdout-when-calling-runtime-exec/882795#882795) for more details. EDIT. Are you calling [waitFor()](http://www.j2ee.me/j2se/1.4.2/docs/api/java/lang/Process.html#waitFor%28%29) to gather any exit status ? The way I would normally approach this is to execute and then call `waitFor()`. I think `destroy()` may be redundant in this context.
Repeat Unix Command in Java
[ "java", "unix", "solaris", "inputstream" ]
We are currently trying to improve our usability in our intranet web app. One of our goals is to declutter certain pages which have large Grids on them. To do this, we have started putting commonly used actions (like Delete, Reset To Zero, Mark As Complete, etc.) in the grids as functions. My initial idea was to find and use icons (i.e. Delete = a big red x, Reset = an arrow curving around itself, Mark As Complete a checkmark, etc.) to represent these tasks. Some of my coworkers agree with me; they say the icons make the tool look more professional. Others, however, say that icons obscure their intended function, and simple text such as *Delete* would be more appropriate. Assuming that we will include a tooltip description of the function regardless of the type we choose (icons or text), which one is more usable? Can we use icons for some actions and text for others, or will that hurt usability?
I'm a big fan of icons. Heck, the textarea I'm typing this answer into right now has nothing but icons on top of it. You've got to be a moron not to know that a big dark B means bold... I think if your icon is self explanatory (like a big red X for delete etc) then that's for sure the way to go. It also makes it easier for the user to find the right action.
Using icons for some functions and text for others probably isn't very consistent (unless you follow established idioms such as toolbars and menus, which are both places for commands, even though one is for icons, the other one for labels – doesn't mean that it's a good idiom, though). To quote from Jef Raskin's *The Humane Interface:* > In every study that considered the question, icons were demonstrated to be more difficult to understand than were labels, especially at first viewing, which contradicts one of the most frequently cited reasons for using icons, namely, comprehensibility for beginners. GUIs often present us with windows full of identical icons, each with a label. The icons are small and numerous, and there are dozens of different icons. The limited conditions under which icons are effective do not obtain in present computer systems. > > Although it is true that tiny icons can take less screen space than labels, you have to ask: At what cost? The smaller a button, the longer it takes to operate it, and the more difficult it is to find; also, it is difficult to make a small icon distinctive. Another small point: Icons take more time to create than do words. and > Mayhew [1] cites a number of research studies on the use of icons. Unfortunately, most of the studies did not compare labels to icons. But from these and other studies, we can conclude that icons are most effective when there are at most a dozen of them and when at most a dozen are seen at one time. In addition, it is essential that they > > * Are visually distinct > * Do a good job of representing the appropriate concept > * Are presented at a reasonably large size, typically larger than a text label would be > > [1] Mayhew, Deborah. *Principles and Guidelines in Software User Interface Design* (Englewood Cliffs, NJ.: Prentice-Hall, 1992). 
I'd agree with Raskin on this point that in many cases icons really obscure the meaning of concepts and commands and you need additional text anyway to explain them. Aza Raskin's article [*The End of an Icon*](http://www.azarask.in/blog/post/the_end_of_an_icon/) is a good read on this as well.
Icons vs Text for Commonly Used Actions
[ "c#", "asp.net", "usability", "icons" ]
I'm trying to use the public methods/classed from a project provided as a jar file (called Hello.jar for instance) wrapped in a package called hello. ``` package hello; public class Hello { public static void main(String[] args) { coucou(); } public static void coucou() { System.out.println("Hello there"); } } ``` In a separate project called Tool, I want to be able to call the method `Hello.coucou()` so I wrote something like this: ``` import hello.*; public class Tool { public static void main(String[] args) { System.out.println("main program running"); Hello.coucou(); } } ``` and I compiled Tool.java with the following command (under linux): ``` $ javac Tool.java -classpath .:./extern/: ``` where Hello.jar is located in the folder `./extern` This seems to compile fine but when I launch it (i.e. java Tool), I get this: ``` main program running Exception in thread "main" java.lang.NoClassDefFoundError: hello/Hello at Tool.main(Tool.java:9) Caused by: java.lang.ClassNotFoundException: hello.Hello at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:323) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:268) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:336) ... 1 more ``` I am new to Java (C/C++ background) and I don't understand what I'm doing wrong. Any ideas? 
Cheers David --- **Edit:** I tried adding Hello.jar to the classpath on the command line, but I still get the same error: ``` $ javac Tool.java -classpath .:./extern/Hello.jar: $ java Tool -classpath .:./extern/Hello.jar: main program running Exception in thread "main" java.lang.NoClassDefFoundError: hello/Hello at Tool.main(Tool.java:9) Caused by: java.lang.ClassNotFoundException: hello.Hello at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:323) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:268) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:336) ... 1 more ```
Actually the trick was in the order of the arguments on the command line: if the -cp (or -classpath) flag is set last, then it doesn't work ``` java Tool -cp .:extern/Hello.jar ``` It has to come first, like: ``` java -cp .:extern/Hello.jar Tool ``` !!!
You need the Hello.jar on the classpath when you run as well as when you compile.
Using a jar in a Java project?
[ "java", "linux", "command-line", "jar" ]
I am trying to bind an ASP.NET `GridView` control to an `string` array and I get the following item: > A field or property with the name > 'Item' was not found on the selected > data source. What is correct value I should use for DataField property of the asp:BoundField column in my GridView control. Here is my source code: ASPX page ``` <asp:GridView ID="MyGridView" runat="server" AutoGenerateColumns="false"> <Columns> <asp:BoundField DataField="Item" /> <asp:CommandField ButtonType="Link" ShowSelectButton="true" SelectText="Click Me!" /> </Columns> </asp:GridView> ``` Code Behind: ``` string[] MyArray = new string[1]; MyArray[0] = "My Value"; MyGridView.DataSource = MyArray; MyGridView.DataBind(); ``` **UPDATE** I need to have the `AutoGenerateColumns` attribute set to `false` because I need to generate additional `asp:CommandField` columns. I have updated my code sample to reflect this scenario
One method is to pass it a class with a single, named field. That way, you can give it a name. ``` public class GridRecord { public string MyValue { get; set; } } ``` Then convert your string array to a list of the class ``` string[] MyArray = new string[1]; MyArray[0] = "My Value"; List<GridRecord> MyList = ( from ar in MyArray select new GridRecord { MyValue = ar }).ToList(); MyGridView.DataSource = MyList; MyGridView.DataBind(); ``` Now you can name your DataField property ``` <asp:GridView ID="MyGridView" runat="server" AutoGenerateColumns="false"> <Columns> <asp:BoundField DataField="MyValue" /> </Columns> </asp:GridView> ```
After hours of search, I finally found that there is a special DataField for this case: "**!**" ``` <asp:GridView ID="MyGridView" runat="server" AutoGenerateColumns="false"> <Columns> <asp:BoundField DataField="!" /> </Columns> </asp:GridView> ``` I hope it'll help someone one day :)
Binding an ASP.NET GridView Control to a string array
[ "c#", "asp.net", "data-binding", "gridview" ]
I have a GridView, each row has Edit button. After it's clicked, one of the columns turns into a drop down list where users can select value. Edit button becomes Update - so very simple usual scenario. Now, I don't seem to be able to grab the selected drop down list after Update is clicked. Here is my code: ``` protected void gv_UpdateRow(string arg) { int currentIndex = gv.EditIndex; gv.EditIndex = -1; GridViewRow currentRow = gv.Rows[currentIndex]; try { string value2 = ((DropDownList)currentRow.FindControl("ddlValueTwo")).SelectedItem.ToString(); } catch { Response.Write("error"); } BindGridView(); } ``` So basically, the program execution always ends up at the catch statement. I have checked and drop down list is found, the exception is thrown when selected item is not found. What gives? I use c# asp.net 2.0 web forms
Got it! It was the IsPostBack check I was missing, so the GridView was being rebound on every page load, and since the drop down list is inside the grid, the data was lost. However, one thing I forgot to mention here is that all this code sits inside a user control (ascx file), and the IsPostBack property applies to the page, not the control, which is useless in my case. For example, in my circumstances I add the control manually, so IsPostBack will ALWAYS be true, so to avoid this problem I had to implement a session-based solution. Hope this helps someone. There is also a UserControl.IsPostBack property, but it didn't perform as expected; perhaps they got it right for 3.0
Looks like a databinding error; you are trying to access data that is not present yet...
Getting a selected value from the drop down list inside a GridView on Update
[ "c#", "gridview" ]
Could you guys please tell me how I can make the following code more pythonic? The code is correct. Full disclosure - it's problem 1b in Handout #4 of [this](http://www.stanford.edu/class/cs229/materials.html) machine learning course. I'm supposed to use newton's algorithm on the two data sets for fitting a logistic hypothesis. But they use matlab & I'm using scipy Eg one question i have is the matrixes kept rounding to integers until I initialized one value to 0.0. Is there a better way? Thanks ``` import os.path import math from numpy import matrix from scipy.linalg import inv #, det, eig x = matrix( '0.0;0;1' ) y = 11 grad = matrix( '0.0;0;0' ) hess = matrix('0.0,0,0;0,0,0;0,0,0') theta = matrix( '0.0;0;0' ) # run until convergence=6or7 for i in range(1, 6): #reset grad = matrix( '0.0;0;0' ) hess = matrix('0.0,0,0;0,0,0;0,0,0') xfile = open("q1x.dat", "r") yfile = open("q1y.dat", "r") #over whole set=99 items for i in range(1, 100): xline = xfile.readline() s= xline.split(" ") x[0] = float(s[1]) x[1] = float(s[2]) y = float(yfile.readline()) hypoth = 1/ (1+ math.exp(-(theta.transpose() * x))) for j in range(0,3): grad[j] = grad[j] + (y-hypoth)* x[j] for k in range(0,3): hess[j,k] = hess[j,k] - (hypoth *(1-hypoth)*x[j]*x[k]) theta = theta - inv(hess)*grad #update theta after construction xfile.close() yfile.close() print "done" print theta ```
``` x = matrix([[0.],[0],[1]]) theta = matrix(zeros([3,1])) for i in range(5): grad = matrix(zeros([3,1])) hess = matrix(zeros([3,3])) [xfile, yfile] = [open('q1'+a+'.dat', 'r') for a in 'xy'] for xline, yline in zip(xfile, yfile): x.transpose()[0,:2] = [map(float, xline.split(" ")[1:3])] y = float(yline) hypoth = 1 / (1 + math.exp(-theta.transpose() * x)) grad += (y - hypoth) * x hess -= hypoth * (1 - hypoth) * x * x.transpose() theta -= inv(hess) * grad print "done" print theta ```
One obvious change is to get rid of the "for i in range(1, 100):" and just iterate over the file lines. To iterate over both files (xfile and yfile), zip them, i.e. replace that block with something like: ``` import itertools for xline, yline in itertools.izip(xfile, yfile): s= xline.split(" ") x[0] = float(s[1]) x[1] = float(s[2]) y = float(yline) ... ``` (This assumes the file is 100 lines, i.e. you want the whole file.) If you're deliberately restricting to the *first* 100 lines, you could use something like: ``` for i, xline, yline in itertools.izip(range(100), xfile, yfile): ``` However, it's also inefficient to iterate over the same file 6 times - better to load it into memory in advance, and loop over it there, i.e. outside your loop, have: ``` xfile = open("q1x.dat", "r") yfile = open("q1y.dat", "r") data = zip([map(float, line.split(" ")[1:3]) for line in xfile], map(float, yfile)) ``` And inside just: ``` for (x1,x2), y in data: x[0] = x1 x[1] = x2 ... ```
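The pairing idea can be shown in one self-contained sketch (the two-line inputs below are invented stand-ins for q1x.dat and q1y.dat; in Python 3, zip is already lazy, so izip is unnecessary):

```python
import io

# Invented stand-ins for q1x.dat / q1y.dat so the sketch runs on its own.
xfile = io.StringIO("1 0.5 1.5\n1 0.25 0.75\n")
yfile = io.StringIO("1\n0\n")

# Pair the files line-by-line and pull out (x1, x2) and y for each example.
data = [(list(map(float, xline.split()[1:3])), float(yline))
        for xline, yline in zip(xfile, yfile)]

assert data == [([0.5, 1.5], 1.0), ([0.25, 0.75], 0.0)]
```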
How do you make this code more pythonic?
[ "python", "machine-learning", "scipy" ]
I am used to working with Apache servers, so when mod\_rewrite is enabled, I can create an htaccess file and use URL rewriting. Here's my htaccess file: ``` RewriteEngine On RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] RewriteRule ^.*$ index.php [NC,L] ``` Now I've built this site that uses this URL rewriting module but I have come to learn that it is a Microsoft server. Can I use my htaccess file? Is there something I need to change to get it to work? How can I tell if URL rewriting is set up on the Microsoft server?
You can use your configuration as is, in [Ionic's Isapi Rewrite Filter (IIRF)](http://iirf.codeplex.com). IIRF is free, open-source.
If you use IIS 7 then you can use IIS URL Rewrite Module, that has an ["Import Rules" feature](http://learn.iis.net/page.aspx/470/importing-apache-modrewrite-rules/) that can be used to translate mod\_rewrite rules to IIS URL rewrite format. These particular rewrite rules will not translate because the RewriteCond uses the "-s" and "-l" flags which check if the requested URL corresponds to a non-zero size file or to a symbolic link on a file system. If your application does not use any symbolic links then you can safely replace these conditions with: ``` RewriteCond %{REQUEST_FILENAME} -f [OR] ``` and then convert the rules by using IIS URL Rewrite UI. That will result in these rules: ``` <rewrite> <rules> <rule name="Imported Rule 1" stopProcessing="true"> <match url="^.*$" /> <conditions logicalGrouping="MatchAny"> <add input="{REQUEST_FILENAME}" matchType="IsFile" /> <add input="{REQUEST_FILENAME}" matchType="IsDirectory" /> </conditions> <action type="None" /> </rule> <rule name="Imported Rule 2" stopProcessing="true"> <match url="^.*$" /> <action type="Rewrite" url="index.php" /> </rule> </rules> </rewrite> ```
PHP, URL rewriting with htaccess, and Microsoft IIS Url Rewriting
[ "php", "iis", "url-rewriting" ]
I am trying to construct method which returns a boolean: ``` public boolean isStringValid(String s){ boolean isValid; String temp = null; // only combinations of 'A','B','C' are allowed for (int i = 0; i < s.length(); i++) { temp = s.substring(i, i+1); if (temp.equals("A")|temp.equals("B")|temp.equals("C")){ isValid= true; }else{ isValid= false; } } return isValid; } ``` But I get a compiler error saying "the local variable isValid may not have been initialized". What I am trying to do is take a string and examine its every letter, if any letter other than A, B or C is found in the string, the isStringValid method should return a false. Only after every letter is checked and found to be either A,B or C can the method return true. I guess I am having trouble figuring out the scope of the local variables. What is the appropriate way for the method to return from within the if/else blocks? If that is not possible, what would you recommend is the best way to design this? Thank you in avdance Kindest regards
I agree with the answers that state that you should initialize the isValid boolean variable. However, you could do what you want with a [regular expression](http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html#sum) ``` /* * returns true only if s is non-empty and every character is 'A', 'B' or 'C' */ public boolean isStringValid(String s){ return Pattern.matches("[ABC]+", s); } ``` [ABC] matches a single character that is 'A', 'B' or 'C'; the + requires one or more of them; and Pattern.matches anchors the expression to the whole string, so any other character anywhere in s makes the match fail. The empty string fails as well, since + demands at least one character.
What happens if you get an empty string as argument? The compiler needs to be sure you're always returning something. Initialize your isValid variable with false, this way if the method gets an empty string it would just return the *default value*. ``` boolean isValid = false; ``` In Java you can't return a variable that may not be initialized, just be sure in all possible flows the variable is set with any value. *UPDATE: This will solve your question but I suggest you to take a look to the answers below because your method's logic isn't correct*
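For reference, a sketch of the original loop rewritten with early returns (treating the empty string as invalid is a judgment call): the method bails out on the first disallowed character, so no later iteration can overwrite the result, which is the bug in the question's if/else.

```java
public class Validator {
    // Returns false as soon as one bad character is found; true only if every
    // character in a non-empty string is 'A', 'B' or 'C'.
    static boolean isStringValid(String s) {
        if (s.isEmpty()) return false;   // assumption: empty counts as invalid
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c != 'A' && c != 'B' && c != 'C') return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isStringValid("ABCA")); // prints true
        System.out.println(isStringValid("ABXA")); // prints false
    }
}
```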
Problem with scoped local variables and conditionals
[ "java", "variables", "local" ]
I am trying to convert a set object to a list... for example `p = list('abc')` is not working. Any ideas, or is it inherent in App Engine?
A set object to list is converted like so: ``` my_list = list(my_set) ``` I don't understand your example though. Converting a string to a list results in a list of characters: ``` >>> list('abc') ['a', 'b', 'c'] ```
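One caveat worth adding (an illustrative sketch, not part of the original answer): sets are unordered, so the list produced by `list(my_set)` can come out in any order; use `sorted()` when a deterministic order matters.

```python
my_set = set('abc')

my_list = list(my_set)        # contents are right, order is arbitrary
sorted_list = sorted(my_set)  # deterministic, ascending order

print(sorted_list)  # ['a', 'b', 'c']
```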
There is no specific change "inherent" in App Engine with respect to common aspects like lists. It is just the same, plain Python.
Can't seem to get list() working
[ "", "python", "django", "google-app-engine", "list", "" ]
Consider this line: ``` if (object.getAttribute("someAttr").equals("true")) { // .... ``` Obviously this line is a potential bug, the attribute might be `null` and we will get a `NullPointerException`. So we need to refactor it to one of two choices: **First option:** ``` if ("true".equals(object.getAttribute("someAttr"))) { // .... ``` **Second option:** ``` String attr = object.getAttribute("someAttr"); if (attr != null) { if (attr.equals("true")) { // .... ``` The first option is awkward to read but more concise, while the second one is clear in intent, but verbose. Which option do you prefer in terms of readability?
I've always used ``` if ("true".equals(object.getAttribute("someAttr"))) { // .... ``` because although it is a little more difficult to read, it's much less verbose, and I think it's readable enough that you get used to it very easily
In the second option, you can take advantage of short-circuiting `&&`: ``` String attr = object.getAttribute("someAttr"); if (attr != null && attr.equals("true")) { // .... ```
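The reason this is safe is short-circuit evaluation: the right-hand side of `&&` never runs when the left-hand side is already false. A small Python sketch of the same pattern (illustrative only):

```python
def is_true(attr):
    # The equality test only runs when attr is not None,
    # so a missing attribute can never blow up.
    return attr is not None and attr == "true"

print(is_true(None))     # False, no error raised
print(is_true("true"))   # True
print(is_true("false"))  # False
```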
Gracefully avoiding NullPointerException in Java
[ "", "java", "coding-style", "nullpointerexception", "code-readability", "" ]
``` var seq = Enumerable.Range(1, 10).Reverse(); var sort1 = seq.OrderBy(i => i); var sort2 = seq.OrderBy(delegate(int i) { return i; }); ``` I think sort2 is more explicit but sort1 is shorter. Besides that, I don't really know the difference. What is the recommended way of doing this?
Lambda expressions are (IMO) better than anonymous methods in every case *except* where you don't care about the parameters, in which case there's a nice shortcut: ``` // Lambda expression has to name the parameters EventHandler x = (sender, args) => Console.WriteLine("Hi"); // Anonymous method can ignore them EventHandler x = delegate { Console.WriteLine("Hi"); }; ``` Lambda expressions have two other "problems" IMO: * Obviously they're not available if you're not using C# 3. (Although you can target .NET 2.0 from VS2008 and still use them.) * The syntax for a parameterless lambda expression is somewhat clunky: ``` () => stuff ```
I much prefer the lambda syntax (sort1) where possible. I only use the more verbose syntaxes where they are required. I consider the extra stuff non-productive code that just gets in the way of understanding what I'm writing. Edit: Unless of course I'm working on a .NET 2.0 app, where you can't use the lambda syntax. Then, I'm just glad I at least have anonymous methods.
What is the recommended way of writing anonymous functions in C#?
[ "", "c#", "syntax", "lambda", "anonymous-methods", "" ]
I am trying to implement URL rewriting in my PHP application. Can someone share a step-by-step procedure for implementing URL rewriting in PHP and MySQL? In my application I want to implement the following URL rewriting: I want to redirect ``` 1. http://example.com/videos/play/google-io-2009-wave-intro 2. http://example.com/videos/play/203/google-io-2009-wave-intro ``` to ``` 1. http://example.com/videos/play.php?title=google-io-2009-wave-intro 2. http://example.com/videos/play.php?id=203 ``` Please tell me how to implement URL rewriting for both of the above forms. One more thing: which of the following two URL types is best from an SEO, management and application point of view? ``` 1. http://example.com/videos/play/google-io-2009-wave-intro 2. http://example.com/videos/play/203/google-io-2009-wave-intro ```
[A Beginner's Guide to mod\_rewrite](http://www.workingwith.me.uk/articles/scripting/mod_rewrite). Typically this will be nothing more than enabling the mod\_rewrite module (you likely already have it enabled with your host), and then adding a .htaccess file into your web-directory. Once you've done that, you are only a few lines away from being done. The tutorial linked above will take care of you. Just for fun, here's a [Kohana](http://www.kohanaframework.org) .htaccess file for rewriting: ``` # Turn on URL rewriting RewriteEngine On # Installation directory RewriteBase /rootDir/ # Protect application and system files from being viewed RewriteRule ^(application|modules|system) - [F,L] # Allow any files or directories that exist to be displayed directly RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d # Rewrite all other URLs to index.php/ RewriteRule .* index.php/$0 [PT,L] ``` What this will do is take all requests and channel them through the index.php file. So if you visited www.examplesite.com/subjects/php, you may actually be visiting www.examplesite.com/index.php?a=subjects&b=php. If you find these URLs attractive, I would encourage you to go one step further and check out the MVC Framework (Model, View, Controller). It essentially allows you to treat your website like a group of functions: www.mysite.com/jokes ``` public function jokes ($page = 1) { # Show Joke Page (Defaults to page 1) } ``` Or, www.mysite.com/jokes/2 ``` public function jokes ($page = 1) { # Show Page 2 of Jokes (Page 2 because of our different URL) } ``` Notice how the first forward slash calls a function, and all that follow fill up the parameters of that function. It's really very nice, and make web-development much more fun!
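Once every request is funneled through one entry point, the remaining work is just splitting the path into parameters. A rough sketch of that dispatch logic, written here in Python for illustration (the function name and routing rules are hypothetical, not part of mod_rewrite):

```python
def parse_video_url(path):
    """Map /videos/play/203/some-title or /videos/play/some-title
    to the query parameters the PHP script expects."""
    parts = [p for p in path.strip("/").split("/") if p]
    if len(parts) < 3 or parts[0] != "videos" or parts[1] != "play":
        return None
    if parts[2].isdigit():
        return {"id": int(parts[2])}   # numeric segment -> play.php?id=203
    return {"title": parts[2]}         # slug segment -> play.php?title=...

print(parse_video_url("/videos/play/203/google-io-2009-wave-intro"))  # {'id': 203}
print(parse_video_url("/videos/play/google-io-2009-wave-intro"))      # {'title': 'google-io-2009-wave-intro'}
```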
You cannot do this with PHP alone. You'll need to look into mod\_rewrite (assuming you are using apache).
How to do URL re-writing in PHP?
[ "", "php", "apache", "mod-rewrite", "seo", "url-rewriting", "" ]
Python has a lot of GUI libraries: tkinter, wxWidgets, pyGTK etc. But all these GUIs need to be installed and are quite heavyweight, so it's a bit complex to deploy end-user GUI Python apps that rely on the mentioned GUI libraries. Recently, I have thought about Python's built-in `ctypes` module. Theoretically, it's possible to create a pure Python GUI library that will use `ctypes` on Windows ( `windll.user32.CreateWindowEx`, etc ), native pyObjC on MacOS and pyGTK / pyQt on GNOME / KDE. Does such a library exist? If not, what do you think is wrong with this idea?
The path of least effort and best results would be to learn what it takes to deploy an app using those existing GUI libraries.
Starting in Python 2.7 and 3.1, Tk will look a lot better. <http://docs.python.org/dev/whatsnew/2.7.html#ttk-themed-widgets-for-tk> "Tcl/Tk 8.5 includes a set of themed widgets that re-implement basic Tk widgets but have a more customizable appearance and can therefore more closely resemble the native platform’s widgets. This widget set was originally called Tile, but was renamed to Ttk (for “themed Tk”) on being added to Tcl/Tk release 8.5."
Pure python gui library?
[ "", "python", "tkinter", "pyqt", "pygtk", "pyobjc", "" ]
Is there any way to read the content of a RAR file (support for multi-file RAR is a must)? I don't want to extract the content to the disk, just read it like a stream.
Low-level lib to work with 7z.dll (supports rar archives, including multi-part, works with .net streams): [C# (.net) interface for 7-Zip archive dlls](http://dev.nomad-net.info/articles/sevenzipinterface) And a more high-level lib based on the first one: [SevenZipSharp](http://sevenzipsharp.codeplex.com/)
Install NUnrar from nuget ``` RarArchive file = RarArchive.Open("rar file path");//@"C:\test.rar" foreach (RarArchiveEntry rarFile in file.Entries) { string path = "extracted file path";//@"C:\" rarFile.WriteToDirectory(path); } ```
Read content of RAR files using C#
[ "", "c#", "stream", "rar", "" ]
I am developing a Rails app. (I don't think this is a Rails-specific problem.) There's a reservation process which consists of 3 steps. When a user is on the step 2 page and clicks the 'Previous' button, the form data in step 1 should be the same as before. I attached "history.go(-1);" to the 'Previous' button. It works in my Firefox browser, but it doesn't work in some IE browsers (my own IE works, though). How can I force it to preserve the form data when the user goes back? Thanks. Sam
You can't rely on the client (JavaScript) for this kind of operation. Save the data somewhere at step 1, then just restore it.
You could save page 2's data to the database, or a server-memory cache, or a cookie or three, and restore it when the page is loaded.
How do I keep form data when Back button is clicked
[ "", "javascript", "internet-explorer", "forms", "back", "" ]
Is there a tool to convert a VB.NET 2005 project to a C# 2008 project? I am trying to convert our project to VS 2008 and mostly port all the VB.NET code in some projects to C# 3.0/3.5.
You could check out [SharpDevelop](http://community.sharpdevelop.net/blogs/mattward/articles/FeatureTourCodeConversion.aspx). It's an open source .NET development environment. SharpDevelop has some code conversion built in.
I had a similar decision to make with a VB.NET project. The solution was a compromise: I decided to run with dual VB and C#. Upgrading VB.NET 2005 to 2008 is the easy bit. Add CSharp and VB folders to App_Code and ``` <codeSubDirectories> <add directoryName="CSharp" /> <add directoryName="VB" /> </codeSubDirectories> ``` to the compilation section of web.config. As Kev says, it's not as straightforward as you might expect, and you will likely run into unexpected issues, which makes running with dual-language support the best solution. I know this doesn't directly answer the question, but it's an alternative approach
Convert VB.NET 2005 project to C# 2008 Project
[ "", "c#", ".net", "vb.net", "visual-studio", "visual-studio-2008", "" ]
I have two tables: ``` CREATE TABLE dbo.country ( cntry_id VARCHAR(2) NOT NULL, name VARCHAR(50) NOT NULL, CONSTRAINT pk_country PRIMARY KEY (cntry_id) ) CREATE TABLE dbo.city ( city_id VARCHAR(3) NOT NULL, name VARCHAR(50) NOT NULL, cntry_id VARCHAR(2) NOT NULL, CONSTRAINT pk_city PRIMARY KEY (city_id), FOREIGN KEY (cntry_id) REFERENCES dbo.country(cntry_id) ) ``` I am trying to drop the FK constraint so I can then drop the table. The FK definitely exists: ``` EXEC sp_fkeys country pktable_qualifier pktable_owner pk_tablename ... xxxxxx xxx country cntry_id .... ``` (DB name obscured) But both ``` EXEC sp_dropkey foreign, country, city EXEC sp_dropkey foreign, city, country ``` return ``` 264 Error (17499) No foreign key for the table or view exists. sp_dropkey(263) ``` Does anybody know how to drop these keys? Thank you in advance Ryan
``` ALTER TABLE dbo.city DROP CONSTRAINT [nameoftheforeignkeyhere] ``` Otherwise I don't know what could be the reason. The number of the error message means it couldn't be deleted from syskeys, but the two tables were found alright and you are the owner of the tables too. --- Did you try `sp_helpkey` and `sp_helpconstraint` to check what they say about the existence of a FK? This should also be able to tell you if there really is a FK defined. ``` select * from syskeys where depid = object_id([parenttablename]) and type = 2 ``` Regarding the naming of a FK, this should do the trick: ``` CREATE TABLE .... CONSTRAINT fk_mykey FOREIGN KEY (cntry_id) REFERENCES dbo.country(cntry_id) ... ```
I do not have a 12.5 DB in front of me at the moment, but I know that with Sybase Anywhere and Sybase IQ you can manage foreign keys with Sybase Central. If you have Sybase Central installed, fire it up, select your table, and then on the right side look for a tab called Constraints or Foreign Keys. If you have it, select the FK you want to drop, press Delete, then right-click the table on the left side and select SAVE TABLE. I hope that helps!
Delete foreign keys in sybase 12.5
[ "", "sql", "schema", "foreign-keys", "sybase", "" ]
In C# I am making a simple text editor with line numbers. I want to count the amount of valid line breaks in a string: \r, \n and \r\n. How can I do this? Or better yet, can someone point me to an article on how to add line numbers to an RTF box?
Counting Lines - <http://ryanfarley.com/blog/archive/2004/04/07/511.aspx> RTB With Line Numbers - <http://www.xtremedotnettalk.com/showthread.php?s=&threadid=49661&highlight=RichTextBox>
**Note:** This answer is more to do with the abstract task of counting lines in a string, rather than to do with the GUI side of things. It's probably not as useful as some other answers for the original questioner, but I suspect it's useful in similar situations which don't involve GUIs. If enough people reckon it's not relevant here, I'll delete it. I would use an existing type which already knows about line endings, namely `TextReader`, in conjunction with my `LineReader` type from [MiscUtil](http://pobox.com/~skeet/csharp/miscutil): ``` string text = "ab\ncd"; int lines = new LineReader(() => new StringReader(text)).Count(); ``` Alternatively, without the dependencies: ``` public IEnumerable<string> GetLines(string text) { using (TextReader reader = new StringReader(text)) { string line; while ((line = reader.ReadLine()) != null) { yield return line; } } } ``` then: ``` int lineCount = GetLines(text).Count(); ``` Note that this will count actual lines of text rather than line breaks - this may be subtly different from what you want (e.g. it'll normally be line breaks + 1, but not if there's a line break at the end of the text).
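For the narrower task the question actually asks for, counting the breaks themselves rather than the lines, the three terminators can be counted directly. A Python sketch of the idea; count \r\n first so its two characters are not double-counted:

```python
def count_line_breaks(text):
    crlf = text.count("\r\n")
    # Subtract the CRLF pairs before counting lone CR and LF characters.
    lone_cr = text.count("\r") - crlf
    lone_lf = text.count("\n") - crlf
    return crlf + lone_cr + lone_lf

print(count_line_breaks("a\r\nb\nc\rd"))  # 3 (one break of each kind)
print(count_line_breaks("no breaks"))     # 0
```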
C# Count All Occurences Of Line Feed and Carrage Return
[ "", "c#", "count", "richtextbox", "" ]
I've written a simple abstract generic class in C# (.NET 2.0) and I preferably want to limit it to only reference types, so I can indicate no value by a null. However, I also want to use types such as long and decimal, which don't allow null (being structs, after all). I considered making the class ``` public abstract class Field<Nullable<T>> { } ``` But that would prevent my use of the string type, which is a class. How can I box up my decimals and longs so I can use them in this generic? ``` abstract class Field<T> { private int _Length = 1; private bool _Required = false; protected T _Value; //= null; public int Length { get { return _Length; } private set { if (value < 1) throw new ArgumentException("Field length must be at least one."); _Length = value; } } public bool Required { get { return _Required; } private set { _Required = value; } } public abstract string GetFieldValue(); public abstract void ParseFieldValue(string s); public virtual T Value { get { return _Value; } set { if (value == null && Required) throw new ArgumentException("Required values cannot be null."); _Value = value; } } } ``` Please note that I need to represent numeric 0 and null differently. For this reason default(T) will not work.
You would need two classes: ``` abstract class FieldR<T> where T: class { T Value { get {} set {} } } abstract class FieldV<T> where T: struct { Nullable<T> Value { get {} set {} } } ``` The first class would use ``` T ``` while the second class would use ``` Nullable<T> ```
The whole point of generics (among others) is to avoid boxing. See [this](http://msdn.microsoft.com/en-us/library/25tdedf5(VS.80).aspx): ``` private bool _Required = false; protected T _Value = default(T); ``` If you need to distinguish between "0" and "not set", `object` is your only way out: ``` protected object _Value; ``` And then box-unbox-box-unbox.
What types to use for boxing in generics
[ "", "c#", "generics", ".net-2.0", "struct", "" ]
I'm writing a simple function and for some reason (probably a simple one) it's not working for me, and I was wondering if you guys could help me out. ``` function check_value($postID) { $ID = $postID; $cookie = $_COOKIE['list_of_IDS']; $position = strpos($cookie,$ID); echo 'ID:'.$ID.'-Cookie:'.$cookie; if ($position !== false) { echo "ID is in the cookie"; } } ``` In trying to figure out what the problem was, I put that echo line above the if statement to make sure there actually is stuff in the variables. My problem is that the if statement never prints out. A `$postID` is a number like `123123`. The `$cookie` string is usually something like `123123.23422.234234.2342342.234234` Thanks for your help!
Strpos won't work with an int, so you need to cast the ID to a string. Try this: ``` $ID = (string)$postID; ```
Note that if your cookie string looks like `123123.23422.234234.2342342.234234` and you are looking for an ID, say, `1231` or `23`, your function would return `TRUE` while actually that ID is not in the list. Your current implementation of `strpos()` will also match partial numbers. Here is a simple workaround that will require the ID to be surrounded by dots. ``` $position = strpos('.'.$cookie.'.', '.'.$ID.'.'); ```
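The delimiter trick maps to an even more direct form: split the cookie on its separator and test for exact membership. A Python sketch of the same idea (illustrative, not PHP):

```python
def id_in_cookie(cookie, post_id):
    # Compare whole tokens, so 23 does not match inside 123123.
    return str(post_id) in cookie.split(".")

cookie = "123123.23422.234234.2342342.234234"
print(id_in_cookie(cookie, 123123))  # True
print(id_in_cookie(cookie, 23))      # False
```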
Problem with Strpos In PHP
[ "", "php", "string", "strpos", "" ]
Why does VB have more LINQ keywords than C#?
If I remember correctly, the only 4 keywords that are available to VB.Net and not C# via the query syntax are * [Distinct](http://msdn.microsoft.com/en-us/library/bb384803.aspx) * [Skip](http://msdn.microsoft.com/en-us/library/bb514016.aspx) / [Skip While](http://msdn.microsoft.com/en-us/library/bb531331.aspx) * [Take](http://msdn.microsoft.com/en-us/library/bb531351.aspx) / [Take While](http://msdn.microsoft.com/en-us/library/bb513984.aspx) * [Aggregate](http://msdn.microsoft.com/en-us/library/bb531251.aspx) VB.Net added these keywords because we felt it would be valuable to our users. We envisioned query syntax being a primary way VB.Net users would use LINQ and wanted to enable as many scenarios as possible via the syntax. I do not know why C# left these out. My **speculation** is that they didn't feel the scenario justified the cost, especially considering the feature is still available via extension methods. **EDIT** Added Aggregate and a few others (thanks Dario and Chris!)
The VB team decided to create more language extensions than the C# team did (in terms of keywords that translate to standard query operators). I chalk this up to different teams having different design goals. Fortunately both languages have full access to all of the standard query operators that are available so there is no loss of functionality.
Why does VB have more LINQ keywords than C#?
[ "", "c#", "vb.net", "linq", "" ]
We're running into performance issues, and one potential culprit is a centralized use of a volatile singleton. The specific code is of the form ``` class foo { static volatile foo instance; static object l = new object(); public static foo Instance { get { if (instance == null) lock(l) { if (instance == null) instance = new foo(); } return instance; } } } ``` This is running on an 8-way box, and we're seeing context switching to the tune of 500,000 per second. Typical system resources are fine - 25% CPU util, 25% memory util, low IO, no paging, etc. Does using a volatile field induce a memory barrier or any kind of CPU cache reload? Or does it just go to main memory every time, for that field only?
`lock` does induce a memory barrier, so if you are always accessing instance in a lock you don't need the volatile. According to [this site](http://dotnet.org.za/markn/archive/2008/10/21/net-memory-model-bytecode-reordering.aspx): > The C# volatile keyword implements acquire and release semantics, which implies a read memory barrier on read and a write memory barrier on write.
One thing volatile will not do is cause a context switch. If you're seeing 500,000 context switches per second, it means that your threads are blocking on something and volatile is *not* the culprit.
What is the cost of the volatile keyword in a multiprocessor system?
[ "", "c#", "multithreading", "volatile", "" ]
Why is the following `array()` passed into a function? I am not able to understand the `array()` function. I know that if $_POST doesn't have any value, it will pass `array()`, but what is the value of `array()`? ``` SomeFunction($_POST ? $_POST : array()); ```
`array()` isn't a function per se, it's a language construct. But simply using `array()` will create an empty array for you, that is, with zero elements. You probably want to check for: ``` isset($_POST) ? $_POST : array() ``` Edit: As pointed out by greg, `$_POST` will always be set. So there is no need to check for it and return an empty array. `someFunc($_POST)` should do exactly the same thing.
[array()](http://se.php.net/manual/en/function.array.php) is not a function, it's a [language construct to create a new array](http://se.php.net/manual/en/language.types.array.php). If no arguments (excuse the function terminology) are given, an empty array is created. The difference between PHP arrays and say... Java arrays are that PHP arrays are dynamically resized as new elements are added. But the array()-construct also takes parameters as a comma-separated list of *key=>value-pairs*. So, you can create arrays in the following ways: ``` $empty = array(); $autoIndexed = array (1, 2, 3); $associative = array('key1' => 1, 'key2' => 2); var_dump($empty, $autoIndexed, $associative); // Prints: Array () Array ( [0] => 1 [1] => 2 [2] => 3 ) Array ( [key1] => 1 [key2] => 2 ) ```
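The `$_POST ? $_POST : array()` idiom is just "use the value if it is non-empty, otherwise fall back to an empty container". A Python sketch of the same pattern (the function is hypothetical, for illustration only):

```python
def some_function(params=None):
    # Fall back to an empty dict when nothing (or an empty/falsy value)
    # was passed, mirroring PHP's `$_POST ? $_POST : array()`.
    params = params or {}
    return len(params)

print(some_function())                 # 0
print(some_function({"name": "bob"}))  # 1
```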
I am stuck with PHP's array()?
[ "", "php", "arrays", "function", "parameters", "" ]
I know there is a simple solution to this but can't seem to find it at the moment. Given a numpy array, I need to know if the array contains integers. Checking the dtype per-se is not enough, as there are multiple int dtypes (int8, int16, int32, int64 ...).
Found it in the [numpy book](http://templatelab.com/numpybook/)! Page 23: > The other types in the hierarchy define particular categories of types. > These categories can be useful for testing whether or not the object > returned by self.dtype.type is of a particular class (using issubclass). ``` issubclass(n.dtype('int8').type, n.integer) >>> True issubclass(n.dtype('int16').type, n.integer) >>> True ```
Checking for an integer type does not work for floats that are integers, e.g. `4.` Better solution is `np.equal(np.mod(x, 1), 0)`, as in: ``` >>> import numpy as np >>> def isinteger(x): ... return np.equal(np.mod(x, 1), 0) ... >>> foo = np.array([0., 1.5, 1.]) >>> bar = np.array([-5, 1, 2, 3, -4, -2, 0, 1, 0, 0, -1, 1]) >>> isinteger(foo) array([ True, False, True], dtype=bool) >>> isinteger(bar) array([ True, True, True, True, True, True, True, True, True, True, True, True], dtype=bool) >>> isinteger(1.5) False >>> isinteger(1.) True >>> isinteger(1) True ```
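Both strategies, checking the *type* (as in the accepted answer) versus checking the *value* (as above), have pure-Python analogues that work without NumPy; a sketch for scalars:

```python
import numbers

def is_integer_type(x):
    # Type check: any integer type counts, but floats like 4.0 do not.
    return isinstance(x, numbers.Integral)

def is_integer_valued(x):
    # Value check: also accepts floats that happen to be whole numbers.
    return isinstance(x, numbers.Integral) or (
        isinstance(x, float) and x.is_integer())

print(is_integer_type(4), is_integer_type(4.0))        # True False
print(is_integer_valued(4.0), is_integer_valued(1.5))  # True False
```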
How do I find out if a numpy array contains integers?
[ "", "python", "numpy", "" ]
I want to increment an unsigned integer from multiple threads. I know about Interlocked.Increment, but it does not handle unsigned integers. I could use lock(), but I would rather not if possible for performance reasons. Is it thread safe just to increment it in the normal way? It would not matter if the occasional increment got lost, as it's only used for statistics. What I don't want is the value to get corrupted.
You say you don't want to use `lock` for performance reasons - but have you tested it? An uncontested lock (which this is likely to be, by the sounds of it) is pretty cheap. I generally go for "obviously correct" rather than "clever and possibly better performing" when it comes to threading (and in general, but especially for threading). Benchmark your app with and without locking, and see whether you can even notice the difference. If locking makes a *significant* difference then sure, use cunning stuff. Otherwise, I'd just stick with a lock. One thing you *might* want to do is use `Interlocked.Increment` with an `int` and just cast it when necessary to get a `uint`, like this: ``` using System; using System.Reflection; using System.Threading; public class Test { private static int count = int.MaxValue-1; public static uint IncrementCount() { int newValue = Interlocked.Increment(ref count); return unchecked((uint) newValue); } public static void Main() { Console.WriteLine(IncrementCount()); Console.WriteLine(IncrementCount()); Console.WriteLine(IncrementCount()); } } ``` Output: ``` 2147483647 2147483648 2147483649 ``` (In other words it wraps with no problems.)
If you really need the full range of an unsigned int (2^32 - 1) rather than a signed int (2^31 -1), you could cast to an int64 (there is an `Interlocked.Increment` overload that takes int64) and then cast back to an unsigned int.
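The wrap-around behaviour both answers rely on can be sketched in plain Python, where masking with 0xFFFFFFFF emulates a 32-bit unsigned counter (illustrative only; in C# the Interlocked call supplies the atomicity):

```python
MASK32 = 0xFFFFFFFF

def increment_u32(value):
    # Emulate an unsigned 32-bit increment: 2**32 - 1 rolls over to 0.
    return (value + 1) & MASK32

print(increment_u32(2147483647))  # 2147483648 (past int.MaxValue, still fine)
print(increment_u32(4294967295))  # 0 (uint.MaxValue wraps to zero)
```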
C# multi-threaded unsigned increment
[ "", "c#", "multithreading", "unsigned", "interlocked-increment", "" ]
It appears my SQL isn't limiting results based on `price`. In my previous post, [SQL: Help me optimize my SQL](https://stackoverflow.com/questions/1039771/sql-help-me-optimize-my-sql), people indicated that I should use a LEFT OUTER JOIN. ``` SELECT homes.home_id, address, city, state, zip, price, photo_id, photo_url_dir FROM homes LEFT OUTER JOIN home_photos ON homes.home_id = home_photos.home_id AND primary_photo_group_id = home_photo_group_id AND home_photo_type_id =2 AND display_status = true AND homes.price BETWEEN 500000 AND 1000000 ``` ## However, it's still displaying homes whose price is < 500000 I don't understand. Why would the SQL above display homes that have a price less than 500000 when I have a condition that should limit exactly that field? Thanks for your help. ## What I want to do I want to display both homes with and homes without a `home_photo`, based on criteria like `PRICE` between X and Y ... or `SQFT` > Z. But these criteria need to be applied to both those homes with and those homes without a `home_photo`. ## Is this correct? ``` SELECT homes.home_id, address, city, state, zip, price, photo_id, photo_url_dir FROM homes LEFT OUTER JOIN home_photos ON homes.home_id = home_photos.home_id AND homes.primary_photo_group_id = home_photos.home_photo_group_id AND home_photos.home_photo_type_id =2 WHERE homes.display_status = true AND homes.price BETWEEN 500000 AND 1000000 ```
The last line should use WHERE, not AND: ``` WHERE homes.price BETWEEN 500000 AND 1000000 ``` The end result is the following SQL: ``` SELECT homes.home_id, homes.address, homes.city, homes.state, homes.zip, homes.price, home_photos.photo_id, home_photos.photo_url_dir FROM homes LEFT OUTER JOIN home_photos ON homes.home_id = home_photos.home_id AND homes.primary_photo_group_id = home_photos.home_photo_group_id AND home_photos.home_photo_type_id =2 WHERE homes.price BETWEEN 500000 AND 1000000 AND homes.display_status = true ``` EDIT: a filter on a homes column, such as SQFT, also belongs in the WHERE clause: ``` WHERE homes.price BETWEEN 500000 AND 1000000 AND SQFT <= 2000 ```
Put the join conditions after the JOIN keyword (in the ON clause) and all other filter conditions in the WHERE clause.
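The difference between a condition in ON and one in WHERE is easy to demonstrate with an in-memory SQLite database (a sketch with made-up rows): a home filter placed in the ON clause of a LEFT OUTER JOIN stops photos from matching but never removes homes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE homes (home_id INTEGER, price INTEGER);
CREATE TABLE home_photos (home_id INTEGER, photo_id INTEGER);
INSERT INTO homes VALUES (1, 400000), (2, 600000), (3, 700000);
INSERT INTO home_photos VALUES (2, 10);
""")

# Price filter in WHERE: the cheap home 1 is excluded; home 3 still
# appears (with a NULL photo) because the join is LEFT OUTER.
rows_where = conn.execute("""
    SELECT h.home_id, p.photo_id
    FROM homes h
    LEFT OUTER JOIN home_photos p ON h.home_id = p.home_id
    WHERE h.price BETWEEN 500000 AND 1000000
    ORDER BY h.home_id
""").fetchall()
print(rows_where)  # [(2, 10), (3, None)]

# Same filter moved into ON: it no longer restricts homes at all,
# so the cheap home 1 leaks back into the result set.
rows_on = conn.execute("""
    SELECT h.home_id, p.photo_id
    FROM homes h
    LEFT OUTER JOIN home_photos p ON h.home_id = p.home_id
        AND h.price BETWEEN 500000 AND 1000000
    ORDER BY h.home_id
""").fetchall()
print(rows_on)  # [(1, None), (2, 10), (3, None)]
```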
SQL: Help with LEFT OUTER JOIN
[ "", "sql", "database", "database-design", "query-optimization", "" ]
I'm looking to optimize my SQL. My database schema is: ## HOMES * home_id * address * city * state * zip * primary_photo_group_id ## HOME_PHOTOS * photo_id (primary key) * home_id (home primary key) * photo_group_id (a photo group is the same image, resized from thumbnail to large size) * home_photo_type_id (the size of the image, be it a thumbnail or a large size) * photo_url_dir (the filesystem location where the photo is stored) ## Problem It's very possible that a 'home' does not have a photo associated with the home. In that case, `primary_photo_group_id = 0`. Otherwise, `primary_photo_group_id` equals the `group_id` of the photo to use as the primary photo. ## Slow SQL (b/c of UNION) ``` SELECT homes.home_id, address, city, state, zip, photo_id, photo_url_dir FROM homes, home_photos WHERE homes.home_id = home_photos.home_id AND primary_photo_group_id = home_photo_group_id AND home_photo_type_id = 2 UNION SELECT homes.home_id, address, city, state, zip, null, null FROM homes WHERE primary_photo_group_id = 0 ``` ## What I would like to do I would like to get rid of the UNION since I'm having to search over the entire table twice. How can I get rid of the UNION, given that I need to check for the case where primary_photo_group_id = 0 and, if it's not equal to 0, query the `home_photos` table? ## Here is the pseudo code that needs to happen ``` SELECT homes.home_id, address, city, state, zip, photo_id, (include only if primary_photo_group_id != 0) photo_url_dir (include only if primary_photo_group_id != 0) FROM homes, home_photos (include only if primary_photo_group_id != 0) WHERE primary_photo_group_id = 0 ELSE homes.home_id = home_photos.home_id AND primary_photo_group_id = home_photo_group_id AND home_photo_type_id = 2 ```
Maybe you do not know about left outer join? Try: ``` SELECT h.home_id, address, city, state, zip, photo_id, photo_url_dir FROM homes h left outer join home_photos hp on h.home_id = hp.home_id AND primary_photo_group_id = home_photo_group_id AND home_photo_type_id = 2 ```
``` SELECT homes.home_id, address, city, state, zip, photo_id, photo_url_dir FROM homes LEFT JOIN home_photos ON home_photos.home_id = homes.home_id AND home_photo_group_id = CASE WHEN primary_photo_group_id = 0 THEN NULL ELSE primary_photo_group_id END AND home_photo_type_id = 2 ``` Having a composite index on `home_photos (home_id, home_photo_group_id, home_photo_type_id)` will greatly improve this query. Note that using `CASE` is slightly more efficient than left joining on `0`, even if there are no records with `home_photo_group_id = 0` in `home_photos`. When `MySQL` sees a `JOIN` on `NULL` (which can yield nothing by definition), it won't even look into the joined table. When it joins on `0`, it still has to check the index and make sure no value exists. This is not very much of a performance impact, but it can still improve your query time by several percent, especially if you have a lot of `0`'s in `homes`. See this entry in my blog for performance details: * [**Constant vs. NULL to mark missing values in OUTER JOINs**](http://explainextended.com/2009/06/24/constant-vs-null-to-mark-missing-values-in-outer-joins/) Also note that your tables are not in `2NF`. Your `group_id` depends on `home_id`, and including it in `home_photos` is a `2NF` violation. It's not always bad, but it may be harder to manage.
SQL: Help me optimize my SQL
[ "", "sql", "mysql", "database", "database-design", "optimization", "" ]
I've written an image processing program in MATLAB which makes heavy use of the MATLAB Image Processing Toolbox, especially the morphological operations (imopen, imclose) as well as imadjust. We do a lot of spline fit operations, and use medfilt2 and medfilt1 heavily as well. We have a client who wants us to convert this program to Java. I would like to hear a detailed description of a Java image processing library which can duplicate MATLAB's functionality in image processing and splines, especially how the interface compares with MATLAB's. I have read about Java's Advanced Image Processing Library but I haven't been able to find any detailed documentation on it on the web. Also, the little documentation I've read about it seems to indicate that it uses a rather complex model of images, combining them into tiles and so forth. It would be great if there were a Java library which allowed me to continue to treat gray scale images as just 2D or 3D arrays. Also, it would be great to learn of any general gotchas in converting between MATLAB and Java. --- Edit: our app currently segments images of a relatively simple object. It: ``` 1. Starts with a 3D matrix of gray scale image slices representing a single area 2. Does a medfilt1 to even out each slice. 3. Does some imopen, imclose and imadjust operations on the image to remove some fuzziness, 4. Does some simple thresholding in various areas to find boundary points 5. Fits splines to the boundary points, 6. Uses the 3rd dimension in various ways to further perfect the point matching, especially using medfilt2. 7. Saves each slice with the boundary splines written in color on it. ``` I should note that we're doing "spline fitting" rather than spline matching - spline fitting is a least-squares match with a fixed number of knots - spline matching matches the points exactly with an arbitrary number of knots. I wouldn't want to implement spline matching from more simplistic spline functions.
MATLAB's Builder JA is an option, but I would like to also know what's available in pure Java, as well as what kind of overhead Builder JA involves.

---

Edit 2: Note that we are doing spline *fitting* - using a given point's fit to the spline as a way to decide whether to eliminate it - since the data is **messy**, we have a multi-step point elimination process, so splines are an **integral** part of the algorithm. And since I can't find any mention of splines in JAI at all, if anyone knows a Java library offering least-squares spline fitting, that would be wonderful.

---

Edit 2.5: We're using a least-squares approximation of a set of points using splines with a fixed number of knots (0-5 knots). If we have to re-implement that, things will get dicey, since right now we're using a MATLAB contributed library for it. And we certainly don't want to revisit the algorithm. It was hard enough getting something that worked...
There are several general pitfalls about converting Matlab code to Java code. I've converted Matlab to C++ code, so my advice comes from those experiences.

1. If you're using for loops in Matlab, in general, you're doing it wrong. Adding matrices (images, etc) is fairly simple: a = b + c; no matter the size of the image. Filtering is also a fairly straightforward call: a = imfilter('median', b); #or something like this, I'm not in front of my matlab machine at the moment. Similar function calls exist in JAI (Java Advanced Imaging), so see if you can find them. I don't know the specifics of your median filtering requirements (I assume medfilt1 is meant to be a 3x3 local median filtering kernel, rather than a 1D filtering kernel run on the data, because that would mean that you're filtering only in one direction), so take a look at what's there in [the documentation.](http://download.java.net/media/jai/javadoc/1.1.3/jai-apidocs/index.html) But, if you write your own, the above addition can be as simple as a doubly-nested for loop, or a complicated class that implements something like MyMatrix a = MyMatrix.Add(b, c); My point is, the simplicity of Matlab can obscure all the design decisions you need to make in order to make this an efficient java program.

2. Remember, when you do use for loops, matlab and java have reverse row/column order. Matlab is column-major, and java is [row-major](http://en.wikipedia.org/wiki/Row-major). You will need to rewrite your loops to take that change into account, or else your code will be slower than it should be.

3. Personally, I'd tend to avoid the JAI except for specific operations that I need to have accomplished. For instance, just use it for the median filtering operations and so forth. I consider using it to be an optimization, but that's just because I'm Old School and tend to write my own image processing operations first.
If you take that approach, you can write your code to be exactly what you want, and then you can add in the JAI calls and make sure that the output matches what your code already does. The problem with using advanced libraries like the JAI or the Intel IPP in C++ is that there are a lot of library-specific gotchas (like tiling, or whether or not each row is allocated like a bitmap with a few extra pixels on the end, or other such details), and you don't want to be dealing with those problems while at the same time moving your code over. JAI is fast, but it's not a magic bullet; if you don't know how to use it, better to make sure that you've got something before you've got something fast. 4. If I can read between the lines a little bit, it looks like you're doing some kind of segmentation on medical imaging data. I don't know what the java libraries are for reading in DICOM images are, but gdcm works well for C++ and C#, and also has java wrappers. Matlab obscures the ease of image handling, especially DICOM image handling, so you may find yourself having to learn some DICOM library in order to handle the image file manipulations. I've learned a small fraction of the DICOM standard over the years; the [specification](ftp://medical.nema.org/medical/dicom/2008/) is extremely complete, perhaps overly so, but you can figure out how to do what you need to do in excruciating detail. If you're trying to do segmentations of medical data, saving the spline on the data is not the right thing to do so that your images operate with other DICOM readers. Take a look at the way contours are specified. Edit in response to further information: Spline Fitting is probably best done from a numerical approach rather than a library approach. There may be a way to do this in JAI, but I'm not familiar enough with the language. Instead, I'd check out Numerical Recipes, specifically [Chapter 3](http://www.ams.sunysb.edu/~deng/teach/ams321/cubic-spline-c3-3.pdf), for code on spline fitting. 
The code is one based, not zero based, so it requires some translation, but it's entirely doable. If you're trying to remove noise points from a boundary, you may also want to try blurring the edges that you're originally deriving your points from. Without knowing the spline fitting you're trying to do (there are many variations), it'd be hard to recommend an exact equivalent in another language. Edit 2.5: If by spline fitting from a contributed library, do you mean something like [this code](http://www.mathworks.com/matlabcentral/fileexchange/13812)? If worst comes to worst, you'd at least have the source code. If you do end up having to do something like this, another very useful tip is that Matlab is all doubles, nothing else unless you force it (and even then, a lot of operations don't work on non-doubles). So, you'll need to do your code in doubles as well, in order to maintain reasonable agreement. I'd also make several tests. If you do end up rewriting that code (or something like it), having a group of known inputs and expected outputs (within some reasonable margin of error, where you have to define what 'reasonable' means) will be critical in making sure that the wheel you're copying (not really reinventing) has the same rotations per distance as the original. There are probably too many paranthetical expressions in that last sentence. Yet Another Edit: If all of the above is too much of a headache, then consider the JA builder already pointed out. Otherwise, the approach I've outlined, or something similar, will probably be where you end up.
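To make the "write your own, row-major" advice concrete, here is a minimal sketch of a hand-rolled 3x3 median filter in plain Java. The class and method names, and the choice to simply copy border pixels, are my own illustrative choices, not from JAI or any other library:

```java
import java.util.Arrays;

public class MedianFilter {
    // A plain 3x3 median filter on a gray-scale image stored as
    // double[row][col]. The outer loop runs over rows because Java
    // arrays are row-major. Border pixels are copied unchanged to
    // keep the sketch short.
    public static double[][] medianFilter3x3(double[][] img) {
        int rows = img.length, cols = img[0].length;
        double[][] out = new double[rows][];
        for (int r = 0; r < rows; r++) {
            out[r] = Arrays.copyOf(img[r], cols); // copies borders too
        }
        double[] window = new double[9];
        for (int r = 1; r < rows - 1; r++) {       // rows first:
            for (int c = 1; c < cols - 1; c++) {   // row-major order
                int k = 0;
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        window[k++] = img[r + dr][c + dc];
                    }
                }
                Arrays.sort(window);
                out[r][c] = window[4];             // median of 9 values
            }
        }
        return out;
    }
}
```

Nothing clever here, but it shows how little code the core operation needs once you accept writing loops in Java that Matlab would vectorize for you.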
How about using the [**MATLAB Builder JA**](http://www.mathworks.com/products/javabuilder/) provided by The MathWorks itself, which is the developer of MATLAB itself.?
How do I convert a MATLAB image processing program to java?
[ "", "java", "matlab", "image-processing", "" ]
Firstly, I'm in C# here so that's the flavor of RegEx I'm dealing with. And here are thing things I need to be able to match: ``` [(1)] ``` or ``` [(34) Some Text - Some Other Text] ``` So basically I need to know if what is between the parentheses is numeric and ignore everything between the close parenthesis and close square bracket. Any RegEx gurus care to help?
This should work: ``` \[\(\d+\).*?\] ``` And if you need to catch the number, simply wrap `\d+` in parentheses: ``` \[\((\d+)\).*?\] ```
Do you have to match the []? Can you do just ... ``` \((\d+)\) ``` (The numbers themselves will be in the groups). For example ... ``` var mg = Regex.Match( "[(34) Some Text - Some Other Text]", @"\((\d+)\)"); if (mg.Success) { var num = mg.Groups[1].Value; // num == 34 } else { // No match } ```
Regular Expression to match numbers inside parenthesis inside square brackets with optional text
[ "", "c#", "regex", "" ]
I am implementing the BitTorrent protocol using Java via this [spec](http://wiki.theory.org/BitTorrentSpecification#Peer_wire_protocol_.28TCP.29). In the messages section all messages are fixed length except 2 of them; for one of them it's the only variable-length message after the handshake, so I can check for the others and assume it's a piece message when no other message matches. But for the following message

> ```
> bitfield: <len=0001+X><id=5><bitfield>
> ```
>
> The bitfield message may only be sent immediately after the handshaking sequence is completed, and before any other messages are sent. It is optional, and need not be sent if a client has no pieces.
>
> The bitfield message is variable length, where X is the length of the bitfield. The payload is a bitfield representing the pieces that have been successfully downloaded. The high bit in the first byte corresponds to piece index 0. Bits that are cleared indicated a missing piece, and set bits indicate a valid and available piece. Spare bits at the end are set to zero.
>
> A bitfield of the wrong length is considered an error. Clients should drop the connection if they receive bitfields that are not of the correct size, or if the bitfield has any of the spare bits set.

I can't come up with a way to parse it if I do not know the length; how am I supposed to locate the id in a stream of bytes?

Edit: In the payload of the bitfield message are the 0's and 1's for each piece in the torrent file; the length of the message will change depending on the size of the torrent content. So I don't think I can assume that the number of pieces will always fit in a 5 byte number.
The `id` field will always be the 5th byte of a message, after the four bytes for the `len` field. You can do something like the following: ``` DataInputStream stream; // ... int length = stream.readInt(); byte id = stream.readByte(); byte[] payload = new byte[length - 1]; stream.readFully(payload); ``` That should work for any message, actually, since they all have the same `len`+`id` header. **Edit:** "So i don't think i can assume that the number of pieces will always fit in a 5 byte number." A four-byte length field can handle up to 2^32-1 bytes in the payload, and with 8 bits per byte that gives you room for 34,359,738,360 pieces. That should be plenty! :-)
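Once you have the payload bytes, decoding the bitfield itself is just bit arithmetic. A sketch (the helper name is mine, not from the spec; the high bit of byte 0 is piece index 0, and set spare bits are treated as an error per the spec):

```java
public class Bitfield {
    // Decode a bitfield payload into per-piece availability.
    public static boolean[] decode(byte[] payload, int numPieces) {
        boolean[] have = new boolean[numPieces];
        for (int i = 0; i < numPieces; i++) {
            int b = payload[i / 8] & 0xFF;              // unsigned byte value
            have[i] = (b & (0x80 >> (i % 8))) != 0;     // high bit = piece 0
        }
        // Spare bits at the end must be zero; otherwise drop the connection.
        for (int i = numPieces; i < payload.length * 8; i++) {
            int b = payload[i / 8] & 0xFF;
            if ((b & (0x80 >> (i % 8))) != 0) {
                throw new IllegalArgumentException("spare bit set in bitfield");
            }
        }
        return have;
    }
}
```

You would also check `payload.length == (numPieces + 7) / 8` before decoding, since a bitfield of the wrong length is an error as well.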
> I can't come up with a way to parse it > if i do not know the length; Judging from the description, the length is given in the first 4 bytes of the message. > how am I supposed to locate id in a > stream of bytes? It looks as though the id is the 5th byte in each message, right after the length field. So you just have to look at the first 5 bytes after you're finished parsing the previous message.
Parsing a variable-length message
[ "", "java", "parsing", "network-protocols", "inputstream", "bittorrent", "" ]
I know it is a PHP global variable, but I'm not sure what it does. I also read about it on the official PHP site, but did not understand.
You may want to read up on the basics of PHP. Try reading some starter tutorials. `$_POST` is a variable used to grab data sent through a web form. Here's a simple page describing `$_POST` and how to use it from W3Schools: [PHP $\_POST Function](http://www.w3schools.com/php/php_post.asp)

Basically:

Use HTML like this on your first page:

```
<form action="submit.php" method="post">
Email: <input type="text" name="emailaddress" />
<input type="submit" value="Subscribe" />
</form>
```

Then on `submit.php` use something like this:

```
<?php
echo "You subscribed with the email address:";
echo $_POST['emailaddress'];
?>
```
There are generally 2 ways of sending an HTTP request to a server: * GET * POST Say you have a <form> on a page. ``` <form method="post"> <input type="text" name="yourName" /> <input type="submit" /> </form> ``` Notice the "method" attribute of the form is set to "post". So in the PHP script that receives this HTTP request, $\_POST[ 'yourName' ] will have the value when this form is submitted. If you had used the GET method in your form: ``` <form method="get"> <input type="text" name="yourName" /> <input type="submit" /> </form> ``` Then $\_GET['yourName'] will have the value sent in by the form. $\_REQUEST['yourName'] contains all the variables that were posted, whether they were sent by GET or POST.
What is the purpose of $_POST?
[ "", "php", "forms", "superglobals", "" ]
I need to get the product version and file version for a DLL or EXE file using Win32 native APIs in C or C++. I'm *not* looking for the Windows version, but the version numbers that you see by right-clicking on a DLL file, selecting "Properties", then looking at the "Details" tab. This is usually a four-part dotted version number x.x.x.x.
You would use the [GetFileVersionInfo](http://msdn.microsoft.com/en-us/library/ms647003.aspx) API.

See [Using Version Information](http://msdn.microsoft.com/en-us/library/ms646985(VS.85).aspx) on the MSDN site.

Sample:

```
DWORD verHandle = 0;
UINT size = 0;
LPBYTE lpBuffer = NULL;
DWORD verSize = GetFileVersionInfoSize( szVersionFile, &verHandle);

if (verSize != 0)
{
    LPSTR verData = new char[verSize];

    if (GetFileVersionInfo( szVersionFile, verHandle, verSize, verData))
    {
        if (VerQueryValue(verData,"\\",(VOID FAR* FAR*)&lpBuffer,&size))
        {
            if (size)
            {
                VS_FIXEDFILEINFO *verInfo = (VS_FIXEDFILEINFO *)lpBuffer;
                if (verInfo->dwSignature == 0xfeef04bd)
                {
                    // Doesn't matter if you are on 32 bit or 64 bit,
                    // DWORD is always 32 bits, so first two revision numbers
                    // come from dwFileVersionMS, last two come from dwFileVersionLS
                    TRACE( "File Version: %d.%d.%d.%d\n",
                        ( verInfo->dwFileVersionMS >> 16 ) & 0xffff,
                        ( verInfo->dwFileVersionMS >>  0 ) & 0xffff,
                        ( verInfo->dwFileVersionLS >> 16 ) & 0xffff,
                        ( verInfo->dwFileVersionLS >>  0 ) & 0xffff
                    );
                }
            }
        }
    }
    delete[] verData;
}
```
None of these solutions worked properly on my system. I found out that each of the four parts of the version number is saved as a 16-bit value. The first two numbers are saved in the 32-bit DWORD dwFileVersionMS, and the second two in dwFileVersionLS. So I edited your code at the output section like this:

```
TRACE( "File Version: %d.%d.%d.%d\n",
    ( pFileInfo->dwFileVersionMS >> 16 ) & 0xffff,
    ( pFileInfo->dwFileVersionMS >> 0 ) & 0xffff,
    ( pFileInfo->dwFileVersionLS >> 16 ) & 0xffff,
    ( pFileInfo->dwFileVersionLS >> 0 ) & 0xffff
    );
```

And it works perfectly. On my system the output is formatted as: major.minor.build.revision
How do I programmatically get the version of a DLL or EXE file?
[ "", "c++", "winapi", "dll", "version", "exe", "" ]
I was wondering if it is possible in C++ to retrieve the name of a class in string form without having to hardcode it into a variable or a getter. I'm aware that none of that information is actually used at runtime, and is therefore unavailable, but are there any macros that can be made to create this functionality?

Edit: May be helpful to note that I'm actually trying to retrieve the name of a derived class, and I'm using Visual C++ 2008 Express Edition.
You can use `typeid`:

```
#include <typeinfo>

std::cout << typeid(obj).name() << "\n";
```

However, the type name isn't standardized and may differ between different compilers (or even different versions of the same compiler), and it is generally not human readable because it is [*mangled*](https://en.wikipedia.org/wiki/Name_mangling).

On GCC and clang (with libstdc++ and libc++), you can demangle names using the [`__cxa_demangle` function](https://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-html-USERS-4.3/a01696.html) (on MSVC demangling does not seem necessary):

```
#include <cxxabi.h>
#include <cstdlib>
#include <memory>
#include <string>

std::string demangle(char const* mangled) {
    auto ptr = std::unique_ptr<char, decltype(& std::free)>{
        abi::__cxa_demangle(mangled, nullptr, nullptr, nullptr),
        std::free
    };
    return {ptr.get()};
}
```

This will *still* not necessarily be a readable name — for instance, `std::string` is an alias for the actual type, and its complete type name in the current libstdc++ is `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >`; by contrast, in the current libc++ it’s `std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >`. “Prettifying” type aliases is unfortunately not trivial.
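One caveat, since the question is about derived classes: applied to a dereferenced base pointer or reference, `typeid` only reports the dynamic (derived) type if the base class is polymorphic, i.e. has at least one virtual function. A minimal sketch (the class and function names are just for illustration):

```cpp
#include <cassert>
#include <string>
#include <typeinfo>

struct Base { virtual ~Base() {} };  // polymorphic thanks to the virtual destructor
struct Derived : Base {};

// Returns the implementation-defined (usually mangled) name of the
// dynamic type. Because Base is polymorphic, passing a Derived object
// through a Base reference still reports Derived here.
std::string dynamicName(const Base& b) {
    return typeid(b).name();
}
```

Without the virtual destructor, `typeid(b)` would be evaluated statically and always report `Base`.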
If you just want to check whether it's a certain class, then

```
typeid(obj) == typeid(CSubClass)
```

will always work, regardless of the implementation.

Otherwise, a convenient way is to declare:

```
virtual const char* classname() { return "CMyClass";}
```

and implement it per subclass.
Retrieving a c++ class name programmatically
[ "", "c++", "class", "macros", "" ]
I have a situation here where I need to distribute work over to multiple JAVA processes running in different JVMs, probably different machines.

Lets say I have a table with records 1 to 1000. I am looking for work to be collected and distributed in sets of 10. Lets say records 1-10 to workerOne. Then records 11-20 to workerThree. And so on and so forth. Needless to say workerOne never does the work of workerTwo unless and until workerTwo couldn't do it.

This example was purely based on a database, but could be extended to any system, I believe, be it file processing, email processing and so forth.

I have a small feeling that the immediate response would be to go for a Master/Worker approach. However here we are talking about different JVMs. Even if one JVM were to come down, the other JVM should just keep doing its work.

Now the million dollar question would be: Are there any good frameworks (production ready) that would give me the facility to do this? Even better if there are concrete implementations for specific needs like database records, file processing, email processing and their likes.

I have seen the Java Parallel Execution Framework, but am not sure if it can be used for different JVMs, and whether, if one were to come down, the other would keep going. I believe workers could be on multiple JVMs, but what about the master?

More Info 1: Hadoop would be a problem because of the JDK 1.6 requirement. That's a bit too much.

Thanks, Franklin
You could also use message queues. Have one process that generates the list of work and packages it in nice little chunks. It then plops those chunks on a queue. Each one of the workers just keeps waiting on the queue for something to show up. When it does, the worker pulls a chunk off the queue and processes it. If one process goes down, some other process will pick up the slack. Simple and people have been doing it that way for a long time so there's a lot information about it on the net.
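Within one JVM, the shape of that pattern looks like the sketch below, using `java.util.concurrent`'s BlockingQueue. Across JVMs or machines you would replace the in-memory queue with a broker (a JMS queue, for example), but the producer and worker code keeps the same shape. The chunk size of 10 mirrors the question; all the names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkQueueDemo {

    // Master: split record ids 1..total into [start, end] chunks and enqueue them.
    public static BlockingQueue<int[]> enqueueChunks(int total, int chunkSize) {
        BlockingQueue<int[]> queue = new LinkedBlockingQueue<>();
        for (int start = 1; start <= total; start += chunkSize) {
            queue.add(new int[] { start, Math.min(start + chunkSize - 1, total) });
        }
        return queue;
    }

    // Worker: pull chunks until the queue is empty. If this worker dies,
    // the remaining chunks are still on the queue for the other workers,
    // which is exactly the fault-tolerance property you asked about.
    public static List<String> drain(BlockingQueue<int[]> queue) {
        List<String> processed = new ArrayList<>();
        int[] chunk;
        while ((chunk = queue.poll()) != null) {
            processed.add(chunk[0] + "-" + chunk[1]); // stand-in for real work
        }
        return processed;
    }
}
```

With a real broker you would also use acknowledgements, so a chunk taken by a crashed worker gets redelivered instead of lost.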
Might want to look into [MapReduce](http://en.wikipedia.org/wiki/MapReduce) and [Hadoop](http://hadoop.apache.org/core/docs/current/mapred_tutorial.html)
Workload Distribution / Parallel Execution in JAVA
[ "", "java", "parallel-processing", "distribution", "workload", "" ]
I am working on a REST WCF project, and when I implement the following code it complains that it can't resolve the WebGet class? What am I missing? I tried importing the System.ServiceModel.Web namespace, but it can't find it even though I referenced it. The "Web" in System.ServiceModel.Web does not register when I add it in a using statement on top of my code. Basically, what do I need in order to implement WCF REST concepts like WebGet, WebInvoke, UriTemplate, etc.?

**UPDATE:** After some feedback and thinking about this a little bit more, it seems that the DLLs (System.ServiceModel & System.ServiceModel.Web) do not come up via the 'Add Reference' window when I go to add a project reference. When I first started the project, FYI, since these assemblies did not come up at first, I went 'searching' for them, and copied them to a temp folder so I can reference them and thus, I guess I am having the resolve issues. So, now that I am at this point, how can I get my VS to recognize/register these WCF REST DLLs? Thanks!

**UPDATE:** I believe I am up-to-date on everything: developing on VS 2008 SP1 - I try to download the latest SPs, downloaded the REST Preview 2 Starter Kit, developing against the 3.5 Framework, trying to create a WCF REST layer to ultimately be consumed by a Silverlight 2 client.

This is what I have:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;
using UtilityClasses;
using Microsoft.ServiceModel.Web;
using Microsoft.Http;

namespace WcfRestService
{
    [ServiceContract]
    public interface IRestService
    {
        [OperationContract(Name = "Add")]
        [WebGet(UriTemplate = "/")] // ** can't compile here **
        int Add();
    }
}
```

Any advice will be greatly appreciated.
This happened to me too. I did this:

1. Delete System.ServiceModel.Web from References
2. Build
3. Clean Project
4. Add System.ServiceModel.Web to References
5. Build

...and VS found it??
You need to reference the System.ServiceModel.Web DLL. Right-click the 'References' folder in your project and choose 'Add Reference...'. Scroll down to System.ServiceModel.Web and click 'OK'.
WCF - Cannot resolve [WebGet] symbol - what am I doing wrong?
[ "", "c#", "wcf", "" ]
I am pretty new to c#. I have a page that requires multiple recordsets, and a single sproc that returns them. I am using a repeater control for the main recordset. How do I get to the next returned recordset?

---

OK so the datasource is in the aspx page. I would have to move it to the code-behind page to use NextResult, right? Here is my code now. How do I move the datasource to the code-behind and implement a datareader so I can use NextResult?

```
<asp:SqlDataSource ID="AssetMgtSearch" runat="server"
    ConnectionString="<%$ ConnectionStrings:OperationConnectionString %>"
    SelectCommand="spAssetMgtItemList" SelectCommandType="StoredProcedure">
</asp:SqlDataSource>
<div class="contentListFullHeight">
    <table cellspacing="0" cellpadding="0" border="0" class="contentList">
        <tr>
            <th>ShipmentID/</th>
            <th>MaterialID/</th>
            <th>ItemID/</th>
        </tr>
        <asp:Repeater ID="Repeater1" runat="server" DataSourceID="AssetMgtSearch">
            <ItemTemplate>
                <tr>
                    <td colspan="3" style="border-top:solid thin blue">&nbsp;</td>
                </tr>
                <tr>
                    <td><%#Container.DataItem(0)%></td>
                    <td><%#Container.DataItem(1)%></td>
                    <td><%#Container.DataItem(2)%></td>
                </tr>
            </ItemTemplate>
        </asp:Repeater>
    </table>
```
Call the NextResult() method on your reader to move to the next result. No, you can't do this using SqlDataSource, you need to use codebehind or break up the procedure into separate queries.
Thanks for your answers everyone. NextResult() works well provided you make quite a few changes going from the drag and drop control creation. Here they are. 1. Remove the datasource from the aspx page 2. Remove the DataSourceID property from the Repeater control 3. Create a function in your codebehind that returns a datareader object. eg AstMgtDr() 4. On your page load set the datasource and databind properties for the Repeater control `Repeater1.DataSource = AstMgtDr();` `Repeater1.DataBind();` 5. At the top of your aspx page, add a page level directive to use the "System.Data.Common" namespace `<%@ Import namespace="System.Data.Common" %>` 6. To display your data: > this is the method with the best > performance but it requires explicit > typing ``` `<%#((DbDataRecord)Container.DataItem).GetInt32(0)%>` ``` > this is another method using field > names - more expensive than the > previous method but faster than the default Eval. ``` `<%# ((DbDataRecord)Container.DataItem)["ShipmentID"] %>` ``` Hope this saves somebody else some time.
Stored Procedure Returns Multiple Recordsets
[ "", "c#", "web-applications", "multiple-resultsets", "" ]
I mean, aside from its name *the Standard Template Library* (which evolved into the C++ standard library). C++ initially introduced OOP concepts into C. That is: you could tell what a specific entity could and couldn't do (regardless of how it does it) based on its class and class hierarchy. Some compositions of abilities are more difficult to describe in this manner due to the complexities of multiple inheritance, and the fact that C++ supports interface-only inheritance in a somewhat clumsy way (compared to java, etc), but it's there (and could be improved). And then templates came into play, along with the STL. The STL seems to take the classical OOP concepts and flush them down the drain, using templates instead. There should be a distinction between cases when templates are used to generalize types where the types themselves are irrelevant for the operation of the template (containers, for example). Having a `vector<int>` makes perfect sense. However, in many other cases (iterators and algorithms), templated types are supposed to follow a "concept" (Input Iterator, Forward Iterator, etc...) where the actual details of the concept are defined entirely by the implementation of the template function/class, and not by the class of the type used with the template, which is somewhat an anti-usage of OOP. For example, you can tell the function:

```
void MyFunc(ForwardIterator<...> *I);
```

**Update:** As it was unclear in the original question, ForwardIterator is ok to be templated itself to allow any ForwardIterator type. The contrary is having ForwardIterator as a concept.

expects a Forward Iterator only by looking at its definition, whereas you'd need either to look at the implementation or the documentation for:

```
template <typename Type>
void MyFunc(Type *I);
```

Two claims I can make in favor of using templates:

1. Compiled code can be made more efficient, by recompiling the template for each used type, instead of using dynamic dispatch (mostly via vtables).

2. 
And the fact that templates can be used with native types.

However, I am looking for a more profound reason why classic OOP was abandoned in favor of templating for the STL.
The short answer is "because C++ has moved on". Yes, back in the late 70's, Stroustrup intended to create an upgraded C with OOP capabilities, but that is a long time ago. By the time the language was standardized in 1998, it was no longer an OOP language. It was a multi-paradigm language. It certainly had some support for OOP code, but it also had a turing-complete template language overlaid, it allowed compile-time metaprogramming, and people had discovered generic programming. Suddenly, OOP just didn't seem all that important. Not when we can write simpler, more concise *and* more efficient code by using techniques available through templates and generic programming. OOP is not the holy grail. It's a cute idea, and it was quite an improvement over procedural languages back in the 70's when it was invented. But it's honestly not all it's cracked up to be. In many cases it is clumsy and verbose and it doesn't really promote reusable code or modularity. That is why the C++ community is today far more interested in generic programming, and why *everyone* is finally starting to realize that functional programming is quite clever as well. OOP on its own just isn't a pretty sight. Try drawing a dependency graph of a hypothetical "OOP-ified" STL. How many classes would have to know about each other? There would be a *lot* of dependencies. Would you be able to include just the `vector` header, without also getting `iterator` or even `iostream` pulled in? The STL makes this easy. A vector knows about the iterator type it defines, and that's all. The STL algorithms know *nothing*. They don't even need to include an iterator header, even though they all accept iterators as parameters. Which is more modular then? The STL may not follow the rules of OOP as Java defines it, but doesn't it achieve the *goals* of OOP? Doesn't it achieve reusability, low coupling, modularity and encapsulation? And doesn't it achieve these goals *better* than an OOP-ified version would? 
As for why the STL was adopted into the language, several things happened that led to the STL. First, templates were added to C++. They were added for much the same reason that generics were added to .NET. It seemed a good idea to be able to write stuff like "containers of a type T" without throwing away type safety. Of course, the implementation they settled on was quite a lot more complex and powerful. Then people discovered that the template mechanism they had added was even more powerful than expected. And someone started experimenting with using templates to write a more generic library. One inspired by functional programming, and one which used all the new capabilities of C++. He presented it to the C++ language committee, who took quite a while to grow used to it because it looked so strange and different, but ultimately realized that *it worked better than the traditional OOP equivalents they'd have to include otherwise*. So they made a few adjustments to it, and adopted it into the standard library. It wasn't an ideological choice, it wasn't a political choice of "do we want to be OOP or not", but a very pragmatic one. They evaluated the library, and saw that it worked very well. In any case, both of the reasons you mention for favoring the STL are absolutely essential. The C++ standard library **has** to be efficient. If it is less efficient than, say, the equivalent hand-rolled C code, then people would not use it. That would lower productivity, increase the likelihood of bugs, and overall just be a bad idea. And the STL **has** to work with primitive types, because primitive types are all you have in C, and they're a major part of both languages. If the STL did not work with native arrays, it would be **useless**. Your question has a strong assumption that OOP is "best". I'm curious to hear why. You ask why they "abandoned classical OOP". I'm wondering why they should have stuck with it. Which advantages would it have had?
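To make the "algorithms know nothing about containers" point concrete, here is a toy find-style algorithm in the STL mold (my own sketch, not the real `std::find`). It works unchanged on a plain C array and on a `std::vector`, with no inheritance anywhere:

```cpp
#include <cassert>
#include <vector>

// A toy algorithm in the STL style: it knows nothing about containers,
// only that Iter models an input iterator (supports !=, ++ and *).
// Plain pointers into a native array qualify just as well as
// std::vector<int>::iterator does -- no common base class required.
template <typename Iter, typename T>
Iter myFind(Iter first, Iter last, const T& value) {
    for (; first != last; ++first) {
        if (*first == value) {
            return first;
        }
    }
    return last;
}
```

An OOP version would need every container and every element type to share some interface hierarchy; here the only "contract" is the set of expressions the template actually uses.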
The most direct answer to what I think you're asking/complaining about is this: The assumption that C++ is an OOP language is a false assumption. C++ is a multi-paradigm language. It can be programmed using OOP principles, it can be programmed procedurally, it can be programmed generically (templates), and with C++11 (formerly known as C++0x) some things can even be programmed functionally. The designers of C++ see this as an advantage, so they would argue that constraining C++ to act like a purely OOP language when generic programming solves the problem better and, well, more *generically*, would be a step backwards.
Why is the STL so heavily based on templates instead of inheritance?
[ "", "c++", "oop", "templates", "stl", "std", "" ]
Excuse the code dump, these are functions within a UserControl ``` private void PNGQuantPreviewControl_Resize(object sender, EventArgs e) { createOffScreenBm(); draw(); } private void createOffScreenBm() { offScreenBm = new Bitmap(this.Size.Width, this.Size.Height); offScreenGfx = Graphics.FromImage(offScreenBm); } private void draw() { // draw background offScreenGfx.FillRectangle(transTexture, 0, 0, offScreenBm.Width, offScreenBm.Height); // draw image preview offScreenGfx.DrawImage(pngQuantPreview, getTopLeftPosition()); // apply to picture box this.CreateGraphics().DrawImage(offScreenBm, 0, 0); } ``` So, when the control changes size, it recreates the offscreen bitmap to reflect the new size and redraws the image. However, if I quickly resize the control the bitmap doesn't fill it, there's a gap left at the right and/or bottom. I'm fairly new to C#, so there's probably something obvious I'm doing wrong, or I'm reading the size values at the wrong time. Any ideas?
First of all, you need to override the OnPaint method, or subscribe to the Paint event, and draw everything there. Second, you do not need to create an offscreen bitmap for double buffering, because .NET already has a class for exactly this purpose: [BufferedGraphics](http://msdn.microsoft.com/en-us/library/system.drawing.bufferedgraphics.aspx). And third, it is much better to create a UserControl descendant and enable the built-in .NET double buffering, something like this:

```
public class UserControl2 : UserControl
{
    public UserControl2()
    {
        SetStyle(ControlStyles.OptimizedDoubleBuffer |
                 ControlStyles.AllPaintingInWmPaint |
                 ControlStyles.UserPaint, true);
    }
}
```

Using this approach you get double buffering, and all you need to do is draw your graphics in the OnPaint method. You can read more about these [control styles](http://msdn.microsoft.com/en-us/library/system.windows.forms.controlstyles.aspx) in MSDN.
Have you considered overriding the OnPaint method and placing the code within that method? This would result in your drawing code being executed any time the control needs to be redrawn, regardless of the reason. A resize event does not necessarily wait until you are finished resizing the parent container. When the resize event is raised it needs to wait until the code exits before it can capture a new resize event so when the window/control is resized quickly, it can't keep up all that well and what you get is the last time it was able to capture the event, not necessarily the final state of the control ... if that makes any sense.
.NET UserControl: Size property gives incorrect value on Resize event
[ "", "c#", "winforms", "events", "resize", "user-controls", "" ]
A lot of frameworks use URL conventions like `/controller/action/{id}` which is great, but if you need any configuration beyond that, it's up to you to write your own routes. How would you handle URLs like `/users/{id}/friends` on the backend? (to list all of a user's friends) I'm thinking that in the controller, something like this would be appropriate: ``` class User { function index() { echo 'user index'; } } class Friend extends User { function index($user_id) { echo 'friend index'; } } ``` Then you would have the following map: ``` /users -> User::index() /users/{id} -> User::view($id) /users/{id}/friends -> Friend::index($user_id) ``` I wanted to put the Friend class inside the User class but apparently you can't do that in PHP so this is the best I could come up with. Thoughts? What URL would use for editing your list of friends? `/users/{id}/friends/edit` could work, but it doesn't seem appropriate, since you should never be editing someone else's friend list. Would `/account/friends/edit` be a better choice? Where would you put the corresponding code for that? In a friend controller, or a user controller, or a specialized account controller? Bonus question: which do you prefer? `/photos/delete/{id}` or `/photos/{id}/delete` --- **The answers:** So, what I've gathered from the answers is that if the "thing" is complicated (like "friends") but doesn't have its own controller, you can give it one without a model, or if it's not, you should stuff it in with whatever it's most closely related to. Your URLs should not influence where you put your code. Most people seem to think you should stick to `/controller/action/{id}` whever possible, because it's what people are familiar with. No one really commented on the extended class aside from saying it's "awkward". Perhaps FriendList would have been a more appropriate class in that case if I really wanted to separate it out. Thanks for all the answers :)
The routes you're talking about, and the way you're using subclasses to achieve this structure, seems a bit awkward to me. The standard convention of `/controller/action/{id}` works great for simple actions, but if you're creating a complex application you will always need to create custom routes. There are probably some good guidelines to use when creating these routes, but it really boils down to staying consistent across your application and keeping things as simple as possible. I don't see any good reason to have `/user/{id}/friends` mapping to a "`Friend`" controller. Why not just have "`friends`" be an action on the `User` controller? Once you actually drill down to view a specific friend's page, you could use a `Friend` controller (`/friends/view/123`) or you could repurpose your `User` controller so that it works for a friend or the currently logged in user (`/user/view/123`). Re: the bonus question, I'd stick with `/photos/delete/{id}` (`/controller/action/{id}`) as that's the most widely accepted mechanism.
You can do either or. The problem is when you mix the two. /users/{id}/friends and /users/friends/{id} When someone has the id of "friends" this will fail. This may seem like a trivial case but it's very popular to use usernames for ids. You will have to limit user names for every action. --- Sometimes you can't do `/{controller}/{action}/{id}` I did an indie music site a while back and we did ``` /artist/{username} /artist/{username}/albums /artist/{username}/albums/{album} ``` We didn't want to test for conditionals so we didn't do ``` /artist/{username}/{album} ``` Since we didn't want to check for anyone with an album named "albums" We could have done it ``` /artist/{username} /artist/{username}/albums /albums/{album} ``` but then we would lose the SEO advantage of having both the artist name and the album name in the URL. Also in this case we would be forcing album names to be unique, which would be bad since it's common for artists to have album names the same as other artists. You could do pure `/{controller}/{action}/{id}` but then you would lose some SEO and you can't do URL shortening. ``` /artist/view/{username} /artist/albums/{username} /album/view/{album} ``` --- Getting back to your example. > /users/{id}/friends/edit could work, > but it doesn't seem appropriate, since > you should never be editing someone > else's friend list. In this case it should be `/friends/edit` since your user id is duplicate information, assuming you're in a session somehow. In general you want to support URL shortening not URL expansion. (Bonus question) Neither, I'd use [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer). `DELETE /photo?id={id}`
PHP framework URL conventions
[ "", "php", "url-routing", "conventions", "" ]
It's a beginners question, but... [Image of dll reference and dll included in project file http://a3.vox.com/6a00c2251e5b66549d00e398ca81eb0003-pi](http://a3.vox.com/6a00c2251e5b66549d00e398ca81eb0003-pi) If you look at the image above, there is the "Bass.Net" dll added as reference and also directly as file in the project. Can someone tell me what's the point of doing that?
No reason, really. It could be that Visual Studio is set to display files not in the project (hard to tell from the picture) and the DLLs happen to be in the main directory. The text is pretty clear that the extra files are * bass.dll * bassenc.dll * lame.exe The .NET one happens to be with the others in the same directory, and you need to add it as a reference.
Within Windows, a DLL is a [dynamic link library](http://en.wikipedia.org/wiki/Dynamic-link_library), which packages a set of programmatic functionality together. In this example, bass.dll exposes the features and functionality relevant to audio processing through this file (and any files it depends on). In order to use this functionality, you need the reference in the solution, so that Visual Studio can [link it](http://en.wikipedia.org/wiki/Linker) at compile time. The DLL will then typically be copied to your output directory when the application is built. That's all that is necessary to get the code to work properly, the rest is really just preference or convention. Some people prefer to have all the files that exist in the project directory in the solution, so that the Solution Explorer reflects the file system. Typically you will want to have libraries your application depends on somewhere in your solution directory hierarchy so that the entire application is packaged together (making source code control use easier, for instance). You won't want to put this library in the BIN directory or any directory that Visual Studio generates, though, to avoid accidental deletions. In any event, having the reference is the important part, the file being in the project or solution is not necessary. Typically, you'll want to keep external libraries out of your source directories, though, so I wouldn't actually recommend this structure. I tend to use a structure like this, but, again, this is all preference: * Source: Source code and project files * Libraries: DLLs * Support: Miscellaneous code or projects, but not actually part of the application (perhaps deployment scripts)
Adding a dll file to a C# project
[ "", "c#", ".net", "dll", "installation", "project", "" ]
I need some Regular expression experts for an extra hand. :) I have different paths,different folders,different amount of folders. My question:How do I get the last thing - the filename? For example in the path: C:\a\b\c\d\e\fgh.ddj How do I get "fgh.ddj" with regular expressions?
You don't need regex's, you can do it just like this, its a system.io helper function: ``` myfilename = Path.GetFileName(mypath); ```
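For instance, a minimal console sketch using the path from the question (`Path.GetFileName` does pure string parsing, so the file doesn't need to exist):

```csharp
using System;
using System.IO;

class Demo
{
    static void Main()
    {
        string path = @"C:\a\b\c\d\e\fgh.ddj";
        Console.WriteLine(Path.GetFileName(path));                  // fgh.ddj
        Console.WriteLine(Path.GetFileNameWithoutExtension(path));  // fgh
        Console.WriteLine(Path.GetExtension(path));                 // .ddj
    }
}
```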
You can also use FileInfo. When using FileInfo, it actually doesn't matter if the file is present or not. ``` var fileInfo = new FileInfo("C:\a\b\c\d\e\fgh.ddj"); var fileName = fileInfo.Name; //this returns "fgh.ddj" ``` If the file is present, of course there's lots of info about file size, last accessed, etc.
How to get a filename from a path?
[ "", "c#", "regex", "" ]
first of all let me say that I know about dynamic, no PIA, named and optional parameters and finally covariance and contravariance. In my projects 99% of the time my code is in C#, I almost never do Office automation, I don't plan to use any dynamic language any time soon, and, besides being necessary for Office automation, I think named and optional parameters are wrong (and no, overloads do not count, if you have that many overloads the problem is in the design, not the language). So whats there for a C# guy in a static-typed world?, The only thing appears to be covariance and contravariance, which sounds great, but I was hoping something more. The only bits that I'm excited to try are the parallel extensions for Linq but that's about it (and that's [available for 3.5 [as a CTP](http://www.microsoft.com/downloads/details.aspx?FamilyId=348F73FD-593D-4B3C-B055-694C50D2B0F3&displaylang=en)). Why are you exited about C# 4?
The only thing to be excited about in C# 4 from a non-COM, non-dynamic standpoint are covariance and contravariance. Everything else is centered around dynamic typing.
1. Better Garbage Collection 2. New Thread Pooling Engine 3. Code Contracts 4. If you're not doing ASP.NET WebForms development you wouldn't care, but, there are significant improvements there as well. [Learning Resources for .NET 4.0 New Features](http://bogdanbrinzarea.wordpress.com/2009/04/24/learning-net-40-new-features/) ...hit the link for some good resources about some of the new features.
whats new in C# 4 for a static-typed guy
[ "", "c#", ".net-4.0", "" ]
Why does LocalEndpoint = 0.0.0.0 at this point? According to the docs it should be the appropriate address selected by the system. Note: This only occurs on some machines. Most machines return the IP Address I would expect. Is this a bug in .NET? ``` using (Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)) { Console.WriteLine("Connecting"); s.Connect("www.google.com", 80); Console.WriteLine("Connected OK"); s.Send(new byte[] { 1 }); Console.WriteLine("Sent Byte OK"); Console.WriteLine("Local EndPoint = " + s.LocalEndPoint.ToString()); } //Local EndPoint = 0.0.0.0 ``` I have also tried doing ``` s.Bind(new IPEndPoint(IPAddress.Any, 0)); ``` directly after creating the socket and it made no difference. The "problem" machine always returns 0.0.0.0. Here is the result of an ipconfig /all ``` Windows IP Configuration Host Name . . . . . . . . . . . . : andrepc Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No ``` Ethernet adapter Local Area Connection 2: ``` Media State . . . . . . . . . . . : Media disconnected Description . . . . . . . . . . . : VIA VT6105 Rhine Fast Ethernet Adapter Physical Address. . . . . . . . . : 00-30-18-67-A0-EB ``` Ethernet adapter Local Area Connection: ``` Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek RTL8029 PCI Ethernet Adapter Physical Address. . . . . . . . . : 00-C0-DF-E7-C9-5D Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 10.0.0.6 Subnet Mask . . . . . . . . . . . : 255.0.0.0 Default Gateway . . . . . . . . . : 10.0.0.2 DHCP Server . . . . . . . . . . . : 10.0.0.2 DNS Servers . . . . . . . . . . . : 10.0.0.2 Lease Obtained. . . . . . . . . . : Wednesday, May 20, 2009 5:39:06 PM Lease Expires . . . . . . . . . . 
: Thursday, May 21, 2009 5:39:06 PM ``` 10.0.0.6 Would be the IP I would expect as a result.
Ok decided to use a workaround. Tested and works. Can anyone think of a reason why doing it this way might not give an accurate IP/Port? ``` static void Main(string[] args) { try { using (Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)) { Console.WriteLine("Connecting"); s.Connect("www.google.com", 80); Console.WriteLine("Connected OK"); s.Send(new byte[] { 1 }); Console.WriteLine("Sent Byte OK"); Console.WriteLine("Local EndPoint Netstat = " + GetLocalEndPoint(s)); Console.WriteLine("Local EndPoint LocalEndPoint = " + ((IPEndPoint)s.LocalEndPoint).ToString()); } } catch (Exception e) { Console.WriteLine("Exception - " + e.Message); } } static string GetLocalEndPoint(Socket s) { Console.WriteLine(DateTime.Now.TimeOfDay.ToString() + " Find IP LocalEndPoint"); IPGlobalProperties properties = IPGlobalProperties.GetIPGlobalProperties(); TcpConnectionInformation[] connections = properties.GetActiveTcpConnections(); IPEndPoint remoteEP = (IPEndPoint)s.RemoteEndPoint; foreach (TcpConnectionInformation netstat in connections) { if (remoteEP.ToString() == netstat.RemoteEndPoint.ToString()) { Console.WriteLine(DateTime.Now.TimeOfDay.ToString() + " Find IP LocalEndPoint OK"); //Use the "Netstat" IP but the socket.LocalEndPoint port return new IPEndPoint(netstat.LocalEndPoint.Address, ((IPEndPoint)s.LocalEndPoint).Port).ToString(); } } return string.Empty; } Connecting Connected OK Sent Byte OK 09:57:38.5312500 Find IP LocalEndPoint 09:57:38.5312500 Find IP LocalEndPoint OK Local EndPoint Netstat = 10.0.0.6:1711 Local EndPoint LocalEndPoint = 0.0.0.0:1711 ```
I think you are getting the correct results for the local endpoint on your socket. From what I tested, that's basically the default IP, since "0.0.0.0" tells the socket to listen on all network adapters if you listen with that socket. The IPEndPoint.Any property basically equals an IP of "0.0.0.0". I think you're getting your local IP with the netstat approach since you are resolving it in that method.
C# Socket.LocalEndPoint returns 0.0.0.0 on some machines
[ "", "c#", "sockets", "" ]
Before I write my own function to do it, is there any built-in function, or simple one-liner to convert: ``` Array ( [0] => pg_response_type=D [1] => pg_response_code=U51 [2] => pg_response_description=MERCHANT STATUS [3] => pg_trace_number=477DD76B-B608-4318-882A-67C051A636A6 ) ``` Into: ``` Array ( [pg_response_type] => D [pg_response_code] =>U51 [pg_response_description] =>MERCHANT STATUS [pg_trace_number] =>477DD76B-B608-4318-882A-67C051A636A6 ) ``` Just trying to avoid reinventing the wheel. I can always loop through it and use explode.
**Edit** - didn't read the question right at all, whoops.. A `foreach` through the array is the quickest way to do this, e.g. ``` foreach($arr as $key=>$val) { $new_vals = explode("=", $val); $new_arr[$new_vals[0]] = $new_vals[1]; } ```
> I can always loop through it and use explode. that's what you should do.
php array processing question
[ "", "php", "" ]
I've been working on a simple forum for my site. I'm adding a feature that will mark unread posts + forums since your last visit. Im essentially storing a date of last access to the forum with each user account. When a user comes back to the site, a query is ran to fetch all the post\_id and post\_parent\_forum values, which are then added to an array, that's stored in a SESSION variable with the key being the post\_id (which is unique) and the value being the forum id (which wont be unique since posts will appear in the few forums). In the forum index, I use in\_array() for each forum, to see if the forum\_id is in the array, if it is, it will be marked as "unread". In the thread listing, I use array\_key\_exists() to see if the key for each thread ID is in the array, if it is, its marked as "unread". When a post is viewed, the item is removed from the array with the key equal to the ID of the thread. Is this method reasonable, or am I going to run into issues if the forum becomes more popular? Im concerned about running 20 array\_key\_exists() checks on each forum listing. Is it reasonably fast? On a side note.... can I work directly with the SESSION stored array, or do I have to assign its value to a regular variable, remove a key, unset the old session var, and re-set it with the updated array?
php arrays are hashtables! a hash key lookup is not very expensive, so no, i don't think you'll run into performance problems because of this. in\_array is a different matter (it searches through the whole array), but really - it still shouldn't be a problem. beware of premature optimization! > can I work directly with the SESSION stored array yes. `$_SESSION` is (normally) just an array that is populated when you call `session_start()` by unserializing the session file's content. when the script ends or `session_write_close()` is called, the $\_SESSION array is `serialize()`'d back to the file. no magic, really. whether you *should* do that is another question, after all `$_SESSION` is a *global variable* (ewwww).
My practice says: don't worry about hash-tables performance, but think about db-related issues, because they are the most expensive in resources (especially in forum-like projects which are full of text). I never met issues related to PHP performance. MySQL issues always came much faster.
Is this a good way of doing what I want (in PHP)?
[ "", "php", "arrays", "" ]
I am after a Regex expression that will strip out white spaces when there is two or more repeated, leaving just one space behind. For example this line ``` The cow jumped over the moon ``` which has multiple spaces separating the words in some cases would become ``` The cow jumped over the moon ```
Try this regular expression: ``` [ ]+ ``` and replace it with a single space.
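Applied to the sample sentence, a quick sketch in C# (the question is tagged c#; the same pattern works in any regex flavor):

```csharp
using System;
using System.Text.RegularExpressions;

class Demo
{
    static void Main()
    {
        string input = "The  cow jumped    over   the moon";
        // "[ ]+" matches one or more spaces; replacing each run with a
        // single space collapses the repeated spaces.
        string cleaned = Regex.Replace(input, "[ ]+", " ");
        Console.WriteLine(cleaned);  // The cow jumped over the moon
    }
}
```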
``` string cleanedString = Regex.Replace(input, @"\s+", " "); ```
Strip out extra spaces when there is more than one in a row
[ "", "c#", "regex", "" ]
Is there any way to convert a collection of objects into a single new object using LINQ? I want to use this within another LINQ to SQL expression.
Why don't you use the [`string.Join`](http://msdn.microsoft.com/en-us/library/system.string.join.aspx) itself? ``` string.Join("<br/>", collection.Select(e => e.TextProp).ToArray()); ``` (Before .NET 4, `string.Join` only has the `(string, string[])` overload, so the `.ToArray()` call on the LINQ query is required.)
You can use the Aggregate method... ``` var myResults = (from myString in MyStrings select myString) .Aggregate(string.Empty, (results, nextString) => string.Format("{0}<br />{1}", results, nextString)); ``` or ``` var myResults = MyStrings.Aggregate(string.Empty, (results, nextString) => string.Format("{0}<br />{1}", results, nextString)); ```
Is there a LINQ equivalent of string.Join(string, string[])
[ "", "c#", ".net", "linq", "linq-to-sql", "" ]
I'm trying to build some complicated stuff for an C# winform application being build online and trying to gather some information here and there.I've looked on the web that it's possible to get a mac address of a computer on the network using either System.Net.NetworkInformation or System.Management.ManagementClass(which i can't reference for some reasons).Where are my worries 1. Can my web server online know the mac address of a client's machine connected to it? 2. If question 1 is true i guess it will use IP (correct me if I'm wrong) what if client's machine is sitting behind a proxy server or using multiple web proxy? 3. if question 1 and 2 are positive How to do that from the web server.
No, there's no easy way to do that. The MAC address is only resolvable on the same subnet - assuming this isn't a fairly small intranet app, you would not be on the same subnet as your clients. In theory, querying the client with remote WMI would work - but the firewall and permission issues are non-trivial. Again, unless you can control all clients - you're not likely to have success here. About the only thing you could do is a downloadable app - possibly Flash, Silverlight or ActiveX - that interrogated the local machine for you. I'm not sure if that info would be sandboxed by the browser though. My guess is there's an easier way to do what you're trying to do - but you'd need to provide more details on *why* you want the MAC address.
You can't get any of that information from a web server, and you should not try. Consider that machines may have multiple IP addresses and multiple MAC addresses, and may be behind a proxy server or Network Address Translation device, or worse. IP addresses belong to the network layer, and should generally not be used by the Application layer. If nothing else, it's unlikely that the Network Administrators will consult the Developers when making changes to the network that will invalidate your assumptions.
web server to fetch client's machine mac address
[ "", "c#", "asp.net", "iis", "" ]
1. `float ff = 1.2f;` 2. `Float fo = new Float(1.2f);` 3. `double fg = 3.2d;` 4. `Double fh = new Double(2.1d);` Can I use '=' between the (1) and (3) or between the (2) and (4)??
Yes. 1. Makes a plain old data type (AKA a primitive type) called "float." 2. Makes a Java Object called Float that holds that value that happens to be identical to (1) Responding to the edit questions: You will see 1. "possible loss of precision" message if you try `ff = fg`. 2. "incompatible types" if you try `fo = fh`. 3. `fg = ff` will work fine (the float fits in a double). 4. `fh = fo` will still give you an "incompatible types".
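To make the four cases concrete, a minimal sketch (the commented-out lines are the ones that fail to compile):

```java
public class FloatDemo {
    public static void main(String[] args) {
        float ff = 1.2f;              // (1) primitive float
        Float fo = new Float(1.2f);   // (2) wrapper object
        double fg = 3.2d;             // (3) primitive double
        Double fh = new Double(2.1d); // (4) wrapper object

        fg = ff;               // OK: widening conversion float -> double
        // ff = fg;            // error: possible loss of precision
        ff = (float) fg;       // legal only with an explicit cast

        // fh = fo;            // error: incompatible types (Float is not a Double)
        fh = fo.doubleValue(); // convert explicitly; autoboxing handles the rest

        System.out.println(fg + " " + ff + " " + fh);
    }
}
```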
Yes, 2 creates an Object.
Is there any difference between these two statements?
[ "", "java", "types", "floating-point", "primitive", "" ]
I've got a WPF application where **PageItems** are model objects. My main ViewModel has an ObservableCollection of **PageItemViewModels**, each one building itself from its matching PageItem model object. Each **PageItemViewModel** inherits from the abstract class **BaseViewModel** in order to get the INotifyPropertyChanged functionality. Each **PageItemViewModel** also implements the **IPageItemViewModel** in order to make sure it has the needed properties. I will eventually have around 50 pages so I want to **eliminate any unnecessary code**: * **SOLVED (SEE BELOW)**: is there a way I can get PageItemViewModel classes to **inherit IdCode and Title** so I don't have to implement them in each class? I can't put them in BaseViewModel since other ViewModels inherit it which don't need these properties, and I can't put them in IPageItemViewModel since it is only an interface. I understand I need **multiple inheritance** for this which C# doesn't support * **SOLVED (SEE BELOW)**: is there a way I can get rid of the **switch** statement, e.g. somehow use **reflection** instead? 
Below is a **stand-alone Console application** which demonstrates the code I have in my **WPF** application: ``` using System.Collections.Generic; namespace TestInstantiate838 { public class Program { static void Main(string[] args) { List<PageItem> pageItems = PageItems.GetAll(); List<ViewModelBase> pageItemViewModels = new List<ViewModelBase>(); foreach (PageItem pageItem in pageItems) { switch (pageItem.IdCode) { case "manageCustomers": pageItemViewModels.Add(new PageItemManageCustomersViewModel(pageItem)); break; case "manageEmployees": pageItemViewModels.Add(new PageItemManageEmployeesViewModel(pageItem)); break; default: break; } } } } public class PageItemManageCustomersViewModel : ViewModelBase, IPageItemViewModel { public string IdCode { get; set; } public string Title { get; set; } public PageItemManageCustomersViewModel(PageItem pageItem) { } } public class PageItemManageEmployeesViewModel : ViewModelBase, IPageItemViewModel { public string IdCode { get; set; } public string Title { get; set; } public PageItemManageEmployeesViewModel(PageItem pageItem) { } } public interface IPageItemViewModel { //these are the properties which every PageItemViewModel needs string IdCode { get; set; } string Title { get; set; } } public abstract class ViewModelBase { protected void OnPropertyChanged(string propertyName) { //this is the INotifyPropertyChanged method which all ViewModels need } } public class PageItem { public string IdCode { get; set; } public string Title { get; set; } } public class PageItems { public static List<PageItem> GetAll() { List<PageItem> pageItems = new List<PageItem>(); pageItems.Add(new PageItem { IdCode = "manageCustomers", Title = "ManageCustomers"}); pageItems.Add(new PageItem { IdCode = "manageEmployees", Title = "ManageEmployees"}); return pageItems; } } } ``` # Refactored: interface changed to abstract class ``` using System; using System.Collections.Generic; namespace TestInstantiate838 { public class Program { static void 
Main(string[] args) { List<PageItem> pageItems = PageItems.GetAll(); List<ViewModelPageItemBase> pageItemViewModels = new List<ViewModelPageItemBase>(); foreach (PageItem pageItem in pageItems) { switch (pageItem.IdCode) { case "manageCustomers": pageItemViewModels.Add(new PageItemManageCustomersViewModel(pageItem)); break; case "manageEmployees": pageItemViewModels.Add(new PageItemManageEmployeesViewModel(pageItem)); break; default: break; } } foreach (ViewModelPageItemBase pageItemViewModel in pageItemViewModels) { System.Console.WriteLine("{0}:{1}", pageItemViewModel.IdCode, pageItemViewModel.Title); } Console.ReadLine(); } } public class PageItemManageCustomersViewModel : ViewModelPageItemBase { public PageItemManageCustomersViewModel(PageItem pageItem) { IdCode = pageItem.IdCode; Title = pageItem.Title; } } public class PageItemManageEmployeesViewModel : ViewModelPageItemBase { public PageItemManageEmployeesViewModel(PageItem pageItem) { IdCode = pageItem.IdCode; Title = pageItem.Title; } } public abstract class ViewModelPageItemBase : ViewModelBase { //these are the properties which every PageItemViewModel needs public string IdCode { get; set; } public string Title { get; set; } } public abstract class ViewModelBase { protected void OnPropertyChanged(string propertyName) { //this is the INotifyPropertyChanged method which all ViewModels need } } public class PageItem { public string IdCode { get; set; } public string Title { get; set; } } public class PageItems { public static List<PageItem> GetAll() { List<PageItem> pageItems = new List<PageItem>(); pageItems.Add(new PageItem { IdCode = "manageCustomers", Title = "ManageCustomers"}); pageItems.Add(new PageItem { IdCode = "manageEmployees", Title = "ManageEmployees"}); return pageItems; } } } ``` # Answer to eliminating Switch statement: Thanks Jab: ``` string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().GetName().Name; string viewModelName = assemblyName + ".ViewModels.PageItem" + 
StringHelpers.ForcePascalNotation(pageItem.IdCode) + "ViewModel"; var type = Type.GetType(viewModelName); var viewModel = Activator.CreateInstance(type, pageItem) as ViewModelBase; AllPageViewModels.Add(viewModel); ```
One solution that isn't very pretty, but works, would be to use convention to get rid of the switch statement. This assumes you can change the IdCodes or atleast modify the case to match the ViewModel. ``` var type = Type.GetType("PageItem" + pageItem.IdCode + "ViewModel"); var viewModel = Activator.CreateInstance(type) as ViewModelBase; pageItemViewModels.Add(viewModel); ``` Note that you should add error checking here, there are a couple points of failure here. It is, however, better than having to maintain an ever-growing switch statement.
Can you create a class that inherits from BaseViewModel that will implement these two properties - your PageItemViewModel classes that need this could then inherit from that.
How can I emulate multiple-inheritance and use reflection to optimize this code?
[ "", "c#", "wpf", "inheritance", "abstract-class", "multiple-inheritance", "" ]
Is this action too redundant - is there a better way to simplify it? ``` [Authorize, AcceptVerbs(HttpVerbs.Post)] public ActionResult ChangePassword(string oldPassword, string newPassword, string confirmPassword) { var oldPasswordValidationResults = _validatorProvider.Validate<IStringLengthValidator>(oldPassword); oldPasswordValidationResults.Where(r => !r.Passed) .Each(r => ModelState.AddModelError("OldPassword", "Please enter your old password.")); var newPasswordValidationResults = _validatorProvider.Validate<IStringLengthValidator>(newPassword); newPasswordValidationResults.Where(r => !r.Passed) .Each(r => ModelState.AddModelError("NewPassword", "Please enter a new password.")); if (!ModelState.IsValid) return View(); if (newPassword != confirmPassword) ModelState.AddModelError("ConfirmPassword", "The passwords do not match."); if (!ModelState.IsValid) return View(); if (!_userMembershipService.ChangePassword(oldPassword, newPassword)) ModelState.AddModelError("_FORM", "Unable to change your password."); if (!ModelState.IsValid) return View(); return View("ChangePasswordSuccessful"); } ``` All of these seem to me to have a code smell... ``` if (!ModelState.IsValid) return View(); ```
This change seems to preserve your original intentions a little better: ``` if (newPassword != confirmPassword) { ModelState.AddModelError("ConfirmPassword", "The passwords do not match."); return View(); } if (!_userMembershipService.ChangePassword(oldPassword, newPassword)) { ModelState.AddModelError("_FORM", "Unable to change your password."); return View(); } return View("ChangePasswordSuccessful"); ```
Nested `if` statements may help simplify the code: ``` [Authorize, AcceptVerbs(HttpVerbs.Post)] public ActionResult ChangePassword(string oldPassword, string newPassword, string confirmPassword) { var oldPasswordValidationResults = _validatorProvider.Validate<IStringLengthValidator>(oldPassword); oldPasswordValidationResults.Where(r => !r.Passed) .Each(r => ModelState.AddModelError("OldPassword", "Please enter your old password.")); var newPasswordValidationResults = _validatorProvider.Validate<IStringLengthValidator>(newPassword); newPasswordValidationResults.Where(r => !r.Passed) .Each(r => ModelState.AddModelError("NewPassword", "Please enter a new password.")); if (ModelState.IsValid) { if (newPassword == confirmPassword) { if (_userMembershipService.ChangePassword(oldPassword, newPassword)) { return View("ChangePasswordSuccessful"); } else { ModelState.AddModelError("_FORM", "Unable to change your password."); } } else { ModelState.AddModelError("ConfirmPassword", "The passwords do not match."); } } return View(); } ```
Redunancy elimination in ASP.NET MVC Actions
[ "", "c#", "asp.net-mvc", "refactoring", "" ]
I have the following set of data ``` <ids> <id1 attr1="value1" attr2="value2" /> <id2 attr3="value3" attr4="value4" /> <id3 attr2="value6" attr5="value7" /> </ids> ``` Basically, it's an XML that can have any node name with any attribute name with any attribute value. After parsing the XML, I store the attribute data in a Dictionary. Then I store that same Dictionary as a value with the node name as a key. So my data structure would be a Dictionary<string, Dictionary<string, string>> (let's give this a variable name called "dict") So if I wanted to get the value for attr2 in the id1 node, I would do: ``` string value = dict["id1"]["attr2"]; // value will be value2 ``` I think this is a pretty simple and workable solution for my needs, but there just seems to be this voice at the back of my head telling me that there is a different data structure or simpler solution that I'm missing out on. What does everyone think?
I think your solution is a good one. It will provide very fast lookups, and matches your domain exactly. Is your main problem with the nested dictionaries? If so, I would suggest that you not worry about it - using collections of collections is often a very useful tool. My only complaint would be this: If you're not using this frequently, you're going to be loading a lot of information into a data structure that may be unnecessary. If this is for one-time lookups, leaving it in XML and using XPath queries may be a more optimal solution than pre-parsing and loading the entire thing into memory. If you're querying this frequently, though, this is the better approach.
How about a class? ``` public class YourId { public string Id { get; set; } public string Attribute1 { get; set; } public string Value { get; set; } } ``` Then you could create a List and populate it via your xml... It would be easy to work with and you could use linq with it: ``` List<YourId> items = GetIdsFromXml(); var query = from i in items where i.Id == "id1" select i; // or... items.Where(i => i.Attribute == "blah").ToList(); // ect. ```
What's a better alternative to the following data structure: Dictionary<string, Dictionary<string, string>>
[ "c#", "data-structures" ]
I am currently writing an open-source SDK for a program that I use, and I'm using an IoC container (Ninject) internally to wire up all my internal dependencies.

I have some objects that are marked as internal so that I don't crowd the public API, as they are only used internally and shouldn't be seen by the user: stuff like factories and other objects. The problem that I'm having is that Ninject can't create internal objects, which means that I have to mark all my internal objects public, which crowds up the public API.

My question is: is there some way to get around this problem, or am I doing it all wrong?

PS. I have thought about using the InternalsVisibleTo attribute, but I feel like that is a bit of a smell.
**Quick look at the other answers:** it doesn't seem like you are doing something so different that there is something fundamentally wrong with Ninject, such that you would need to modify it or replace it. In many cases you can't "go straight for the internals" because they rely upon unresolved dependency injection; hence the usage of Ninject in the first place. Also, it sounds like you already do have an internal set of interfaces, which is why the question was posed.

**Thoughts:** one problem with using Ninject directly in your SDK or library is that your users will then have to use Ninject in their code as well. This probably isn't an issue for you, because it is your IoC choice and you were going to use it anyway. But what if they want to use another IoC container? Now they effectively have two, duplicating effort. Worse yet, what if they want to use Ninject v2 and you've used v1.5? That really complicates the situation.

**Best case:** if you can refactor your classes such that they get everything they need through dependency injection, then this is the cleanest, because the library code doesn't need *any* IoC container. The app can wire up the dependencies and it just flows. This isn't always possible, though, as sometimes the library classes need to create instances whose dependencies you can't resolve through injection.

**Suggestion:** the [CommonServiceLocator](http://commonservicelocator.codeplex.com/) (and [the Ninject adapter](http://github.com/enkari/ninject/tree/master/src/CommonServiceLocator.NinjectAdapter/) for it) were specifically designed for this situation (libraries with dependencies). You code against the CommonServiceLocator, and then the application specifies which DI/IoC container actually backs the interface. It is a bit of a pain in that you now need both Ninject *and* the CommonServiceLocator in your app, but the CommonServiceLocator is quite lightweight. Your SDK/library code *only* uses the CommonServiceLocator, which is fairly clean.
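The "best case" above (pure constructor injection, no container inside the library) can be sketched in a few lines. This is an illustrative sketch in Python rather than C#, and the class names (`Logger`, `SdkClient`) are hypothetical:

```python
class Logger:
    def log(self, msg):
        return f"[log] {msg}"

class SdkClient:
    # The dependency is injected through the constructor; the library
    # never resolves it internally, so no IoC container is required here.
    def __init__(self, logger):
        self._logger = logger

    def do_work(self):
        return self._logger.log("work done")

# The application (or any container it chooses) composes the object graph.
client = SdkClient(Logger())
print(client.do_work())  # [log] work done
```

The point is that wiring happens entirely at the application boundary, so the library ships no container dependency at all.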
I guess you don't even need that. IoC is for public stuff. Go straight for internals. But that's just my intuition...
Using IoC with internal objects
[ "c#", "inversion-of-control", "ninject" ]
To compute the similarity between two documents, I create a feature vector containing the term frequencies. But then, for the next step, I can't decide between "[Cosine similarity](http://en.wikipedia.org/wiki/Cosine_similarity)" and "[Hamming distance](http://en.wikipedia.org/wiki/Hamming_distance)".

My question: do you have experience with these algorithms? Which one gives you better results?

In addition to that: could you tell me how to code cosine similarity in PHP? For Hamming distance, I've already got the code:

```
function check ($terms1, $terms2) {
    $counts1 = array_count_values($terms1);
    $totalScore = 0;
    foreach ($terms2 as $term) {
        if (isset($counts1[$term])) $totalScore += $counts1[$term];
    }
    return $totalScore * 500 / (count($terms1) * count($terms2));
}
```

I don't want to use any other algorithm; I would only like help deciding between these two. And maybe someone can say something about how to improve the algorithms. Will I get better results if I filter out the stop words or common words?

I hope you can help me. Thanks in advance!
A Hamming distance should be computed between two strings of equal length, with order taken into account. As your documents are certainly of different lengths, and if word positions do not matter, cosine similarity is better (please note that depending on your needs, better solutions exist). :)

Here is a cosine similarity function for two arrays of words:

```
function cosineSimilarity($tokensA, $tokensB)
{
    $a = $b = $c = 0;
    $uniqueTokensA = $uniqueTokensB = array();
    $uniqueMergedTokens = array_unique(array_merge($tokensA, $tokensB));

    foreach ($tokensA as $token) $uniqueTokensA[$token] = 0;
    foreach ($tokensB as $token) $uniqueTokensB[$token] = 0;

    foreach ($uniqueMergedTokens as $token) {
        $x = isset($uniqueTokensA[$token]) ? 1 : 0;
        $y = isset($uniqueTokensB[$token]) ? 1 : 0;
        $a += $x * $y;
        $b += $x;
        $c += $y;
    }

    return $b * $c != 0 ? $a / sqrt($b * $c) : 0;
}
```

It is fast (`isset()` instead of `in_array()` is a killer on large arrays). As you can see, the result does not take into account the "magnitude" of each word. I use it to detect multi-posted messages of "almost" copy-pasted texts. It works well. :)

**The best link about string similarity metrics:** <http://www.dcs.shef.ac.uk/~sam/stringmetrics.html>

For further interesting reading:

<http://www.miislita.com/information-retrieval-tutorial/cosine-similarity-tutorial.html>

<http://bioinformatics.oxfordjournals.org/cgi/content/full/22/18/2298>
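The PHP function above computes a set-based (binary) cosine: each unique token counts once per document, so the score is |A ∩ B| / sqrt(|A| · |B|) over unique tokens. A compact Python equivalent, for illustration:

```python
from math import sqrt

def cosine_similarity(tokens_a, tokens_b):
    # Binary (set-based) cosine, mirroring the PHP version:
    # each unique token contributes 0 or 1 per document.
    set_a, set_b = set(tokens_a), set(tokens_b)
    overlap = len(set_a & set_b)
    denom = sqrt(len(set_a) * len(set_b))
    return overlap / denom if denom else 0.0

print(cosine_similarity(["the", "cat", "sat"], ["the", "cat", "sat"]))  # 1.0
print(cosine_similarity(["cat"], ["dog"]))                              # 0.0
```

Identical token sets score 1.0 and disjoint sets score 0.0, which matches the behavior of the PHP function.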
Unless I'm mistaken, I think you've got an algorithm *halfway between the two algorithms*. For Hamming distance, use:

```
function check ($terms1, $terms2) {
    $counts1 = array_count_values($terms1);
    $totalScore = 0;
    foreach ($terms2 as $term) {
        if (isset($counts1[$term])) $totalScore += 1;
    }
    return $totalScore * 500 / (count($terms1) * count($terms2));
}
```

(Note that you're only adding 1 for each matched element in the token vectors.)

And for cosine similarity, use:

```
function check ($terms1, $terms2) {
    $counts1 = array_count_values($terms1);
    $counts2 = array_count_values($terms2);
    $totalScore = 0;
    foreach ($terms2 as $term) {
        if (isset($counts1[$term])) $totalScore += $counts1[$term] * $counts2[$term];
    }
    return $totalScore / (count($terms1) * count($terms2));
}
```

(Note that you're adding the product of the token counts between the two documents.)

The main difference between the two is that **cosine similarity will yield a stronger indicator when two documents have the same word multiple times**, while **Hamming distance doesn't care how often the individual tokens come up**.

**Edit:** I just noticed your query about removing function words etc. I do advise this if you're going to use cosine similarity: as function words are quite frequent (in English, at least), you might skew a result by not filtering them out. If you use Hamming distance, the effect will not be quite as great, but it could still be appreciable in some cases. Also, if you have access to a [lemmatizer](http://en.wikipedia.org/wiki/Lemmatization), it will reduce misses when one document contains "galaxies" and the other contains "galaxy", for instance.

Whichever way you go, good luck!
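The difference described above, that count-weighted (cosine-style) scoring rewards repeated words while Hamming-style matching does not, can be demonstrated with a short sketch. This is an illustration of the two scoring ideas in Python, not a literal port of the PHP functions, and the sample documents are made up:

```python
from collections import Counter

def hamming_overlap(terms1, terms2):
    # Adds 1 for each token in terms2 that appears anywhere in terms1.
    counts1 = Counter(terms1)
    return sum(1 for t in terms2 if t in counts1)

def cosine_overlap(terms1, terms2):
    # Adds the product of the per-document counts for each shared token.
    c1, c2 = Counter(terms1), Counter(terms2)
    return sum(c1[t] * c2[t] for t in c1.keys() & c2.keys())

doc1 = ["galaxy", "galaxy", "galaxy", "star"]
doc2 = ["galaxy", "galaxy"]

print(hamming_overlap(doc1, doc2))  # 2 -- one point per matched token in doc2
print(cosine_overlap(doc1, doc2))   # 6 -- 3 * 2 for the shared word "galaxy"
```

Repetition of "galaxy" triples the cosine-style score relative to a single occurrence, while the Hamming-style score only reflects how many tokens of `doc2` found a match at all.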
Cosine similarity vs Hamming distance
[ "php", "relationship", "similarity" ]