I am working on a GUI application and would rather distribute just one jar as opposed to multiple ones. Can you control this with the manifest?
Merge your jars to one jar. See this [thread](https://stackoverflow.com/questions/81260/java-easiest-way-to-merge-a-release-into-one-jar-file).
Another option is to use a custom class loader such as this one: <http://one-jar.sourceforge.net/>
Can you add multiple jars in a jar file and then launch that jar file?
[ "", "java", "jar", "" ]
I use Eclipse for both Java and Flex development. Recently, I changed its color scheme to make Java code look nicer. But that messed up the Flex color scheme somehow. Is it possible to have my new color scheme for Java and the default color scheme for Flex code? Or, how can I edit the color scheme for Flex code as well?
Was able to sort it out with help from this post: <http://polygeek.com/302_flex_changing-font-colors-in-flex-buildereclipse>
You can access the Flex syntax coloring via Window -> Preferences -> Flex -> Editors -> Syntax Coloring. It shouldn't interfere with the Java syntax coloring which is located under Window -> Preferences -> Java -> Editor -> Syntax Coloring. Perhaps you changed the general font and color settings under General -> Appearance and General -> Editors, which might have caused the problem. Settings applied there will be globally applied to all editors.
Eclipse color themes - java and flex
[ "", "java", "apache-flex", "eclipse", "color-scheme", "" ]
I am working with a LINQ to SQL query and have run into an issue where I have 4 optional fields to filter the data result on. By optional, I mean the user has the choice to enter a value or not. Specifically, a few text boxes that could have a value or have an empty string, and a few drop down lists that could have had a value selected or maybe not... For example: ``` using (TagsModelDataContext db = new TagsModelDataContext()) { var query = from tags in db.TagsHeaders where tags.CST.Equals(this.SelectedCust.CustCode.ToUpper()) && Utility.GetDate(DateTime.Parse(this.txtOrderDateFrom.Text)) <= tags.ORDDTE && Utility.GetDate(DateTime.Parse(this.txtOrderDateTo.Text)) >= tags.ORDDTE select tags; this.Results = query.ToADOTable(rec => new object[] { query }); } ``` Now I need to add the following fields/filters, but only if they are supplied by the user. 1. Product Number - Comes from another table that can be joined to TagsHeaders. 2. PO Number - a field within the TagsHeaders table. 3. Order Number - Similar to PO #, just a different column. 4. Product Status - If the user selected this from a drop down, I need to apply the selected value here. The query I already have is working great, but to complete the function, I need to be able to add these 4 other items in the where clause; I just don't know how!
You can code your original query: ``` var query = from tags in db.TagsHeaders where tags.CST.Equals(this.SelectedCust.CustCode.ToUpper()) && Utility.GetDate(DateTime.Parse(this.txtOrderDateFrom.Text)) <= tags.ORDDTE && Utility.GetDate(DateTime.Parse(this.txtOrderDateTo.Text)) >= tags.ORDDTE select tags; ``` And then based on a condition, add additional where constraints. ``` if(condition) query = query.Where(i => i.PONumber == "ABC"); ``` I am not sure how to code this with the query syntax but it does work with a lambda. It also works with query syntax for the initial query and a lambda for the secondary filter. You can also include an extension method (below) that I coded up a while back to include conditional where statements. (It doesn't work well with the query syntax): ``` var query = db.TagsHeaders .Where(tags => tags.CST.Equals(this.SelectedCust.CustCode.ToUpper())) .Where(tags => Utility.GetDate(DateTime.Parse(this.txtOrderDateFrom.Text)) <= tags.ORDDTE) .Where(tags => Utility.GetDate(DateTime.Parse(this.txtOrderDateTo.Text)) >= tags.ORDDTE) .WhereIf(condition1, tags => tags.PONumber == "ABC") .WhereIf(condition2, tags => tags.XYZ > 123); ``` The extension method: ``` public static IQueryable<TSource> WhereIf<TSource>( this IQueryable<TSource> source, bool condition, Expression<Func<TSource, bool>> predicate) { if (condition) return source.Where(predicate); else return source; } ``` Here is the same extension method for IEnumerables: ``` public static IEnumerable<TSource> WhereIf<TSource>( this IEnumerable<TSource> source, bool condition, Func<TSource, bool> predicate) { if (condition) return source.Where(predicate); else return source; } ```
Just need to use a conditional checking for the parameter's existence. For instance: ``` where (string.IsNullOrEmpty(ProductNumber) || ProductNumber == tags.productNumber) ``` That way if the product number isn't entered that expression will return true in all cases, but if it is entered it will only return true when matching.
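The two approaches in the answers above (chaining an extra filter only when a value is supplied, and folding the "was it supplied?" check into the predicate so it is vacuously true when empty) are not LINQ-specific. Here is a minimal sketch of both in Python; the record shape and field names are made up for illustration:

```python
def filter_tags(tags, cust_code, po_number=None, order_number=None):
    """Filter tag records, applying each optional criterion only when supplied."""
    results = (t for t in tags if t["cst"] == cust_code.upper())
    # Approach 1: add a filter step only if the user supplied a value
    if po_number:
        results = (t for t in results if t["po_number"] == po_number)
    # Approach 2: always include the clause, but make it vacuously
    # true when the optional value is missing
    results = (t for t in results
               if not order_number or t["order_number"] == order_number)
    return list(results)

tags = [
    {"cst": "ACME", "po_number": "ABC", "order_number": "1"},
    {"cst": "ACME", "po_number": "XYZ", "order_number": "2"},
]
print(len(filter_tags(tags, "acme")))                   # 2
print(len(filter_tags(tags, "acme", po_number="ABC")))  # 1
```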
LINQ to SQL Where Clause Optional Criteria
[ "", "c#", "asp.net", "linq", "linq-to-sql", "" ]
Maybe you can help me with a SQL Query: I have a conversion value in a secondary table and the following structure: ``` ID PRICE_BRL PRICE_USD -- --------- --------- 1 10 5 2 12 NULL 3 NULL 3 4 14 NULL 5 NULL 4 6 NULL NULL ``` I need a result set that prioritizes the first column and, in case of NULL, gives me the second column value multiplied by the conversion value stored in the secondary table. Something like, in pseudo-code: ``` SELECT id, ( IF (price_brl != null) price_brl ELSE price_usd * tbl_2.value ) as final_price FROM tbl_1 ``` I think it must be simple using joins, but I can't figure it out! Thanks in advance.
Pseudo code as well: ``` select id, coalesce(price_brl, price_usd * tbl_2.value) from tbl_1 inner join tbl2 ```
``` select id, isnull( price_brl, price_usd * tbl_2.value) from tbl_1 inner join tbl_2 on tbl_1.id=tbl_2.id ``` Obviously, you'll need to adjust the join. But I think this will do the trick.
Conditional SELECT of a column
[ "", "sql", "database", "conditional-statements", "" ]
Environment - VS2008, Vista SP1. I have written a process management service which can launch applications either in session 0 or the interactive console (usually 1). Please note this is NOT the normal mode of operation, it's for in-house debug purposes only. In the field, these processes will be safely hidden away in session 0. Security concerns do not apply. ### Clearly people aren't reading this: security concerns do not apply. We have dozens of existing server apps (NOT services) written like this. We're not about to completely revamp these applications, we just need to be able to get at their inbuilt debug dialogs when running release versions in-house. I already know all about the canonical solution and pipes etc. If it was acceptable to add remote interfaces into all these apps, that's what we'd be doing. I use the following code to do this: ``` ZeroMemory (&sui, sizeof(STARTUPINFO)); sui.cb = sizeof (STARTUPINFO); sui.wShowWindow = pTask->GetWinStartState() ; sui.dwFlags = STARTF_USESHOWWINDOW ; ZeroMemory (&pi,sizeof(pi)); if (bInteractive) { HANDLE hToken = NULL; DWORD dwSessionId = WTSGetActiveConsoleSessionId(); WTSQueryUserToken (dwSessionId, &hToken); sui.lpDesktop = TEXT("winsta0\\default"); LPVOID pEnv = NULL; DWORD dwCreationFlag = NORMAL_PRIORITY_CLASS | CREATE_NEW_CONSOLE; HMODULE hModu = LoadLibrary(TEXT("Userenv.dll")); if (hModu ) { if (CreateEnvironmentBlock (&pEnv, hToken, FALSE)) dwCreationFlag |= CREATE_UNICODE_ENVIRONMENT; else pEnv = NULL; } bCreatedOk = CreateProcessAsUser (hToken, NULL, (LPTSTR)(pTask->GetExeName()), NULL, NULL, FALSE, dwCreationFlag, pEnv, NULL, &sui, &pi); } else { bCreatedOk = CreateProcess (NULL, ... blah...); } ``` This all works fine and I can run and monitor native processes both in the Vista service session and the console. Great. Cakes and ale for everyone. So here's the problem. 
If I try to run a **winforms** (C#) app interactively like this, it appears to run, shows up in Process Explorer as running in session 1, but on the desktop... nada. No window appears at all. The process runs up and shuts down all fine, but no window ever appears. The exact same winform exe run from explorer also shows up in session 1, but this time appears on the desktop just fine. Any ideas ?
Despite the evident hysteria there is nothing wrong with launching an application from a service into an interactive session provided it is done with the **same privileges as the interactive user or lower**. Since you are launching as the interactive user there can be no privilege escalation. What you are doing does work. I suspect that the issue has something to do with your STARTUPINFO struct. You appear to be creating your sui on the stack but you don't show everything you are doing with it. Are you initializing it to all 0s? If not, you may be getting some garbage from the stack that is causing the window not to show, or to show at co-ordinates off the screen.
In a word, "don't". Services typically run with reduced privileges and NOT as the current user. As such, in Vista+ they're not allowed to interact with the user's desktop. On top of all that, services get a null window station. You used to be able to check a box that said something like "Allow service to interact with desktop", but not anymore. It's bad practice. Your best bet is to create a helper app that runs in the user's context and communicates with the service via a named pipe, LRPC or a socket, then have your helper app launch the program for the user. This is the way most anti-virus software now works. Also, read [this whitepaper](http://www.microsoft.com/whdc/system/vista/services.mspx) from Microsoft on the subject. Services can't run in anything other than session 0. NOTE: a little research seems to indicate that you need to duplicate the token using something like this: ``` DuplicateTokenEx(hTokenNew,MAXIMUM_ALLOWED,NULL, SecurityIdentification,TokenPrimary,&hTokenDup); ```
Launching a .Net winforms application interactively from a service
[ "", "c++", "winforms", "winapi", "windows-services", "" ]
**Question:** IE and Firefox / Safari seem to deal differently with BASE HREF and Javascript window.location type requests. First, is this an accurate description of the problem? What's going on? And what's the best cross-browser solution to deal with this situation? **Context:** I have a small PHP flat file sitelet (it's actually a usability testing prototype). I dynamically generate the BASE tag's HREF value in PHP, i.e. if it's running on our company's server, it's: ``` $basehref = 'http://www.example.com/alpha/bravo/UsabilityTest/'; ``` and on my local dev machine, it's: ``` $basehref = 'http://ellen.local/delta/echo/foxtrot/UsabilityTest/'; ``` For one of the tasks, I collect some user input, do some transformations on it in Javascript, and send to the server using code like this: ``` function allDone() { // elided code for simplicity of stackoverflow question var URI = "ProcessUserInput.php?"; URI = URI + "alphakeys=" + encodeURI( keys.join(",") ); URI = URI + "&sortedvalues=" + encodeURI( values.join(",") ); window.location = URI; } ``` Both the javascript file (containing function **allDone()**) and the processing PHP script (**ProcessUserInput.php**) live in a subdirectory of UsabilityTest. In other words, their actual URL is <http://www.example.com/alpha/bravo/UsabilityTest/>**foxtrot/ProcessUserInput.php** aka `$basehref . '/foxtrot/ProcessUserInput.php'` **The Problem** IE's JavaScript basically seems to ignore the BASE HREF. The javascript and the PHP processor live in the same directory, so the call to **ProcessUserInput.php** works out fine. The input gets processed and everything works fine. But when I test on Firefox, the JavaScript *does* appear to use the BASE HREF, because the script's output gets sent to ``` $basehref . '/ProcessUserInput.php' ``` This breaks, because **ProcessUserInput.php** is in a subdirectory of basehref. However, if I add the subdirectory name to the javascript, it no longer works in IE. 
**Solutions?** I can think of a few ways to solve this: * In Javascript, read the HREF property of the BASE tag and manually prepend to `var URI` in the javascript, calling a fully-resolved absolute URL * Process the **.js** file with PHP and insert the `$basehref` variable into the script * Move the files around * Something else? I'm sure there must be other ways to solve this too. What's the best way to deal with BASE HREF in JavaScript when IE and Firefox apply it differently in JavaScript?
Using the `assign` method of window.location seems like the most straightforward answer. Instead of ``` window.location = URI; ``` I'm using this: ``` window.location.assign( URI ); ``` which is doing the right thing in both IE and Firefox.
> IE and Firefox / Safari seem to deal differently with BASE HREF and Javascript window.location type requests. Yes, this is a long-standing difference going back to the early days of Netscape-vs-IE. IE enforces base-href only at the point a document element is interacted-with. So, you can `createElement('a')`, set a relative `href` and `click()` it\*, but the base-href will be ignored; appendChild it to the document containing the base-href, and it'll work. On the other browsers the base-href is taken as global per-window and always applied. Which is right? It seems to be unspecified. The original JavaScript docs say only that `location.hash` (and hence, `location` applied as a string): > represents a complete URL So setting it to a relative URL would seem to be an undefined operation. (\*: link.click() is a non-standard method supported by IE and Opera) > read the HREF property of the BASE tag and manually prepend Probably what I'd do, yeah, if you're dead set on using <base>.
BASE HREF, javascript, and Internet Explorer vs. Firefox
[ "", "javascript", "html", "href", "" ]
I have written a message board as my first ASP.NET project. It seems to work well so far. However, one of the features I have is that each message has a spam rating. It is simply the number of times that viewers have marked the message as spam divided by the total number of times the message has been viewed. The idea is to allow users to ignore messages with high spam ratings if they choose to do so. However, the whole system can be foiled by a spammer simply viewing his own message and refreshing it a bunch of times, which will increase the number of times the message has been viewed. It can also be foiled by someone who marks the message as spam a bunch of times in a row. I need a way to determine whether a particular visitor has already viewed a particular message, and a way to determine whether a particular user has already marked a particular message as spam. I don't want to require registration - I want anyone to be able to view or post messages. Two ideas I've been considering are setting a cookie when the visitor gets onto the site so I can track them, or creating a new data table that logs IP addresses of users when they view a message or mark it as spam. Can anyone else think of a better way? Does .NET have any built in features that might help me determine whether the visitor is viewing a page for the first time?
The most reliable measure is IP address filtering as the spammer could circumvent any other method by writing his own program to request the page. It will affect your clients behind NAT, but for spam filtering purposes, it works pretty OK.
ASP.Net has a built-in feature called `Personalization` that can be used to detect and remember anonymous users alongside normal "registered" users.
Providing a feature only once to each unique visitor
[ "", "c#", ".net", "unique", "visitor-pattern", "" ]
I'm trying to write a light-weight image viewing application. However, there are system memory limitations with .NET. When trying to load large bitmaps (**9000 x 9000 px** or larger, 24-bit), I get a System.OutOfMemoryException. This is on a Windows 2000 PC with 2GB of RAM (of which 1.3GB is used up). It also takes a lot of time to attempt loading the files. The following code generates this error: ``` Image image = new Bitmap(filename); using (Graphics gfx = this.CreateGraphics()) { gfx.DrawImage(image, new Point(0, 0)); } ``` As does this code: ``` Stream stream = (Stream)File.OpenRead(filename); Image image = Image.FromStream(stream, false, false); using (Graphics gfx = this.CreateGraphics()) { gfx.DrawImage(image, new Rectangle(0, 0, 100, 100), 4000, 4000, 100, 100, GraphicsUnit.Pixel); } ``` Also, it is enough to do just this: ``` Bitmap bitmap = new Bitmap(filename); IntPtr handle = bitmap.GetHbitmap(); ``` The latter code was intended for use with GDI. While researching this, I found out that this is in fact a memory issue where .NET tries to allocate twice as much as is needed in a single contigous block of memory. <http://bytes.com/groups/net-c/279493-drawing-large-bitmaps> I know from other applications (Internet Explorer, MS Paint etc.) that it IS possible to open large images, and rather quickly. My question is, ***how do I use large bitmaps with .NET?*** Is there anyway to stream them, or non-memory load them?
This is a two part question. The first question is how you can load large images without running out of memory (1); the second is about improving loading performance (2). (1) Consider an application like Photoshop where you have the ability to work with huge images consuming gigabytes on the filesystem. Keeping the entire image in memory and still having enough free memory to perform operations (filters, image processing and so on, or even just adding layers) would be impossible on most systems (even 8GB x64 systems). That is why applications such as this use the concept of swap files. Internally I'm assuming that Photoshop uses a proprietary file format, suitable for their application design and built to support partial loads from the swap, enabling them to load parts of a file into memory to process it. (2) Performance can be improved (quite a lot) by writing custom loaders for each file format. This requires you to read up on the file headers and structure of the file formats you want to work with. Once you've gotten the hang of it it's not *that* hard, but it's not as trivial as doing a method call. For example, you could google for FastBitmap to see examples of how you can load a bitmap (BMP) file very fast, including decoding the bitmap header.
This involves P/Invoke, and to give you some idea of what you are up against you will need to define the bitmap structures such as ``` [StructLayout(LayoutKind.Sequential, Pack = 1)] public struct BITMAPFILEHEADER { public Int16 bfType; public Int32 bfSize; public Int16 bfReserved1; public Int16 bfReserved2; public Int32 bfOffBits; } [StructLayout(LayoutKind.Sequential)] public struct BITMAPINFO { public BITMAPINFOHEADER bmiHeader; public RGBQUAD bmiColors; } [StructLayout(LayoutKind.Sequential)] public struct BITMAPINFOHEADER { public uint biSize; public int biWidth; public int biHeight; public ushort biPlanes; public ushort biBitCount; public BitmapCompression biCompression; public uint biSizeImage; public int biXPelsPerMeter; public int biYPelsPerMeter; public uint biClrUsed; public uint biClrImportant; } ``` Possibly work with creating a DIB (<http://www.herdsoft.com/ti/davincie/imex3j8i.htm>) and oddities like data being stored "upside down" in a bitmap, which you need to take into account or you'll see a mirror image when you open it :-) Now that's just for bitmaps. Say you wanted to do PNG; then you'd need to do similar stuff but decoding the PNG header, which in its simplest form isn't that hard, but if you want full PNG specification support then you are in for a fun ride :-) PNG is different to, say, a bitmap since it uses a chunk-based format where it has "headers" you can locate to find the different data. Examples of some chunks I used while playing with the format were ``` string[] chunks = new string[] {"?PNG", "IHDR","PLTE","IDAT","IEND","tRNS", "cHRM","gAMA","iCCP","sBIT","sRGB","tEXt","zTXt","iTXt", "bKGD","hIST","pHYs","sPLT","tIME"}; ``` You are also going to have to learn about Adler32 checksums for PNG files. So each file format you'd want to handle would add a different set of challenges.
I really wish I could give more complete source code examples in my reply, but it's a complex subject, and to be honest I've not implemented a swap myself, so I wouldn't be able to give too much solid advice on that. The short answer is that the image processing capabilities in the BCL aren't that hot. The medium answer would be to try to find whether someone has written an image library that could help you, and the long answer would be to roll up your sleeves and write the core of your application yourself. Since you know me in real life you know where to find me ;)
For a really comprehensive answer, I would use Reflector to look at the source code of Paint.NET (<http://www.getpaint.net/>); an advanced graphics editing program written in C#. (as pointed out in the comment, Paint.NET used to be open source but is now closed source).
How do I use large bitmaps in .NET?
[ "", "c#", "bitmap", "gdi", "out-of-memory", "" ]
How does one go about finding all subclasses of a given class (or all implementors of a given interface) in Java? As of now, I have a method to do this, but I find it quite inefficient (to say the least). The method is: 1. Get a list of all class names that exist on the class path 2. Load each class and test to see if it is a subclass or implementor of the desired class or interface In Eclipse, there is a nice feature called the Type Hierarchy that manages to show this quite efficiently. How does one do this programmatically?
There is no other way to do it other than what you described. Think about it - how can anyone know what classes extend ClassX without scanning each class on the classpath? Eclipse can only tell you about the super and subclasses in what seems to be an "efficient" amount of time because it already has all of the type data loaded at the point where you press the "Display in Type Hierarchy" button (since it is constantly compiling your classes, knows about everything on the classpath, etc).
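The scan-and-test approach this answer describes can be sketched in a few lines. The example below uses Python, where `inspect` and `issubclass` play the role of classpath scanning and `Class.isAssignableFrom`; the class names are made up, just to show the shape of the algorithm:

```python
import inspect
import sys

class Animal: pass
class Dog(Animal): pass
class Cat(Animal): pass
class Rock: pass

def find_subclasses(base, modules):
    """Scan the given modules for classes that subclass `base`
    (the analogue of scanning every class on the classpath)."""
    found = []
    for mod in modules:
        for _, obj in inspect.getmembers(mod, inspect.isclass):
            if issubclass(obj, base) and obj is not base:
                found.append(obj)
    return found

subs = find_subclasses(Animal, [sys.modules[__name__]])
print(sorted(c.__name__ for c in subs))  # ['Cat', 'Dog']
```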
Scanning for classes is not easy with pure Java. The spring framework offers a class called [ClassPathScanningCandidateComponentProvider](http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/ClassPathScanningCandidateComponentProvider.html) that can do what you need. The following example would find all subclasses of MyClass in the package org.example.package ``` ClassPathScanningCandidateComponentProvider provider = new ClassPathScanningCandidateComponentProvider(false); provider.addIncludeFilter(new AssignableTypeFilter(MyClass.class)); // scan in org.example.package Set<BeanDefinition> components = provider.findCandidateComponents("org/example/package"); for (BeanDefinition component : components) { Class cls = Class.forName(component.getBeanClassName()); // use class cls found } ``` This method has the additional benefit of using a bytecode analyzer to find the candidates which means it will *not* load all classes it scans.
How do you find all subclasses of a given class in Java?
[ "", "java", "class", "interface", "subclass", "" ]
I was reading the GoF book and in the beginning of the prototype section I read this: > This benefit applies primarily to > languages like C++ that don't treat > classes as first class objects. I've never used C++ but I do have a pretty good understanding of OO programming, yet this doesn't really make any sense to me. Can anyone out there elaborate on this? (I have used/use C, Python, Java and SQL, if that helps.)
For a class to be a first class object, the language needs to support doing things like allowing functions to take classes (not instances) as parameters, be able to hold classes in containers, and be able to return classes from functions. For an example of a language with first class classes, consider Java. Any object is an instance of its class. That class is itself an instance of [java.lang.Class](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Class.html).
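Python (which the question lists) is another language where classes are first-class in exactly this sense: a class can be stored in a container, passed to a function, and used directly as a factory. A tiny illustration:

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

# Classes held in a container and passed around like any other value
registry = {"dog": Dog, "cat": Cat}

def make(kind):
    cls = registry[kind]  # a class received as a plain value...
    return cls()          # ...acts as its own factory/prototype

print(make("dog").speak())  # woof
print(type(Dog))            # <class 'type'> -- the counterpart of java.lang.Class
```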
For everybody else, here's the full quote: > "Reduced subclassing. Factory Method > (107) often produces a hierarchy of > Creator classes that parallels the > product class hierarchy. The Prototype > pattern lets you clone a prototype > instead of asking a factory method to > make a new object. Hence you don't > need a Creator class hierarchy at all. > This benefit applies primarily to > languages like C++ that don't treat > classes as first-class objects. > Languages that do, like Smalltalk and > Objective C, derive less benefit, > since you can always use a class > object as a creator. Class objects > already act like prototypes in these > languages." - GoF, page 120. As [Steve puts it](http://www.cpptalk.net/gof-c-doesnt-treat-classes-as-frist-class-objects--vt45396.html), > I found it subtle in so much as one > might have understood it as implying > that /instances/ of classes are not > treated as first class objects in C++. > If the same words used by GoF appeared > in a less formal setting, they may > well have intended /instances/ rather > than classes. The distinction may not > seem subtle to /you/. /I/, however, > did have to give it some thought. > > I do believe the distinction is > important. If I'm not mistaken, there > is no requirement that a compiled C++ > program preserve any artifact by which > the class from which an object is > created could be reconstructed. IOW, > to use Java terminology, there is no > /Class/ object.
Treating classes as first-class objects
[ "", "c++", "oop", "prototype", "design-patterns", "" ]
Came across the following line in the Composite Application Guidelines. I know the **=>** is a lambda but what does the **()** mean? What are some other examples of this? What is it called so I can search for it? ``` this.regionViewRegistry.RegisterViewWithRegion(RegionNames.SelectionRegion , () => this.container.Resolve<EmployeesListPresenter>().View); ```
It's a lambda expression that takes 0 arguments <http://msdn.microsoft.com/en-us/library/bb397687.aspx>
If you look at `x => x + 1`, it takes a parameter x and returns x incremented by one. The compiler will use type inference to deduce that x is probably of type int and will return another int, so you have a lambda that takes a parameter x of type int and returns an integer. ``` () => 3; ``` is the same but doesn't take a parameter; it will return an integer. ``` () => Console.WriteLine("hello"); ``` will result in a void method with no parameters.
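For a cross-language sanity check, the same distinction exists in Python: the parameter list of a lambda can simply be empty.

```python
inc = lambda x: x + 1    # one parameter, like  x => x + 1
three = lambda: 3        # no parameters, like  () => 3
hello = lambda: "hello"  # no parameters, like  () => "hello"

print(inc(2))    # 3
print(three())   # 3
print(hello())   # hello
```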
What does "() =>" mean in C#?
[ "", "c#", "syntax", "lambda", "" ]
I have a set of Python scripts that process photos. What I would like is to be able to create some kind of Flash presentation out of those images. Is there any package or 'framework' that would help to do this?
You should generate a formatted list with the data for your photos: path and whatever else you need in your presentation. Load that data into a SWF, where your presentation happens. That way you let Python do what it does best and Flash do what Flash does best. You might find already-made solutions for Flash galleries / slideshows. <http://airtightinteractive.com/simpleviewer/> is a famous one. You can load your custom XML into it.
I don't know of any Python-specific solutions but there are multiple tools to handle this: You can create a flash file with dummy pictures which you then replace using mtasc, swfmill, SWF Tools or similar. This way means lots of trouble but allows you to create a dynamic flash file. If you don't need dynamic content, though, you're better off creating a video with ffmpeg. It can create videos out of multiple images, so if you're somehow able to render the frames you want in the presentation, you could use ffmpeg to make a video out of it. If you only want charts, use SWF Charts. You could use external languages that have a library for creating flash files. And finally there was another scripting language that could be compiled into several other languages, where SWF was one of the targets, but I can't remember its name right now.
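Since the photos are already being processed from Python, the ffmpeg route mentioned above can be driven with the standard library's `subprocess` module. A minimal sketch, assuming ffmpeg is installed and the images are numbered sequentially; the paths, filename pattern and frame rate are placeholders:

```python
import subprocess

def build_ffmpeg_cmd(pattern, output, fps=1):
    """Build an ffmpeg command line that turns numbered images into a video."""
    return [
        "ffmpeg",
        "-framerate", str(fps),  # input rate: one image per 1/fps seconds
        "-i", pattern,           # e.g. "photos/photo_%03d.jpg"
        "-pix_fmt", "yuv420p",   # widely compatible pixel format
        output,
    ]

def images_to_video(pattern, output, fps=1):
    subprocess.run(build_ffmpeg_cmd(pattern, output, fps), check=True)

print(build_ffmpeg_cmd("photos/photo_%03d.jpg", "slideshow.mp4")[0])  # ffmpeg
```

Only the command construction is exercised here; actually producing the video requires ffmpeg on the PATH, e.g. `images_to_video("photos/photo_%03d.jpg", "slideshow.mp4")`.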
What's a way to create flash animations with Python?
[ "", "python", "flash", "" ]
In a C#, Windows forms app, we have a series of user controls that are associated with menu entry records in a database... ``` ID Menu Title 1 User Management 4 Group Management 6 System Utilities 12 Configuration Management ``` A few user controls... * UserManagement.cs * GroupManager.cs * SysUtil.cs * ConfigurationMan.cs We're currently just switching on the ID, but that seems like a very primitive way to accomplish this and has a lot of hard-coded IDs, which I don't like. What would be the most appropriate way to read the database entries and load the appropriate user control? Thanks!
You could store the class names in the database and instantiate them via reflection. They would need to all implement an interface or all derive from a base class (which I'm assuming you're doing, extending Control) in order to be dealt with without having to handle each individual class differently.
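Here is that idea sketched in Python; in C# the corresponding pieces would be `Type.GetType` plus `Activator.CreateInstance`, but the shape is the same. The menu IDs and classes below are illustrative:

```python
class UserManagement:
    title = "User Management"

class GroupManager:
    title = "Group Management"

# What the database rows would carry: an ID mapped to a stored class name
menu_rows = {1: "UserManagement", 4: "GroupManager"}

def load_control(menu_id):
    cls = globals()[menu_rows[menu_id]]  # reflective lookup by the stored name
    return cls()

print(load_control(1).title)  # User Management
```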
In some ways, the switch (perhaps on an enum rather than just an id) is quite reasonable. You could push the assembly-qualified names into the DB, but what does the DB care about assemblies or classes? And what happens when you refactor? Move to java? Need to support multiple UI engines? Nah, I'd probably just switch myself - in a factory method to encapsulate it... ``` public static Control CreateControl(ControlType controlType) { switch(controlType) { case ControlType.Foo: return new blah... case ControlType.Bar: ... } throw new ArgumentOutOfRangeException(); } ```
Dynamically loading user controls from database entries in C#
[ "", "c#", "winforms", "dynamic", "user-controls", "" ]
I would like to provide a default implementation for an event generated by an instance of “Authenticator” (hereafter “objAuthenticator”). Is it an acceptable practice to provide such a method that registers for handling objAuthenticator’s own event to provide a default implementation? And also, if it is an acceptable practice, I have two follow-up questions. I have two overloaded methods for “SendEmailOnAuthenticationFailed”. * Which one should I expose to the outside world? * Which one would cause less coupling between classes? ``` public class Authenticator { public event EventHandler AuthenticationFailed = delegate { }; protected virtual void OnAuthenticationFailed() { var handler = AuthenticationFailed; handler(this, EventArgs.Empty); } public void IsAuthenticated() { // Some logic... // ... // Woops authentication failed OnAuthenticationFailed(); } // Is this a better option? public void SendEmailOnAuthenticationFailed() { SendEmailOnAuthenticationFailed(EmailSender); } // Or is this? public void SendEmailOnAuthenticationFailed(EventHandler emailSender) { AuthenticationFailed += emailSender; } private void EmailSender(object sender, EventArgs e) { Console.WriteLine("Send email: EmailSender"); } } ``` ***[UPDATE]***: Answer to Marc Gravell's question > I'm very confused about what your code is trying to do So far, "SendEmailOnAuthenticationFailed" is not exposed to the outside world. What I am trying to do is that (I just added IsAuthenticated) within "IsAuthenticated", when an authentication fails, I would like to send an email without having to write the same event handler everywhere. ***[UPDATE2]***: I have realized that there is no need to expose "AuthenticationFailed". I will keep the class as closed as possible without exposing the event. Why can't we mark more than one answer as "answer"? ... ***[UPDATE3]***: Final output: Here is how I decided to implement it.
``` public class Authenticator { private readonly IEmailResponder _EmailResponder; public Authenticator(IEmailResponder emailResponder) { _EmailResponder = emailResponder; } public void IsAuthenticated() { // Some logic... // ... // Woops authentication failed SendEmailForAuthenticationFailure(); } private void SendEmailForAuthenticationFailure() { _EmailResponder.SendEmail(...); } } ```
The answer to the first question is that it is acceptable. For your follow-ups, the answer is *it depends*.

The first one internalizes the email process. If you choose to do this, make sure that you're injecting the email logic. If you choose the latter, it allows you to inject the behavior. If you do the second one, however, I would suggest that you change the name, since it doesn't matter if it's email, or MQ, or tiddlywinks. In fact, the second method isn't really needed, since they can subscribe directly to the event.

If you're concerned about decoupling, you need to think about the single responsibility of the Authenticator class. It should be to authenticate, not to send emails. Try this:

```
public class Authenticator
{
    public event EventHandler AuthenticationFailed = delegate { };

    protected virtual void OnAuthenticationFailed()
    {
        AuthenticationFailed(this, EventArgs.Empty);
    }
}
```

Now, consume the authenticator:

```
AuthenticationEmailResponder responder =
    new AuthenticationEmailResponder(emailAddress, emailServer);
objAuthenticator.AuthenticationFailed += responder.SendFailureMessage;
```
It is unusual to listen to your own events; such logic can typically go into the OnAuthenticationFailed method.

That said - I'm **very** confused about what your code is trying to do, which makes it a little hard to answer...

I doubt it applies, but note additionally that there are some subtle [synchronisation issues](http://marcgravell.blogspot.com/2009/02/fun-with-field-like-events.html) you might need to think about (when talking to your own events) if this code is highly threaded (which most isn't).
Registering to "this" object's own event handler within "this" method
[ "", "c#", ".net", "events", "" ]
I'm trying to find the best way to remove nonnumerical data from a varchar in SQL, e.g.

```
'(082) 000-0000' to '0820000000' or '+2782 000 0000' to '0820000000'
```

The difficulty is I'm not always sure what number formats are coming in, as shown above, so essentially I'd like everything that is not a number removed.

**Update:** From what you guys have said, this is a little spike done:

```
declare @Num varchar(20)
set @Num = '  + (82) 468 6152  '

--strip nonnumerical data out of @num
print @Num

set @Num = replace(@Num, ' ', '')
set @Num = replace(@Num, '+', '')
set @Num = replace(@Num, '-', '')
set @Num = replace(@Num, '(', '')
set @Num = replace(@Num, ')', '')

print @Num
```

Couldn't get the replace [^0-9] expression right though.
If you're using SQL Server 2005 or newer then your best option is to create a [user-defined CLR function](http://msdn.microsoft.com/en-us/library/ms189876.aspx) and use a regular expression to remove all non-numeric characters.

If you don't want to use a CLR function then you could create a standard user-defined function. This will do the job although it won't be as efficient:

```
CREATE FUNCTION dbo.RemoveNonNumerics(@in VARCHAR(255))
RETURNS VARCHAR(255)
AS
BEGIN
    DECLARE @out VARCHAR(255)

    IF (@in IS NOT NULL)
    BEGIN
        SET @out = ''

        WHILE (@in <> '')
        BEGIN
            IF (@in LIKE '[0-9]%')
                SET @out = @out + SUBSTRING(@in, 1, 1)

            SET @in = SUBSTRING(@in, 2, LEN(@in) - 1)
        END
    END

    RETURN(@out)
END
```

And then select from your table like so:

```
SELECT dbo.RemoveNonNumerics(your_column) AS your_tidy_column
FROM your_table
```
Have a look at [this post](http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=27205) (it's the 8th post down - the first LONG one) which details how to use regular expressions in SQL Server. It's not the fastest (that would be do it before you get to SQL) but it provides a decent way to do it.
Removing nonnumerical data out of a number + SQL
[ "", "sql", "sql-server", "t-sql", "numbers", "" ]
How do I determine what screen my application is running on?
This should get you started. Get a Button and a listbox on a Form and put this in the Button\_Click:

```
listBox1.Items.Clear();
foreach (var screen in Screen.AllScreens)
{
    listBox1.Items.Add(screen);
}
listBox1.SelectedItem = Screen.FromControl(this);
```

The answer is in the last line, remember that a Form is a Control too.
The `System.Windows.Forms.Screen` class provides this functionality. For example: `Screen s = Screen.FromPoint(p);` where `p` is a `Point` somewhere on your application (in screen coordinates).
How do I find what screen the application is running on in C#
[ "", "c#", "winforms", "screen", "" ]
I have a `ListView` like this:

```
<asp:ListView ID="ListView1" runat="server">
    <EmptyDataTemplate>
        <asp:Literal ID="Literal1" runat="server" text="some text"/>
    </EmptyDataTemplate>
    ...
</asp:ListView>
```

In `Page_Load()` I have the following:

```
Literal x = (Literal)ListView1.FindControl("Literal1");
x.Text = "other text";
```

but `x` returns `null`. I’d like to change the text of the `Literal` control, but I have no idea how to do it.
I believe that unless you call the `DataBind` method of your `ListView` somewhere in code-behind, the `ListView` will never try to data bind. Then nothing will render and even the `Literal` control won’t be created.

In your `Page_Load` event try something like:

```
protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        //ListView1.DataSource = ...
        ListView1.DataBind();

        // If you know it's empty, the empty data template is the first
        // parent control, aka Controls[0]
        Control c = ListView1.Controls[0].FindControl("Literal1");
        if (c != null)
        {
            // This will at least tell you whether the control exists or not
        }
    }
}
```
You can use the following:

```
protected void ListView1_ItemDataBound(object sender, ListViewItemEventArgs e)
{
    if (e.Item.ItemType == ListViewItemType.EmptyItem)
    {
        Control c = e.Item.FindControl("Literal1");
        if (c != null)
        {
            // This will at least tell you whether the control exists or not
        }
    }
}
```
Find control in ListView EmptyDataTemplate
[ "", "c#", "asp.net", ".net", "listview", "findcontrol", "" ]
I noticed [Sun](http://en.wikipedia.org/wiki/Sun_Microsystems) is providing a 64-bit version of Java. Does it perform better than the 32-bit version?
Define your workload and what "perform" means to you.

This is sort of a running annoyance to me, as a performance geek of long standing. Whether a particular change "performs better" is dependent, first and foremost, on the workload, i.e., what you're asking the program to do.

64-bit Java will often perform better on things with heavy computation loads. Java programs, classically, have heavy I/O loads and heavy network loads; 64-bit vs 32-bit may not matter, but operating systems usually do.
Almost always 64 bits will be slower. To quote Sun from the [HotSpot FAQ](http://java.sun.com/docs/hotspot/HotSpotFAQ.html#64bit_performance):

> The performance difference comparing an application running on a 64-bit platform versus a 32-bit platform on SPARC is on the order of 10-20% degradation when you move to a 64-bit VM. On AMD64 and EM64T platforms this difference ranges from 0-15% depending on the amount of pointer accessing your application performs.

There are more details at the link.
Does Java 64 bit perform better than the 32-bit version?
[ "", "java", "64-bit", "32-bit", "" ]
I have to process a large byte array that is passed to my function. I need to copy the content from this incoming byte array in smaller "chunks" to an outbound byte array. For every "chunk" of data created in the outbound array, I need to call a web service. Upon return, I need to resume looping through the incoming byte array, continuing to pass a whole or partial chunk of data until the complete incoming array is processed (i.e. sent to the web service in chunks).

I am very new to C# and I am struggling with a loop that works. I know how to call the web service to handle a "chunk" but I can't get the looping correct. Here is a sketch of the pathetic mess I currently have:

```
int chunkSize = 10000;
byte[] outboundBuffer = new byte[chunkSize];

while (BytesRead > 0)
{
    long i = 0;
    foreach (byte x in incomingArray)
    {
        BytesRead += 1;
        outboundBuffer[i] = incomingArray[i];
        i++;
    }

    uploadObject.Size = BytesRead;
    uploadObject.MTOMPayload = outboundBuffer;

    // call web service here and pass the uploadObject
    // get next "chunk" until incomingArray is fully processed
}
```

I know this is a mess and won't work; could someone sketch a proper loop to get this done? Thanks very much.
You might want to look into [Array.Copy](http://msdn.microsoft.com/en-us/library/system.array.copy.aspx) or [Buffer.BlockCopy](http://msdn.microsoft.com/en-us/library/system.buffer.blockcopy.aspx); this will clean things up a bit, since you won't have to copy all of the bytes individually:

```
int incomingOffset = 0;

while (incomingOffset < incomingArray.Length)
{
    int length = Math.Min(outboundBuffer.Length,
                          incomingArray.Length - incomingOffset);

    // Changed from Array.Copy as per Marc's suggestion
    Buffer.BlockCopy(incomingArray, incomingOffset,
                     outboundBuffer, 0, length);
    incomingOffset += length;

    // Transmit outbound buffer
}
```
You probably want `Buffer.BlockCopy` (the rawest of the copies; ideally suited to `byte[]`).

Of course, the other option is to use a `MemoryStream` in place of the outbound array, and just `Write` to it each time, then call `ToArray()` or `GetBuffer()` on the `MemoryStream` (with `GetBuffer()`, you need to watch the length; with `ToArray()` it is trimmed for you automatically):

```
MemoryStream ms = new MemoryStream();
byte[] buffer = new byte[BUFFER_SIZE];
int bytesReceived;

while ((bytesReceived = GetNextChunk(buffer, 0, BUFFER_SIZE)) > 0)
{
    ms.Write(buffer, 0, bytesReceived);
}

byte[] final = ms.ToArray();
```
Need loop to copy chunks from byte array
[ "", "c#", "arrays", "" ]
> The destructor should only release unmanaged resources that your object holds on to, and it should not reference other objects. If you have only managed references you do not need to (and should not) implement a destructor. You want this only for handling unmanaged resources. **Because there is some cost to having a destructor**, you ought to implement this only on methods that consume valuable, unmanaged resources.
>
> -- [Top Ten Traps in C# for C++ Programmers](http://www.ondotnet.com/pub/a/dotnet/2002/02/11/csharp_traps.html)

The article doesn't go into this in more depth, but what sorts of costs are involved with using a destructor in C#?

**Note:** I know about the GC and the fact that the destructor isn't called at reliable times; that all aside, is there anything else?
Any object that has a finalizer (I prefer that term over destructor, to emphasize the difference from C++ destructors) is added to the finalizer queue. This is a list of references to objects that have a finalizer that has to be called before they are removed.

When the object is up for garbage collection, the GC will find that it's in the finalizer queue and move the reference to the freachable (f-reachable) queue. This is the list that the finalizer background thread goes through to call the finalizer method of each object in turn. Once the finalizer of the object has been called, the object is no longer in the finalizer queue, so it's just a regular managed object that the GC can remove.

This all means that if an object has a finalizer, it will survive at least one garbage collection before it can be removed. This usually means that the object will be moved to the next heap generation, which involves actually moving the data in memory from one heap to another.
[The most extensive discussion I've seen on how this all works was done by Joe Duffy](http://www.bluebytesoftware.com/blog/PermaLink.aspx?guid=88e62cdf-5919-4ac7-bc33-20c06ae539ae). It has more detail than you might imagine. Following that up, I put together [a practical approach](http://www.atalasoft.com/cs/blogs/stevehawley/archive/2006/09/21/10887.aspx) to doing this on a day to day - less about the cost but more about the implementation.
Costs involved with C# destructors (aka: finalizers)?
[ "", "c#", "idisposable", "destructor", "finalizer", "" ]
When an exception is thrown in an ASP.NET web page, an error message is displayed with the complete stack trace. Example below:

> Stack Trace:
> IndexOutOfRangeException: Index was outside the bounds of the array.
>
> **MyNameSpace.SPAPP.ViewDetailsCodeBehind.LoadView() +5112**
> MyNameSpace.SPAPP.ViewDetailsCodeBehind.Page\_Load(Object sender, EventArgs e) +67
> System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +13
> System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +43
> System.Web.UI.Control.OnLoad(EventArgs e) +98
> ... ...

The problem is that the number displayed does not correspond to the line in my code that originated the exception. In the example above, the stack shows 5112, yet my code-behind .cs file only has 250 lines!

The aspx page is stored in a SharePoint site and the assembly with the code-behind has been deployed to the GAC. Also, I've compiled in Debug mode.

**Given the settings above, how can I find out what line in my code caused the Exception?**

*Clarification as pointed out by [strelokstrelok](https://stackoverflow.com/users/2788/strelokstrelok):*

> In **Release mode** the number in front of the exception is NOT the line of code. Instead it's an offset to the native compiled code, which doesn't have any meaning to humans. More about this here: <http://odetocode.com/Blogs/scott/archive/2005/01/24/963.aspx>
>
> In **debug mode** the PDB file will automatically map the native code offset to your .cs line in code and the number displayed WILL be the corresponding line in code.
Those numbers are NOT line numbers. In Release mode the stack trace contains the offsets into the native compiled code instead of line numbers. You can read some more about it here: <http://odetocode.com/Blogs/scott/archive/2005/01/24/963.aspx>

The only way to get line numbers in a stack trace is if you built your code in debug mode with the PDB files available.
Your code behind file is not the complete class, it's only a portion that is used when the class as a whole is compiled by ASP.NET. To find what is truly on that line, take a look at the compiled class / assembly using a tool like Reflector.
Exception error message with incorrect line number
[ "", "c#", "asp.net", "sharepoint", ".net-2.0", "gac", "" ]
**Duplicate:** [How can I prevent database being written to again when the browser does a reload/back?](https://stackoverflow.com/questions/305684/how-can-i-prevent-database-being-written-to-again-when-the-browser-does-a-reload)

---

Is there a nice elegant way to stop an .aspx page from resubmitting form data if the user requests a refresh from their browser? Ideally without losing the viewstate.

**EDIT:** The most elegant solution for me turned out to be wrapping the relevant controls in an asp.net ajax UpdatePanel. Ta everyone.
There is no elegant way, but there are options. As stated in one of the other answers you can do a HTTP redirect response. Another solution is submitting through means such as Ajax, or through an iframe.
Generate and insert a unique identifier into the page that's also stored on the server. Store that when the form is submitted and don't let that value get passed in multiple times.

**UPDATE**: This is the only "proper" way to do this. When I say this, I mean storing something on the server side. Anything based on client behaviour is potentially buggy. Those implementations don't concern themselves with potential browser bugs, incompatibilities, javascript disabled, connection timeouts etc. Something on the server side needs to know that this particular action has already been performed and stop it on the server side.

I stress this, because often this needs to be done to stop a client refreshing and making multiple orders (and potentially multiple bills). This also allows the client to refresh gracefully if the action hasn't actually been received and acted upon by the server (e.g. in the case of a timeout).
Preventing webform resubmission on browser refresh without losing viewstate
[ "", "c#", "asp.net", "browser", "webforms", "" ]
I switched from NAnt to using Python to write build automation scripts. I am curious whether there are any build frameworks worth using that are similar to Make, Ant, and NAnt but are Python-based instead. For example, Ruby has Rake. What about Python?
Try [SCons](http://www.scons.org/)

Or are you looking for something just to build python projects?
The following look good, but I haven't used them (yet):

* [Paver](http://www.blueskyonmars.com/projects/paver/)
* [waf](http://code.google.com/p/waf/)
* [doIt](http://python-doit.sourceforge.net/)

Paver looks especially promising.
Are there any good build frameworks written in Python?
[ "", "python", "build-process", "build-automation", "" ]
I want to store some fragments of an XML file in separate files. It seems there is no way to do it in a straightforward way: reading the chunks fails. I always get the exception "javax.xml.transform.TransformerException: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed."

It only works when there is only ONE 'root' element (which is not the root element in the normal sense). I understand that XML with multiple 'roots' is not well-formed, but it should be treated as a chunk.

Please, before suggesting some workaround solutions, tell me: are XML chunks valid at all? And IF so, can they be read using the standard JDK 6 API?

Test code:

```
String testChunk1 = "<e1>text</e1>";
String testChunk2 = "<e1>text</e1><e2>text</e2>";

// the following doesn't work with 'testChunk2'
StringReader sr = new StringReader(testChunk1);
StringWriter sw = new StringWriter();
TransformerFactory.newInstance().newTransformer().transform(
        new StreamSource(sr), new StreamResult(sw));
System.out.println(sw.toString());
```
The W3C have been working towards defining a standard for [XML fragment interchange](http://www.w3.org/TR/xml-fragment). I'm mentioning it not because it's a solution to your problem, but it's definitely relevant to see that there's discussion of how to handle such things.

In the .NET world you can work with XML fragments and, for example, [validate them against a schema](http://support.microsoft.com/kb/318504). This suggests that it is worth searching for similar support in the Java libraries.

If you want to transform such fragments with XSLT, a very common approach is to put a wrapper element around them, which can then act as the root of the DOM.
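A minimal sketch of that wrapper-element approach (the class name and the `<wrapper>` element are made up for illustration — they are not part of any standard API):

```java
import java.io.StringReader;

import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

// Hypothetical helper: surrounds a multi-root XML chunk with a synthetic
// root element so that a standard, well-formedness-checking parser
// accepts it without complaint.
public class FragmentWrapper {

    public static Document parseFragment(String fragment) throws Exception {
        String wrapped = "<wrapper>" + fragment + "</wrapper>";
        return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(wrapped)));
    }
}
```

The fragment's top-level elements then appear as children of the synthetic `<wrapper>` root, which an XSLT stylesheet can treat as the document root.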
While I suppose there must be some way, perhaps kludgy, to do what you want, I am not aware of any way to do it. The standard XML parsers expect well-formed XML, as you're discovering.

If you want to store your XML as a number of separate fragments in different files, then probably the best way to do this is to create your own Reader or InputStream that actually (behind the scenes) reads all of the fragments in order, and then provide that wrapped Reader or InputStream to the transformer. That way, the XML parser sees a single XML document but you can store it however you want.

If you do something like this, the fragments (except for the very first) cannot start with the standard XML header:

```
<?xml version="1.0" encoding="UTF-8" ?>
```
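One way to sketch that custom-stream idea with plain JDK classes (the helper class and the `<root>` element name are invented for illustration) is `SequenceInputStream`, which splices an opening root tag, the stored fragment streams, and the closing tag into a single stream for the parser:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.SequenceInputStream;
import java.util.List;
import java.util.Vector;

// Hypothetical helper: presents several stored XML fragments to a parser
// as one well-formed document by adding a synthetic root element around
// them. Assumes the fragments share the platform default charset.
public class FragmentConcatenator {

    public static InputStream concatenate(List<String> fragments) {
        Vector<InputStream> parts = new Vector<InputStream>();
        parts.add(new ByteArrayInputStream("<root>".getBytes()));
        for (String fragment : fragments) {
            parts.add(new ByteArrayInputStream(fragment.getBytes()));
        }
        parts.add(new ByteArrayInputStream("</root>".getBytes()));
        // SequenceInputStream reads each part to exhaustion, in order.
        return new SequenceInputStream(parts.elements());
    }
}
```

In a real application each `ByteArrayInputStream` for a fragment would be replaced by a `FileInputStream` over the stored file, which is exactly the "parser sees one document" trick described above.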
Are XML chunks valid?
[ "", "java", "xml", "validation", "" ]
I'm writing some little applications that parse the source of a few web pages, extract some data, and save it into another format. Specifically, some of my banks don't provide downloads of transactions/statements but they do provide access to those statements on their websites. I've done one fine, but another (HSBC UK) is proving a pain in the arse, since its source is not valid XHTML. For example there is whitespace before the `<?xml?>` tag, and there are places where `==` is used instead of `=` between an attribute name and its value (e.g. `<li class=="lastItem">`). Of course, when I pass this data into my `XmlDocument`, it throws a wobbly (more accurately an exception). My question is: is it possible to relax the requirements for XML parsing in C#? I know it's far better to fix these problems at source - that's absolutely my attitude too - but there's roughly zero chance HSBC would change their website which already works in most browsers just for little old me.
Take a look at the [HTML agility pack](http://www.codeplex.com/htmlagilitypack). It allows you to extract elements of a non-XHTML-compliant web page through XPath, as if it were a well-formed XHTML document. And for the love of Kleene, don't try to regexp a HTML page with any kind of complexities!
I don't believe you can relax the parsing, but you could run it through something like [HTML Tidy](http://tidy.sourceforge.net/) first to let *that* deal with the mess.
Parsing an XML/XHTML document but ignoring errors in C#
[ "", "c#", "xml", "" ]
More specifically, what's the most elegant way to cast the Graphics object in a paint() call up to a Graphics2D object?

```
public void paint(Graphics g) {
    // How do I convert/cast/etc the g variable to a Graphics2D object?
}
```

Or am I missing the point here? Is there a better way to handle this in general?
```
Graphics2D g2 = (Graphics2D)g;
```

There is no need to be fancy about it. You will always receive a G2D in the paint method. I even like to call the `Graphics` parameter `g2` and then cast to a `Graphics2D` named `g` to make using it simpler.
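A small headless sketch of that pattern (the `describe` helper and the `BufferedImage` stand-in for a component's surface are just for illustration; inside `paint` you would cast the parameter directly):

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Illustrative helper: shows the plain cast from Graphics to Graphics2D,
// guarded by instanceof. On modern JREs the object handed to paint() is
// in fact a Graphics2D, so the unguarded cast is safe in practice.
public class CastSketch {

    public static String describe(Graphics g) {
        if (g instanceof Graphics2D) {
            Graphics2D g2 = (Graphics2D) g;
            g2.drawLine(0, 0, 10, 10); // Graphics2D-only features now available
            return "Graphics2D";
        }
        return "plain Graphics";
    }

    public static Graphics sampleSurface() {
        // A BufferedImage stands in for a component's drawing surface here,
        // which keeps the example runnable without a display.
        return new BufferedImage(32, 32, BufferedImage.TYPE_INT_RGB).getGraphics();
    }
}
```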
```
Graphics2D g2 = Graphics2D.class.cast(g);
```
What is the most elegant way to cast a variable you just received as an argument?
[ "", "java", "graphics", "" ]
I'm trying to perform the following cast:

```
private void MyMethod(object myObject)
{
    if (myObject is IEnumerable)
    {
        List<object> collection = (List<object>)myObject;
        ... do something
    }
    else
    {
        ... do something
    }
}
```

But I always end up with the following exception:

Unable to cast object of type 'System.Collections.Generic.List`1[MySpecificType]' to type 'System.Collections.Generic.List`1[System.Object]'

I really need this to work because this method needs to be very generic to receive single objects and collections, both of unspecified types. Is this possible, or is there another way of accomplishing this?

Thank you.
C# 4 will have covariant and contravariant template parameters, but until then you have to do something nongeneric like

```
IList collection = (IList)myObject;
```
You can't cast an IEnumerable<T> to a List<T>. But you can accomplish this using LINQ:

```
var result = ((IEnumerable)myObject).Cast<object>().ToList();
```
Cast received object to a List<object> or IEnumerable<object>
[ "", "c#", ".net", "" ]
Below is a sample implementation of overriding Object.Equals() for an entity base class from which all other entities in an application derive. All entity classes have the property Id, which is a nullable int. (It's the primary key of whatever table the entity class corresponds to.)

```
public override bool Equals(object obj)
{
    if (obj == null || GetType() != obj.GetType())
        return false;

    if (base.Equals(obj))
        return true;

    return Id.HasValue && ((EntityBase) obj).Id.HasValue &&
           Id.Value == ((EntityBase) obj).Id.Value;
}
```

Given this implementation of Equals(), how do you correctly implement GetHashCode()?
If you're deriving from something that already overrides `GetHashCode` I'd implement it as:

```
public override int GetHashCode()
{
    unchecked
    {
        int hash = 37;
        hash = hash * 23 + base.GetHashCode();
        hash = hash * 23 + Id.GetHashCode();
        return hash;
    }
}
```

A null value of Id will return 0 for Id.GetHashCode().

If your class just derives from Object, I'd just return `Id.GetHashCode()` - you do *not* want to include the `object.GetHashCode` implementation in your hash code, as that basically ends up being object identity.

Note that your equality definition *won't* return `true` if neither entity has an Id, but the same hashcode will be returned from both objects. You may wish to consider changing your Equals implementation.
What Jon Skeet answered is a good solution; however, you might want to add an unchecked code block to allow integer overflow:

```
unchecked
{
    int hash = ...;
    return hash;
}
```

<https://msdn.microsoft.com/en-us/library/khy08726(v=vs.140).aspx>

> If neither checked nor unchecked is specified, the default context depends on external factors such as compiler options.

I'd also like to add, again, that using `base.GetHashCode()` on POCOs will call the default `object.GetHashCode`. That's definitely not what you want...
What is the correct implementation for GetHashCode() for entity classes?
[ "", "c#", ".net", "orm", "" ]
I'm learning JAX-RS (aka, JSR-311) using Jersey. I've successfully created a Root Resource and am playing around with parameters:

```
@Path("/hello")
public class HelloWorldResource {

    @GET
    @Produces("text/html")
    public String get(
            @QueryParam("name") String name,
            @QueryParam("birthDate") Date birthDate) {

        // Return a greeting with the name and age
    }
}
```

This works great, and handles any format in the current locale which is understood by the Date(String) constructor (like YYYY/mm/dd and mm/dd/YYYY). But if I supply a value which is invalid or not understood, I get a 404 response.

For example:

```
GET /hello?name=Mark&birthDate=X

404 Not Found
```

How can I customize this behavior? Maybe a different response code (probably "400 Bad Request")? What about logging an error? Maybe add a description of the problem ("bad date format") in a custom header to aid troubleshooting? Or return a whole Error response with details, along with a 5xx status code?
There are several approaches to customize the error handling behavior with JAX-RS. Here are three of the easier ways.

The first approach is to create an Exception class that extends WebApplicationException.

Example:

```
public class NotAuthorizedException extends WebApplicationException {
    public NotAuthorizedException(String message) {
        super(Response.status(Response.Status.UNAUTHORIZED)
                .entity(message).type(MediaType.TEXT_PLAIN).build());
    }
}
```

And to throw this newly created Exception you simply:

```
@Path("accounts/{accountId}/")
public Item getItem(@PathParam("accountId") String accountId) {
    // An unauthorized user tries to enter
    throw new NotAuthorizedException("You Don't Have Permission");
}
```

Notice, you don't need to declare the exception in a throws clause because WebApplicationException is a runtime Exception. This will return a 401 response to the client.

The second and easier approach is to simply construct an instance of the `WebApplicationException` directly in your code. This approach works as long as you don't have to implement your own application Exceptions.

Example:

```
@Path("accounts/{accountId}/")
public Item getItem(@PathParam("accountId") String accountId) {
    // An unauthorized user tries to enter
    throw new WebApplicationException(Response.Status.UNAUTHORIZED);
}
```

This code too returns a 401 to the client.

Of course, this is just a simple example. You can make the Exception much more complex if necessary, and you can generate whatever HTTP response code you need to.

One other approach is to wrap an existing Exception, perhaps an `ObjectNotFoundException`, with a small wrapper class that implements the `ExceptionMapper` interface annotated with a `@Provider` annotation. This tells the JAX-RS runtime that if the wrapped Exception is raised, it should return the response code defined in the `ExceptionMapper`.
```
@Provider
public class BadURIExceptionMapper implements ExceptionMapper<NotFoundException> {

    public Response toResponse(NotFoundException exception) {
        return Response.status(Response.Status.NOT_FOUND)
                .entity(new ErrorResponse(exception.getClass().toString(),
                                          exception.getMessage()))
                .build();
    }
}
```

Create the above class. This will handle 404 (NotFoundException), and here in the toResponse method you can give your custom response. Similarly there are ParamException etc. which you would need to map to provide customized responses.
JAX-RS / Jersey how to customize error handling?
[ "", "java", "rest", "error-handling", "jersey", "jax-rs", "" ]
How would I create a multi-value Dictionary in C#? E.g. `Dictionary<T,T,T>` where the first T is the key and the other two are values, so this would be possible: `Dictionary<int,object,double>`

Thanks
Just create a `Pair<TFirst, TSecond>` type and use that as your value.

I have an example of one in my [C# in Depth source code](http://csharpindepth.com/Downloads.aspx). Reproduced here for simplicity:

```
using System;
using System.Collections.Generic;

public sealed class Pair<TFirst, TSecond>
    : IEquatable<Pair<TFirst, TSecond>>
{
    private readonly TFirst first;
    private readonly TSecond second;

    public Pair(TFirst first, TSecond second)
    {
        this.first = first;
        this.second = second;
    }

    public TFirst First
    {
        get { return first; }
    }

    public TSecond Second
    {
        get { return second; }
    }

    public bool Equals(Pair<TFirst, TSecond> other)
    {
        if (other == null)
        {
            return false;
        }
        return EqualityComparer<TFirst>.Default.Equals(this.First, other.First) &&
               EqualityComparer<TSecond>.Default.Equals(this.Second, other.Second);
    }

    public override bool Equals(object o)
    {
        return Equals(o as Pair<TFirst, TSecond>);
    }

    public override int GetHashCode()
    {
        return EqualityComparer<TFirst>.Default.GetHashCode(first) * 37 +
               EqualityComparer<TSecond>.Default.GetHashCode(second);
    }
}
```
If you are trying to group values together, this may be a great opportunity to create a simple struct or class and use that as the value in a dictionary.

```
public struct MyValue
{
    public object Value1;
    public double Value2;
}
```

Then you could have your dictionary:

```
var dict = new Dictionary<int, MyValue>();
```

You could even go a step further and implement your own dictionary class that will handle any special operations that you would need. For example, if you wanted to have an Add method that accepted an int, object, and double:

```
public class MyDictionary : Dictionary<int, MyValue>
{
    public void Add(int key, object value1, double value2)
    {
        MyValue val;
        val.Value1 = value1;
        val.Value2 = value2;
        this.Add(key, val);
    }
}
```

Then you could simply instantiate and add to the dictionary like so, and you wouldn't have to worry about creating 'MyValue' structs:

```
var dict = new MyDictionary();
dict.Add(1, new Object(), 2.22);
```
Multi value Dictionary
[ "", "c#", "" ]
Table layout:

```
CREATE TABLE t_order (id INT, custId INT, order DATE)
```

I'm looking for a SQL command to select a maximum of one row per customer (the customer who owns the order is identified by a field named custId). I want to select *ONE* of the customer's orders (doesn't matter which one, say sorted by id) if there is no order date given for any of the rows. I want to retrieve an empty resultset for the customerId if there is already a record with a given order date.

Here is an example. Per customer there should be one order at most (one without a date given). Orders that already have a date value should not appear at all.

```
+---------------------------------------------------------+
| id | custId | date                                      |
+---------------------------------------------------------+
|  1     10     NULL                                      |
|  2     11     2008-11-11                                |
|  3     12     2008-10-23                                |
|  4     11     NULL                                      |
|  5     13     NULL                                      |
|  6     13     NULL                                      |
+---------------------------------------------------------+
                            |
                            |    Result
                           \ | /
                            \ /
+---------------------------------------------------------+
| id | custId | date                                      |
+---------------------------------------------------------+
|  1     10     NULL                                      |
|                                                         |
|                                                         |
|                                                         |
|  5     13     NULL                                      |
|                                                         |
+---------------------------------------------------------+

powered by JavE
```

Edit: I've chosen glavić's answer as the correct one, because it provides the correct result with slightly modified data:

```
+---------------------------------------------------------+
| id | custId | date                                      |
+---------------------------------------------------------+
|  1     10     NULL                                      |
|  2     11     2008-11-11                                |
|  3     12     2008-10-23                                |
|  4     11     NULL                                      |
|  5     13     NULL                                      |
|  6     13     NULL                                      |
|  7     11     NULL                                      |
+---------------------------------------------------------+
```

Sfossen's answer will not work when customers appear more than twice, because of its where clause constraint a.id != b.id.
Quassnoi's answer does not work for me, as I run server version 4.0.24 which yields the following error: [alt text http://img25.imageshack.us/img25/8186/picture1vyj.png](http://img25.imageshack.us/img25/8186/picture1vyj.png)
Try this: ``` SELECT to1.* FROM t_order AS to1 WHERE to1.date IS NULL AND to1.custId NOT IN ( SELECT to2.custId FROM t_order AS to2 WHERE to2.date IS NOT NULL GROUP BY to2.custId ) GROUP BY to1.custId ``` For MySQL 4: ``` SELECT to1.* FROM t_order AS to1 LEFT JOIN t_order AS to2 ON to2.custId = to1.custId AND to2.date IS NOT NULL WHERE to1.date IS NULL AND to2.id IS NULL GROUP BY to1.custId ```
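To sanity-check the first query's logic, here is a runnable sketch in Python with the standard sqlite3 module (the column is renamed to `orderDate` here, since `order` is a reserved word, and the sample rows are the ones from the question):

```python
import sqlite3

# In-memory table mirroring the example data from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_order (id INT, custId INT, orderDate DATE)")
conn.executemany(
    "INSERT INTO t_order VALUES (?, ?, ?)",
    [(1, 10, None), (2, 11, "2008-11-11"), (3, 12, "2008-10-23"),
     (4, 11, None), (5, 13, None), (6, 13, None), (7, 11, None)],
)

# One row per customer who has *only* date-less orders.
result = conn.execute("""
    SELECT to1.*
    FROM t_order AS to1
    WHERE to1.orderDate IS NULL
      AND to1.custId NOT IN (
          SELECT to2.custId
          FROM t_order AS to2
          WHERE to2.orderDate IS NOT NULL
      )
    GROUP BY to1.custId
""").fetchall()

print(result)  # one row each for customers 10 and 13; 11 and 12 excluded
```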
For a specific customer it's: ``` SELECT * FROM t_order WHERE date IS NULL AND custId=? LIMIT 1 ``` For all customers its: ``` SELECT a.* FROM t_order a LEFT JOIN t_order b ON a.custId=b.custID and a.id != b.id WHERE a.date IS NULL AND b.date IS NULL GROUP BY custId; ```
Mysql4: SQL for selecting one or zero record
[ "", "sql", "mysql", "database", "" ]
should be simple enough but it's causing me a couple of issues. I have a data set similar to the following: ``` User UserID Name Age UserPropertyValues UserID PropertyCodeValueID PropertyCodes PropertyCodeID PropertyCodeName PropertyCodeValues PropertyCodeValueID PropertyCodeID PropertValue ``` Now let's assume the tables contain the following data: ``` 1 John 25 2 Sarah 34 1 2 1 3 2 1 2 3 1 FavColour 2 CarMake 3 PhoneType 1 1 Blue 2 1 Yellow 3 2 Ford 4 3 Mobile 5 3 Landline ``` Now from this I'm looking to create a view to return the User details, as well as the property values for Property code 1 and 2 like so: ``` John 25 Yellow Ford Sarah 34 Blue Ford ``` The queries I have tried so far tend to return repeating rows of data : ``` John 25 Yellow John 25 Ford Sarah 34 Blue Sarah 34 Ford ``` Any help is appreciated, thank you all in advance.
Input data: ``` DECLARE @User TABLE (UserID INT, Name VARCHAR(10), Age INT) INSERT INTO @User SELECT 1, 'John', 25 UNION SELECT 2, 'Sarah', 34 DECLARE @UserPropertyValues TABLE(UserID INT, PropertyCodeValueID INT) INSERT INTO @UserPropertyValues SELECT 1, 2 UNION SELECT 1, 3 UNION SELECT 2, 1 UNION SELECT 2, 3 DECLARE @PropertyCodes TABLE (PropertyCodeID INT, PropertyCodeName VARCHAR(10)) INSERT INTO @PropertyCodes SELECT 1, 'FavColour' UNION SELECT 2, 'CarMake' UNION SELECT 3, 'PhoneType' DECLARE @PropertyCodeValues TABLE (PropertyCodeValueID INT, PropertyCodeID INT, PropertValue VARCHAR(10)) INSERT INTO @PropertyCodeValues SELECT 1, 1, 'Blue' UNION SELECT 2, 1, 'Yellow' UNION SELECT 3, 2, 'Ford' UNION SELECT 4, 3, 'Mobile' UNION SELECT 5, 3, 'Landline' ``` If two properties is all that you need in result, and each user have those properties, then try this: ``` SELECT U.Name, U.Age, PCVFC.PropertValue, PCVCM.PropertValue FROM @User U INNER JOIN @UserPropertyValues UPVFC ON U.UserID = UPVFC.UserID INNER JOIN @PropertyCodeValues PCVFC ON UPVFC.PropertyCodeValueID = PCVFC.PropertyCodeValueID AND PCVFC.PropertyCodeID = 1 INNER JOIN @UserPropertyValues UPVCM ON U.UserID = UPVCM.UserID INNER JOIN @PropertyCodeValues PCVCM ON UPVCM.PropertyCodeValueID = PCVCM.PropertyCodeValueID AND PCVCM.PropertyCodeID = 2 ``` **[edit]** But to handle possible NULL values better use this: ``` SELECT U.Name, U.Age, FC.PropertValue, CM.PropertValue FROM @User U LEFT JOIN ( SELECT UserID, PropertValue FROM @UserPropertyValues UPV INNER JOIN @PropertyCodeValues PCV ON UPV.PropertyCodeValueID = PCV.PropertyCodeValueID AND PCV.PropertyCodeID = 1 ) FC ON U.UserID = FC.UserID LEFT JOIN ( SELECT UserID, PropertValue FROM @UserPropertyValues UPV INNER JOIN @PropertyCodeValues PCV ON UPV.PropertyCodeValueID = PCV.PropertyCodeValueID AND PCV.PropertyCodeID = 2 ) CM ON U.UserID = CM.UserID ```
What you really need to do is abandon this type of database design as soon as humanly possible. It will never be either effective or efficient. To get three types of values you have to join to the table three times. Once you have 30 or 40 different types of information, you will need to join to the table that many times (and left joins at that). Further, every time you want any information you will need to join to this table. I see this as creating a major locking issue in your database. The people who originally designed one of the databases I work with did this and caused a huge performance issue when the company grew from having one or two customers to the largest in our industry. If the properties are ones that will likely have only one related record per person, put them into the user table. If they will have multiple records, then create a separate table for each type of information (one for phones, one for email, one for car type, etc.). Since the information you will eventually want to collect will usually be more than the simple value, and different for each type of information, they must be in separate tables. Then when you only need to see one value (say phone number but not email) you join to just that table, and you aren't interfering with people trying to access email but not phone number. And if you have a yellow Ford or a white Honda, it will be stored in only one record in the auto table rather than two property records in your design.
Problem creating a view containing lookup tables
[ "", "sql", "database", "view", "" ]
Suddenly my whole project stopped compiling at all, showing the following message: > Program 'path\_to\_obj\_project\_folder' does not contain a static 'Main' > method suitable for an entry point I made no changes to project properties, just added some classes and moved some other classes into folders. It's a WPF Application project, so it should be all OK. The entry point is where it should be, and the file App.xaml was not modified at all :( What should I do to make it work again? **NOTE** *For reference*: renaming the `App.xaml` can cause this. As the OP stated, `App.xaml` was not altered; however, this is added for anyone who does rename the `App.xaml`.
Check the properties of *App.xaml*. Is the `Build Action` still `ApplicationDefinition`?
Maybe the "Output type" in properties->Application of the project must be a "Class Library" instead of console or windows application.
How to fix WPF error: "Program does not contain a static 'Main' method suitable for an entry point"?
[ "", "c#", "wpf", "" ]
In this loop, I'm trying to take user input and continually put it in a list till they write "stop". When the loop is broken, the for loop prints out all of the li's. How would I take the output of the for loop and make it a string so that I can load it into a variable? ``` x = ([]) while True: item = raw_input('Enter List Text (e.g. <li><a href="#">LIST TEXT</a></li>) (Enter "stop" to end loop):\n') if item == 'stop': print 'Loop Stopped.' break else: item = make_link(item) x.append(item) print 'List Item Added\n' for i in range(len(x)): print '<li>' + x[i] + '</li>\n' ``` I want it to end up like this: Code: ``` print list_output ``` Output: ``` <li>Blah</li> <li>Blah</li> <li>etc.</li> ```
In python, strings support a `join` method (conceptually the opposite of `split`) that allows you to join elements of a list (technically, of an iterable) together using the string. One very common use case is `', '.join(<list>)` to copy the elements of the list into a comma separated string. In your case, you probably want something like this: ``` list_output = ''.join('<li>' + item + '</li>\n' for item in x) ``` If you want the elements of the list separated by newlines, but *no newline at the end of the string*, you can do this: ``` list_output = '\n'.join('<li>' + item + '</li>' for item in x) ``` If you want to get really crazy, this might be the most efficient (although I don't recommend it): ``` list_output = '<li>' + '</li>\n<li>'.join(item for item in x) + '</li>\n' ```
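A quick runnable check that the three variants above agree (sample data taken from the desired output):

```python
x = ["Blah", "Blah", "etc."]

v1 = ''.join('<li>' + item + '</li>\n' for item in x)
v2 = '\n'.join('<li>' + item + '</li>' for item in x)
v3 = '<li>' + '</li>\n<li>'.join(x) + '</li>\n'

print(v1, end='')
# v2 deliberately has no trailing newline; the other two do.
assert v1 == v2 + '\n' == v3
```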
``` s = "\n".join(['<li>' + i + '</li>' for i in x]) ```
How would I make the output of this for loop into a string, into a variable?
[ "", "python", "" ]
I'm trying to upload a PDF file to a website using Hot Banana's content management system using a Python script. I've successfully logged into the site and can log out, but I can't seem to get file uploads to work. The file upload is part of a large complicated web form that submits the form data and PDF file though a POST. Using Firefox along with the Firebug and Tamper Data extensions I took a peek at what the browser was sending in the POST and where it was going. I believe I mimicked the data the browser was sending in the code, but I'm still having trouble. I'm importing cookielib to handle cookies, poster to encode the PDF, and urllib and urllib2 to build the request and send it to the URL. Is it possible that registering the poster openers is clobbering the cookie processor openers? Am I doing this completely wrong? --- Edit: What's a good way to debug the process? At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually. Edit 2: Chris Lively suggested I post the error I'm getting. The response from urllib2 doesn't generate an exception, but just returns: ``` <script> if (parent != window) { parent.document.location.reload(); } else { parent.document.location = 'login.cfm'; } </script> ``` I'll keep at it.
**"What's a good way to debug [a web services] process?"** *At the moment, I'm just dumping out the urllib2 response to a text file and examining the output to see if it matches what I get when I do a file upload manually.* Correct. That's about all there is. HTTP is a very simple protocol -- you make a request (POST, in this case) and the server responds. Not much else involved and not much more you can do while debugging. What else would you like? Seriously. What kind of debugger are you imagining might exist for this kind of stateless protocol?
A tool like [WireShark](http://www.wireshark.org/) will give you a more complete trace at a much lower-level than the firefox plugins. Often this can be something as simple as not setting the content-type correctly, or failing to include content-length.
How to debug a file upload?
[ "", "python", "post", "upload", "urllib2", "" ]
I am designing a class... there are crucial methods that need an object passed to them or they need to be able to "get" an object. So the question is, should you use getter/setters OR directly send the object as an argument to the method - in order for the method to work properly. Or should you set objects via the constructor if they are really crucial to the class operating correctly?
If it doesn't make sense to have an instance of this class without a certain object (eg it might make no sense to construct a data-access class without a connection to a database), then it's a "dependency" and should be part of the constructor. If your class can survive without it, or can use some default value, then you could instead make it a property and check if it's assigned before being used. I'd strongly advocate constructor dependency injection in most cases though.
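The same rule of thumb can be sketched in Python (the class and names below are invented for illustration, not from the question): the hard dependency goes through the constructor and fails fast, while the optional collaborator is an attribute with a sensible default.

```python
class ReportService:
    def __init__(self, connection):
        # Hard dependency: the object is useless without it, so demand it up front.
        if connection is None:
            raise ValueError("a connection is required")
        self._connection = connection
        # Optional collaborator with a default; callers may replace it later.
        self.formatter = str

    def render(self, row):
        return self.formatter(row)

svc = ReportService(connection=object())
print(svc.render(42))            # "42" via the default formatter
svc.formatter = lambda row: f"<td>{row}</td>"
print(svc.render(42))            # "<td>42</td>" via the swapped-in one
```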
The question isn't how "crucial" they are (every method needs to have the data it needs, by definition). A better question is how frequently they change. If they'll be different each time the method is called (or at least reasonably could be), they should be parameters. If they are expected to generally be the same for the life of the object (or a significant fraction thereof), they should be stored with the object. In the latter case, don't *just* rely on the user calling a setter. If they are required, they should be set in the constructor even if they can be changed by a setter.
Class Design - Properties or Parameters?
[ "", "c#", "class-design", "" ]
I am using JavaScript to specify the target link. It is not working properly. What change should I do in order for it to work correctly? My code: ``` var link = create_element(cell, 'img'); link.setAttribute("src", "images/sub.png"); link.href = "http://localhost/"; ```
As Sergei stated, you need a link with an image inside it. Below is an extension to your code. It's completely untested, but it gives a general idea of how you might achieve it based on what you already had. ``` var link = create_element(cell, 'a'); link.href = "http://localhost/"; var image = create_element(link, 'img'); image.setAttribute("src", "images/sub.png"); ```
You can't use href for an img tag. What about adding a click handler? ``` link.onclick = function(){top.location.replace("http://localhost");}; ```
Hyperlinking for image
[ "", "javascript", "image", "dom", "hyperlink", "" ]
POSIX uses `struct timeval` to represent time intervals. ``` struct timeval { time_t tv_sec; unsigned tv_usec; }; ``` GHS Integrity represents `Time` in the following manner, ``` struct Time { time_t Seconds; unsigned Fraction; }; ``` For example, 0.5sec is represented as `0x80000000` and 0.25sec is represented as `0x40000000`. What is the best way to convert from `timeval` to `Time`? (p.s. The answer is not to link the POSIX library into Integrity and use POSIX calls.)
This is an unusual way to represent time. Anyway, there are two easy ways to convert in either direction if you have 64-bit integers or floating points (the former are more likely on an embedded system): ``` /* assuming long is 64-bit and int is 32-bit or in general long twice the size of int: */ Fraction = (long) tv_usec * UINT_MAX / 1000000 /* usecs to fraction */ tv_usec = (long) Fraction * 1000000 / UINT_MAX /* fraction to usecs */ /* assuming floating points are available: */ Fraction = tv_usec * ((double) UINT_MAX / 1000000) /* usecs to fraction */ tv_usec = Fraction * ((double) 1000000 / UINT_MAX) /* fraction to usecs */ ``` Obviously both are only integer approximations, because most values in one scale cannot be represented as integers in the other scale. And in one direction you may be losing some precision because the `Fraction` form can represent much finer times - one increment of the `Fraction` form is less than 0.00024 microseconds. But that is only if your timer can actually measure those values, which is not very likely - most timers cannot even measure at the scale of microseconds, and the value you see in `tv_usec` is often rounded. If neither 64-bit integers nor floating points are an option, you could do it iteratively with an extra variable. I was wondering whether there is a simpler (and less expensive, considering that this is timing code) way to do such scaling than doing the equivalent of iterative 64-bit multiplication and division with two 32-bit integers. Of the two ideas that came to my mind, one would not do exact even scaling and may produce results that are off by up to 9 bits, and the one that compensates for that turns out not to be any cheaper. If something new comes to mind I will post it here, but this is an interesting challenge. Does anyone else have a good algorithm or snippet? Perhaps with the aid of a small precomputed table?
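The integer scaling above is easy to check in Python, where arbitrary-precision ints stand in for the 64-bit intermediates. This follows the answer's choice of `UINT_MAX` (not `2**32`) as the scale, which is why 0.5 s lands one count below the idealized `0x80000000` from the question:

```python
UINT_MAX = 2**32 - 1

def usec_to_fraction(usec):
    # Equivalent of: (long) tv_usec * UINT_MAX / 1000000
    return usec * UINT_MAX // 1_000_000

def fraction_to_usec(fraction):
    # Equivalent of: (long) Fraction * 1000000 / UINT_MAX
    return fraction * 1_000_000 // UINT_MAX

# 0.5 s and 0.25 s, one count below the idealized 0x80000000 / 0x40000000
# because the scale is UINT_MAX rather than 2**32.
print(hex(usec_to_fraction(500_000)))   # 0x7fffffff
print(hex(usec_to_fraction(250_000)))   # 0x3fffffff

# Round-tripping through the fraction form loses at most 1 usec.
for usec in (0, 1, 123_456, 999_999):
    back = fraction_to_usec(usec_to_fraction(usec))
    assert abs(back - usec) <= 1
```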
You might wanna read up on floating-point representation as Fraction seems to be the first bits of the significand. ``` Time t; u64 s = 1000000 * t.Seconds + u64(1000000 * reinterpret_cast<double>(0x3FF0000000000000|((u64)t.Fraction>>12))) timeval tv; tv.tv_sec = s / 1000000 tv.tv_usec = s % 1000000 ``` This is foobar but it really works... you'll need 64-bit integers and double floating-point.
How do I convert from a 32-bit int representing time in usec to a 32-bit int representing time as a binary fraction in secs?
[ "", "c++", "c", "embedded", "" ]
How do you specify where to start reading in a file when using StreamReader? I have created a streamreader object, along with a file stream object. After both objects are created, how would I go upon controlling where I want the StreamReader to start reading from a file? Let's say the file's contents are as follows, ``` // song list. // junk info. 1. Song Name 2. Song Name 3. Song Name 4. Song Name 5. Song Name 6. Song Name ``` How would I control the streamreader to read from let's say #2? Also, how could I also control where to make it stop reading by a similar delimiter like at #5? Edit: By delimiter I mean, a way to make StreamReader start reading from ('2.')
Are you trying to deserialize a file into some in-memory object? If so, you may want to simply read the entire file in using `ReadLine` or something similar, store each line, and then access it via a data structure such as a `KeyValuePair<int, string>`. Update: Ok... With the new info, I think you have two options. If you're looking at reading until you find a match, you can `Peek()`, check to see if the character is the one you're looking for, and then `Read()`. Alternatively, if you're looking for a set position, you can simply `Read()` that many characters and throw away the return value. If you're looking for a complex delimiter, you can read the entire line or even the entire file into memory and use [Regular Expressions](http://msdn.microsoft.com/en-us/library/30wbz966.aspx). Hope that helps...
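The line-based approach is easy to prototype in Python (a sketch only: the "2." and "5." prefixes are the markers from the question, and the file is faked with `io.StringIO`):

```python
import io

def read_between(stream, start_prefix, stop_prefix):
    """Yield lines from the one starting with start_prefix up to,
    but not including, the one starting with stop_prefix."""
    emitting = False
    for line in stream:
        stripped = line.strip()
        if stripped.startswith(stop_prefix):
            break
        if stripped.startswith(start_prefix):
            emitting = True
        if emitting:
            yield stripped

sample = io.StringIO(
    "// song list.\n"
    "// junk info.\n"
    "1. Song A\n2. Song B\n3. Song C\n"
    "4. Song D\n5. Song E\n6. Song F\n"
)
songs = list(read_between(sample, "2.", "5."))
print(songs)  # ['2. Song B', '3. Song C', '4. Song D']
```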
If the file contains new line delimiters you can use [ReadLine](http://msdn.microsoft.com/en-us/library/system.io.streamreader.readline.aspx) to read a line at a time. So to start reading at line #2, you would read and discard the first line, and then read lines until line #5.
How do you specify where to start reading in a file when using StreamReader?
[ "", "c#", "streamreader", "" ]
Specifically I am trying to grasp how SQL statements differ from normal C style programming, I can't seem to understand how to do stuff like "for every userid in this table that has another column of data equal to such and such, run this stored procedure" which in programming would be a for loop, how the heck do you do stuff like that? Or like, for each row in tableA that has a contentID of 11, add a new row to tableB containing this data and the userID from the row of tableA found containing contentID of 11... Anyone mind possibly writing a bit on how I should understand SQL statements compared to programming? I feel like I could wield it better if I understood how I was suppose to think about it...
They are approaching the world from different points of view. C is about performing actions. SQL is about storing data, and manipulating data. The only "actions" it is good at are pulling and changing data. Think of all your data like a [Venn diagram](http://en.wikipedia.org/wiki/Venn_Diagram) - SQL lets you "look" at any part of that diagram you want. If you want to actually do something to that data, then in C, you might say "Go to every user and perform this action on them", as in ``` //if a customer is late, send them a reminder for(int i=0;i<USER_COUNT;++i){ if(LATE_ON_PAYMENTS==CustomerType(Customers[i])){ SendReminder(Customers[i]); } //if cust is late on their payments } //for ea customer ``` In SQL, you would be able to ASK for the list of users, as in: ``` SELECT * FROM CUSTOMERS WHERE LATE_FLAG = 'Y'; ``` Or you could change data regarding those customers, as in: ``` UPDATE CUSTOMERS SET TRUST_LEVEL = TRUST_LEVEL - 1 --trust a little less when they are late WHERE LATE_FLAG = 'Y'; ``` Note that this UPDATE could affect any number of rows, but there is no loop... you are simply saying "look up these records, and change them in this way". But if you wanted to send them a reminder, well that's just too bad... you've got to use C or a stored procedure to do that. You really get the best of both worlds when you **combine** a traditional language with SQL. You can replace the earlier C example with this (disclaimer: I know this is bogus code, it's just an example): ``` //if a customer is late, send them a reminder //get all the late customers sqlCommand = 'SELECT CUSTOMER_ID FROM CUSTOMERS WHERE LATE_FLAG = ''Y'''; dataSet = GetDataSet(sqlCommand); //now loop through the late customers I just retrieved for(int i=0;i<dataSet.RecordCount;++i){ SendReminder(dataSet[i].Field('CUSTOMER_ID')); } //for ea customer ``` Now the code is more readable, and everyone is pointed at the same data source at runtime.
You also avoid the potentially messy code in C that would have been involved in building your list of customers - now it is just a dataset. Just as SQL sucks at doing imperative actions, C sucks at manipulating data sets. Used together, they can easily get data, manipulate it, and perform actions on it.
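The combined pattern above, "SQL picks the set, the host language acts on each row", looks like this in Python with sqlite3 (table and data invented to match the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, late_flag TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Ann", "N"), (2, "Bob", "Y"), (3, "Cid", "Y")])

reminded = []

def send_reminder(customer_id):
    # An imperative action: something SQL itself cannot express.
    reminded.append(customer_id)

# SQL selects the set; the host language loops over it and acts.
for (customer_id,) in conn.execute(
        "SELECT id FROM customers WHERE late_flag = 'Y'"):
    send_reminder(customer_id)

print(reminded)  # [2, 3]
```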
Let me take a crack at this. I'm taking the long road here, so bear with me. Ultimately all programs, data, etc. on a computer are composed of the same stuff: ones and zeros. Nothing more, nothing less. So how does a computer know to treat one set of ones and zeros as an image and another set as an executable? The answer is context. It's something that humans are terribly good at, so it's no surprise that it's the underpinning of much of what a computer does. The mechanisms are complex but the end effect amounts to a computer that constantly switches perspective in order to do incredibly flexible things with an incredibly limited data set. I bring this up because computer languages are similar. In the end, ALL computer languages end up as a series of op-codes run through the processor. In other words, it's assembly language all the way down. **All computer languages are assembly language, including any implementation of SQL.** The reason we bother is this: programming languages allow us to create a useful illusion of approaching problems from a new perspective. They give us a way to take a problem and re-frame the solution. At the risk of being cliche, when we don't like the answer to a problem, a different programming language allows us to ask a different question. So, when you approach a language, be it a query language or an object-oriented language or a procedural language, your first question needs to be, "What is this language's perspective? What's its outlook on the task of problem solving?" I'd go so far as to suggest that a language without a clear vision of itself is more trouble than it's worth. With C, I would suggest that the perspective is this: "Even the lowest level operations of vastly different processors can be described in a simple, common language." C is designed to get in the driver's seat of any processor out there while still having the same old steering wheel, pedals, and dash. So with C, you're doing everything.
That's why it's referred to as a "high-level assembly language". Or, to quote a friend of mine, "C is the Latin of computer languages. Assembly language is the grunts of the apes in the trees." SQL is an entirely different beast with an entirely different perspective... or is it? SQL's perspective is this: "Even the most complex commands of vastly different databases can be described in a simple, common language." Sounds familiar, eh? SQL is designed to allow you to get into the driver's seat of any *database software* and have the same steering wheel, pedals, etc. So in summary, C is a language used to give commonly-understood commands to any arbitrary CPU while SQL is a language used to give commonly-understood commands to any arbitrary database back-end. Now, where do they cross paths? It's actually fairly simple. What does a CPU do? It receives, transforms, and sends information. So if your goal is to interpret and present data or accept commands from an end-user, you're working in C. C is for the *procedures* that you need to automate through the computer. What does a database do? It stores, groups and retrieves large sets of information. So if at any point your C program needs to store, group or retrieve a large data-set or subsets of a large data set, then chances are you'll be interacting with a database. How? By having your C program send SQL commands to the database, of course. ;) I hope this illuminates things somewhat because otherwise I'll just look like a pompous so-and-whatever for this long, rambling reply. :-P
Please help me understand SQL vs C like programming?
[ "", "sql", "c", "" ]
I'd like to write an extension method for the .NET String class. I'd like it to be a special variation on the Split method - one that takes an escape character to prevent splitting the string when an escape character is used before the separator. What's the best way to write this? I'm curious about the best non-regex way to approach it. Something with a signature like... ``` public static string[] Split(this string input, string separator, char escapeCharacter) { // ... } ``` **UPDATE:** Because it came up in one of the comments, the escaping... In C#, when escaping non-special characters you get the error - CS1009: Unrecognized escape sequence. In IE JScript the escape characters are thrown out. Unless you try \u, and then you get an "Expected hexadecimal digit" error. I tested Firefox and it has the same behavior. I'd like this method to be pretty forgiving and follow the JavaScript model. If you escape on a non-separator it should just "kindly" remove the escape character.
How about: ``` public static IEnumerable<string> Split(this string input, string separator, char escapeCharacter) { int startOfSegment = 0; int index = 0; while (index < input.Length) { index = input.IndexOf(separator, index); if (index > 0 && input[index-1] == escapeCharacter) { index += separator.Length; continue; } if (index == -1) { break; } yield return input.Substring(startOfSegment, index-startOfSegment); index += separator.Length; startOfSegment = index; } yield return input.Substring(startOfSegment); } ``` That seems to work (with a few quick test strings), but it doesn't remove the escape character - that will depend on your exact situation, I suspect.
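For comparison, here is a Python sketch of the same single-pass scan. Unlike the C# version above, this variant also drops the escape character in front of a separator, as the question's update requests (other escape positions are left untouched):

```python
def split_with_escape(text, separator, escape_char):
    """Split text on separator, except where the separator is immediately
    preceded by escape_char; the escape character itself is dropped there."""
    parts = []
    current = []
    i = 0
    while i < len(text):
        if text[i] == escape_char and text[i + 1:i + 1 + len(separator)] == separator:
            current.append(separator)       # escaped separator: keep it, drop the escape
            i += 1 + len(separator)
        elif text.startswith(separator, i):
            parts.append(''.join(current))  # unescaped separator: split here
            current = []
            i += len(separator)
        else:
            current.append(text[i])
            i += 1
    parts.append(''.join(current))
    return parts

print(split_with_escape(r"a,b\,c,d", ",", "\\"))  # ['a', 'b,c', 'd']
```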
This will need to be cleaned up a bit, but this is essentially it (treating the separator as a single `char`).... ``` List<string> output = new List<string>(); int j = 0; for(int i=0; i<input.Length; ++i) { if (input[i] == separator && (i==0 || input[i-1] != escapeChar)) { output.Add(input.Substring(j, i-j)); j=i+1; } } output.Add(input.Substring(j)); return output.ToArray(); ```
C# Extension Method - String Split that also accepts an Escape Character
[ "", "c#", ".net", "parsing", "extension-methods", "split", "" ]
At the end of a page, if something occurs, it needs to be cleared, then the entire page needs to be re-parsed before serving to the client. I was going to echo out a javascript to refresh the page, but that will make them load the page and then reload it... I was wondering if there was a way to just tell the PHP engine to go back to the beginning and re-parse the entire page? Thanks! I will try to explain the problem more clearly, but it is complicated and I am a terrible communicator. On the page that lists products, I am giving users the option to select fields to narrow the results. The system remembers this so they don't have to keep selecting them. If they narrow a category like metal color and then go to a category where metal color is irrelevant, like crystal figurines, it will not show any results because none will match the metal color chosen. Generating the query to pull the products from the database is very complicated because different categories have different requirements to find the correct products. So once the query is generated, I want to test it against mysql\_num\_rows() and if there are no results, clear out the user's choices and start over.
> Generating the query to pull the products from the database is very complicated because different categories have different requirements to find the correct products. So once the query is generated, I want to test it against `mysql_num_rows()` and if there are no results, clear out the user's choices and start over. In that case, just put the query inside a function that returns the result, check the row count, and if it's zero, clear the filters and call that function a second time.
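That retry-once control flow, sketched in Python (the query builder is a made-up stand-in; only the shape of the logic matters):

```python
def find_products(filters):
    # Stand-in for the complicated query builder from the question:
    # pretend only an empty filter set matches anything.
    catalogue = [{"name": "figurine"}, {"name": "vase"}]
    if filters:
        return []
    return catalogue

def search(filters):
    rows = find_products(filters)
    if not rows:                       # mysql_num_rows() == 0 in the PHP version
        filters.clear()                # wipe the user's saved choices...
        rows = find_products(filters)  # ...and call the same function again
    return rows

result = search({"metal_colour": "yellow"})
print(result)  # falls back to the unfiltered catalogue
```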
You're being a little vague, but if you're merely talking about reparsing the output, you could do that using [output buffering](http://us.php.net/outcontrol).
PHP: re-parse entire page before serving?
[ "", "php", "parsing", "php4", "" ]
I have a JUnit test that is checking to make sure that a customized xml serialization is working properly. The customized xml serialization is just a few custom converters for Xstream. The deserializers work, but for some reason in Eclipse 3.x, JUnit fails the serialization. In Ant on the command line, it works just fine. It also works just fine in Eclipse if I debug and step through the test case, but if put a break point after the failing testcase executes it fails still. What gives? Am I having a class path issue? Some additional information: Expected: ``` <site> <name>origin</name> <version>0.6.0</version> <description>Stuff</description> <source>./fake-file.xml</source> <location> <latitude deg="44" min="26" sec="37.640"/> <longitude deg="-57" min="-38" sec="-6.877"/> <ellipsoid-height value="-79.256" units="meters"/> <geoid-height value="0.000" units="meters"/> </location> </site> ``` Actual: ``` <site> <name>origin</name> <version>0.6.0</version> <description>Stuff</description> <source>./fake-file.xml</source> <location> <latitude deg="44" min="26" sec="37.640"/> <longitude deg="-57" min="-38" sec="-6.877"/> <ellipsoid-height value="-79.256" units="meters"/> <geoid-height value="-79.256" units="meters"/> </location> </site> ``` The code that writes the location fields: ``` public void marshal(Object source, HierarchicalStreamWriter writer, MarshallingContext context) { ILatLonEllipsoidHeightPoint aLoc = (ILatLonEllipsoidHeightPoint) source; synchronized(aLoc) { writer.startNode(LATITUDE); writer.addAttribute(DEGREES, Integer.toString( PointUnitConversions.getLatitudeHours(aLoc.getLatitude()))); writer.addAttribute(MINUTES, Integer.toString( PointUnitConversions.getLatitudeMinutes(aLoc.getLatitude()))); writer.addAttribute(SECONDS, String.format("%.3f", PointUnitConversions.getLatitudeSeconds(aLoc.getLatitude()))); writer.endNode(); writer.startNode(LONGITUDE); writer.addAttribute(DEGREES, Integer.toString( 
PointUnitConversions.getLongitudeHours(aLoc.getLongitude()))); writer.addAttribute(MINUTES, Integer.toString( PointUnitConversions.getLongitudeMinutes(aLoc.getLongitude()))); writer.addAttribute(SECONDS, String.format("%.3f", PointUnitConversions.getLongitudeSeconds(aLoc.getLongitude()))); writer.endNode(); writer.startNode(ELLIPSOID_HEIGHT); writer.addAttribute(VALUE, String.format("%.3f", aLoc.getEllipsoidHeight())); writer.addAttribute(UNITS, METERS); writer.endNode(); writer.startNode(GEOID_HEIGHT); writer.addAttribute(VALUE, String.format("%.3f", aLoc.getGeoidHeight())); writer.addAttribute(UNITS, METERS); writer.endNode(); } } ``` The PointUnitConversions calls do the obvious math to take a decimal degrees and convert to corresponding integer or double values for the component parts. Its just that last attribute of location that is causing the failure.
It works fine if you debug it? Sounds like a timing issue, and debugging it slows it down enough to work. Can you post some code? Edit: Thanks for adding the info. If it's NOT failing when you step through it, but it fails when you just run it normally, I would bet it's a timing/threading issue. Are you doing this in a thread somewhere? Is there some contention? Or a race condition?
Something's different. Can't tell based on what you posted, but my guesses might be: 1. Different JVM used by Ant, command shell, and Eclipse, 2. Different JARs in the CLASSPATH for Ant and Eclipse, 3. Different XML encodings.
Why does a junit test fail in eclipse but pass from ant?
[ "", "java", "eclipse", "junit", "" ]
I have an MS-Access application that formats the data in two large spreadsheets (20,000+ rows), imports this data into tables, runs a few queries comparing the data, and outputs the results as Excel files. The problem is that as the application (and VBA code) grows, it's becoming more of a pain using Access, and I'm wondering if there is a better approach? What would be the advantages/disadvantages of a .NET (C#) solution, say, compared to MS-Access, and what would be the best libraries etc. to use? Cheers, Breandán Cheers for the responses so far; I forgot to mention, though, that this application needs to be standalone. I need to be able to package up the app and send it to the end user to install on their computer. Their computer only has (of note) MS-Office and the .NET Framework installed, so I'm not sure how feasible MySQL etc. would be with nowhere to host it.
Moving to .Net would allow you to have better tools at your disposal to manipulate the data. You have to be careful, though, about what exactly you are doing at the moment with your Access solution: if you're doing a lot of special-case handling for processing the data from and to Excel, then chances are you'll still have to do that in whatever language or framework you choose. If you have a lot of code invested into pulling the Excel data into Access, then you could still keep Access for that part and use .Net to help you in doing the comparisons and creating the resulting Excel report.

It's a bit hard to really make a recommendation without knowing more about your project. If you just want to use automation to pull in data and create your Excel file, then .Net may not offer you a lot, as you'll still have to do the exact same things you've already done in Access. Instead, you may consider using commercial Excel components that use a different paradigm to open/create Excel spreadsheets in a nicer way. There are a few component vendors that have these. One solution is also to use reporting tools to pull the data directly from Excel and produce a report that you can simply save back to Excel.

My advice would be:

* If your Access solution is stable and it's doing its job, then you may consider keeping it. Moving to a new system will cost you time and money, and you have to check whether the outcome is worth the investment.
* If you feel too constrained by the capabilities of Access, then spend some time experimenting with various solutions and components that allow you to manipulate Excel; for instance, using a LINQ Excel provider ([1](http://solidcoding.blogspot.com/2008/01/linq-to-excel-provider-20.html) or [2](http://blogs.msdn.com/ericwhite/archive/2008/11/14/using-linq-to-query-excel-tables.aspx)) may provide a nice abstraction, or try various [commercial components](http://www.componentsource.com/features/spreadsheet/index.html) until you find one that matches your needs.
If you're going the .Net route, you may end up not even needing a database for processing the data. If you do, though, you can always use Jet (or its new version, ACE) as a back-end that will create MS Access databases. It's already installed on most machines and well supported by MS tools. Other good options are SQL Server Compact and SQLite, as none of these require complex setup: just a DLL to ship with your project.
I'd say for the data volumes of 20,000 rows you're working with, a SQL Server database isn't really going to gain you much, except for moving to stored procedures for data manipulation. In this respect, it's arguably better than VBA, so you will probably get a code base that's more maintainable. However, the data volumes you describe are tiny by database standards. I wouldn't expect performance to be an issue until you have one or two orders of magnitude more data than that.

If you want to do a [data munging](http://www.manning.com/cross/) job, you might be better off with a scripting language like [Perl](http://www.cpan.org) or [Python](http://www.python.org). These languages are much better for data manipulation tasks than C# or VB.Net. Good, free Windows distributions of both Perl and Python can be found at [www.activestate.com](http://www.activestate.com/).

Excel can be scripted with Python through the [python-com](http://python.net/crew/mhammond/win32/Downloads.html) interface using the same API as VBA, but gaining a much better language with a huge variety of libraries available. Similarly, this can also be done with Perl through [Win32::OLE](http://www.perlmonks.org/?node=Win32%3A%3AOLE). There are also some utility libraries, such as [pyexcelerator](http://sourceforge.net/projects/pyexcelerator), [xlrd](http://www.lexicon.net/sjmachin/xlrd.htm), and [xlwt](https://secure.simplistix.co.uk/svn/xlwt/trunk/README.html) for Python, and [Spreadsheet::WriteExcel/Spreadsheet::ParseExcel](http://www.ibm.com/developerworks/linux/library/l-pexcel/) for Perl. There are also modules available for building installable Windows applications, such as [Py2EXE](http://www.py2exe.org/) or the [Perl Dev Kit](http://activestate.com/perl_dev_kit/?gclid=CMDIzd-a3ZgCFUIw3godIVWBeA). [This Stackoverflow posting](https://stackoverflow.com/questions/441758/driving-excel-from-python-in-windows) discusses using Excel from Python, including generating a wrapper with MakePy, in a bit more depth.
If you're working with .Net, you could also try [IronPython](http://www.codeplex.com/Wiki/View.aspx?ProjectName=IronPython) - it's a native .Net implementation of Python, which will run anywhere with a suitable .Net runtime installed. You can also get a free Visual Studio plugin called [IronPythonStudio](http://www.codeplex.com/IronPythonStudio/Release/ProjectReleases.aspx). Another alternative is [R](http://www.r-project.org). R is primarily a statistical package, but the core language has strong data manipulation capabilities and a variety of interface libraries, including graphics, various statistical computations and an [Excel interface](http://www.stats.bris.ac.uk/R/web/packages/RExcelInstaller/index.html). It's actually quite a powerful general-purpose data manipulation and reporting tool.
Alternative to MS-Access/Excel for Spreadsheet Manipulation
[ "", "c#", ".net", "excel", "ms-access", "spreadsheet", "" ]
How do I order by child objects in LINQ? Classes A, B, and C. A has a collection of B and B has a collection of C. I want to order object A by the Ordinal (int) property of C.

```
var query = from a in db.A
            orderby a.Bs.OrderBy(x => x.C.Ordinal) <--- ??
            select a;
```

I can't seem to figure out the orderby statement for this.

EDIT: Sorry, my original statement was incorrect: A has a collection of B and B holds a C object. C is not a collection. Each A should be ordered on C's ordinal property.

ANOTHER EDIT/Solution: I ended up doing an .OrderBy(b => b.C.Ordinal) on the client for each B collection in A during display. That turned out better anyway, since I can let the client order by anything they need to, instead of embedding that in my repository.
You need to work out *which* C you're interested in. Think of this in terms of real things - if you're trying to order parents by their children's ages, *which* child's age to you take into account? If you have one parent with kids of ages 1 and 5, and one parent with kids of ages 2 and 4, which should come first?
You are trying to order by a collection. That won't work. You need to either choose one element or aggregate a single value from the child list.
Order a parent object by a child's property in LINQ
[ "", "c#", "linq", "" ]
So say I want to... insert into tableA 2 values from tableB, but only for rows in tableB that have 1 of the values equal to a certain thing... hmm, let's see if I can be more specific: I want to create a new row in tableA with the userid and courseid from tableB for every row of tableB that has a courseid of 11. Please advise.
```
INSERT INTO tableA (col1, col2)
SELECT userid, courseid
FROM tableB
WHERE courseid = 11
```
Well, not knowing what columns you have in tableA, I'd say:

```
insert into tableA
select userid, courseid
from tableB
where courseid=11
```
Possible to INSERT INTO a table with values from another table?
[ "", "sql", "insert", "" ]
In Java, this code throws an exception when the HTTP result is in the 404 range:

```
URL url = new URL("http://stackoverflow.com/asdf404notfound");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.getInputStream(); // throws!
```

In my case, I happen to know that the content is 404, but I'd still like to read the body of the response anyway. (In my actual case the response code is 403, but the body of the response explains the reason for rejection, and I'd like to display that to the user.) How can I access the response body?
[Here is the bug report](https://bugs.java.com/bugdatabase/view_bug?bug_id=4513568) (closed, will not fix, not a bug). Their advice there is to code like this:

```
HttpURLConnection httpConn = (HttpURLConnection)_urlConnection;
InputStream _is;
if (httpConn.getResponseCode() < HttpURLConnection.HTTP_BAD_REQUEST) {
    _is = httpConn.getInputStream();
} else {
    /* error from server */
    _is = httpConn.getErrorStream();
}
```
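A self-contained way to see this behavior end to end (a sketch: the throwaway local server, the class name, and the response message are all invented for the demo; it needs Java 8+ for the lambda and uses the JDK's built-in `com.sun.net.httpserver` package):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ErrorBodyDemo {

    // Read the response body whether the status is a success or an error.
    static String readBody(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        InputStream is = conn.getResponseCode() < HttpURLConnection.HTTP_BAD_REQUEST
                ? conn.getInputStream()   // 2xx/3xx
                : conn.getErrorStream();  // 4xx/5xx: the body is still readable here
        StringBuilder body = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(is, StandardCharsets.UTF_8))) {
            for (String line; (line = r.readLine()) != null; ) {
                body.append(line);
            }
        }
        return body.toString();
    }

    // Start a throwaway server that answers 404 with a body, then fetch it.
    public static String demo() {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/", exchange -> {
                byte[] msg = "no such fish".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(404, msg.length);
                exchange.getResponseBody().write(msg);
                exchange.close();
            });
            server.start();
            try {
                int port = server.getAddress().getPort();
                return readBody(new URL("http://localhost:" + port + "/missing"));
            } finally {
                server.stop(0);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints: no such fish
    }
}
```

Calling `conn.getInputStream()` directly on the 404 URL would throw `FileNotFoundException`; the branch on the response code is what makes the error body reachable.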
It's the same problem I was having: `HttpUrlConnection` returns `FileNotFoundException` if you try to read `getInputStream()` from the connection. You should instead use `getErrorStream()` when the status code is 400 or higher. More than this, please be careful: 200 is not the only success status code; 201, 204, etc. are also often used as success statuses. Here is an example of how I managed it:

```
... connection code code code ...

// Get the response code
int statusCode = connection.getResponseCode();
InputStream is = null;
if (statusCode >= 200 && statusCode < 400) {
    // Create an InputStream in order to extract the response object
    is = connection.getInputStream();
} else {
    is = connection.getErrorStream();
}

... callback/response to your handler....
```

In this way, you'll be able to get the needed response in both success and error cases. Hope this helps!
Read error response body in Java
[ "", "java", "http", "httpurlconnection", "" ]
I would like to write a [JavaScript](http://en.wikipedia.org/wiki/JavaScript) function that validates a zip code, by checking if the zip code actually exists. Here is a list of all zip codes: <http://www.census.gov/tiger/tms/gazetteer/zips.txt> (I only care about the 2nd column)

---

This is really a compression problem. I would like to do this for fun. OK, now that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of:

* Break the zip code into 2 parts: the first 2 digits and the last 3 digits.
* Make a giant if-else statement, first checking the first 2 digits, then checking ranges within the last 3 digits.
* Or, convert the zips into hex, and see if I can do the same thing using smaller groups.
* Find out whether, within the range of all possible zip codes, there are more valid zip codes than invalid ones. Write the above code targeting the smaller group.
* Break up the hash into separate files, and load them via Ajax as the user types in the zip code. So perhaps break into 2 parts: the first for the first 2 digits, the second for the last 3.

Lastly, I plan to generate the JavaScript files using another program, not by hand.

Edit: performance matters here. I do want to use this, if it doesn't suck. Performance of the JavaScript code execution + download time.

Edit 2: JavaScript-only solutions please. I don't have access to the application server, plus, that would make this into a whole other problem =)
> I would like to write a JavaScript function that validates a zip code

Might be more effort than it's worth, keeping it updated so that at no point someone's real valid ZIP code is rejected. You could also try an external service, or do what everyone else does and just accept any 5-digit number!

> here is a list of optimizations over a straight hashtable that I can think of

Sorry to spoil the potential fun, but you're probably not going to manage much better actual performance than JavaScript's Object gives you when used as a hashtable. Object member access is one of the most common operations in JS and will be super-optimised; building your own data structures is unlikely to beat it even if they are potentially better structures from a computer science point of view. In particular, anything using ‘Array’ is not going to perform as well as you think, because Array is actually implemented as an Object (hashtable) itself.

Having said that, a possible space compression tool if you only need to know ‘valid or not’ would be to use a 100000-bit bitfield, packed into a string. For example, for a space of only 100 ZIP codes, where codes 032-043 are ‘valid’:

```
var zipfield= '\x00\x00\x00\x00\xFF\x0F\x00\x00\x00\x00\x00\x00\x00';

function isvalid(zip) {
    if (!zip.match('^[0-9]{3}$'))
        return false;
    var z= parseInt(zip, 10);
    return !!( zipfield.charCodeAt(Math.floor(z/8)) & (1<<(z%8)) );
}
```

(Note the `^` and `$` anchors on the regexp: without them, any longer string containing three digits somewhere would slip through.)

Now we just have to work out the most efficient way to get the bitfield to the script. The naive '\x00'-filled version above is pretty inefficient. Conventional approaches to reducing that would be, e.g., to base64-encode it:

```
var zipfield= atob('AAAAAP8PAAAAAAAAAA==');
```

That would get the 100000 flags down to 16.6kB. Unfortunately atob is Mozilla-only, so an additional base64 decoder would be needed for other browsers. (It's not too hard, but it's a bit more startup time to decode.)
It might also be possible to use an AJAX request to transfer a direct binary string (encoded in ISO-8859-1 text to responseText). That would get it down to 12.5kB. But in reality probably anything, even the naive version, would do as long as you served the script using mod\_deflate, which would compress away a lot of that redundancy, and also the repetition of '\x00' for all the long ranges of ‘invalid’ codes.
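If maintaining that packed string by hand is a concern, it can be generated from the raw list of valid codes. A sketch (the helper names here are invented; the bit layout matches the `isvalid()` function above):

```javascript
// Build the packed bitfield string from an array of valid numeric codes.
function buildZipField(validCodes, totalCodes) {
    var bytes = new Array(Math.ceil(totalCodes / 8));
    for (var i = 0; i < bytes.length; i++) bytes[i] = 0;
    for (var j = 0; j < validCodes.length; j++) {
        var z = validCodes[j];
        bytes[Math.floor(z / 8)] |= (1 << (z % 8));
    }
    var field = '';
    for (var k = 0; k < bytes.length; k++) {
        field += String.fromCharCode(bytes[k]);
    }
    return field;
}

// Same bit test as isvalid() above, for a pre-parsed number.
function isValidCode(field, z) {
    return !!(field.charCodeAt(Math.floor(z / 8)) & (1 << (z % 8)));
}
```

For the 100-code example, `buildZipField` with codes 32 through 43 reproduces the `'\x00\x00\x00\x00\xFF\x0F...'` string byte for byte.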
You could do the unthinkable and treat the code as a number (remember that it's not actually a number). Convert your list into a series of ranges, for example:

```
zips = [10000, 10001, 10002, 10003, 23001, 23002, 23003, 36001]
// becomes
zips = [[10000,10003], [23001,23003], [36001,36001]]
// make sure to keep this sorted
```

then to test:

```
myzip = 23002;
for (i = 0, l = zips.length; i < l; ++i) {
    if (myzip >= zips[i][0] && myzip <= zips[i][1]) {
        return true;
    }
}
return false;
```

this is just using a very naive linear search (O(n)). If you kept the list sorted and used binary searching, you could achieve O(log n).
Writing a JavaScript zip code validation function
[ "", "javascript", "hash", "compression", "zipcode", "" ]
What are the best practices for doing DOM insertion?

* Is it faster to insert large chunks of HTML vs. one element at a time in a loop?
* Does it matter what HTML you are inserting, or only how big the chunk is?
* Is it faster to insert a table, vs. inserting just the rows using the table hack?
innerHTML insertion is marginally faster than 1:1 DOM manipulation, and gains more for cases where you're actually inserting multiple nodes and attributes etc., but it's more error-prone and dangerous, given it's essentially an eval statement in disguise. In my experience JS is so fast these days that the speed gains of innerHTML do not justify the risks for anything but the largest of insertions/iteration batches. Long story short, you want to do the fewest DOM manipulations possible, so one tip when creating a hierarchy for insertion is to build the nodes against each other in memory and then insert the highest element into the DOM at the last possible moment. That leaves the fewest render updates for the browser. Of course, again, we're talking fractions of milliseconds...
Setting `innerHTML` is often faster than inserting seperate nodes. Another possibility would be to [create a DocumentFragment](https://developer.mozilla.org/en/DOM/document.createDocumentFragment), which allows to insert the nodes all at once. Another advantage of using DocumentFragments is that they can easily be [cloned](https://developer.mozilla.org/En/DOM/Node.cloneNode), which can replace a lot of use cases for `innerHTML` and is potentially faster as no parsing is involved.
Fastest DOM insertion
[ "", "javascript", "dom", "" ]
I have this in mind:

On *each* server (they *all* are set up *identically*):

* A free database like [MySQL](http://www.mysql.com/) or [PostgreSQL](http://www.postgresql.org/).
* [Tomcat](http://tomcat.apache.org/) 6.x for hosting Servlet based Java applications
* [Hibernate](http://www.hibernate.org/) 3.x as the ORM tool
* [Spring](http://en.wikipedia.org/wiki/Spring_framework) 2.5 for the business layer
* [Wicket](http://wicket.apache.org/) 1.3.2 for the presentation layer

I place a load balancer in front of the servers and a replacement load balancer in case my primary load balancer goes down. I use [Terracotta](http://www.terracotta.org/) to have the session information replicated between the servers. If a server goes down, the user should be able to continue their work at another server, ideally as if nothing happened.

What is left to "solve" (as I haven't actually tested this and, for example, do not know what I should use as a load balancer) is the database replication which is needed. If a user interacts with the application and the database changes, then that change must be replicated to the database servers on the other server machines. **How should I go about doing that? Should I use MySQL, PostgreSQL or something else (which ideally is free, as we have a limited budget)? Do the other things above sound sensible?**

Clarification: I cluster to get high availability first and foremost, and I want to be able to add servers and use them all at the same time to get high scalability.
Since you're already using [Terracotta](http://www.terracotta.org), and you believe that a second DB is a good idea (agreed), you might consider expanding Terracotta's role. We have customers who use Terracotta for database replication. Here's a brief example/description, but I think they have stopped supporting clients for this product: <http://www.terracotta.org/web/display/orgsite/TCCS+Asynchronous+Data+Replication>
You are trying to create multi-master replication, which is a very bad idea, as any change to any database has to replicate to every other database. This is terribly slow - on one server you can get several hundred transactions per second using a couple of fast disks and RAID1 or RAID10. It can be much more if you have a good RAID controller with battery-backed cache. If you add the overhead of communicating with all your servers, you'll get at most tens of transactions per second.

If you want high availability, you should go for a warm standby solution, where you have a server which is replicated but not used - when the main server dies, a replacement takes over. You can lose some recent transactions if your main server dies.

You can also go for one-master, multiple-slave asynchronous replication. Every change to the database will have to be performed on the one master server, but you can have several slave, read-only servers. Data on these slave servers can be several transactions behind the master, so you can also lose some recent transactions in case of server death.

PostgreSQL has both types of replication - warm standby using log shipping, and one master, multiple slaves using Slony. Only if you will have a very small number of writes can you go for synchronous replication. This can also be set up for PostgreSQL using PgPool-II or Sequoia. Please read the [High Availability, Load Balancing, and Replication](http://www.postgresql.org/docs/8.3/interactive/high-availability.html) chapter in the Postgres documentation for more.
Database replication for redundancy using a free database and a Java with Spring & Hibernate web application
[ "", "java", "mysql", "database", "postgresql", "redundancy", "" ]
I have an abstract superclass and various subclasses. Each subclass contains a value that I would like to use statically, but it is not possible to create an abstract static method. I want to get a value from them dynamically without having to create instances. What do I do? Another question would be: how would I loop through the subclasses? Is it even possible? One attempt involved mapping class names (Subclass.class) to the value and trying to use newInstance on them so I could use a method to get the value, but this doesn't work. Where am I going wrong in my approach?
Creating a second class for each of your subclasses which represents the type of that subclass might work. For example, create a factory class for each subclass (a class that is responsible for creating instances of that subclass). There only needs to be one instance of each factory class. Each factory class can then be responsible for knowing the subclass-specific data you describe. You then just need to loop over a fixed set of factory classes.
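A sketch of that idea (every class and value name here is invented for illustration): one singleton factory per subclass, registered in a map, so the per-subclass value is available without instantiating any subclass, and iterating the map stands in for "looping through the subclasses":

```java
import java.util.LinkedHashMap;
import java.util.Map;

class SubclassRegistry {

    // Each factory knows its subclass's value (and could also create instances).
    interface Factory {
        String battleCry(); // the per-subclass value we want without instantiation
    }

    static final Map<String, Factory> FACTORIES = new LinkedHashMap<>();
    static {
        FACTORIES.put("Dragon", new Factory() {
            public String battleCry() { return "Roar"; }
        });
        FACTORIES.put("Slime", new Factory() {
            public String battleCry() { return "Squish"; }
        });
    }

    // "Loop through the subclasses" by looping over their factories.
    static String allCries() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Factory> e : FACTORIES.entrySet()) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(e.getKey()).append(':').append(e.getValue().battleCry());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(allCries()); // prints: Dragon:Roar Slime:Squish
    }
}
```

The registration could equally live in a static block of each subclass or be driven by an enum; the point is that no subclass instance is ever created just to read the value.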
Why not go about it the other way? Put the data someplace statically accessible and have the subclasses get it from there? Of course, the feasibility of this depends on the nature of the data but when you find yourself hitting this sort of barrier it often helps to step back and reexamine your assumptions. -- MarkusQ
Getting data from a subclass without instantiation
[ "", "java", "" ]
Can you explain the code below to me?

```
private static List<Post> _Posts;

public static Post GetPost(Guid id)
{
    return _Posts.Find(delegate(Post p) { return p.Id == id; });
}
```

1. What is the point of finding an object in a generic list this way? He could simply iterate the list.
2. How is this delegated method called for each element of the list?

**NOTE:** if this has a common name, can you update my question's title? Thanks!
You're quite right that he can iterate over the list; you can think of the code in your question as being conceptually the same as the following:

```
private static Post GetPost(Guid id)
{
    Post p = default(Post);
    foreach (Post post in _Posts)
    {
        if (post.Id == id)
        {
            p = post;
            break;
        }
    }
    return p;
}
```

It requires less code to write your snippet, and, importantly, you are now saying what you want to be found and not exactly how to find it:

```
private static Post GetPost(Guid id)
{
    return _Posts.Find(delegate(Post p) { return p.Id == id; });
}
```

In C# 3.0 this can be shortened further using what is called a "[lambda expression](http://msdn.microsoft.com/en-us/library/bb397687.aspx)":

```
private static Post NewGetPost(Guid id)
{
    return _Posts.Find(p => p.Id == id);
}
```

Using the least amount of readable code to achieve the same goal makes both writers and readers of that code happier.
He is using an anonymous delegate. He could have used a [lambda expression](http://msdn.microsoft.com/en-us/library/bb397687.aspx) instead:

```
Posts.Find(p => p.Id == id)
```

Also, wrapping access to the list in a method achieves nothing in this case and exposes the elements of the list to external callers. This is bad practice.
Anonymous Delegates and generic Lists in C#
[ "", "c#", "generics", "anonymous-delegates", "" ]
How to check if a session is invalid or not? There is no method in the [API](http://java.sun.com/products/servlet/2.2/javadoc/javax/servlet/http/HttpSession.html). Is it the same as `isNew()`? And what is the difference if not?
If you want to know whether it is valid based on a request:

```
request.isRequestedSessionIdValid()

or

HttpSession sess = request.getSession(false);
if (sess != null) {
    // it's valid
}
```

If you have stored a reference to the session and need to validate it, I would:

```
try {
    long sd = session.getCreationTime();
} catch (IllegalStateException ise) {
    // it's invalid
}
```
`isNew()` is true only if this session wasn't yet accepted by client (i.e. it was just created, and JSESSIONID wasn't sent yet, or if it was sent, client didn't send it back, so server doesn't know about it and created another session)
How to check if a session is invalid
[ "", "java", "session", "httpsession", "" ]
I have the following C++ code that tries to generate a random number. The [idea](https://stackoverflow.com/questions/601727/generating-synthetic-dna-sequence-with-subtitution-rate/601750#601750) is: given some rate "x" and a number of runs, we hope it would generate the number as many as (x \* number of runs) times.

```
#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>
#include <time.h>
using namespace std;

int main () {
    // Initialize Random Seed
    srand (time(NULL));
    string line;
    double SubsRate = 0.003;
    double nofRuns  = 1000000;

    for (unsigned i=0; i < nofRuns ; i++) {
        int toSub = rand() % 1000 + 1;
        if (toSub == (SubsRate * 1000)) {
            cout << toSub << " Sub" << endl;
        }
    }
    return 0;
}
```

Hence if we run the code above K times with this command:

```
$ a=0 ; while test $a -lt 10 ; do ./MyCode | wc -l ; a=`expr $a + 1` ; done
```

We expect it to generate the number "3" around ~3000 times in 1M runs. But somehow my code above only generates the number "3" about 900 ~ 1000 times. How can I improve the code above?
In other words, you are checking that the result == 3, not that the result is <= 3. Exactly 3 will only happen one time in 1000, but <= 3 will happen at the rate you want.
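A sketch of the corrected comparison (the helper name is made up; `<random>` is used instead of `rand()` so the run is reproducible, but the same `<=` fix applies unchanged to the original code):

```cpp
#include <cassert>
#include <random>

// Draw a uniform integer in [1, 1000]; accepting everything <= rate * 1000
// (here 1, 2 or 3) gives an acceptance probability of `rate`, instead of the
// 1-in-1000 chance of hitting one exact value.
long countAccepted(unsigned long runs, double rate, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_int_distribution<int> dist(1, 1000);
    long accepted = 0;
    for (unsigned long i = 0; i < runs; ++i) {
        if (dist(gen) <= rate * 1000) {
            ++accepted;
        }
    }
    return accepted;
}
```

With rate 0.003 and 1,000,000 runs this lands near 3,000 hits, give or take normal sampling noise, which is the behaviour the question was after.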
* You will expect to get number 3 one time out of 1000, i.e. 1000 times out of 1M.
* You will expect to get number 9 one time out of 1000, i.e. 1000 times out of 1M.
* You will expect to get number 7 one time out of 1000, i.e. 1000 times out of 1M.
* You will expect to get either one of 3, 7 or 9 three times out of 1000, i.e. 3000 times out of 1M.
Generating Random Number with Certain Rate
[ "", "c++", "algorithm", "random", "" ]
I've got a simple file host going that gives files a unique id and just stores them in a directory. I've been told that this will cause problems in the future, and I'm wondering what things I should look out for to make sure it works smoothly into the future and beyond. Also, is there a performance issue with forcing downloads by sending header information and readfile()? Would it be better to preserve file names and allow users to download directly instead of using a script? Thanks
> Also, is there a performance issue with forcing downloads by sending header information and readfile()?

Yes, if you do it naively. A good file download script should:

* stream long files to avoid filling memory
* support ETags and Last-Modified request/response headers to ensure caches continue to work
* come up with reasonable Expires/Cache-Control settings

It still won't be as fast as the web server (which is typically written in C and heavily optimised for serving files, maybe even using OS kernel features for it), but it'll be much better.

> Would it be better to preserve file names and allow users to download directly instead of using a script?

It would perform better, yes, but getting the security right is a challenge. See [here](https://stackoverflow.com/questions/602539/stop-people-uploading-malicious-php-files-via-forms/602904#602904) for some discussion.

A compromise is to use a rewrite, so that the URL looks something like:

```
hxxp://www.example.com/files/1234/Lovely_long_filename_that_can_contain_any_Unicode_character.zip
```

But it gets redirected internally to:

```
hxxp://www.example.com/realfiles/1234.dat
```

and served (quickly) by the web server.
The kind of problems you have been told about very likely have to do with the **performance impact of piling thousands and thousands of files in the same directory**.

To circumvent this, do not store your files directly under one directory, but **try to spread them out under subdirectories** (*buckets*).

In order to achieve this, look at the ID (let's say 19873) of the file you are about to store, and store it under `<uploads>/73/98/19873_<filename.ext>`, where 73 is `ID % 100`, 98 is `(ID / 100) % 100` etc.

The above guarantees that you will have at most 100 subdirectories under `<uploads>`, and at most 100 further subdirectories underneath `<uploads>/*`. This will thin out the number of files per directory at the leaves significantly.

Two levels of subdirectories are typical enough, and represent a good balance between not wasting too much time resolving directory or file names to inodes both in breadth (what happens when you have too many filenames to look through in the same directory - although modern filesystems such as `ext3` will be very efficient here) and depth (what happens when you have to go 20 subdirectories deep looking for your file). You may also elect to use larger or smaller values (10, 1000) instead of 100. Two levels with modulo 100 would be ideal for between 100k and 5M files.

Employ the same technique to calculate the full path of a file on the filesystem given the ID of a file that needs to be retrieved.
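The bucketing arithmetic is easy to check from a shell prompt (a sketch; the function name is invented, and a PHP or other server-side port is a direct translation of the two modulo operations):

```shell
# Compute the two-level bucket path for a given id and filename:
# first level is id % 100, second level is (id / 100) % 100.
bucket_path() {
    id=$1
    name=$2
    printf '%02d/%02d/%d_%s\n' "$((id % 100))" "$((id / 100 % 100))" "$id" "$name"
}

bucket_path 19873 photo.jpg   # -> 73/98/19873_photo.jpg
```

Because the path is derived purely from the ID, both storing and retrieving a file use the same function and no lookup table is needed.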
Directory structure for a file host
[ "", "php", "linux", "filesystems", "" ]
I have a view in one of my Oracle databases that is taking too long to execute. When the statement runs, it does not seem to stop. Is there any way that we can verify the performance of this view, or check to see if the statement session is 'hanging'? Thanks, N2EE

# UPDATE

I realised that the issue is with the underlying query in the view. Thanks to Edwin for the autotrace fix.
Most likely the execution of your query is very slow. You can see how the query is executed in the database by using explain plan. If you have SQL\*Plus, you can do this very easily with the following statement:

```
set autotrace traceonly
```

Then type in the query, and you will get statistics on your query like this:

```
SQL> set autotrace traceonly
SQL> select * from o_drops;

4461 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 3820245448

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |  4287 |   280K|    11  (10)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| O_DROPS |  4287 |   280K|    11  (10)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        333  consistent gets
         48  physical reads
          0  redo size
     337057  bytes sent via SQL*Net to client
       2316  bytes received via SQL*Net from client
        299  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       4461  rows processed
```

If one of the resources is very high, it could help to rewrite the query and/or add indexes to the table you are using.
You'll need to take a look at the performance of the query that makes up the view. The best way to do that is to do an explain plan on the SQL statement the view uses. That will indicate if it's doing full table scans or some other less-than-optimal behavior. Tune the query and your view should run much better.
Oracle Database 10g VIEW performance
[ "", "sql", "oracle", "plsql", "oracle10g", "view", "" ]
I want to parse strings, to check whether they have a specified syntax or not. Example:

```
Str = Z344-R565l t
```

Here my requirement is: after `Z` there should be a number, after that a `-`, after that an `R` followed by a number, followed by `l`, followed by a space and then finally `t`. Anything other than this should be an error. I have to parse many different kinds of syntax like this. It would be awkward to write a function for each type of syntax required. I heard that yacc or lex can solve this problem. Can anyone please throw some light on my problem?
Use boost::regex:

```
#include <string>
#include <boost/regex.hpp>

bool isMatch(std::string input){
    // '+' rather than '*' so that at least one digit is actually required,
    // and regex_match rather than regex_search so the whole string must conform.
    boost::regex r("Z[0-9]+-R[0-9]+l t");
    return boost::regex_match(input, r);
}
```

The other thing that you could do is supply a list of regex expressions in a file, one expression per line. Create a vector of boost::regex objects using the file input and iterate through the vector of patterns for each string you need to validate. It's not very efficient, but it will work.
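If a C++11 compiler is available, the same check also works with the standard `<regex>` header and no Boost dependency (a sketch mirroring the pattern above; `std::regex_match` requires the entire string to conform, so partial matches are rejected):

```cpp
#include <cassert>
#include <regex>
#include <string>

// 'Z', one or more digits, '-', 'R', one or more digits, 'l', space, 't'.
bool isMatch(const std::string& input) {
    static const std::regex pattern("Z[0-9]+-R[0-9]+l t");
    return std::regex_match(input, pattern);
}
```

For a family of formats, the same approach scales by keeping a `std::vector<std::regex>` loaded from a file, exactly as described for the Boost version.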
You do this with a regex:

```
Str = Z344-R565l t
```

Your regex should look something like this. Not sure what regex library to use for C++, but this is the general regex to make sure that your string matches:

```
Z[0-9]+-R[0-9]+l t
```
How parse a string in C++
[ "", "c++", "lexical-analysis", "parser-generator", "" ]
This is a memory allocation issue that I've never really understood.

```
void unleashMonkeyFish()
{
    MonkeyFish * monkey_fish = new MonkeyFish();
    std::string localname = "Wanda";
    monkey_fish->setName(localname);
    monkey_fish->go();
}
```

In the above code, I've created a MonkeyFish object on the heap, assigned it a name, and then unleashed it upon the world. Let's say that ownership of the allocated memory has been transferred to the MonkeyFish object itself - and only the MonkeyFish itself will decide when to die and delete itself.

Now, when I define the "name" data member inside the MonkeyFish class, I can choose one of the following:

```
std::string name;
std::string & name;
```

When I define the prototype for the setName() function inside the MonkeyFish class, I can choose one of the following:

```
void setName( const std::string & parameter_name );
void setName( const std::string parameter_name );
```

I want to be able to minimize string copies. In fact, I want to eliminate them entirely if I can. So, it seems like I should pass the parameter by reference...right?

What bugs me is that it seems that my localname variable is going to go out of scope once the unleashMonkeyFish() function completes. Does that mean I'm FORCED to pass the parameter by copy? Or can I pass it by reference and "get away with it" somehow?

Basically, I want to avoid these scenarios:

1. I don't want to set the MonkeyFish's name, only to have the memory for the localname string go away when the unleashMonkeyFish() function terminates. (This seems like it would be very bad.)
2. I don't want to copy the string if I can help it.
3. I would prefer not to new localname

What prototype and data member combination should I use?

**CLARIFICATION**: Several answers suggested using the static keyword to ensure that the memory is not automatically de-allocated when unleashMonkeyFish() ends.
Since the ultimate goal of this application is to unleash N MonkeyFish (all of which must have unique names) this is not a viable option. (And yes, MonkeyFish - being fickle creatures - often change their names, sometimes several times in a single day.) **EDIT**: Greg Hewgill has pointed out that it is illegal to store the name variable as a reference, since it is not being set in the constructor. I'm leaving the mistake in the question as-is, since I think my mistake (and Greg's correction) might be useful to someone seeing this problem for the first time.
One way to do this is to have your string ``` std::string name; ``` as the data member of your object. Then, in the unleashMonkeyFish function, create a string like you did and *pass it by reference* like you showed: ``` void setName( const std::string & parameter_name ) { name = parameter_name; } ``` It will do what you want, creating a single copy of the string into your data member. It does not have to re-allocate a new buffer internally every time you assign another string. Most likely, assigning a new string just copies a few bytes. std::string also has the capability to reserve bytes, so you can call "name.reserve(25);" in your constructor and it will likely not reallocate if you assign something smaller. (I have done tests, and it looks like GCC always reallocates if you assign from another std::string, but not if you assign from a C string. [They say](http://gcc.gnu.org/onlinedocs/libstdc++/manual/bk01pt05ch13s05.html) they have a copy-on-write string, which would explain that behavior.) The string you create in the unleashMonkeyFish function will automatically release its allocated resources. That's the key feature of these objects - they manage their own resources. Classes have a destructor that they use to free allocated resources once objects die, and std::string does too. In my opinion, you should not worry about having that std::string local in the function. It will most likely not do anything noticeable to your performance. Some std::string implementations (MSVC++, as far as I know) have a small-buffer optimization: up to some small limit, they keep characters in an embedded buffer instead of allocating from the heap. **Edit**: As it turns out, there is a better way to do this for classes that have an efficient `swap` implementation (constant time): ``` void setName(std::string parameter_name) { name.swap(parameter_name); } ``` The reason this is better is that now the caller knows that the argument is being copied.
Return value optimization and similar optimizations can now be applied easily by the compiler. Consider this case, for example ``` obj.setName("Mr. " + things.getName()); ``` If you had the `setName` take a reference, then the temporary created in the argument would be bound to that reference, and within `setName` it would be copied, and after it returns, the temporary would be destroyed - which was a throw-away product anyway. This is only suboptimal, because the temporary itself could have been used, instead of its copy. Having the parameter not a reference will make the caller see that the argument is being copied anyway, and make the optimizer's job much more easy - because it wouldn't have to inline the call to see that the argument is copied anyway. For further explanation, read the excellent article [`BoostCon09/Rvalue-References`](https://www.boostpro.com/trac/wiki/BoostCon09/RValue101)
If you use the following method declaration: ``` void setName( const std::string & parameter_name ); ``` then you would also use the member declaration: ``` std::string name; ``` and the assignment in the `setName` body: ``` name = parameter_name; ``` You cannot declare the `name` member as a reference because you *must* initialise a reference member in the object constructor (which means you couldn't set it in `setName`). Finally, your `std::string` implementation probably uses reference counted strings anyway, so no copy of the actual string data is being made in the assignment. If you're that concerned about performance, you had better be intimately familiar with the STL implementation you are using.
When is it not a good idea to pass by reference?
[ "", "c++", "memory-management", "" ]
I'm a beginner C++ programmer, and I want to pursue a career in system and driver programming. Can you suggest open-source projects I could use to improve my skills in low-level development? I am looking for a project with the following characteristics: - based on C/C++ - small, with a small amount of code - designed for UNIX-based systems Do you know of something like this?
Check the Google Summer of Code projects page! These are all open source, and many of them are based on C/C++. Each project lists ideas that are aimed at outsiders/beginners. Here is last year's page: <http://code.google.com/soc/2008/> Google has not yet decided which projects are participating this year, but this information will become available within the next couple of weeks (before the end of March 2009) along with a fresh list of ideas. If you're a student at a college or university, you may even get mentoring through the GSoC program. But even if not, they will really value any contribution you can make.
Always work on open source projects that you actually use and care about. If you don't use the project yourself, why should you do good work on it?
C++ OpenSource project for beginner programmer?
[ "", "c++", "open-source", "" ]
I have two tables in a one to many relationship. (products and qty break pricing). At the database level I cannot create a relationship between the two tables. I brought those two tables into LINQ and created the association manually. I need to do a big LINQ query and have the tables be joined. My problem is it's not using a join to get the data. LINQ is using 1 select on the main table, then 1 select for each row in that main table. ``` Dim db As New LSSStyleDataContext(connString) Dim options As New DataLoadOptions() options.LoadWith(Function(c As commerce_product) c.commerce_qty_breaks) db.LoadOptions = options Dim dbProducts = (From prods In db.commerce_products).ToList ``` Any thoughts on why this might be? Thanks! Paul EDIT: here are the two tables: ``` CREATE TABLE [dbo].[commerce_product]( [pf_id] [int] NOT NULL, [name] [varchar](500) COLLATE SQL_Latin1_General_CP1_CI_AS [description] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [restricted] [varchar](5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, CONSTRAINT [PK_commerce_product_1] PRIMARY KEY NONCLUSTERED ( [pf_id] ASC ) ON [PRIMARY] ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY] ``` And the other table: ``` CREATE TABLE [dbo].[commerce_qty_break]( [pf_id] [int] NOT NULL, [sku] [varchar](100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL, [qty] [int] NOT NULL, [list_price] [int] NOT NULL, [break_id] [int] NOT NULL, CONSTRAINT [PK_commerce_qty_break] PRIMARY KEY CLUSTERED ( [pf_id] ASC, [qty] ASC, [break_id] ASC ) ON [PRIMARY] ) ON [PRIMARY] ``` The DBML is straight forward with only the two tables. I created an association between the two tables, "commerce\_product" being the parent and "commerce\_qty\_break" being the child joined by "PF\_ID". 
I can write something like this: ``` Dim dbproducts = From prods In db.commerce_products _ Join qtys In db.commerce_qty_breaks On prods.pf_id Equals qtys.pf_id _ Select prods ``` And i see that it joins on the table in the query, but as soon as i try and spin through the "qty\_breaks" it starts executing selects to get that info. I'm totally stumped. Edit 2: Here is the DBML: ``` <?xml version="1.0" encoding="utf-8"?> <Database Name="LSScommerceDB_DevB" Class="LSSStyleDataContext" xmlns="http://schemas.microsoft.com/linqtosql/dbml/2007"> <Connection Mode="AppSettings" ConnectionString="***" SettingsObjectName="HSLPriceUpdate.My.MySettings" SettingsPropertyName="LSScommerceDB_DevBConnectionString" Provider="System.Data.SqlClient" /> <Table Name="dbo.commerce_product" Member="commerce_products"> <Type Name="commerce_product"> <Column Name="pf_id" Type="System.Int32" DbType="Int NOT NULL" IsPrimaryKey="true" CanBeNull="false" /> <Column Name="name" Type="System.String" DbType="VarChar(500)" CanBeNull="true" /> <Column Name="description" Type="System.String" DbType="Text" CanBeNull="true" UpdateCheck="Never" /> <Column Name="list_price" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="image_file" Type="System.String" DbType="VarChar(255)" CanBeNull="true" /> <Column Name="image_width" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="image_height" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="sale_price" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="sale_start" Type="System.DateTime" DbType="DateTime" CanBeNull="true" /> <Column Name="sale_end" Type="System.DateTime" DbType="DateTime" CanBeNull="true" /> <Column Name="attr_label1" Type="System.String" DbType="VarChar(100)" CanBeNull="true" /> <Column Name="attr_label2" Type="System.String" DbType="VarChar(100)" CanBeNull="true" /> <Column Name="attr_label3" Type="System.String" DbType="VarChar(100)" CanBeNull="true" /> <Column 
Name="attr_label4" Type="System.String" DbType="VarChar(100)" CanBeNull="true" /> <Column Name="attr_label5" Type="System.String" DbType="VarChar(100)" CanBeNull="true" /> <Column Name="sku" Type="System.String" DbType="VarChar(100)" CanBeNull="true" /> <Column Name="UOM" Type="System.String" DbType="VarChar(50)" CanBeNull="true" /> <Column Name="Sell_Pack" Type="System.String" DbType="VarChar(50)" CanBeNull="true" /> <Column Name="mfg_model_number" Type="System.String" DbType="VarChar(50)" CanBeNull="true" /> <Column Name="mfg_id" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="logo_file" Type="System.String" DbType="VarChar(255)" CanBeNull="true" /> <Column Name="drop_ship" Type="System.String" DbType="VarChar(50)" CanBeNull="true" /> <Column Name="lead_time" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="hazard_flag" Type="System.String" DbType="VarChar(50)" CanBeNull="true" /> <Column Name="publish_date" Type="System.DateTime" DbType="DateTime" CanBeNull="true" /> <Column Name="restricted" Type="System.String" DbType="VarChar(5)" CanBeNull="true" /> <Association Name="commerce_product_commerce_qty_break" Member="commerce_qty_breaks" ThisKey="pf_id" OtherKey="pf_id" Type="commerce_qty_break" /> </Type> </Table> <Table Name="dbo.commerce_qty_break" Member="commerce_qty_breaks"> <Type Name="commerce_qty_break"> <Column Name="pf_id" Type="System.Int32" DbType="Int NOT NULL" IsPrimaryKey="true" CanBeNull="false" /> <Column Name="sku" Type="System.String" DbType="VarChar(100) NOT NULL" CanBeNull="false" /> <Column Name="qty" Type="System.Int32" DbType="Int NOT NULL" IsPrimaryKey="true" CanBeNull="false" /> <Column Name="list_price" Type="System.Int32" DbType="Int NOT NULL" CanBeNull="false" /> <Column Name="sale_price" Type="System.Int32" DbType="Int" CanBeNull="true" /> <Column Name="sale_start" Type="System.DateTime" DbType="DateTime NOT NULL" CanBeNull="false" /> <Column Name="sale_end" Type="System.DateTime" 
DbType="DateTime" CanBeNull="true" /> <Column Name="break_id" Type="System.Int32" DbType="Int NOT NULL" IsPrimaryKey="true" CanBeNull="false" /> <Association Name="commerce_product_commerce_qty_break" Member="commerce_product" ThisKey="pf_id" OtherKey="pf_id" Type="commerce_product" IsForeignKey="true" /> </Type> </Table> </Database> ``` EDIT 3: Apparently this is only an issue in SQL 2000. SQL 2008 works fine. I have other tables that do eager loading in SQL 2000 and i can't figure out what the difference is between these two tables.
I ended up using the entity framework and all is well.
I created a VB console app and created the schema as you have it here. Also - the relationship is PK -> PK so does this mean it's supposed to be a one-to-one relationship? I populated the tables with a row each (see below) and ran the code you've listed above. I ran SQL Profiler and it only queried once: ``` SELECT [t0].[pf_id], [t0].[name], [t0].[description], [t0].[restricted], [t1].[pf_id] AS [pf_id2], [t1].[sku], [t1].[qty], [t1].[list_price], [t1].[break_id], ( SELECT COUNT(*) FROM [dbo].[commerce_qty_break] AS [t2] WHERE [t2].[pf_id] = [t0].[pf_id] ) AS [value] FROM [dbo].[commerce_product] AS [t0] LEFT OUTER JOIN [dbo].[commerce_qty_break] AS [t1] ON [t1].[pf_id] = [t0].[pf_id] ORDER BY [t0].[pf_id], [t1].[qty], [t1].[break_id] ``` I wanted to make sure that the Data Options was forcing a deep load, so I added some extra code - here's the full code I used (and only the single query as above was traced): ``` Dim options As New DataLoadOptions() options.LoadWith(Function(c As commerce_product) c.commerce_qty_breaks) db.LoadOptions = options Dim dbProducts = (From prods In db.commerce_products).ToList Dim dbProduct = dbProducts.First().commerce_qty_breaks Dim x = dbProduct.First().list_price ``` Here's the test data: ``` INSERT INTO [Test].[dbo].[commerce_product] ([pf_id],[name],[description],[restricted]) VALUES (1,'Test','Test','Test') GO INSERT INTO [Test].[dbo].[commerce_qty_break] ([pf_id],[sku],[qty],[list_price],[break_id]) VALUES (1,'22',1,1,1) GO ```
LINQ to SQL: inner join with manual association on SQL 2000
[ "", "asp.net", "sql", "vb.net", "linq", "linq-to-sql", "" ]
I'm sorry if this question has already been answered but I couldn't find it. I am trying to open a C# form when a function in a C++ program is called (the main program is in C++/CLI, the form is in C#; it is an empty form just to see how it works). I am using Visual Studio 2005 and I have both projects in my solution. The C# project is just a form and I have configured it as a class library. However, when I go to the C++ program's function I type this in the beginning: ``` int _stdcall Init(void) { ... FormProject::Form1 form1; form1 = new FormProject::Form1::Form1(); form1.something(); ... } ``` I have little experience in C++; I have tried different combinations but without luck. The C++ project has been made compatible with CLI. I have already included the needed directives: ``` #using <System.Windows.Forms.dll> #using <System.dll> ``` I am aware there are some big mistakes but I only need this form running, no more. Thank you very much. Edit: I have already added the references.
If I understand you, I think you need to instantiate the C# object using C++/CLI's syntax enhancements. For example, the code you posted should be: ``` FormProject::Form1^ form1; form1 = gcnew FormProject::Form1::Form1(); // or FormProject::Form1() form1->something(); ``` I suggest you create a new WinForms project in managed C++, and just look at the generated code for the correct syntax.
The C++/CLI project will need to add a reference to the C# project before you can use types in your C# library.
Using C++/CLI, how can I instantiate a WinForms form defined in a C# library?
[ "", "c#", "c++-cli", "" ]
pyPdf is a great library to split, merge PDF files. I'm using it to split pdf documents into 1 page documents. pyPdf is pure python and spends quite a lot of time in the \_sweepIndirectReferences() method of the PdfFileWriter object when saving the extracted page. I need something with better performance. I've tried using multi-threading but since most of the time is spent in python code there was no speed gain because of the GIL (it actually ran slower). Is there any library written in c that provides the same functionality? or does anyone have a good idea on how to improve performance (other than spawning a new process for each pdf file that I want to split) Thank you in advance. Follow up. Links to a couple of command line solutions, that can prove sometimes faster than pyPDF: * <http://multivalent.sourceforge.net/Tools/pdf/Split.html> * <http://www.linuxsolutions.fr/how-to-extract-pages-from-a-pdf/> I modified pyPDF PdfWriter class to keep track of how much time has been spent on the \_sweepIndirectReferences() method. If it has been too long (right now I use the magical value of 3 seconds) then I revert to using ghostscript by making a call to it from python. Thanks for all your answers. (codelogic's xpdf reference is the one that made me look for a different approach)
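For what it's worth, the time-budget fallback described in the last paragraph can be sketched in plain Python. Everything here is generic scaffolding — `TookTooLong`, the budget wrapper, and the fallback hook are illustrative names, not pyPdf or Ghostscript APIs:

```python
import time

TIME_BUDGET = 3.0  # the "magical value of 3 seconds" mentioned above


class TookTooLong(Exception):
    """Raised once the wrapped function has used up its time budget."""


def make_budgeted(fn, budget=TIME_BUDGET, clock=time.time):
    """Wrap fn so repeated calls fail fast after their total time exceeds budget.

    This mirrors patching _sweepIndirectReferences() to keep track of how
    much time it has spent so far.
    """
    spent = [0.0]  # mutable cell so the closure can update it

    def wrapper(*args, **kwargs):
        if spent[0] > budget:
            raise TookTooLong()
        start = clock()
        try:
            return fn(*args, **kwargs)
        finally:
            spent[0] += clock() - start

    return wrapper


def split_page(fast_fn, fallback_fn):
    """Try the pure-Python path; revert to e.g. a ghostscript subprocess call."""
    try:
        return fast_fn()
    except TookTooLong:
        return fallback_fn()
```

With a negative budget the wrapper fails immediately, which makes the fallback path easy to exercise in a test.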
[mbtPdfAsm](http://thierry.schmit.free.fr/spip/spip.php?article15) is a fast, open source command line tool for PDF processing. [Xpdf](http://www.foolabs.com/xpdf/download.html) is also worth mentioning since it's GPL and written in C++. The source code is well modularized and allows for writing command line tools.
Does it have to be python? My pure-Perl library [CAM::PDF](http://search.cpan.org/dist/CAM-PDF/) is pretty fast at appending and deleting PDF document pages. It saves the sweeping for the very end, where possible.
Fast PDF splitter library
[ "", "python", "c", "pdf", "pypdf", "" ]
I should start by saying that I am using ASP.NET with C# in a .NET 2.0 environment. In particular, I am trying to generate a CSV download when the user clicks on a link button. The link to my postback is inside an UpdatePanel. In the past (before AJAX) to return non-HTML content I would use code such as the following: ``` string filename = e.CommandArgument.ToString();//somefile.csv string fileContents = SomeClass.GetFile(filename); Response.AddHeader("Content-disposition", string.Format("attachment; filename={0}", filename)); Response.Write(fileContents); ``` But since an UpdatePanel postback does not do a full refresh of the browser, this technique does not work. Does someone have a better approach for this kind of situation? One constraint I have is that I am stuck with .NET 2.0 for this part of the project and can't switch to 3.5 to solve this problem. p.s. I need to generate the content with a dynamic filename as well
I would avoid generating a postback for this at all. Instead, you want to use a completely separate page, preferably even just a handler (\*.ashx). Then you can use a normal hyperlink and your old attachment code will work just fine.
Create a server-side container (even an HTML DIV with runat="server") inside the UpdatePanel, and set the .InnerHtml or .InnerText property to the fileContents variable when the UpdatePanel is refreshed.
Best way to return dynamic non-html content from a page
[ "", "c#", "asp.net", "" ]
We're starting a project written in Java in a company which works purely on MS technologies. Microsoft Team System is used as the source control tool. The question is whether we should try to integrate Eclipse with MTS (which makes sense at the top level, as there would still be a single repository for the company), or whether we should set up another source control tool - most likely Subversion (which makes sense from the developers' perspective). Has anyone tried to marry Team System with Java projects? Does it make any sense?
You can use Team Foundation Server with Eclipse via Teamprise. Whether it makes sense or not depends on the environment. If yours is one of many projects, just one that happens to be Java, it does not make sense to reinvent the wheel and implement SVN, unless the business is making a clean delineation between Java projects and MS projects ... and realizing they have two places to get metrics for projects, which can be nasty. I would vote for TFS, personally, as there is already an investment in TFS.
I work on a project that uses Eclipse and Teamprise to use TFS with Java. While the experience isn't perfect, it does work fairly well. Since your company already has an investment in TFS, I would stick with that if you and your team are comfortable with TFS as a version control system. However, if you are comfortable with Subversion and not with TFS, then I say just make the switch to Subversion and you won't look back.
Microsoft Team System and Java
[ "", "java", "tfs", "" ]
I want to push data to the browser over HTTP without killing my Django/Python application. I decided to use a comet server to proxy requests between my application and the client (though I still haven't really figured it out properly). I've looked into the following engines: Orbited, cometd, ejabberd, and Jetty. Has anyone had any experience working with these servers and deploying them? Any insight and links regarding the topics would be great. Thank you.
I would recommend looking into Twisted, their twisted.web server, and the comet work done on top of it at Divmod. They can handle far more concurrent connections than traditional thread or process based servers, which is exactly what you need for something like this. And, yes, I've architected systems using Twisted for COMET stuff, while using other things for the more front-facing web applications beside it. It works out well with each part doing what it does best.
I need to do something very similar. I found this, but haven't had the time to look at it properly yet: > django\_evserver is simple http server > for Django applications. It's based on > libevent library. The main advantage > of django\_evserver is that it provides > methods of preempting django views. > You can render a site in several > chunks, each of them can be handled by > different django view. > > Using this idea it's possible to use > django\_evserver as comet server for > django applications. <http://code.google.com/p/django-evserver/>
Choosing and deploying a comet server
[ "", "python", "django", "comet", "daemon", "" ]
Could someone help me understand how hard drive seeking works? I have a small binary database file for which read performance is absolutely essential. If I need to skip a few bytes in the file, is it quicker to use seek() or to read() and then discard the unwanted data? If the average seek time of a hard drive is 10ms and the read speed is 300MB/s, I calculated that it's quicker to read() than seek() for anything smaller than 3MB. Is that true? Is there an overhead when performing a new seek which reading an existing stream doesn't have? Which do you think would be a more suitable file structure for an index? ``` Entry1:Value:PointerIntoToData Entry2:Value:PointerIntoToData Entry3:Value:PointerIntoToData Data, Data, Data Or Entry1:Value:Data Entry2:Value:Data Entry3:Value:Data ``` When reading an entry, if the value is not correct it will be ignored. So when streaming the file, is it quicker to: 1. use seek() to skip over an entry that is not required 2. read an entry that is not needed and then discard the data 3. or, using the first structure, seek() into a data repository at the end when an entry is required. Entry is 4 bytes, value is 8 bytes, and data is 12KB. Cheers
All the *seek* system call does is change the position in the file where the next read will happen. It does not move the drive head. Drive heads move when data is read or written, and you don't have direct control over what the OS will do next. Reading lots of data you aren't going to need has an impact because all read data needs space in OS buffers and causes older data to be discarded. So using seek over big files will mess with the filesystem cache less. --- Everything I write below assumes you cannot fit the whole database in memory. If you can, just do that. Read everything and try to append new and changed data at the end of the file. Don't worry about wasted space, just do some compacting once in a while. --- If your database is too big: Data is read from and written to the physical drive in blocks (or pages). Similarly, the basic unit of disk IO in your OS is the page. If the OS caches data from disk, it's also in whole pages. So thinking about whether you need to move forward a few bytes using seek or read makes little sense. If you want to make it fast, you need to take into account how disk IO really works. First, already mentioned by nobugz: locality of reference. If the data you use in each operation is located close together in a file, your OS will need to read or write fewer pages. On the other hand, if you spread your data out, many pages will need to be read or written at once, which will always be slow. As to the data structure for the index: indexes are typically organized as [B-trees](http://en.wikipedia.org/wiki/B-tree). It's a data structure made especially for efficient searching of large quantities of data stored in external memory (disk) with paged reads and writes. And both strategies for organizing data are used in practice. For example, MS SQL Server by default stores data the first way: data is stored separately and indices only contain data from indexed columns and the physical addresses of data rows in files. But if you define a clustered index then all data will be stored within this index. All other indexes will point to the data via the clustered index key instead of a physical address. The first way is simpler, but the other may be much more effective if you often scan ranges of data based on the clustered index.
How "absolutely essential" is seek access? Have you tested your application with a non-optimal solution yet? During that testing, did you benchmark to determine where the **real** bottlenecks are? If you haven't, you'll be surprised by the results. Next, try different methods and compare the running times. Test under different system loads (ie, when the system is idle except for your application, and when it is busy). Consider that your optimizations based on your current hard drive may become incorrect when a new, faster hard drive has different internal optimizations that throw your work out the window.
Database Structure & Hard drive seek time confusion
[ "", "c++", "hardware", "hard-drive", "" ]
I am looking for a library to generate SVG diagrams in Python (I fetch data from an SQL database). I have found [python-gd](http://newcenturycomputers.net/projects/gdmodule.html), but it doesn't have much documentation and its last update was in 2005, so I wonder if there are any other libraries that are good for this purpose. I am mostly thinking about simple line graphs, something like [this](https://edynblog.files.wordpress.com/2007/07/line-graph-days-on-market.jpg): ![example line graph](https://edynblog.files.wordpress.com/2007/07/line-graph-days-on-market.jpg "days on market")
As you're looking for simple line graphs, [CairoPlot](http://linil.wordpress.com/2008/09/16/cairoplot-11/) will probably fit your needs, as it can generate SVG output files out of the box. Take a look at [this](http://linil.files.wordpress.com/2008/06/cairoplot_dotlineplot.png). ![CairoPlot - DotLinePlot](https://linil.files.wordpress.com/2008/06/cairoplot_dotlineplot.png?w=450h=300) This example image shows only a few of its capabilities. Using the trunk version available at [launchpad](http://launchpad.net/cairoplot) you'll be able to add a legend box and axis titles. Besides that, using the trunk version, it's possible to generate: * DotLine charts (the ones I believe you need) * Scatter charts * Pie/Donut charts * Horizontal/Vertical Bar charts * Gantt charts
Try using [matplotlib](http://matplotlib.sourceforge.net/). You can configure it with a SVG [backend](http://matplotlib.sourceforge.net/faq/installing_faq.html#what-is-a-backend).
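If you'd rather not add a dependency at all, note that a simple line graph is just an SVG `<polyline>`. A hand-rolled sketch — the function name and scaling choices are mine, not from any library:

```python
def svg_line_chart(points, width=300, height=200, pad=10):
    """Emit a minimal SVG line graph for a list of (x, y) points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]

    def scale(v, lo, hi, size):
        # Map v from [lo, hi] onto [0, size]; center it if the range is empty.
        if hi == lo:
            return size / 2.0
        return (v - lo) * size / float(hi - lo)

    coords = " ".join(
        "%.1f,%.1f" % (
            pad + scale(x, min(xs), max(xs), width - 2 * pad),
            # SVG's y axis grows downward, so flip the value.
            height - pad - scale(y, min(ys), max(ys), height - 2 * pad),
        )
        for x, y in points
    )
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
        '<polyline fill="none" stroke="black" points="%s"/></svg>'
        % (width, height, coords)
    )
```

Write the returned string to a `.svg` file and any browser will render it; for anything fancier (axes, labels, legends), a real library is the better choice.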
svg diagrams using python
[ "", "python", "svg", "diagram", "" ]
There exist [static analysis tools for Python](https://stackoverflow.com/questions/35470/are-there-any-static-analysis-tools-for-python), but compile time checks tend to be diametrically opposed to the [run-time binding philosophy](http://python-history.blogspot.com/2009/01/introduction-and-overview.html) that Python embraces. It's *possible* to wrap the standard Python interpreter with a static analysis tool to enforce some "[use strict](http://perldoc.perl.org/strict.html)"-like constraints, but we don't see any widespread adoption of such a thing. Is there something about Python that makes "use strict" behavior unnecessary or especially undesirable? Alternatively, is the "use strict" behavior unnecessary in Perl, despite its widespread adoption? Note: By "necessary" I mean "practically necessary", not strictly necessary. Obviously you *can* write Perl without "use strict," but (from what I've seen) most Perl programmers *do* use it. Note: The Python interpreter-wrapper need not *require* "use strict"-like constraints -- you could use a pseudo-pragma similar to "use strict" that would be ignored by the normal interpreter. I'm not talking about adding a language-level feature. --- Update: Explaining what "use strict" does in Perl per comments. (Link to official docs is in the first paragraph.) The "use strict" directive has three distinct components, only two of which are really interesting: * use strict vars: Statically checks lexically scoped variable usage in your program. (Keep in mind that, in Python, there is basically only `global` scope and `local` scope). Many Python linters check for this sort of thing. Since it's the only static analysis that they can do, the linters assume you use straightforward lexical scoping and warn you about things that appear wrong in that sense until you tell them to shut up; i.e. ``` FOO = 12 foo += 3 ``` If you're not doing anything fancy with your namespaces this can be useful to check for typos. 
* use strict refs: Prevents symbolic namespace dereferencing. Python's closest analog is using `locals()` and `globals()` to do symbolic binding and identifier lookup. * use strict subs: No real analog in Python.
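For reference, here is what CPython actually does with the `FOO`/`foo` typo shown above — nothing is caught until the statement runs:

```python
FOO = 12
try:
    foo += 3  # typo: the lowercase name was never bound
except NameError as err:
    message = str(err)

print(message)  # e.g. "name 'foo' is not defined"
```

A linter such as pylint flags this statically, which is exactly the "use strict vars"-like check being discussed.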
"the run-time binding philosophy that Python embraces... makes "use strict" behavior unnecessary [and] especially undesirable" Pretty good summary. Thanks. That is essentially it. Static analysis tools don't help Python enough to be worthwhile. --- **Edit** "I'm asking for us to introspect on *why* we don't need it and, relatedly, why Perl programmers think they do need it." The reason why is precisely the reason you already gave. We don't need it because it doesn't help. Clearly, you don't like that answer, but there's not much more to be said. Compile-time or pre-compile time checking simply does not help. However, since you took the time to asked the question again, I'll provide more evidence for the answer you already gave. I write Java almost as much as I write Python. Java's static type checking does not prevent any logic problems; it doesn't facilitate meeting performance requirements; it doesn't help meet the use cases. It doesn't even reduce the volume of unit testing. While static type checking does spot the occasional misuse of a method, you find this out just as quickly in Python. In Python you find it at unit test time because it won't run. Note: I'm not saying wrong types are found with lots of clever unit tests, I'm saying most wrong type issues are found through unhandled exceptions where the thing simply won't run far enough to get to test assertions. The reason why is Pythonistas don't waste time on static checking is simple. We don't need it. It doesn't offer any value. It's a level of analysis that has no economic benefit. It doesn't make me any more able to solve the real problems that real people are having with their real data. Look at the most popular SO Python questions that are language (not problem domain or library) related. [Is there any difference between "foo is None" and "foo == None"?](https://stackoverflow.com/questions/26595/is-there-any-difference-between-foo-is-none-and-foo-none) -- `==` vs. `is`. 
No static checking can help with this. Also, see [Is there a difference between `==` and `is` in Python?](https://stackoverflow.com/questions/132988/is-there-a-difference-between-and-is-in-python) [What does \*\* (double star) and \* (star) do for parameters?](https://stackoverflow.com/questions/36901/what-does-and-do-for-python-parameters) -- `*x` gives a list, `**x` gives a dictionary. If you don't know this, your program dies immediately when you try to do something inappropriate for those types. "What if your program never does anything 'inappropriate'". Then your program works. 'nuff said. [How can I represent an 'Enum' in Python?](https://stackoverflow.com/questions/36932/whats-the-best-way-to-implement-an-enum-in-python) -- this is a plea for some kind of limited-domain type. A class with class-level values pretty much does that job. "What if someone changes the assignment". Easy to build. Override `__set__` to raise an exception. Yes static checking might spot this. No, it doesn't happen in practice that someone gets confused about an enum constant and a variable; and when they do, it's easy to spot at run time. "What if the logic never gets executed". Well, that's poor design and poor unit testing. Throwing a compiler error and putting in wrong logic that's never tested is no better than what happens in a dynamic language when it's never tested. [Generator Expressions vs. List Comprehension](https://stackoverflow.com/questions/47789/generator-expressions-vs-list-comprehension) -- static checking doesn't help resolve this question. [Why does 1+++2 = 3?](https://stackoverflow.com/questions/470139/why-does-12-3-in-python) -- static checking wouldn't spot this. 1+++2 in C is perfectly legal in spite of all the compiler checking. It's not the same thing in Python as it is in C, but just as legal. And just as confusing. 
[List of lists changes reflected across sublists unexpectedly](https://stackoverflow.com/questions/240178/unexpected-feature-in-a-python-list-of-lists) -- This is entirely conceptual. Static checking can't help solve this problem either. The Java equivalent would also compile and behave badly.
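To make the `is`/`==` and list-of-lists items above concrete, both surprises only show up at run time (a minimal sketch):

```python
# "is" compares object identity, "==" compares values; no static check
# distinguishes the two for you -- the difference only shows at run time.
a = [1, 2]
b = [1, 2]
print(a == b)   # equal values
print(a is b)   # two distinct objects

# The list-of-lists surprise: "*" repeats the reference, not the list,
# so every "row" below is the very same inner list.
grid = [[0] * 2] * 3
grid[0][0] = 9
print(grid)
```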
Well, I'm not much of a python programmer, but I'd say that the answer is 'YES'. Any dynamic language that lets you create a variable with any name at any time, could use a 'strict' pragma. Strict vars (one of the options for strict in Perl, 'use strict' turns them all on at once) in Perl require that all variables are declared before they are used. Which means that this code: ``` my $strict_is_good = 'foo'; $strict_iS_good .= 'COMPILE TIME FATAL ERROR'; ``` Generates a fatal error at compile time. I don't know of a way to get Python to reject this code at compile time: ``` strict_is_good = 'foo'; strict_iS_good += 'RUN TIME FATAL ERROR'; ``` You will get a run-time exception that `strict_iS_good` is undefined. But only when the code is executed. If your test suite does not have 100% coverage, you can easily ship this bug. Any time I work in a language that does not have this behavior (PHP for example), I get nervous. I am not a perfect typist. A simple, but hard to spot, typo can cause your code to fail in ways that may be hard to track down. So, to reiterate, **YES** Python could use a 'strict' pragma to turn on compile time checks for things that can be checked at compile time. I can't think of any other checks to add, but a better Python programmer probably could think of some. **Note** I focus on the pragmatic effect of strict vars in Perl, and am glossing over some of the details. If you really want to know all the details see [the perldoc for strict](http://perldoc.perl.org/strict.html). **Update: Responses to some comments** *Jason Baker* : Static checkers like pylint are useful. But they represent an extra step that can be and often is skipped. Building some basic checks into the compiler guarantees that these checks are performed consistently. If these checks are controllable by a pragma, even the objection relating to the cost of the checks becomes moot. *popcnt* : I know that python will generate a run time exception. I said as much. 
I advocate compile time checking where possible. Please reread the post. *mpeters* : No computer analysis of code can find all errors--this amounts to solving the halting problem. Worse, to find typos in assignments, your compiler would need to know your *intentions* and find places where your intentions differ from your code. This is pretty clearly impossible. However this does not mean that no checking should be done. If there are classes of problems that are easy to detect, then it makes sense to trap them. I'm not familiar enough with pylint and pychecker to say what classes of errors they will catch. As I said I am very inexperienced with python. These static analysis programs are useful. However, I believe that unless they duplicate the capabilities of the compiler, the compiler will always be in a position to "know" more about the program than any static checker could. It seems wasteful not to take advantage of this to reduce errors where possible. **Update 2:** cdleary - In theory, I agree with you, a static analyzer can do any validation that the compiler can. And in the case of Python, it should be enough. However, if your compiler is complex enough (especially if you have lots of pragmas that change how compilation occurs, or if like Perl, you can run code at compile time), then the static analyzer must approach the complexity of the compiler/interpreter to do the analysis. Heh, all this talk of complex compilers and running code at compile time shows my Perl background. My understanding is that Python does not have pragmas and can not run arbitrary code at compile time. So, unless I am wrong or these features are added, a relatively simple parser in the static analyzer should suffice. It certainly would be helpful to force these checks at every execution. Of course, the way I'd do this is with a pragma. 
Once you add pragmas to the mix, you have started down a slippery slope and the complexity of your analyzer must grow in proportion to the power and flexibility you provide in your pragmas. If you are not careful, you can wind up like Perl, and then "only python can parse Python," a future I wouldn't want to see. Maybe a command line switch would be a better way to add forced static analysis ;) (In no way do I intend to impugn Python's capabilities when I say that it can't futz with compile time behavior like Perl can. I have a hunch that this is a carefully considered design decision, and I can see the wisdom in it. Perl's extreme flexibility at compile time is, IMHO, a great strength and a terrible weakness of the language; I see the wisdom in this approach as well.)
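To make the contrast concrete, here is the Python side of the example above: the module compiles without complaint, and the typo only surfaces as a `NameError` when the function actually runs (a minimal sketch; the function name is ours for illustration):

```python
# This function compiles fine; Python performs no declaration checking.
def misspelled():
    strict_is_good = "foo"
    # Typo ("iS" instead of "is") -- undetected until this line executes.
    return strict_iS_good + "RUN TIME FATAL ERROR"

try:
    misspelled()
except NameError as exc:
    print("caught only at run time:", exc)
```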
Is there a need for a "use strict" Python compiler?
[ "", "python", "perl", "compiler-construction", "static-analysis", "use-strict", "" ]
This is my mail sending code. I was getting a "There is an invalid character in the mail header" error. When I changed my computer name to a shorter one, the problem went away. But in my domain, computer names look like "04500-ab04545.xxxdomain.gov.tr", so I need to find another solution for this problem. I can't hard-code a static computer name while sending mail from C# code. ``` MailMessage msg = new MailMessage(); msg.Body = "axxxxxx"; msg.To.Add(new MailAddress("xxxx@xxxx.domain")); msg.From = new MailAddress("xxxx@xxxx.domain","blab blalb"); msg.Subject = "Subject xxx"; SmtpClient server = new SmtpClient("xxxxxxxx",25); server.Credentials = new NetworkCredential("xxxxx", "xxxxxxx"); server.Send(msg); ```
I suspect this might be an Encoding related issue. Try using the `new MailAddress("xxxx@xxxx.domain","blab blalb", Encoding.Default)` constructor. Else try `Encoding.Unicode`. Update: After some digging, this exception is thrown from: ``` void System.Net.BufferBuilder.Append(string,int,int); ``` This will happen if you have any characters above \xff in the email address. This is not suppose to happen, as the name should be encoded already, but something else is going funny I guess.
What headers is the message trying to send with? You can easily dump them with this MSDN snippet: ``` string[] keys = message.Headers.AllKeys; Console.WriteLine("Headers"); foreach (string s in keys) { Console.WriteLine("{0}:", s); Console.WriteLine(" {0}", message.Headers[s]); } ```
Problem while sending mail by System.Net.Mail C#
[ "", "c#", "email", "system.net.mail", "" ]
What would be the practical side of the ability to define a class within an interface in Java: ``` interface IFoo { class Bar { void foobar () { System.out.println("foobaring..."); } } } ```
I can think of another use besides those linked by Eric P: defining a default/no-op implementation of the interface. ./alex ``` interface IEmployee { void workHard (); void procrastinate (); class DefaultEmployee implements IEmployee { public void workHard () { procrastinate(); } public void procrastinate () {} } } ``` Yet another sample — implementation of [Null Object Pattern](http://en.wikipedia.org/wiki/Null_Object_pattern): ``` interface IFoo { void doFoo(); IFoo NULL_FOO = new NullFoo(); final class NullFoo implements IFoo { public void doFoo () {} private NullFoo () {} } } ... IFoo foo = IFoo.NULL_FOO; ... bar.addFooListener (foo); ... ```
I think [this page](http://www.sap-img.com/java/why-we-define-a-class-inside-an-interface.htm) explains one example pretty well. You would use it to tightly bind a certain type to an interface. Shamelessly ripped off from the above link: ``` interface employee{ class Role{ public String rolename; public int roleId; } Role getRole(); // other methods } ``` In the above interface you are binding the Role type strongly to the employee interface(employee.Role).
Practical side of the ability to define a class within an interface in Java?
[ "", "java", "interface", "inner-classes", "" ]
I am looking for a tool to simplify analysing a linker map file for a large C++ project (VC6). During maintenance, the binaries grow steadily and I want to figure out where the growth comes from. I suspect some overzealous template expansion in a library shared between different DLLs, but just browsing the map file doesn't give good clues. Any suggestions?
[This](http://www.sikorskiy.net/prj/amap/index.html) is a wonderful compiler generated map file analysis/explorer/viewer tool. Check if you can explore gcc generated map file. *amap : A tool to analyze .MAP files produced by 32-bit Visual Studio compiler and report the amount of memory being used by data and code. This app can also read and analyze MAP files produced by the Xbox360, Wii, and PS3 compilers.*
The map file should have the size of each section, you can write a quick tool to sort symbols by this size. There's also a command line tool that comes with MSVC (undname.exe) which you can use to demangle the symbols. Once you have the symbols sorted by size, you can generate this weekly or daily as you like and compare how the size of each symbol has changed over time. The map file alone from any single build may not tell much, but a historical report of compiled map files can tell you quite a bit.
MAP file analysis - where does my code size come from?
[ "", "c++", "linker", "code-size", "" ]
I'm sorry if this is an obvious question but neither Google or a search here led me to an answer. Is there a way to remove an array entirely? I want the *opposite* of `int[] array = new int[5]`
Say you call: ``` void Foo(){ int[] a = new int[5]; } ``` In C# there is no way to undefine the variable `a`. That means `a` will be defined in `Foo` even if you set `a` to null. However, at the end of `Foo` `a` will fall out of scope. That means no code can reference it, and the garbage collector will take care of freeing the memory for you the next time it runs, which might not be for a long time.
You just have to let it go out of scope and wait for the GC to find it, which might not be immediate (in fact, it almost certainly won't be). If you have a field on a long-lived object (that is going to stay in scope), then you can set it to null, which can help. You can influence the GC to collect sooner (not just your object: everything eligible), but you should rarely **if ever** do this. I use it only in test rigs; but: ``` GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced); // DON'T DO THIS!!! ``` for more on `GC.Collect`: * [What’s so wrong about using GC.Collect()?](https://stackoverflow.com/questions/118633/whats-so-wrong-about-using-gc-collect) * [When is it acceptable to call GC.Collect?](https://stackoverflow.com/questions/478167/when-is-it-acceptable-to-call-gc-collect)
How to delete an array in c#?
[ "", "c#", "arrays", "" ]
Is there some kind of subtle difference between those: ``` void a1(float &b) { b=1; }; a1(b); ``` and ``` void a1(float *b) { (*b)=1; }; a1(&b); ``` ? They both do the same (or so it seems from main() ), but the first one is obviously shorter, however most of the code I see uses second notation. Is there a difference? Maybe in case it's some object instead of float?
Both do the same, but one uses references and one uses pointers. [See my answer here for a comprehensive list of all the differences](https://stackoverflow.com/questions/57483/difference-between-pointer-variable-and-reference-variable-in-c/57492#57492).
Yes. The `*` notation says that what's being passed on the stack is a pointer, i.e., the address of something. The `&` says it's a reference. The effect is similar but not identical: Let's take two cases: ``` void examP(int* ip); void examR(int& i); int i; ``` If I call `examP`, I write ``` examP(&i); ``` which takes the address of the item and passes it on the stack. If I call `examR`, ``` examR(i); ``` I don't need it; now the compiler "somehow" passes a reference -- which practically means it gets and passes the address of `i`. On the code side, then ``` void examP(int* ip){ *ip += 1; } ``` I have to make sure to dereference the pointer. `ip += 1` does something very different. ``` void examR(int& i){ i += 1; } ``` always updates the value of `i`. For more to think about, read up on "call by reference" versus "call by value". The `&` notation gives C++ call by reference.
C++: difference between ampersand "&" and asterisk "*" in function/method declaration?
[ "", "c++", "pointers", "reference", "" ]
Is there a way to debug a regular expression in Python? And I'm not referring to the process of trying and trying till they work :) Here is how regexes can be debugged in Perl: ``` use re 'debug'; my $str = "GET http://some-site.com HTTP/1.1"; if($str =~/get\s+(\S+)/i) { print "MATCH:$1\n"; } ``` The code above produces the following output on my computer when ran: ``` Compiling REx "get\s+(\S+)" Final program: 1: EXACTF (3) 3: PLUS (5) 4: SPACE (0) 5: OPEN1 (7) 7: PLUS (9) 8: NSPACE (0) 9: CLOSE1 (11) 11: END (0) stclass EXACTF minlen 5 Matching REx "get\s+(\S+)" against "GET http://some-site.com HTTP/1.1" Matching stclass EXACTF against "GET http://some-site.com HTTP/1.1" (33 chars) 0 | 1:EXACTF (3) 3 | 3:PLUS(5) SPACE can match 1 times out of 2147483647... 4 | 5: OPEN1(7) 4 | 7: PLUS(9) NSPACE can match 20 times out of 2147483647... 24 | 9: CLOSE1(11) 24 | 11: END(0) Match successful! MATCH:http://some-site.com Freeing REx: "get\s+(\S+)" ```
``` >>> p = re.compile('.*', re.DEBUG) max_repeat 0 65535 any None >>> ``` [regex '|' operator vs separate runs for each sub-expression](https://stackoverflow.com/questions/580993/regex-operator-vs-separate-runs-for-each-sub-expression/582227#582227)
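Applied to the pattern from the question, the same flag works like this (a sketch; the exact debug dump that `re.DEBUG` prints varies between Python versions):

```python
import re

# re.DEBUG makes the regex compiler print the parsed pattern structure
# to stdout, roughly what Perl's "use re 'debug'" shows at compile time.
pattern = re.compile(r"get\s+(\S+)", re.IGNORECASE | re.DEBUG)

match = pattern.search("GET http://some-site.com HTTP/1.1")
if match:
    print("MATCH:", match.group(1))
```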
<https://www.debuggex.com> is also pretty good. It's an online Python (and a couple more languages) debugger, which has a pretty neat visualization of what does and what doesn't match. A pretty good resource if you need to draft a regexp quickly.
How can I debug a regular expression in Python?
[ "", "python", "regex", "debugging", "" ]
I would like to create a unit test using a mock web server. Is there a web server written in Java which can be easily started and stopped from a JUnit test case?
Try [Simple](http://www.simpleframework.org) ([Maven](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.simpleframework%22%20a%3A%22simple%22)); it's very easy to embed in a unit test. Take the RoundTripTest and examples such as the [PostTest](https://svn.code.sf.net/p/simpleweb/svn/trunk/simple-test/src/test/java/org/simpleframework/http/validate/test/PostTest.java) written with Simple. It provides an example of how to embed the server into your test case. Also, Simple is much lighter and faster than Jetty, with no dependencies, so you won't have to add several jar files onto your classpath. Nor will you have to be concerned with `WEB-INF/web.xml` or any other artifacts.
[Wire Mock](http://wiremock.org/index.html) seems to offer a solid set of stubs and mocks for testing external web services. ``` @Rule public WireMockRule wireMockRule = new WireMockRule(8089); @Test public void exactUrlOnly() { stubFor(get(urlEqualTo("/some/thing")) .willReturn(aResponse() .withHeader("Content-Type", "text/plain") .withBody("Hello world!"))); assertThat(testClient.get("/some/thing").statusCode(), is(200)); assertThat(testClient.get("/some/thing/else").statusCode(), is(404)); } ``` It can integrate with spock as well. Example found [here](https://allegro.tech/2015/07/testing-server-faults-with-Wiremock.html).
How to mock a web server for unit testing in Java?
[ "", "java", "junit", "" ]
Take the below code: ``` private void anEvent(object sender, EventArgs e) { //some code } ``` --- What is the difference between the following ? ``` [object].[event] += anEvent; //and [object].[event] += new EventHandler(anEvent); ``` **[UPDATE]** Apparently, there is no difference between the two...the former is just syntactic sugar of the latter.
There is no difference. In your first example, the compiler will automatically infer the delegate you would like to instantiate. In the second example, you explicitly define the delegate. Delegate inference was added in C# 2.0. So for C# 1.0 projects, second example was your only option. For 2.0 projects, the first example using inference is what I would prefer to use and see in the codebase - since it is more concise.
``` [object].[event] += anEvent; ``` is just syntactic sugar for - ``` [object].[event] += new EventHandler(anEvent); ```
C#: Difference between ' += anEvent' and ' += new EventHandler(anEvent)'
[ "", "c#", "delegates", "event-handling", "" ]
Is it possible to install the latest JDK on Mac OS 10.5? What are the best options, considering that the Apple-supplied version is 1.5? **Update:** I am trying to install JDeveloper 11, which requires JDK 6.
If you have an Intel Mac with a Core 2 Duo, go to Applications/Java and run Java Preferences.app You'll be able to select "Java SE 6" as your default java version (version 5 is still the shipping default on the OS for compatibility, I guess).
Depending on your hardware the [latest version](http://developer.apple.com/java/) Apple supplies actually is Java 6, but it is not activated as the default version to use. Apple supplies Java 6 only for 64-bit Intel Macs (i.e. not for original Core Duo Macs). For older Macs, you could try the [OpenJDK port of Java 6 for Mac](http://landonf.bikemonkey.org/static/soylatte/) (SoyLatte).
Installing Java 6 on Mac OS
[ "", "java", "macos", "java-6", "" ]
asp.net 2.0 / jQuery / AJAX ``` <script type="text/javascript"> //updated to show proper method signature var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(hideMessage); function hideMessage(sender, args) { var ctl = args.get_postBackElement(); //check if ctl is the desired control //hide user notification message } </script> ``` I have several controls on the page that might initiate the AJAX request, but I only want my JS to fire when I click one particular button. How do I check what control initiated the request so I can fire the JS accordingly? **EDIT:** I worked around it, but I'd still like to know if I can do it this way. **Clarification:** I can't call the JS from the onclick event, because the page is inside of the UpdatePanel, and I only want the JS to execute when the AJAX request ends and it was triggered by one particular button on the page. On the server side, I set myLabel.Text to some text, and then the JS checks if $(myLabel.ClientID)'s innerHTML is not blank and fires the JS. Checking the innerHTML is my work-around since I can't figure out how to check the "sender" of the AJAX request. Hope this makes more sense now. **edit2:** I've read some [documentation](http://msdn.microsoft.com/en-us/library/bb311028.aspx), and it turns out you CAN check the "sender" control. Thank you.
This is what I am doing in my code to identify what control has initialized the request. All javascript code. ``` function pageLoad() { if (!Sys.WebForms.PageRequestManager.getInstance().get_isInAsyncPostBack()) { Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequestHandler); Sys.WebForms.PageRequestManager.getInstance().add_initializeRequest(initializeRequest); } } function endRequestHandler(sender, args) { if (sender._postBackSettings.sourceElement.id == '<%= gvResults.ClientID %>') { //do something because of this... } } function initializeRequest(sender, args) { if (CheckForSessionTimeout()) { args.set_cancel(true); } else { if (sender._postBackSettings.sourceElement.id == '<%= gvResults.ClientID %>') { //do something because of this } } } ``` **EDIT** Here is the method of checking for timeout on the client side. ``` var sessionTimeoutDateTime = new Date(); var sessionTimeoutInterval = <%= this.SesstionTimeoutMinutes %>; function CheckForSessionTimeout() { var currentDateTime = new Date() var iMiliSeconds = (currentDateTime - sessionTimeoutDateTime); if (iMiliSeconds >= sessionTimeoutInterval) { ShowSessionTimeout(); return true; } return false; } ```
I would recommend that you do not have each control execute the same javascript function. OR, if they do, pass a parameter that indicates which one executed it. Then, you can include your ajax in the js function that the control executes. And, if I'm not understanding the issue correctly, perhaps you could explain it in more detail or post some code.
Check what control initiated AJAX Request
[ "", "asp.net", "javascript", "jquery", "ajax", "pagerequestmanager", "" ]
I want to have a single Visual Studio project that builds a DLL file and an import library (.lib) file. (An import library is a statically-linked library that takes care of loading that DLL file in other projects that use it). So I went to Visual Studio C++ 2008 Express Edition, created a New Project of type Class Library, and set the "Configuration Type" to be "Dynamic Library (.dll)". But when I build the solution, the only relevant output file I see is a DLL file; I don't see any LIB file getting generated. I looked in the project directory and all subdirectories (Release and Debug). I believe that it is possible to build a LIB and a DLL file at the same time because [on the MSDN it says "The linker creates the import library when the DLL is built."](http://msdn.microsoft.com/en-us/library/d14wsce5.aspx) Also, [another user of this website is creating LIB and DLL files at the same time using Visual C++](https://stackoverflow.com/questions/477461/when-building-a-dll-file-does-the-generated-lib-file-contain-the-dll-name). So how can I do it?
By selecting 'Class Library' you were accidentally telling it to make a .Net Library using the CLI (managed) extension of C++. Instead, create a Win32 project, and in the Application Settings on the next page, choose 'DLL'. You can also make an MFC DLL or ATL DLL from those library choices if you want to go that route, but it sounds like you don't.
Does your DLL project have any actual exports? If there are no exports, the linker will not generate an import library .lib file. In the non-Express version of VS, the import library name is specified in the project settings here: ``` Configuration Properties/Linker/Advanced/Import Library ``` I assume it's the same in Express (if it even provides the ability to configure the name).
How do I build an import library (.lib) AND a DLL in Visual C++?
[ "", "c++", "visual-studio", "dll", "import", "" ]
I have often thought it would be a good idea to allow the use of arrays as proper objects with their own methods instead of relying on helper classes like Arrays and ArrayUtils. For example: ``` ints.sort(); // Arrays.sort(ints); int[] onemore = ints.add(8); // int[] onemore = ArrayUtils.add(ints, 8); ``` I am sure I am not the first with this idea but I have had trouble searching for others who have written about this before. Can anyone help me with some references on this topic? Is this thought to be a good or bad idea, and why? How easy would this be to implement? Some other examples might include (but don't get hung up on them, they're extraneous to the question itself): ``` int[] ints = {5,4,3,2,1}; // Arrays.sort (ints); ints.sort(); // int pos = Arrays.asList(ints).indexOf (5); // int pos = ArrayUtils.indexOf (ints, 5); int pos = ints.indexOf (5); // Arrays.reverse (ints); ints.reverse(); Array<Integer> array = ints; // cast to super class. // int length = Array.getLength (array); int length = array.getLength(); // Object n = Array.get (array, 3); Object n = array.get (3); // Array.set (array, 3, 7); array.set (3, 7); Object obj = array; // if (obj instanceof int[]) // System.out.println(Array.toString((int[]) obj)); // else if (....) System.out.println (obj); ```
Arrays are not classes in Java for a good reason - they map well onto people's understanding of how an array should work from experience with C-style languages. There are also performance reasons for making arrays low-level containers rather than objects. Because of this, sometimes there are performance benefits to using a primitive array rather than a Collection. If you want to use objects, you should just use a Collection (an ArrayList is an example of a collection). It can be clunky, but Collections provide the type of nice methodological access that you seem to want.
Those methods start to look an awful lot like ruby or python idioms. Unfortunately you don't get to do that in java (wish you could). For one, as others have pointed out, the collections classes do it for you. For another, myarray.sort() isn't so nice because you can create arrays of objects for which sorting has not been defined. Suppose I have ``` Foo[] foos; ``` And Foo is not Comparable. What happens on foos.sort()? We definitely wouldn't want to have it only work for primitives ``` int[] ints; ints.sort(); //legal Object[] objects; objects.sort(); //illegal ``` and you certainly couldn't have the compiler only allow the syntax for comparable objects. And once you get to something like ``` myarray.add(new Foo()); ``` it's sort of pointless, as arrays in java aren't growable. It *would* be nice if printing out an array didn't give you that useless ``` ([I'm an array(*&(* ``` rubbish, though.
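For comparison, the Python idioms alluded to above really do hang those methods off the sequence object itself. A quick sketch of the asker's wishlist as it looks in Python:

```python
ints = [5, 4, 3, 2, 1]

ints.sort()           # in-place, like the wished-for ints.sort()
pos = ints.index(5)   # like ints.indexOf(5); raises ValueError if absent
ints.reverse()        # in-place, like ints.reverse()
ints.append(8)        # Python lists are growable, unlike Java arrays

print(ints, "position of 5 after sorting was", pos)
```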
Should Java treat arrays as objects?
[ "", "java", "arrays", "" ]
``` java version "1.5.0_14" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_14-b03) Java HotSpot(TM) Server VM (build 1.5.0_14-b03, mixed mode) ``` I'm trying to debug a NullPointerException I'm getting for passing a reference to statically defined field. To be more specific, I'm setting a global on a Drools3 working memory instance. ``` workingMemory.setGlobal("log", workingMemorieslog); ``` My assumption is that the Class where the field is statically defined is being garbage collected. (The receiving class must be using WeakReference or something like that, I don't really know) How would you suggest to debug this? I think that if I could know exactly when JVM's GC unloads a class / instance of a class then I could narrow down on the cause of the buggy behavior. (If not the exact time of the event at least getting an indication that something did happened). Thank you, Maxim.
To trace GC activity, add these flags to the java command: > -verbose:gc > -XX:+PrintGCTimeStamps > -XX:+PrintGCDetails The NPE you are getting is probably caused by passing a null value.
Why not keep the class in memory and see if it still happens? If it does, then your problem lies elsewhere. If not, then you know the issue lies in garbage collection.
How to print Java class garbage collection events?
[ "", "java", "eclipse", "classpath", "" ]
Are there any predefined constants in the C# compiler to detect what version of the platform is being targeted? I can set a platform in Project options, but how can I write a single source for .NET 2 and .NET 3.5? Edit: I really want to write some code once and be able to switch behavior with something like #if CLR_VERSION35 and then run different configurations. The reverse question: If I make CLR35 and CLR20 configurations, is it possible to select the target platform based on this? The option is not available in VS2008 and I don't know a lot about MSBUILD yet.
.NET Framework 3.5 is a superset of 2.0 so all 2.0 apps should run on 3.5 without modification. As it is a superset there's no way of back-pedaling to not use 3.5 functionality if your app finds itself on 2.0, not that that makes sense anyway. If you need to target 2.0 and you can write what you need in 2.0 then that's the way to go, 3.5 will still run your app fine. Effectively, you have to write for the lowest common denominator or include the 3.5 runtime redistributables with your installer to ensure that your customer has the correct environment for your app.
In VS2008 you can set the target framework in the project properties (Solution explorer, right-click project). You also have the option when you create a new project, there's a drop-down in the top-right of the new project window.
Does C# compiler set version or platform constants?
[ "", "c#", ".net", "compiler-version", "" ]
I have two kinds of objects in my application where every object of one kind has exactly one corresponding object of the other kind. The obvious choice to keep track of this relationship is a `Map<type1, type2>`, like a HashMap. But somehow, I'm suspicious. Can I use an object as a key in the Map, pass it around, have it sitting in another collection, too, and retrieve its partner from the Map any time? After an object is created, all I'm passing around is an identifier, right? So probably no problem there. What if I serialize and deserialize the key? Any other caveats? Should I use something else to correlate the object pairs, like a number I generate myself?
1. The key needs to implement `.equals()` and `.hashCode()` correctly 2. The key *must not* be changed in any way that changes its `.hashCode()` value while it's used as the key 3. Ideally any object used as a key in a `HashMap` should be immutable. This would automatically ensure that rule 2 always holds. 4. Objects that could otherwise be GCed might be kept around when they are used as key and/or value.
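Rule 2 above is the easiest one to get bitten by. The hazard is the same in any hash table; here it is sketched in Python for brevity (the behaviour with a Java `HashMap` key is analogous, and the `Key` class is a made-up example):

```python
class Key:
    """A deliberately bad map key: its hash depends on mutable state."""

    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Key) and self.value == other.value

    def __hash__(self):
        return hash(self.value)

k = Key(1)
table = {k: "partner object"}
k.value = 2            # mutates the key's hash while it sits in the map...

print(k in table)      # the entry is filed under the old hash, so: False
print(len(table))      # ...yet it still occupies a slot
```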
> I have two kinds of objects in my > application where every object of one > kind has exactly one corresponding > object of the other kind. This really sounds like a has-a relationship and thus could be implemented using a simple attribute.
Any disadvantage to using arbitrary objects as Map keys in Java?
[ "", "java", "collections", "maps", "" ]
I'm always looking for a way to use all the tools I can and to stretch myself just beyond where I am at. But as much as I have read about delegates, I can never find a place to use them (like Interfaces, Generics, and a lot of stuff, but I digress.) I was hoping someone could show me when and how they used a delegate in *web* programming for ASP.NET C# (2.0 and above). Thank you, and if this is wrong for Stack Overflow, please just let me know.
bdukes is right about events. But you're not limited to just using delegates with events. Study the classic Observer Pattern for more examples on using delegates. Some text on the pattern points toward an event model, but from a raw learning perspective, you don't have to use events. One thing to remember: A delegate is just another type that can be used & passed around similar to your primitive types such as an "int". And just like "int", a delegate has its own special characteristics that you can act on in your coding when you consume the delegate type. To get a really great handle on the subject and on some of its more advanced and detailed aspects, get Joe Duffy's book, [.NET Framework 2.0](http://www.bookpool.com/sm/0764571354).
Well, whenever you handle an event, you're using a delegate.
When would I use a delegate in asp.net?
[ "", "c#", "asp.net", "delegates", "" ]
What unusual, unexpected consequences have occurred in terms of performance, memory, etc., when switching from running your .NET applications under the 64 bit JIT vs. the 32 bit JIT? I'm interested in the good, but more interested in the surprisingly bad issues people have run into. I am in the process of writing a new .NET application which will be deployed in both 32bit and 64bit. There have been many questions relating to the issues with porting the application - I am unconcerned with the ["gotchas" from a programming/porting standpoint](http://msdn.microsoft.com/en-us/library/ms241064.aspx). (ie: Handling native/COM interop correctly, reference types embedded in structs changing the size of the struct, etc.) However, [this question and its answer](https://stackoverflow.com/questions/632831/why-are-public-fields-faster-than-properties) got me thinking - What other issues am I overlooking? There have been many questions and blog posts that skirt around this issue, or hit one aspect of it, but I haven't seen anything that's compiled a decent list of problems. In particular - My application is very CPU bound and has huge memory usage patterns (hence the need for 64bit in the first place), as well as being graphical in nature. I'm concerned with what other hidden issues may exist in the CLR or JIT running on 64 bit Windows (using .NET 3.5sp1). Here are a few issues I'm currently aware of: * ([Now I know that](https://stackoverflow.com/questions/632831/why-are-public-fields-faster-than-properties)) Properties, even automatic properties, don't get inlined in x64. 
* The memory profile of the application changes, both because of the [size of references](https://stackoverflow.com/questions/426396/how-much-memory-does-a-c-net-object-use), but also because the memory allocator has different performance characteristics
* [Startup times can suffer on x64](https://stackoverflow.com/questions/270614/-net-3-5-windows-forms-application-x86-vs-x64-load-times-on-64-bit-vista)

I'd like to know what other, specific, issues people have discovered in the JIT on 64bit Windows, and also if there are any workarounds for performance.

Thank you all!

----EDIT-----

Just to clarify - I am aware that trying to optimize early is often bad. I am aware that second guessing the system is often bad. I also know that portability to 64bit has its own issues - we run and test on 64bit systems daily to help with this. etc.

My application, however, is not your typical business application. It's a scientific software application. We have many processes that sit using 100% CPU on all of the cores (it's highly threaded) for hours at a time.

I spend a LOT of time profiling the application, and that makes a huge difference. However, most profilers disable many features of the JIT, so the small details in things like memory allocation, inlining in the JIT, etc., can be very difficult to pin down when you're running under a profiler. Hence my need for the question.
I remember hearing an issue from an IRC channel I frequent. It optimises away the temporary copy in this instance:

```
EventHandler temp = SomeEvent;
if (temp != null)
{
    temp(this, EventArgs.Empty);
}
```

This puts the race condition back in, causing potential null reference exceptions.
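As a hypothetical cross-language aside (in Python rather than C#, and not part of the original answer), the reason the temporary copy exists in that pattern is that a shared handler field can change between the null check and the invocation; copying it to a local first makes the check and the call see the same value:

```python
class Event:
    """Minimal stand-in for a C# event field that another thread may null out."""
    def __init__(self):
        self.handler = None

    def fire(self):
        # Mirror of the C# pattern: copy, test the copy, invoke the copy.
        # Testing self.handler and then calling self.handler would be racy,
        # because the field can change between the two reads.
        temp = self.handler
        if temp is not None:
            temp()

calls = []
ev = Event()
ev.handler = lambda: calls.append("fired")
ev.fire()          # invokes the handler once
ev.handler = None
ev.fire()          # safe no-op: the local-copy pattern also handles None
print(calls)       # ['fired']
```

The C# bug report above is precisely about the JIT undoing this copy, which reintroduces the two-reads race.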
A particularly troublesome performance problem in .NET relates to the poor JIT: <https://connect.microsoft.com/VisualStudio/feedback/details/93858/struct-methods-should-be-inlined?wa=wsignin1.0>

Basically, inlining and structs don't work well together on x64 (although that page *suggests* inlining now works but subsequent redundant copies aren't eliminated; that sounds suspect given the tiny perf. difference).

In any case, after wrestling with .NET long enough for this, my solution is to use C++ for anything numerically intensive. Even in "good" cases for .NET, where you're not dealing with structs and using arrays where the bounds-checking is optimized out, C++ beats .NET [hands down](http://eamon.nerbonne.org/2009/02/net-numeric-performance-disappointing.html). If you're doing anything more complicated than dot products, the picture gets worse very quickly; the .NET code is both longer + less readable (because you need to manually inline stuff and/or can't use generics), and much slower.

I've switched to using [Eigen](http://eigen.tuxfamily.org/index.php?title=Main_Page) in C++: it's absolutely great, resulting in readable code and high performance; a thin C++/CLI wrapper then provides the glue between the compute engine and the .NET world. Eigen works by template meta-programming; it compiles vector-expressions into SSE intrinsic instructions and does a lot of the nastiest cache-related loop unrolling and rearranging for you; and though focused on linear algebra, it'll work with integers and non-matrix array expressions too. So, for instance, if `P` is a matrix, this kind of stuff Just Works:

```
1.0 / (P.transpose() * P).diagonal().sum();
```

...which doesn't allocate a temporarily transposed variant of P, and doesn't compute the whole matrix product but only the fields it needs. So, if you can run in Full Trust - just use C++ via C++/CLI, it works much much better.
My 32 bit headache is now a 64bit migraine?!? (or 64bit .NET CLR Runtime issues)
[ "", "c#", ".net", "vb.net", "clr", "jit", "" ]
What is the most efficient way to remove alternate (odd indexed or even indexed) elements in a `List<T>` without using a placeholder list variable? Also, it would be appreciated if you could mention the cost with each of your answers. I'm looking for an **efficient** way to do this. Thanks in advance
If you call RemoveAt for every item you remove, you will be moving a lot of data. The most efficient is to move the items together that you want to keep, then remove the unused items at the end:

```
int pos = 0;
for (int i = 0; i < values.Count; i += 2, pos++)
{
    values[pos] = values[i];
}
values.RemoveRange(pos, values.Count - pos);
```

Edit: This method will process a list of a million ints in 15 ms. Using RemoveAt it will take over three minutes...

Edit2: You could actually start with pos=1 and i=2 (or 3), as the first item doesn't have to be copied to itself. This makes the code a bit less obvious though.
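As a hypothetical cross-language illustration (the question is about C#), the same compact-then-truncate idea looks like this in Python:

```python
def keep_even_indexed(values):
    """In-place: copy every kept element forward, then truncate the tail once.

    O(n) moves total, versus O(n^2) element shifts for repeated mid-list deletions.
    """
    pos = 0
    for i in range(0, len(values), 2):
        values[pos] = values[i]
        pos += 1
    del values[pos:]  # one truncation instead of many removals
    return values

print(keep_even_indexed(list(range(10))))  # [0, 2, 4, 6, 8]
```

The single `del values[pos:]` at the end plays the role of `RemoveRange` in the answer above.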
Just for consideration of a solution that creates a new list, with a list *old* you could do this:

```
var newList = old.Where((_, i) => i % 2 != 0).ToList();
```

or, obviously

```
var newList = old.Where((_, i) => i % 2 == 0).ToList();
```

depending which alternation you choose.

**EDIT** The answer is quite a bit quicker. If you read something else here, it's because I measured on a weekend and weekend's brain is funny. :( The closure solution is about 40% quicker while the answer is approx. 2 orders of magnitude faster. I suppose it will really depend how big your list becomes!
Removing alternate elements in a List<T>
[ "", "c#", "generics", "collections", "" ]
Is there a way to do multiple calls in the same "session" in Popen? For instance, can I make a call through it and then another one after it without having to concatenate the commands into one long string?
You're not "making a call" when you use popen, you're running an executable and talking to it over stdin, stdout, and stderr. If the executable has some way of doing a "session" of work (for instance, by reading lines from stdin) then, yes, you can do it. Otherwise, you'll need to exec multiple times. `subprocess.Popen` is (mostly) just a wrapper around `execvp(3)`.
Assuming you want to be able to run a shell and send it multiple commands (and read their output), it appears you can do something like this:

```
from subprocess import *
p = Popen(['/bin/sh'], shell=False, stdin=PIPE, stdout=PIPE, stderr=PIPE)
```

After which, e.g.,:

```
>>> p.stdin.write("cat /etc/motd\n")
>>> p.stdout.readline()
'Welcome to dev-linux.mongo.com.\n'
```

(Of course, you should check `stderr` too, or else ask `Popen` to merge it with `stdout`).

One major **problem** with the above is that the `stdin` and `stdout` pipes are in blocking mode, so it's easy to get "stuck" waiting forever for output from the shell. Although I haven't tried it, there's a [recipe](http://code.activestate.com/recipes/440554/) at the ActiveState site that shows how to address this.

**Update**: after looking at the related questions/answers, it looks like it might be simpler to just use Python's built-in `select` module to see if there's data to read on `stdout` (you should also do the same for `stderr`, of course), e.g.:

```
>>> select.select([p.stdout], [], [], 0)
([<open file '<fdopen>', mode 'rb' at 0x10341690>], [], [])
```
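A minimal sketch of that select-based approach, put together as one runnable piece (assumes a POSIX system with `/bin/sh`; the `run` helper and timeout value are illustrative, not from the original answer):

```python
import select
import subprocess

# Keep one shell alive and send it several commands over the same stdin pipe.
p = subprocess.Popen(
    ["/bin/sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge stderr into stdout, as suggested above
    text=True,
)

def run(cmd, timeout=5.0):
    """Send one command and read one line of output without blocking forever."""
    p.stdin.write(cmd + "\n")
    p.stdin.flush()
    ready, _, _ = select.select([p.stdout], [], [], timeout)
    return p.stdout.readline() if ready else None

first = run("echo hello")
second = run("echo world")   # same shell session, second command
p.stdin.close()
p.wait()
print(first, second)
```

The `select` call bounds how long we wait for output, which sidesteps the blocking-pipe problem described above for simple line-at-a-time protocols.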
Python persistent Popen
[ "", "python", "subprocess", "popen", "" ]
In a webapp I am building, I have a requirement in which I need to backup and restore around 10-15 tables from a SQL Server database. My solution:

- Backup: For each table, fill a dataset, serialize that to a file
- Restore: Deserialize the file into a dataset and use SqlBulkCopy

Is this the best way to achieve my goal?
You need to make sure you consider the following.

1. What happens when the schema changes? How do you upgrade the data?
2. If any of the tables have identity columns, you need to make sure you switch on IDENTITY_INSERT before loading up the data.
3. Watch out for triggers; they may or may not be fired during your bulk loading, depending on your settings.
You can use SQLSMO to backup/restore an entire SQL Server database. This [article](http://msdn.microsoft.com/en-us/magazine/cc163409.aspx) by John Papa covers it in detail. You can also programmatically run BACKUP/RESTORE commands against the DB.
Backing up and Restoring SQL Server Tables from C#
[ "", "c#", "asp.net", "sql-server", "" ]
With the new version of CodeIgniter, you can only set rules in a static `form_validation.php` file. I need to analyze the posted info (i.e. only if they select a checkbox). Only then do I want certain fields to be validated. What's the best way to do this, or must I use the old form validation class that is deprecated now?
You are not limited to setting rules in the config/form\_validation.php file. You can also set them with:

```
$this->form_validation->set_rules();
```

More info on: <http://codeigniter.com/user_guide/libraries/form_validation.html#validationrules>

However, the order of preference that CI has is to first check if there are rules set with set\_rules(); if not, see if there are rules defined in the config file. So, if you have added rules in the config file, but you make a call to set\_rules() in the action, the config rules will never be reached.

Knowing that, for conditional validations, I would have a specific method in a model that initializes the form\_validation object depending on the input (for that particular action). The typical situation where I've had the need to do this is on validating shipping and billing addresses (are they the same or different).

Hope that helps. :)
You could write your own callback function which checks whether said checkbox is selected, and applies the validation manually.

```
function checkbox_selected($content)
{
    if (isset($_REQUEST['checkbox']))
    {
        return valid_email($content);
    }
    return TRUE; // checkbox not ticked: skip the conditional validation
}

$this->form_validation->set_rules('email', 'Email', 'callback_checkbox_selected');
```
How to Set Form Validation Rules for CodeIgniter Dynamically?
[ "", "php", "codeigniter", "" ]
On one of the pages we're currently working on, users can change the background of the text displayed. We would like to automatically alter the foreground colour to maintain *reasonable* contrast of the text. We would also prefer the colour range to be discretionary. For example, if the background is changing from white to black in 255 increments, we don't want to see 255 shades of foreground colour. In this scenario, perhaps 2 to 4, just to maintain *reasonable* contrast. Any UI/design/colour specialists/painters out there to whip out the formula?
In terms of sheer readability, you want to use black and white text on whatever background it is. So convert RGB to HSV, and just check whether V is < 0.5. If so, white, if not, black. Try that first and see if you find it attractive. If you don't, then you probably want the white and black not to be so stark when your background is too bright or too dark. To tone this down, keep the same hue and saturation, and use these values for brightness:

```
background V    foreground V
0.0-0.25        0.75
0.25-0.5        1.0
0.5-0.75        0.0
0.75-1.0        0.25
```

On a medium color, you'll still see black or white text which will be nicely readable. On a dark color or light color, you'll see the same color text but at least 3/4 away in terms of brightness and therefore still readable. I hope it looks nice :)
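A small sketch of both rules above in Python for illustration (the function names are hypothetical; the stdlib `colorsys` module does the RGB-to-HSV conversion):

```python
import colorsys

def foreground_bw(r, g, b):
    """Stark rule: white text when background V < 0.5, otherwise black."""
    _, _, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return "white" if v < 0.5 else "black"

def foreground_v(background_v):
    """Softer rule: map background brightness to a toned-down foreground V."""
    if background_v < 0.25:
        return 0.75
    if background_v < 0.5:
        return 1.0
    if background_v < 0.75:
        return 0.0
    return 0.25

print(foreground_bw(20, 20, 20))     # white
print(foreground_bw(240, 240, 240))  # black
print(foreground_v(0.6))             # 0.0 -> near-black text on a medium color
```

The four-bucket mapping keeps the foreground at least three quarters of the brightness range away from the background, as described above.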
Basing your black-white decision off of luma works pretty well for me. Luma is a weighted sum of the R, G, and B values, adjusted for human perception of relative brightness, apparently common in video applications. The official definition of luma has changed over time, with different weightings; see here: [http://en.wikipedia.org/wiki/Luma\_(video)](http://en.wikipedia.org/wiki/Luma_%28video%29). I got the best results using the Rec. 709 version, as in the code below. A black-white threshold of maybe 165 seems good.

```
function contrastingColor(color)
{
    return (luma(color) >= 165) ? '000' : 'fff';
}

function luma(color) // color can be a hex string or an array of RGB values 0-255
{
    var rgb = (typeof color === 'string') ? hexToRGBArray(color) : color;
    return (0.2126 * rgb[0]) + (0.7152 * rgb[1]) + (0.0722 * rgb[2]); // SMPTE C, Rec. 709 weightings
}

function hexToRGBArray(color)
{
    if (color.length === 3)
        color = color.charAt(0) + color.charAt(0) + color.charAt(1) + color.charAt(1) + color.charAt(2) + color.charAt(2);
    else if (color.length !== 6)
        throw('Invalid hex color: ' + color);
    var rgb = [];
    for (var i = 0; i <= 2; i++)
        rgb[i] = parseInt(color.substr(i * 2, 2), 16);
    return rgb;
}
```
Calculating contrasting colours in javascript
[ "", "javascript", "color-scheme", "" ]
Supposing I have a `File f` that represents a directory, then `f.delete()` will only delete the directory if it is empty. I've found a [couple](http://www.exampledepot.com/egs/java.io/DeleteDir.html) of [examples](http://www.rgagnon.com/javadetails/java-0483.html) online that use `File.listFiles()` or `File.list()` to get all the files in the directory and then recursively traverses the directory structure and delete all the files. However, since it's possible to create infinitely recursive directory structures (in both [Windows](http://blogs.msdn.com/oldnewthing/archive/2004/12/27/332704.aspx) and Linux (with symbolic links)) presumably it's possible that programs written in this style might never terminate. So, is there a better way to write such a program so that it doesn't fall into these pitfalls? Do I need to keep track of everywhere I've traversed and make sure I don't go around in circles or is there a nicer way? **Update:** In response to some of the answers (thanks guys!) - I'd rather the code didn't follow symbolic links and stayed within the directory it was supposed to delete. Can I rely on the Commons-IO implementation to do that, even in the Windows case?
If you really want your recursive directory deletion to follow through symbolic links, then I don't think there is any platform independent way of doing so without keeping track of all the directories you have traversed. However, in pretty much every case I can think of you would just want to delete the actual symbolic link pointing to the directory rather than recursively following through the symbolic link. If this is the behaviour you want then you can use the [FileUtils.deleteDirectory](https://commons.apache.org/proper/commons-io/javadocs/api-2.5/org/apache/commons/io/FileUtils.html#deleteDirectory(java.io.File)) method in Apache Commons IO.
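As a hedged cross-language aside (not part of the original Java answer), Python's `shutil.rmtree` takes the same approach: a symbolic link found inside the tree is removed as a link, not followed, so an infinitely recursive link structure cannot trap the deletion. That behavior is easy to verify:

```python
import os
import shutil
import tempfile

outside = tempfile.mkdtemp()   # a directory the deletion must NOT reach
target = tempfile.mkdtemp()    # the tree we actually want removed
open(os.path.join(outside, "keep.txt"), "w").close()
os.symlink(outside, os.path.join(target, "link"))  # link escaping the tree

shutil.rmtree(target)          # removes 'link' as a link; never descends into it

removed = not os.path.exists(target)
survived = os.path.exists(os.path.join(outside, "keep.txt"))
print(removed, survived)       # True True
shutil.rmtree(outside)         # tidy up the surviving directory
```

This mirrors the "delete the symbolic link itself" behavior described above, which is almost always what you want for a recursive delete.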
Try [Apache Commons IO](http://commons.apache.org/io/apidocs/org/apache/commons/io/FileUtils.html) for a tested implementation. However, I don't think it this handles the infinite-recursion problem.
Deleting non-empty directories in Java
[ "", "java", "file-io", "recursion", "" ]
I'm in need of a solution to print or export (pdf/doc) from C#. I want to be able to design a template with place holders, bind an object (or xml) to this template, and get out a finished document. I'm not really sure if this is a reporting solution or not. I also don't want to have to roll my own printing / graphics code -- I'd like all display concerns handled in a template. I initially think of this as something Crystal Reports can do (although I've never used CR), but I'm not sure if I'm abusing the system here -- I'm not really interested in binding ADO.NET datasets at the moment (screw datasets). Can Crystal deal with binding to objects? Does SSRS or WPF play in this field too?
A subset of WPF printing is XPS, which can be used to present your objects via data binding. One of the best choices if you are already using WPF. Google keywords: XPS, FixedDocument, FlowDocument, WPF Printing
Might read through this thread: <http://groups.google.com/group/nhusers/browse_thread/thread/e2c2b8f834ae7ea8> Seems a lot of people like iTextSharp <http://itextsharp.sourceforge.net/>
C# - Templated Printing from Object(s)
[ "", "c#", "wpf", "crystal-reports", "reporting-services", "printing", "" ]
I have never used a SortedDictionary and was just curious if when you add values to it, for example, in a for loop, do the values get sorted automatically when you add them or do you have to sort them after you add them.
It's done automatically. The dictionary is structured in such a way that it must be sorted to operate correctly.
Yes. Microsoft discusses the SortedDictionary [here](http://msdn.microsoft.com/en-us/library/f7fta44c.aspx). A couple quotes from that page that indicate that sorting is taking place automatically:

> "The SortedDictionary<(Of <(TKey, TValue>)>) generic class is a binary search tree with O(log n) retrieval, where n is the number of elements in the dictionary."

O(log n) lookup indicates (typically) that we're in some tree looking for a match via a binary search.

and

> "SortedDictionary<(Of <(TKey, TValue>)>) has faster insertion and removal operations for unsorted data: O(log n) as opposed to O(n) for SortedList<(Of <(TKey, TValue>)>)."

Again, O(log n) for the insertion, since it has to iterate through the structure to find the insertion point.

Good fun.
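For a hypothetical illustration of "sorted as you insert" in another language (Python, which has no tree-backed dictionary in the stdlib), the `bisect` module keeps a plain list in order the same way: each insertion lands at its sorted position, so no explicit sort is ever called:

```python
import bisect

items = []
for value in [42, 7, 99, 7, 1]:
    # binary search for the insertion point, then shift and insert
    bisect.insort(items, value)

print(items)  # [1, 7, 7, 42, 99], already sorted with no sort() call
```

The search is O(log n) as in SortedDictionary, though the list shift makes each Python insert O(n) overall; the point is only that ordering is maintained automatically at insertion time.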
Is a SortedDictionary automatically sorted?
[ "", "c#", "" ]
I need to convert decimal values to their fractional equivalents, similar to this [previous question](https://stackoverflow.com/questions/379176/format-double-as-fraction). I'm using the code posted in one of the answers as a starting point since it mostly does what I need it to do.

```
string ToMixedFraction(decimal x)
{
    int whole = (int) x;
    int denominator = 64;
    int numerator = (int)( (x - whole) * denominator );
    if (numerator == 0)
    {
        return whole.ToString();
    }
    while ( numerator % 2 == 0 ) // simplify fraction
    {
        numerator /= 2;
        denominator /= 2;
    }
    return string.Format("{0} {1}/{2}", whole, numerator, denominator);
}
```

As I said, this code works fine for the most part, but I need to take common repeating decimal values (.3333333) and display that to the user as 1/3. Does anyone happen to know how this might be possible to do?
From [Link](https://web.archive.org/web/20170924064702/http://mathforum.org/library/drmath/view/61579.html)

Take the repeating digits as the numerator, take the same number of 9s as the denominator, and reduce the fraction. E.g., .3 repeating is the same as 3/9. Reduce by dividing both sides by the gcd (3 in this case) and you get 1/3. You'll need to do some extra math if the repeating part follows non-repeating digits, e.g. .133333333.
The technique I learned in middle school:

```
x = 0.33333
10x = 3.33333
10x - x = 3.3333 - .3333
9x = 3
x = 3 / 9
```

Reduce 3/9 to 1/3.
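A sketch of that technique in Python (illustrative only; the question itself is C#). The repeating block goes over the matching number of 9s, shifted past any non-repeating digits, and `fractions.Fraction` handles the gcd reduction:

```python
from fractions import Fraction

def from_repeating(non_repeating: str, repeating: str) -> Fraction:
    """Fractional part of 0.<non_repeating><repeating, repeated forever>."""
    shift = 10 ** len(non_repeating)
    fixed = Fraction(int(non_repeating or "0"), shift)
    nines = 10 ** len(repeating) - 1          # e.g. "3" -> 9, "27" -> 99
    return fixed + Fraction(int(repeating), nines * shift)

print(from_repeating("", "3"))        # 1/3   (0.333...)
print(from_repeating("1", "3"))       # 2/15  (0.1333...)
print(from_repeating("", "142857"))   # 1/7
```

The "extra math" for a case like .1333... is exactly the `shift` term: the non-repeating prefix contributes 1/10, and the repeating 3 contributes 3/(9*10).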
Format Repeating Decimal as a Fraction
[ "", "c#", "string", "formatting", "" ]
I'm working on some legacy code with a lot of code like

```
private final Logger logger = LogManager.getLogger(ThisClassName.class);
```

I'm wondering if there is any advantage to typing out "ThisClassName.class" as opposed to

```
LogManager.getLogger(getClass());
```

As far as I can tell, there is none, but I'm wondering if there are any negative ramifications to using getClass(). Thanks.
If you're making the `Logger` `static`, you can't use `getClass()`.
Also, getClass() will obscure the actual class if there's a subclass that's actually invoking the method. For example, assume that in class A, method X, you call log.debug() and class B extends class A, overriding method X, but calling super.X at some point. The log file will show class B, not class A.
Log4J - Is there any point in explicitly specifying the class name in the call to LogManager.getLogger()?
[ "", "java", "log4j", "" ]
I have an asp:BoundColumn with DateTime? data (I'll call it column X) in the asp:GridView. I loop through all the rows in the grid, and cast the .Text property of column X to DateTime (I need this to do some checking and change the cell background color if needed). I have 2 options:

1. wrap the cast in try/catch ... handle FormatException (in case column X gets rendered with some junk in it) ... and work only with valid DateTime values.
2. check column X to make sure it's in the correct format (how?) before casting, and only cast if the format is OK.

Which approach should I take, and why? Thank you.

**p.s. 1 up-vote for all explanations, accepted answer for TryParse**

**Edit2: even though I used TryParse, I've unchecked the accepted answer to keep the discussion going.**

**My "good data"/"bad data" ratio is about 2/1 - this means lots of bad input**

I'll re-accept the answer in a couple of days, if nothing else comes up.
You could do the 2nd option. DateTime.TryParse can help.
I think it depends a bit on how often you expect it to be in the wrong format. Wrapping something in a try-catch doesn't really have any impact at all if the exception is seldom raised. But if you expect it to be raised, a lot, then you should maybe add a check to it.
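The same trade-off sketched in Python terms, as a hypothetical aside (the question is C#). Python has no built-in TryParse, so a common compromise when bad input is frequent is to confine the exception handling to one small wrapper rather than paying for try/catch logic at every call site:

```python
from datetime import datetime

def try_parse_date(text, fmt="%Y-%m-%d"):
    """TryParse-style wrapper: (True, value) on success, (False, None) otherwise.

    The exception is caught once, here, instead of scattered through the
    row-processing loop, and callers get a cheap boolean to branch on.
    """
    try:
        return True, datetime.strptime(text, fmt)
    except (TypeError, ValueError):
        return False, None

ok, value = try_parse_date("2009-02-18")
bad, _ = try_parse_date("junk")
print(ok, value.year, bad)  # True 2009 False
```

With a 2/1 good-to-bad ratio as described in the question, roughly a third of all rows would raise, which is exactly the regime where a check-style API like TryParse beats raw exception handling.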
Handling exceptions vs. Preventing them from occuring in the first place - C#
[ "", "c#", ".net-2.0", "error-handling", "try-catch", "" ]
I'm using WMI to collect system information. It works fine on every system I've tested it on, but I have one or two users that are reporting problems. The debug logs show the WMI code is throwing a "Provider load failure" exception. I haven't been able to replicate the issue. The users have verified that the WMI service is running in Automatic mode. Here's the exception:

```
System.Management.ManagementException: Provider load failure
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
```

Any thoughts on how to troubleshoot and resolve this issue?
You might want to confirm all the dlls are properly registered (see <http://msdn.microsoft.com/en-us/library/bb961987.aspx>).
One way to possibly track down the root cause of the issue is to use [WBEMTest](http://technet.microsoft.com/en-us/library/cc180684.aspx), a tool that the MS Scripting Guys say is one of the easiest ways

> "To find the provider of a WMI class..."

The Scripting Guys: [Use PowerShell to Troubleshoot “Provider Load Failure”](http://blogs.technet.com/b/heyscriptingguy/archive/2012/09/12/use-powershell-to-troubleshoot-provider-load-failure.aspx)

The high level steps specific to the Win32\_NetworkAdapter are described in this [Win32\_network adapter "provider load failure"](http://blogs.infosupport.com/win32_network-adapter-provider-load-failure/) post by Mark Wolzak at infoSupport.

* Click Start >> Run >> wbemtest
* Click 'Connect…' to connect to a namespace
* Execute the query 'Select \* From MSFT\_WmiSelfEvent'
* Scroll down to the bottom and trace the following WMI events
* Look at the details of any Msft\_WmiProvider\_InitializationOperationFailureEvent or Msft\_WmiProvider\_LoadOperationFailureEvent for the dll that is causing the issue

Thanks to the [WMI–Provider Load Failure](http://msmvps.com/blogs/richardsiddaway/archive/2011/03/13/wmi-provider-load-failure.aspx) post at Richard Siddaway's Blog for pointing me to this tool and specific methodology.
How can I resolve a "Provider load failure" for WMI requests?
[ "", "c#", ".net", "wmi", "" ]
Are there any tools available in Linux which graphically or textually display memory usage for a program? For example, if I write a C++ program and would like to verify that objects are being allocated and deallocated properly in memory, are there applications available that would visually show the objects being instantiated and deleted? When I used to program in Visual Studio, I remember stepping through a program and using a debug pane to monitor memory usage and am looking for something similar to that in Linux.
It's not exactly what you are looking for, but have a look at [Valgrind](http://valgrind.org/).
I normally run **top** to keep an eye on overall memory usage.
monitor a program's memory usage in Linux
[ "", "c++", "linux", "memory", "" ]
I am working on a lib that I want to release as open source. I have started writing the tests for the code, and I was wondering how I am supposed to test a property in a .NET object. Let's say I have the following:

```
public class Person{

    #region variables
    private string _name = String.Empty;
    private string _surname = String.Empty;
    #endregion

    #region properties
    public string Name{
        get{ return _name; }
    }

    public string Surname{
        get{ return _surname; }
        set{ _surname = value; }
    }
    #endregion
}
```

I have two questions related to the code:

1. How do I unit test a property that just has a getter (like Name in the example)?
2. How do I unit test a property with a setter and a getter (like Surname in the example)?

I want to test properties that are that simple because I have already found errors in other code where IntelliSense did the wrong autocomplete and the property was not returning the correct variable.

Update: I am not talking about simple properties like the one in the example; they do have some logic behind them and are quite hard to debug. Writing a test that uses the setter to test the getter and vice versa is not good, because if there is a failure I won't know which method to blame. I am using properties because they were added as public variables and later more logic had to be added.
> How do I Unit test a Property that just has a getter (Like Name in the example)

Really not so different from testing if you had a setter. You'll just need to find another way of determining the output. Could be in a ctor, or the result of other setters/operations on the object.

```
[Test]
public void NamePropTest()
{
    Person p = new Person();
    //Some code here that will set up the Person object
    // so that you know what the name will be
    Assert.AreEqual("some known value...", p.Name);
}
```

If we had setters for Name and Surname, but only a getter for FullName, the test could look like this:

```
[Test]
public void NamePropTest()
{
    Person p = new Person();
    p.Name = "Sean";
    p.Surname = "Penn";
    Assert.AreEqual("Sean Penn", p.FullName);
}
```
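A hypothetical Python analogue of the same idea, testing a getter-only property through the constructor that sets it, and a derived property through its inputs (illustrative only; the question is about .NET):

```python
class Person:
    def __init__(self, name):
        self._name = name          # set once, exposed read-only below
        self.surname = ""

    @property
    def name(self):                # getter-only property, like Name above
        return self._name

    @property
    def full_name(self):           # derived getter, like FullName above
        return f"{self.name} {self.surname}".strip()

p = Person("Sean")
p.surname = "Penn"
print(p.name, "|", p.full_name)    # Sean | Sean Penn
```

The read-only property is exercised by controlling its input (the constructor), not by round-tripping through its own setter, which is the point of the answer: find some other observable path that determines the expected value.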
> Don't waste your time on writing silly tests for getters and setters.
>
> Another test will probably set the Name and then get that property so you will have code coverage for the getter.

You should test anything public facing, including properties. If you don't test a property, you run the risk that someone may add some logic inside it, breaking the functionality. Also you shouldn't rely on it being tested in other tests. This makes your tests brittle, and makes it harder to identify where the problem is, as a test will be testing more than one thing.
UnitTesting Properties in .Net?
[ "", "c#", ".net", "unit-testing", "testing", "" ]
I've got a stored procedure in a MySQL database that simply updates a date column and returns the previous date. If I call this stored procedure from the MySQL client, it works fine, but when I try to call the stored procedure from Python using MySQLdb I can't seem to get it to give me the return value. Here's the code to the stored procedure:

```
CREATE PROCEDURE test_stuff.get_lastpoll()
BEGIN
    DECLARE POLLTIME TIMESTAMP DEFAULT NULL;
    START TRANSACTION;
    SELECT poll_date_time
        FROM test_stuff.poll_table
        LIMIT 1
        INTO POLLTIME
        FOR UPDATE;
    IF POLLTIME IS NULL THEN
        INSERT INTO test_stuff.poll_table (poll_date_time)
            VALUES ( UTC_TIMESTAMP() );
        COMMIT;
        SELECT NULL as POLL_DATE_TIME;
    ELSE
        UPDATE test_stuff.poll_table
            SET poll_date_time = UTC_TIMESTAMP();
        COMMIT;
        SELECT DATE_FORMAT(POLLTIME, '%Y-%m-%d %H:%i:%s') as POLL_DATE_TIME;
    END IF;
END
```

The code I'm using to try to call the stored procedure is similar to this:

```
#!/usr/bin/python
import sys
import MySQLdb

try:
    mysql = MySQLdb.connect(user=User, passwd=Passwd, db="test_stuff")
    mysql_cursor = mysql.cursor()
    results = mysql_cursor.callproc( "get_lastpoll", () )
    print results
    mysql_cursor.close()
    mysql.close()
except MySQLdb.Error, e:
    print "MySQL Error %d: %s" % ( e.args[0], e.args[1] )
    sys.exit(1)
```

I know that you can do IN and OUT parameters, but from what I can determine from the MySQLdb documentation, this isn't possible with MySQLdb. Does anyone have any clue how I could get the results of the stored procedure? If I run it from a SQL tool, here's the output:

```
POLL_DATE_TIME
-------------------
2009-02-18 22:27:07
```

If I run the Python script, it returns back an empty set, like this:

```
()
```
What I had to do is modify the Python code to use execute() instead of callproc(), and then use fetchone() to get the results. I'm answering it myself since mluebke's answer wasn't entirely complete (even though it was helpful!).

```
mysql_cursor.execute( "call get_lastpoll();" )
results = mysql_cursor.fetchone()
print results[0]
```

This gives me the correct output:

```
2009-02-19 17:10:42
```

See <https://stackoverflow.com/a/52715128/2391795> for advanced usage of `fetchone`, `fetchall`, etc.
`callproc` also works fine, you don't need to use `execute`:

```
mysql_cursor.callproc( "get_lastpoll", () )
result = mysql_cursor.fetchone()
```
Getting return values from a MySQL stored procedure in Python, using MySQLdb
[ "", "python", "mysql", "" ]