Columns: Prompt, Chosen, Rejected, Title, Tags
I'd like to read the whole content of a text file into a `std::string` object with C++. With Python, I can write: ``` text = open("text.txt", "rt").read() ``` It is very simple and elegant. I hate ugly stuff, so I'd like to know: what is the most elegant way to read a text file with C++? Thanks.
There are many ways; you pick which is the most elegant for you. Reading into char\*: ``` ifstream file ("file.txt", ios::in|ios::binary|ios::ate); if (file.is_open()) { streamsize size = file.tellg(); // ios::ate opened at the end, so tellg() gives the file size char *contents = new char [size]; file.seekg (0, ios::beg); file.read (contents, size); file.close(); //... do something with it delete [] contents; } ``` Into std::string: ``` std::ifstream in("file.txt"); std::string contents((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>()); ``` Into vector<char>: ``` std::ifstream in("file.txt"); std::vector<char> contents((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>()); ``` Into string, using stringstream: ``` std::ifstream in("file.txt"); std::stringstream buffer; buffer << in.rdbuf(); std::string contents(buffer.str()); ``` file.txt is just an example; everything works fine for binary files as well, just make sure you use ios::binary in the ifstream constructor.
There's [another thread](https://stackoverflow.com/questions/116038/what-is-the-best-way-to-slurp-a-file-into-a-stdstring-in-c) on this subject. My solutions from this thread (both one-liners): The nice (see Milan's second solution): ``` string str((istreambuf_iterator<char>(ifs)), istreambuf_iterator<char>()); ``` and the fast: ``` string str(static_cast<stringstream const&>(stringstream() << ifs.rdbuf()).str()); ```
What is the most elegant way to read a text file with c++?
[ "", "c++", "text", "file-io", "" ]
When you run something similar to: ``` UPDATE table SET datetime = NOW(); ``` on a table with 1 000 000 000 records and the query takes 10 seconds to run, will all the rows have the exact same time (minutes and seconds) or will they have different times? In other words, will the time be when the query started or when each row is updated? I'm running MySQL, but I'm thinking this applies to all dbs.
<http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html#function_now> > "NOW() returns a constant time that indicates the time at which the statement began to execute. (Within a stored routine or trigger, NOW() returns the time at which the routine or triggering statement began to execute.) This differs from the behavior for SYSDATE(), which returns the exact time at which it executes as of MySQL 5.0.13. "
Assign `NOW()` to a user variable, then update the datetime column from that variable: ``` SET @update_date_time = NOW(); UPDATE table SET datetime = @update_date_time; ``` Adjust the table and column names as per your requirement.
When running UPDATE ... datetime = NOW(); will all rows updated have the same date/time?
[ "", "mysql", "sql", "datetime", "sql-update", "" ]
I'm having performance oddities with Java2D. I know of the sun.java2d.opengl VM parameter to enable 3D acceleration for 2D, but even using that has some weird issues. Here are results of tests I ran: Drawing a 25x18 map with 32x32 pixel tiles on a JComponent Image 1 = .bmp format, Image 2 = A .png format ## Without -Dsun.java2d.opengl=true 120 FPS using .BMP image 1 13 FPS using .PNG image 2 ## With -Dsun.java2d.opengl=true 12 FPS using .BMP image 1 700 FPS using .PNG image 2 Without acceleration, I'm assuming some kind of transformation is taking place with every drawImage() I do in software, and is pulling down the FPS considerably in the case of .PNG. Why though, with acceleration, would the results switch (and PNG actually performs incredibly faster)?! Craziness! .BMP Image 1 is translated to an image type of TYPE\_INT\_RGB. .PNG Image 2 is translated to an image type of TYPE\_CUSTOM. In order to get consistent speed with and without opengl acceleration, I have to create a new BufferedImage with an image type of TYPE\_INT\_ARGB, and draw Image 1 or Image 2 to this new image. Here are the results running with that: ## Without -Dsun.java2d.opengl=true 120 FPS using .BMP image 1 120 FPS using .PNG image 2 ## With -Dsun.java2d.opengl=true 700 FPS using .BMP image 1 700 FPS using .PNG image 2 My real question is, can I assume that TYPE\_INT\_ARGB will be the native image type for all systems and platforms? I'm assuming this value could be different. Is there some way for me to get the native value so that I can always create new BufferedImages for maximum performance? Thanks in advance...
I think I found a solution by researching and putting bits and pieces together from too many Google searches. Here it is, comments and all: ``` private BufferedImage toCompatibleImage(BufferedImage image) { // obtain the current system graphical settings GraphicsConfiguration gfxConfig = GraphicsEnvironment. getLocalGraphicsEnvironment().getDefaultScreenDevice(). getDefaultConfiguration(); /* * if image is already compatible and optimized for current system * settings, simply return it */ if (image.getColorModel().equals(gfxConfig.getColorModel())) return image; // image is not optimized, so create a new image that is BufferedImage newImage = gfxConfig.createCompatibleImage( image.getWidth(), image.getHeight(), image.getTransparency()); // get the graphics context of the new image to draw the old image on Graphics2D g2d = newImage.createGraphics(); // actually draw the image and dispose of context no longer needed g2d.drawImage(image, 0, 0, null); g2d.dispose(); // return the new optimized image return newImage; } ``` In my previous post, GraphicsConfiguration was what held the information needed to create optimized images on a system. It seems to work pretty well, but I would have thought Java would automatically do this for you. Obviously you can't get too comfortable with Java. :) I guess I ended up answering my own question. Oh well, hopefully it'll help some of you I've seen trying to make use of Java for 2D games.
Well, this is an old post, but I'd like to share my findings about direct drawing with Swing/AWT, without BufferedImage. Some kinds of drawing, such as 3D, are better done by painting directly into an **int[]** buffer. Once the drawing is done, you can use an **ImageProducer** instance, like **MemoryImageSource**, to produce images. I'm assuming you know how to perform your drawing directly, without the help of Graphics/Graphics2D. ``` /** * How to use MemoryImageSource to render images on JPanel * Example by A.Borges (2015) */ public class MyCanvas extends JPanel implements Runnable { public int pixel[]; public int width; public int height; private Image imageBuffer; private MemoryImageSource mImageProducer; private ColorModel cm; private Thread thread; public MyCanvas() { super(true); thread = new Thread(this, "MyCanvas Thread"); } /** * Call it after becoming visible and after resizes. */ public void init(){ cm = getCompatibleColorModel(); width = getWidth(); height = getHeight(); int screenSize = width * height; if(pixel == null || pixel.length < screenSize){ pixel = new int[screenSize]; } mImageProducer = new MemoryImageSource(width, height, cm, pixel, 0, width); mImageProducer.setAnimated(true); mImageProducer.setFullBufferUpdates(true); imageBuffer = Toolkit.getDefaultToolkit().createImage(mImageProducer); if(thread.isInterrupted() || !thread.isAlive()){ thread.start(); } } /** * Do your drawing in here: pixel is your canvas! */ public /* abstract */ void render(){ // demo draw int[] p = pixel; // this avoids a crash when resizing if(p.length != width * height) return; for(int x=0; x < width; x++){ for(int y=0; y<height; y++){ int color = (((x + i) % 255) & 0xFF) << 16; //red color |= (((y + j) % 255) & 0xFF) << 8; //green color |= (((y/2 + x/2 - j) % 255) & 0xFF) ; //blue p[ x + y * width] = color; } } i += 1; j += 1; } private int i=1,j=256; @Override public void run() { while (true) { // request a JPanel re-drawing repaint(); try {Thread.sleep(5);} catch (InterruptedException e) {} } } @Override public void paintComponent(Graphics g) { super.paintComponent(g); // perform draws on pixels render(); // ask ImageProducer to update image mImageProducer.newPixels(); // draw it on panel g.drawImage(this.imageBuffer, 0, 0, this); } /** * Overrides ImageObserver.imageUpdate. * Always return true, assuming that imageBuffer is ready to go when called */ @Override public boolean imageUpdate(Image image, int a, int b, int c, int d, int e) { return true; } }// end class ``` Note that we need a single instance each of **MemoryImageSource** and **Image**. Do not create a new Image or a new ImageProducer for each frame, unless you have resized your JPanel; see the **init()** method above. In a rendering thread, call **repaint()**. On Swing, **repaint()** will call the overridden **paintComponent()**, which calls your **render()** method and then asks the imageProducer to update the image. With the Image done, draw it with **Graphics.drawImage()**. To have a compatible Image, use the proper **ColorModel** when you create your **Image**. I use **GraphicsConfiguration.getColorModel()**: ``` /** * Get the best color model available for the current screen. * @return color model */ protected static ColorModel getCompatibleColorModel(){ GraphicsConfiguration gfx_config = GraphicsEnvironment. getLocalGraphicsEnvironment().getDefaultScreenDevice(). getDefaultConfiguration(); return gfx_config.getColorModel(); } ```
Java2D Performance Issues
[ "", "java", "java-2d", "" ]
Very simply put: I have a class that consists mostly of static public members, so I can group similar functions together that still have to be called from other classes/functions. Anyway, I have defined two static unsigned char variables in my class public scope, when I try to modify these values in the same class' constructor, I am getting an "unresolved external symbol" error at compilation. ``` class test { public: static unsigned char X; static unsigned char Y; ... test(); }; test::test() { X = 1; Y = 2; } ``` I'm new to C++ so go easy on me. Why can't I do this?
If you are using **C++ 17** you can just use the `inline` specifier (see <https://stackoverflow.com/a/11711082/55721>) --- If using older versions of the C++ standard, you must add the definitions to match your declarations of X and Y ``` unsigned char test::X; unsigned char test::Y; ``` somewhere. You might want to also initialize a static member ``` unsigned char test::X = 4; ``` and again, you do that in the definition (usually in a CXX file) not in the declaration (which is often in a .H file)
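For the C++17 route, a minimal sketch mirroring the question's class (here `inline` makes the in-class initializer the one-and-only definition):

```cpp
class test {
public:
    // C++17: inline static members are defined right here in the class;
    // no out-of-line "unsigned char test::X;" definition is required.
    inline static unsigned char X = 1;
    inline static unsigned char Y = 2;
};
```

This compiles into headers safely: even if several translation units include the class, the linker merges the inline definitions into one.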
Static data member declarations in the class declaration are not definitions of them. To define them, you should do this in the `.CPP` file so the symbol is defined exactly once. The only data members you can declare and initialize in-class are static integral constants. (Values of `enums` can be used as constant values as well.) You might want to rewrite your code as: ``` class test { public: const static unsigned char X = 1; const static unsigned char Y = 2; ... test(); }; test::test() { } ``` If you want the ability to modify your static variables (in other words, when it is inappropriate to declare them as const), you can separate your code between the `.H` and `.CPP` files in the following way: .H : ``` class test { public: static unsigned char X; static unsigned char Y; ... test(); }; ``` .CPP : ``` unsigned char test::X = 1; unsigned char test::Y = 2; test::test() { // constructor is empty. // We don't assign the static data members here, // because that assignment would happen again on every constructor call. } ```
Unresolved external symbol on static class members
[ "", "c++", "class", "static", "members", "" ]
I have a production server running with the following flag: **-XX:+HeapDumpOnOutOfMemoryError** Last night it generated a java-38942.hprof file when our server encountered a heap error. It turns out that the developers of the system knew of the flag but had no way to get any useful information from it. Any ideas?
If you want a fairly advanced tool to do some serious poking around, look at [the Memory Analyzer project](http://www.eclipse.org/mat/) at Eclipse, contributed to them by SAP. Some of what you can do is mind-blowingly good for finding memory leaks etc -- including running a form of limited SQL (OQL) against the in-memory objects, i.e. > SELECT toString(firstName) FROM com.yourcompany.somepackage.User Totally brilliant.
You can use [JHAT](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/jhat.html), The Java Heap Analysis Tool provided by default with the JDK. It's command line but starts a web server/browser you use to examine the memory. Not the most user friendly, but at least it's already installed most places you'll go. A very useful view is the "heap histogram" link at the very bottom. ex: `jhat -port 7401 -J-Xmx4G dump.hprof` `jhat` can execute OQL "these days" as well (bottom link "execute OQL")
How do I analyze a .hprof file?
[ "", "java", "profiling", "heap-memory", "" ]
I'm looking to add localization to a web site (asp.net /c# if that makes a difference) with a minimum of effort. I've looked at tools like [wiztom](http://www.wizart.com/en/index.shtml) [multilizer](http://www.multilizer.com/) and I am curious as to others experience with them. Have you used tools like these successfully and if so, what would you do differently?
From my experience with making localized applications I can say that there really isn't an easy, shrink-wrapped solution to this problem. Your best bet is to do a lot of reading on the topic (Google is your friend) and figure out a solution that works best in your specific instance. I think you'll have an issue with the "minimum of effort" part when it comes to localization.
I've used the following: 1. Alchemy Catalyst 2. Trados 3. Globalyzer 4. Lingobit Which you use depends on what you're looking to do: Are you looking for a tool to help you with string externalization alone (internationalization), or to help you manage the translation/localization workflow? How big is the application (# of pages, classes)? Has it been internationalized? Are you planning to do the localization (translation) in-house or outsource it? Thanks, Mike
Localization / internationalization tool
[ "", "c#", "asp.net", "localization", "" ]
I have a source base that, depending on defined flags at build time, creates two different apps. I can build both of these apps using a Makefile by specifying two different targets, one that compiles with a flag and one that compiles without, and having an aggregate target that builds both. How do I do the equivalent thing from Visual C# Express on Windows?
Create one solution with two project files in the same folder. Set two different configurations in your solution, one of them building one of the projects, the other one building the other project. Alternatively, you can have one project which always builds to intermediate binary and then have a postbuild step that copies it to the final binary depending on the flag.
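One common way to get the Makefile-style "same source, two outputs" behavior is a pair of custom configurations that differ only in a conditional-compilation symbol. A sketch of the relevant `.csproj` fragment (the configuration names `AppA`/`AppB` and the symbols are made up for illustration):

```xml
<PropertyGroup Condition=" '$(Configuration)' == 'AppA' ">
  <DefineConstants>TRACE;APP_A</DefineConstants>
  <OutputPath>bin\AppA\</OutputPath>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'AppB' ">
  <DefineConstants>TRACE;APP_B</DefineConstants>
  <OutputPath>bin\AppB\</OutputPath>
</PropertyGroup>
```

In code, `#if APP_A` / `#else` then selects the variant, and building each configuration produces a different executable, much like two Makefile targets.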
Release vs Debug? You can create your own custom configurations as well. But building a completely different app based on the configuration seems like a bad idea. The full Visual Studio can have multiple projects in one solution, and I think there's a way to shoe-horn this into the Express edition (perhaps by opening a solution created elsewhere). Really, though, you may have to settle for keeping two instances of Visual Studio open at a time. You can always set one of them to build right into the /bin/Release/ folder of the other project if you want.
How do I build two different apps from one Visual C# project?
[ "", "c#", "windows", "visual-studio", "build-process", "build-automation", "" ]
Could you recommend the best approach to editing SQL queries with Visual Studio 2008 Professional, please? I know I can open a Query window from the context menu in Server Explorer and edit the text in the SQL Pane. But unfortunately I am not allowed to save the query to a file, and the Find and Replace commands do not work there. Alternatively, I can open a text file with the extension .SQL in the Visual Studio editor, obtaining syntax highlighting and full editing capabilities but losing the ability to execute the script. Now I copy query text from one window to another and back, but I hope there is a better solution. Many thanks for suggestions!
If you create a Database project within your solution in Visual Studio, then you can set up a default database connection for that project. Then any \*.sql files that are included in the database project can be executed against that connection. What I usually do is select the text to be executed and right-click it, then select "Run Selection". You can set up any number of database connections under the "Database References" node in the solution explorer, and choose the one you want to run your query against.
When I have to connect to Sql Server, I use NetBeans, as it has a number of features that make it much easier to use than Visual Studio. One of these is intellisense, which is very useful when you have an alias for a table name and lots of long field names. The general handling of connections and connection pools is also much cleaner in NetBeans. I could go on, but suggest trying it for yourself. You will almost certainly need the Sql Server jdbc driver, which can be downloaded from <http://msdn.microsoft.com/en-us/data/aa937724.aspx>
Editing SQL query with Visual Studio 2008
[ "", "sql", "visual-studio", "editor", "" ]
If I have a style defined ``` .style1 { width: 140px; } ``` can I reference it from a second style? ``` .style2 { ref: .style1; } ``` Or is there a way via javascript/jQuery? --- Edit To clarify the problem, I am trying to apply whatever style is defined for a #x and #c to .x and .c without altering the CSS as the CSS is going to have updates that are out of my control. I used width but really the style would be something more complex with font, border and other style elements being specified. Specifying multiple class names does work when the style is being applied to a class so I'll mark existing responses as answers, but I need to take the style being applied to an id and also apply it to a class style ... if that makes any sense.
There's no way to do it with CSS -- it's an oft-requested feature, but not included in the spec yet. You also can't do it directly with JS, but there's sort of a hacky workaround: ``` $('.style2').addClass ('style1'); ```
You can achieve the same functionality by allowing elements to inherit multiple styles, e.g.: ``` <p class="style1 style2">stuff</p> ``` and then your CSS would include, for example: ``` .style1 {width:140px;} .style2 {height:140px;} ``` Edit: actually Robert's answer might better approximate the method you are trying to achieve: ``` .style1, .style2 {width: 140px;} .style2 {height: 140px;} <p class="style2">i will have both width and height applied</p> ```
Is there a way to reference an existing CSS style from another CSS style using CSS or javascript?
[ "", "javascript", "jquery", "dynamic-css", "" ]
Do any apps/packages exist that will support writing to OS X's log database from external sources? I'm not too familiar with the specifics of the database, beyond the fact that you can view its contents from the Console app. I'm not even sure if it's just a version of some simple DB like SQLite or if it is some sort of proprietary/internal/inaccessible kind of thing. My best guess so far is that one of two things might be possible: * It looks like it will accept log entries from natively run apps, so perhaps using some sort of daemon running in the background that could take text inputs and relay them to the log database would work? * The other alternative that came to mind was if there were some way to access the database directly, in which case a PHP script could simply connect to it to make log entries. If anyone more knowledgeable could fill in the blanks, I would be very grateful!
I've never used OS X but you might want to look into the [syslog](http://www.php.net/syslog) function.
In your PHP, do: **syslog(LOG\_WARNING, "whatever");** In a terminal, type: **syslog -w** It will output: **Warning: whatever** and will keep streaming syslog entries as they arrive. You will get system messages in the syslog as well. The -w flag shows only recent entries and then waits for new ones.
Using PHP to write to OS X's log database?
[ "", "php", "macos", "logging", "" ]
Is it possible to create COM components and ActiveX controls in .NET (using the C# language)? I searched the internet but couldn't find anything. Thanks, santhosh
Declare an interface and implement it with a class. If you have parameters/return values that are not OLE Automation compatible (custom structs, enums and so on), you might need to decorate them with the [MarshalAs attribute](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshalasattribute.aspx). Add the [Guid attribute](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.guidattribute.aspx) to both. Add the [ComVisible attribute](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.comvisibleattribute.aspx) to both. Alternatively, you can [mark the assembly](http://msdn.microsoft.com/en-us/library/ms182157(VS.80).aspx) with it. Use [tlbexp](http://msdn.microsoft.com/en-us/library/hfzzah2c.aspx) to generate a type library for native clients.
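A minimal sketch of those steps (the interface and class names are invented, and the GUIDs below are placeholders; generate real ones with guidgen or Visual Studio):

```csharp
using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("11111111-1111-1111-1111-111111111111")] // placeholder; generate your own
public interface ICalculator
{
    int Add(int a, int b);
}

[ComVisible(true)]
[Guid("22222222-2222-2222-2222-222222222222")] // placeholder; generate your own
[ClassInterface(ClassInterfaceType.None)]      // expose only the explicit interface
public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}
```

After building, `regasm MyAssembly.dll /tlb` registers the class for COM and exports a type library in one step, or use `tlbexp` as mentioned above for the type library alone.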
Yes, it is possible, there is this article in [CodeProject](http://www.codeproject.com/KB/COM/com_object_in_c_.aspx). A friend of mine tried it and had some trouble accessing the COM-object from his unmanaged app, though, so there are some pitfalls.
Create COM component and ActiveX controls
[ "", "c#", ".net", ".net-3.5", "" ]
What do the brackets do in a sql statement? For example, in the statement: ``` insert into table1 ([columnname1], columnname2) values (val1, val2) ``` Also, what does it do if the table name is in brackets?
The [] marks the delimitation of an identifier, so if you have a column whose name contains spaces, like Order Qty, you need to enclose it with [], like: ``` select [Order qty] from [Client sales] ``` They are also used to escape reserved keywords that are used as identifiers.
This is Microsoft SQL Server nonstandard syntax for "delimited identifiers." SQL supports delimiters for identifiers to allow table names, column names, or other metadata objects to contain the following: * SQL reserved words: "Order" * Words containing spaces: "Order qty" * Words containing punctuation: "Order-qty" * Words containing international characters * Column names that are case-sensitive: "Order" vs. "order" Microsoft SQL Server uses the square brackets, but this is not the syntax standard SQL uses for delimited identifiers. Standardly, double-quotes should be used for delimiters. In Microsoft SQL Server, you can enable a mode to use standard double-quotes for delimiters as follows: ``` SET QUOTED_IDENTIFIER ON; ```
[] brackets in sql statements
[ "", "sql", "ms-access", "" ]
When we execute `select count(*) from table_name` it returns the number of rows. What does `count(1)` do? What does `1` signify here? Is this the same as `count(*)` (as it gives the same result on execution)?
The parameter to the COUNT function is an expression that is to be evaluated for each row. The COUNT function returns the number of rows for which the expression evaluates to a non-null value. ( \* is a special expression that is not evaluated, it simply returns the number of rows.) There are two additional modifiers for the expression: ALL and DISTINCT. These determine whether duplicates are discarded. Since ALL is the default, your example is the same as count(ALL 1), which means that duplicates are retained. Since the expression "1" evaluates to non-null for every row, and since you are not removing duplicates, COUNT(1) should always return the same number as COUNT(\*).
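The NULL-handling rule above is easy to check with a throwaway SQLite table (using Python's built-in `sqlite3` here just as a quick harness; the table and column names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER)")
# three rows, one of them NULL
con.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (2,)])

counts = con.execute("SELECT COUNT(*), COUNT(a), COUNT(1) FROM t").fetchone()
print(counts)  # (3, 2, 3): COUNT(a) skips the NULL row; COUNT(1) never does
```

Since the literal `1` is non-null for every row, `COUNT(1)` and `COUNT(*)` always agree, while `COUNT(column)` can differ.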
Here is [a link](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:40208915257337) that will help answer your questions. In short: > count(\*) is the correct way to write > it and count(1) is OPTIMIZED TO BE > count(\*) internally -- since > > a) count the rows where 1 is not null > is less efficient than > b) count the rows
What does "select count(1) from table_name" on any database tables mean?
[ "", "sql", "database", "oracle", "" ]
I've been trying to get up to speed on some of the newer features in C# and one of them that I haven't had occasion to use is anonymous types. I understand the usage as it pertains to LINQ queries and I looked at [this SO post](https://stackoverflow.com/questions/48668/how-should-anonymous-types-be-used-in-c) which asked a similar question. Most of the examples I've seen on the net are related to LINQ queries, which is cool. I saw some somewhat contrived examples too but not really anything where I saw a lot of value. Do you have a novel use for anonymous types where you think it really provides you some utility?
With a bit of reflection, you can turn an anonymous type into a Dictionary<string, object>; Roy Osherove blogs his technique for this here: <http://weblogs.asp.net/rosherove/archive/2008/03/11/turn-anonymous-types-into-idictionary-of-values.aspx> Jacob Carpenter uses anonymous types as a way to initialize immutable objects with syntax similar to object initialization: <http://jacobcarpenter.wordpress.com/2007/11/19/named-parameters-part-2/> Anonymous types can be used as a way to give easier-to-read aliases to the properties of objects in a collection being iterated over with a `foreach` statement. (Though, to be honest, this is really nothing more than the standard use of anonymous types with [LINQ to Objects](http://msdn.microsoft.com/en-us/library/bb397919.aspx).) For example: ``` Dictionary<int, string> employees = new Dictionary<int, string> { { 1, "Bob" }, { 2, "Alice" }, { 3, "Fred" }, }; // standard iteration foreach (var pair in employees) Console.WriteLine("ID: {0}, Name: {1}", pair.Key, pair.Value); // alias Key/Value as ID/Name foreach (var emp in employees.Select(p => new { ID = p.Key, Name = p.Value })) Console.WriteLine("ID: {0}, Name: {1}", emp.ID, emp.Name); ``` While there's not much improvement in this short sample, if the `foreach` loop were longer, referring to `ID` and `Name` might improve readability.
ASP.NET MVC routing uses these objects all over the place.
Other than for LINQ queries, how do you use anonymous types in C#?
[ "", "c#", "anonymous-types", "" ]
I always felt that expecting exceptions to be thrown on a regular basis and using them as flow logic was a bad thing. Exceptions feel like they should be, well, the "*exception*". If you're expecting and planning for an exception, that would seem to indicate that your code should be refactored, at least in .NET... However. A recent scenario gave me pause. I posted this on msdn a while ago, but I'd like to generate more discussion about it and this is the perfect place! So, say you've got a database table which has a foreign key for several other tables (in the case that originally prompted the debate, there were 4 foreign keys pointing to it). You want to allow the user to delete, but only if there are NO foreign key references; you DON'T want to cascade delete. I normally just do a check to see if there are any references, and if there are, I inform the user instead of doing the delete. It's very easy and relaxing to write that in LINQ as related tables are members on the object, so Section.Projects and Section.Categories and et cetera is nice to type with intellisense and all... But the fact is that LINQ then has to hit potentially all 4 tables to see if there are any result rows pointing to that record, and hitting the database is obviously always a relatively expensive operation. The lead on this project asked me to change it to just catch a SqlException with a code of 547 (foreign key constraint) and deal with it that way. I was... *resistant*. But in this case, it's probably a lot more efficient to swallow the exception-related overhead than to swallow the 4 table hits... Especially since we have to do the check in every case, but we're spared the exception in the case when there are no children... Plus the database really should be the one responsible for handling referential integrity, that's its job and it does it well... So they won and I changed it. On some level it still feels *wrong* to me though. 
What do you guys think about expecting and intentionally handling exceptions? Is it okay when it looks like it'll be more efficient than checking beforehand? Is it more confusing to the next developer looking at your code, or less confusing? Is it safer, since the database might know about new foreign key constraints that the developer might not think to add a check for? Or is it a matter of perspective on what exactly you think best practice is?
Wow. First off: while it was nice to read a well-thought-out and explained question, can you please distill it down a bit? That was quite a lot to digest. The short answer is "yes", but it can depend. * We have some applications with lots of business logic tied up in the SQL queries (not my design, Gov!). If this is how it is structured, management can be difficult to convince otherwise, since it "already works". * In this situation, does it really make a big difference? It's still one trip across the wire and back. Does the server do much work before it realises that it cannot continue (i.e. if a sequence of transactions takes place for your action, does it fall over halfway through, wasting time)? * Does it make sense to do the check in the UI first? Does it help your application by providing a nicer user experience? (i.e. I have seen cases where you step through several steps in a wizard before it falls over, when it had all the info it needed to fail after step 1). * Is concurrency an issue? Is it possible that the record may be removed/edited or whatever before your commit takes place (as in the classic `File.Exists` boo-boo)? In my opinion: **I would do *both*. If I can fail fast and provide a better user experience, great. Any expected SQL (or any other) exceptions should be getting caught and fed back appropriately anyway.** I know there is a consensus that exceptions should not be used for other than *exceptional circumstances*, but remember, we are crossing application boundaries here: **expect nothing**. Like I said, this is like `File.Exists`; there is no point, it can be deleted before you access it anyway.
Your lead is absolutely right. Exceptions are not just for once in a blue moon situations, but specifically for reporting other than expected outcomes. In this case the foreign key check would still take place, and exceptions are the mechanism by which you can be notified. What you should NOT do is catch and suppress exceptions with a blanket catchall statement. Doing fine-grained exception handling is specifically why exceptions were designed in the first place.
Catching exceptions as expected program execution flow control?
[ "", ".net", "sql", "sql-server", "exception", "" ]
I have a method, let's say: ``` private static String drawCellValue( int maxCellLength, String cellValue, String align) { } ``` and as you can notice, I have a parameter called align. Inside this method I'm going to have some if condition on whether the value is 'left' or 'right'. Setting the parameter as String, obviously I can pass any string value. I would like to know if it's possible to have an enum value as a method parameter, and if so, how? Just in case someone thinks about this: I thought about using a Boolean value, but I don't really fancy it. First, how would I associate true/false with left/right? (OK, I can use comments, but I still find it dirty.) And secondly, I might decide to add a new value, like 'justify', so if I have more than 2 possible values, a Boolean type is definitely not possible to use. Any ideas?
This should do it: ``` private enum Alignment { LEFT, RIGHT }; String drawCellValue (int maxCellLength, String cellValue, Alignment align){ if (align == Alignment.LEFT) { //Process it... } } ```
Even cooler with enums you can use switch: ``` switch (align) { case LEFT: { // do stuff break; } case RIGHT: { // do stuff break; } default: { //added TOP_RIGHT but forgot about it? throw new IllegalArgumentException("Can't yet handle " + align); } } ``` Enums are cool because the output of the exception will be the name of the enum value, rather than some arbitrary int value.
Java: Enum parameter in method
[ "", "java", "parameters", "methods", "enums", "" ]
I have a class hierarchy as such: ``` +-- VirtualNode | INode --+ +-- SiteNode | | +-- AbstractNode --+ | +-- SiteSubNode ``` And a corresponding `NodeCollection` class that is build on `INode`. In order to display a `NodeCollection` I need to know the final type of each member. So I need a function like this ``` foreach (INode n in myNodeCollection) { switch(n.GetType()) { case(typeof(SiteNode)): // Display n as SiteNode } } ``` Now, this is really not an object oriented way of doing it. **Are there any patterns or recommended ways of doing the same thing, in your opinion?** **EDIT** I already thought of adding a `Display` or `Render` method to the INode interface. That has the side effect of coupling the view to the model, which I would really like to avoid.
What you're after is the [visitor pattern](http://en.wikipedia.org/wiki/Visitor_pattern), I think.
[Polymorphism:](http://en.wikipedia.org/wiki/Polymorphism_in_object-oriented_programming) Whenever you have a switch statement using the type of an object, it is a prime candidate for refactoring to polymorphism. Check out the book [Refactoring](https://rads.stackoverflow.com/amzn/click/com/0201485672) by Martin Fowler: "One of the most obvious symptoms of object-oriented code is its comparative lack of switch (or case) statements. The problem with switch statements is essentially that of duplication. Often you find the same switch statement scattered about a program in different places. If you add a new clause to the switch, you have to find all these switch statements and change them. The object-oriented notion of polymorphism gives you an elegant way to deal with this problem. Most times you see a switch statement you should consider polymorphism. The issue is where the polymorphism should occur. Often the switch statement switches on a type code. You want the method or class that hosts the type code value. So use Extract Method to extract the switch statement and then Move Method to get it onto the class where the polymorphism is needed. At that point you have to decide whether to Replace Type Code with Subclasses or Replace Type Code with State/Strategy. When you have set up the inheritance structure, you can use Replace Conditional with Polymorphism." Here is one approach to using polymorphism in your situation: 1. Define an abstract method in AbstractNode named something like Display(). 2. Then actually implement Display() in each of the SiteNode and SiteSubNode classes. 3. Then, when you need to display these nodes, you could simply iterate through a collection containing items of type AbstractNode and call Display() for each. 4. The call to Display() will automatically resolve to the actual concrete implementation for the real type of that item.
Note: You could also move the Display() method from AbstractNode to the INode interface if VirtualNode is to be displayed.
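That refactoring (an abstract display method on the base class with concrete overrides) can be sketched quickly; shown here in Python rather than C# for brevity, with class names that simply mirror the question:

```python
from abc import ABC, abstractmethod

class Node(ABC):
    """Stand-in for AbstractNode: every node knows how to render itself."""

    @abstractmethod
    def display(self) -> str:
        ...

class SiteNode(Node):
    def display(self) -> str:
        return "site node"

class SiteSubNode(Node):
    def display(self) -> str:
        return "site sub-node"

def render(nodes):
    # No switch on the runtime type: the virtual call picks the right override.
    return [node.display() for node in nodes]
```

Whether this beats the visitor pattern from the accepted answer depends on how strongly you want to keep rendering out of the model, as the question's edit asks.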
Aren't there any better ways of displaying an object other than using hierarchy?
[ "", "c#", "design-patterns", "" ]
What does the following code do in C/C++? ``` if (blah(), 5) { //do something } ```
Comma operator is applied and the value 5 is used to determine the conditional's true/false. It will execute blah() and get something back (presumably), then the comma operator is employed and 5 will be the only thing that is used to determine the true/false value for the expression. --- Note that the , operator could be overloaded for the return type of the blah() function (which wasn't specified), making the result non-obvious.
If the comma operator is not overloaded, the code is similar to this: ``` blah(); if (5) { // do something } ``` If the comma operator is overloaded, the result will be based on that function. ``` #include <iostream> #include <string> using namespace std; string blah() { return "blah"; } bool operator,(const string& key, const int& val) { return false; } int main (int argc, char * const argv[]) { if (blah(), 5) { cout << "if block"; } else { cout << "else block"; } return 0; } ``` *(edited to show comma operator overloading scenario. thanks to David Pierre for commenting on this)*
What does the comma operator do?
[ "", "c++", "c", "operators", "comma-operator", "" ]
Is it a problem if you use the global keyword on variables you don't end up using? Compare: ``` function foo() { global $fu; global $bah; if (something()) { $fu->doSomething(); } else { $bah->doSomething(); } } function bar() { if (something()) { global $fu; $fu->doSomething(); } else { global $bah; $bah->doSomething(); } } ``` I'm quite aware that using the second method makes maintaining this code much harder, and that it's generally preferred to put all your globals at the start of functions, so: **Ignoring the difference in maintainability and code-styling of the two functions, is there a difference between these two in terms of overhead?**
If there is, it won't be (humanly) measurable, unless you are literally calling this function millions of times. And even if it was a recursive function with that property, I still wouldn't use your second method for the maintainability aspects you already brought up. **Edit:** For arguments sake, I actually went and benchmarked this, and `bar()` ended up slower by 0.1s over one million calls. Which means performance wise, you still have a reason to use the cleaner version.
As monoxide said, there's no significant performance difference. However, I'd avoid using global if at all possible; it's a bad road to go down and you'll end up with spaghetti. Use a static class; it'll keep things much better organized.
Does using global create any overhead?
[ "", "php", "optimization", "" ]
Unless I am missing an obvious built-in method, what is the quickest way to get the *n*th occurrence of a string within a string? I realize that I could loop the [IndexOf](https://msdn.microsoft.com/en-us/library/k8b1470s(v=vs.110).aspx) method by updating its start index on each iteration of the loop. But doing it this way seems wasteful to me.
That's basically what you need to do - or at least, it's the easiest solution. All you'd be "wasting" is the cost of n method invocations - you won't actually be checking any case twice, if you think about it. (IndexOf will return as soon as it finds the match, and you'll keep going from where it left off.)
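For the record, the loop being described is only a few lines. A language-agnostic sketch, written here in Python; the `find` calls map one-to-one onto C#'s `IndexOf` overload that takes a start index:

```python
def nth_index_of(haystack: str, needle: str, n: int) -> int:
    """Return the index of the n-th (1-based) occurrence of needle, or -1."""
    index = -1
    for _ in range(n):
        # Resume the search one character past the previous match's start,
        # so overlapping occurrences are counted too.
        index = haystack.find(needle, index + 1)
        if index == -1:
            return -1
    return index
```

For example, `nth_index_of("abcabcabc", "abc", 2)` returns 3.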
You really could use the regular expression `/((s).*?){n}/` to search for n-th occurrence of substring `s`. In C# it might look like this: ``` public static class StringExtender { public static int NthIndexOf(this string target, string value, int n) { Match m = Regex.Match(target, "((" + Regex.Escape(value) + ").*?){" + n + "}"); if (m.Success) return m.Groups[2].Captures[n - 1].Index; else return -1; } } ``` **Note:** I have added `Regex.Escape` to original solution to allow searching characters which have special meaning to regex engine.
Get the index of the nth occurrence of a string?
[ "", "c#", ".net", "string", "indexing", "indexof", "" ]
JavaScript doesn't seem to have a native `trim()` method. How can I trim white spaces at the start and end of a string with JavaScript?
I know this question is ancient, but now JavaScript actually does have a native .trim() <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/Trim>
The shortest form for jQuery: ``` string = $.trim(string); ``` [Link](http://docs.jquery.com/Utilities/jQuery.trim#str)
How to trim() white spaces from a string?
[ "", "javascript", "jquery", "string", "trim", "" ]
How, in the simplest possible way, distinguish between Windows XP and Windows Vista, using Python and [pywin32](http://python.net/crew/mhammond/win32/Downloads.html) or [wxPython](http://www.wxpython.org/)? Essentially, I need a function that called will return True iff current OS is Vista: ``` >>> isWindowsVista() True ```
Python has the lovely 'platform' module to help you out. ``` >>> import platform >>> platform.win32_ver() ('XP', '5.1.2600', 'SP2', 'Multiprocessor Free') >>> platform.system() 'Windows' >>> platform.version() '5.1.2600' >>> platform.release() 'XP' ``` NOTE: As mentioned in the comments proper values may not be returned when using older versions of python.
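Building on the platform module shown above, the requested function is a two-liner. A sketch, assuming a Python recent enough that `platform.release()` reports `'Vista'` (per the note above about older versions):

```python
import platform

def is_windows_vista() -> bool:
    """True only when the current OS is Windows Vista."""
    return platform.system() == "Windows" and platform.release() == "Vista"
```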
The simplest solution I found is this one: ``` import sys def isWindowsVista(): '''Return True iff current OS is Windows Vista.''' if sys.platform != "win32": return False import win32api VER_NT_WORKSTATION = 1 version = win32api.GetVersionEx(1) if not version or len(version) < 9: return False return ((version[0] == 6) and (version[1] == 0) and (version[8] == VER_NT_WORKSTATION)) ```
How to check if OS is Vista in Python?
[ "", "python", "windows", "windows-vista", "wxpython", "pywin32", "" ]
This seems very noisy to me. Five lines of overhead is just too much. ``` m_Lock.EnterReadLock() Try Return m_List.Count Finally m_Lock.ExitReadLock() End Try ``` So how would you simplify this?
I was thinking the same, but in C# ;-p ``` using System; using System.Threading; class Program { static void Main() { ReaderWriterLockSlim sync = new ReaderWriterLockSlim(); using (sync.Read()) { // etc } } } public static class ReaderWriterExt { sealed class ReadLockToken : IDisposable { private ReaderWriterLockSlim sync; public ReadLockToken(ReaderWriterLockSlim sync) { this.sync = sync; sync.EnterReadLock(); } public void Dispose() { if (sync != null) { sync.ExitReadLock(); sync = null; } } } public static IDisposable Read(this ReaderWriterLockSlim obj) { return new ReadLockToken(obj); } } ```
All the solutions posted so far are at risk of deadlock. A using block like this: ``` ReaderWriterLockSlim sync = new ReaderWriterLockSlim(); using (sync.Read()) { // Do stuff } ``` gets converted into something like this: ``` ReaderWriterLockSlim sync = new ReaderWriterLockSlim(); IDisposable d = sync.Read(); try { // Do stuff } finally { d.Dispose(); } ``` This means that a ThreadAbortException (or similar) could happen between sync.Read() and the try block. When this happens the finally block never gets called, and the lock is never released! For more information, and a better implementation see: [Deadlock with ReaderWriterLockSlim and other lock objects](http://web.archive.org/web/20120323190453/http://www.nobletech.co.uk/Articles/ReaderWriterLockMgr.aspx). In short, the better implementation comes down to moving the lock into the `try` block like so: ``` ReaderWriterLockSlim myLock = new ReaderWriterLockSlim(); try { myLock.EnterReadLock(); // Do stuff } finally { // Release the lock myLock.ExitReadLock(); } ``` A wrapper [class](https://social.msdn.microsoft.com/Forums/pt-BR/60fa5522-4ac2-4395-9d5f-d001fdce09fd/thread-safe-classe-gerente-de-readerwriterlockslim-erro-de-compilacao?forum=vscsharppt) like the one in the accepted answer would be: ``` /// <summary> /// Manager for a lock object that acquires and releases the lock in a manner /// that avoids the common problem of deadlock within the using block /// initialisation. /// </summary> /// <remarks> /// This manager object is, by design, not itself thread-safe. 
/// </remarks> public sealed class ReaderWriterLockMgr : IDisposable { /// <summary> /// Local reference to the lock object managed /// </summary> private ReaderWriterLockSlim _readerWriterLock = null; private enum LockTypes { None, Read, Write, Upgradeable } /// <summary> /// The type of lock acquired by this manager /// </summary> private LockTypes _enteredLockType = LockTypes.None; /// <summary> /// Manager object construction that does not acquire any lock /// </summary> /// <param name="ReaderWriterLock">The lock object to manage</param> public ReaderWriterLockMgr(ReaderWriterLockSlim ReaderWriterLock) { if (ReaderWriterLock == null) throw new ArgumentNullException("ReaderWriterLock"); _readerWriterLock = ReaderWriterLock; } /// <summary> /// Call EnterReadLock on the managed lock /// </summary> public void EnterReadLock() { if (_readerWriterLock == null) throw new ObjectDisposedException(GetType().FullName); if (_enteredLockType != LockTypes.None) throw new InvalidOperationException("Create a new ReaderWriterLockMgr for each state you wish to enter"); // Allow exceptions by the Enter* call to propogate // and prevent updating of _enteredLockType _readerWriterLock.EnterReadLock(); _enteredLockType = LockTypes.Read; } /// <summary> /// Call EnterWriteLock on the managed lock /// </summary> public void EnterWriteLock() { if (_readerWriterLock == null) throw new ObjectDisposedException(GetType().FullName); if (_enteredLockType != LockTypes.None) throw new InvalidOperationException("Create a new ReaderWriterLockMgr for each state you wish to enter"); // Allow exceptions by the Enter* call to propogate // and prevent updating of _enteredLockType _readerWriterLock.EnterWriteLock(); _enteredLockType = LockTypes.Write; } /// <summary> /// Call EnterUpgradeableReadLock on the managed lock /// </summary> public void EnterUpgradeableReadLock() { if (_readerWriterLock == null) throw new ObjectDisposedException(GetType().FullName); if (_enteredLockType != LockTypes.None) 
throw new InvalidOperationException("Create a new ReaderWriterLockMgr for each state you wish to enter"); // Allow exceptions by the Enter* call to propogate // and prevent updating of _enteredLockType _readerWriterLock.EnterUpgradeableReadLock(); _enteredLockType = LockTypes.Upgradeable; } /// <summary> /// Exit the lock, allowing re-entry later on whilst this manager is in scope /// </summary> /// <returns>Whether the lock was previously held</returns> public bool ExitLock() { switch (_enteredLockType) { case LockTypes.Read: _readerWriterLock.ExitReadLock(); _enteredLockType = LockTypes.None; return true; case LockTypes.Write: _readerWriterLock.ExitWriteLock(); _enteredLockType = LockTypes.None; return true; case LockTypes.Upgradeable: _readerWriterLock.ExitUpgradeableReadLock(); _enteredLockType = LockTypes.None; return true; } return false; } /// <summary> /// Dispose of the lock manager, releasing any lock held /// </summary> public void Dispose() { if (_readerWriterLock != null) { ExitLock(); // Tidy up managed resources // Release reference to the lock so that it gets garbage collected // when there are no more references to it _readerWriterLock = null; // Call GC.SupressFinalize to take this object off the finalization // queue and prevent finalization code for this object from // executing a second time. 
GC.SuppressFinalize(this); } } ~ReaderWriterLockMgr() { if (_readerWriterLock != null) ExitLock(); // Leave references to managed resources so that the garbage collector can follow them } } ``` Making usage as follows: ``` ReaderWriterLockSlim myLock = new ReaderWriterLockSlim(); using (ReaderWriterLockMgr lockMgr = new ReaderWriterLockMgr(myLock)) { lockMgr.EnterReadLock(); // Do stuff } ``` Also, from [Joe Duffy's Blog](http://web.archive.org/web/20120113121354/http://www.bluebytesoftware.com/blog/PermaLink,guid,c4ea3d6d-190a-48f8-a677-44a438d8386b.aspx) > Next, the lock is not robust to asynchronous exceptions such as thread aborts and out of memory conditions. If one of these occurs while in the middle of one of the lock’s methods, the lock state can be corrupt, causing subsequent deadlocks, unhandled exceptions, and (sadly) due to the use of spin locks internally, a pegged 100% CPU. So if you’re going to be running your code in an environment that regularly uses thread aborts or attempts to survive hard OOMs, you’re not going to be happy with this lock.
How would you simplify Entering and Exiting a ReaderWriterLock?
[ "", "c#", ".net", "vb.net", "parallel-processing", "readerwriterlock", "" ]
A question about different methods of outputting HTML from PHP; what are the performance differences between these: Method 1 - variable concatenation ``` $html = ''; $html .= '<ul>'; for ($k = 1; $k <= 1000; $k++){ $html .= '<li> This is list item #'.$k.'</li>'; } $html .= '</ul>'; echo $html; ``` Method 2 - output buffering ``` ob_start(); echo '<ul>'; for ($k = 1; $k <= 1000; $k++){ echo '<li> This is list item #',$k,'</li>'; } echo '</ul>'; ``` I suspect you get some performance hit from continually modifying and enlarging a variable; is that correct? Cheers! Thanks GaryF, but I don't want an answer about architecture - this question is about performance. There seem to be some different opinions / testing about which one is faster, which is why there is not an accepted answer as yet.
It's a bit old, but [this post](http://blog.libssh2.org/index.php?/archives/28-How-long-is-a-piece-of-string.html) by Sara Golemon will probably help. AFAIK the output buffering functions are quite fast and efficient and so is `echo`, so that's what I would use.
The idea of string concatenation itself aside, you're really asking (I think) how you should be building up web pages, and it strikes me that any form of explicit concatenation is probably the wrong thing to do. Try using the [Model-View-Control pattern](http://en.wikipedia.org/wiki/Model-view-controller) to build up your data, and passing it to a simple templating library (like [Smarty](http://www.smarty.net/)), and let it worry about how to build your view. Better separation, fewer concerns.
PHP HTML generation - using string concatenation
[ "", "php", "performance", "string-concatenation", "" ]
Pretty simple scenario. I have a web service that receives a byte array that is to be saved as a particular file type on disk. What is the most efficient way to do this in C#?
That would be [`File.WriteAllBytes()`](http://msdn.microsoft.com/en-us/library/system.io.file.writeallbytes.aspx).
`System.IO.File.WriteAllBytes(path, data)` should do fine.
What is the most efficient way to save a byte array as a file on disk in C#?
[ "", "c#", ".net", "" ]
I'm writing some code for a class constructor which loops through all the properties of the class and calls a generic static method which populates my class with data from an external API. So I've got this as an example class: ``` public class MyClass{ public string Property1 { get; set; } public int Property2 { get; set; } public bool Property3 { get; set; } public static T DoStuff<T>(string name){ // get the data for the property from the external API // or if there's a problem return 'default(T)' } } ``` Now in my constructor I want something like this: ``` public MyClass(){ var properties = this.GetType().GetProperties(); foreach(PropertyInfo p in properties){ p.SetValue(this, DoStuff(p.Name), new object[0]); } } ``` So the above constructor will throw an error because I'm not supplying the generic type. So how do I pass in the type of the property?
Do you want to call DoStuff<T> with T = the type of each property? In which case, "as is" you would need to use reflection and MakeGenericMethod - i.e. ``` var properties = this.GetType().GetProperties(); foreach (PropertyInfo p in properties) { object value = typeof(MyClass) .GetMethod("DoStuff") .MakeGenericMethod(p.PropertyType) .Invoke(null, new object[] { p.Name }); p.SetValue(this, value, null); } ``` However, this isn't very pretty. In reality I wonder if it wouldn't be better just to have: ``` static object DoStuff(string name, Type propertyType); ... and then object value = DoStuff(p.Name, p.PropertyType); ``` What does the generics give you in this example? Note that value-types will still get boxed etc during the reflection call - and even then boxing [isn't as bad as you might think](http://msmvps.com/blogs/jon_skeet/archive/2008/10/08/why-boxing-doesn-t-keep-me-awake-at-nights.aspx). Finally, in many scenarios, TypeDescriptor.GetProperties() is more appropriate than Type.GetProperties() - allows for flexible object models etc.
Was your constructor code meant to read like this: ``` public MyClass(){ var properties = this.GetType().GetProperties(); foreach(PropertyInfo p in properties){ p.SetValue(this, DoStuff(p.Name), new object[0]); } } ``` ? Note the `DoStuff` instead of `MyClass`. If so, the problem is that you're trying to use generics when they're really not applicable. The point of generics (well, one of the points) is to use compile-time type safety. Here you don't know the type at compile time! You could call the method by reflection (fetching the open form and then calling [MakeGenericMethod](http://msdn.microsoft.com/en-us/library/system.reflection.methodinfo.makegenericmethod.aspx)) but that's pretty ugly. Does `DoStuff` really need to be generic in the first place? Is it being used from elsewhere? The parameter to `PropertyInfo.SetValue` is just object, so you'd still get boxing etc even if you *could* call the method generically.
Reflection and generic types
[ "", "c#", ".net", "generics", "reflection", "" ]
What is a C#'s analog of `std::pair` in C++? I found `System.Web.UI.Pair` class, but I'd prefer something template-based.
Tuples [are available since .NET4.0](http://msdn.microsoft.com/en-us/library/system.tuple(v=vs.110).aspx) and support generics: ``` Tuple<string, int> t = new Tuple<string, int>("Hello", 4); ``` --- In previous versions you can use `System.Collections.Generic.KeyValuePair<K, V>` or a solution like the following: ``` public class Pair<T, U> { public Pair() { } public Pair(T first, U second) { this.First = first; this.Second = second; } public T First { get; set; } public U Second { get; set; } }; ``` And use it like this: ``` Pair<String, int> pair = new Pair<String, int>("test", 2); Console.WriteLine(pair.First); Console.WriteLine(pair.Second); ``` This outputs: ``` test 2 ``` Or even this chained pairs: ``` Pair<Pair<String, int>, bool> pair = new Pair<Pair<String, int>, bool>(); pair.First = new Pair<String, int>(); pair.First.First = "test"; pair.First.Second = 12; pair.Second = true; Console.WriteLine(pair.First.First); Console.WriteLine(pair.First.Second); Console.WriteLine(pair.Second); ``` That outputs: ``` test 12 true ```
`System.Web.UI` contained the `Pair` class because it was used heavily in ASP.NET 1.1 as an internal ViewState structure. **Update Aug 2017:** C# 7.0 / .NET Framework 4.7 provides a syntax to declare a Tuple with named items using the [`System.ValueTuple`](https://msdn.microsoft.com/en-us/library/system.valuetuple.aspx) struct. ``` //explicit Item typing (string Message, int SomeNumber) t = ("Hello", 4); //or using implicit typing var t = (Message:"Hello", SomeNumber:4); Console.WriteLine("{0} {1}", t.Message, t.SomeNumber); ``` see [MSDN](https://msdn.microsoft.com/en-us/magazine/mt493248.aspx) for more syntax examples. **Update Jun 2012:** [`Tuples`](http://msdn.microsoft.com/en-us/library/system.tuple.aspx) have been a part of .NET since version 4.0. Here is [an earlier article describing inclusion in.NET4.0](http://msdn.microsoft.com/en-us/magazine/dd942829.aspx) and support for generics: ``` Tuple<string, int> t = new Tuple<string, int>("Hello", 4); ```
What is a C# analog of C++ std::pair?
[ "", "c#", ".net", "data-structures", "std-pair", "base-class-library", "" ]
I've got a nice little class built that acts as a cache. Each item has an expiration TimeSpan or DateTime. Each time an attempt to access an item in the cache is made, the item's expiration is checked, and if it's expired, the item is removed from the cache and nothing is returned. That's great for objects that are accessed frequently, but if an item is put in the cache and never accessed again, it's never removed, even though it's expired. What's a good methodology for expiring such items from the cache? Should I have a background thread infinitely enumerating every item in the cache to check if it's expired?
In my experience, maintaining a custom caching mechanism became more trouble than it was worth. There are several libraries out there that have already solved these problems. I would suggest using one of them. A popular one in .Net is the Enterprise Library, although I have limited experience with its caching abilities. If you must use a custom caching mechanism, then I see no problem with the watchful-thread idea you suggested. That is, if your application is a server-based application and not a web app. If it's a web app, you already have built-in sliding expiration. You can then just wrap it in a strongly typed wrapper to avoid referencing cache items by key each time.
The best code is no code. Use the ASP.NET cache instead. You can reference it as System.Web.HttpRuntime.Cache in any application, not just web applications.
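If someone does roll their own cache despite the advice above, the two mechanisms from the question (lazy removal on access plus a periodic sweep that a background thread or timer would run) combine like this single-threaded Python sketch; names are illustrative and locking is deliberately omitted:

```python
import time

class ExpiringCache:
    """Toy cache: lazy expiry on access plus an explicit sweep for idle items."""

    def __init__(self):
        self._items = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._items[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._items[key]  # lazy removal: expired item goes away on access
            return None
        return value

    def sweep(self):
        """The call a background thread or timer would make periodically."""
        now = time.monotonic()
        for key in [k for k, (_, exp) in self._items.items() if now >= exp]:
            del self._items[key]
```

A real implementation would need thread-safety around `_items` before a background sweeper could touch it.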
How to purge expired items from cache?
[ "", "c#", ".net", "caching", "" ]
I want to be able to explore the contents of a DB for this version of the DB. I was thinking of using the Squirrel DB client (which needs a JDBC driver). Therefore, I'm looking for a JDBC type 4 driver for SQL Server CE 3.5. Can someone point me to a free, open-source, or trialware one? If there is no JDBC driver, how do MS developers explore a given .SDF file? Thank you, BR, ~A
There is a free program called SQLCeEditor that does it. I'd still like a JDBC driver though since that would make it easy to use with Eclipse and Java.
Have you tried [SQL Server Management Studio Express](http://www.microsoft.com/downloads/details.aspx?FamilyId=C243A5AE-4BD1-4E3D-94B8-5A0F62BF7796&DisplayLang=en)? > You can access SQL Server Compact 3.5 > databases stored on a smart device or > on the desktop computer by using SQL > Server Management Studio in SQL Server > or SQL Server Management Studio > Express (SSMSE). <http://technet.microsoft.com/en-us/library/ms172037.aspx> As for the JDBC Driver, you could take a look at [this one](http://msdn.microsoft.com/en-us/data/aa937724.aspx), provided by Microsoft. I don't know if it works with the Compact Edition or if you already tried it, but I thought it was worth mentioning.
JDBC driver for Microsoft SQL Server CE (Compact Edition) 3.5
[ "", "java", "sql-server", "database", "jdbc", "sql-server-ce", "" ]
A while ago I had to develop a music site that allowed audio files to be uploaded to a site and then converted into various formats using ffmpeg; people would then download the uploaded audio files after purchasing them, and a tmp file would be created and placed at the download location, valid only for that download instance, after which the tmp file would get deleted. Now I am revisiting the project, and I have to add pictures and video as upload content too. I want to find the best method for storing the files. Option 1: storing the files in a folder and referencing them in the database. Option 2: storing the actual file in the database (MySQL) as a blob. I am toying around with this idea to consider the security implications of each method, and other issues I might not have accounted for.
See this earlier StackOverflow question [Storing images in a database, Yea or nay?](https://stackoverflow.com/questions/3748/storing-images-in-db-yea-or-nay). I know you mentioned images and video, however this question has relevance to all large binary content media files. The consensus seems to be that storing file paths to the images on the filesystem, rather then the actual images is the way to go.
I would recommend storing them as files and storing their locations in the database. Storing the files in a database requires more resources and makes backing up/restoring databases slower. Do you really want to have to transfer lots of videos every time you do a database dump? File systems work very well for dishing out files, and you can back them up/sync them very easily.
Where to store uploaded files (sound, pictures and video)
[ "", "php", "mysql", "storage", "blob", "" ]
When I disable ViewState for the page, it does not allow any other control to use ViewState, even if I set EnableViewState="true" for that particular control. Is it possible to enable ViewState for a control when ViewState is disabled for the page itself? If not, how can I disable ViewState for all controls on the page except for a few, without specifying EnableViewState="false" explicitly? Typing the same into so many controls is hectic.
If you turn the page's ViewState off, then there is no way for you to enable ViewState for specific components. This is because ViewState is serialized recursively, so if the Page is not allowing ViewState, it will not serialize the ViewState for any of its child controls. In answer to your question, if you don't want to explicitly turn ViewState off on individual controls, but want to keep some controls ViewState aware, the best way would be to write a small utility method which turns ViewState off for all controls (using recursion or otherwise), then enable ViewState for the controls that you would like to keep it for. Alternatively, a middle ground and less forceful way may be possible if controls are grouped inside other container controls (such as Panel). You can disable ViewState for all controls inside a Panel by disabling the ViewState of the Panel.
> If you turn the page's ViewState off, > then there is no way for you to enable > ViewState for specific components. > This is because ViewState is serialized > recursively, so if the Page is > not allowing ViewState, it will not > serialize the ViewState for any of > its child controls. With the advent of ASP.NET 4 we have a new property called [ViewStateMode](http://msdn.microsoft.com/en-us/library/system.web.ui.control.viewstatemode.aspx) that can be used to enable view state for an individual control even if view state is disabled for the page. To make use of this new property you can either create a base page class that sets the ViewStateMode to Disabled or create a PageAdapter that does the same thing. There is no viewStateMode property in the web.config. Here's the code for the page adapter: ``` using System; using System.Web.UI; using System.Web.UI.Adapters; namespace Playground.Web.UI.Adapters { public class PageAdapter: System.Web.UI.Adapters.PageAdapter { protected override void OnLoad(EventArgs e) { Page.ViewStateMode = ViewStateMode.Disabled; base.OnLoad(e); } } } ``` and here's the code for the browser file: ``` <browser refID="default"> <controladapters> <adapter controlType="System.Web.UI.Page" adapterType="Playground.Web.UI.Adapters.PageAdapter" /> </controladapters> </browser> ```
Enable ViewState for few controls and disable for others/page
[ "", "c#", "asp.net", "viewstate", "" ]
I have a dict, which I need to pass key/values as keyword arguments.. For example.. ``` d_args = {'kw1': 'value1', 'kw2': 'value2'} example(**d_args) ``` This works fine, *but* if there are values in the d\_args dict that are not accepted by the `example` function, it obviously dies.. Say, if the example function is defined as `def example(kw2):` This is a problem since I don't control either the generation of the `d_args`, or the `example` function.. They both come from external modules, and `example` only accepts some of the keyword-arguments from the dict.. Ideally I would just do ``` parsed_kwargs = feedparser.parse(the_url) valid_kwargs = get_valid_kwargs(parsed_kwargs, valid_for = PyRSS2Gen.RSS2) PyRSS2Gen.RSS2(**valid_kwargs) ``` I will probably just filter the dict, from a list of valid keyword-arguments, but I was wondering: **Is there a way to programatically list the keyword arguments the a specific function takes?**
A little nicer than inspecting the code object directly and working out the variables is to use the inspect module. ``` >>> import inspect >>> def func(a,b,c=42, *args, **kwargs): pass >>> inspect.getargspec(func) (['a', 'b', 'c'], 'args', 'kwargs', (42,)) ``` If you want to know if its callable with a particular set of args, you need the args without a default already specified. These can be got by: ``` def get_required_args(func): args, varargs, varkw, defaults = inspect.getargspec(func) if defaults: args = args[:-len(defaults)] return args # *args and **kwargs are not required, so ignore them. ``` Then a function to tell what you are missing from your particular dict is: ``` def missing_args(func, argdict): return set(get_required_args(func)).difference(argdict) ``` Similarly, to check for invalid args, use: ``` def invalid_args(func, argdict): args, varargs, varkw, defaults = inspect.getargspec(func) if varkw: return set() # All accepted return set(argdict) - set(args) ``` And so a full test if it is callable is : ``` def is_callable_with_args(func, argdict): return not missing_args(func, argdict) and not invalid_args(func, argdict) ``` (This is good only as far as python's arg parsing. Any runtime checks for invalid values in `kwargs` obviously can't be detected.)
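One caveat for later readers: `inspect.getargspec` was eventually deprecated, but the same filtering idea works with `inspect.signature`. A sketch (the helper name here is illustrative):

```python
import inspect

def filter_kwargs(func, candidate_kwargs):
    """Drop any keys that func cannot accept as keyword arguments."""
    params = inspect.signature(func).parameters
    # If the function takes **kwargs, every key is acceptable.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(candidate_kwargs)
    accepted = {
        name for name, p in params.items()
        if p.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD,
                      inspect.Parameter.KEYWORD_ONLY)
    }
    return {k: v for k, v in candidate_kwargs.items() if k in accepted}
```

With this, `example(**filter_kwargs(example, d_args))` from the question no longer dies on extra keys.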
This will print names of all passable arguments, keyword and non-keyword ones: ``` def func(one, two="value"): y = one, two return y print func.func_code.co_varnames[:func.func_code.co_argcount] ``` This is because first `co_varnames` are always parameters (next are local variables, like `y` in the example above). So now you could have a function: ``` def get_valid_args(func, args_dict): '''Return dictionary without invalid function arguments.''' validArgs = func.func_code.co_varnames[:func.func_code.co_argcount] return dict((key, value) for key, value in args_dict.iteritems() if key in validArgs) ``` Which you then could use like this: ``` >>> func(**get_valid_args(func, args)) ``` --- if you **really need only keyword arguments** of a function, you can use the `func_defaults` attribute to extract them: ``` def get_valid_kwargs(func, args_dict): validArgs = func.func_code.co_varnames[:func.func_code.co_argcount] kwargsLen = len(func.func_defaults) # number of keyword arguments validKwargs = validArgs[-kwargsLen:] # because kwargs are last return dict((key, value) for key, value in args_dict.iteritems() if key in validKwargs) ``` You could now call your function with known args, but extracted kwargs, e.g.: ``` func(param1, param2, **get_valid_kwargs(func, kwargs_dict)) ``` This assumes that `func` uses no `*args` or `**kwargs` magic in its signature.
Can you list the keyword arguments a function receives?
[ "", "python", "arguments", "introspection", "" ]
I'm investigating an annotation-based approach to validating Spring beans using [spring modules](https://springmodules.dev.java.net/). In [this tutorial](http://wheelersoftware.com/articles/spring-bean-validation-framework.html), the following bean (getters and setters omitted) is used as an example: ``` public final class User { @NotBlank @Length(max = 80) private String name; @NotBlank @Email @Length(max = 80) private String email; @NotBlank @Length(max = 4000) private String text; } ``` The error message that is used if a particular validation rule is disobeyed should follow this format: ``` bean-class.bean-propery[validation-rule]=Validation Error message ``` Examples for the class shown above include: ``` User.email[not.blank]=Please enter your e-mail address. User.email[email]=Please enter a valid e-mail address. User.email[length]=Please enter no more than {2} characters. ``` The fact that the message keys contain the class name presents a couple of problems: 1. If the class is renamed, the message keys also need to be changed 2. If I have another class (e.g. Person) with an email property that is validated identically to User.email, I need to duplicate the messages, e.g. Person.email[not.blank]=Please enter your e-mail address. Person.email[email]=Please enter a valid e-mail address. Person.email[length]=Please enter no more than {2} characters. In fact, the documentation claims that is possible to configure a default message for a particular rule (e.g. @Email) like this: ``` email=email address is invalid ``` This default message should be used if a bean-specific message for the rule cannot be found. However, my experience is that this simply does not work. An alternative mechanism for avoiding duplicate messages is to pass the key of the error message to the rule annotation. 
For example, assume I have defined the following default error message for the @Email rule ``` badEmail=Email address is invalid ``` This message should be used if I annotate the relevant property like this: ``` @Email(errorCode="badEmail") private String email; ``` However, I tried this out and, again, it just doesn't seem to work. Has anyone found a way to avoid duplicating error messages when using this validation framework?
I took a quick look at the [BeanValidator API](https://springmodules.dev.java.net/source/browse/%2Acheckout%2A/springmodules/www/docs/javadocs/0.8/org/springmodules/validation/bean/BeanValidator.html), and it looks like you might want to try the *errorCodeConverter* property. You would need to implement your own [ErrorCodeConverter](https://springmodules.dev.java.net/source/browse/%2Acheckout%2A/springmodules/www/docs/javadocs/0.8/org/springmodules/validation/bean/converter/ErrorCodeConverter.html), or use one of the provided implementations? ``` .... <bean id="validator" class="org.springmodules.validation.bean.BeanValidator" p:configurationLoader-ref="configurationLoader" p:errorCodeConverter-ref="errorCodeConverter" /> <bean id="errorCodeConverter" class="contact.MyErrorCodeConverter" /> .... ``` *Note: configurationLoader is another bean defined in the config XML used in the tutorial* Example converter: ``` package contact; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.springmodules.validation.bean.converter.ErrorCodeConverter; public class MyErrorCodeConverter implements ErrorCodeConverter { private Log log = LogFactory.getLog(MyErrorCodeConverter.class); @Override public String convertPropertyErrorCode(String errorCode, Class clazz, String property) { log.error(String.format("Property %s %s %s", errorCode, clazz.getClass().getName(), property)); return errorCode; // <------ use the errorCode only } @Override public String convertGlobalErrorCode(String errorCode, Class clazz) { log.error(String.format("Global %s %s", errorCode, clazz.getClass().getName())); return errorCode; } } ``` Now the properties should work: ``` MyEmailErrorCode=Bad email class Foo { @Email(errorCode="MyEmailErrorCode") String email } ```
Spring validation does have an ErrorCodeConverter that does this: org.springmodules.validation.bean.converter.KeepAsIsErrorCodeConverter When this is used, the resource bundle will be checked for the following codes: [errorCode.commandBeanName.fieldName, errorCode.fieldName, errorCode.fieldClassName, errorCode] * errorCode is the actual validation errorCode, e.g. not.blank, email. * commandBeanName is the same as the model key name that references the form backing bean. * fieldName is the name of the field. * fieldClassName is the field class name, e.g. java.lang.String, java.lang.Integer So, for instance, suppose I have a bean that is referenced in the model by the key "formBean", and its field emailAddress of type java.lang.String does not contain an email address, which produces the errorCode email. The validation framework will attempt to resolve the following message codes: [email.formBean.emailAddress, email.emailAddress, email.java.lang.String, email] If the errorCode is replaced by the errorCode "badEmail" like this: @Email(errorCode="badEmail") the message codes that the framework will try to resolve will be: [badEmail.formBean.emailAddress, badEmail.emailAddress, badEmail.java.lang.String, badEmail] I would suggest keeping the errorCode the same. That way, one message can be used for all fields that have that errorCode associated with them. If you need to be more specific with the message for a certain field, you can add a message to the resource bundles with the code errorCode.commandBeanName.field.
annotation based Spring bean validation
[ "", "java", "validation", "spring-modules", "" ]
I have read the documentation on this and I think I understand. An [`AutoResetEvent`](http://msdn.microsoft.com/en-us/library/system.threading.autoresetevent.aspx) resets when the code passes through `event.WaitOne()`, but a [`ManualResetEvent`](http://msdn.microsoft.com/en-us/library/system.threading.manualresetevent.aspx) does not. Is this correct?
Yes. It's like the difference between a tollbooth and a door. The `ManualResetEvent` is the door, which needs to be closed (reset) manually. The `AutoResetEvent` is a tollbooth, allowing one car to go by and automatically closing before the next one can get through.
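As a cross-language aside (my own analogy, not part of this answer): Python's standard library exhibits the same door/tollbooth split. `threading.Event` behaves like a `ManualResetEvent`, while a semaphore released one permit at a time approximates an `AutoResetEvent`:

```python
import threading

# threading.Event is the "door": once set() it stays open and lets
# every wait() through until someone explicitly calls clear().
door = threading.Event()
door.set()
print(door.wait(timeout=0), door.wait(timeout=0))  # True True

# A semaphore released one permit at a time is the "tollbooth":
# each release() lets exactly one acquire() through, after which
# the booth is closed again automatically.
tollbooth = threading.Semaphore(0)
tollbooth.release()                       # like AutoResetEvent.Set()
print(tollbooth.acquire(blocking=False))  # True  -- one car passes
print(tollbooth.acquire(blocking=False))  # False -- closed again
```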
Just imagine that the `AutoResetEvent` executes `WaitOne()` and `Reset()` as a single atomic operation. The `AutoResetEvent` also guarantees to only release **one** waiting thread.
What is the difference between ManualResetEvent and AutoResetEvent in .NET?
[ "", "c#", ".net", "multithreading", "autoresetevent", "manualresetevent", "" ]
The in-house application framework we use at my company makes it necessary to put every SQL query into a transaction, even if I know that none of the commands will make changes in the database. At the end of the session, before closing the connection, I commit the transaction to close it properly. I wonder whether there would be any particular difference if I rolled it back instead, especially in terms of speed. Please note that I am using Oracle, but I guess other databases have similar behaviour. Also, I can't do anything about the requirement to begin the transaction; that part of the codebase is out of my hands.
Databases often preserve either a before-image journal (what it was before the transaction) or an after-image journal (what it will be when the transaction completes.) If it keeps a before-image, that has to be restored on a rollback. If it keeps an after-image, that has to replace data in the event of a commit. Oracle has both a journal and rollback space. The transaction journal accumulates blocks which are later written by DB writers. Since these are asynchronous, almost nothing DB writer related has any impact on your transaction (if the queue fills up, then you might have to wait.) Even for a query-only transaction, I'd be willing to bet that there's some little bit of transactional record-keeping in Oracle's rollback areas. I suspect that a rollback requires some work on Oracle's part before it determines there's nothing to actually roll back. And I think this is synchronous with your transaction. You can't really release any locks until the rollback is completed. [Yes, I know you aren't using any in your transaction, but the locking issue is why I think a rollback has to be fully released then all the locks can be released, then your rollback is finished.] On the other hand, the commit is more-or-less the expected outcome, and I suspect that discarding the rollback area might be slightly faster. You created no transaction entries, so the db writer will never even wake up to check and discover that there was nothing to do. I also expect that while commit may be faster, the differences will be minor. So minor, that you might not be able to even measure them in a side-by-side comparison.
I agree with the previous answers that there's no difference between COMMIT and ROLLBACK in this case. There might be a negligible difference in the CPU time needed to determine that there's nothing to COMMIT versus the CPU time needed to determine that there's nothing to ROLLBACK. But, if it's a negligible difference, we can safely forget about it. However, it's worth pointing out that there's a difference between a session that does a bunch of queries in the context of a single transaction and a session that does the same queries in the context of a series of transactions. If a client starts a transaction, performs a query, performs a COMMIT or ROLLBACK, then starts a second transaction and performs a second query, there's no guarantee that the second query will observe the same database state as the first. Sometimes, maintaining a single consistent view of the data is of the essence. Sometimes, getting a more current view of the data is of the essence. It depends on what you are doing. I know, I know, the OP didn't ask this question. But some readers may be asking it in the back of their minds.
Is there a difference between commit and rollback in a transaction only having selects?
[ "", "sql", "oracle", "transactions", "commit", "rollback", "" ]
Does Windows have any decent sampling (e.g. non-instrumenting) profilers available? Preferably something akin to Shark on MacOS, although I am willing to accept that I am going to have to pay for such a profiler on Windows. I've tried the profiler in VS Team Suite and was not overly impressed, and was wondering if there were any other good ones. [Edit: Erk, I forgot to say this is for C/C++, rather than .NET -- sorry for any confusion]
[Intel VTune](http://www.intel.com/cd/software/products/asmo-na/eng/239144.htm) is good and is non-instrumenting. We evaluated a whole bunch of profilers for Windows, and this was the best for working with driver code (though it does unmanaged user level code as well). A particular strength is that it reads all the Intel processor performance counters, so you can get a good understanding of why your code is running slowly, and it was useful for putting prefetch instructions into our code and sorting out data layout to work well with the cache lines, and the way cache lines get invalidated in multi core systems. It is commercial, and I have to say it isn't the easiest UI in the world.
For Windows, check out the free [Xperf](http://blogs.msdn.com/pigscanfly/archive/2008/02/09/xperf-a-new-tool-in-the-windows-sdk.aspx) that ships with the Windows SDK. It uses sampled profile, has some useful UI, & does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like: Who is using the most CPU? Drill down to function name using call stacks. Who is allocating the most memory? Outstanding memory allocations (leaks) Who is doing the most registry queries? Disk writes? etc.
Decent profiler for Windows?
[ "", "c++", "windows", "performance", "profiling", "" ]
It appears that in PHP objects are passed by reference. Even assignment operators do not appear to be creating a copy of the Object. Here's a simple, contrived proof: ``` <?php class A { public $b; } function set_b($obj) { $obj->b = "after"; } $a = new A(); $a->b = "before"; $c = $a; //i would especially expect this to create a copy. set_b($a); print $a->b; //i would expect this to show 'before' print $c->b; //i would ESPECIALLY expect this to show 'before' ?> ``` In both print cases I am getting 'after' So, how do I pass **$a** to ***set\_b()*** by value, not by reference?
In PHP 5+ objects are passed by reference. In PHP 4 they are passed by value (that's why it had runtime pass by reference, which became deprecated). You can use the 'clone' operator in PHP5 to copy objects: ``` $objectB = clone $objectA; ``` Also, it's just objects that are passed by reference, not everything as you've said in your question...
The answers are commonly found in Java books. 1. cloning: If you don't override clone method, the default behavior is shallow copy. If your objects have only primitive member variables, it's totally ok. But in a typeless language with another object as member variables, it's a headache. 2. serialization/deserialization `$new_object = unserialize(serialize($your_object))` This achieves deep copy with a heavy cost depending on the complexity of the object.
How do I create a copy of an object in PHP?
[ "", "php", "copy", "clone", "shallow-copy", "" ]
How can I delete the contents of a local folder in Python? The current project is for Windows, but I would like to see \*nix also.
``` import os, shutil folder = '/path/to/folder' for filename in os.listdir(folder): file_path = os.path.join(folder, filename) try: if os.path.isfile(file_path) or os.path.islink(file_path): os.unlink(file_path) elif os.path.isdir(file_path): shutil.rmtree(file_path) except Exception as e: print('Failed to delete %s. Reason: %s' % (file_path, e)) ```
You can simply do this: ``` import os import glob files = glob.glob('/YOUR/PATH/*') for f in files: os.remove(f) ``` You can of course use another filter in your path, for example: /YOUR/PATH/\*.txt for removing all text files in a directory. (Note that `os.remove` only works on files; it will raise an error on subdirectories.)
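Another sketch, not from either answer, using `pathlib` (Python 3.4+): empty the folder without deleting the folder itself, treating symlinks to directories as links rather than recursing into them:

```python
import shutil
import tempfile
from pathlib import Path

def empty_folder(folder):
    """Delete everything inside `folder`, keeping `folder` itself."""
    for entry in Path(folder).iterdir():
        if entry.is_dir() and not entry.is_symlink():
            shutil.rmtree(entry)   # recurse into real subdirectories
        else:
            entry.unlink()         # plain files and symlinks

# Demonstration against a throwaway directory:
root = Path(tempfile.mkdtemp())
(root / 'a.txt').write_text('x')
(root / 'sub').mkdir()
(root / 'sub' / 'b.txt').write_text('y')
empty_folder(root)
print(list(root.iterdir()))  # []
```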
How to delete the contents of a folder?
[ "", "python", "file", "" ]
I ran into an interesting (and very frustrating) issue with the `equals()` method today which caused what I thought to be a well tested class to crash and cause a bug that took me a very long time to track down. Just for completeness, I wasn't using an IDE or debugger - just good old fashioned text editor and System.out's. Time was very limited and it was a school project. Anyhow - I was developing a basic shopping cart which could contain an *`ArrayList` of `Book` objects*. In order to implement the `addBook()`, `removeBook()`, and `hasBook()` methods of the Cart, I wanted to check if the `Book` already existed in the `Cart`. So off I go - ``` public boolean equals(Book b) { ... // More code here - null checks if (b.getID() == this.getID()) return true; else return false; } ``` All works fine in testing. I create 6 objects and fill them with data. Do many adds, removes, has() operations on the `Cart` and everything works fine. I read that you can *either have `equals(TYPE var)` or `equals(Object o) { (CAST) var }`* but assumed that since it was working, it didn't matter too much. Then I ran into a problem - I needed to create a `Book` object with *only* the `ID` in it from within the Book class. No other data would be entered into it. Basically the following: ``` public boolean hasBook(int i) { Book b = new Book(i); return hasBook(b); } public boolean hasBook(Book b) { // .. more code here return this.books.contains(b); } ``` All of a sudden, the `equals(Book b)` method no longer works. This took a VERY long time to track down without a good debugger and assuming the `Cart` class was properly tested and correct. After swaapping the `equals()` method to the following: ``` public boolean equals(Object o) { Book b = (Book) o; ... // The rest goes here } ``` Everything began to work again. Is there a reason the method decided not to take the Book parameter even though it clearly **was** a `Book` object? 
The only difference seemed to be it was instantiated from within the same class, and only filled with one data member. I'm very very confused. Please, shed some light?
In Java, the `equals()` method that is inherited from `Object` is: ``` public boolean equals(Object other); ``` In other words, the parameter must be of type `Object`. This is called *overriding*; your method `public boolean equals(Book other)` does what is called *overloading* to the `equals()` method. The `ArrayList` uses overridden `equals()` methods to compare contents (e.g. for its `contains()` and `equals()` methods), **not** overloaded ones. In most of your code, calling the one that didn't properly override `Object`'s equals was fine, but not compatible with `ArrayList`. So, not overriding the method correctly can cause problems. I override equals the following everytime: ``` @Override public boolean equals(Object other){ if (other == null) return false; if (other == this) return true; if (!(other instanceof MyClass)) return false; MyClass otherMyClass = (MyClass)other; ...test other properties here... } ``` The use of the `@Override` annotation can help a ton with silly mistakes. Use it whenever you think you are overriding a super class' or interface's method. That way, if you do it the wrong way, you will get a compile error.
If you use Eclipse, just go to the top menu: Source --> Generate equals() and hashCode().
Overriding the java equals() method - not working?
[ "", "java", "equals", "overriding", "" ]
Given a credit card number and no additional information, what is the best way in PHP to determine whether or not it is a valid number? Right now I need something that will work with American Express, Discover, MasterCard, and Visa, but it might be helpful if it will also work with other types.
There are three parts to the validation of the card number: 1. **PATTERN** - does it match an issuers pattern (e.g. VISA/Mastercard/etc.) 2. **CHECKSUM** - does it actually check-sum (e.g. not just 13 random numbers after "34" to make it an AMEX card number) 3. **REALLY EXISTS** - does it actually have an associated account (you are unlikely to get this without a merchant account) ## Pattern * MASTERCARD Prefix=51-55, Length=16 (Mod10 checksummed) * VISA Prefix=4, Length=13 or 16 (Mod10) * AMEX Prefix=34 or 37, Length=15 (Mod10) * Diners Club/Carte Prefix=300-305, 36 or 38, Length=14 (Mod10) * Discover Prefix=6011,622126-622925,644-649,65, Length=16, (Mod10) * etc. ([detailed list of prefixes](http://en.wikipedia.org/wiki/Bank_card_number#Issuer_identification_number_.28IIN.29)) ## Checksum Most cards use the Luhn algorithm for checksums: [Luhn Algorithm described on Wikipedia](http://en.wikipedia.org/wiki/Luhn_algorithm) There are links to many implementations on the Wikipedia link, including PHP: ``` <? /* Luhn algorithm number checker - (c) 2005-2008 shaman - www.planzero.org * * This code has been released into the public domain, however please * * give credit to the original author where possible. */ function luhn_check($number) { // Strip any non-digits (useful for credit card numbers with spaces and hyphens) $number=preg_replace('/\D/', '', $number); // Set the string length and parity $number_length=strlen($number); $parity=$number_length % 2; // Loop through each digit and do the maths $total=0; for ($i=0; $i<$number_length; $i++) { $digit=$number[$i]; // Multiply alternate digits by two if ($i % 2 == $parity) { $digit*=2; // If the sum is two digits, add them together (in effect) if ($digit > 9) { $digit-=9; } } // Total up the digits $total+=$digit; } // If the total mod 10 equals 0, the number is valid return ($total % 10 == 0) ? TRUE : FALSE; } ?> ```
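For illustration only — the question is about PHP, but the Luhn checksum above is small enough to sketch in Python as well, which can be handy for cross-checking an implementation; the numbers below are the standard Visa and Amex test numbers, not real accounts:

```python
def luhn_check(number: str) -> bool:
    """Return True if `number` passes the Luhn checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    # Double every second digit, counting from the right.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_check('4111 1111 1111 1111'))  # True
print(luhn_check('4111 1111 1111 1112'))  # False
```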
From [10 regular expressions you can't live without in PHP](http://web.archive.org/web/20080918014358/http://www.roughguidetophp.com/10-regular-expressions-you-just-cant-live-without-in-php/): ``` function check_cc($cc, $extra_check = false){ $cards = array( "visa" => "(4\d{12}(?:\d{3})?)", "amex" => "(3[47]\d{13})", "jcb" => "(35[2-8][89]\d\d\d{10})", "maestro" => "((?:5020|5038|6304|6579|6761)\d{12}(?:\d\d)?)", "solo" => "((?:6334|6767)\d{12}(?:\d\d)?\d?)", "mastercard" => "(5[1-5]\d{14})", "switch" => "(?:(?:(?:4903|4905|4911|4936|6333|6759)\d{12})|(?:(?:564182|633110)\d{10})(\d\d)?\d?)", ); $names = array("Visa", "American Express", "JCB", "Maestro", "Solo", "Mastercard", "Switch"); $matches = array(); $pattern = "#^(?:".implode("|", $cards).")$#"; $result = preg_match($pattern, str_replace(" ", "", $cc), $matches); if($extra_check && $result > 0){ $result = (validatecard($cc))?1:0; } return ($result>0)?$names[sizeof($matches)-2]:false; } ``` Sample input: ``` $cards = array( "4111 1111 1111 1111", ); foreach($cards as $c){ $check = check_cc($c, true); if($check!==false) echo $c." - ".$check; else echo "$c - Not a match"; echo "<br/>"; } ``` This gives us ``` 4111 1111 1111 1111 - Visa ```
What is the best way to validate a credit card in PHP?
[ "", "php", "validation", "e-commerce", "numbers", "credit-card", "" ]
When I parse my xml file (variable f) in this method, I get an error > C:\Documents and Settings\joe\Desktop\aicpcudev\OnlineModule\map.dtd (The system cannot find the path specified) I know I do not have the dtd, nor do I need it. How can I parse this File object into a Document object while ignoring DTD reference errors? ``` private static Document getDoc(File f, String docId) throws Exception{ DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse(f); return doc; } ```
A similar approach to the one suggested by [@anjanb](https://stackoverflow.com/questions/155101/make-documentbuilderparse-ignore-dtd-references#155330) ``` builder.setEntityResolver(new EntityResolver() { @Override public InputSource resolveEntity(String publicId, String systemId) throws SAXException, IOException { if (systemId.contains("foo.dtd")) { return new InputSource(new StringReader("")); } else { return null; } } }); ``` I found that simply returning an empty InputSource worked just as well?
Try setting features on the DocumentBuilderFactory: ``` DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setValidating(false); dbf.setNamespaceAware(true); dbf.setFeature("http://xml.org/sax/features/namespaces", false); dbf.setFeature("http://xml.org/sax/features/validation", false); dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false); dbf.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false); DocumentBuilder db = dbf.newDocumentBuilder(); ... ``` Ultimately, I think the options are specific to the parser implementation. [Here is some documentation for Xerces2](http://xerces.apache.org/xerces2-j/features.html) if that helps.
Make DocumentBuilder.parse ignore DTD references
[ "", "java", "document", "dtd", "" ]
I need to develop a generic jQuery-based search plugin for the ASP.NET MVC application I'm building, but I can't figure out how it's supposed to fit, or what the best practice is. I want to do the following: ``` $().ready(function() { $('#searchHolder').customSearch('MyApp.Models.User'); }); ``` As long as I have implemented a specific interface on Models.User, jQuery will be able to talk to a reflection service to generically construct the relevant UI. Sounds fun, but it seems that I'm now calling the JavaScript from the View, which is in turn going to do some View-related activity to build the search UI, and then to do the search and interact with the user it's going to throw a bunch of Controller tasks in there. So where does this really fit? Is there a different way I can structure my jQuery plugin so that it conforms more to the idea of MVC? Does MVC work when it scales down to its own form *within* another MVC structure? Should I just ignore these issues for the sake of one plugin?
Just to follow up (I'm very surprised nobody else has had any opinions on this), in an effort to keep best practice I've opted to adopt [jTemplates](http://jtemplates.tpython.com/). It enables me to request some Model-style JSON from my server-side Controller and process it using syntax similar to that I would already use in a View, which now keeps any required JavaScript UI MVC-compatible. There's a small overhead in that the client will need to request the View template from the server, but if that becomes too slow I can always sacrifice a little and send it over with the initial JSON request.
It sounds to me like what you want are `partials`, a RoR term, so I'm not sure that they exist in the same form in ASP.NET MVC. Basically a partial is a part of a View that's defined in its own file and can be called from anywhere. So in your search controller, you would pull out the Model asked for, do some reflection to get the data and construct it into JSON, and also grab the partial View for that model. You might find it easier if you follow a convention for naming the partials based on the Model name, to save you having any big `switch` statements or extra config files. I could be wrong, but it sounds like you're a bit worried about making a call to the Controller from JavaScript and getting HTML returned. That's perfectly OK; it's just a case of fetching the View appropriately and making sure you don't process the rest of the page, only what you need for that call (why MVC is so much better than `UpdatePanel`s!)
Where does jQuery UI fit in MVC?
[ "", "javascript", "model-view-controller", "jquery-plugins", "" ]
I could probably write this myself, but the specific way I'm trying to accomplish it is throwing me off. I'm trying to write a generic extension method similar to the others introduced in .NET 3.5 that will take a nested IEnumerable of IEnumerables (and so on) and flatten it into one IEnumerable. Anyone have any ideas? Specifically, I'm having trouble with the syntax of the extension method itself so that I can work on a flattening algorithm.
Hmm... I'm not sure *exactly* what you want here, but here's a "one level" option: ``` public static IEnumerable<TElement> Flatten<TElement,TSequence> (this IEnumerable<TSequence> sequences) where TSequence : IEnumerable<TElement> { foreach (TSequence sequence in sequences) { foreach(TElement element in sequence) { yield return element; } } } ``` If that's not what you want, could you provide the signature of what you do want? If you don't need a generic form, and you just want to do the kind of thing that LINQ to XML constructors do, that's reasonably simple - although the recursive use of iterator blocks is relatively inefficient. Something like: ``` static IEnumerable Flatten(params object[] objects) { // Can't easily get varargs behaviour with IEnumerable return Flatten((IEnumerable) objects); } static IEnumerable Flatten(IEnumerable enumerable) { foreach (object element in enumerable) { IEnumerable candidate = element as IEnumerable; if (candidate != null) { foreach (object nested in candidate) { yield return nested; } } else { yield return element; } } } ``` Note that that will treat a string as a sequence of chars, however - you may want to special-case strings to be individual elements instead of flattening them, depending on your use case. Does that help?
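As an aside (my own sketch, not part of this answer): the same "flatten recursively, but treat strings as atoms" idea translates directly into a Python generator, which may help readers comparing the two approaches:

```python
from collections.abc import Iterable

def flatten(items):
    """Recursively yield elements from arbitrarily nested iterables,
    treating strings and bytes as atomic values rather than as
    sequences of characters."""
    for item in items:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            yield from flatten(item)
        else:
            yield item

print(list(flatten([1, [2, [3, 'abc']], (4, 5)])))  # [1, 2, 3, 'abc', 4, 5]
```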
Here's an extension that might help. It will traverse all nodes in your hierarchy of objects and pick out the ones that match a criterion. It assumes that each object in your hierarchy **has a collection property** that holds its child objects. ## Here's the extension: ``` /// <summary> /// Traverses an object hierarchy and returns a flattened list of elements /// based on a predicate. /// </summary> /// <typeparam name="TSource">The type of object in your collection.</typeparam> /// <param name="source">The collection of your topmost TSource objects.</param> /// <param name="selectorFunction">A predicate for choosing the objects you want.</param> /// <param name="getChildrenFunction">A function that fetches the child collection from an object.</param> /// <returns>A flattened list of objects which meet the criteria in selectorFunction.</returns> public static IEnumerable<TSource> Map<TSource>( this IEnumerable<TSource> source, Func<TSource, bool> selectorFunction, Func<TSource, IEnumerable<TSource>> getChildrenFunction) { // Add what we have to the stack var flattenedList = source.Where(selectorFunction); // Go through the input enumerable looking for children, // and add those if we have them foreach (TSource element in source) { flattenedList = flattenedList.Concat( getChildrenFunction(element).Map(selectorFunction, getChildrenFunction) ); } return flattenedList; } ``` ## Examples (Unit Tests): First we need an object and a nested object hierarchy.
A simple node class ``` class Node { public int NodeId { get; set; } public int LevelId { get; set; } public IEnumerable<Node> Children { get; set; } public override string ToString() { return String.Format("Node {0}, Level {1}", this.NodeId, this.LevelId); } } ``` And a method to get a 3-level deep hierarchy of nodes ``` private IEnumerable<Node> GetNodes() { // Create a 3-level deep hierarchy of nodes Node[] nodes = new Node[] { new Node { NodeId = 1, LevelId = 1, Children = new Node[] { new Node { NodeId = 2, LevelId = 2, Children = new Node[] {} }, new Node { NodeId = 3, LevelId = 2, Children = new Node[] { new Node { NodeId = 4, LevelId = 3, Children = new Node[] {} }, new Node { NodeId = 5, LevelId = 3, Children = new Node[] {} } } } } }, new Node { NodeId = 6, LevelId = 1, Children = new Node[] {} } }; return nodes; } ``` First Test: flatten the hierarchy, no filtering ``` [Test] public void Flatten_Nested_Heirachy() { IEnumerable<Node> nodes = GetNodes(); var flattenedNodes = nodes.Map( p => true, (Node n) => { return n.Children; } ); foreach (Node flatNode in flattenedNodes) { Console.WriteLine(flatNode.ToString()); } // Make sure we only end up with 6 nodes Assert.AreEqual(6, flattenedNodes.Count()); } ``` This will show: ``` Node 1, Level 1 Node 6, Level 1 Node 2, Level 2 Node 3, Level 2 Node 4, Level 3 Node 5, Level 3 ``` Second Test: Get a list of nodes that have an even-numbered NodeId ``` [Test] public void Only_Return_Nodes_With_Even_Numbered_Node_IDs() { IEnumerable<Node> nodes = GetNodes(); var flattenedNodes = nodes.Map( p => (p.NodeId % 2) == 0, (Node n) => { return n.Children; } ); foreach (Node flatNode in flattenedNodes) { Console.WriteLine(flatNode.ToString()); } // Make sure we only end up with 3 nodes Assert.AreEqual(3, flattenedNodes.Count()); } ``` This will show: ``` Node 6, Level 1 Node 2, Level 2 Node 4, Level 3 ```
Recursive List Flattening
[ "", "c#", ".net", "recursion", "" ]
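The recursive flattening idea in the C# extension above is language-agnostic. Here is a minimal Python sketch of the same predicate-driven traversal (the tuple-based node structure and helper names are illustrative, not from the answers above); note that this version yields in depth-first order, whereas the `Concat`-based C# version emits each level's matches before descending.

```python
def flatten(nodes, predicate, get_children):
    """Depth-first traversal that yields every node matching the predicate."""
    for node in nodes:
        if predicate(node):
            yield node
        # Recurse into this node's children, if any
        yield from flatten(get_children(node), predicate, get_children)

# Hypothetical hierarchy: each node is a (node_id, children) tuple
tree = [(1, [(2, []), (3, [(4, []), (5, [])])]), (6, [])]

all_ids = [n[0] for n in flatten(tree, lambda n: True, lambda n: n[1])]
even_ids = [n[0] for n in flatten(tree, lambda n: n[0] % 2 == 0, lambda n: n[1])]
```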
I have some code that creates a fairly large XML DOM and writes it off to a file (up to 50-100 MB). It basically creates the DOM and then calls a toString on it and writes it out with ofstream. Is there a way to get streaming output of the generated DOM so that it doesn't create the whole structure in memory all at once and then copy it, etc.? I will not modify any node after I create it, so it can write it out and free up the memory right away. I could write my own XML class that does the XML construction but ... I don't think that's a good idea, since I'll probably miss something when it comes down to escaping, etc.
Ok, turns out libxml2 has a streaming API: <http://xmlsoft.org/examples/testWriter.c> It's a little old style (very C-ish) but you can write your wrapper around it.
I would recommend [GenX](http://www.tbray.org/ongoing/When/200x/2004/02/20/GenxStatus) as a streaming XML writer, I use this in [Programmer's Notepad](http://pnotepad.org/) and it works a treat, you can see examples of use in the source code. Extremely fast, and it produces good UTF-8 XML. Memory usage while you use it should remain roughly constant.
create and stream large XML document in C++
[ "", "c++", "xml", "" ]
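For comparison only — this is not the libxml2 API from the accepted answer — the same streaming-writer pattern exists in Python's standard library as `xml.sax.saxutils.XMLGenerator`: elements are written as you go, text is escaped for you, and no full DOM is ever built.

```python
import io
from xml.sax.saxutils import XMLGenerator

out = io.StringIO()  # in real use this would be an open file
gen = XMLGenerator(out, encoding="utf-8")

gen.startDocument()
gen.startElement("records", {})
for i in range(3):
    gen.startElement("record", {"id": str(i)})
    gen.characters("value with <special> & characters")  # escaped automatically
    gen.endElement("record")
gen.endElement("records")
gen.endDocument()

xml_text = out.getvalue()
```

Because each element is flushed as it is written, memory use stays flat no matter how many records you emit — the same property the libxml2 writer API gives you in C++.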
I am getting the following error when I put class files in subfolders of my App\_Code folder: error CS0246: The type or namespace name 'MyClassName' could not be found (are you missing a using directive or an assembly reference?) This class is not in a namespace at all. Any ideas?
You need to add codeSubDirectories to your compilation element in web.config ``` <configuration> <system.web> <compilation> <codeSubDirectories> <add directoryName="View"/> </codeSubDirectories> </compilation> </system.web> </configuration> ```
Check the BuildAction property of the file. It should be set to "Compile".
How come classes in subfolders in my App_Code folder are not being found correctly?
[ "", "c#", "asp.net", "namespaces", "app-code", "" ]
Has anyone else seen people do this: ``` private string _name; public string Name{ get{ return _name; } set{ _name = value;}} ``` I understand using accessors if you are going to exercise some sort of control over how it gets set or perform some sort of function on it when there is a get. But if you are just going to do this, why not just make the variable public to begin with? Am I missing something?
If you make the member a public field, then you can't later refactor it into a property without changing the interface to your class. If you expose it as a property from the very beginning, you can make whatever changes to the property accessor functions that you need and the class's interface remains unchanged. Note that as of C# 3.0, you can implement a property without creating a backing field, e.g.: ``` public string Name { get; set; } ``` This removes what is pretty much the only justification for not implementing public fields as properties in the first place.
If you define a public interface with a property in assembly A, you could then use this interface in assembly B. Now, you can change the property's implementation (maybe fetching the value from a database instead of storing it in a field). Then you can recompile assembly A, and replace an older one. Assembly B would carry on fine because the interface wouldn't have changed. However, if you'd started off initially with a public field, and decided this wasn't suitable and wanted to change the implementation and to do that you needed to convert it to a property, then this would mean you'd have to change assembly A's public interface. Any clients of that interface (including assembly B) would also have to be recompiled and replaced to be able to work with this new interface. So, you're better off starting with a property initially. This encapsulates the implementation of the property, leaving you free to change it in the future without having to worry what clients (including assembly B) are already out in the world using assembly A. Because, if there are any clients already out in the world making use of assembly A, changing the interface would break all clients. If they're used by another team in your company, or another company, then they are going to be not happy if you break their assemblies by changing the interface of yours!
public variables vs private variables with accessors
[ "", "c#", "oop", "accessor", "" ]
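The refactoring argument in the accepted answer isn't unique to C#. As a side-by-side illustration, Python's `@property` lets the same "field today, logic tomorrow" upgrade happen without touching callers (class and names here are hypothetical):

```python
class Person:
    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        """Reads like a plain field, but can grow logic without breaking callers."""
        return self._name

    @name.setter
    def name(self, value):
        # Validation added later; callers still just assign p.name = ...
        if not value:
            raise ValueError("name must not be empty")
        self._name = value

p = Person("Alice")
p.name = "Bob"      # setter runs transparently
current = p.name
```

One caveat: Python attribute access uses the same syntax either way, so unlike .NET there is no binary-interface break when a field becomes a property — but the encapsulation reasoning is identical.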
``` <div id="myDiv"> <a>...</a> <a>...</a> <a>...</a> <a>...</a> <a>...</a> <a>...</a> </div> ``` If you wanted to select the 2nd, 3rd and 4th `a` tags in the above example, how would you do that? The only thing I can think of is: ``` $("#myDiv a:eq(1), #myDiv a:eq(2), #myDiv a:eq(3)") ``` But that doesn't look to be very efficient or pretty. I guess you could also select ALL the `a`s and then do run `.each` over them, but that could get very inefficient if there were a lot more `a`s.
The jQuery [slice()](http://docs.jquery.com/Traversing/slice) function selects a subset of the matched elements, given the index of the first needed element and the index just past the last one. Note that the element at the end index itself is not included. In your particular case you should use

```
$("#myDiv a").slice(1, 4)
```
Using the [.slice()](http://docs.jquery.com/Traversing/slice) function does exactly what I need.
How to select a range of elements in jQuery
[ "", "javascript", "jquery", "" ]
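The end-exclusive semantics of jQuery's `.slice(1, 4)` mirror ordinary Python list slicing, which may make the off-by-one easier to remember:

```python
links = ["a0", "a1", "a2", "a3", "a4", "a5"]

# Elements at indexes 1, 2, 3 — index 4 is excluded, just like jQuery's slice(1, 4)
subset = links[1:4]
```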
Let me start with a specific example of what I'm trying to do. I have an array of year, month, day, hour, minute, second and millisecond components in the form `[ 2008, 10, 8, 00, 16, 34, 254 ]`. I'd like to instantiate a Date object using the following standard constructor: ``` new Date(year, month, date [, hour, minute, second, millisecond ]) ``` How can I pass my array to this constructor to get a new Date instance? *[ **Update**: My question actually extends beyond this specific example. I'd like a general solution for built-in JavaScript classes like Date, Array, RegExp, etc. whose constructors are beyond my reach. ]* I'm trying to do something like the following: ``` var comps = [ 2008, 10, 8, 00, 16, 34, 254 ]; var d = Date.prototype.constructor.apply(this, comps); ``` I probably need a "`new`" in there somewhere. The above just returns the current time as if I had called "`(new Date()).toString()`". I also acknowledge that I may be completely in the wrong direction with the above :) ***Note***: No `eval()` and no accessing the array items one by one, please. I'm pretty sure I should be able to use the array as is. --- ## Update: Further Experiments Since no one has been able to come up with a working answer yet, I've done more playing around. Here's a new discovery. I can do this with my own class: ``` function Foo(a, b) { this.a = a; this.b = b; this.toString = function () { return this.a + this.b; }; } var foo = new Foo(1, 2); Foo.prototype.constructor.apply(foo, [4, 8]); document.write(foo); // Returns 12 -- yay! ``` But it doesn't work with the intrinsic Date class: ``` var d = new Date(); Date.prototype.constructor.call(d, 1000); document.write(d); // Still returns current time :( ``` Neither does it work with Number: ``` var n = new Number(42); Number.prototype.constructor.call(n, 666); document.write(n); // Returns 42 ``` Maybe this just isn't possible with intrinsic objects? I'm testing with Firefox BTW.
I've done more investigation of my own and came up with the conclusion that **this is an impossible feat**, due to how the Date class is implemented. I've inspected the [SpiderMonkey](http://www.mozilla.org/js/spidermonkey/) source code to see how Date was implemented. I think it all boils down to the following few lines: ``` static JSBool Date(JSContext *cx, JSObject *obj, uintN argc, jsval *argv, jsval *rval) { jsdouble *date; JSString *str; jsdouble d; /* Date called as function. */ if (!(cx->fp->flags & JSFRAME_CONSTRUCTING)) { int64 us, ms, us2ms; jsdouble msec_time; /* NSPR 2.0 docs say 'We do not support PRMJ_NowMS and PRMJ_NowS', * so compute ms from PRMJ_Now. */ us = PRMJ_Now(); JSLL_UI2L(us2ms, PRMJ_USEC_PER_MSEC); JSLL_DIV(ms, us, us2ms); JSLL_L2D(msec_time, ms); return date_format(cx, msec_time, FORMATSPEC_FULL, rval); } /* Date called as constructor. */ // ... (from here on it checks the arg count to decide how to create the date) ``` When Date is used as a function (either as `Date()` or `Date.prototype.constructor()`, which are exactly the same thing), it defaults to returning the current time as a string in the locale format. This is regardless of any arguments that are passed in: ``` alert(Date()); // Returns "Thu Oct 09 2008 23:15:54 ..." alert(typeof Date()); // Returns "string" alert(Date(42)); // Same thing, "Thu Oct 09 2008 23:15:54 ..." alert(Date(2008, 10, 10)); // Ditto alert(Date(null)); // Just doesn't care ``` I don't think there's anything that can be done at the JS level to circumvent this. And this is probably the end of my pursuit in this topic. 
I've also noticed something interesting: ``` /* Set the value of the Date.prototype date to NaN */ proto_date = date_constructor(cx, proto); if (!proto_date) return NULL; *proto_date = *cx->runtime->jsNaN; ``` `Date.prototype` is a Date instance with the internal value of `NaN` and therefore, ``` alert(Date.prototype); // Always returns "Invalid Date" // on Firefox, Opera, Safari, Chrome // but not Internet Explorer ``` IE doesn't disappoint us. It does things a bit differently and probably sets the internal value to `-1` so that Date.prototype always returns a date slightly before epoch. --- ## Update I've finally dug into ECMA-262 itself and it turns out, what I'm trying to achieve (with the Date object) is -- by definition -- not possible: > **15.9.2 The Date Constructor Called as a Function** > > When Date is called as a > function rather than as a constructor, > it returns a string representing the > current time (UTC). > > **NOTE** The function > call `Date(…)` is not equivalent to the > object creation expression `new Date(…)` > with the same arguments. > > **15.9.2.1 Date ( [ year [, month [, date [, hours [, minutes [, seconds [, > ms ] ] ] ] ] ] ] )** > > All of the > arguments are optional; any arguments > supplied are accepted but are > completely ignored. A string is > created and returned as if by the > expression `(new Date()).toString()`.
I'd hardly call this elegant, but in my testing (FF3, Saf4, IE8) it works: `var arr = [ 2009, 6, 22, 10, 30, 9 ];` Instead of this: `var d = new Date( arr[0], arr[1], arr[2], arr[3], arr[4], arr[5] );` Try this: `var d = new Date( Date.UTC.apply( window, arr ) + ( (new Date()).getTimezoneOffset() * 60000 ) );`
Instantiating a JavaScript object by calling prototype.constructor.apply
[ "", "javascript", "reflection", "" ]
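What the question tries to do — spread an array of components into a constructor — is directly supported in some other languages, which puts the JavaScript limitation in perspective. In Python, for example, argument unpacking does exactly this (note the differences: Python months are 1-based, and the last field is microseconds rather than milliseconds):

```python
from datetime import datetime

# Year, month, day, hour, minute, second, microsecond
comps = [2008, 10, 8, 0, 16, 34, 254 * 1000]  # 254 ms expressed as microseconds

d = datetime(*comps)  # the equivalent of "applying" the constructor to the list
```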
What am I doing wrong here? ``` string q = "john s!"; string clean = Regex.Replace(q, @"([^a-zA-Z0-9]|^\s)", string.Empty); // clean == "johns". I want "john s"; ```
Just an FYI,

```
string clean = Regex.Replace(q, @"[^a-zA-Z0-9\s]", string.Empty);
```

would actually be better written as

```
string clean = Regex.Replace(q, @"[^\w\s]", string.Empty);
```
This: ``` string clean = Regex.Replace(dirty, "[^a-zA-Z0-9\x20]", String.Empty); ``` > > **\x20** is ascii hex for 'space' character you can add more individual characters that you want to be allowed. If you want for example **"?"** to be ok in the return string add **\x3f**.
Regex to match alphanumeric and spaces
[ "", "c#", "regex", "" ]
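The fix translates directly to other regex flavors. A Python sketch of the corrected character class, with the usual caveat that `\w` also matches underscores:

```python
import re

q = "john s!"

# Keep letters, digits, and whitespace; strip everything else
clean = re.sub(r"[^a-zA-Z0-9\s]", "", q)

# Shorter equivalent — but note \w also preserves '_'
clean_w = re.sub(r"[^\w\s]", "", q)
```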
How can I use HttpWebRequest (.NET, C#) asynchronously?
Use [`HttpWebRequest.BeginGetResponse()`](http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.begingetresponse.aspx) ``` HttpWebRequest webRequest; void StartWebRequest() { webRequest.BeginGetResponse(new AsyncCallback(FinishWebRequest), null); } void FinishWebRequest(IAsyncResult result) { webRequest.EndGetResponse(result); } ``` The callback function is called when the asynchronous operation is complete. You need to at least call [`EndGetResponse()`](http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.endgetresponse.aspx) from this function.
By far the easiest way is by using [TaskFactory.FromAsync](http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskfactory.fromasync.aspx) from the [TPL](http://msdn.microsoft.com/en-us/library/dd460717.aspx). It's literally a couple of lines of code when used in conjunction with the new [async/await](http://msdn.microsoft.com/en-us/library/hh191443.aspx) keywords: ``` var request = WebRequest.Create("http://www.stackoverflow.com"); var response = (HttpWebResponse) await Task.Factory .FromAsync<WebResponse>(request.BeginGetResponse, request.EndGetResponse, null); Debug.Assert(response.StatusCode == HttpStatusCode.OK); ``` If you can't use the C#5 compiler then the above can be accomplished using the [Task.ContinueWith](http://msdn.microsoft.com/en-us/library/system.threading.tasks.task.continuewith.aspx) method: ``` Task.Factory.FromAsync<WebResponse>(request.BeginGetResponse, request.EndGetResponse, null) .ContinueWith(task => { var response = (HttpWebResponse) task.Result; Debug.Assert(response.StatusCode == HttpStatusCode.OK); }); ```
How to use HttpWebRequest (.NET) asynchronously?
[ "", "c#", ".net", "asynchronous", "httprequest", "" ]
I have a couple of solutions, but none of them work perfectly.

**Platform**

1. ASP.NET / VB.NET / .NET 2.0
2. IIS 6
3. IE6 (primarily), with some IE7; Firefox not necessary, but useful

*Allowed 3rd Party Options*

1. Flash
2. ActiveX (would like to avoid)
3. Java (would like to avoid)

**Current Attempts**

*Gmail Style*: You can use javascript to add new Upload elements (input type='file'), then upload them all at once with the click of a button. This works, but still requires a lot of clicks. (I was able to use an invisible ActiveX control to detect things like File Size, which would be useful.)

*Flash Uploader*: I discovered a couple of Flash Upload controls that use a 1x1 flash file to act as the uploader, callable by javascript. (One such control is [FancyUpload](http://digitarald.de/project/fancyupload/), another is [Dojo's Multiple File Uploader](http://www.sitepen.com/blog/2008/09/02/the-dojo-toolkit-multi-file-uploader/), yet another is one by [darick\_c at CodeProject](http://www.codeproject.com/KB/aspnet/FlashUpload.aspx).) These excited me, but I quickly ran into two issues:

1. Flash 10 will break the functionality that is used to call the multiple file upload dialogue box. The workaround is to use a transparent flash frame, or just use a flash button to call the dialogue box. That's not a huge deal.
2. The integrated windows authentication used on our intranet is not used when the Flash file attempts to upload the files, prompting the user for credentials. The workaround for this is to use cookieless sessions, which would be a nightmare for our project due to several other reasons.

*Java Uploader*: I noticed several Java-based multiple-file uploaders, but most of them appear to cost money. If I found one that worked really well, I could arrange to purchase it. I'd just rather not. I also don't like the look of most of them. I liked FancyUpload because it interacted with html/javascript so that I could easily style and manage it any way I want. 
*ActiveX Uploader*: I found [an ActiveX solution](http://support.persits.com/xupload/demo1.asp) as well. It appears that ActiveX will work. I would just write my own instead of buying that one. This will be my last resort, I think. **Resolution** I would love to be able to use something like FancyUpload. If I can just get by the credentials prompt some way, it would be perfect. But, from my research, it appears that the only real workaround is cookieless sessions, which I just can't do. So, the question is: Is there a way to resolve the issues presented above OR is there a different solution that I have not listed which accomplishes the same goal?
[@davidinbcn.myopenid.co](https://stackoverflow.com/questions/50315/how-do-you-allow-multiple-file-uploads-on-an-internal-windows-authentication-in#70521): That's basically how I solved this issue. But, in an effort to provide a more detailed answer, I'm posting my solution here. **The Solution!** Create two web applications, or websites, or whatever. **Application A** is a simple web application. The purpose of this application is to receive file uploads and save them to the proper place. Set this up as an anonymous access allowed. Then make a single ASPX page that accepts posted files and saves them to a given location. (I'm doing this on an intranet. Internet sites may be exposing themselves to security issues by doing this. Take extra precautions if that is the case.) The code behind for this page would look something like this: ``` Dim uploads As HttpFileCollection = HttpContext.Current.Request.Files If uploads.Count > 0 Then UploadFiles(uploads) Else result = "error" err = "File Not Uploaded" End If ``` **Application B** is your primary site that will allow file uploads. Set this up as an authenticated web application that does not allow anonymous access. Then, place the [FancyUpload](http://digitarald.de/journal/54706744/fancyupload-for-flash-10/#comments) (or similar solution) on a page on this site. Configure it to post its files to Application A's upload ASPX page.
I don't think there is any workaround for the integrated windows authentication. What you could possibly do is save the files to a generic unprotected folder and, in the case of swfupload, use a handler to move the file when it's fully uploaded.
How do you allow multiple file uploads on an internal windows-authentication intranet?
[ "", "javascript", "flash", "authentication", "activex", "file-upload", "" ]
i have a wordpress blog and want to give people the same user experience for adding comments that is in stackoverflow. There are a number of comments ajax plugins out there but i can't find a working one that allows you to inline on the main page, go in and add comments without first drilling down into a seperate single post page. Can anyone help here with either a wordpress plugin or php code to do this.
I was never able to get AJAXed Wordpress to do what me (and apparently the questioner) want to do. I use a custom solution that makes use of a plug-in called [Inline Ajax Comments](http://kashou.net/blog/inline-ajax-comments). I had a heck of a time finding a download link, but here's one that still works: <http://kashou.net/files/inline-ajax-comments.zip> In WordPress' theme editor, I edit index.html. After the following: ``` <?php the_content(''); ?> ``` I add (after enabling the plug-in of course): ``` <?php ajax_comments_link(); ?> <?php ajax_comments_div(); ?> ``` I then edited the plugin PHP file itself. I commented out blocks of code as follows: ``` if ($comment_count == '1') { echo('<span id="show-inline-comments-'. $id .'"> '); /* echo('<a href="javascript:;" id="show-inline-comments-link-'. $id .'" onmouseup="ajaxShowComments('. $id .', \''. $throbberURL .'\', \''. $commentpageURL .'\'); return false;">show comment &raquo;</a>'); */ echo('</span>'); echo('<span id="hide-inline-comments-'. $id .'" style="display: none;"> '); /* echo('<a href="#comments-'. $id .'" onmouseup="ajaxHideComments('. $id .', \''. $throbberURL .'\', \''. $commentpageURL .'\'); return true;">&laquo; hide comment</a>'); */ echo('</span>'); } else if ($comment_count > '1') { echo('<span id="show-inline-comments-'. $id .'"> '); /* echo('<a href="javascript:;" id="show-inline-comments-link-'. $id .'" onmouseup="ajaxShowComments('. $id .', \''. $throbberURL .'\', \''. $commentpageURL .'\'); return false;">show comments &raquo;</a>'); */ echo('</span>'); echo('<span id="hide-inline-comments-'. $id .'" style="display: none;"> '); /* echo('<a href="#comments-'. $id .'" onmouseup="ajaxHideComments('. $id .', \''. $throbberURL .'\', \''. $commentpageURL .'\'); return true;">&laquo; hide comments</a>'); */ echo('</span>'); } ``` IIRC, that's all I had to do, but let me know if that doesn't work for you. 
I'm trying to reverse engineer my own solution since it seems to be exactly what you want to do as well.
I think [AJAXed Wordpress](http://wordpress.org/extend/plugins/ajaxd-wordpress/) does what you're looking for, among other things: ### [AJAXed Wordpress](http://anthologyoi.com/awp) > AJAXed Wordpress (AWP) harnesses the power of both AJAX and Wordpress to improve > the user experience, the administration capabilities and the design potential of > any Wordpress based blog. It works on all WordPress versions from 2.1 - 2.6. > > Some of AWP’s features include loading posts inline, **inline comments**, threaded > comments, AJAX comment submission, AJAX Navigation, live comment preview and much > more. AWP is endlessly customizable and extensible. Even though AWP provides many > features, you are never forced to use features that you don’t want. All aspects of > the plugin are easily customized through a single Administration panel. Demo is available here <http://wordpress.mu/> and you can see the inline comments in action. Looks like what you were asking for.
wordpress - having comments inline ajax like in stackoverflow
[ "", "php", "ajax", "wordpress", "comments", "" ]
How do you detect if `Socket#close()` has been called on a socket on the remote side?
The `isConnected` method won't help, it will return `true` even if the remote side has closed the socket. Try this: ``` public class MyServer { public static final int PORT = 12345; public static void main(String[] args) throws IOException, InterruptedException { ServerSocket ss = ServerSocketFactory.getDefault().createServerSocket(PORT); Socket s = ss.accept(); Thread.sleep(5000); ss.close(); s.close(); } } public class MyClient { public static void main(String[] args) throws IOException, InterruptedException { Socket s = SocketFactory.getDefault().createSocket("localhost", MyServer.PORT); System.out.println(" connected: " + s.isConnected()); Thread.sleep(10000); System.out.println(" connected: " + s.isConnected()); } } ``` Start the server, start the client. You'll see that it prints "connected: true" twice, even though the socket is closed the second time. The only way to really find out is by reading (you'll get -1 as return value) or writing (an `IOException` (broken pipe) will be thrown) on the associated Input/OutputStreams.
Since the answers deviate I decided to test this and post the result - including the test example. The server here just writes data to a client and does not expect any input. The server: ``` ServerSocket serverSocket = new ServerSocket(4444); Socket clientSocket = serverSocket.accept(); PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true); while (true) { out.println("output"); if (out.checkError()) System.out.println("ERROR writing data to socket !!!"); System.out.println(clientSocket.isConnected()); System.out.println(clientSocket.getInputStream().read()); // thread sleep ... // break condition , close sockets and the like ... } ``` * clientSocket.isConnected() returns always true once the client connects (and even after the disconnect) weird !! * getInputStream().read() + makes the thread wait for input as long as the client is connected and therefore makes your program not do anything - except if you get some input + returns -1 if the client disconnected * **out.checkError() is true as soon as the client is disconnected so I recommend this**
How to detect a remote side socket close?
[ "", "java", "networking", "sockets", "tcp", "" ]
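The accepted answer's central point — only a read can reveal the remote close — is easy to demonstrate in any socket API. Here is a Python sketch using a local socket pair, so no network setup is involved; the empty read is the only close signal, exactly as with Java's `read()` returning -1:

```python
import socket

a, b = socket.socketpair()   # two ends of one connection, no network needed

b.sendall(b"last words")
b.close()                    # the "remote" side closes its socket

chunks = []
while True:
    chunk = a.recv(1024)
    if chunk == b"":         # an empty read means the peer closed the connection
        break
    chunks.append(chunk)
a.close()

data = b"".join(chunks)
```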
I call my JavaScript function. Why do I *sometimes* get the error 'myFunction is not defined' when it *is* defined? For example. I'll occasionally get 'copyArray is not defined' even in this example: ``` function copyArray( pa ) { var la = []; for (var i=0; i < pa.length; i++) la.push( pa[i] ); return la; } Function.prototype.bind = function( po ) { var __method = this; var __args = []; // Sometimes errors -- in practice I inline the function as a workaround. __args = copyArray( arguments ); return function() { /* bind logic omitted for brevity */ } } ``` As you can see, copyArray is defined *right there*, so this can't be about the order in which script files load. I've been getting this in situations that are harder to work around, where the calling function is located in another file that *should* be loaded after the called function. But this was the simplest case I could present, and appears to be the same problem. It doesn't happen 100% of the time, so I do suspect some kind of load-timing-related problem. But I have no idea what. @Hojou: That's part of the problem. The function in which I'm now getting this error is itself my addLoadEvent, which is basically a standard version of the common library function. @James: I understand that, and there is no syntax error in the function. When that is the case, the syntax error is reported as well. In this case, I am getting only the 'not defined' error. @David: The script in this case resides in an external file that is referenced using the normal <script src="file.js"></script> method in the page's head section. @Douglas: Interesting idea, but if this were the case, how could we *ever* call a user-defined function with confidence? In any event, I tried this and it didn't work. @sk: This technique has been tested across browsers and is basically copied from the [Prototype](http://en.wikipedia.org/wiki/Prototype_JavaScript_Framework) library.
It shouldn't be possible for this to happen if you're just including the scripts on the page. The "copyArray" function should always be available when the JavaScript code starts executing no matter if it is declared before or after it -- unless you're loading the JavaScript files in dynamically with a dependency library. There are all sorts of problems with timing if that's the case.
I had this function not being recognized as defined in latest Firefox for Linux, though Chromium was dealing fine with it. What happened in my case was that I had a former `SCRIPT` block, before the block that defined the function with problem, stated in the following way: ``` <SCRIPT src="mycode.js"/> ``` (That is, without the closing tag.) I had to redeclare this block in the following way. ``` <SCRIPT src="mycode.js"></SCRIPT> ``` And then what followed worked fine... weird huh?
Why is my JavaScript function sometimes "not defined"?
[ "", "javascript", "" ]
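The principle the answers above rely on — a function body looks its helpers up when it *runs*, not when it is defined — holds in Python too, which makes for a compact illustration (names here echo the question but are otherwise hypothetical):

```python
def copy_array(pa):
    # make_list is resolved at call time, so a helper defined
    # later in the file is perfectly fine
    return make_list(pa)

def make_list(items):
    return list(items)

copied = copy_array((1, 2, 3))  # works: make_list exists by the time we call
```

The "sometimes not defined" symptom in the question therefore points at load timing across files, not at definition order within one file.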
Is there a way in PHP to make HTTP calls and not wait for a response? I don't care about the response, I just want to do something like `file_get_contents()`, but not wait for the request to finish before executing the rest of my code. This would be super useful for setting off "events" of a sort in my application, or triggering long processes. Any ideas?
You can do trickery by using exec() to invoke something that can do HTTP requests, like `wget`, but you must direct all output from the program to somewhere, like a file or /dev/null, otherwise the PHP process will wait for that output. If you want to separate the process from the apache thread entirely, try something like (I'm not sure about this, but I hope you get the idea): ``` exec('bash -c "wget -O (url goes here) > /dev/null 2>&1 &"'); ``` It's not a nice business, and you'll probably want something like a cron job invoking a heartbeat script which polls an actual database event queue to do real asynchronous events.
The answer I'd previously accepted didn't work. It still waited for responses. This does work though, taken from [How do I make an asynchronous GET request in PHP?](https://stackoverflow.com/questions/962915/how-do-i-make-an-asynchronous-get-request-in-php) ``` function post_without_wait($url, $params) { foreach ($params as $key => &$val) { if (is_array($val)) $val = implode(',', $val); $post_params[] = $key.'='.urlencode($val); } $post_string = implode('&', $post_params); $parts=parse_url($url); $fp = fsockopen($parts['host'], isset($parts['port'])?$parts['port']:80, $errno, $errstr, 30); $out = "POST ".$parts['path']." HTTP/1.1\r\n"; $out.= "Host: ".$parts['host']."\r\n"; $out.= "Content-Type: application/x-www-form-urlencoded\r\n"; $out.= "Content-Length: ".strlen($post_string)."\r\n"; $out.= "Connection: Close\r\n\r\n"; if (isset($post_string)) $out.= $post_string; fwrite($fp, $out); fclose($fp); } ```
How to make HTTP requests in PHP and not wait on the response
[ "", "php", "http", "asynchronous", "" ]
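The raw request assembled in the accepted answer can be reproduced in any language. As an illustration, this Python helper (names are mine, not from the answer) builds the same byte string, which makes the header bookkeeping easy to check without opening a socket:

```python
from urllib.parse import urlencode, urlparse

def build_post_request(url, params):
    """Build the raw HTTP/1.1 POST bytes the PHP answer writes to its socket."""
    parts = urlparse(url)
    body = urlencode(params)
    head = (
        f"POST {parts.path or '/'} HTTP/1.1\r\n"
        f"Host: {parts.hostname}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: Close\r\n\r\n"
    )
    return (head + body).encode("ascii")

raw = build_post_request("http://example.com/notify", {"event": "ping"})
```

Writing these bytes and closing the socket without reading is what makes the technique "fire and forget" — the same trick works with Python's `socket` module.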
We use an IBM database known as Universe that holds all of our user id's, passwords, and profile information in a table called USERINFO. Can I use the Membership Provider to connect to this database and authenticate the user? The database access is actually through a web service since we don't have a direct connect to the database. We have a web service method called GetUserInfo which accepts a parameter of username. The method will return the password and profile information.
As mentioned above, you'll need to create a custom membership provider, which is fairly straightforward. You'll create a .NET class that inherits from System.Web.Security.MembershipProvider. There are several methods that need to be overridden in your class, but most are not even used by the MVC account controller. The main method you'll want to override is ValidateUser(username, password), which will get a user logged in. After you've implemented your class you'll need to register it in web.config, which is easy as well. You can find a sample for a custom provider here: <http://msdn.microsoft.com/en-us/library/6tc47t75(VS.80).aspx> And a tutorial for the entire process here: <http://www.15seconds.com/issue/050216.htm> Keep in mind that the process for making a custom provider for MVC is the same as for a standard ASP.NET web site; however, MVC does not fully utilize all methods of the MembershipProvider class, so it's much easier to implement.
You'll have to create a custom provider for that. It isn't very hard, as long as you can access the web service without an issue.
What is the best way to handle authentication in ASP.NET MVC with a Universe database?
[ "", "c#", "asp.net-mvc", "u2", "universe", "" ]
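The provider pattern described above — a thin class adapting your authentication store to a fixed interface — is worth sketching on its own. Here it is in Python with a stubbed web-service client standing in for the GetUserInfo call; every name is hypothetical, and a real implementation would compare salted hashes rather than plaintext passwords:

```python
class UniverseMembershipProvider:
    """Validates users against a GetUserInfo-style web service."""

    def __init__(self, service):
        self.service = service  # anything exposing get_user_info(username)

    def validate_user(self, username, password):
        info = self.service.get_user_info(username)
        if info is None:
            return False
        # Illustration only: real code should compare salted hashes
        return info["password"] == password

class FakeService:
    """Stand-in for the web service, so the sketch runs without a network."""
    def get_user_info(self, username):
        return {"password": "s3cret"} if username == "jdoe" else None

provider = UniverseMembershipProvider(FakeService())
ok = provider.validate_user("jdoe", "s3cret")
bad = provider.validate_user("jdoe", "wrong")
```

Swapping `FakeService` for a real client is the only change needed — the same seam a custom `MembershipProvider` gives you in .NET.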
In IE when I insert text into a `<pre>` tag the newlines are ignored: ``` <pre id="putItHere"></pre> <script> function putText() { document.getElementById("putItHere").innerHTML = "first line\nsecond line"; } </script> ``` Using `\r\n` instead of a plain `\n` does not work. `<br/>` does work but inserts an extra blank line in FF, which is not acceptable for my purposes.
These [quirksmode.org bug report and comments](http://www.quirksmode.org/bugreports/archives/2004/11/innerhtml_and_t.html) about innerHTML behaviour of Internet Explorer could help: "*IE applies **HTML normalization** to the data that is assigned to the innerHTML property. This causes incorrect display of whitespace in elements that ought to preserve formatting, such as <pre> and <textarea>.*"
Does this work in IE? ``` document.getElementById("putItHere") .appendChild(document.createTextNode("first line\nsecond line")); ``` I tested it with Firefox and it works. :-)
Inserting a newline into a pre tag (IE, Javascript)
[ "", "javascript", "html", "internet-explorer", "pre", "" ]
I have a webpage that pulls information from a database, converts it to .csv format, and writes the file to the HTTPResponse. ``` string csv = GetCSV(); Response.Clear(); Response.ContentType = "text/csv"; Response.Write(csv); ``` This works fine, and the file is sent to the client with no problems. However, when the file is sent to the client, the name of the current page is used, instead of a more friendly name (like "data.csv"). ![alt text](https://ktrauberman.files.wordpress.com/2008/10/exportcsv.gif) My question is, how can I change the name of the file that is written to the output stream without writing the file to disk and redirecting the client to the file's url? **EDIT: Thanks for the responses guys. I got 4 of the same response, so I just chose the first one as the answer.**
I believe this will work for you. ``` Response.AddHeader("content-disposition", "attachment; filename=NewFileName.csv"); ```
You just need to set the **Content-Disposition** header ``` Content-Disposition: attachment; filename=data.csv ``` --- This Microsoft Support article has some good information [How To Raise a "File Download" Dialog Box for a Known MIME Type](http://support.microsoft.com/kb/260519)
Change name of file sent to client?
[ "", "c#", "asp.net", "http", "" ]
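The `Content-Disposition` technique above is language-agnostic. As a rough sketch (in Python rather than C#, with a hypothetical helper name), the headers a response needs in order to rename the download might look like this:

```python
def csv_download_headers(filename, body):
    """Build response headers that make the browser save `body`
    under `filename` instead of the page's own URL name.
    (Hypothetical helper; the header names follow the HTTP spec.)"""
    return [
        ("Content-Type", "text/csv"),
        # This header is what controls the suggested file name.
        ("Content-Disposition", 'attachment; filename="%s"' % filename),
        ("Content-Length", str(len(body))),
    ]
```

The same three headers, emitted by whatever framework you use, produce the "Save as data.csv" dialog shown in the question.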
If I load the nextimg URL manually in the browser, it gives a new picture every time I reload. But this bit of code shows the same image every iteration of `draw()`. How can I force myimg not to be cached? ``` <html> <head> <script type="text/javascript"> function draw(){ var canvas = document.getElementById('canv'); var ctx = canvas.getContext('2d'); var rx; var ry; var i; myimg = new Image(); myimg.src = 'http://ohm:8080/cgi-bin/nextimg' rx=Math.floor(Math.random()*100)*10 ry=Math.floor(Math.random()*100)*10 ctx.drawImage(myimg,rx,ry); window.setTimeout('draw()',0); } </script> </head> <body onload="draw();"> <canvas id="canv" width="1024" height="1024"></canvas> </body> </html> ```
That actually sounds like a bug in the browser -- you could file it at <http://bugs.webkit.org> if it's in Safari or <https://bugzilla.mozilla.org/> for Firefox. Why do I say potential browser bug? Because the browser realises it should not be caching on reload, yet it does give you a cached copy of the image when you request it programmatically. That said, are you sure you're actually drawing anything? The Canvas.drawImage API will not wait for an image to load, and is spec'd not to draw if the image has not completely loaded when you try to use it. A better practice is something like: ``` var myimg = new Image(); myimg.onload = function() { var rx=Math.floor(Math.random()*100)*10 var ry=Math.floor(Math.random()*100)*10 ctx.drawImage(myimg,rx,ry); window.setTimeout(draw,0); } myimg.src = 'http://ohm:8080/cgi-bin/nextimg' ``` (You can also just pass `draw` as an argument to setTimeout rather than using a string, which will save reparsing and compiling the same string over and over again.)
The easiest way is to sling an ever-changing querystring onto the end: ``` var url = 'http://.../?' + escape(new Date()) ``` Some people prefer using `Math.random()` for that instead of `escape(new Date())`. But the correct way is probably to alter the headers the web server sends to disallow caching.
JavaScript: how to force Image() not to use the browser cache?
[ "", "javascript", "image", "caching", "" ]
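The query-string trick from the second answer generalizes beyond JavaScript. A minimal sketch in Python (the helper name and the `_` parameter name are my own choices, not part of any standard):

```python
import time
import urllib.parse

def cache_busted(url, nonce=None):
    """Append an ever-changing query parameter so each request URL is
    unique, preventing the browser from serving a cached copy."""
    if nonce is None:
        nonce = str(int(time.time() * 1000))  # milliseconds since epoch
    sep = "&" if "?" in url else "?"
    return url + sep + "_=" + urllib.parse.quote(nonce)
```

As the answer notes, the cleaner fix is usually to send proper no-cache headers from the server instead.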
Mobile Safari is a very capable browser, and it can handle my website as it is perfectly. However, there are a few elements on my page that could be optimized for browsing using this device; such as serving specific thumbnails that are smaller than the desktop counterparts to help fit more content into the screen. I would like to know how I can detect Mobile Safari (all versions, preferably) using PHP, so then I can serve a) a specific css file and b) different sized image thumbnails.
Compare the user agent string with the one of a Safari Mobile uses: [Safari Mobile User Agent String](http://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/OptimizingforSafarioniPhone/OptimizingforSafarioniPhone.html#//apple_ref/doc/uid/TP40006517-SW3)
Thanks Joe, I read that page and found the [WebKit detection library](http://trac.webkit.org/wiki/DetectingWebKit) (in JavaScript). I changed the code to suit my needs. For anyone that's interested, here's my solution. ``` <?php /* detect Mobile Safari */ $browserAsString = $_SERVER['HTTP_USER_AGENT']; if (strstr($browserAsString, " AppleWebKit/") && strstr($browserAsString, " Mobile/")) { $browserIsMobileSafari = true; } ?> ```
How do I detect Mobile Safari server side using PHP?
[ "", "php", "iphone", "optimization", "mobile-safari", "" ]
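For readers outside PHP, the same substring test is trivial to port. A sketch in Python (the sample user-agent strings below are illustrative, not exhaustive):

```python
def is_mobile_safari(user_agent):
    """Rough check mirroring the PHP solution above: Mobile Safari
    user agents contain both an AppleWebKit token and a Mobile token."""
    return " AppleWebKit/" in user_agent and " Mobile/" in user_agent
```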
We have recently migrated a large, high-demand web application to Tomcat 5.5 from Tomcat 4 and have noticed some peculiar slowdown behavior that appears to be related to JVM pauses. In order to run our application and support increased load over time on Tomcat 4, many not-so-standard JVM parameters were set and tuned as per the below, and I am hoping someone with Tomcat JVM tuning experience can comment on anything that would likely be detrimental to a Tomcat 5.5 install. Note also that some of these could be carry-overs from previous versions of Java (we were running Tomcat 4 on Java 1.6 with these parameters successfully for some time, but some may have been introduced to help garbage collection on Java 1.4, which was the basis of our Tomcat 4 install for a long time, and may now be doing more harm than good). Some notes: * Application memory footprint is around 1GB, probably slightly over. * CPU is not an issue - all machines serving the app (load balanced) are < 30% CPU * Lots of headroom on physical memory on the machines. * -XX:MaxPermSize=512m was the only parameter added as part of the 5.5 upgrade and was reactive to an OutOfMemory permgen-space issue (which it solved). * Running on Java 1.6, Solaris OS ``` -server -Xms1280m -Xmx1280m -XX:MaxPermSize=512m -XX:ParallelGCThreads=20 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=75 -XX:MaxTenuringThreshold=0 -XX:+AggressiveOpts -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-TraceClassUnloading -Dsun.io.useCanonCaches=false -Dsun.net.client.defaultConnectTimeout=60000 -Dsun.net.client.defaultReadTimeout=60000 ```
One of the Java Champions, Kirk Pepperdine, covers this on his blog: <http://kirk.blog-city.com/how_to_cripple_gc_ergonomics.htm>. Quote 1 "GC documentation will tell you what the setting affects but often without telling what the effect will be. The biggest clue that you've taken the wrong fork in the road is when you explicitly set a value and then give a hint to GC ergonomics. Another clue is if you don't have a sound reason to adjust a setting. And just because some so-called expert says this setting works best is only noise, not sound, and certainly not a reason." Quote 2 "As I've stated in a previous blog entry, don't touch the knobs unless you have a very good reason to do so. If you must touch the knobs, tread lightly by only using those that help ergonomics and not those that pin things down, crippling ergonomics' ability to meet your pause time and throughput goals." So, I would suggest that you go back to plain ``` -server -Xms1280m -Xmx1280m -XX:MaxPermSize=512m -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:-TraceClassUnloading -Dsun.io.useCanonCaches=false -Dsun.net.client.defaultConnectTimeout=60000 -Dsun.net.client.defaultReadTimeout=60000 ``` and see if that gives you better performance. If yes, stick to it. BTW, did -XX:MaxPermSize=378m have any issues? Java 1.6 has much better ergonomics than 1.4; you might want to tune it less than 1.4. BTW, did you try Tomcat 6? Tomcat 6 runs much better on Java 6 than Tomcat 5.5. P.S.: I've been using Tomcat for a while now and usually try to give Sun's JDK free rein, with a little tuning here and there.
As someone who's in the midst of messing with this as well, I certainly don't have any definitive answers, especially given how application-specific this sort of thing is. A good reference, which you've likely seen, is here: <http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html> However,it's a pretty long list of jvm parameters, which suggests that there's likely unnecessary parameters set, especially given that you have several debugging options on (PrintGCDetails, PrintGCTimeStamps, TraceClassUnloading) which can't be good on a production app. 60 second timeouts might also be eating up resources. "server" is default but won't do any harm. How does the application run with minimal tuning parameters (jvm size, MaxPermSize)?
Appropriate Tomcat 5.5 start-up parameters to tune JVM for extremely high demand, large heap web application?
[ "", "java", "tomcat", "jvm", "performance", "" ]
If I want to call `Bar()` instead of `Foo()`, does `Bar()` return me a copy (additional overhead) of what Foo() returns, or it returns the same object which `Foo()` places on the temporary stack? ``` vector<int> Foo(){ vector<int> result; result.push_back(1); return result; } vector<int> Bar(){ return Foo(); } ```
Both may happen. However, most compilers will not make the copy once you turn on optimization. Your code indicates there should be a copy; however, the compiler is allowed to remove any copy whose elision does not change the semantics of the program. Note: this is why you should NEVER have a copy constructor that does anything but copy correctly, as you can never be sure whether a copy will actually be made or not.
Normally it returns a copy of the returned `vector<int>`. However this highly depends on the optimization done by the compiler. See the following discussion. **Debug Build** ``` vector<int> Foo(){ 004118D0 push ebp 004118D1 mov ebp,esp 004118D3 push 0FFFFFFFFh 004118D5 push offset __ehhandler$?Foo@@YA?AV?$vector@HV?$allocator@H@std@@@std@@XZ (419207h) 004118DA mov eax,dword ptr fs:[00000000h] 004118E0 push eax 004118E1 sub esp,0F4h 004118E7 push ebx 004118E8 push esi 004118E9 push edi 004118EA lea edi,[ebp-100h] 004118F0 mov ecx,3Dh 004118F5 mov eax,0CCCCCCCCh 004118FA rep stos dword ptr es:[edi] 004118FC mov eax,dword ptr [___security_cookie (41E098h)] 00411901 xor eax,ebp 00411903 push eax 00411904 lea eax,[ebp-0Ch] 00411907 mov dword ptr fs:[00000000h],eax 0041190D mov dword ptr [ebp-0F0h],0 vector<int> result; 00411917 lea ecx,[ebp-24h] 0041191A call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (411050h) 0041191F mov dword ptr [ebp-4],1 result.push_back(1); 00411926 mov dword ptr [ebp-0FCh],1 00411930 lea eax,[ebp-0FCh] 00411936 push eax 00411937 lea ecx,[ebp-24h] 0041193A call std::vector<int,std::allocator<int> >::push_back (41144Ch) return result; 0041193F lea eax,[ebp-24h] 00411942 push eax 00411943 mov ecx,dword ptr [ebp+8] 00411946 call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (41104Bh) 0041194B mov ecx,dword ptr [ebp-0F0h] 00411951 or ecx,1 00411954 mov dword ptr [ebp-0F0h],ecx 0041195A mov byte ptr [ebp-4],0 0041195E lea ecx,[ebp-24h] 00411961 call std::vector<int,std::allocator<int> >::~vector<int,std::allocator<int> > (411415h) 00411966 mov eax,dword ptr [ebp+8] } ``` Here we can see that for `vector<int> result;` a new object is created on the stack at `[ebp-24h]` ``` 00411917 lea ecx,[ebp-24h] 0041191A call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (411050h) ``` When we get to `return result;` a new copy is created in storage allocated by the caller at `[ebp+8]` 
``` 00411943 mov ecx,dword ptr [ebp+8] 00411946 call std::vector<int,std::allocator<int> >::vector<int,std::allocator<int> > (41104Bh) ``` And the destructor is called for the local parameter `vector<int> result` at `[ebp-24h]` ``` 0041195E lea ecx,[ebp-24h] 00411961 call std::vector<int,std::allocator<int> >::~vector<int,std::allocator<int> > (411415h) ``` **Release Build** ``` vector<int> Foo(){ 00401110 push 0FFFFFFFFh 00401112 push offset __ehhandler$?Foo@@YA?AV?$vector@HV?$allocator@H@std@@@std@@XZ (401F89h) 00401117 mov eax,dword ptr fs:[00000000h] 0040111D push eax 0040111E sub esp,14h 00401121 push esi 00401122 mov eax,dword ptr [___security_cookie (403018h)] 00401127 xor eax,esp 00401129 push eax 0040112A lea eax,[esp+1Ch] 0040112E mov dword ptr fs:[00000000h],eax 00401134 mov esi,dword ptr [esp+2Ch] 00401138 xor eax,eax 0040113A mov dword ptr [esp+8],eax vector<int> result; 0040113E mov dword ptr [esi+4],eax 00401141 mov dword ptr [esi+8],eax 00401144 mov dword ptr [esi+0Ch],eax result.push_back(1); return result; 00401147 push eax 00401148 mov dword ptr [esp+28h],eax 0040114C mov ecx,1 00401151 push esi 00401152 lea eax,[esp+14h] 00401156 mov dword ptr [esp+10h],ecx 0040115A mov dword ptr [esp+14h],ecx 0040115E push eax 0040115F lea ecx,[esp+1Ch] 00401163 push ecx 00401164 mov eax,esi 00401166 call std::vector<int,std::allocator<int> >::insert (401200h) 0040116B mov eax,esi } 0040116D mov ecx,dword ptr [esp+1Ch] 00401171 mov dword ptr fs:[0],ecx 00401178 pop ecx 00401179 pop esi 0040117A add esp,20h 0040117D ret ``` The line `vector<int> result` does not call the vector allocator because it is done at call site in `Bar`. The optimization makes no copy of the result from Foo.
Function returning the return of another function
[ "", "c++", "function", "vector", "return-value", "" ]
I'm currently trying to convert a single-threaded program to a multithreaded one. This software makes heavy use of "refCounted" objects, which leads to some issues with multiple threads. I'm looking for some design pattern or something that might solve my problem. The main problem is object deletion across threads: normally deletion only decrements the reference count, and when the refcount reaches zero, the object is deleted. This works well in a single-threaded program, and allows great performance improvements when copying big objects. However, with multiple threads, two threads might want to delete the same object concurrently. As the object is protected by a mutex, only one thread deletes the object while the other one blocks. But when the first thread releases the mutex, the other thread continues its execution with an invalid (freed) object, which leads to memory corruption. Here is an example with this class RefCountedObject ``` class RefCountedObject { public: RefCountedObject() : _refCount( new U32(1) ) {} RefCountedObject( const RefCountedObject& obj ) : _refCount( obj._refCount ) { ACE_Guard< ACE_Mutex > guard( _refCountMutex ); ++(*_refCount); } ~RefCountedObject() { Destroy(); } RefCountedObject& operator=( const RefCountedObject& obj ) { if( this != &obj ) { Destroy(); ACE_Guard< ACE_Mutex > guard( _refCountMutex ); _refCount = obj._refCount; ++(*_refCount); } return *this; } private: void Destroy() { ACE_Guard< ACE_Mutex > guard( _refCountMutex ); // thread 2 is waiting here --(*_refCount); // this causes a free-memory write by thread 2 if( 0 == *_refCount ) delete _refCount; } private: mutable U32* _refCount; mutable ACE_Mutex _refCountMutex; // BAD: this mutex only protects the refCount pointer, not the refCount itself }; ``` Suppose that two threads want to delete the same RefCountedObject: both are in ~RefCountedObject and call Destroy(); the first thread has locked the mutex and the other one is waiting. 
After the deletion of the object by the first thread, the second will continue its execution and cause a free-memory write. Has anyone run into a similar problem and found a solution? --- Thanks all for your help, I realized my mistake: the mutex was only protecting the refCount pointer, not the refCount itself! I've created a RefCount class which is mutex-protected. The mutex is now shared between all handles of a refCounted object. Now everything works fine.
If the count is part of the object then you have an inherent problem if one thread can be trying to *increase* the reference count whilst another is trying to remove the *last* reference. There needs to be an extra value on the ref count for each globally accessible pointer to the object, so you can always safely increase the ref count if you've got a pointer. One option would be to use `boost::shared_ptr` [(see the docs)](http://www.boost.org/doc/libs/1_36_0/libs/smart_ptr/shared_ptr.htm). You can use the free functions `atomic_load`, `atomic_store`, `atomic_exchange` and `atomic_compare_exchange` (which are conspicuously absent from the docs) to ensure suitable protection when accessing global pointers to shared objects. Once your thread has got a `shared_ptr` referring to a particular object you can use the normal non-atomic functions to access it. Another option is to use Joe Seigh's atomic ref-counted pointer from his [atomic\_ptr\_plus project](http://atomic-ptr-plus.sourceforge.net/)
Surely each thread simply needs to manage the reference counts correctly... That is, if ThreadA and ThreadB are both working with Obj1, then BOTH ThreadA and ThreadB should own a reference to the object and BOTH should call release when they're done with the object. In a single-threaded application it's likely that you have a point where a reference-counted object is created, you then do work on the object and eventually call release. In a multi-threaded program you would create the object and then pass it to your threads (however you do that). Before passing the object to the thread you should call AddRef() on your object to give the thread its own reference count. The thread that allocated the object can then call release as it's done with the object. The threads that are working with the object will then call release when they're done, and when the last reference is released the object will be cleaned up. Note that you don't want the code that's running on the threads themselves to call AddRef() on the object, as you would then have a race condition: the creating thread might call release on the object before the threads that you've dispatched get a chance to run and call AddRef().
C++: Multithreading and refcounted object
[ "", "c++", "multithreading", "refcounting", "" ]
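The asker's own fix (a lock that guards the count itself, living in a control block shared by every handle) can be sketched in Python. This is an illustration of the idea only, not the original C++; the class names are invented:

```python
import threading

class SharedCount:
    """Control block shared by all handles of one object: the lock
    protects the count itself, not merely a pointer to it."""
    def __init__(self):
        self.value = 1  # the first handle holds one reference
        self.lock = threading.Lock()

class Handle:
    def __init__(self, shared=None):
        self._shared = shared or SharedCount()
        if shared is not None:
            with shared.lock:  # copy: take another reference
                shared.value += 1

    def release(self):
        """Drop one reference; returns True for exactly one caller,
        the one that should free the underlying object."""
        with self._shared.lock:
            self._shared.value -= 1
            return self._shared.value == 0
```

Because decrement and the zero test happen under the same lock, only one releaser ever sees the count hit zero, which is exactly what the broken C++ `Destroy()` failed to guarantee.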
I have a web page where I'd like to remap Ctrl+N to a different behavior. I followed YUI's example of registering key listeners and my function is called, but Firefox still creates a new browser window. Things seem to work fine on IE7. How do I stop the new window from showing up? Example: ``` var kl2 = new YAHOO.util.KeyListener(document, { ctrl:true, keys:78 }, {fn:function(event) { YAHOO.util.Event.stopEvent(event); // Doesn't help alert('Click');}}); kl2.enable(); ``` It is possible to override default behavior. Google Docs overrides Ctrl+S to save your document instead of bringing up Firefox's save dialog. I tried the example above with Ctrl+S but Firefox's save dialog still pops up. Since Google can stop the save dialog from coming up, I'm sure there's a way to prevent most default keyboard shortcuts.
The trick is the 'fn' function is whack. Experimentally, you can see that the function type for fn takes two parameters. The first param actually contains the TYPE of event. The second one contains... and this is screwy: an array containing the codepoint at index 0 and the actual event object at index 1. So changing your code around a bit, it *should* look like this: ``` function callback(type, args) { var event = args[1]; // the actual event object alert('Click'); // like stopEvent, but the event still propogates to other YUI handlers YAHOO.util.Event.preventDefault(event); } var kl2 = new YAHOO.util.KeyListener(document, { ctrl:true, keys:78 }, {fn:callback}); kl2.enable(); ``` Also, for the love of lisp, don't use raw code points in your code. Use 'N'.charCodeAt(0) instead of "78". Or wrap it up as a function ``` function ord(char) { return char.charCodeAt(0); } ```
I'm just guessing here but I don't think it can be done. If it's possible it definitely shouldn't be. Generic keyboard shortcuts are something you should not mess with. What's next? Hook the window close button to open a new window...
Prevent Default Behavior in Key Listeners in YUI
[ "", "javascript", "firefox", "yui", "" ]
In PHP, which is quicker; using `include('somefile.php')` or querying a MySQL database with a simple `SELECT` query to get the same information? For example, say you had a JavaScript autocomplete search field which needed 3,000 terms to match against. Is it quicker to read those terms in from another file using `include` or to read them from a MySQL database using a simple `SELECT` query? **Edit:** This is assuming that the database and the file I want to include are on the same local machine as my code.
It depends. If your file is stored locally on your server and the database is installed on another machine, then including the file is faster. Buuuuut, because it depends on your system, that might not be true. I suggest you write a PHP test script, run it 100 times from the command line, and then repeat the test through HTTP (using cURL). Example: **use\_include.php** ``` <?php $start = microtime(true); include 'somefile.php'; echo microtime(true)-$start; ?> ``` **use\_mysql.php** ``` <?php $start = microtime(true); // __put_here_your_mysql_statements_to_retrieve_the_file__ echo microtime(true)-$start; ?> ```
Including a file should almost always be quicker. If your database is on another machine (e.g. in shared hosting) or in a multi-server setup the lookup will have to make an extra hop. However, in practice the difference is probably not going to matter. If the list is dynamic then storing it in MySQL will make your life easier. Static lists (e.g. countries or states) can be stored in a PHP include. If the list is quite short (a few hundred entries) and often used, you could load it straight into JavaScript and do away with AJAX. If you are going the MySQL route and are worried about speed then use caching. ``` $query = $_GET['query']; $key = 'query' . $query; if (!$results = apc_fetch($key)) { $statement = $db->prepare("SELECT name FROM list WHERE name LIKE :query"); $statement->bindValue(':query', "$query%"); $statement->execute(); $results = $statement->fetchAll(); apc_store($key, $results); } echo json_encode($results); ```
What's quicker; including another file or querying a MySQL database in PHP?
[ "", "php", "mysql", "performance", "include", "" ]
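The measurement idea in the first answer (time each approach repeatedly and compare) is easy to express in any language. A rough Python equivalent of the PHP `microtime()` scripts, with a placeholder workload standing in for the file read or the query:

```python
import time

def average_seconds(fn, repeats=100):
    """Run `fn` `repeats` times and return the mean wall-clock seconds
    per call, mirroring the microtime()-based PHP test scripts."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats
```

In a real comparison, `fn` would be the `include`-style file read in one run and the `SELECT` query in the other; repeat both over HTTP as well, since the web-server path can behave differently from the command line.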
Can anyone recommend a ready-to-use class/library compatible with C/C++/MFC/ATL that would parse iCal/vCal/Google calendar files (with recurrences)? It can be free or commercial.
There is a [parser in PHP for iCal](http://www.phpclasses.org/browse/package/3278.html); you can download it and check the code to adapt it to your language. For vCal/vCard parsing [there's a C library](http://sourceforge.net/projects/ccard). For Google Calendar I couldn't find an exact answer, so try Googling it.
For vCal you can try the [CCard](http://ccard.sourceforge.net/) project on SourceForge: <http://sourceforge.net/projects/ccard> It's a C library but it states Windows as a supported platform. \*Edit: balexandre already linked to it :)
Parsing iCal/vCal/Google calendar files in C++
[ "", "c++", "mfc", "icalendar", "google-calendar-api", "vcalendar", "" ]
So I understand what a static method or field is; I am just wondering when to use them. That is, when writing code, what designs lend themselves to using static methods and fields? One common pattern is to use static methods as a static factory, but this could just as easily be done by overloading a constructor. Correct? For example: ``` var bmp = System.Drawing.Bitmap.LoadFromFile("Image01.jpg"); ``` As for static fields, is creating singleton objects their best use?
It gives a better idea of the intent when you use a static factory -- it also lets you have different factories that take the same argument types but have a different meaning. For example, imagine if Bitmap had LoadFromResource(string) -- it would not be possible to have two constructors that both took string. **EDIT**: From stevemegson in the comments > A static factory can also return null, and can more easily return an instance that it got from cache. Many of my classes have a static FromId(int) to get an instance from a primary key, returning an existing cached instance if we have one.
Static methods are usually useful for operations that don't require any data from an instance of the class (from `this`) and can perform their intended purpose solely using their arguments. A simple example of this would be a method `Point::distance(Point a, Point b);` that calculates the distance between two points and don't require an instance. Static fields are useful among others for constants that don't change all that often and are used by all the instances of a class.
When should I write Static Methods?
[ "", "c#", "oop", "" ]
I thought people would be working on little code projects together, but I don't see them, so here's an easy one: code that validates a US ZIP code. I know there are ZIP code databases out there, but there are still uses, like web pages and quick validation; also, new ZIP codes keep getting issued, so you might want to use weak validation. I wrote a little bit about ZIP codes in a side project on my wiki/blog: <https://benc.fogbugz.com/default.asp?W24> There is also a new, weird type of ZIP code: <https://benc.fogbugz.com/default.asp?W42> I can do the JavaScript code, but it would be interesting to see how many languages we can get here.
**Javascript Regex Literal**: US Zip Codes: `/(^\d{5}$)|(^\d{5}-\d{4}$)/` ``` var isValidZip = /(^\d{5}$)|(^\d{5}-\d{4}$)/.test("90210"); ``` Some countries use [Postal Codes](http://en.wikipedia.org/wiki/Postal_code), which would fail this pattern.
``` function isValidUSZip(sZip) { return /^\d{5}(-\d{4})?$/.test(sZip); } ```
ZIP Code (US Postal Code) validation
[ "", "javascript", "validation", "" ]
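The same pattern ports directly to other languages. A Python sketch of the weak, format-only validation discussed above:

```python
import re

_ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def is_valid_us_zip(candidate):
    """Accept five digits, optionally followed by a hyphen and four
    more (ZIP+4). Weak validation only: it does not check that the
    code has actually been issued."""
    return bool(_ZIP_RE.match(candidate))
```

As with the JavaScript versions, this deliberately accepts codes that do not exist yet, which is usually what you want for form validation.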
Here's the situation: I'm developing a simple application with the following structure: * FormMain (startup point) * FormNotification * ComplexFunctions Right? Well, in **FormMain** I have the following function: ``` private void DoItInNewThread(ParameterizedThreadStart pParameterizedThreadStart, object pParameters, ThreadPriority pThreadPriority) { Thread oThread = new Thread(pParameterizedThreadStart); oThread.CurrentUICulture = Settings.Instance.Language; oThread.IsBackground = true; oThread.Priority = pThreadPriority; oThread.Name = "μRemote: Background operation"; oThread.Start(pParameters); } ``` So, every time I need to call a time-consuming method located in **ComplexFunctions** I do the following: ``` // This is FormMain.cs string strSomeParameter = "lala"; DoItInNewThread(new ParameterizedThreadStart(ComplexFunctions.DoSomething), strSomeParameter, ThreadPriority.Normal); ``` The other class, FormNotification, is a Form that displays some information about the process to the user. This FormNotification could be called from FormMain or ComplexFunctions. Example: ``` // This is ComplexFunctions.cs public void DoSomething(string pSomeParameter) { // Imagine some time consuming task FormNotification formNotif = new FormNotification(); formNotif.Notify(); } ``` FormNotify has a timer, so after 10 seconds it closes the form. I'm not using formNotif.ShowDialog because I don't want to give focus to this Form. You could check [this link](https://stackoverflow.com/questions/156046/show-a-form-without-stealing-focus-in-c) to see what I'm doing in Notify. Ok, here's the problem: when I call **FormNotify** from **ComplexFunction**, which is called from another Thread in **FormMain**... this **FormNotify** disappears after a few milliseconds. 
It's the same effect as when you do something like this: ``` using(FormSomething formSomething = new FormSomething()) { formSomething.Show(); } ``` **How can I avoid this?** These are possible solutions that I don't want to use: * Using Thread.Sleep(10000) in FormNotify * Using FormNotif.ShowDialog() This is a simplified scenario (FormNotify does some other fancy stuff that should stay up for 10 seconds, but that is irrelevant to the problem). Thanks for your time!!! And please excuse my English.
Almost every GUI library is designed to only allow calls that change the GUI to be made in a single thread designated for that purpose (called the UI thread). If you are in another thread, you are required to arrange for the call that changes the GUI to be made in the UI thread. In .NET, the way to do that is to call Invoke (synchronous) or BeginInvoke (asynchronous). The equivalent Java Swing call is invokeLater() -- there are similar functions in almost every GUI library. There is something called thread affinity. There are two threads in a WinForms application, one for rendering and one for managing the user interface. You deal only with the user-interface thread; the rendering thread remains hidden and runs in the background. Only objects created on the UI thread can manipulate the UI, i.e. the objects have thread affinity with the UI thread. You are trying to update the UI (show a notification) from a thread other than the UI thread. So, in your worker thread, define a delegate and make FormMain listen to this event. In the event handler (defined in FormMain), write code to show the FormNotify. Fire the event from the worker thread when you want to show the notification. When a thread other than the creating thread of a control tries to access one of that control's methods or properties, it often leads to unpredictable results. A common invalid thread activity is a call on the wrong thread that accesses the control's Handle property. Set CheckForIllegalCrossThreadCalls to true to find and diagnose this thread activity more easily while debugging. Note that illegal cross-thread calls will always raise an exception when an application is started outside the debugger. Note: setting CheckForIllegalCrossThreadCalls to true should be done in debugging situations only. Otherwise unpredictable results will occur and you will wind up chasing bugs that you will have a difficult time finding.
You aren't allowed to make WinForms calls from other threads. Look at BeginInvoke in the form -- you can call a delegate to show the form from the UI thread. Edit: From the comments (do not set CheckForIllegalCrossThreadCalls to false). **More Info** Almost every GUI library is designed to only allow calls that change the GUI to be made in a single thread designated for that purpose (called the UI thread). If you are in another thread, you are required to arrange for the call to change the GUI to be made in the UI thread. In .NET, the way to do that is to call Invoke (synchronous) or BeginInvoke (asynchronous). The equivalent Java Swing call is invokeLater() -- there are similar functions in almost every GUI library.
My form doesn't properly display when it is launched from another thread
[ "", "c#", "winforms", "multithreading", "" ]
I have a list of objects, each containing an Id, Code and Description. I need to convert this list into a Hashtable, using **Description** as the key and **Id** as the value. This is so the Hashtable can then be serialised to JSON. Is there a way to convert from List<Object> to Hashtable without writing a loop to go through each item in the list?
Let's assume that your List contains objects of type Foo (with an int Id and a string Description). You can use Linq to turn that list into a Dictionary like this: ``` var dict = myList.Cast<Foo>().ToDictionary(o => o.Description, o => o.Id); ```
If you have access to Linq, you can use the [ToDictionary](http://msdn.microsoft.com/en-us/library/bb549277.aspx) function.
How can I convert List<object> to Hashtable in C#?
[ "", "c#", "list", "hashtable", "" ]
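For comparison, the same projection in Python is a one-line dict comprehension (the id/description field names mirror the question; the rows are modelled as plain dicts here for illustration):

```python
def to_lookup(items):
    """Build a description -> id mapping, like LINQ's ToDictionary.
    Note: duplicate descriptions silently overwrite each other here,
    whereas ToDictionary would throw on a duplicate key."""
    return {item["description"]: item["id"] for item in items}
```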
I'm writing a little debug app for a bit of kit we're developing and I'd like to roll it out to a few users to see if they can provoke any crashes. Does anyone know a way of effectively wrapping a wxPython app to catch any and all unhandled exceptions that would cause the app to crash? Ideally I'd want to capture all output (not just errors) and log it to a file. Any unhandled exceptions ought to log to the current file and then allow the exception to pass on as per usual (i.e. the logging process ought to be transparent). I'm sure someone must have done something along these lines before, but I've not managed to turn up anything that looks useful via google.
For logging standard output, you can use a stdout wrapper, such as this one: ``` from __future__ import with_statement class OutWrapper(object): def __init__(self, realOutput, logFileName): self._realOutput = realOutput self._logFileName = logFileName def _log(self, text): with open(self._logFileName, 'a') as logFile: logFile.write(text) def write(self, text): self._log(text) self._realOutput.write(text) ``` You then have to initialize it in your main Python file (the one that runs everything): ``` import sys sys.stdout = OutWrapper(sys.stdout, r'c:\temp\log.txt') ``` As to logging exceptions, the easiest thing to do is to wrap `MainLoop` method of wx.App in a try..except, then extract the exception information, save it in some way, and then re-raise the exception through `raise`, e.g.: ``` try: app.MainLoop() except: exc_info = sys.exc_info() saveExcInfo(exc_info) # this method you have to write yourself raise ```
For the exception handling, assuming your log file is opened as log: ``` import sys import traceback def excepthook(type, value, tb): message = 'Uncaught exception:\n' message += ''.join(traceback.format_exception(type, value, tb)) log.write(message) sys.excepthook = excepthook ```
How can I capture all exceptions from a wxPython application?
[ "", "python", "exception", "error-handling", "wxwidgets", "error-reporting", "" ]
I've used [DJ Java Decompiler](http://members.fortunecity.com/neshkov/dj.html), which has a handy GUI, but it seems as if the latest version is only a trial and forces you to purchase the software after some period of days (I recall using an earlier free version about a year ago at a previous job). I'm aware of Jad and Jadclipse, but what I loved about DJ Java Decompiler was that it integrated with Windows Explorer - so I could simply open up a JAR in something like WinRAR, navigate thru the packages, and double-click on a .class file to view it's decompiled source. Can anyone suggest other good, free, .class viewers? The criteria I have in mind for these would be: * GUI-based * Integrates to Windows Explorer (so I don't have to run some command-line options like with JAD) * optional - can also show raw JVM bytecode commands In other words - I'd like to find the closest thing to [.NET Reflector](http://www.red-gate.com/products/reflector/index.htm) for Java as possible.
Eclipse will allow you to [view the bytecode for classes](http://archive.eclipse.org/eclipse/downloads/drops/R-3.3-200706251500/whatsnew/eclipse-news-part2.html), if the source is unavailable (search for 'disassembled bytecodes'). It seems there is also a third-party plugin that uses asm [here](http://asm.objectweb.org/eclipse/index.html).
JAD is one of the best Java decompilers available today. This is one brilliant piece of software. Nevertheless, the last JDK supported by JAD 1.5.8 (Apr 14, 2001) is JDK 1.3. DJ Java Decompiler, JadClipse, Cavaj and JarInspector are powered by Jad. The last version of Decafe Pro was released on 2002-01-03. These viewers cannot display Java 5 sources. So, I use [JD-GUI](http://jd.benow.ca/) : logic, I'm the author :)
Best free Java .class viewer?
[ "", "java", "decompiling", "" ]
``` class Foo(models.Model): title = models.CharField(max_length=20) slug = models.SlugField() ``` Is there a built-in way to get the slug field to autopopulate based on the title? Perhaps in the Admin and outside of the Admin.
for Admin in Django 1.0 and up, you'd need to use ``` prepopulated_fields = {'slug': ('title',), } ``` in your admin.py Your key in the prepopulated\_fields dictionary is the field you want filled, and the value is a tuple of fields you want concatenated. Outside of admin, you can use the `slugify` function in your views. In templates, you can use the `|slugify` filter. There is also this package which will take care of this automatically: <https://pypi.python.org/pypi/django-autoslug>
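Outside the admin, the `slugify` utility mentioned above does roughly the following; a simplified approximation for illustration (the real `django.utils.text.slugify` also handles Unicode normalization):

```python
import re

def slugify(value):
    """Lower-case, drop punctuation, collapse whitespace/hyphen runs to a single '-'."""
    value = value.strip().lower()
    value = re.sub(r'[^\w\s-]', '', value)   # remove anything that isn't word/space/hyphen
    value = re.sub(r'[-\s]+', '-', value)    # runs of spaces/hyphens -> one '-'
    return value
```

For example, `slugify("Hello, World!")` yields `"hello-world"`, which is what you would assign to the `SlugField` before saving.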
Thought I would add a complete and up-to-date answer with gotchas mentioned: ## 1. Auto-populate forms in Django Admin If you are only concerned about adding and updating data in the admin, you could simply use the [prepopulated\_fields](https://docs.djangoproject.com/en/dev/ref/contrib/admin/#django.contrib.admin.ModelAdmin.prepopulated_fields) attribute ``` class ArticleAdmin(admin.ModelAdmin): prepopulated_fields = {"slug": ("title",)} admin.site.register(Article, ArticleAdmin) ``` ## 2. Auto-populate custom forms in templates If you have built your own server-rendered interface with forms, you could auto-populate the fields by using either the [|slugify](https://docs.djangoproject.com/en/2.2/ref/templates/builtins/#slugify) template filter or the [slugify](https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.text.slugify) utility when saving the form (is\_valid). ## 3. Auto-populating slugfields at model-level with django-autoslug The above solutions will only auto-populate the slugfield (or any field) when data is manipulated through those interfaces (the admin or a custom form). If you have an API, management commands or anything else that also manipulates the data you need to drop down to model-level. [django-autoslug](https://pypi.org/project/django-autoslug/) provides the AutoSlugField-fields which extends SlugField and allows you to set which field it should slugify neatly: ``` class Article(Model): title = CharField(max_length=200) slug = AutoSlugField(populate_from='title') ``` **The field uses pre\_save and post\_save signals to achieve its functionality so please see the gotcha text at the bottom of this answer.** ## 4. Auto-populating slugfields at model-level by overriding save() The last option is to implement this yourself, which involves overriding the default save() method: ``` class Article(Model): title = CharField(max_length=200) slug = SlugField() def save(self, *args, **kwargs): self.slug = slugify(self.title) super().save(*args, **kwargs) ``` ### NOTE: Bulk-updates will bypass your code (including signals) This is a common misunderstanding by beginners to Django. First you should know that the pre\_save and post\_save signals are directly related to the save()-method. Secondly the different ways to do bulk-updates in Django all circumvent the save()-method to achieve high performance, by operating directly on the SQL-layer. This means that for the example model used in solution 3 or 4 above: * Article.objects.all().update(title='New post') will **NOT** update the slug of any article * Using [bulk\_create](https://docs.djangoproject.com/en/2.2/ref/models/querysets/#bulk-create) or [bulk\_update](https://docs.djangoproject.com/en/2.2/ref/models/querysets/#bulk-update) on the Article model will **NOT** update the slug of any article. * Since the save()-method is not called, **no pre\_save or post\_save signals will be emitted**. To do bulk updates and still utilize code-level constraints the only solution is to iterate objects one by one and call its save()-method, which has drastically less performance than SQL-level bulk operations. You could of course use triggers in your database, though that is a totally different topic.
Is there an easy way to populate SlugField from CharField?
[ "", "python", "django", "slug", "" ]
What is the best way to write bytes in the middle of a file using Java?
Reading and Writing in the middle of a file is as simple as using a [`RandomAccessFile`](http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html) in Java. [`RandomAccessFile`](http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html), despite its name, is more like an `InputStream` and `OutputStream` and less like a `File`. It allows you to read or seek through `bytes` in a file and then begin writing over whichever bytes you care to stop at. Once you discover this class, it is very easy to use if you have a basic understanding of regular file i/o. A small example: ``` public static void aMethod() throws IOException { RandomAccessFile f = new RandomAccessFile(new File("whereDidIPutTHatFile"), "rw"); long aPositionWhereIWantToGo = 99; f.seek(aPositionWhereIWantToGo); // moves the file pointer to byte 99 (nothing is read) f.write("Im in teh fil, writn bites".getBytes()); f.close(); } ```
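A self-contained variant of the same pattern, using a temp file so it runs anywhere; the class name, helper method, and offsets here are illustrative, not part of the answer above:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MidFileWrite {

    // Overwrite bytes starting at `offset`, leaving the rest of the file
    // untouched, and return the resulting file contents.
    static String overwriteAt(Path file, long offset, String replacement) {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.seek(offset);                  // just moves the file pointer; reads nothing
            raf.write(replacement.getBytes()); // replaces bytes in place, no truncation
            return new String(Files.readAllBytes(file));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String demo() {
        try {
            Path p = Files.createTempFile("raf-demo", ".txt");
            Files.write(p, "hello world".getBytes());
            String result = overwriteAt(p, 6, "WORLD"); // bytes 6..10 replaced
            Files.delete(p);
            return result;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "hello WORLD"
    }
}
```

Note that opening an existing file in `"rw"` mode does not truncate it, which is what makes in-place overwrites safe.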
Use `RandomAccessFile` * [Tutorial](http://java.sun.com/docs/books/tutorial/essential/io/rafs.html) * [Javadocs](http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html)
Best Way to Write Bytes in the Middle of a File in Java
[ "", "java", "file", "java-io", "" ]
What is the best or most concise method for returning a string repeated an arbitrary amount of times? The following is my best shot so far: ``` function repeat(s, n){ var a = []; while(a.length < n){ a.push(s); } return a.join(''); } ```
Good news! [`String.prototype.repeat`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat) is [now a part of JavaScript](http://www.ecma-international.org/ecma-262/6.0/index.html#sec-string.prototype.repeat). ``` "yo".repeat(2); // returns: "yoyo" ``` The method is supported by all major browsers, except Internet Explorer. For an up to date list, see [MDN: String.prototype.repeat > Browser compatibility](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat#Browser_compatibility). MDN has [a polyfill](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/repeat) for browsers without support.
> **Note to new readers:** This answer is old and not terribly practical - it's just "clever" because it uses Array stuff to get String things done. When I wrote "less process" I definitely meant "less code" because, as others have noted in subsequent answers, it performs like a pig. So don't use it if speed matters to you. I'd put this function onto the String object directly. Instead of creating an array, filling it, and joining it with an empty char, just create an array of the proper length, and join it with your desired string. Same result, less process! ``` String.prototype.repeat = function( num ) { return new Array( num + 1 ).join( this ); } alert( "string to repeat\n".repeat( 4 ) ); ```
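The two answers can be reconciled in a feature-detecting fallback; this is a sketch only (the real MDN polyfill also validates the `count` argument):

```javascript
function repeatString(s, n) {
  // Use the native ES2015 method where available...
  if (typeof String.prototype.repeat === 'function') {
    return s.repeat(n);
  }
  // ...otherwise fall back to the Array-join trick from the older answer:
  // an array of n+1 empty slots has n gaps, each filled by the separator s.
  return new Array(n + 1).join(s);
}
```

Both branches agree on edge cases such as `n === 0`, which returns the empty string.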
Repeat String - Javascript
[ "", "javascript", "string", "" ]
This is what I currently have: ``` CREATE OR REPLACE TRIGGER MYTRIGGER AFTER INSERT ON SOMETABLE FOR EACH ROW DECLARE v_emplid varchar2(10); BEGIN SELECT personnum into v_emplid FROM PERSON WHERE PERSONID = :new.EMPLOYEEID; dbms_output.put(v_emplid); /* INSERT INTO SOMEOTHERTABLE USING v_emplid and some of the other values from the trigger table*/ END MYTRIGGER; ``` DBA\_ERRORS has this error: PL/SQL: ORA-00923: FROM keyword not found where expected
1) There must be something else to your example because that sure seems to work for me ``` SQL> create table someTable( employeeid number ); Table created. SQL> create table person( personid number, personnum varchar2(10) ); Table created. SQL> ed Wrote file afiedt.buf 1 CREATE OR REPLACE TRIGGER MYTRIGGER 2 AFTER INSERT ON SOMETABLE 3 FOR EACH ROW 4 DECLARE 5 v_emplid varchar2(10); 6 BEGIN 7 SELECT personnum 8 into v_emplid 9 FROM PERSON 10 WHERE PERSONID = :new.EMPLOYEEID; 11 dbms_output.put(v_emplid); 12 /* INSERT INTO SOMEOTHERTABLE USING v_emplid and some of the other values from the trigger table*/ 13* END MYTRIGGER; 14 / Trigger created. SQL> insert into person values( 1, '123' ); 1 row created. SQL> insert into sometable values( 1 ); 1 row created. ``` 2) You probably want to declare V\_EMPLID as being of type Person.PersonNum%TYPE so that you can be certain that the data type is correct and so that if the data type of the table changes you won't need to change your code. 3) I assume that you know that your trigger cannot query or update the table on which the trigger is defined (so no queries or inserts into someTable).
You are playing with Lava (not just fire) in your trigger. DBMS\_OUTPUT in a trigger is really, really bad. You can blow-out on a buffer overflow in your trigger and the whole transaction is shot. Good luck tracking that down. If you must do output-to-console like behavior, invoke an AUTONOMOUS TRANSACTION procedure that writes to a table. Triggers are pretty evil. I used to like them, but they are too hard to remember about. They affect data often times leading to MUTATING data (scary and not just because Halloween is close). We use triggers to change the value of columns like .new:LAST\_MODIFIED := sysdate and .new:LAST\_MODIFIED\_BY := user. That's it. Don't ever allow a TRIGGER to prevent a transaction from completing. Find another option.
What is the syntax to use a Select statement inside a PL/SQL Trigger?
[ "", "sql", "oracle", "plsql", "ora-00923", "" ]
Is it possible to set a default value for columns in JPA, and if, how is it done using annotations?
Actually it is possible in JPA, although a little bit of a hack using the `columnDefinition` property of the `@Column` annotation, for example: ``` @Column(name="Price", columnDefinition="Decimal(10,2) default '100.00'") ```
You can do the following: ``` @Column(name="price") private double price = 0.0; ``` There! You've just used zero as the default value. Note this will serve you if you're only accessing the database from this application. If other applications also use the database, then you should make this check from the database using [Cameron's](https://stackoverflow.com/questions/197045/setting-default-values-for-columns-in-jpa/375202#375202) *columnDefinition* annotation attribute, or some other way.
Setting default values for columns in JPA
[ "", "java", "jpa", "annotations", "" ]
Now that `shared_ptr` is in tr1, what do you think should happen to the use of `std::auto_ptr`? They both have different use cases, but all use cases of `auto_ptr` can be solved with `shared_ptr`, too. Will you abandon `auto_ptr` or continue to use it in cases where you want to express explicitly that only one class has ownership at any given point? My take is that using `auto_ptr` can add clarity to code, precisely by adding nuance and an indication of the design of the code, but on the other hand, it add yet another subtle issue when training new programmers: they need to understand smart pointers and the fine details of how they work. When you use only one smart pointer everywhere, you can just lay down a rule 'wrap all pointers in `shared_ptr`' and be done with it. What's your take on this?
To provide a little more ammunition to the 'avoid `std::auto_ptr`' camp: `auto_ptr` is being deprecated in the next standard (C++0x). I think this alone is good enough ammunition for any argument to use something else. However, as [Konrad Rudolph](https://stackoverflow.com/questions/197048/idiomatic-use-of-stdautoptr-or-only-use-sharedptr#197083) mentioned, the default replacement for `auto_ptr` should probably be `boost::scoped_ptr`. The semantics of `scoped_ptr` more closely match those of `auto_ptr` and it is intended for similar uses. The next C++09 standard will have something similar called unique\_ptr. However, using `shared_ptr` anywhere that `scoped_ptr` should be used will not break anything, it'll just add a very slight bit of inefficiency to deal with the reference count if the object is never actually going to be shared. So for private member pointers that will never be handed out to another object - use `scoped_ptr`. If the pointer will be handed out to something else (this includes using them in containers or if all you want to do is transfer ownership and not keep or share it) - use `shared_ptr`.
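The single-owner contract discussed above is what C++11's `std::unique_ptr` later made explicit; a small sketch of the transfer semantics (the function name is illustrative):

```cpp
#include <memory>
#include <utility>

// Taking unique_ptr by value documents that the function consumes the object;
// callers must explicitly std::move() ownership in, and keep nothing afterwards.
int consume(std::unique_ptr<int> p) {
    return *p;  // p owns the int for the duration of the call, then frees it
}
```

After `consume(std::move(owner))` returns, `owner` compares equal to `nullptr`: the transfer is visible at the call site, which is exactly the clarity `auto_ptr` tried (and failed) to signal with its surprising copy semantics.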
auto\_ptr is nice in signatures, too. When a function takes an `auto_ptr<T>` by value, it means it will consume the `T`. If a function returns an `auto_ptr<T>`, it's clear that it relinquishes ownership. This can communicate your intents about the lifetime. On the other hand, using `scoped_ptr<T>` implies that you don't want to care about the lifetime of the `T`. This also implies you can use it in more places. Both smart pointers are valid choices, you can certainly have both in a single program.
Idiomatic use of std::auto_ptr or only use shared_ptr?
[ "", "c++", "coding-style", "smart-pointers", "tr1", "" ]
I have 150+ SQL queries in separate text files that I need to analyze (just the actual SQL code, not the data results) in order to identify all column names and table names used. Preferably with the number of times each column and table makes an appearance. Writing a brand new SQL parsing program is trickier than is seems, with nested SELECT statements and the like. There has to be a program, or code out there that does this (or something close to this), but I have not found it.
I actually ended up using a tool called [SQL Pretty Printer](http://www.dpriver.com/pp/sqlformat.htm). You can purchase a desktop version, but I just used the free online application. Just copy the query into the text box, set the Output to "List DB Object" and click the Format SQL button. It worked great on around 150 different (and complex) SQL queries.
How about using the Execution Plan report in MS SQLServer? You can save this to an xml file which can then be parsed.
Is there a way to parse a SQL query to pull out the column names and table names?
[ "", "sql", "parsing", "" ]
How do I determine if an object reference is null in C# w/o throwing an exception if it is null? i.e. If I have a class reference being passed in and I don't know if it is null or not.
What Robert said, but for that particular case I like to express it with a guard clause like this, rather than nest the whole method body in an if block: ``` void DoSomething( MyClass value ) { if ( value == null ) return; // I might throw an ArgumentNullException here, instead value.Method(); } ```
testing against null will never\* throw an exception ``` void DoSomething( MyClass value ) { if( value != null ) { value.Method(); } } ``` --- \* never as in *should never*. As @Ilya Ryzhenkov points out, an *incorrect* implementation of the != operator for MyClass could throw an exception. Fortunately Greg Beech has a good blog post on [implementing object equality in .NET](http://gregbee.ch/blog/implementing-object-equality-in-dotnet).
How do I detect a null reference in C#?
[ "", "c#", "" ]
Is there any substantial difference between those two terms?. I understand that JDK stands for Java Development Kit that is a subset of SDK (Software Development Kit). But specifying Java SDK, it should mean the same as JDK.
From this [wikipedia entry](http://en.wikipedia.org/wiki/Java_Development_Kit#Ambiguity_between_a_JDK_and_an_SDK): > The JDK is a subset of what is loosely defined as a software development kit (SDK) in the general sense. In the descriptions which accompany their recent releases for Java SE, EE, and ME, Sun acknowledge that under their terminology, the JDK forms the subset of the SDK which is responsible for the writing and running of Java programs. The remainder of the SDK is composed of extra software, such as Application Servers, Debuggers, and Documentation. The "extra software" seems to be Glassfish, MySQL, and NetBeans. [This page](http://java.sun.com/javaee/downloads/index.jsp) gives a comparison of the various packages you can get for the Java EE SDK.
JDK is the SDK for Java. SDK stands for 'Software Development Kit', a developer's toolkit that enables one to write code with more ease, effectiveness and efficiency. SDKs exist for various languages. They provide a lot of APIs (Application Programming Interfaces) that make the programmer's work easy. [![enter image description here](https://i.stack.imgur.com/TO6NR.png)](https://i.stack.imgur.com/TO6NR.png) The SDK for Java is called the JDK, the Java Development Kit. So by saying SDK for Java you are actually referring to the JDK. Assuming that you are new to Java, there is another term that you'll come across: JRE, the acronym for Java Runtime Environment. JRE is something that you need when you try to run software programs written in Java. Java is a platform-independent language. The JRE contains the JVM, the Java Virtual Machine, which enables you to run the software on any platform for which the JVM is available.
Differences between JDK and Java SDK
[ "", "sdk", "java", "" ]
What is the most efficient way of turning the list of values of a dictionary into an array? For example, if I have a `Dictionary` where `Key` is `String` and `Value` is `Foo`, I want to get `Foo[]` I am using VS 2005, C# 2.0
``` // dict is Dictionary<string, Foo> Foo[] foos = new Foo[dict.Count]; dict.Values.CopyTo(foos, 0); // or in C# 3.0: var foos = dict.Values.ToArray(); ```
Store it in a list. It is easier; ``` List<Foo> arr = new List<Foo>(dict.Values); ``` Of course if you specifically want it in an array; ``` Foo[] arr = (new List<Foo>(dict.Values)).ToArray(); ```
Convert dictionary values into array
[ "", "c#", ".net", "arrays", "generics", "dictionary", "" ]
I'm looking for a technique (javascript, CSS, whatever ???) that will let me control the amount of a string that is displayed. The string is the result of a search (and therefore not initially known). A simple character-count approach is trivial, but not acceptable, as it needs to handle proportional fonts. In other words, if I want to limit to say 70 pixels then the examples below show different character counts (9 and 15) both measuring the same:- Welcome M... Hi Iain if I've ... If you look at Yahoo search results they are able to limit the length of title strings and add ellipsis on the end of long strings to indicate more. (try site:loot.com wireless+keyboard+and+mouse to see an example of Yahoo achieving this) Any Ideas?
Perhaps the CSS property `overflow: hidden;` can help you, in conjuntion with `width`.
Using a span with fixed width, overflow-x:hidden and white-space:nowrap would be a start. To get the ellipsis in a cross-browser scenario will be difficult. IE has text-overflow: ellipsis but that is non-standard. This is emulated with -o-text-overflow in Opera. However Mozilla doesn't have this. The Yahoo JavaScript APIs handle this.
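Putting the two answers together, the CSS would look something like the following; this is a sketch (the 70px width is the example budget from the question, and text-overflow was still vendor-specific in some browsers at the time):

```css
.truncate {
  display: inline-block;
  width: 70px;             /* the pixel budget for the string */
  white-space: nowrap;     /* keep it on one line */
  overflow: hidden;        /* clip whatever doesn't fit */
  text-overflow: ellipsis; /* non-standard then; widely supported now */
}
```

Applied as `<span class="truncate">Welcome Mr Smith</span>`, the browser clips at the pixel width regardless of font metrics, which is exactly what a character count cannot do.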
Limit displayed length of string on web page
[ "", "javascript", "css", "string", "dhtml", "" ]
what is the best way of displaying/using the revision number in a java webapp? we just use ant to build our .war archive, no buildserver or such. i'd hope there was some kind if $ref that i could write in a resource file, but this is only updated when the file in question is committed. i need it globally. what would you recommend? post-commit triggers that update certain files? custom ant scripts? is there a more non-hacky way of doing this? or it it better to have my own version number independent of svn. edit: great suggestions! thanks a lot for the answers!
**There are a couple of Ant tasks that can do this for you.** [SvnAnt task](http://subclipse.tigris.org/svnant.html) from tigris is the oldest. Documentation is [here](http://subclipse.tigris.org/svnant/svn.html#info) - in particular take a look at the `info` element which exposes the Subversion repository's revision number as an Ant property which it calls `rev`. You can write this value to your resources file using the normal Ant substitution mechanisms. Someone has also put up a [similar (simpler) task](http://code.google.com/p/svntask/) on google code hosting - never used it though so can't comment. Either of these seem like the neatest way to me if you already have Ant in your build.
We use the following ant task to include the svn version in an attribute in the jar, along with the version of other packages that are being used ``` <target name="package" depends="compile" description="Package up the project as a jar"> <!-- Create the subversion version string --> <exec executable="svnversion" failifexecutionfails="no" outputproperty="svn.version"> <arg value="."/> <arg value="-n"/> </exec> <!-- Create the time stamp --> <tstamp> <format property="time.date" pattern="HH:mm d-MMMM-yyyy"/> </tstamp> <jar destfile="simfraserv.jar"> <manifest> <attribute name="Built-By" value="${user.name} on ${time.date}" /> <attribute name="Implementation-Version" value="${svn.version}" /> <attribute name="Implementation-Java" value="${java.vendor} ${java.version}" /> <attribute name="Implementation-Build-OS" value="${os.name} ${os.arch} ${os.version}" /> <attribute name="JVM-Version" value="${common.sourcelevel}+" /> </manifest> <fileset dir="bin"> <include name="**/*.class"/> </fileset> <fileset dir="src"> <include name="**"/> </fileset> </jar> </target> ``` And then you can access it in your webapp like this ``` String version = this.getClass().getPackage().getImplementationVersion(); ```
obtain current svn revision in webapp
[ "", "java", "svn", "web-applications", "revision", "" ]
This is a nut I'm cracking these days. The application I'm working on does some advanced processing around SQL. One of the operations selects various metadata on the objects in the current context from different tables, based on the item names in the collection. For this, a range of "select...from...where...in()" is executed, and to prevent malicious SQL code, Sql parameters are used for constructing the contents of the "in()" clause. However, when the item collection for constructing the "in()" clause is larger than 2100 items, this fails due to the Sql Server limitation of max 2100 Sql parameters per query. One approach I'm trying out now is creating a #temp table for storing all item names and then joining the table in the original query, instead of using "where in()". This has me scratching my head on how to populate the table with the item names stored in an Array in the .NET code. Surely, there has to be some bulk way to insert everything rather than issuing a separate "insert into" for each item? Other than that, I'm very much interested in alternative approaches for solving this issue. Thanks a lot
Hrm, without knowing the context and more about the data, how you are using the results, and the performance issues, I will try to suggest an alternative. Could you possibly split into multiple queries? Do the same as you do now, but instead of building a query with 2100+ in items, build two with 1050 in each, and then merge the results.
One potential workaround is to use the ability to query XML and simply send all the data for your 'in' as an xml column and then join on that. The same approach could be used to populate your temp table, but then again, why not just use it directly. Here's a short sample that should illustrate: ``` declare @wanted xml set @wanted = '<ids><id>1</id><id>2</id></ids>' select * from (select 1 Id union all select 3) SourceTable where Id in(select Id.value('.', 'int') from @wanted.nodes('/ids/id') as Foo(Id)) ``` Simply build the xml in your application and pass it as parameter.
Parameterized Sql queries
[ "", "c#", ".net", "sql-server-2005", "ado.net", "" ]
I need to zip and password-protect a file. Is there a good (free) library for this? This needs to be opened by a third party, so the password protection needs to work with standard tools.
UPDATE 2020: There are other choices now, notably [Zip4J](http://www.lingala.net/zip4j/). --- After much searching, I've found three approaches: A freely available set of source code, suitable for a single file zip. However, there is no license. Usage is AesZipOutputStream.zipAndEcrypt(...). <http://merkert.de/de/info/zipaes/src.zip> (<https://forums.oracle.com/forums/thread.jspa?threadID=1526137>) UPDATE: This code is now Apache licensed and released at <https://github.com/mobsandgeeks/winzipaes> (exported from original home at [Google code](http://code.google.com/p/winzipaes/)). It worked for me (one file in the zip), and fills a hole in Java's open source libraries nicely. A commercial product ($500 at the time of writing). I can't verify if this works, as their trial license approach is complex. It's also a ported .NET app: <http://www.nsoftware.com/ipworks/zip/default.aspx> A commercial product ($290 at the time of writing). Suitable only for Windows as it uses a DLL: <http://www.example-code.com/java/zip.asp>
You can try [Zip4j](http://www.lingala.net/zip4j/), a pure java library to handle zip file. It supports encryption/ decryption of PKWare and AES encryption methods. Key features: * Create, Add, Extract, Update, Remove files from a Zip file * Read/Write password protected Zip files * Supports AES 128/256 Encryption * Supports Standard Zip Encryption * Supports Zip64 format * Supports Store (No Compression) and Deflate compression method * Create or extract files from Split Zip files (Ex: z01, z02,...zip) * Supports Unicode file names * Progress Monitor License: * Zip4j is released under [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
Recommendations on a free library to be used for zipping files
[ "", "java", "encryption", "zip", "" ]
How can I do the Ruby method ["Flatten" Ruby Method](http://www.ruby-doc.org/core/classes/Array.html#M002241) in C#. This method flattens a jagged array into a single-dimensional array. For example: ``` s = [ 1, 2, 3 ] #=> [1, 2, 3] t = [ 4, 5, 6, [7, 8] ] #=> [4, 5, 6, [7, 8]] a = [ s, t, 9, 10 ] #=> [[1, 2, 3], [4, 5, 6, [7, 8]], 9, 10] a.flatten #=> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ```
Recursive solution: ``` IEnumerable Flatten(IEnumerable array) { foreach(var item in array) { if(item is IEnumerable) { foreach(var subitem in Flatten((IEnumerable)item)) { yield return subitem; } } else { yield return item; } } } ``` EDIT 1: [Jon](https://stackoverflow.com/users/22656/jon-skeet) explains in the comments why it cannot be a generic method, take a look! EDIT 2: [Matt](https://stackoverflow.com/users/615/matt-hamilton) suggested making it an extension method. Here you go, just replace the first line with: ``` public static IEnumerable Flatten(this IEnumerable array) ``` and you can use it like this: ``` foreach(var item in myArray.Flatten()) { ... } ```
I would have responded in a comment, but I need more than 300 characters. @Alexander's solution is awesome, but it runs into a problem with arrays of strings. Since string implements IEnumerable, I think it will end up returning each character in every string. You can use a generic parameter to tell it what kind of thing you are hoping to have returned in these cases, e.g.: ``` public static IEnumerable Flatten<T>(IEnumerable e) { if (e == null) yield break; foreach (var item in e) { if (item is T) yield return (T)item; else if (item is IEnumerable) { foreach (var subitem in Flatten<T>((IEnumerable)item)) yield return subitem; } else yield return item; } } ```
Flatten Ruby method in C#
[ "", "c#", "ruby", "arrays", "" ]
I have a collection of ClickOnce packages in a publish folder on a network drive and need to move them all to another server (our DR machine). After copy/pasting the whole directory and running the setups on the new machine I get an error message stating that it cannot find the old path: > Activation of > ...MyClickOnceApp.application resulted > in exception. Following failure > messages were detected: > > + Downloading file://oldMachine/c$/MyClickOnceApp.application did not succeed. > > + Could not find a part of the path '\\oldMachine\c$\MyClickOnceApp.application'. Once I change the installation [URL](http://en.wikipedia.org/wiki/Uniform_Resource_Locator) to point at my new machine, I get another error: > Manifest XML signature is not valid. > > + The digital signature of the object did not verify. I've tried using [MageUI.exe](http://msdn.microsoft.com/en-us/library/xhctdw55.aspx), to modify the deployment URL, but it asks for a certificate, which I don't have. What am I doing wrong and how do I successfully move published ClickOnce packages?
I found a solution: Firstly, using MageUI, I changed the "Start Location" under "Deployment Options". On saving, it prompted me to sign with a key, which I created there and then. I then ran the `setup.exe` file, and it worked without fail. After checking which files had changed, I realised it was only the one file: the application manifest file (`myAppName.application`). The only things that changed in the file were the ***deployment provider*** and the ***signature*** (which is what I changed in MageUI). Once I realised this was how to do it, I used the command line version of MageUI called `Mage.exe`, which comes with the SDK. Below is the batch file I created to do all of this on the command line: > REM Set the enviroment > call "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" > > REM Update the deployment provider URL > mage -Update %1.application -pu %2 > > REM Sign the manifest with our key > mage -Sign %1.application -CertFile C:\AppKey.pfx -Password myPw I can now use this to run against all of my published applications in a quick and easy way. I hope this helps.
Without getting into too much detail, this should get you going. ClickOnce manifests must be signed with a certificate for security reasons. You can purchase a code signing certificate or generate a test one. The main drawback of a test certificate is that your application publisher will appear as "Unknown" rather than your company's name. In Visual Studio, open your project's properties and go to the "Signing" tab, select "Sign the ClickOnce manifests", and "Create Test Certificate". Next, click "More Details" to bring up a dialog and click "Install Certificate". This will run you through a wizard to get your test cert in your store. Make sure you put it in the "Personal" store. Now you can use MageUI to edit your manifests. Any time you save it will prompt you to sign the manifests but you should now be able to select the test cert you just stored. Always edit/sign the application manifest before editing/signing the deployment manifest. This is confusing because the application manifest *isn't* the file with the **.application** extension. Good luck!
How to move a ClickOnce deployment package
[ "", "c#", "clickonce", "manifest", "" ]
I'm trying to modify the class of an element if an ajax call based on that element is successful ``` <script type='text/javascript'> $("#a.toggle").click(function(e){ $.ajax({ url: '/changeItem.php', dataType: 'json', type: 'POST', success: function(data,text){ if(data.error=='') { if($(this).hasClass('class1')) { $(this).removeClass('class1'); $(this).addClass('class2'); } else if($(this).hasClass('class2')) { $(this).removeClass('class2'); $(this).addClass('class1'); } } else(alert(data.error)); } }); return false; }); </script> <a class="toggle class1" title='toggle-this'>Item</a> ``` My understanding of the problem is that in the success function *this* references the ajax object parameters, NOT the calling dom element like it does within other places of the click function. So, how do I reference the calling dom element and check / add / remove classes?
You can just store it in a variable. Example: ``` $("#a.toggle").click(function(e) { var target = $(this); $.ajax({ url: '/changeItem.php', dataType: 'json', type: 'POST', success: function(data,text) { if(data.error=='') { if(target.hasClass('class1')) { target .removeClass('class1') .addClass('class2'); } else if(target.hasClass('class2')) { target .removeClass('class2') .addClass('class1'); } } else(alert(data.error)); } }); return false; }); ```
jQuery passes the target of the event, along with some other information about it, to your handler function. See <http://docs.jquery.com/Events_%28Guide%29> for more info about this. In your code, it'd be referenced like $(e.target).
How do I reference the calling dom element in a jquery ajax success function?
[ "", "javascript", "jquery", "" ]
Is there such a thing as unit test generation? If so... ...does it work well? ...What are the auto generation solutions that are available for .NET? ...are there examples of using a technology like this? ...is this only good for certain types of applications, or could it be used to replace all manually written unit testing?
Take a look at [Pex](http://research.microsoft.com/Pex/). It's a Microsoft Research project. From the website: *Pex generates Unit Tests from hand-written Parameterized Unit Tests through Automated Exploratory Testing based on Dynamic Symbolic Execution.* ## UPDATE for 2019: As mentioned in the comments, Pex is now called [IntelliTest](https://learn.microsoft.com/en-us/visualstudio/test/generate-unit-tests-for-your-code-with-intellitest?view=vs-2019) and is a feature of Visual Studio Enterprise Edition. It supports emitting tests in MSTest, MSTest V2, NUnit, and xUnit format and it is extensible so you can use it with other unit test frameworks. But be aware of the following caveats: * Supports only C# code that targets the .NET Framework. * Does not support x64 configurations. * Available in Visual Studio **Enterprise Edition only**
I believe there's no point in unit test generation, as far as TDD goes. You only write unit tests so that you're sure that you (as a developer) are on track with regard to design and specs. Once you start generating tests automatically, it loses that purpose. Sure, it would probably mean 100% code coverage, but that coverage would be senseless and empty. Automatically generated unit tests also mean that your strategy is test-after, which is the opposite of TDD's test-before tenet. Again, TDD is not about tests. That being said, I believe MSTest does have an automatic unit-test generation tool -- I was able to use one with VS2005.
Auto-generation of .NET unit tests
[ "", "c#", ".net", "unit-testing", "automation", "automated-tests", "" ]
This semester, I took a course in computer graphics at my University. At the moment, we're starting to get into some of the more advanced stuff like heightmaps, averaging normals, tessellation, etc. I come from an object-oriented background, so I'm trying to put everything we do into reusable classes. I've had good success creating a camera class, since it depends mostly on the one call to gluLookAt(), which is pretty much independent of the rest of the OpenGL state machine. However, I'm having some trouble with other aspects. Using objects to represent primitives hasn't really been a success for me. This is because the actual render calls depend on so many external things, like the currently bound texture etc. If you suddenly want to change from a surface normal to a vertex normal for a particular class, it causes a severe headache. I'm starting to wonder whether OO principles are applicable in OpenGL coding. At the very least, I think that I should make my classes less granular. What are the Stack Overflow community's views on this? What are your best practices for OpenGL coding?
The most practical approach seems to be to ignore most of OpenGL functionality that is not directly applicable (or is slow, or not hardware accelerated, or is no longer a good match for the hardware). OOP or not, to render some scene these are the various types and entities that you usually have:

**Geometry** (meshes). Most often this is an array of vertices and an array of indices (i.e. three indices per triangle, aka "triangle list"). A vertex can be in some arbitrary format (e.g. only a float3 position; a float3 position + float3 normal; a float3 position + float3 normal + float2 texcoord; and so on and so on). So to define a piece of geometry you need:

* define its vertex format (could be a bitmask, an enum from a list of formats; ...),
* have an array of vertices, with their components interleaved ("interleaved arrays"),
* have an array of triangles.

If you're in OOP land, you could call this class a *Mesh*.

**Materials** - things that define how some piece of geometry *is rendered*. In the simplest case, this could be a color of the object, for example. Or whether lighting should be applied. Or whether the object should be alpha-blended. Or a texture (or a list of textures) to use. Or a vertex/fragment shader to use. And so on, the possibilities are endless. Start by putting things that *you need* into materials. In OOP land that class could be called (surprise!) a *Material*.

**Scene** - you have pieces of geometry, a collection of materials, time to define what is in the scene. In a simple case, each object in the scene could be defined by:

- What geometry it uses (pointer to Mesh),
- How it should be rendered (pointer to Material),
- Where it is located. This could be a 4x4 transformation matrix, or a 4x3 transformation matrix, or a vector (position), quaternion (orientation) and another vector (scale).

Let's call this a *Node* in OOP land.

**Camera**. 
Well, a camera is nothing more than "where it is placed" (again, a 4x4 or 4x3 matrix, or a position and orientation), plus some projection parameters (field of view, aspect ratio, ...). So basically that's it! You have a scene which is a bunch of Nodes which reference Meshes and Materials, and you have a Camera that defines where a viewer is. Now, where to put actual OpenGL calls is a design question only. I'd say, don't put OpenGL calls into Node or Mesh or Material classes. Instead, make something like an *OpenGLRenderer* that can traverse the scene and issue all calls. Or, even better, make something that traverses the scene independent of OpenGL, and put the lower level calls into an OpenGL-dependent class. So yes, all of the above is pretty much platform independent. Going this way, you'll find that glRotate, glTranslate, gluLookAt and friends are quite useless. You have all the matrices already, just pass them to OpenGL. This is how most *real actual code* in real games/applications works anyway. Of course the above can be complicated by more complex requirements. Particularly, Materials can be quite complex. Meshes usually need to support lots of different vertex formats (e.g. packed normals for efficiency). Scene Nodes might need to be organized in a hierarchy (this one can be easy - just add parent/children pointers to the node). Skinned meshes and animations in general add complexity. And so on. But the main idea is simple: there is Geometry, there are Materials, there are objects in the scene. Then some small piece of code is able to render them. In the OpenGL case, setting up meshes would most likely create/activate/modify VBO objects. Before any node is rendered, matrices would need to be set. And setting up a Material would touch most of the remaining OpenGL state (blending, texturing, lighting, combiners, shaders, ...).
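To make the shape of this concrete, here is a minimal C++ sketch of the plain-data types described above. All type, field, and constant names are illustrative assumptions (not from any particular engine), and nothing in it touches OpenGL; only a hypothetical renderer walking `Scene::nodes` would:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical names throughout; adjust to taste.
struct Vec3 { float x, y, z; };

// Bitmask describing which components each vertex carries.
enum VertexFormat : std::uint32_t {
    VF_Position = 1 << 0,
    VF_Normal   = 1 << 1,
    VF_TexCoord = 1 << 2,
};

struct Mesh {
    std::uint32_t format;                // combination of VertexFormat bits
    std::vector<float> vertices;         // interleaved components
    std::vector<std::uint32_t> indices;  // three indices per triangle
};

struct Material {
    Vec3 color;
    bool lit;
    bool alphaBlended;
    int textureId;                       // -1 when untextured
};

struct Node {
    const Mesh* mesh;                    // what geometry it uses
    const Material* material;            // how it should be rendered
    float transform[16];                 // where it is located (4x4 matrix)
};

struct Camera {
    float view[16];                      // where the viewer is placed
    float fovDegrees;
    float aspect;
};

struct Scene {
    std::vector<Node> nodes;
    Camera camera;
};
```

A renderer class would then take a `Scene` and issue the actual API calls, keeping these types API-independent.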
**Object transformations** Avoid depending on OpenGL to do your transformations. Often, tutorials teach you how to play with the transformation matrix stack. I would not recommend using this approach, since you may need some matrix later that will only be accessible through this stack, and reading it back is slow because the GPU bus is designed to be fast from CPU to GPU but not the other way. **Master object** A 3D scene is often thought of as a tree of objects in order to know object dependencies. There is a debate about what should be at the root of this tree, a list of objects or a master object. I advise using a master object. While it does not have a graphical representation, it will be simpler because you will be able to use recursion more effectively. **Decouple scene manager and renderer** I disagree with @ejac that you should have a method on each object doing OpenGL calls. Having a separate Renderer class browsing your scene and doing all the OpenGL calls will help you decouple your scene logic and OpenGL code. This adds some design difficulty but will give you more flexibility if you ever have to change from OpenGL to DirectX or any other API.
What are some best practices for OpenGL coding (esp. w.r.t. object orientation)?
[ "", "c++", "opengl", "oop", "" ]
Whilst refactoring some code I came across some getter methods that returns a std::string. Something like this for example: ``` class foo { private: std::string name_; public: std::string name() { return name_; } }; ``` Surely the getter would be better returning a `const std::string&`? The current method is returning a copy which isn't as efficient. Would returning a const reference instead cause any problems?
The only way this can cause a problem is if the caller stores the reference, rather than copying the string, and tries to use it after the object is destroyed. Like this:

```
foo *pFoo = new foo;
const std::string &myName = pFoo->name();
delete pFoo;
cout << myName; // error! dangling reference
```

However, since your existing function returns a copy, you would not break any of the existing code. **Edit:** Modern C++ (i.e. C++11 and up) supports [Return Value Optimization](https://en.wikipedia.org/wiki/Copy_elision), so returning things by value is no longer frowned upon. One should still be mindful of returning extremely large objects by value, but in most cases it should be ok.
Actually, another issue **specifically** with returning a string *not* by reference is the fact that `std::string` provides access via pointer to an internal `const char*` via the [c\_str()](http://www.cplusplus.com/reference/string/string/c_str/) method. This has caused me many hours of debugging headache. For instance, let's say I want to get the name from foo, and pass it to JNI to be used to construct a jstring to pass into Java later on, and that `name()` is returning a copy and not a reference. I might write something like this:

```
foo myFoo = getFoo(); // Get the foo from somewhere.
const char* fooCName = myFoo.name().c_str(); // Whoops! myFoo.name() creates a temporary that's destructed as soon as this line executes!
jniEnv->NewStringUTF(fooCName); // No good, fooCName was released when the temporary was deleted.
```

If your caller is going to be doing this kind of thing, it might be better to use some type of smart pointer, or a const reference, or at the very least have a nasty warning comment header over your foo.name() method. I mention JNI because former Java coders might be particularly vulnerable to this type of method chaining that may seem otherwise harmless.
Returning a const reference to an object instead of a copy
[ "", "c++", "constants", "" ]
I truly love VIM - it's one of only a handful of applications I've ever come across that make you feel warm and fuzzy inside. However, for PHP development, I still use PDT Eclipse although I would love to switch. The reason I can't quite switch at the moment is the CTRL+SPACE code-assist functionality that I rely on so much - it's so useful, especially when type hinting, or using PHPDoc variable comment blocks. I know there are cool plugins for VIM out there that can probably replicate this functionality and then some - but what are they?
Vim has [OmniCompletion](http://vimdoc.sourceforge.net/htmldoc/version7.html#new-omni-completion) built in, you should add this to your .vimrc: ``` filetype plugin on au FileType php set omnifunc=phpcomplete#CompletePHP ``` In addition I recommend you this plugins: * [VTreeExplorer](http://www.vim.org/scripts/script.php?script_id=184) * [snippetsEmu](http://www.vim.org/scripts/script.php?script_id=1318) and also take a look to this [article](http://tech.blog.box.net/2007/06/20/how-to-debug-php-with-vim-and-xdebug-on-linux/) about php debugging in Vim, and this [paper](http://www.scribd.com/doc/263139/VIM-for-PHP-Programmers), it has many useful tips for using Vim in PHP Development.
Code assist is a new feature of VIM 7. [Ctrl+x] [Ctrl+o] will auto-complete your code or open a popup of options. More features of VIM 7 can be found in this [blogpost](http://linuxhelp.blogspot.com/2006/09/visual-walk-through-of-couple-of-new.html). Btw, I too like vim, but PDT (and other IDEs) have many more features than code assist, which makes me prefer them over it.
Converting from Eclipse PDT to Vim
[ "", "php", "eclipse", "vim", "eclipse-pdt", "" ]
I am currently having a hard time understanding and implementing events in C# using delegates. I am used to the Java way of doing things:

1. Define an interface for a listener type which would contain a number of method definitions
2. Define an adapter class for that interface to make things easier if I'm not interested in all the events defined in a listener
3. Define Add, Remove and Get[] methods in the class which raises the events
4. Define protected fire methods to do the dirty work of looping through the list of added listeners and calling the correct method

This I understand (and like!) - I know I could do this exactly the same in c#, but it seems that a new (better?) system is in place for c#. After reading countless tutorials explaining the use of delegates and events in c# I still am no closer to really understanding what is going on :S --- In short, for the following methods how would I implement the event system in c#:

```
void computerStarted(Computer computer);
void computerStopped(Computer computer);
void computerReset(Computer computer);
void computerError(Computer computer, Exception error);
```

^ The above methods are taken from a Java application I once made which I'm trying to port over to c#. Many many thanks!
You'd create four events, and methods to raise them, along with a new EventArgs-based class to indicate the error:

```
public class ExceptionEventArgs : EventArgs
{
    private readonly Exception error;

    public ExceptionEventArgs(Exception error)
    {
        this.error = error;
    }

    public Exception Error
    {
        get { return error; }
    }
}

public class Computer
{
    public event EventHandler Started = delegate{};
    public event EventHandler Stopped = delegate{};
    public event EventHandler Reset = delegate{};
    public event EventHandler<ExceptionEventArgs> Error = delegate{};

    protected void OnStarted()
    {
        Started(this, EventArgs.Empty);
    }

    protected void OnStopped()
    {
        Stopped(this, EventArgs.Empty);
    }

    protected void OnReset()
    {
        Reset(this, EventArgs.Empty);
    }

    protected void OnError(Exception e)
    {
        Error(this, new ExceptionEventArgs(e));
    }
}
```

Classes would then subscribe to the event using either a method or an anonymous function:

```
someComputer.Started += StartEventHandler; // A method
someComputer.Stopped += delegate(object o, EventArgs e)
    { Console.WriteLine("{0} has stopped", o); };
someComputer.Reset += (o, e) => Console.WriteLine("{0} has been reset", o);
```

A few things to note about the above:

* The OnXXX methods are protected so that derived classes can raise the events. This isn't always necessary - do it as you see fit.
* The `delegate{}` piece on each event declaration is just a trick to avoid having to do a null check. It's subscribing a no-op event handler to each event.
* The event declarations are *field-like events*. What's actually being created is both a variable *and* an event. Inside the class you see the variable; outside the class you see the event.

See my [events/delegates](http://pobox.com/~skeet/csharp/events.html) article for much more detail on events.
You'll have to define a single delegate for that:

```
public delegate void ComputerEvent(object sender, ComputerEventArgs e);
```

ComputerEventArgs would be defined like this:

```
public class ComputerEventArgs : EventArgs
{
    // TODO wrap in properties
    public Computer computer;
    public Exception error;

    public ComputerEventArgs(Computer aComputer, Exception anError)
    {
        computer = aComputer;
        error = anError;
    }

    public ComputerEventArgs(Computer aComputer) : this(aComputer, null)
    {
    }
}
```

The class that fires the events would have these:

```
public class YourClass
{
    ...
    public event ComputerEvent ComputerStarted;
    public event ComputerEvent ComputerStopped;
    public event ComputerEvent ComputerReset;
    public event ComputerEvent ComputerError;
    ...
}
```

This is how you assign handlers to the events:

```
YourClass obj = new YourClass();
obj.ComputerStarted += new ComputerEvent(ComputerStartedEventHandler);
```

Your handler is:

```
private void ComputerStartedEventHandler(object sender, ComputerEventArgs e)
{
    // do your thing.
}
```
C# event handling (compared to Java)
[ "", "c#", "event-handling", "delegates", "" ]
When I have a many-to-many relation with NHibernate and let NHibernate generate my DB schema, it adds an additional table that contains the primary keys of the related entities. Is it possible to add additional fields to this table and access them without having to hassle around with SQL manually?
I don't think that's possible. If you are saying that the relation has some state, then in essence it is an object in its own right and should be treated (mapped) as such.
Agree with Jasper. What you are modeling in that case is not a relation but an entity itself, with 1-N and N-1 relations to the other two entities. It is not that NHibernate cannot handle it, it is that you simply cannot model it as a plain many-to-many.
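To make the entity approach concrete, here is a sketch of what the mapping might look like in hbm.xml form. Every class, table, and column name below is a made-up assumption for illustration; the point is that the former link table becomes its own mapped class with room for extra fields:

```xml
<!-- Hypothetical names throughout; adjust to your model. -->
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="MyApp" namespace="MyApp.Domain">
  <!-- The former many-to-many link table, now an entity of its own. -->
  <class name="Membership" table="UserGroup">
    <id name="Id" column="Id">
      <generator class="native" />
    </id>
    <!-- The two ends of what used to be the many-to-many. -->
    <many-to-one name="User" column="UserId" />
    <many-to-one name="Group" column="GroupId" />
    <!-- The additional fields you wanted on the relation. -->
    <property name="JoinedOn" column="JoinedOn" />
  </class>
</hibernate-mapping>
```

Each side then holds a one-to-many collection of `Membership` instead of a many-to-many bag of the other entity.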
additional fields in NHibernate many-to-many relation tables
[ "", "c#", ".net", "nhibernate", "" ]
I've used ZenTest and autotest to work on Ruby projects before, and I'm used to using them for test-driven development a la [this configuration](http://blog.internautdesign.com/2006/11/12/autotest-growl-goodness). I have a project that I'm currently working on in Java, and I was wondering if there is something similar in the Java world to achieve the same effect.
Might I also suggest Infinitest? It is under active development and works with other languages besides Java. I believe it works fine with Scala, but I haven't had much luck using it with Groovy. It is free for personal use and is being developed by Improving.
I use JUnit Max, which is an Eclipse plugin written by Kent Beck.
Is there something like ZenTest/Autotest for Java and JUnit
[ "", "java", "ruby", "unit-testing", "tdd", "autotest", "" ]
I find it annoying that I can't clear a list. In this example:

```
a = []
a.append(1)
a.append(2)
a = []
```

The second time I initialize a to a blank list, it creates a new instance of a list, which is in a different place in memory, so I can't use it to reference the first, not to mention it's inefficient. The only way I can see of retaining the same pointer is doing something like the following:

```
for i in range(len(a)):
    a.pop()
```

This seems pretty long-winded, though. Is there a better way of solving this?
You are looking for: ``` del L[:] ```
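A quick sketch of the difference between clearing in place and rebinding (the variable names are arbitrary):

```python
a = [1, 2, 3]
b = a             # a second reference to the same list object
del a[:]          # clears the list in place
assert a is b and b == []

a = [1, 2, 3]
b = a
a = []            # rebinds the name; b still sees the old list
assert a is not b and b == [1, 2, 3]
```

The same in-place effect can also be had with `a[:] = []`.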
I'm not sure why you're worried about the fact that you're referencing a new, empty list in memory instead of the same "pointer". Your other list is going to be collected sooner or later and one of the big perks about working in a high level, garbage-collected language is that you don't normally need to worry about stuff like this.
Clearing a list
[ "", "python", "memory-management", "refactoring", "list", "" ]
Using jQuery, how do you check if there is an option selected in a select menu, and if not, assign one of the options as selected. (The select is generated with a maze of PHP functions in an app I just inherited, so this is a quick fix while I get my head around those :)
While I'm not sure about exactly what you want to accomplish, this bit of code worked for me. ``` <select id="mySelect" multiple="multiple"> <option value="1">First</option> <option value="2">Second</option> <option value="3">Third</option> <option value="4">Fourth</option> </select> <script type="text/javascript"> $(document).ready(function() { if (!$("#mySelect option:selected").length) { $("#mySelect option[value='3']").attr('selected', 'selected'); } }); </script> ```
No need to use jQuery for this:

```
var foo = document.getElementById('yourSelect');
if (foo) {
    if (foo.selectedIndex === -1) { // -1 means no option is selected
        foo.selectedIndex = 0;
    }
}
```
Check if option is selected with jQuery, if not select a default
[ "", "javascript", "jquery", "forms", "dom", "html-select", "" ]
I have a server written in Java that runs as a Windows service (thanks to Install4J). I want this service to be able to download the latest version of the JAR file it runs from, and start running the new code. The hitch is that I don't want the Windows service to fully exit. Ideally, I would accomplish this by a unix-style exec() call to stop the current version and run the new one. How can I best accomplish this?
Here is a complicated, but portable, way. Split your code into two jars. One very small jar is there just to manage process startup. It creates a ClassLoader that holds the other jar on its classpath. When you want to load a new version, you terminate all threads running code from the old jar. Null out all references to instances of classes from the old jar. Null out all references to the ClassLoader that loaded the old jar. At this point, if you didn't miss anything, the old classes and ClassLoader should be eligible for garbage collection. Now you start over with a new ClassLoader instance pointing at the new jar, and restart your application code.
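As a rough, self-contained sketch of the two-jar pattern: the snippet below compiles a tiny stand-in "application" class at runtime (so the example can run without a prebuilt jar; this requires a JDK, since `ToolProvider` needs javac) and loads each version through a fresh `URLClassLoader`. In a real service, the directory would instead hold the jar your download step fetched, and all names here are hypothetical:

```java
import javax.tools.ToolProvider;
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReloadDemo {

    // Stand-in for "download the new version": writes and compiles a tiny class.
    static void writeAndCompile(Path dir, String message) throws IOException {
        Path src = dir.resolve("App.java");
        String code = "public class App { public static String run() { return \""
                + message + "\"; } }";
        Files.write(src, code.getBytes("UTF-8"));
        // Compiles App.class next to the source; null on a JRE without javac.
        ToolProvider.getSystemJavaCompiler().run(null, null, null, src.toString());
    }

    // Loads the current version of App through a fresh class loader, runs it,
    // then closes the loader so the old classes become eligible for collection.
    static String loadAndRun(Path dir) throws Exception {
        try (URLClassLoader loader = new URLClassLoader(
                new URL[] { dir.toUri().toURL() }, ReloadDemo.class.getClassLoader())) {
            Class<?> app = Class.forName("App", true, loader);
            return (String) app.getMethod("run").invoke(null);
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("reload-demo");
        writeAndCompile(dir, "version 1");
        System.out.println(loadAndRun(dir));   // the "old" code
        writeAndCompile(dir, "version 2");     // pretend we downloaded an update
        System.out.println(loadAndRun(dir));   // the "new" code, same process
    }
}
```

The small bootstrap jar described above would play the role of `ReloadDemo`, looping and reloading whenever a new application jar arrives.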
As far as I know, there is no way to do this in Java. I suppose you could work around it by using the Java [`Runtime.exec`](http://java.sun.com/javase/6/docs/api/java/lang/Runtime.html#exec(java.lang.String)) or [`ProcessBuilder`](http://java.sun.com/javase/6/docs/api/java/lang/ProcessBuilder.html)'s start() method (which start new processes) and then letting the current one end... the docs state

> The subprocess is not killed when
> there are no more references to the
> Process object, but rather the
> subprocess continues executing
> asynchronously.

I'm assuming the same is true if the parent finishes and is garbage collected. The catch is that Runtime.exec's process will no longer have valid in, out, and err streams.
How can I replace the current Java process, like a unix-style exec?
[ "", "java", "windows-services", "" ]
Inside a service, what is the best way to determine a special folder path (e.g., "My Documents") for a specific user? `SHGetFolderPath` allows you to pass in a token, so I am assuming there is some way to impersonate the user whose folder you are interested in. Is there a way to do this based just on a username? If not, what is the minimum amount of information you need for the user account? I would rather not have to require the user's password. (Here is a [related question](https://stackoverflow.com/questions/131716/get-csidllocalappdata-path-for-any-user-on-windows).)
I would mount the user's registry hive and look for the path value. Yes, it's a sub-optimal solution, for all the reasons mentioned (poor forwards compatibility, etc.). However, like many other things in Windows, MS didn't provide an API way to do what you want to do, so it's the best option available. You can get the SID (not GUID) of the user by using [LookupAccountName](http://msdn.microsoft.com/en-us/library/aa379159.aspx). You can load the user's registry hive using [LoadUserProfile](http://msdn.microsoft.com/en-us/library/bb762281.aspx), but unfortunately this also requires a user token, which is going to require their password. Fortunately, you can manually load the hive using [RegLoadKey](http://msdn.microsoft.com/en-us/library/ms724889(VS.85).aspx) into an arbitrary location, read the data, and unload it (I think). Yes, it's a pain, and yes, it's probably going to break in future versions of Windows. Perhaps by that time MS will have provided an API to do it, back-ported it into older versions of Windows, and distributed it automatically through Windows update... but I wouldn't hold my breath. P.S. This information intended to augment the information provided in your related question, including disclaimers.
Please, do not go into the registry to find this information. That location might change in future versions of Windows. Use SHGetFolderPath instead. <http://msdn.microsoft.com/en-us/library/bb762181(VS.85).aspx> Edit: It looks like [LogonUser](http://msdn.microsoft.com/en-us/library/aa378184(VS.85).aspx) will provide the token for the other user that you need.
How can I get the path of a Windows "special folder" for a specific user?
[ "", "c++", "windows", "winapi", "special-folders", "" ]