Q: Can anyone recommend a good reference for setting up Hibernate3 with Eclipse? I've looked at some of the Eclipse and Hibernate tutorials, and the ones for Hibernate2 are pretty good, pretty intuitive. Recently I tried to set up Hibernate3, using the Eclipse plugin, and failed to get the Hibernate tools to work, outside of physically moving the jar files from the plugins directory to my lib directory (I shouldn't have to do this!) A: I'm not sure if you are still needing an answer to your question, but our very own stackoverflow website probably has the solution you are looking for. If that doesn't fully answer your question, then this might do the trick: "Hibernate and Eclipse Integration", from the linked Hibernate website: In the MANIFEST.MF file of the Hibernate plugin (which NEEDS the buddy loading), such as org.hibernate.eclipse, add the line: Eclipse-BuddyPolicy: registered and in the MANIFEST.MF file of your plugin project or RCP project, add the line: Eclipse-RegisterBuddy: org.hibernate.eclipse It is important to notice the syntax: our plugin declares that it is willing to be seen by the Hibernate library, using Eclipse-RegisterBuddy, and Hibernate registers itself with Eclipse-BuddyPolicy. While this is stated clearly in the Eclipse Help (in retrospect!), it is critical to get the syntax precisely correct. Finally, if you are using HibernateUtil as your main entry point into Hibernate, then in your plugin start method add the line: Class.forName("myPlugin.HibernateUtil"); // full class name should go here This works - the assumption is that hibernate.cfg.xml is in the src directory of your plugin and that this is on the classpath. Hope this helps you out.
{ "language": "en", "url": "https://stackoverflow.com/questions/159227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Visual Studio 2008: is there any way to turn on the Class Name/Method Name drop-downs in C#? (like VB.NET has) I'm working in .NET 3.5 SP1 in C# on an ASP.NET solution, and I'm wondering if there's any way to turn on the Class Name and Method Name drop-downs that VB.NET has at the top of the text editor. It's one of the few things from VB that I actually miss. Edit: Also, is there any way to get the drop-downs to be populated with the possible events? e.g. (Page Events) | (Declarations) A: Go to: Tools -> Options -> Text Editor -> C# -> General -> Navigation Bar. Make sure it is checked, and that should show something at the top of your code that lists all the classes and methods in your file. A: In Visual Studio 2008 (and probably earlier versions): Tools -> Options -> Text Editor -> C#(*) -> General -> Navigation bar (*) or your preferred editor language A: Try [Tools] -> [Options] -> [Text Editor] -> [C#] -> [General] -> [Check "Navigation Bar"] A: Note that the lists don't fill in until the cursor is within the namespace/class.
{ "language": "en", "url": "https://stackoverflow.com/questions/159237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: RAII vs. exceptions The more we use RAII in C++, the more we find ourselves with destructors that do non-trivial deallocation. Now, deallocation (finalization, however you want to call it) can fail, in which case exceptions are really the only way to let anybody upstairs know of our deallocation problem. But then again, throwing destructors are a bad idea because of the possibility of exceptions being thrown during stack unwinding. std::uncaught_exception() lets you know when that happens, but not much more, so aside from letting you log a message before termination there's not much you can do, unless you're willing to leave your program in an undefined state, where some stuff is deallocated/finalized and some is not. One approach is to have no-throw destructors. But in many cases that just hides a real error. Our destructor might, for example, be closing some RAII-managed DB connections as a result of some exception being thrown, and those DB connections might fail to close. This doesn't necessarily mean we're OK with the program terminating at this point. On the other hand, logging and tracing these errors isn't really a solution for every case; otherwise we would have had no need for exceptions to begin with. With no-throw destructors we also find ourselves having to create "reset()" functions that are supposed to be called before destruction - but that just defeats the whole purpose of RAII. Another approach is just to let the program terminate, as it's the most predictable thing you can do. Some people suggest chaining exceptions, so that more than one error can be handled at a time. But I've honestly never actually seen that done in C++, and I've no idea how to implement such a thing. So it's either RAII or exceptions. Isn't it? I'm leaning toward no-throw destructors, mainly because it keeps things simple(r). But I really hope there's a better solution, because, as I said, the more we use RAII, the more we find ourselves using dtors that do non-trivial things. 
Appendix I'm adding links to interesting on-topic articles and discussions I've found:

* Throwing Destructors
* StackOverflow discussion on the problems with SEH
* StackOverflow discussion on throwing destructors (thanks, Martin York)
* Joel on Exceptions
* SEH Considered Harmful
* CLR Exception Handling, which also touches on exception chaining
* Herb Sutter on std::uncaught_exception and why it's not as useful as you think
* Historical discussion on the matter with interesting participants (long!)
* Stroustrup explaining RAII
* Andrei Alexandrescu's Scope Guard

A: From the original question: Now, deallocation (finalization, however you want to call it) can fail, in which case exceptions are really the only way to let anybody upstairs know of our deallocation problem Failure to clean up a resource indicates one of:

* Programmer error, in which case you should log the failure, followed by notifying the user or terminating the application, depending on the application scenario. For example, freeing an allocation that has already been freed.
* Allocator bug or design flaw. Consult the documentation. Chances are the error is there to help diagnose programmer errors. See item 1 above.
* Otherwise unrecoverable adverse condition that can be continued. For example, the C++ free store has a no-fail operator delete. Other APIs (such as Win32) provide error codes, but will only fail due to programmer error or hardware fault, with errors indicating conditions like heap corruption, or double free, etc.

As for unrecoverable adverse conditions, take the DB connection. If closing the connection failed because the connection was dropped -- cool, you're done. Don't throw! A dropped connection (should) result in a closed connection, so there's no need to do anything else. If anything, log a trace message to help diagnose usage issues. 
Example:

    class DBCon {
    public:
        DBCon() { handle = fooOpenDBConnection(); }

        ~DBCon() {
            int err = fooCloseDBConnection();
            if (err) {
                if (err == E_fooConnectionDropped) {
                    // do nothing. must have timed out
                } else if (fooIsCriticalError(err)) {
                    // critical errors aren't recoverable. log, save
                    // restart information, and die
                    std::clog << "critical DB error: " << err << "\n";
                    save_recovery_information();
                    std::terminate();
                } else {
                    // log, in case we need to gather this info in the future,
                    // but continue normally.
                    std::clog << "non-critical DB error: " << err << "\n";
                }
            }
            // done!
        }
    };

None of these conditions justify attempting a second kind of unwind. Either the program can continue normally (including exception unwind, if unwind is in progress), or it dies here and now. Edit/Add: If you really want to be able to keep some sort of link to those DB connections that can't close -- perhaps they failed to close due to intermittent conditions, and you'd like to retry later -- then you can always defer cleanup:

    vector<DBHandle> to_be_closed_later; // startup reserves space

    DBCon::~DBCon() {
        int err = fooCloseDBConnection();
        if (err) {
            // ... handle the other error kinds as above ...
            if (fooIsRetryableError(err)) {
                try {
                    to_be_closed_later.push_back(handle);
                } catch (const bad_alloc&) {
                    std::clog << "could not close connection, err " << err << "\n";
                }
            }
        }
    }

Not pretty, but it might get the job done for you. A: You're looking at two things:

* RAII, which guarantees that resources are cleaned up when scope is exited.
* Completing an operation and finding out whether it succeeded or not.

RAII promises that it will complete the operation (free memory, close the file having attempted to flush it, end a transaction having attempted to commit it). But because it happens automatically, without the programmer having to do anything, it doesn't tell the programmer whether those operations it "attempted" succeeded or not. 
Exceptions are one way to report that something failed, but as you say, there's a limitation of the C++ language that means they aren't suitable to do that from a destructor[*]. Return values are another way, but it's even more obvious that destructors can't use those either. So, if you want to know whether your data was written to disk, you can't use RAII for that. It does not "defeat the whole purpose of RAII", since RAII will still try to write it, and it will still release the resources associated with the file handle (DB transaction, whatever). It does limit what RAII can do -- it won't tell you whether the data was written or not, so for that you need a close() function that can return a value and/or throw an exception. [*] It's quite a natural limitation too, present in other languages. If you think RAII destructors should throw exceptions to say "something has gone wrong!", then something has to happen when there's already an exception in flight, that is "something else has gone wrong even before that!". Languages that I know that use exceptions don't permit two exceptions in flight at once - the language and syntax simply don't allow for it. If RAII is to do what you want, then exceptions themselves need to be redefined so that it makes sense for one thread to have more than one thing going wrong at a time, and for two exceptions to propagate outward and two handlers to be called, one to handle each. Other languages allow the second exception to obscure the first, for example if a finally block throws in Java. C++ pretty much says that the second one must be suppressed, otherwise terminate is called (suppressing both, in a sense). In neither case are the higher stack levels informed of both faults. 
What is a bit unfortunate is that in C++ you can't reliably tell whether one more exception is one too many (uncaught_exception doesn't tell you that, it tells you something different), so you can't even throw in the case where there isn't already an exception in flight. But even if you could do it in that case, you'd still be stuffed in the case where one more is one too many. A: It reminds me of a question from a colleague when I explained the exception/RAII concepts to him: "Hey, what exception can I throw if the computer's switched off?" Anyway, I agree with Martin York's answer RAII vs. exceptions What's the deal with Exceptions and Destructors? A lot of C++ features depend on non-throwing destructors. In fact, the whole concept of RAII and its cooperation with code branching (returns, throws, etc.) is based on the fact that deallocation won't fail. In the same way, some functions are not supposed to fail (like std::swap) when you want to offer high exception guarantees to your objects. Note that this doesn't mean you can't throw exceptions through destructors - just that the language won't even try to support this behaviour. What would happen if it were allowed? Just for fun, I tried to imagine it... In the case your destructor fails to free your resource, what will you do? Your object is probably half destructed; what would you do from an "outside" catch with that info? Try again? (if yes, then why not try again from within the destructor?...) That is, if you could even access your half-destructed object anyway: What if your object is on the stack (which is the basic way RAII works)? How can you access an object outside its scope? Sending the resource inside the exception? Your only hope would be to send the "handle" of the resource inside the exception and hope the code in the catch will, well... try again to deallocate it (see above)? 
Now, imagine something funny:

    void doSomething()
    {
        try
        {
            MyResource A, B, C, D, E;

            // do something with A, B, C, D and E

            // Now we quit the scope...
            // destruction of E, then D, then C, then B and then A
        }
        catch (const MyResourceException & e)
        {
            // Do something with the exception...
        }
    }

Now, let's imagine that for some reason the destructor of D fails to deallocate the resource. You coded it to send an exception, which will be caught by the catch. Everything goes well: you can handle the failure the way you want (how you would do so in a constructive way still eludes me, but then, that is not the problem now). But... Sending MULTIPLE resources inside MULTIPLE exceptions? Now, if ~D can fail, then ~C can, too, as well as ~B and ~A. With this simple example, you have 4 destructors which failed at the "same moment" (quitting the scope). What you need is not a catch with one exception, but a catch with an array of exceptions (let's hope the code generated for this does not... er... throw):

    catch (const std::vector<MyResourceException> & e)
    {
        // Do something with the vector of exceptions...
        // Let's hope it was not caused by an out-of-memory problem
    }

Let's get retarded (I like this music...): Each exception thrown is a different one (because the cause is different: remember that in C++, exceptions need not derive from std::exception). Now, you need to simultaneously handle four exceptions. How could you write catch clauses handling the four exceptions by their types, and by the order they were thrown? And what if you have multiple exceptions of the same type, thrown by multiple failed deallocations? And what if, when allocating the memory for the array of exceptions, your program goes out of memory and, er... throws an out-of-memory exception? Are you sure you want to spend time on this kind of problem instead of spending it figuring out why the deallocation failed or how to react to it in another way? 
Apparently, the C++ designers did not see a viable solution, and just cut their losses there. The problem is not RAII vs Exceptions... No, the problem is that sometimes, things can fail so much that nothing can be done. RAII works well with Exceptions, as long as some conditions are met. Among them: The destructors won't throw. What you are seeing as an opposition is just a corner case of a single pattern combining two "names": Exception and RAII. In the case a problem happens in the destructor, we must accept defeat, and salvage what can be salvaged: "The DB connection failed to be deallocated? Sorry. Let's at least avoid this memory leak and close this File." While the exception pattern is (supposed to be) the main error handling mechanism in C++, it is not the only one. You should handle exceptional (pun intended) cases where C++ exceptions are not a solution by using other error/log mechanisms. Because you just met a wall in the language, a wall no other language that I know of or heard of went through correctly without bringing down the house (C#'s attempt was a worthy one, while Java's is still a joke that hurts me on the side... I won't even speak about scripting languages, which will fail on the same problem in the same silent way). But in the end, no matter how much code you write, you won't be protected from the user switching the computer off. The best you can do, you already wrote. My own preference goes with a throwing finalize method, a non-throwing destructor cleaning up resources not finalized manually, and a log/messagebox (if possible) to alert about the failure in the destructor. Perhaps you're not putting up the right duel. Instead of "RAII vs. Exception", it should be "Trying to free resources vs. 
Resources that absolutely don't want to be freed, even when threatened by destruction" :-) A: One thing I would ask is, ignoring the question of termination and so on, what do you think an appropriate response is if your program can't close its DB connection, either due to normal destruction or exceptional destruction? You seem to rule out "merely logging" and are disinclined to terminate, so what do you think is the best thing to do? I think if we had an answer to that question then we would have a better idea of how to proceed. No strategy seems particularly obvious to me; apart from anything else, I don't really know what it means for closing a database connection to throw. What is the state of the connection if close() throws? Is it closed, still open, or indeterminate? And if it's indeterminate, is there any way for the program to revert to a known state? A destructor failing means that there was no way to undo the creation of an object; the only way to return the program to a known (safe) state is to tear down the entire process and start over. A: You SHOULD NOT throw an exception out of a destructor. Note: Updated to reflect changes in the standard: In C++03, if an exception is already propagating then the application will terminate. In C++11, if the destructor is noexcept (the default) then the application will terminate. The following is based on C++11: If an exception escapes a noexcept function, it is implementation defined whether the stack is even unwound. The following is based on C++03: By terminate I mean stop immediately. Stack unwinding stops. No more destructors are called. All bad stuff. See the discussion here: throwing exceptions out of a destructor I don't follow (as in disagree with) your logic that this causes the destructor to get more complicated. With the correct usage of smart pointers this actually makes the destructor simpler, as everything now becomes automatic. Each class tidies up its own little piece of the puzzle. 
No brain surgery or rocket science here. Another big win for RAII. As for the possibility of std::uncaught_exception(), I point you at Herb Sutter's article about why it does not work. A: What are the reasons why your destruction might fail? Why not look to handling those before actually destructing? For example, closing a database connection may fail because:

* A transaction is in progress. (Check std::uncaught_exception() - if true, rollback, else commit - these are the most likely desired actions unless you have a policy that says otherwise, before actually closing the connection.)
* The connection is dropped. (Detect and ignore. The server will roll back automatically.)
* Other DB error. (Log it so we can investigate and possibly handle appropriately in the future. Which may be to detect and ignore. In the meantime, try rollback and disconnect again and ignore all errors.)

If I understand RAII properly (which I might not), the whole point is its scope. So it's not like you WANT transactions lasting longer than the object anyway. It seems reasonable to me, then, that you want to ensure closure as best you can. RAII doesn't make this unique - even without objects at all (say in C), you would still try to catch all error conditions and deal with them as best as you can (which is sometimes to ignore them). All RAII does is force you to put all that code in a single place, no matter how many functions are using that resource type. A: You can tell whether there is currently an exception in flight (e.g. we are between the throw and catch block, performing stack unwinding, perhaps copying exception objects, or similar) by checking bool std::uncaught_exception(). If it returns true, throwing at this point will terminate the program; if not, it's safe to throw (or at least as safe as it ever is). This is discussed in Sections 15.2 and 15.5.3 of ISO 14882 (the C++ standard). 
This doesn't answer the question of what to do when you hit an error while cleaning up an exception, but there really aren't any good answers to that. But it does let you distinguish between normal exit and exceptional exit if you want to do something different (like log & ignore it) in the latter case, rather than simply panicking. A: If one really needs to deal with some errors during the finalization process, it should not be done within the destructor. Instead, use a separate function that returns an error code or may throw. To reuse the code, you can call this function inside the destructor, but you must not allow the exception to leak out. As some people mentioned, it is not really resource deallocation, but something like a resource commit during exit. As other people mentioned, what can you do if saving fails during a forced power-off? There are probably no all-satisfying answers, but I would suggest one of the following approaches:

* Just allow the failure and loss to happen
* Save the unsaved part somewhere else and allow the recovery to happen later (see the other approach if this does not work either)

If you do not like either of these approaches, make your user explicitly save. Tell them not to rely on the auto-save during a power-off.
{ "language": "en", "url": "https://stackoverflow.com/questions/159240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: .net framework version req'd for MS Sync Framework Can anyone tell me what version of the .NET Framework (CLR and BCL) is required for the recently-released MS Sync Framework (for support of occasionally-connected applications)? It's listed as Sync Framework V1 for ADO.NET v2.0, but none of its listed requirements say anything about the .NET Framework version that is required to support it. Anyone got any experience with this...? A: Sync Services for ADO.NET 2.0 requires ADO.NET 2.0 on the server. Sync Services requires ADO.NET 2.0 for desktop clients or .NET Compact Framework 2.0 Service Pack 2 for device clients. So, you should have .NET Framework 2.0 or above. A: I'm pretty sure it will work with 2.0 and newer. Remember, 3.5 and 3.0 are still running on the 2.0 CLR. You get everything you need for Sync within the Sync namespaces, and I think that those are not dependent on any 3.0 or 3.5 language features. A: If it specifies ADO.NET 2.0, then you'll need at least .NET 2.0. ADO.NET changed a lot between 1.1 and 2.0.
{ "language": "en", "url": "https://stackoverflow.com/questions/159242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Pitch identification in Linux Is there any free software tool or combination that allows me to identify the pitch of a recorded singing session? The idea is to display some kind of graph with the current pitch on a timeline, along with markers for the standard notes (C3, C#3, D, etc). I don't need pitch correction and I don't need it to be done in real time, either. I know that once there was a plugin for Rosegarden that did that, but it has gone missing. A: Check out Audacity. It came out of a project to do musical pitch analysis. A: Not exactly what you are looking for, but the Singstar lookalike Ultrastar-NG at least does something like this. http://ultrastar-ng.sourceforge.net/ A: I'm unaware of any software package that has this built in. If you're interested in writing something like this, you'll want to look at the Discrete Fourier Transform. This turns a time-series sample into a collection of frequencies. But this leaves you with no information about when the various frequencies occur, so you must do a windowed Fourier Transform, with windows of whatever time resolution you want. Increasing the time resolution decreases the frequency resolution, however. The simplest thing to do is to figure out the largest frequency component in any window and call that the frequency. But real music (a) has chords and (b) has overtones and undertones. In addition, singing often has "tremolo", where the singer varies the actual pitch around the theoretical pitch the music is marked at. A: Praat will at least do automatic pitch estimation of complex sounds, though I don't know if it can mark the standard notes as you requested. Rob
{ "language": "en", "url": "https://stackoverflow.com/questions/159251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I perform an action after an UpdatePanel updates? When I have a regular textbox in an UpdatePanel (not an ASP.NET control) with some JavaScript events on it, my events are gone after the UpdatePanel updates. Is there a way to re-attach my events after the update? (Preferably without putting my events inline.) A: You can use the endRequest event of the PageRequestManager class. A: You can have a setInterval() loop on document load that searches for the element in the update panel and, if it doesn't have the events, re-attaches them. A: The events are gone because your textbox is a new element in the DOM (after the UpdatePanel refresh). As korchev said, use the endRequest event to re-attach the event handlers.
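A sketch of the endRequest approach (the element id and handler are made-up; this only runs on a page where the ASP.NET AJAX ScriptManager has loaded the client-side Sys.* library):

```javascript
// Re-attach plain-JS events after every partial postback. The DOM
// element inside the UpdatePanel is replaced on each async postback,
// so handlers must be attached again.
function attachTextBoxEvents() {
    var box = document.getElementById("MyTextBox"); // hypothetical control
    if (box) {
        box.onkeyup = function () { /* ... your handler ... */ };
    }
}

// Run once for the initial full page load...
attachTextBoxEvents();

// ...and again after every UpdatePanel refresh.
Sys.WebForms.PageRequestManager.getInstance()
    .add_endRequest(attachTextBoxEvents);
```

The endRequest event fires after every asynchronous postback completes, whether it succeeded or failed, which is exactly the moment the new DOM elements exist.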
{ "language": "en", "url": "https://stackoverflow.com/questions/159252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the ideal data type to use when storing latitude / longitude in a MySQL database? Bearing in mind that I'll be performing calculations on lat/long pairs, what datatype is best suited for use with a MySQL database? A: We store latitude/longitude x 1,000,000 in our Oracle database as NUMBERs to avoid round-off errors with doubles. Given that latitude/longitude to the 6th decimal place gives about 10 cm accuracy, that was all we needed. Many other databases also store lat/long to the 6th decimal place. A: MySQL's Spatial Extensions are the best option because you have the full list of spatial operators and indices at your disposal. A spatial index will allow you to perform distance-based calculations very quickly. Please keep in mind that as of 6.0, the Spatial Extension is still incomplete. I am not putting down MySQL Spatial, only letting you know of the pitfalls before you get too far along on this. If you are dealing strictly with points and only the DISTANCE function, this is fine. If you need to do any calculations with Polygons, Lines, or Buffered-Points, the spatial operators do not provide exact results unless you use the "relate" operator. See the warning at the top of 21.5.6. Relationships such as contains, within, or intersects use the MBR, not the exact geometry shape (i.e. an Ellipse is treated like a Rectangle). Also, the distances in MySQL Spatial are in the same units as your first geometry. This means if you're using Decimal Degrees, then your distance measurements are in Decimal Degrees. This will make it very difficult to get exact results as you get further from the equator. A: When I did this for a navigation database built from ARINC424, I did a fair amount of testing and, looking back at the code, I used a DECIMAL(18,12) (actually a NUMERIC(18,12), because it was Firebird). Floats and doubles aren't as precise and may result in rounding errors, which can be a very bad thing. 
I can't remember if I found any real data that had problems - but I'm fairly certain that the inability to store accurately in a float or a double could cause problems. The point is that when using degrees or radians we know the range of the values - and the fractional part needs the most digits. The MySQL Spatial Extensions are a good alternative because they follow The OpenGIS Geometry Model. I didn't use them because I needed to keep my database portable. A: TL;DR Use FLOAT(8,5) if you're not working at NASA / the military and not making aircraft navigation systems. To answer your question fully, you'd need to consider several things: Format

* degrees minutes seconds: 40° 26′ 46″ N 79° 58′ 56″ W
* degrees decimal minutes: 40° 26.767′ N 79° 58.933′ W
* decimal degrees 1: 40.446° N 79.982° W
* decimal degrees 2: -32.60875, 21.27812
* Some other home-made format? No one forbids you from making your own home-centric coordinate system and storing it as heading and distance from your home. This could make sense for some specific problems you're working on.

So the first part of the answer would be - you can store the coordinates in the format your application uses, to avoid constant conversions back and forth and to make SQL queries simpler. Most probably you use Google Maps or OSM to display your data, and GMaps use the "decimal degrees 2" format. So it will be easier to store coordinates in the same format. Precision Then, you'd like to define the precision you need. Of course you can store coordinates like "-32.608697550570334,21.278081997935146", but have you ever cared about millimeters while navigating to a point? If you're not working at NASA and not doing satellite, rocket or plane trajectories, you should be fine with several meters accuracy. A commonly used format is 5 digits after the decimal point, which gives you about 50 cm accuracy. Example: there is about 1 cm distance between X,21.2780818 and X,21.2780819. 
So 7 digits after the decimal point give you 1/2 cm precision and 5 digits give you 1/2 meter precision (because the minimal distance between distinct points is about 1 m, so the rounding error cannot be more than half of it). For most civil purposes it should be enough. The degrees decimal minutes format (40° 26.767′ N 79° 58.933′ W) gives you exactly the same precision as 5 digits after the decimal point. Space-efficient storage If you've selected the decimal format, then your coordinate is a pair (-32.60875, 21.27812). Obviously, 2 x (1 bit for sign, 2 digits for degrees and 5 digits for the fractional part) will be enough. So here I'd like to support Alix Axel from the comments, saying that Google's suggestion to store it in FLOAT(10,6) is really extra, because you don't need 4 digits for the main part (since the sign is separated, latitude is limited to 90 and longitude is limited to 180). You can easily use FLOAT(8,5) for 1/2 m precision or FLOAT(9,6) for 50/2 cm precision. Or you can even store lat and long in separate types, because FLOAT(7,5) is enough for lat. See the MySQL float types reference. Any of them will be like a normal FLOAT and equal to 4 bytes anyway. Usually space is not an issue nowadays, but if you want to really optimize the storage for some reason (Disclaimer: don't do pre-optimization), you may compress lat (no more than 91 000 values + sign) + long (no more than 181 000 values + sign) to 21 bits, which is significantly less than 2 x FLOAT (8 bytes == 64 bits). A: In a completely different and simpler perspective:

* if you are relying on Google for showing your maps, markers, polygons, whatever, then let the calculations be done by Google! 
* you save resources on your server and you simply store the latitude and longitude together as a single string (VARCHAR), e.g.: "-0000.0000001,-0000.000000000000001" (35 length; if a number has more than 7 decimal digits then it gets rounded);
* if Google returns more than 7 decimal digits per number, you can get that data stored in your string anyway, just in case you want to detect some fleas or microbes in the future;
* you can use their distance matrix or their geometry library for calculating distances or detecting points in certain areas with calls as simple as this: google.maps.geometry.poly.containsLocation(latLng, bermudaTrianglePolygon)
* there are plenty of "server-side" APIs you can use (in Python, Ruby on Rails, PHP, CodeIgniter, Laravel, Yii, Zend Framework, etc.) that use the Google Maps API.

This way you don't need to worry about indexing numbers and all the other problems associated with data types that may screw up your coordinates. A: While it isn't optimal for all operations, if you are making map tiles or working with large numbers of markers (dots) with only one projection (e.g. Mercator, like Google Maps and many other slippy-map frameworks expect), I have found what I call the "Vast Coordinate System" to be really, really handy. Basically, you store x and y pixel coordinates at some way-zoomed-in level -- I use zoom level 23. This has several benefits:

* You do the expensive lat/lng-to-Mercator-pixel transformation once, instead of every time you handle the point
* Getting the tile coordinate from a record given a zoom level takes one right shift.
* Getting the pixel coordinate from a record takes one right shift and one bitwise AND.
* The shifts are so lightweight that it is practical to do them in SQL, which means you can do a DISTINCT to return only one record per pixel location, which will cut down on the number of records returned by the backend, which means less processing on the front end. 
I talked about all this in a recent blog post: http://blog.webfoot.com/2013/03/12/optimizing-map-tile-generation/
A: * Latitudes range from -90 to +90 (degrees), so DECIMAL(10, 8) is ok for that
* longitudes range from -180 to +180 (degrees), so you need DECIMAL(11, 8).
Note: The first number is the total number of digits stored, and the second is the number after the decimal point.
In short:
lat DECIMAL(10, 8) NOT NULL,
lng DECIMAL(11, 8) NOT NULL
A: Depends on the precision that you require.
Datatype               Bytes  resolution
------------------     -----  --------------------------------
Deg*100 (SMALLINT)     4      1570 m   1.0 mi   Cities
DECIMAL(4,2)/(5,2)     5      1570 m   1.0 mi   Cities
SMALLINT scaled        4      682 m    0.4 mi   Cities
Deg*10000 (MEDIUMINT)  6      16 m     52 ft    Houses/Businesses
DECIMAL(6,4)/(7,4)     7      16 m     52 ft    Houses/Businesses
MEDIUMINT scaled       6      2.7 m    8.8 ft
FLOAT                  8      1.7 m    5.6 ft
DECIMAL(8,6)/(9,6)     9      16cm     1/2 ft   Friends in a mall
Deg*10000000 (INT)     8      16mm     5/8 in   Marbles
DOUBLE                 16     3.5nm    ...      Fleas on a dog
From: http://mysql.rjweb.org/doc.php/latlng
To summarise:
* The most precise available option is DOUBLE.
* The most commonly seen type used is DECIMAL(8,6)/(9,6).
As of MySQL 5.7, consider using Spatial Data Types (SDT), specifically POINT for storing a single coordinate. Prior to 5.7, SDT does not support indexes (with the exception of 5.6 when the table type is MyISAM).
Note:
* When using the POINT class, the order of the arguments for storing coordinates must be POINT(latitude, longitude).
* There is a special syntax for creating a spatial index.
* The biggest benefit of using SDT is that you have access to Spatial Analysis Functions, e.g. calculating the distance between two points (ST_Distance) and determining whether one point is contained within another area (ST_Contains).
A: The spatial functions in PostGIS are much more functional (i.e. not constrained to BBOX operations) than those in the MySQL spatial functions.
Check it out: link text
A: Depending on your application, I suggest using FLOAT(9,6). Spatial keys will give you more features, but in my production benchmarks the floats are much faster than the spatial keys (0,01 VS 0,001 in AVG).
A: MySQL uses double for all floats ... So use type double. Using float will lead to unpredictable rounded values in most situations.
A: Based on this wiki article http://en.wikipedia.org/wiki/Decimal_degrees#Accuracy the appropriate data type in MySQL is Decimal(9,6) for storing the longitude and latitude in separate fields.
A: Use DECIMAL(8,6) for latitude (90 to -90 degrees) and DECIMAL(9,6) for longitude (180 to -180 degrees). 6 decimal places is fine for most applications. Both should be "signed" to allow for negative values.
A: No need to go far; according to Google Maps, the best is FLOAT(10,6) for lat and lng.
A: Basically it depends on the precision you need for your locations. Using DOUBLE you'll have 3.5nm precision. DECIMAL(8,6)/(9,6) goes down to 16cm. FLOAT is 1.7m...
This very interesting table has a more complete list: http://mysql.rjweb.org/doc.php/latlng :
Datatype               Bytes  Resolution
Deg*100 (SMALLINT)     4      1570 m   1.0 mi   Cities
DECIMAL(4,2)/(5,2)     5      1570 m   1.0 mi   Cities
SMALLINT scaled        4      682 m    0.4 mi   Cities
Deg*10000 (MEDIUMINT)  6      16 m     52 ft    Houses/Businesses
DECIMAL(6,4)/(7,4)     7      16 m     52 ft    Houses/Businesses
MEDIUMINT scaled       6      2.7 m    8.8 ft
FLOAT                  8      1.7 m    5.6 ft
DECIMAL(8,6)/(9,6)     9      16cm     1/2 ft   Friends in a mall
Deg*10000000 (INT)     8      16mm     5/8 in   Marbles
DOUBLE                 16     3.5nm    ...      Fleas on a dog
A: Use MySQL's spatial extensions with GIS.
A: Google provides a start-to-finish PHP/MySQL solution for an example "Store Locator" application with Google Maps. In this example, they store the lat/lng values as "Float" with a length of "10,6". http://code.google.com/apis/maps/articles/phpsqlsearch.html
A: I suggest you use the Float datatype for SQL Server.
A: The ideal datatype for storing Lat Long values is decimal(9,6). This is at approximately 10cm precision, whilst only using 5 bytes of storage. e.g. CAST(123.456789 as decimal(9,6))
A: GeoLocationCoordinates returns a double data type representing the position's latitude and longitude in decimal degrees. You can try using double.
A: A FLOAT should give you all of the precision you need, and be better for comparison functions than storing each co-ordinate as a string or the like. If your MySQL version is earlier than 5.0.3, you may need to take heed of certain floating point comparison errors, however.
Prior to MySQL 5.0.3, DECIMAL columns store values with exact precision because they are represented as strings, but calculations on DECIMAL values are done using floating-point operations. As of 5.0.3, MySQL performs DECIMAL operations with a precision of 64 decimal digits, which should solve most common inaccuracy problems when it comes to DECIMAL columns.
A: Lat/Long calculations require precision, so use some type of decimal type and make the precision at least 2 higher than the number you will store in order to perform math calculations. I don't know about the MySQL datatypes, but in SQL Server people often use float or real instead of decimal and get into trouble, because these are approximate numbers, not exact ones. So just make sure the data type you use is a true decimal type and not a floating decimal type and you should be fine.
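To make the precision figures above concrete, here is a minimal sketch (class and method names are made up for illustration). It uses the approximate length of one degree of latitude, about 111 320 m; note the table's values also fold in longitude, which shrinks by cos(latitude), so this is a latitude-only, equator-ish estimate:

```java
public class CoordinatePrecision {
    // Approximate length of one degree of latitude, in meters.
    static final double METERS_PER_DEGREE = 111_320.0;

    // Worst-case cell size you get by keeping `decimals` digits
    // after the decimal point of a coordinate in degrees.
    static double resolutionMeters(int decimals) {
        return METERS_PER_DEGREE / Math.pow(10, decimals);
    }

    public static void main(String[] args) {
        for (int d = 4; d <= 8; d++) {
            System.out.printf("%d decimals -> %.5f m%n", d, resolutionMeters(d));
        }
        // 6 decimals comes out around 0.11 m, which is why
        // DECIMAL(8,6)/(9,6) lands in the "friends in a mall" row.
    }
}
```

Running it shows why 6 decimal places is "fine for most applications": each extra digit buys a factor of 10 in resolution, and past 7 digits you are below GPS accuracy anyway.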
{ "language": "en", "url": "https://stackoverflow.com/questions/159255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "488" }
Q: Cross Browser Flash Detection in Javascript Does anyone have an example of a script that can work reliably well across IE/Firefox to detect if the browser is capable of displaying embedded Flash content? I say reliably because I know it's not possible 100% of the time.
A: SWFObject is very reliable. I have used it without trouble for quite a while.
A: Carl Yestrau's JavaScript Flash Detection Library, here: http://www.featureblend.com/javascript-flash-detection-library.html ... may be what you're looking for.
A: Perhaps Adobe's Flash Player detection kit could be helpful here? http://www.adobe.com/products/flashplayer/download/detection_kit/
A: Detecting and embedding Flash within a web document is a surprisingly difficult task. I was very disappointed with the quality and non-standards-compliant markup generated from both SWFObject and Adobe's solutions. Additionally, my testing found Adobe's auto updater to be inconsistent and unreliable. The JavaScript Flash Detection Library (Flash Detect) and JavaScript Flash HTML Generator Library (Flash TML) are a legible, maintainable and standards-compliant markup solution. -"Luke read the source!"
A: Code for a one-liner isFlashExists variable:
<script type='text/javascript' src='//ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js'></script>
<script type='text/javascript'>
var isFlashExists = swfobject.hasFlashPlayerVersion('1') ? true : false;
if (isFlashExists) {
    alert('flash exists');
} else {
    alert('NO flash');
}
</script>
Note that there is an alternative like this: swfobject.getFlashPlayerVersion();
A: I know this is an old post, but I've been looking for a while and didn't find anything. I've implemented the JavaScript Flash Detection Library. It works very well and it is documented for quick use. It literally took me 2 minutes.
Here is the code I wrote in the header:
<script src="Scripts/flash_detect.js"></script>
<script type="text/javascript">
  if (!FlashDetect.installed) {
    alert("Flash is required to enjoy this site.");
  } else {
    alert("Flash is installed on your Web browser.");
  }
</script>
A: You could use closure compiler to generate a small, cross-browser flash detection:
// ==ClosureCompiler==
// @compilation_level ADVANCED_OPTIMIZATIONS
// @output_file_name default.js
// @formatting pretty_print
// @use_closure_library true
// ==/ClosureCompiler==

// ADD YOUR CODE HERE
goog.require('goog.userAgent.flash');
if (goog.userAgent.flash.HAS_FLASH) {
  alert('flash version: ' + goog.userAgent.flash.VERSION);
} else {
  alert('no flash found');
}
which results in the following "compiled" code:
var a = !1, b = "";
function c(d) {
  d = d.match(/[\d]+/g);
  d.length = 3;
  return d.join(".");
}
if (navigator.plugins && navigator.plugins.length) {
  var e = navigator.plugins["Shockwave Flash"];
  e && (a = !0, e.description && (b = c(e.description)));
  navigator.plugins["Shockwave Flash 2.0"] && (a = !0, b = "2.0.0.11");
} else {
  if (navigator.mimeTypes && navigator.mimeTypes.length) {
    var f = navigator.mimeTypes["application/x-shockwave-flash"];
    (a = f && f.enabledPlugin) && (b = c(f.enabledPlugin.description));
  } else {
    try {
      var g = new ActiveXObject("ShockwaveFlash.ShockwaveFlash.7"), a = !0, b = c(g.GetVariable("$version"));
    } catch (h) {
      try {
        g = new ActiveXObject("ShockwaveFlash.ShockwaveFlash.6"), a = !0, b = "6.0.21";
      } catch (i) {
        try {
          g = new ActiveXObject("ShockwaveFlash.ShockwaveFlash"), a = !0, b = c(g.GetVariable("$version"));
        } catch (j) {}
      }
    }
  }
}
var k = b;
a ? alert("flash version: " + k) : alert("no flash found");
A: View the source at http://whatsmy.browsersize.com (lines 14-120). Here is the abstracted cross browser code on jsbin for flash detection only, works on: FF/IE/Safari/Opera/Chrome.
A: what about:
var hasFlash = function() {
  var flash = false;
  try {
    if (new ActiveXObject('ShockwaveFlash.ShockwaveFlash')) {
      flash = true;
    }
  } catch (e) {
    if (navigator.mimeTypes['application/x-shockwave-flash'] !== undefined) {
      flash = true;
    }
  }
  return flash;
};
A: If you are interested in a pure Javascript solution, here is the one that I copied from Brett:
function detectflash() {
  if (navigator.plugins != null && navigator.plugins.length > 0) {
    return navigator.plugins["Shockwave Flash"] && true;
  }
  if (~navigator.userAgent.toLowerCase().indexOf("webtv")) {
    return true;
  }
  if (~navigator.appVersion.indexOf("MSIE") && !~navigator.userAgent.indexOf("Opera")) {
    try {
      return new ActiveXObject("ShockwaveFlash.ShockwaveFlash") && true;
    } catch (e) {}
  }
  return false;
}
A: Minimum version I've ever used (doesn't check the version, just the Flash plugin):
var hasFlash = function() {
  return (typeof navigator.plugins == "undefined" || navigator.plugins.length == 0) ? !!(new ActiveXObject("ShockwaveFlash.ShockwaveFlash")) : navigator.plugins["Shockwave Flash"];
};
A: I agree with Max Stewart. SWFObject is the way to go. I'd like to supplement his answer with a code example. This ought to get you started:
Make sure you have included the swfobject.js file (get it here):
<script type="text/javascript" src="swfobject.js"></script>
Then use it like so:
if (swfobject.hasFlashPlayerVersion("9.0.115")) {
  alert("You have the minimum required flash version (or newer)");
} else {
  alert("You do not have the minimum required flash version");
}
Replace "9.0.115" with whatever minimum flash version you need. I chose 9.0.115 as an example because that's the version that added h.264 support.
If the visitor does not have flash, it will report a flash version of "0.0.0", so if you just want to know if they have flash at all, use:
if (swfobject.hasFlashPlayerVersion("1")) {
  alert("You have flash!");
} else {
  alert("You do not have flash :-(");
}
A: If you just wanted to check whether flash is enabled, this should be enough.
function testFlash() {
  var support = false;
  // IE only
  if ("ActiveXObject" in window) {
    try {
      support = !!(new ActiveXObject("ShockwaveFlash.ShockwaveFlash"));
    } catch (e) {
      support = false;
    }
  }
  // W3C, better support in legacy browsers
  else {
    support = !!navigator.mimeTypes['application/x-shockwave-flash'];
  }
  return support;
}
Note: avoid checking enabledPlugin; some mobile browsers have a tap-to-enable flash plugin, which will trigger a false negative.
A: To create a standards-compliant Flash object (with JavaScript, however), I recommend you take a look at Unobtrusive Flash Objects (UFO) http://www.bobbyvandersluis.com/ufo/index.html
A: Have created a small .swf which redirects. If the browser is flash-enabled it will redirect.
package com.play48.modules.standalone.util;
import flash.net.URLRequest;
class Redirect {
  static function main() {
    flash.Lib.getURL(new URLRequest("http://play48.com/flash.html"), "_self");
  }
}
A: Using the Google Closure compiler goog.require('goog.userAgent.flash') library I created these 2 functions.
boolean hasFlash()
Returns whether the browser has flash.
function hasFlash(){ var b = !1; function c(a) {if (a = a.match(/[\d]+/g)) {a.length = 3;}} (function() { if (navigator.plugins && navigator.plugins.length) { var a = navigator.plugins["Shockwave Flash"]; if (a && (b = !0, a.description)) {c(a.description);return;} if (navigator.plugins["Shockwave Flash 2.0"]) {b = !0;return;} } if (navigator.mimeTypes && navigator.mimeTypes.length && (a = navigator.mimeTypes["application/x-shockwave-flash"], b = !(!a || !a.enabledPlugin))) {c(a.enabledPlugin.description);return;} if ("undefined" != typeof ActiveXObject) { try { var d = new ActiveXObject("ShockwaveFlash.ShockwaveFlash.7");b = !0;c(d.GetVariable("$version"));return; } catch (e) {} try { d = new ActiveXObject("ShockwaveFlash.ShockwaveFlash.6");b = !0; return; } catch (e) {} try { d = new ActiveXObject("ShockwaveFlash.ShockwaveFlash"), b = !0, c(d.GetVariable("$version")); } catch (e) {} } })(); return b; } boolean isFlashVersion(version) Returns if the flash version is greater than provided version function isFlashVersion(version) { var e = String.prototype.trim ? function(a) {return a.trim()} : function(a) {return /^[\s\xa0]*([\s\S]*?)[\s\xa0]*$/.exec(a)[1]}; function f(a, b) {return a < b ? -1 : a > b ? 
1 : 0}; var h = !1,l = ""; function m(a) {a = a.match(/[\d]+/g);if (!a) {return ""}a.length = 3;return a.join(".")} (function() { if (navigator.plugins && navigator.plugins.length) { var a = navigator.plugins["Shockwave Flash"]; if (a && (h = !0, a.description)) {l = m(a.description);return} if (navigator.plugins["Shockwave Flash 2.0"]) {h = !0;l = "2.0.0.11";return} } if (navigator.mimeTypes && navigator.mimeTypes.length && (a = navigator.mimeTypes["application/x-shockwave-flash"], h = !(!a || !a.enabledPlugin))) {l = m(a.enabledPlugin.description);return} if ("undefined" != typeof ActiveXObject) { try { var b = new ActiveXObject("ShockwaveFlash.ShockwaveFlash.7");h = !0;l = m(b.GetVariable("$version"));return } catch (g) {} try { b = new ActiveXObject("ShockwaveFlash.ShockwaveFlash.6");h = !0;l = "6.0.21";return } catch (g) {} try { b = new ActiveXObject("ShockwaveFlash.ShockwaveFlash"), h = !0, l = m(b.GetVariable("$version")) } catch (g) {} } })(); var n = l; return (function(a) { var b = 0,g = e(String(n)).split("."); a = e(String(a)).split("."); for (var p = Math.max(g.length, a.length), k = 0; 0 == b && k < p; k++) { var c = g[k] || "",d = a[k] || ""; do { c = /(\d*)(\D*)(.*)/.exec(c) || ["", "", "", ""];d = /(\d*)(\D*)(.*)/.exec(d) || ["", "", "", ""]; if (0 == c[0].length && 0 == d[0].length) {break} b = f(0 == c[1].length ? 0 : parseInt(c[1], 10), 0 == d[1].length ? 0 : parseInt(d[1], 10)) || f(0 == c[2].length, 0 == d[2].length) || f(c[2], d[2]);c = c[3];d = d[3] } while (0 == b); } return 0 <= b })(version) }
{ "language": "en", "url": "https://stackoverflow.com/questions/159261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97" }
Q: Seeking a way to have a "Hover button" to expand a section I have a flow panel that I'm adding extra items to at runtime, based on whether they have chosen to show all the items. That all works fine; the expansion is controlled by a toolbar button. The trouble is we'd like the user to be able to move his mouse over the "+" sign to expand the section. Initially I looked at TSpeedButton (OnMouseEnter) but even when it's "Flat", the focus rectangle still shows and so the glyph isn't centered. The main problem with this solution is its appearance. Then I looked at making a descendant of TImage. That's a bit "unconventional" but it'd work. In OnMouseEnter or OnClick, it'd toggle an internal boolean "Expanded" flag and then load the appropriate picture from a resource. I have a dislike for unconventional solutions like that. I need to add it to a few different screens so it's probably prudent for me to have/build a component for this. I have JVCL but I don't see anything suitable offhand. Thank you for your comments/help!
A: I always liked the approach used by the ModelMaker Code Explorer. For example, when you're adding a new method, some rarely-used stuff is displayed collapsed ('Options and Directives' in the image below). (source: 17slon.com) When you hover over the text, you notice that it's actually a flat button. (Except that it's not - I believe Gerrit does some custom painting magic here). (source: 17slon.com) When you click this button, a panel appears. The button is still there, but with a new image. You can click it to close the panel. (source: 17slon.com) The state of this toggle button is preserved between sessions. IOW, even if you restart Delphi, the next time you invoke 'Add Method', the 'Options and Directives' panel will appear exactly as you left it the last time.
A: I have a dislike for unconventional solutions like that.
Over the past few years, I have grown a bit suspicious of unconventional UI solutions — which is what you seem to be creating here. Why not just use a button that the user actually has to click? That seems to be much more common in the software I use, be it MS Office or programming utilities. Also, I'd make the button somewhat larger: in the screenshot, it really seems like a tiny little thing you have to target with your mouse cursor. Oh well, and if I'm bugging you with advice you haven't asked for anyway, why not give it ">>" as a caption instead of "+"? And if you'd give it a textual caption with a mnemonic as well, it'd actually be keyboard accessible. All this should make your UI better and more intuitive, I guess. I do apologize for not answering your question, but I hope you'll spend 2 minutes thinking about whether your users would actually prefer this solution :-) Good luck!
A: Actually, I think that using a TImage in this situation is pretty conventional. I have seen many people suggest using the TImage when either the TButton or any of its associates did not have the right amount of control for whatever the developer was trying to do. Have you tried a TBitBtn? I think when you get rid of the text it centers whatever image you have associated with it. I just checked in Delphi 6, all I have installed on this machine, and it had the MouseMove event.
{ "language": "en", "url": "https://stackoverflow.com/questions/159262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In Flash, how would I run an e4x statement when that statement is stored in a String? So I have something like this: var xmlStatement:String = "xmlObject.node[3].@thisValue"; What mystery function do I have to use so that I can execute xmlStatement and get thisValue from that xmlObject? Like.... var attribute:String = mysteryFunction(xmlStatement); P.S. I know eval() works for actionscript2, I need the as3 solution. :) A: Unfortunately this is not possible in ActionScript 3. This however might be a solution: http://blog.betabong.com/2008/09/23/e4x-string-parser/ A: For your example it would be: var attribute : String = String( E4X.evaluate( XMLList(xmlobject) , 'node[3].@thisValue' ) );
{ "language": "en", "url": "https://stackoverflow.com/questions/159266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Do OCUnit and OCMock work on the iPhone SDK? I simply could not make it work, and I am wondering if I am wasting my time, or if I am simply stupid! Sorry, I don't have the exact error right now. But I just want to know if it works or not!
A: Colin Barrett has a blog post about OCMock and the iPhone.
A: Not so sure about OCMock, but OCUnit support is now included in the iPhone 2.2 SDK. You can download an example application from Stanford iPhone Application Programming CS193P Lecture 19.
A: Sen:te (the creator of the framework) explains how to use OCUnit with an iPhone project: http://www.sente.ch/s/?p=535&lang=en.
A: I created some OCUnit tests for an iPhone app, but in order to run the tests I had to compile for Mac OS X, not iPhone OS, and switching back and forth was a pain. The Google framework is cleaner; they can run the tests in the simulator or on the device.
A: I got hung up on the same thing. I finally found the answer on Mitch's World, then reposted the solution on my site. The quick fix is to add the OCMock.framework folder to /Library/Frameworks and reference it from there. For whatever reason Xcode doesn't want to add folders external to its natural framework search path. Until I find out more, this is the best we can do. -Cliff
A: At the time of writing OCUnit "just works" on the iPhone. Apple are shipping templates that work out of the box.
A: Take a look here. You'll find an Xcode template you can use that has OCUnit and OCMock already set up for you.
A: I don't know whether OCUnit works with the iPhone, but there is an iPhoneUnitTesting framework available from Google Code.
A: You might be interested in this project, which integrates OCMock & gh-unit (based on the Google toolkit): Unit test
{ "language": "en", "url": "https://stackoverflow.com/questions/159280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Java RMI: Client security policy grant { permission java.security.AllPermission; }; This works. grant file:///- { permission java.security.AllPermission; }; This does not work. Could someone please explain to me why? A: The syntax should be: grant codeBase "file:///-" { ... }; See the docs. Note the semicolon. Be very careful assigning permissions to code. Are you sure the codebase should be a file URL (normal for development, not for production...). A: The directive "grant { permission }" means grant the permission to all code no matter where it came from. In other words, when there is no codebase specified, the code could be loaded from the network or the file system. The second directive (if it worked) would only apply to the local file system. It would be specifying all files (recursively) on the local file system. I'm not sure that "file:///" is a valid URL by itself. I know that file:///tmp/- works.
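For context, a policy file like the ones above only takes effect once the client points the JVM at it and a security manager is installed. A minimal sketch (the file name client.policy is hypothetical; substitute your own path):

```java
public class RmiClientPolicySetup {
    public static void main(String[] args) {
        // Tell the JVM which policy file to enforce. "client.policy"
        // is a placeholder name, not a file this snippet creates.
        System.setProperty("java.security.policy", "client.policy");

        // With the property set, installing a security manager is what
        // makes the grant entries in the policy file take effect:
        //   System.setSecurityManager(new SecurityManager());
        // (left commented out here because recent JDKs disallow
        // installing a security manager by default)

        System.out.println(System.getProperty("java.security.policy"));
    }
}
```

Equivalently, many RMI examples pass -Djava.security.policy=client.policy on the command line instead of calling System.setProperty in code.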
{ "language": "en", "url": "https://stackoverflow.com/questions/159282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I create a custom directive for Apache Velocity I am using Apache's Velocity templating engine, and I would like to create a custom Directive. That is, I want to be able to write "#doMyThing()" and have it invoke some Java code I wrote in order to generate the text.
I know that I can register a custom directive by adding a line
userdirective=my.package.here.MyDirectiveName
to my velocity.properties file. And I know that I can write such a class by extending the Directive class. What I don't know is how to extend the Directive class -- some sort of documentation for the author of a new Directive. For instance, I'd like to know whether my getType() method should return "BLOCK" or "LINE", and I'd like to know what my setLocation() method should do. Is there any documentation out there that is better than just "Use the source, Luke"?
A: Also was trying to come up with a custom directive. Couldn't find any documentation at all, so I looked at some user-created directives: IfNullDirective (nice and easy one), MergeDirective, as well as Velocity's built-in directives.
Here is my simple block directive that returns compressed content (complete project with some directive installation instructions is located here):
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.exception.TemplateInitException;
import org.apache.velocity.runtime.RuntimeServices;
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.parser.node.Node;
import org.apache.velocity.runtime.log.Log;
import com.googlecode.htmlcompressor.compressor.HtmlCompressor;

/**
 * Velocity directive that compresses HTML content within a #compressHtml ... #end block.
 */
public class HtmlCompressorDirective extends Directive {
    private static final HtmlCompressor htmlCompressor = new HtmlCompressor();
    private Log log;

    public String getName() {
        return "compressHtml";
    }

    public int getType() {
        return BLOCK;
    }

    @Override
    public void init(RuntimeServices rs, InternalContextAdapter context, Node node) throws TemplateInitException {
        super.init(rs, context, node);
        log = rs.getLog();
        // set compressor properties
        htmlCompressor.setEnabled(rs.getBoolean("userdirective.compressHtml.enabled", true));
        htmlCompressor.setRemoveComments(rs.getBoolean("userdirective.compressHtml.removeComments", true));
    }

    public boolean render(InternalContextAdapter context, Writer writer, Node node) throws IOException, ResourceNotFoundException, ParseErrorException, MethodInvocationException {
        // render content to a variable
        StringWriter content = new StringWriter();
        node.jjtGetChild(0).render(context, content);

        // compress
        try {
            writer.write(htmlCompressor.compress(content.toString()));
        } catch (Exception e) {
            writer.write(content.toString());
            String msg = "Failed to compress content: " + content.toString();
            log.error(msg, e);
            throw new RuntimeException(msg, e);
        }
        return true;
    }
}
A: On the Velocity wiki, there's a presentation and sample code from a talk I gave called "Hacking Velocity". It includes an example of a custom directive.
A: Block directives always accept a body and must end with #end when used in a template. e.g.
#foreach( $i in $foo )
  this has a body!
#end
Line directives do not have a body or an #end. e.g.
#parse( 'foo.vtl' )
You don't need to bother with setLocation() at all. The parser uses that.
Any other specifics I can help with?
Also, have you considered using a "tool" approach?
Even if you don't use VelocityTools to automatically make your tool available and whatnot, you can just create a tool class that does what you want, put it in the context and either have a method you call to generate content or else just have its toString() method generate the content. e.g. $tool.doMyThing() or just $myThing Directives are best for when you need to mess with Velocity internals (access to InternalContextAdapter or actual Nodes). A: Prior to velocity v1.6 I had a #blockset($v)#end directive to be able to deal with a multiline #set($v) but this function is now handled by the #define directive. Custom block directives are a pain with modern IDEs because they don't parse the structure correctly, assuming your #end associated with #userBlockDirective is an extra and paints the whole file RED. They should be avoided if possible. I copied something similar from the velocity source code and created a "blockset" (multiline) directive. import org.apache.velocity.runtime.directive.Directive; import org.apache.velocity.runtime.RuntimeServices; import org.apache.velocity.runtime.parser.node.Node; import org.apache.velocity.context.InternalContextAdapter; import org.apache.velocity.exception.MethodInvocationException; import org.apache.velocity.exception.ResourceNotFoundException; import org.apache.velocity.exception.ParseErrorException; import org.apache.velocity.exception.TemplateInitException; import java.io.Writer; import java.io.IOException; import java.io.StringWriter; public class BlockSetDirective extends Directive { private String blockKey; /** * Return name of this directive. */ public String getName() { return "blockset"; } /** * Return type of this directive. */ public int getType() { return BLOCK; } /** * simple init - get the blockKey */ public void init( RuntimeServices rs, InternalContextAdapter context, Node node ) throws TemplateInitException { super.init( rs, context, node ); /* * first token is the name of the block. 
I don't even check the format, * just assume it looks like this: $block_name. Should check if it has * a '$' or not like macros. */ blockKey = node.jjtGetChild( 0 ).getFirstToken().image.substring( 1 ); } /** * Renders node to internal string writer and stores in the context at the * specified context variable */ public boolean render( InternalContextAdapter context, Writer writer, Node node ) throws IOException, MethodInvocationException, ResourceNotFoundException, ParseErrorException { StringWriter sw = new StringWriter(256); boolean b = node.jjtGetChild( 1 ).render( context, sw ); context.put( blockKey, sw.toString() ); return b; } }
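The "tool" approach mentioned above can be sketched in a few lines. The class and method names here are made up for illustration; in a real Velocity setup you would call context.put("tool", new MyThingTool()) on a VelocityContext, which behaves like the plain Map used below (the Map stand-in just lets the snippet run without the Velocity jar):

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical "tool": a plain Java object whose methods templates can
// call as $tool.doMyThing(), with no Directive subclass and no
// velocity.properties registration required.
class MyThingTool {
    public String doMyThing() {
        return "generated text";
    }

    // Overriding toString() lets a template reference the tool
    // itself as $myThing and still get generated output.
    @Override
    public String toString() {
        return doMyThing();
    }
}

public class ToolApproachSketch {
    public static void main(String[] args) {
        // Stand-in for VelocityContext: both map names to objects.
        Map<String, Object> context = new HashMap<>();
        context.put("tool", new MyThingTool());

        // Velocity would evaluate $tool.doMyThing() while merging a
        // template; here we call through the context to show the flow.
        MyThingTool tool = (MyThingTool) context.get("tool");
        System.out.println(tool.doMyThing());
    }
}
```

This avoids the IDE-highlighting problems described above, since templates see only ordinary $reference.method() calls rather than a custom block that needs its own #end.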
{ "language": "en", "url": "https://stackoverflow.com/questions/159292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I prevent a base constructor from being called by an inheritor in C#? I've got a (poorly written) base class that I want to wrap in a proxy object. The base class resembles the following:
public class BaseClass : SomeOtherBase
{
    public BaseClass() {}
    public BaseClass(int someValue) {}
    //...more code, not important here
}
and my proxy resembles:
public class BaseClassProxy : BaseClass
{
    public BaseClassProxy(bool fakeOut) {}
}
Without the "fakeOut" constructor, the base constructor is expected to be called. However, with it, I expected it to not be called. Either way, I either need a way to not call any base class constructors, or some other way to effectively proxy this (evil) class.
A: There is a way to create an object without calling any instance constructors. Before you proceed, be very sure you want to do it this way. 99% of the time this is the wrong solution. This is how you do it:
FormatterServices.GetUninitializedObject(typeof(MyClass));
Call it in place of the object's constructor. It will create and return you an instance without calling any constructors or field initializers. When you deserialize an object in WCF, it uses this method to create the object. When this happens, constructors and even field initializers are not run.
A: At least 1 ctor has to be called. The only way around it I see is containment: have the class inside or referencing the other class.
A: I don't believe you can get around calling the constructor. But you could do something like this:
public class BaseClass : SomeOtherBase
{
    public BaseClass() {}
    protected virtual void Setup() { }
}
public class BaseClassProxy : BaseClass
{
    bool _fakeOut;
    protected BaseClassProxy(bool fakeOut)
    {
        _fakeOut = fakeOut;
        Setup();
    }
    protected override void Setup()
    {
        if (_fakeOut)
        {
            base.Setup();
        }
        //Your other constructor code
    }
}
A: If what you want is to not call either of the two base class constructors, this cannot be done. C# class constructors must call base class constructors.
If you don't call one explicitly, base() is implied. In your example, if you do not specify which base class constructor to call, it is the same as:
public class BaseClassProxy : BaseClass
{
    public BaseClassProxy() : base() { }
}
If you prefer to use the other base class constructor, you can use:
public class BaseClassProxy : BaseClass
{
    public BaseClassProxy() : base(someIntValue) { }
}
Either way, one of the two will be called, explicitly or implicitly.
A: If you do not explicitly call any constructor in the base class, the parameterless constructor will be called implicitly. There's no way around it; you cannot instantiate a class without a constructor being called.
A: I am afraid that not calling the base constructor isn't an option.
A: When you create a BaseClassProxy object it NEEDS to create an instance of its base class, so you need to call the base class constructor. What you can do is choose which one to call, like:
public BaseClassProxy(bool fakeOut) : base(10) {}
to call the second constructor instead of the first one.
A: I ended up doing something like this:
public class BaseClassProxy : BaseClass
{
    public BaseClass BaseClass { get; private set; }
    public virtual int MethodINeedToOverride() {}
    public virtual string PropertyINeedToOverride { get; protected set; }
}
This got me around some of the bad practices of the base class.
A: Constructors are public by nature. Don't use the constructor directly; make it private and provide another method for construction. You would then create an instance with no parameters and call that function to construct your object instance.
A: All right, here is an ugly solution to the problem of one class inheriting the constructors of another class that I didn't want to allow some of them to work. I was hoping to avoid using this in my class, but here it is. Here is my class constructor:
public MyClass()
{
    throw new Exception("Error: Must call constructor with parameters.");
}
OK, now you were warned that it was ugly. No complaints please!
I wanted to force at least the minimal parameters from my main constructor without it allowing the inherited base constructor with no parameters. I also believe that if you create a constructor and do not put : base() after it, it will not call the base class constructor. And if you create constructors for all of the ones in the base class and provide the same exact parameters for them in the main class, it will not pass through. But this can be tedious if you have a lot of constructors in the base class! A: It is possible to create an object without calling the parameterless constructor (see the answer above). But I use code like this to create a base class and an inherited class, in which I can choose whether to execute the base class's init:

public class MyClass_Base
{
    public MyClass_Base()
    {
        /// Don't call the InitClass() when the object is inherited
        /// !!! CAUTION: The inherited constructor must call InitClass() itself when init is needed !!!
        if (this.GetType().IsSubclassOf(typeof(MyClass_Base)) == false)
        {
            this.InitClass();
        }
    }

    protected void InitClass()
    {
        // The init stuff
    }
}

public class MyClass : MyClass_Base
{
    public MyClass(bool callBaseClassInit)
    {
        if (callBaseClassInit == true)
            base.InitClass();
    }
}

A: Here is my solution to the problem:

using System;

public class Program
{
    public static void Main()
    {
        Console.WriteLine(new Child().Test);
    }

    public class Child : Parent
    {
        public Child() : base(false)
        {
            //No Parent Constructor called
        }
    }

    public class Parent
    {
        public int Test { get; set; }

        public Parent()
        {
            Test = 5;
        }

        public Parent(bool NoBase)
        {
            //Don't do anything
        }
    }
}

A simple, elegant solution. You can change it according to your need. A: Another simple solution from me:

class parent
{
    public parent()
    {
        //code for all children
        if (this.GetType() == typeof(child1))
        {
            //code only for objects of class "child1"
        }
        else
        {
            //code for objects of other child classes
        }
    }
}

class child1 : parent
{
    public child1() {}
}

// class child2 : parent ...
child3 : parent ... etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/159296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: How do you compile static pthread-win32 lib for x64? It looks like some work has been done to make pthread-win32 work with x64, but there are no build instructions. I have tried simply building with the Visual Studio x64 Cross Tools Command Prompt, but when I try to link to the lib from an x64 application, it can't see any of the function exports. It seems like it is still compiling the lib as x86 or something. I've even tried adding /MACHINE to the makefile in the appropriate places, but it doesn't help. Has anyone gotten this to work? A: You can use vcpkg here, which is the Windows package manager for C++. It supports building pthreads as well as other open source libraries. I wanted to use a static pthread library. When I downloaded pthreads I got the DLL (pthread.dll) and the import lib (pthread.lib), i.e., I could not use pthread.lib alone; I had to ship pthread.dll as well. So using vcpkg I built the static lib, which I can use without any DLL dependencies. Using vcpkg you can build both static and dynamic libraries. You can use the steps below; I have added the steps for all DLL (x86|x64) and LIB (x86|x64) cases. Build whichever you need.
Clone vcpkg from its git repository (vcpkg git repo). From the directory where you have cloned vcpkg, run the bootstrap script below, which will install vcpkg:

bootstrap-vcpkg.bat

Check for the library's availability by running:

vcpkg search pthread

which will show you a result like:

mbedtls[pthreads] Multi-threading support
pthread 3.0.0 empty package, linking to other port
pthreads 3.0.0-6 pthreads for windows

As you can see, it supports pthreads for Windows.

1. Building the dynamic library with import lib (DLL)

Building the x86 DLL:

vcpkg install pthreads:x86-windows

This builds the DLL and import library in .\vcpkg\installed\x86-windows; from here copy the lib and include folders and you can use them.

Building the x64 DLL:

vcpkg install pthreads:x64-windows

This builds the DLL and import library in .\vcpkg\installed\x64-windows; from here copy the lib and include folders.

2. Building the static library (LIB)

Building the x86 LIB:

vcpkg install pthreads:x86-windows-static

This builds the static library in .\vcpkg\installed\x86-windows-static; from here copy the lib and include folders and you can use them.

Building the x64 LIB:

vcpkg install pthreads:x64-windows-static

This builds the static library in .\vcpkg\installed\x64-windows-static; from here copy the lib and include folders.

NOTE: Try to run these with admin privileges. A: For me, I just use a 64-bit Windows compiler (a mingw-w64 cross compiler in this particular case), then make (with 2.9.1) like:

$ make clean GC-static

Then here is how I install it for use (some of this may not be needed, of course):

cp libpthreadGC2.a $mingw_w64_x86_64_prefix/lib/libpthread.a
cp pthread.h sched.h semaphore.h $mingw_w64_x86_64_prefix/include

Then to use it, you have to define this (example ffmpeg configure line to use it):

--extra-cflags=-DPTW32_STATIC_LIB

Anyhow, that's one way. Another way is to do the same, then modify the *.h files and remove all references to dllexport from the headers (or manually define PTW32_STATIC_LIB in the headers).
ex: sed 's/ __declspec (dllexport)//g;s/ __declspec (dllimport)//g' (ref: zeranoe build scripts) A: Until it's officially released, it looks like you have to check out the CVS head to get version 2.9 of the library. Version 2.9 has all the x64 patches, but you will still have problems if you try to compile the static library from the command line. The only workaround I know of is to use the DLLs instead of statically linking the LIB. A: Here's how I did it (VS2015). It should work for older Visual Studios too. 1) Download the release .zip from SourceForge. 2) Unpack to a clean folder; you should see "pthreads.2". 3) Open up your Visual Studio command prompt and navigate to "pthreads.2". 4) Run "nmake" with no arguments. It produces a help message listing all the legal commands you can give 'nmake' to build it. For more info, see the "pthreads.2\FAQ" file, which explains their 3 different flavors of 'cleanup' handling. I would suggest building only "VC" and "VC-debug" (and maybe the static versions of those). The 'real' pthreads is a C system library on POSIX platforms like Linux, so only those combos are going to give you the exact same C error behavior on Windows that you'd get on Linux, FreeBSD, etc. A: To expand on kgriffs' answer, one has to do two more things to actually build a 64-bit DLL and not a 32-bit one.
First download the latest pthreads via CVS (as suggested here).

1) Use the 64-bit build tools, achieved by loading the correct VC environment settings on the command line (more about it here):

C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\vcvarsall.bat amd64

(change the 11.0 to whatever version you are using)

2) As it is written in the pthreads Makefile: "TARGET_CPU is an environment variable set by Visual Studio Command Prompt as provided by the SDK (VS 2010 Express plus SDK 7.1)" and "PLATFORM is an environment variable that may be set in the VS 2013 Express x64 cross development environment", which means that if it was not done by vcvarsall (in my case it wasn't) you need to set TARGET_CPU or PLATFORM (just in case, I set them both):

set TARGET_CPU=x64
set PLATFORM=x64

3) And now the final step:

nmake clean VC
nmake clean VC-debug

This will make 64-bit DLL files (and the proper import library and PDB). I can verify that it works with Visual Studio 2012. A: This message might help. A: I was successful in replacing "pthread-win32" with "pthreads4w" https://sourceforge.net/projects/pthreads4w/ and compiling in MSVC2019 for an x64 target using the console nmake command. It even links statically.
{ "language": "en", "url": "https://stackoverflow.com/questions/159298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Runtime callable wrapper (RCW) scope - process or application domain? What is the scope of a Runtime Callable Wrapper (RCW) when referencing unmanaged COM objects? According to the docs: The runtime creates exactly one RCW for each COM object, regardless of the number of references that exist on that object. If I had to "guess" - this explanation should mean "one per process", but is it really? Any additional documentation will be very welcome. My application runs in its own application domain (it is an Outlook addin), and I would like to know what happens if I use Marshal.ReleaseComObject(x) in a loop until its count reaches 0 (as recommended). Will it release references from other addins (running in other application domains in the same Outlook process)? EDIT: Perfect - now the confusion is even bigger. Based on the 2 answers (from Lette and Ilya) we have 2 different answers. The official MSDN doc says per process (for ver. 2.0+), but this sentence is missing from ver. 1.1 of the doc. At the same time, Mason Bendixen's article says it's per appdomain. As his article is old (April 2007), I have sent him an email asking for clarification, but if someone else has something to add, please do. Thanks A: In managed, we have a per app domain cache mapping canonical IUnknowns back to RCWs. When an IUnknown enters the system (through a marshal call, through activation, as a return parameter from a method call, etc.), we check the cache to see if an RCW already exists for the COM object. If a mapping exists, a reference to the existing RCW is returned. Otherwise a new RCW is created and a cache mapping is added. from Mason's Blog A: The Mason Bendixen blog article that Ilya cites is correct: the RCW is scoped to the AppDomain, not to the process. I can only guess that the Runtime Callable Wrapper (MSDN 2.0) article was speaking "casually".
That article is not necessarily incorrect in the general sense, because it is most typical to execute using only a single AppDomain, but that sentence is not technically accurate. As to your specific question: "I would like to know what happens if I use Marshal.ReleaseComObject(x) in a loop until its count reaches 0 (as recommended). Will it release references from other addins (running in other application domains in the same Outlook process)?" The answer to this depends on how you set up your add-in. In general, if you do not take precautions, then the answer is yes, it would impact the references in other add-ins operating from within the same AppDomain. But since you state that you are running from a separate AppDomain, then, no, it would not. There is a COM Shim Wizard Version 2.3.1 that you can use to isolate your add-in. The documentation for the COM Shim Wizard can be found here: Isolating Microsoft Office Extensions with the COM Shim Wizard Version 2.3.1. The COM Shim Wizard uses reflection to build a customized COM front-end loader that loads your add-in assembly within a separate AppDomain. This creates safety in two respects: (1) By using a separate, customized COM entry point, your add-in is correctly identified separately by Microsoft Office from all other add-ins. Otherwise, by default, all add-ins share the same default mscoree.dll loader. The problem with sharing the same loader is that if any add-in crashes, then mscoree.dll will be identified by Microsoft Office as the source of the problem and will not be loaded automatically the next time. You can turn it on again manually, but your add-in would not load automatically the next time due to a problem in someone else's add-in! (2) By loading your assembly within a separate AppDomain, the runtime callable wrappers (RCWs) are isolated from the other add-ins that are loaded into the same process.
In this case, if you call Marshal.ReleaseComObject(object) or Marshal.FinalReleaseComObject(object) then you would not be impacting anyone else's add-ins. More importantly, if any of those other add-ins make such calls, then your add-in would be protected from being corrupted. :-) The downside to using the COM Shim Wizard is that by operating out of a separate AppDomain there is extra marshalling overhead. I don't believe that this should be noticeable for a Microsoft Outlook add-in. It can be a factor, however, for some intensive routines that have lots of calls to the object model, such as can sometimes be the case for a Microsoft Excel add-in. You stated that you are already running your add-in from a separate AppDomain. If this is true, then you are already isolated from Marshal.ReleaseComObject(object) and Marshal.FinalReleaseComObject(object) calls with respect to other AppDomains. (I am curious as to how you are doing this, by the way... Are you explicitly creating your own AppDomain? The default add-in template in Visual Studio does not run in a separate AppDomain and loads using mscoree.dll.) If you are creating your own AppDomain, your code is isolated, but its identity might not be separate from other add-ins, as your add-in would still be sharing the default mscoree.dll loader, unless you utilized some other means to address this. I hope this helps... A: According to the same docs: The runtime maintains a single RCW per process for each object. I think we can safely assume that object = instance, so if the addins/AppDomains don't hold references to the same instance, the call to ReleaseComObject won't release references to instances created elsewhere. Edit: The wording of the docs may be wrong, as stated elsewhere. If so, since your add-in is running in a separate AppDomain, you're in luck. Even if the different add-ins reference the same instance (e.g.
a Message object in Outlook), ReleaseComObject called in your AppDomain will not cause RCWs in other AppDomains to lose the reference to that instance.
{ "language": "en", "url": "https://stackoverflow.com/questions/159313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: When should you override OnEvent as opposed to subscribing to the event when inheriting When should one do the following?

class Foo : Control
{
    protected override void OnClick(EventArgs e)
    {
        // new code here
    }
}

As opposed to this?

class Foo : Control
{
    public Foo()
    {
        this.Click += new EventHandler(Clicked);
    }

    private void Clicked(object sender, EventArgs e)
    {
        // code
    }
}

A: The event is for external subscribers. When you are deriving some control, always override the OnEvent method instead of subscribing to the event. This way, you can be sure when your code is called, because the actual event is fired when you call base.OnEvent(), and you can call this before your code, after your code, in the middle of your code or not at all. You can then also react to return values from the event (i.e. changed properties in the EventArgs object). A: Be aware that (at least in .NET 2.0) I have found a few places in the framework (specifically in the DataTable class) where the OnFoo method is only called when the corresponding Foo event has been handled! This contravenes the framework design guidelines but we're stuck with it. I've gotten around it by handling the event with a dummy handler somewhere in the class, e.g.:

public class MyDataTable : DataTable
{
    public override void EndInit()
    {
        base.EndInit();
        this.TableNewRow += delegate(object sender, DataTableNewRowEventArgs e) { };
    }

    protected override void OnTableNewRow(DataTableNewRowEventArgs e)
    {
        base.OnTableNewRow(e);
        // your code here
    }
}

A: Overriding rather than attaching a delegate will result in more efficient code, so it is generally recommended that you always do this where possible. For more information see this MSDN article. Here is a pertinent quote: The protected OnEventName method also allows derived classes to override the event without attaching a delegate to it. A derived class must always call the OnEventName method of the base class to ensure that registered delegates receive the event.
A: Subscribing to the event is intended for a control to monitor events on a different control. For monitoring your own events, overriding OnClick is fine. Note, however, that Control.OnClick handles firing those subscribed events, so be sure to call it in your override. A: If you override as Kent Boogaart comments, you'll need to be careful to call base.OnClick to allow event subscriptions to be called. A: An inherited class should never subscribe to its own events, or its base class's events. Now, if a class has an instance of another, different, class in it, then it can consume that class's events, and determine if it should raise its own event or not. For example, I rolled out an MRU List class recently. In it, there were a number of ToolStripMenuItem controls, whose click event I consumed. After that click event was consumed, I then raised my class's event. (see that source code here) A: It is worth noting that there are some corner cases where it only works with handlers and not with OnEvent overrides. One such example: Why style is not applied when I'm removing StartupUri in WPF?
{ "language": "en", "url": "https://stackoverflow.com/questions/159317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What's wrong with my SOAP call to OnTime from my SVN post-commit hook? My Subversion repository is on a Linux server and my OnTime 2007 system is on a Windows 2003 server. I have a post-commit hook script that launches two Perl scripts. One sends an email; it works great. The other is supposed to write the details from the SVN commit to the Notes section of the OnTime tracking system. I have lots of debugging statements in the Perl scripts, so I can see that the details of the commit are accurately retrieved. The problem is writing them to OnTime. I'm using SOAP to pass the details, but it isn't working. The code is:

$service = SOAP::Lite->uri(URI);
$service->proxy(URL_to_OnTime);
$service->on_action(sub { URI . UpdateDefectNotes });
$method = SOAP::Data->name(UpdateDefectNotes)->attr({ xmlns => URI });
$response = $service->call($method => $defectid, $name, $revisionid, $notes);

The response code I get back is 1, but I don't know if this is success or failure. All I know is that the Notes section in OnTime for the defectid is NOT updated. Can anyone help? Nancy A: Add this in to display the XML request/response:

$service->on_debug( sub { print @_ } );
{ "language": "en", "url": "https://stackoverflow.com/questions/159331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C++ compile-time expression as an array size I'm not sure if the term's actually "Array Addition". I'm trying to understand what the following line does: int var[2 + 1] = {2, 1}; How is that different from int var[3]? I've been using Java for several years, so I'd appreciate it if this were explained using Java-friendly words. Edit: Thousands of thanks to everyone who helped me out; Occam's Razor applies here. A: It's not different. C++ allows expressions (even non-constant expressions) in the subscripts of array declarations (with some limitations; anything other than the initial subscript on a multi-dimensional array must be constant).

int var[];          // illegal
int var[] = {2,1};  // automatically sized to 2
int var[3] = {2,1}; // equivalent to {2,1,0}: anything not specified is zero
int var[3];         // however, with no initializer, nothing is initialized to zero

Perhaps the code you are reading writes 2 + 1 instead of 3 as a reminder that a trailing 0 is intentional. A: How is that different from int var[3]? In no way that I can see. A: It isn't any different from int var[3]. The compiler will evaluate 2 + 1 and replace it with 3 during compilation. A: var[2 + 1] is not different from var[3]. The author probably wanted to emphasize that the var array will hold 2 data items and a terminating zero. A: It isn't any different; it is int var[3]. Someone might write their array like this when writing char arrays in order to add space for the terminating 0. char four[4 + 1] = "1234"; It doesn't seem to make any sense when working with an int array. A: This creates an array of 3 integers. You're right, there is no difference whether you express it as 2 + 1 or 3, as long as the value is a compile-time constant. The right side of the = is an initializer list and it tells the compiler how to fill the array. The first value is 2, the second 1 and the third is 0 since no more values are specified. The zero fill only happens when you use an initializer list.
Otherwise there is no guarantee that the array has any particular values. I've seen this done with char arrays, to emphasize that one char is reserved for a string terminator, but never for an int array.
{ "language": "en", "url": "https://stackoverflow.com/questions/159339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to troubleshoot "DataMember Not Found" in ActiveReports ActiveReports seems like a powerful, flexible tool, but if you make a mistake anywhere, you get an exception "data member not found. please check your datasource and datamember properties". There is no indication as to which datasource/datamember is at fault or which subreport the problem lies in, but ActiveReports must know this! The stack trace is no use, as the error is thrown after the report.run() method is invoked, from deep within code generated by ActiveReports itself. Does anybody have a solution other than commenting out one subreport after another and checking all fields in turn? A: I was getting the same error while trying to pass a raw dataset to a report, as below:

...
Dim rpt as New ActiveReport
With rpt
    .DataSource = _data
    .Run()
End With
...

I specified a table in the dataset and the error went away:

...
.DataSource = _data.Tables(0)
...

A: Unfortunately I don't know a way to immediately tell which subreport that error is coming from. Indeed, that error message should be improved when you're using subreports. I will report it to the ActiveReports development team.
{ "language": "en", "url": "https://stackoverflow.com/questions/159351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why can't we declare var a = new List at class level? I know we cannot do this at class level, but at method level we can always do this: var myList = new List<string>(); // or something else like this. This question came to my mind since wherever we declare a variable like this, we always provide the type information on the RHS of the expression, so the compiler doesn't need to do any type guessing (correct me if I am wrong). So the question remains: why not at class level, while it's allowed at method level? A: There are technical issues with implementing this feature. The common cases seem simple but the tougher cases (e.g., fields referencing other fields in chains or cycles, expressions which contain anonymous types) are not. See Eric Lippert's blog for an in-depth explanation: Why no var on fields? A: The compiler guys just didn't implement the support. It's entirely compiler magic, and the compiler doesn't actually put something into IL that says "figure out the type at runtime"; it knows the type and builds it in, so it could've done that for members as well. It just doesn't. I'm pretty sure that if you asked an actual compiler guy on the C# compiler team, you'd get something official, but there's no magic happening here and it should be possible to do the same for member fields. A: The var keyword was invented specifically to support anonymous types. You are generally NOT going to declare anonymous types at the class level, and thus it was not implemented. Your example statement var myList = new List<string>() is not a very good example of how to use the var keyword since it's not for the intended purpose. A: It's not as simple as implementing var in a method, since you also have to take into account different modifiers and attributes, like so: [MyAttribute()] protected internal readonly var list = new List<T>(); What I would really have liked is a type-inferenced const!
public const notFoundStatus = 404; // int A: Pass the List type into a generic class:

class Class1
{
    public void genmethod<T>(T i, int Count)
    {
        List<string> list = i as List<string>;
        for (int j = 0; j < Count; j++)
        {
            Console.WriteLine(list[j]);
        }
    }

    static void Main(string[] args)
    {
        Class1 c = new Class1();
        c.genmethod<string>("str", 0);
        List<string> l = new List<string>();
        l.Add("a");
        l.Add("b");
        l.Add("c");
        l.Add("d");
        c.genmethod<List<string>>(l, l.Count);
        Console.WriteLine("abc");
        Console.ReadLine();
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/159359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Are standard .Net 2 dlls compatible with the silverlight 2.0 runtime? If I avoid referencing assemblies that don't exist in the Silverlight 2.0 runtime, will the .Net 2.0 library dlls I create with VS2008 work with Silverlight without recompilation or other alteration? A: No, you will still need to recompile against the Silverlight versions of the assemblies. You can add these files to a Silverlight Class Library project "as link", sharing the same file between both projects, so you at least won't have to worry about getting out of sync. A: In addition to creating the files in one project and adding them as links in the other, you might still encounter API differences between the desktop and Silverlight APIs. You can work around those code differences with #if blocks, i.e.:

#if SILVERLIGHT
/* some code */
#else // WPF
/* some other code */
#endif
{ "language": "en", "url": "https://stackoverflow.com/questions/159362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Attribute not included in the generated proxy class Using .Net 3.0 and VS2005. The objects in question are consumed from a WCF service then serialized back into XML for a legacy API. So rather than serializing the TestObject, it was serializing .TestObject which was missing the [XmlRoot] attribute; however, all the [Xml*] attributes for the child elements were in the generated proxy code so they worked just fine. So all the child elements worked just fine, but the enclosing element did not because the [XmlRoot] attribute wasn't included in the generated proxy code. The original object that included the [XmlRoot] attribute serializes fine manually. Can I have the proxy code include the [XmlRoot] attribute so the generated proxy class serializes correctly as well? If I can't do that I suspect I'll have to use [XmlType] but that causes minor havoc requiring me to change other components so I would prefer the former. I also want to avoid having to manually edit the autogenerated proxy class. Here is some sample code (I have included the client and the service in the same app because this is quick and for test purposes. Comment out the service referencing code and add the service reference while running the app, then uncomment the service code and run.) 
namespace SerializationTest
{
    class Program
    {
        static void Main( string[] args )
        {
            Type serviceType = typeof( TestService );
            using (ServiceHost host = new ServiceHost( serviceType, new Uri[] { new Uri( "http://localhost:8080/" ) } ))
            {
                ServiceMetadataBehavior behaviour = new ServiceMetadataBehavior();
                behaviour.HttpGetEnabled = true;
                host.Description.Behaviors.Add( behaviour );
                host.AddServiceEndpoint( serviceType, new BasicHttpBinding(), "TestService" );
                host.AddServiceEndpoint( typeof( IMetadataExchange ), new BasicHttpBinding(), "MEX" );
                host.Open();

                TestServiceClient client = new TestServiceClient();
                localhost.TestObject to = client.GetObject();
                String XmlizedString = null;
                using (MemoryStream memoryStream = new MemoryStream())
                {
                    XmlSerializer xs = new XmlSerializer( typeof( localhost.TestObject ) );
                    using (XmlWriter xmlWriter = XmlWriter.Create(memoryStream))
                    {
                        xs.Serialize( xmlWriter, to );
                        memoryStream = (MemoryStream)xmlWriter.BaseStream;
                        XmlizedString = Encoding.UTF8.GetString( memoryStream.ToArray() );
                        Console.WriteLine( XmlizedString );
                    }
                }
            }
            Console.ReadKey();
        }
    }

    [Serializable]
    [XmlRoot( "SomethingElse" )]
    public class TestObject
    {
        private bool _worked;

        public TestObject()
        {
            Worked = true;
        }

        [XmlAttribute( AttributeName = "AttributeWorked" )]
        public bool Worked
        {
            get { return _worked; }
            set { _worked = value; }
        }
    }

    [ServiceContract]
    public class TestService
    {
        [OperationContract]
        [XmlSerializerFormat]
        public TestObject GetObject()
        {
            return new TestObject();
        }
    }
}

Here is the Xml this generates:

<?xml version="1.0" encoding="utf-8"?>
<TestObject xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" AttributeWorked="true" />

A: == IF == This is only for the XmlRoot attribute. The XmlSerializer has one constructor where you can specify the XmlRoot attribute. Kudos to csgero for pointing it out. His comment should be the solution.
XmlSerializer Constructor (Type, XmlRootAttribute) Initializes a new instance of the XmlSerializer class that can serialize objects of the specified type into XML documents, and deserialize an XML document into an object of the specified type. It also specifies the class to use as the XML root element. A: I found someone who provides a means to solve this situation: Matevz Gacnik's Weblog. Using that XmlAttributeOverrides approach, I wrote the following:

private static XmlSerializer GetOverridedSerializer()
{
    // set overrides for TestObject element
    XmlAttributes attrsTestObject = new XmlAttributes();
    XmlRootAttribute rootTestObject = new XmlRootAttribute("SomethingElse");
    attrsTestObject.XmlRoot = rootTestObject;

    // create overrider
    XmlAttributeOverrides xOver = new XmlAttributeOverrides();
    xOver.Add(typeof(localhost.TestObject), attrsTestObject);

    XmlSerializer xSer = new XmlSerializer(typeof(localhost.TestObject), xOver);
    return xSer;
}

Just put that method in the Program class of your example, and replace the following line in Main():

//XmlSerializer xs = new XmlSerializer(typeof(localhost.TestObject));
XmlSerializer xs = GetOverridedSerializer();

And then run to see the results. Here is what I got:

<?xml version="1.0" encoding="utf-8"?><SomethingElse xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" AttributeWorked="true" />
{ "language": "en", "url": "https://stackoverflow.com/questions/159373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ReportViewer Control and Ajax UpdatePanel Did any of you ever find a way of getting the Microsoft Report Viewer Control (Web) to work from within an Ajax UpdatePanel? A: The only way, really, is to create an iframe with the report in there, IIRC. However, in this post here a guy claims he has a way to fix it with some code, albeit I haven't even tried this, as I have never had a need to show any of my reports in an update panel. I tend to keep my reports external to any Ajax apps; for example, when a report is requested I will open a new window with just the report. My users like that better anyhow. A: I fixed this bug by using the Microsoft Report Viewer 2010 Redistributable Package from: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=a941c6b2-64dd-4d03-9ca7-4017a0d164fd then changed the web config as follows, from:

<assemblies>
    <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
    <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
    <add assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" />
    <add assembly="Microsoft.ReportViewer.Common, Version=9.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" />
</assemblies>
<assemblies>
    <add assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" />
    <add assembly="Microsoft.ReportViewer.Common, Version=9.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" />
</assemblies>

to:

<assemblies>
    <add assembly="Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    <add assembly="Microsoft.ReportViewer.Common, Version=10.0.0.0,
Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </assemblies> add this to runtime <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="Microsoft.ReportViewer.WebForms" publicKeyToken="b03f5f7f11d50a3a"/> <bindingRedirect oldVersion="9.0.0.0-9.1.0.0" newVersion="10.0.0.0"/> </dependentAssembly> </assemblyBinding> A: Never tried really, but I'm sure that control wouldn't work straight away. I'm pretty sure it needs to load some extra Javascript, because it adds so much complexity, so you might need to load those before updating the panel. A: I can also confirm that the latest release (2010) mentioned in previous post corrects issue. It also removes the need to explicitly set AsyncRendering=False: I mention this because other suggestions out there on the web say to set that value on that property A: Here's a exemple: <asp:Button ID="Button1" runat="server" OnClick="ViewReport_Clicked" Text="View Report" SkinID="ButtonA" /> <asp:UpdatePanel ID="TFD_UP" runat="server"> <ContentTemplate> <rsweb:ReportViewer ID="ReportViewer1" runat="server" SizeToReportContent="True" Height="202px" Width="935px" Font-Names="Verdana" Font-Size="8pt" InteractiveDeviceInfos="(Collection)" WaitMessageFont-Names="Verdana" WaitMessageFont-Size="14pt" Visible="false"> <LocalReport ReportPath="Reports\Report4.rdlc"> <DataSources> <rsweb:ReportDataSource DataSourceId="SqlDataSourceArchiSpecs" Name="Proc_TechFilesDownloadsDataSetParent" /> </DataSources> </LocalReport> </rsweb:ReportViewer> <asp:SqlDataSource ID="SqlDataSourceArchiSpecs" runat="server" ConnectionString="<%$ ConnectionStrings:ArchiSpecsDBConnectionString %>" SelectCommand="PROC_TECHNICALFILES_DOWNLOAD_DETAILS" SelectCommandType="StoredProcedure"> <SelectParameters> <asp:Parameter Name="supId" Type="Int32" /> <asp:Parameter Name="startDate" Type="DateTime" /> <asp:Parameter Name="endDate" Type="DateTime" /> </SelectParameters> </asp:SqlDataSource> </ContentTemplate> <Triggers> 
<asp:AsyncPostBackTrigger ControlID="Button1" EventName="Click" /> </Triggers> </asp:UpdatePanel>
{ "language": "en", "url": "https://stackoverflow.com/questions/159391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I parse Apache's error log in PHP? I want to create a script that parses or makes sense of apache's error log to see what the most recent error was. I was wondering if anyone out there has something that does this or has any ideas where to start? A: there are piles of php scripts that do this, just do a google search for examples. if you want to roll your own, it's nothing more complex than reading any other file. just make sure you know the location of your logfiles (defined in the httpd.conf file) and the format your log files are in. the format is also defined in httpd.conf A: Here's a small-ish class that makes it easy to read a number of characters from the back of a large file w/o overloading memory. The test setting lets you see it in action cannibalizing itself. BigFile.php <?php $run_test = true; $test_file = 'BigFile.php'; class BigFile { private $file_handle; /** * * Load the file from a filepath * @param string $path_to_file * @throws Exception if path cannot be read from */ public function __construct( $path_to_log ) { if( is_readable($path_to_log) ) { $this->file_handle = fopen( $path_to_log, 'r'); } else { throw new Exception("The file path to the file is not valid"); } } /** * * 'Finish your breakfast' - Jay Z's homme Strict */ public function __destruct() { fclose($this->file_handle); } /** * * Returns a number of characters from the end of a file w/o loading the entire file into memory * @param integer $number_of_characters_to_get * @return string $characters */ public function getFromEnd( $number_of_characters_to_get ) { $offset = -1*$number_of_characters_to_get; $text = ""; fseek( $this->file_handle, $offset , SEEK_END); while(!feof($this->file_handle)) { $text .= fgets($this->file_handle); } return $text; } } if( $run_test ) { $number_of_characters_to_get = 100000; $bf = new BigFile($test_file); $text = $bf->getFromEnd( $number_of_characters_to_get ); echo "$test_file has the following $number_of_characters_to_get characters at the 
end: <br/> <pre>$text</pre>"; } ?> A: There are a few things to consider first: * *Firstly, your PHP user may not have access to Apache's log files. *Secondly, PHP and Apache aren't going to tell you where said log file is. *Lastly, Apache log files can get quite large. However, if none of these apply, you can use the normal file reading commands to do it. The easiest way to get the last error is $contents = @file('/path/to/error.log', FILE_SKIP_EMPTY_LINES); if (is_array($contents)) { echo end($contents); } unset($contents); There's probably a better way of doing this that doesn't oink up memory, but I'll leave that as an exercise for the reader. One last comment: PHP also has an ini setting to redirect PHP errors to a log file: error_log = /path/to/error.log You can set this in httpd.conf or in an .htaccess file (if you have access to one) using the php_flag notation: php_flag error_log /web/mysite/logs/error.log A: For anyone else looking for a sample script, I threw something together; it's got the basics: <?php exec('tail /usr/local/apache/logs/error_log', $output); ?> <Table border="1"> <tr> <th>Date</th> <th>Type</th> <th>Client</th> <th>Message</th> </tr> <? foreach($output as $line) { // sample line: [Wed Oct 01 15:07:23 2008] [error] [client 76.246.51.127] PHP 99. Debugger->handleError() /home/gsmcms/public_html/central/cake/libs/debugger.php:0 preg_match('~^\[(.*?)\]~', $line, $date); if(empty($date[1])) { continue; } preg_match('~\] \[([a-z]*?)\] \[~', $line, $type); preg_match('~\] \[client ([0-9\.]*)\]~', $line, $client); preg_match('~\] (.*)$~', $line, $message); ?> <tr> <td><?=$date[1]?></td> <td><?=$type[1]?></td> <td><?=$client[1]?></td> <td><?=$message[1]?></td> </tr> <? } ?> </table> A: Have you tried biterScripting? I am a system admin and I have been using it to parse logs. It is unix-style scripting. biterScripting.com -> Free download.
{ "language": "en", "url": "https://stackoverflow.com/questions/159393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Are there any user interface prototyping tools for Eclipse? I am looking into designing new features for Eclipse-based programming tools, from the requirements/ideas perspective. To really do this quickly, I would like to sketch UI elements without having to code things -- my concern is with the concepts and ideas right now, not the possible later realization. Are there any such graphical sketching tools for Eclipse? (On a side note, I should also note that I find Eclipse a better idea every day, in the way that you can combine partial systems from very many different sources into a single environment. It really is the future of IDEs, especially for embedded systems. It used to be pretty horrible pre-Eclipse-3.0, but now it does seem to work.) A: WireframeSketcher is a tool that helps quickly create wireframes, mockups and prototypes for desktop, web and mobile applications. It comes both as a standalone version and as a plug-in for Eclipse IDEs. It has some distinctive features like storyboards, components, linking and vector PDF export. Among supported IDEs are Aptana, Flash Builder, Zend Studio and Rational Application Developer. (source: wireframesketcher.com) A: Incidentally, NetBeans is known for having a really good GUI editor (Matisse), but I realize that you weren't asking about NetBeans :) A: I've tried the Visual Editor Project before, but in the past it crashed my instance of Eclipse, and I haven't visited it since. Jigloo is a new one that I'd like to try out soon. A: This is really specific to Eclipse: it is the platform of choice for general IDEs today, and I am looking to sketch out extensions to it. The target programming language is more likely to be raw assembler and C than anything else -- OS, driver, system-level debug.
{ "language": "en", "url": "https://stackoverflow.com/questions/159422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Awk scripting help - Logic Issue I'm currently writing a simple .sh script to parse an Exim log file for strings matching " o' ". Currently, when viewing output.txt, all that is there is a 0 printed on every line(606 lines). I'm guessing my logic is wrong, as awk does not throw any errors. Here is my code(updated for concatenation and counter issues). Edit: I've adopted some new code from dmckee's answer that I'm now working with over the old code in favor of simplicity. awk '/o'\''/ { line = "> "; for(i = 20; i <= 33; i++) { line = line " " $i; } print line; }' /var/log/exim/main.log > output.txt Any ideas? EDIT: For clarity's sake, I'm grepping for "o'" in email addresses, because ' is an illegal character in email addresses(and in our databases, appears only with o'-prefixed names). EDIT 2: As per commentary request, here is a sanitized sample of some desired output: [xxx.xxx.xxx.xxx] kathleen.o'toole@domain.com <kathleen.o'toole@domain.com> routing defer (-51): retry time not reached [xxx.xxx.xxx.xxx] julie.o'brien@domain.com <julie.o'brien@domain.com> routing defer (-51): retry time not reached [xxx.xxx.xxx.xxx] james.o'dell@domain.com <james.o'dell@domain.com> routing defer (-51): retry time not reached [xxx.xxx.xxx.xxx] daniel_o'leary@domain.com <aniel_o'leary@domain.com> routing defer (-51): retry time not reached The reason I'm starting at 20 in my loop is because everything before the 20th field is just standard log information that isn't needed for my purposes here. All I need is everything from the IP and beyond for this solution(the messages for each 550 error are different for each mail server in use out there. I'm compiling a list of common ones) A: + means numerical addition in awk. If you want to concatenate, just place the constants and/or expressions separated with spaces. 
So, this line += " " + $i should become line = line " " $i EDIT: Iff exim log files (I am more into Postfix :) are separated by a single space, isn't the following more simple: grep -F o\' /var/log/exim/main.log | cut -d\ -f20-33 >output.txt ? A: There is no real need for the grep here. Let awk select the matching lines for you (and fixing your concatenation bug as per ΤΖΩΤΖΙΟΥ): awk '/o'\''/ { line = "> "; for(i = 20; i <= 33; i++) { line = line " " $i; } print line; }' /var/log/exim/main.log > output.txt Of course, you end up needing some weird escaping if you do it at the promp like above. It is cleaner in a script... Edit: On the first pass I missed the += problem... Also assuming that the line you gave above is partial, as it has only 13ish fields (by default fields are white space delimited). A: "'" is not illegal in local parts. From RFC2821, section 4.1.2: Local-part = Dot-string / Quoted-string Dot-string = Atom *("." Atom) Atom = 1*atext 2821 further references RFC2822 for non-locally-defined elements, so: atext = ALPHA / DIGIT / ; Any character except controls, "!" / "#" / ; SP, and specials. "$" / "%" / ; Used for atoms "&" / "'" / "*" / "+" / "-" / "/" / "=" / "?" / "^" / "_" / "`" / "{" / "|" / "}" / "~" In other words, "'" is a perfectly legal unquoted characted to have in an email localpart. Now, it may not be legal at your site, but that's not what you said. Sorry for not staying directly on topic, but I wanted to correct your assertion. A: Off task, and simpler still: python. import fileinput for line in fileinput.input(): if "'" in line: fields = line.split(' ') print "> ", ' '.join( fields[20:34] )
{ "language": "en", "url": "https://stackoverflow.com/questions/159423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ReceivePayment Pending for Quickbooks XML and Merchant Services In the Quickbooks SDK Manual, there is a section called "Using ReceivePayment for Credit Card Authorization and Capture". It reads... Using ReceivePayment for Credit Card Authorization and Capture If the company is subscribed to QBMS, you can record a ReceivePaymentAdd that is basically a pending transaction. That is, in this usage, you want to save a QBMS authorization transaction into QuickBooks. Thus, the ReceivePaymentAdd contains a CreditCardTxnInfo aggregate with a CreditCardTxnType of Authorization. QuickBooks saves this as a pending transaction. Later, when the authorized charge is captured to become a real charge in QBMS, you can record that charge into QuickBooks by modifying that ReceivePayment (ReceivePaymentMod). The ReceivePaymentMod will have a CreditCardTxnInfoMod containing data from the QBMS capture transaction, with a CreditCardTxnType of Capture. QuickBooks automatically removes the pending status and records the transaction. My question is: how do you actually do that with QBXML? Right now, I have a VB.NET application that sends invoices to QuickBooks, but then users have to switch to QuickBooks and click "Customers -> Receive Payments" to charge their credit card (using QuickBooks Merchant Services). It would be awfully nice to automate this in some way, perhaps by sending QuickBooks an XML message to charge the card? A: I'm not quite sure what you mean... the way I would approach it is: * *Use the QBMS XML API to authorize the card *Push the receive payment and authorization to QuickBooks *When ready, use the QBMS API to charge the card *Issue a ReceivePaymentMod to record the capture in QuickBooks As far as I know, there is no way to tell QuickBooks to do the capture on its own. But you can use the QBMS API to do the capture.
{ "language": "en", "url": "https://stackoverflow.com/questions/159431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the simplest way to convert char[] to/from tchar[] in C/C++(ms)? This seems like a pretty softball question, but I always have a hard time looking up this function because there seem to be so many variations regarding the referencing of char and tchar. A: TCHAR is a Microsoft-specific typedef for either char or wchar_t (a wide character). Conversion to char depends on which of these it actually is. If TCHAR is actually a char, then you can do a simple cast, but if it is truly a wchar_t, you'll need a routine to convert between character sets. See the function MultiByteToWideChar() A: MultiByteToWideChar, but also see "A few of the gotchas of MultiByteToWideChar". A: There are a few answers in this post as well, especially if you're looking for a cross-platform solution: UTF8 to/from wide char conversion in STL A: Although in this particular situation I think the TCHAR is a wide character, I'll only need to do the conversion if it isn't - which I have to check somehow. if (sizeof(TCHAR) != sizeof(wchar_t)) { .... } The cool thing about that is that both sides of the comparison are compile-time constants, which means that the compiler will handle (and remove) the if(), and if they are equal, remove everything inside the braces. A: Here is the CPP code that duplicates _TCHAR * argv[] to char * argn[]: http://www.wincli.com/?p=72 If you are adapting old code to Windows, simply use the define mentioned in the code as optional. A: The simplest way is to use the conversion macros: * *CW2A *CA2W *etc... MSDN A: You can put a condition in your code: #ifdef _UNICODE /* treat TCHAR as a wide char */ #else /* treat TCHAR as a plain char */ #endif A: I realize this is an old thread, but it didn't get me the "right" answer, so I am adding it now. The way this appears to be done now is to use the TEXT macro. The example for FindFirstFile at MSDN points this out. http://msdn.microsoft.com/en-us/library/windows/desktop/aa364418%28v=vs.85%29.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/159442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Pivot Table and Concatenate Columns I have a database in the following format: ID TYPE SUBTYPE COUNT MONTH 1 A Z 1 7/1/2008 1 A Z 3 7/1/2008 2 B C 2 7/2/2008 1 A Z 3 7/2/2008 Can I use SQL to convert it into this: ID A_Z B_C MONTH 1 4 0 7/1/2008 2 0 2 7/2/2008 1 0 3 7/2/2008 So, the TYPE, SUBTYPE are concatenated into new columns and COUNT is summed where the ID and MONTH match. Any tips would be appreciated. Is this possible in SQL or should I program it manually? The database is SQL Server 2005. Assume there are 100s of TYPES and SUBTYPES so and 'A' and 'Z' shouldn't be hard coded but generated dynamically. A: select id, sum(case when type = 'A' and subtype = 'Z' then [count] else 0 end) as A_Z, sum(case when type = 'B' and subtype = 'C' then [count] else 0 end) as B_C, month from tbl_why_would_u_do_this group by id, month You change requirements more than our marketing team! If you want it to be dynamic you'll need to fall back on a sproc. A: SQL Server 2005 offers a very useful PIVOT and UNPIVOT operator which allow you to make this code maintenance-free using PIVOT and some code generation/dynamic SQL /* CREATE TABLE [dbo].[stackoverflow_159456]( [ID] [int] NOT NULL, [TYPE] [char](1) NOT NULL, [SUBTYPE] [char](1) NOT NULL, [COUNT] [int] NOT NULL, [MONTH] [datetime] NOT NULL ) ON [PRIMARY] */ DECLARE @sql AS varchar(max) DECLARE @pivot_list AS varchar(max) -- Leave NULL for COALESCE technique DECLARE @select_list AS varchar(max) -- Leave NULL for COALESCE technique SELECT @pivot_list = COALESCE(@pivot_list + ', ', '') + '[' + PIVOT_CODE + ']' ,@select_list = COALESCE(@select_list + ', ', '') + 'ISNULL([' + PIVOT_CODE + '], 0) AS [' + PIVOT_CODE + ']' FROM ( SELECT DISTINCT [TYPE] + '_' + SUBTYPE AS PIVOT_CODE FROM stackoverflow_159456 ) AS PIVOT_CODES SET @sql = ' ;WITH p AS ( SELECT ID, [MONTH], [TYPE] + ''_'' + SUBTYPE AS PIVOT_CODE, SUM([COUNT]) AS [COUNT] FROM stackoverflow_159456 GROUP BY ID, [MONTH], [TYPE] + ''_'' + SUBTYPE ) SELECT ID, [MONTH], ' + 
@select_list + ' FROM p PIVOT ( SUM([COUNT]) FOR PIVOT_CODE IN ( ' + @pivot_list + ' ) ) AS pvt ' EXEC (@sql)
{ "language": "en", "url": "https://stackoverflow.com/questions/159456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Experiences and tips for programming with and for Amazon's cloud servers/apps/tools? We're looking into developing a product that would use Amazon's cloud tools (EC2, SQS, etc), and I'm curious what tips/gotchas/pointers people who have used these technologies have. One tip/whatever per post, please. A: The Elasticfox plug-in for Mozilla makes doing a lot of the EC2 stuff easier. It can be found at: Elasticfox Firefox Extension for Amazon EC2. This page has links specifically to download the Elasticfox plug-in and also the associated Sourceforge project. Well worth using... A: Get a developer account at Right Scale. It's free and a god-send for a guy who hates remembering those dumb commands and arguments. If you only resort to Amazon-supplied tools, you're throwing away your human rights. A: An important concept to grasp: the file system your EC2 instance lives on while it's running is not persistent. There are tools/services available that let you mount file systems backed by S3 storage, or you can upload to S3 or other storage service from the instance, but when an instance closes the associated file system is no more. As for tools, I've found Amazon's tools to be great, but you should probably be comfortable with the command line if you're taking this route. A: We're interested in EC2 where I work. We don't care about web-serving or enterprisey stuff, just massive number crunching for physics, using Python. This EC2 stuff had me befuddled, with most documentation oriented toward businessy applications and using C# or Java, but this slide show clarified much for me, especially for using Python: http://www.datawrangling.com/pycon-2008-elasticwulf-slides A: As for SimpleDB, it has a very limited query language and it is very restrictive. If you're planning on having a lot of complex queries, you must first sit down and think about how to organize your data to make those queries possible.
One thing missing, but that will probably be added, is the ability to count the results of a given query, much like SQL's COUNT. Performance is OK, but I consider the latency maybe a little high. A: For managing your EC2 instances, etc., Amazon also offers - in beta for a couple of days now - the management console, which has similar functionality to the Elasticfox Firefox plugin but is a pure web console. https://console.aws.amazon.com
{ "language": "en", "url": "https://stackoverflow.com/questions/159459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: JPA - Unknown entity bean class Hopefully, I can explain this issue properly. I have 3 classes that deals with my entities. @MappedSuperclass public abstract class Swab implements ISwab { ... private Collection<SwabAccounts> accounts; ... } @Entity @Table(name="switches") @DiscriminatorColumn(name="type") @DiscriminatorValue(value="DMS500") public class DmsSwab extends Swab implements ISwab, Serializable { ... private ObjectPool pool; ... @Transient public ObjectPool getPool(){ return pool; } ... } @Entity(name="swab_accounts") public class SwabAccounts implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private int swab_account_id; private int swab_id; ... } And in a EJB a query is being doing this way DmsSwab dms = em.find(DmsSwab.class, 2); List<Swab> s = new ArrayList<Swab>(1); s.add(dms); My persistence.xml looks like this: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="dflow-pu" transaction-type="RESOURCE_LOCAL"> <provider>oracle.toplink.essentials.PersistenceProvider</provider> <class>com.dcom.sap.dms.DmsSwab</class> <class>com.dcom.sap.jpa.SwabAccounts</class> <properties> <property name="toplink.jdbc.user" value="dflow"/> <property name="toplink.jdbc.password" value="dflow"/> <property name="toplink.jdbc.url" value="jdbc:mysql://itcd-400447:3306/dflow"/> <property name="toplink.jdbc.driver" value="com.mysql.jdbc.Driver"/> </properties> </persistence-unit> </persistence> I get this error: java.lang.IllegalArgumentException: Unknown entity bean class: class com.dcom.sap.dms.DmsSwab, please verify that this class has been marked with the @Entity annotation. 
com.dcom.sap.SwabException: java.lang.IllegalArgumentException: Unknown entity bean class: class com.dcom.sap.dms.DmsSwab, please verify that this class has been marked with the @Entity annotation. Caused by: java.lang.IllegalArgumentException: Unknown entity bean class: class com.dcom.sap.dms.DmsSwab, please verify that this class has been marked with the @Entity annotation. at oracle.toplink.essentials.internal.ejb.cmp3.base.EntityManagerImpl.findInternal(EntityManagerImpl.java:306) at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerImpl.find(EntityManagerImpl.java:148) I am running NetBeans 6.1 with the version of GlassFish that comes with it. MySQL 5.0. A: Define this entity in a class tag inside the persistence.xml A: According to the error message and what I figure from your code, the error seems to be in the persistence.xml file; can you be a bit more verbose? A: I had the same error and, complementing the information above, my case was a ClassLoader issue. My app has three files: an ejb-module.jar which depends on app-lib.jar (a library that contains POJOs and database entities) and a web-module.war which depends on app-lib.jar. In the deployment, the app-lib.jar was loaded twice by GlassFish. Googling, I found out that I should copy the app-lib.jar to a "shared" lib in the GlassFish domain. I copied the postgresql.jar to "domain-dir/lib" and my app-lib.jar to "domain-dir/lib/applibs". Having done that, the app worked like a charm.
The explanation I used can be found here: http://docs.oracle.com/cd/E19798-01/821-1752/beade/index.html A: I solved this issue by creating a ContextListener in my Web App, invoking the close of the entity manager factory at context destruction: public void contextDestroyed(ServletContextEvent servletContextEvent) { try { logger.info("contextDestroyed..."); LifeCycleManager lifeCycleManager = ServiceLocator.getLifeCycleManager(); lifeCycleManager.closeEntityManagerFactory(); } catch (Exception e) { logger.error(e.getMessage(), e); } } I also created a bean named LifeCycleManager and inside it invoked a DAO method to close the entity manager factory: public void closeEntityManagerFactory() throws BusinessException { logger.info("closeEntityManager"); try { logger.info("closing entity manager factory..."); genericDAO.closeEntityManagerFactory(); logger.info("Entity manager factory closed"); } catch (Exception e) { throw new BusinessException(BusinessErrorCode.CODIGO_EJEMPLO_01, Severity.ERROR); } } Inside the DAO: ... @Autowired private EntityManagerFactory entityManagerFactory; ... public void closeEntityManagerFactory() { logger.info("closing entity manager factory"); getEntityManagerFactory().close(); logger.info("entity manager factory closed"); } With this, each time I deploy a change from my Eclipse environment the context destroy is invoked. I hope this could help you guys; my environment is WebLogic Server 11gR1 and JPA 1.0. A: Mario was right when he mentioned EntityManagerFactory here. Both java.lang.IllegalArgumentException: Unknown entity bean class... and java.lang.IllegalStateException: This web container has not yet been started... These exceptions occur when you redeploy a web application multiple times but didn't close the EntityManagerFactory properly. Follow this instruction to register a ServletContextListener and this instruction to close the EntityManagerFactory properly.
{ "language": "en", "url": "https://stackoverflow.com/questions/159469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Reset Expander to default collapse behavior I'm using an expander inside a Resizer (a ContentControl with a resize gripper), and it expands/collapses properly when the control initially comes up. Once I resize it, the Expander won't properly collapse, as documented below. I ran Snoop on my application, and I don't see any heights set on Expander or its constituents. How would I go about convincing Expander to collapse properly again? Or modifying Resizer to not make Expander sad would work as well. Expander documentation says: "For an Expander to work correctly, do not specify a Height on the Expander control when the ExpandDirection property is set to Down or Up. Similarly, do not specify a Width on the Expander control when the ExpandDirection property is set to Left or Right. When you set a size on the Expander control in the direction that the expanded content is displayed, the area that is defined by the size parameter is displayed with a border around it. This area displays even when the window is collapsed. To set the size of the expanded window, set size dimensions on the content of the Expander control or the ScrollViewer that encloses the content." A: I resolved the problem by moving the Resizer inside the Expander, but I've run into the Expander issue elsewhere, so would still like an answer if someone has it. thanks A: I haven't had a chance to mock up this particular issue since then, but I recently discovered that setting Height or Width to Double.NaN resets it to its default free-spirited behavior. Ironically, this was from reading the code of the Resizer control I was using in the first place. A: Answering this a bit late (2+ years), but, hey, better late than never, right? Anyway, I ran into this exact problem and was able to solve it with some code-behind to save and reset column widths. I have a 3 columned Grid, with some content in the first column, the GridSplitter in the second column, and the Expander in the third column. 
It looks like what is happening is that after the GridSplitter is moved the width of the column containing the Expander is altered from Auto to a fixed size. This causes the Expander to no longer collapse as expected. So, I added a private variable and two event handlers: private GridLength _columnWidth; private void Expander_Expanded (object sender, RoutedEventArgs e) { // restore column fixed size saved in Collapse event Column2.Width = _columnWidth; } private void Expander_Collapsed (object sender, RoutedEventArgs e) { // save current column width so we can restore when expander is expanded _columnWidth = Column2.Width; // reset column width to auto so the expander will collapse properly Column2.Width = GridLength.Auto; } When the Expander is collapsed I save Column2's fixed width (which was altered from Auto auto-magically in the background somewhere) then reset the width to Auto. Then, when the expander is expanded, I restore the column back to the fixed width so it expands to the same width it was before it was collapsed. Here's the XAML for reference: <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="2*" /> <ColumnDefinition Width="Auto" /> <ColumnDefinition x:Name="Column2" Width="Auto" /> </Grid.ColumnDefinitions> <ScrollViewer Grid.Column="0" VerticalScrollBarVisibility="Auto"> <!-- some content goes here --> </ScrollViewer> <GridSplitter HorizontalAlignment="Right" VerticalAlignment="Stretch" Grid.Column="1" ResizeBehavior="PreviousAndNext" Width="5" Background="Black" /> <Expander Grid.Column="2" ExpandDirection="Left" IsExpanded="True" Style="{StaticResource LeftExpander}" Expanded="Expander_Expanded" Collapsed="Expander_Collapsed"> <Grid> <TextBox TextWrapping="Wrap" Height="Auto" Margin="0 5 5 5" /> </Grid> </Expander> </Grid>
{ "language": "en", "url": "https://stackoverflow.com/questions/159470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way for a website to check if a user has installed a client app? Let's say I've got a website that works better if a client has installed and logged into a desktop application. I'd like to be able to do 2 things: * *Alter the website if they haven't installed the app (to make it easy for them to find a link to the installer) *If they've installed the app on a couple of machines, determine which machine they are browsing from I'd like something that works on Windows and OSX, on any of the major browsers. Linux is a bonus. A few thoughts: * *Websites can detect if you've got Flash installed. How does that work and could it be used for both of my goals? *Could I just let the client serve HTTP on localhost and do some javascript requests to fetch a local ID? I know google desktop search did something like this at one point. Is this a standard practice? Thanks! A: You can register a protocol from your desktop application (see this). This can be used, for example, to open your desktop application with arbitrary data from the website. You could then have your desktop app send a HTTP request to your webserver, telling it what machine you are on. A: You can have a browser plugin (activex for IE or Netscape plugin for the rest of the browsers) that can communicate with the application. When the webpage is loaded, it can try to instantiate the plugin and if it succeeded, it can use it as a proxy to the application. If it fails, then either the app is not installed or the plugin was explictly disabled by the user. Either way, your website should degrade its functionality accordingly. Update: Forgot to answer your questions: * *Flash does it exactly this way. Flash is a browser plugin that is created by the web pages. *You can have a machine ID generated at the application/plugin install time and your plugin can pass that machine ID to the webpage when it is created. 
On the topic of using a local webserver: I would stay away from having a local webserver, mainly because of security considerations. It takes quite a lot of work to make sure your local webserver is locked down sufficiently and there are no XSS vulnerabilities that other malicious websites can exploit to make it do stuff on their behalf. Plus, having a webserver means that either it has to run as a system-wide process, or, if it runs as the user, the website can interact with only one user's instance of the application, even though multiple users can be logged on and running it at the same time. Google Desktop Search suffered from both the XSS security vulnerability (though they fixed it) and the limitation of only one user being able to use it on a machine (I don't know if they fixed this one yet, though chances are they did). A: Websites can detect if you've got Flash installed. Actually, I believe a browser can detect if you have the Flash plugin for the browser installed, and webpages can offer "installed" and "uninstalled" options that the browser can choose between. Otherwise, you are asking for a means, by putting some code in a webpage, of being able to analyze a user's home computer and report what it learned to your website. Can you say Major Security Hole? A: If you can pick a development environment for the desktop app, then check out AIR from Adobe. It lets you develop desktop applications using either HTML/JavaScript, Flash, or Flex. It has API calls you can use from a browser-based Flash app to see if the desktop-based AIR app is installed, what version, etc. You can even launch it and pass parameters from the web app to the desktop app. http://www.rogue-development.com/blog2/2008/03/interacting-with-an-air-app-from-a-browser-based-app/ A: Websites can detect if you've got Flash installed. How does that work and could it be used for both of my goals?
It's quite simple: your browser tries to render some additional files with specific formats, such as Flash .swf, and if the browser doesn't find the plugin installed, it will start downloading it, or you will get the option to download that program. Flash also uses AC_RunActiveContent.js; please take a look at this JS, which people usually put on their webpages: if (AC_FL_RunContent == 0) { alert("This page requires AC_RunActiveContent.js."); } else { AC_FL_RunContent( 'codebase','http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=8,0,0,0','width','981','height','635','id','build5','align','middle','src','build5','quality','high','bgcolor','#ffffff','name','build5','allowscriptaccess','sameDomain','allowfullscreen','false','pluginspage','http://www.macromedia.com/go/getflashplayer','movie','build5' ); //end AC code }
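The custom-protocol approach from the first answer can be sketched concretely. On Windows it comes down to a registry entry like the one below (the myapp scheme name and the install path are made-up placeholders, not anything the question specifies); once registered, a link such as myapp:check on the website launches the desktop app, which can then phone home with its machine ID:

```reg
Windows Registry Editor Version 5.00

; Hypothetical "myapp:" URL scheme - adjust the scheme name and path to your app.
[HKEY_CLASSES_ROOT\myapp]
@="URL:MyApp Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myapp\shell\open\command]
@="\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\""
```

The "%1" passes the full clicked URL to the executable, so the website can embed arbitrary data after the scheme.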
{ "language": "en", "url": "https://stackoverflow.com/questions/159476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cannot get CSS sticky footer to work. What am I doing wrong? Well, this is my first post here and I'm really enjoying the site. I have a very basic (ugly as sin) site I have started and, for some reason, I cannot get the CSS sticky footer to work in Firefox. IE works, but FF shows it halfway up the page. The URL is http://dev.aipoker.co.uk I know I should be developing in FF and bug fixing in IE, so I am guessing I might have actually made a mistake and somehow it works in IE but nowhere else. Can anyone help put me out of my misery please? Thanks, guys and gals. A: I've had success with code like this: footer { display: block; position: absolute; width: 100%; bottom: 0px; } A: Try this one, it works well on Firefox. BTW, you should listen to Boagworld's podcast if you don't already. It's brilliant! :) Cheers. A: The minimal changes I can see to do this would be: * *move footerSection inside of body *set position absolute on both body and footerSection *set bottom = 0px on footerSection which ends up with something like this in your head: <style type="text/css"> #body, #footerSection { position: absolute; } #footerSection { bottom: 0px; } </style> <div id="body"> ... <div id="footerSection"> ... </div> </div> A: This is all you need to know about CSS-only sticky footers & sticky navs: Stick to bottom of page position: absolute; top: auto; bottom: 0; Stick to bottom of screen position: fixed; top: auto; bottom: 0; Any issues are probably due to where you placed your HTML code (don't make the footer a child element unless it's sticking to the content wrapper), or overlapping CSS. You can apply the same technique to sticky navigation by flipping the auto & top. It's cross-browser compatible (from memory, IE7 and above), including mobiles.
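Putting the answers' rules together, here is one minimal self-contained sketch of the "stick to bottom of page" variant (the 60px footer height and the wrapper/content ids are illustrative choices, not from the original site; match the padding to your real footer height):

```html
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
  html, body { height: 100%; margin: 0; }
  /* The wrapper is the positioning context and always fills the viewport. */
  #wrapper { position: relative; min-height: 100%; }
  /* Reserve room so long content never slides under the footer. */
  #content { padding-bottom: 60px; }
  #footerSection { position: absolute; top: auto; bottom: 0; width: 100%; height: 60px; }
</style>
</head>
<body>
  <div id="wrapper">
    <div id="content">...page content...</div>
    <div id="footerSection">...footer...</div>
  </div>
</body>
</html>
```

Swapping position: absolute for position: fixed on #footerSection gives the "stick to bottom of screen" behavior instead.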
{ "language": "en", "url": "https://stackoverflow.com/questions/159487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Scriptaculous sortable matrix I am trying to create a sortable image matrix, 5x5, using the Scriptaculous JavaScript library, but I can't make it work. I am trying to use a table, but I am having trouble linking the <td> elements into the Sortable object. Do you guys have any hints or documentation I can go through to create this? Thanks A: Use floating DIVs, not tables. 1) Create a class in your stylesheet for your boxes. .boxes { float:left; width:150px; height:150px; border:1px solid #cccccc } 2) Make a container for your boxes and put the boxes inside. <div id='container' style='width:750px;height:750px;'> <div id='box1' class='boxes'></div> ...etc </div> 3) Make your Sortable Sortable.create('container',{tag:'div'}) There is a demo of this type of thing available here. Drop me a line if you need more help.
{ "language": "en", "url": "https://stackoverflow.com/questions/159491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What languages have a good GUI API/Designer? I've been wanting to become proficient in a new language for a while. There are a few languages I want to learn, but it's pretty important for me to be able to create an (application) GUI. I work in C#, so I have become very accustomed to the GUI designer. I would love to get better with C++ or Java (both of which I have a small amount of experience with). Other languages could be interesting too. I just really need to be able to make a GUI reasonably easily. So what (non-.NET) language has a really good method of designing GUIs? An extension to this question might be: what are the most common GUI APIs/designers? A: I agree with pmlarocque in that you should use NetBeans if using Java. It really makes GUI design easy. As an aside, I also recommend pencil and paper. That really has helped me throughout the years: start making sketches of what you want it to be, then replicate it in the IDE. A: For GUI in C++ you should look at C++ Builder; you can get Turbo C++ Explorer for free. A: May I recommend Flex? Flex Builder has a really nice GUI designer. As for Java, both the NetBeans and Eclipse IDEs are good choices. To design GUIs in Java, you can use the SWT, AWT or Swing widget toolkits. I heard that some people experienced problems with SWT projects running on NetBeans. However, NetBeans comes with a built-in GUI Builder for Swing, a very powerful widget toolkit. Of course, there's also a plug-in for Eclipse that allows you to build Swing GUIs, so it basically comes down to which IDE you prefer... A: I use both Visual Studio and Delphi, and the Delphi GUI editor is significantly better. It is worth a try. They make a free version. A: NetBeans has a great Swing GUI Builder (formerly Project Matisse). This is for Java; I think it was started by Sun but is an open source project. Very similar to Eclipse, but I found an advantage with NetBeans due to this GUI builder.
Check it out at: http://www.netbeans.org/features/java/swing.html A: The cross-platform Qt GUI framework (which is mainly for C++) comes with Qt Designer. A: Interface Builder and Qt Designer are by far the best GUI design tools I've ever used. A: I recommend you look at Delphi. It's Object Pascal with a nice IDE and a nice community! Take a look at www.codegear.com CodeGear also has a C++ IDE, so you can get the bundle and get your hands dirty with Delphi and C++! Hope this helps vIceBerg A: WPF/Silverlight: Expression Blend A: Not as big as it used to be, but PowerBuilder. A: Visual DataFlex has a decent GUI editor. It's a nice solution for building "database-agnostic" database applications. A: Well, I have two options: Objective-C on the Mac using the Cocoa GUI framework, or Java for everything (Mac, Linux, MS-Windows) using the Swing API. If you want to program in Objective-C targeting the Mac OS X operating system for Apple Macintosh, iPhone, or iPod Touch - then the Interface Builder that comes bundled with the Xcode IDE (part of the Developer bundle) is really good. You will need a Mac, of course, to be able to use it. If you have a Linux or Windows PC already, then you probably have a monitor, USB mouse, and USB keyboard. So you could get a Mac Mini for $599 and hook those up to it. The Developer bundle is free. Just go to developer.apple.com and sign up for the free Developer tools once you get your Mac. If you are going to be a professional developer, then you might want to go there before you get your Macintosh and see if registering as a Pro and buying a Macintosh and stuff under that deal would net you more bang for your buck. This Interface Builder of Apple's is pretty famous. It is what gave the NeXT computer its high reputation for being the way to create applications really fast.
Wall Street financial firms, government agencies, and research types - plus a fair number of 3rd-party commercial software developers - used it to create GUI applications very rapidly. The name of Apple's Cocoa framework, by the way, used to be Next Step. When Apple bought NeXT from Steve Jobs, they renamed Next Step Cocoa. However, the classes still begin with NS as a little artifact of their heritage. What people like about Interface Builder is that it has a very good layout manager and it lets you "wire" UI objects to other objects, making the latter "targets". Wiring them together this way creates a "connection". So far this sounds very unexciting, I know. However, it gets exciting when you start doing it. You can design your actual runnable GUI in the designer and actually run it before you have written any code. Writing code then lets you incrementally flesh out the parts of the user interface whose behavior goes beyond simple UI stimulus-response. Anyway, the idea is that you can bang out a prototype extremely quickly, get feedback from someone based on this concrete GUI - and then fill in the details with Objective-C programming. The most famous thing that was ever created with Next Step (Cocoa) is the World Wide Web (WWW). You may have heard of it. Well, the first web browser in the world was created by Tim Berners-Lee at CERN in 1989 using Next Step, which had just come out the year before (1988). He said he liked Next Step because it let him create his web browser very quickly. Even more impressively, his web browser not only allowed users to view web pages - his browser also let users edit the web pages they viewed. If you want to program in Java, NetBeans has a very nice Swing GUI designer. It comes built into NetBeans. The GUI designer is very easy to use and seems to have a full set of capabilities. My only dislike is that it puts commented sections in the code that you cannot edit.
JBuilder did not put those annoying comments/restrictions in, but JBuilder has pretty much faded from the scene these days. Another downside of NetBeans is that it creates a .form file with the same name as the GUI class you are editing. Java code refactoring tools, other than NetBeans, are not going to know about this file. So, if you manually move the package the class is part of (or rename the class) - or use Eclipse or some other program to do it - you are going to have problems. You will need to be sure to use NetBeans to move/rename your class. Eclipse had an experimental GUI designer plugin called VE (Visual Editor) a number of years back that was an okay start. However, VE does not appear to have been updated in a couple of years. I really like the true portability of Java programs. Java programs with GUIs are no exception. I recommend adopting Java as your new language and using NetBeans as your first IDE, since you favor GUI program design with a WYSIWYG editor. Later, I suggest you also learn Eclipse. That way you will benefit from its more powerful code editing/refactoring capabilities. You do not have to make an either-or choice between the two IDEs. With some caveats, like those I have given, you can use both. A: Use Microsoft Expression Blend to lay out your GUI. WPF... Before, I was using Qt Designer. The first time I started using Expression Blend, I fell in love with it. So much easier to use than Qt Designer. If your app requires high performance, back-end it with some native-code language like C++. If not, just stick with C# or Python. Remember, it's not just the tool that you use, but how it works as a whole. Too many combinations of different languages/vendors sometimes just make you want to pull your hair out! A: The NetBeans IDE for Java has a sweet GUI designer. A: wxGlade is a GUI designer that can generate Python, C++, Perl, or Lisp and uses the wxWidgets library. And it's free.
A: Java's NetBeans is good, and since Java is fairly close to C# in syntax, it might make for an easy learning experience.
{ "language": "en", "url": "https://stackoverflow.com/questions/159492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Setting the scrollbar position of a ListBox Can I programmatically set the position of a WPF ListBox's scrollbar? By default, I want it to go in the center. A: To move the vertical scroll bar in a ListBox, do the following: * *Name your list box (x:Name="myListBox") *Add a Loaded event for the Window (Loaded="Window_Loaded") *Implement the Loaded event using the method ScrollToVerticalOffset Here is a working sample: XAML: <Window x:Class="ListBoxScrollPosition.Views.MainView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Loaded="Window_Loaded" Title="Main Window" Height="100" Width="200"> <DockPanel> <Grid> <ListBox x:Name="myListBox"> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> <ListBoxItem>Zamboni</ListBoxItem> </ListBox> </Grid> </DockPanel> </Window> C#: private void Window_Loaded(object sender, RoutedEventArgs e) { // Get the border of the listview (first child of a listview) Decorator border = VisualTreeHelper.GetChild(myListBox, 0) as Decorator; if (border != null) { // Get the scrollviewer ScrollViewer scrollViewer = border.Child as ScrollViewer; if (scrollViewer != null) { // center the Scroll Viewer... double center = scrollViewer.ScrollableHeight / 2.0; scrollViewer.ScrollToVerticalOffset(center); } } } A: Dim cnt as Integer = myListBox.Items.Count Dim midPoint as Integer = cnt\2 myListBox.ScrollIntoView(myListBox.Items(midPoint)) or myListBox.SelectedIndex = midPoint It depends on whether you want the middle item just shown, or selected. A: I've just changed Zamboni's code a bit and added the position calculation.
var border = VisualTreeHelper.GetChild(list, 0) as Decorator; if (border == null) return; var scrollViewer = border.Child as ScrollViewer; if (scrollViewer == null) return; scrollViewer.ScrollToVerticalOffset((scrollViewer.ScrollableHeight/list.Items.Count)* (list.Items.IndexOf(list.SelectedItem) + 1)); A: I have a ListView named MusicList. MusicList automatically moves to the next element after playing a track. I create an event handler for the Player.Ended event as follows (a la Zamboni): if (MusicList.HasItems) { Decorator border = VisualTreeHelper.GetChild(MusicList, 0) as Decorator; if (border != null) { ScrollViewer scrollViewer = border.Child as ScrollViewer; if (scrollViewer != null) { MusicList.ScrollIntoView(MusicList.SelectedItem); } } } This keeps the next element visible at the bottom. A: I don't think ListBoxes have that, but ListViews have the EnsureVisible method that moves the scrollbar to the place needed in order to make sure an item is shown.
{ "language": "en", "url": "https://stackoverflow.com/questions/159506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What are the limitations of Loose XAML? I have been experimenting with WPF and rendering strict XAML markup in a web browser, also known as Loose XAML (explained here and here). It strikes me as mostly useful for displaying static content. However, it also appears possible to bind to an XML data provider. Loose XAML files are not compiled with an application, which creates the following limitations: * *They do not allow external assemblies *No use of classes, code-behind (or any C#) *No two-way databinding What additional limitations are there? * *I have not found a way to databind to a database provider (SQL Server) *Is the .NET Framework required on the client machine in order to render the XAML in the browser? *Are search engines able to interrogate Loose XAML to appropriately rank the pages? EDIT: I have attempted to bind the XML data provider to a web service (using this simple example) and have not been successful. These findings led me to further research, where I found that this is not supported: "The XMLDataProvider is designed to be read-only (in other words, it doesn't provide the ability to commit changes), and it isn't able to deal with XML data that may come from other sources (such as a database record, a web service message, and so on)." -Matthew MacDonald, Pro WPF A: At least .NET Framework 3.0 is required to view loose XAML pages in IE. You can even check for it on your site by looking for ".NET CLR 3.0" in the user agent string. A database connection, if it is even possible, would not be done directly in the loose XAML because of the need for procedural code to open the connection. A: AFAIK it's impossible to define a connection string in XAML. So you can't access your SQL db. Note: It IS, however, possible to databind to a web service using XmlDataProvider. So that could be a way you could send your data through... Edit: BTW, I found this list of features of the sandboxed environment, which your app is running in when using XBAP and loose XAML.
It's a bit dated, but probably most limitations still apply. A: I've done a lot of work in hosting the Dynamic Language Runtime (DLR) and allowing scripts to be embedded in XAML. I'm at the point now that I feel like Loose XAML is not a second-class citizen, as I can handle events, write value converters, and run Python/Ruby/JScript to do things like connect to SQL Server. See my blog to see if this direction suits you. I dynamically load assemblies using an attached property - once loaded, you can reference the classes in the assembly in the usual manner. So, to answer the question, there are MANY limitations of Loose XAML out of the box (like not being able to route an event to an event handler), but these can be overcome with a bit of work. I've only used XAML/WPF for desktop apps. Hopefully someone else jumps in to answer your browser-specific questions. I have a library that I use in commercial work for DLR hosting and embedding DLR scripts in XAML that I've been meaning to turn into a supported product. If this would be of interest to you, be sure to let me know.
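To make the XmlDataProvider point concrete, here is a minimal read-only loose XAML sketch (the book titles are made up for illustration). Saved as a .xaml file with no x:Class and no code-behind, it should render in IE on a machine with .NET 3.0+:

```xml
<Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Page.Resources>
    <!-- Inline XML island; an external Source="data.xml" works the same way. -->
    <XmlDataProvider x:Key="books" XPath="Books/Book">
      <x:XData>
        <Books xmlns="">
          <Book Title="Pro WPF" />
          <Book Title="WPF Unleashed" />
        </Books>
      </x:XData>
    </XmlDataProvider>
  </Page.Resources>
  <ListBox ItemsSource="{Binding Source={StaticResource books}}">
    <ListBox.ItemTemplate>
      <DataTemplate>
        <!-- XPath bindings navigate the XML nodes selected above. -->
        <TextBlock Text="{Binding XPath=@Title}" />
      </DataTemplate>
    </ListBox.ItemTemplate>
  </ListBox>
</Page>
```

Note the xmlns="" on the XML island, which keeps the data out of the WPF namespace; as the MacDonald quote says, this stays strictly one-way and read-only.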
{ "language": "en", "url": "https://stackoverflow.com/questions/159512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to detect an update using .NET 2 System.Configuration.Install? I created a class derived from System.Configuration.Install for my installer. But the code in Uninstall() is being called when I try to update the application. How can I detect that the user is trying to update instead of uninstall? Maybe this post explains it better than me: My problem boils down to: when the user performs an update (i.e. double clicks on MyAppVer2.msi when they already have MyAppVer1.msi installed) the Uninstall method inside my Installer is called first, but I have no apparent property to check from inside this method to detect that an update is being performed, so that I can branch my code appropriately. A: There is a setting in your setup project that will "uninstall" previous versions by default; turn this flag OFF, then you will not have to worry! A: Blind guess here, but I'd start out by checking the Installer.Context property for a parameter. If that's no help, there may be something in the savedState parameter passed to Uninstall. The last chance would be to prompt the user, and set the child installers as appropriate. A: The deployment project that ships with Visual Studio is SEVERELY underpowered to deal with anything beyond the simplest scenarios. In your case, you'll need to do one of the following: * *Figure out a way to set a flag prior to the original MSI being uninstalled, which you can check in the installer class. *Prompt the user visually in the installer class. *Redesign your install/uninstall logic to not be dependent on the situation in which the uninstaller was called. A: Is there a reason why you can't use WiX, which can handle this sort of thing more efficiently? Have a look at the Upgrade tutorial here.
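One concrete (untested) sketch of the "set a flag" option: Windows Installer sets the UPGRADINGPRODUCTCODE property during the uninstall half of a major upgrade, and you can forward it into the installer class by putting /upgrade="[UPGRADINGPRODUCTCODE]" in the uninstall custom action's CustomActionData; the upgrade parameter name is an arbitrary choice here, and this assumes your setup tool lets you edit CustomActionData.

```csharp
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

[RunInstaller(true)]
public class MyInstaller : Installer
{
    public override void Uninstall(IDictionary savedState)
    {
        // "upgrade" is whatever name you chose in CustomActionData:
        //   /upgrade="[UPGRADINGPRODUCTCODE]"
        // MSI only fills UPGRADINGPRODUCTCODE when the uninstall is part
        // of an upgrade, so it should be empty on a genuine uninstall.
        string upgrading = Context.Parameters.ContainsKey("upgrade")
            ? Context.Parameters["upgrade"]
            : null;

        if (string.IsNullOrEmpty(upgrading))
        {
            // Real uninstall: clean up user data, settings, etc.
        }
        // else: an upgrade is in progress - leave user data in place.

        base.Uninstall(savedState);
    }
}
```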
{ "language": "en", "url": "https://stackoverflow.com/questions/159513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Text editor to open big (giant, huge, large) text files I mean 100+ MB big; such text files can push the envelope of editors. I need to look through a large XML file, but cannot if the editor is buggy. Any suggestions? A: Tips and tricks less Why are you using editors to just look at a (large) file? Under *nix or Cygwin, just use less. (There is a famous saying – "less is more, more or less" – because "less" replaced the earlier Unix command "more", with the addition that you could scroll back up.) Searching and navigating under less is very similar to Vim, but there is no swap file and little RAM used. There is a Win32 port of GNU less. See the "less" section of the answer above. Perl Perl is good for quick scripts, and its .. (range flip-flop) operator makes for a nice selection mechanism to limit the crud you have to wade through. For example: $ perl -n -e 'print if ( 1000000 .. 2000000)' humongo.txt | less This will extract everything from line 1 million to line 2 million, and allow you to sift the output manually in less. Another example: $ perl -n -e 'print if ( /regex one/ .. /regex two/)' humongo.txt | less This starts printing when "regular expression one" finds something, and stops when "regular expression two" finds the end of an interesting block. It may find multiple blocks. Sift the output... logparser This is another useful tool you can use. To quote the Wikipedia article: logparser is a flexible command line utility that was initially written by Gabriele Giuseppini, a Microsoft employee, to automate tests for IIS logging. It was intended for use with the Windows operating system, and was included with the IIS 6.0 Resource Kit Tools. The default behavior of logparser works like a "data processing pipeline", by taking an SQL expression on the command line, and outputting the lines containing matches for the SQL expression.
Microsoft describes Logparser as a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. The results of the input query can be custom-formatted in text based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart. Example usage: C:\>logparser.exe -i:textline -o:tsv "select Index, Text from 'c:\path\to\file.log' where line > 1000 and line < 2000" C:\>logparser.exe -i:textline -o:tsv "select Index, Text from 'c:\path\to\file.log' where line like '%pattern%'" The relativity of sizes 100 MB isn't too big. 3 GB is getting kind of big. I used to work at a print & mail facility that created about 2% of U.S. first class mail. One of the systems for which I was the tech lead accounted for about 15+% of the pieces of mail. We had some big files to debug here and there. And more... Feel free to add more tools and information here. This answer is community wiki for a reason! We all need more advice on dealing with large amounts of data... A: Free read-only viewers: * *Large Text File Viewer (Windows) – Fully customizable theming (colors, fonts, word wrap, tab size). Supports horizontal and vertical split view. Also support file following and regex search. Very fast, simple, and has small executable size. *klogg (Windows, macOS, Linux) – A maintained fork of glogg. Its main feature is regular expression search. It supports monitoring file changes (like tail), bookmarks, highlighting patterns using different colors, and has serious optimizations built in. But from a UI standpoint, it's rather minimal. *LogExpert (Windows) – "A GUI replacement for tail." It's really a log file analyzer, not a large file viewer, and in one test it required 10 seconds and 700 MB of RAM to load a 250 MB file. 
But its killer features are the columnizer (parse logs that are in CSV, JSONL, etc. and display in a spreadsheet format) and the highlighter (show lines with certain words in certain colors). Also supports file following, tabs, multifiles, bookmarks, search, plugins, and external tools. *Lister (Windows) – Very small and minimalist. It's one executable, barely 500 KB, but it still supports searching (with regexes), printing, a hex editor mode, and settings. Free editors: * *Your regular editor or IDE. Modern editors can handle surprisingly large files. In particular, Vim (Windows, macOS, Linux), Emacs (Windows, macOS, Linux), Notepad++ (Windows), Sublime Text (Windows, macOS, Linux), and VS Code (Windows, macOS, Linux) support large (~4 GB) files, assuming you have the RAM. *Large File Editor (Windows) – Opens and edits TB+ files, supports Unicode, uses little memory, has XML-specific features, and includes a binary mode. *GigaEdit (Windows) – Supports searching, character statistics, and font customization. But it's buggy – with large files, it only allows overwriting characters, not inserting them; it doesn't respect LF as a line terminator, only CRLF; and it's slow. Builtin programs (no installation required): * *less (macOS, Linux) – The traditional Unix command-line pager tool. Lets you view text files of practically any size. Can be installed on Windows, too. *Notepad (Windows) – Decent with large files, especially with word wrap turned off. *MORE (Windows) – This refers to the Windows MORE, not the Unix more. A console program that allows you to view a file, one screen at a time. Web viewers: * *readfileonline.com – Another HTML5 large file viewer. Supports search. Paid editors/viewers: * *010 Editor (Windows, macOS, Linux) – Opens giant (as large as 50 GB) files. *SlickEdit (Windows, macOS, Linux) – Opens large files. 
*UltraEdit (Windows, macOS, Linux) – Opens files of more than 6 GB, but the configuration must be changed for this to be practical: Menu » Advanced » Configuration » File Handling » Temporary Files » Open file without temp file... *EmEditor (Windows) – Handles very large text files nicely (officially up to 248 GB, but as much as 900 GB according to one report). *BssEditor (Windows) – Handles large files and very long lines. Doesn't require an installation. Free for non-commercial use. *loxx (Windows) – Supports file following, highlighting, line numbers, huge files, regex, multiple files and views, and much more. The free version cannot process regexes, filter files, synchronize timestamps, or save changed files.
{ "language": "en", "url": "https://stackoverflow.com/questions/159521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1022" }
Q: Why do I get "Cannot redirect after HTTP headers have been sent" when I call Response.Redirect()? When I call Response.Redirect(someUrl) I get the following HttpException: Cannot redirect after HTTP headers have been sent. Why do I get this? And how can I fix this issue? A: A redirect can only happen if the first line in an HTTP message is "HTTP/1.x 3xx Redirect Reason". If you already called Response.Write() or set some headers, it'll be too late for a redirect. You can try calling Response.Headers.Clear() before the Redirect to see if that helps. A: Just check if you have set the buffering option to false (by default it's true). For Response.Redirect to work: * *Buffering should be true, *you should not have sent more data using Response.Write which exceeds the default buffer size (in which case it will flush itself, causing the headers to be sent), which then prevents you from redirecting. A: Using return RedirectPermanent(myUrl) worked for me. A: You can also use the code below: Response.Write("<script type='text/javascript'>"); Response.Write("window.location = '" + redirectUrl + "'</script>"); Response.Flush(); A: Once you send any content at all to the client, the HTTP headers have already been sent. A Response.Redirect() call works by sending special information in the headers that makes the browser ask for a different URL. Since the headers were already sent, ASP.NET can't do what you want (modify the headers). You can get around this by a) doing the Redirect before you do anything else, or b) using Response.Buffer = true before you do anything else, to make sure that no output is sent to the client until the whole page is done executing. A: According to the MSDN documentation for Response.Redirect(string url), it will throw an HttpException when "a redirection is attempted after the HTTP headers have been sent".
Since Response.Redirect(string url) uses the Http "Location" response header (http://en.wikipedia.org/wiki/HTTP_headers#Responses), calling it will cause the headers to be sent to the client. This means that if you call it a second time, or if you call it after you've caused the headers to be sent in some other way, you'll get the HttpException. One way to guard against calling Response.Redirect() multiple times is to check the Response.IsRequestBeingRedirected property (bool) before calling it. // Causes headers to be sent to the client (Http "Location" response header) Response.Redirect("http://www.stackoverflow.com"); if (!Response.IsRequestBeingRedirected) // Will not be called Response.Redirect("http://www.google.com"); A: There is one simple answer for this: you have output something else, like text or anything else your page writes to the response, before you send your header. That is why you get this error. Just check your code for possible output, or put the redirect at the top of your method so it is sent first. A: If you are trying to redirect after the headers have been sent (if, for instance, you are doing an error redirect from a partially-generated page), you can send some client Javascript (location.replace or location.href, etc.) to redirect to whatever URL you want. Of course, that depends on what HTML has already been sent down. A: My issue got resolved by adding an exception handler for the "Cannot redirect after HTTP headers have been sent" error, as shown in the code below: catch (System.Threading.ThreadAbortException) { // To handle the HTTP exception "Cannot redirect after HTTP headers have been sent". } catch (Exception e) { // Here you can put your context.response.redirect("page.aspx"); } A: I solved the problem using: Response.RedirectToRoute("CultureEnabled", RouteData.Values); instead of Response.Redirect. A: Be sure that you don't use Response methods like Response.Flush(); before your redirecting part. 
A: Error Cannot redirect after HTTP headers have been sent. System.Web.HttpException (0x80004005): Cannot redirect after HTTP headers have been sent. Suggestion: if you are using asp.net mvc, working in the same controller, and redirecting to a different Action, then you do not need to write Response.Redirect("ActionName","ControllerName"); it's better to use only return RedirectToAction("ActionName"); or return View("ViewName"); A: The redirect function works by using the 'Location' http header (together with a 30X status code). Once the headers have been sent to the client, there is no way for the server to append that redirect command; it's too late. A: If you get Cannot redirect after HTTP headers have been sent then try the code below. HttpContext.Current.Server.ClearError(); // Response.Headers.Clear(); HttpContext.Current.Response.Redirect("/Home/Login",false); A: There are 2 ways to fix this: * *Just add a return statement after your Response.Redirect(someUrl); (if the method signature is not "void", you will have to return that "type", of course) as so: Response.Redirect("Login.aspx"); return; Note the return allows the server to perform the redirect... without it, the server wants to continue executing the rest of your code... *Make your Response.Redirect(someUrl) the LAST executed statement in the method that is throwing the exception. Replace your Response.Redirect(someUrl) with a string VARIABLE named "someUrl", and set it to the redirect location... as follows: //......some code string someUrl = String.Empty; .....some logic if (x == y) { // comment (original location of Response.Redirect("Login.aspx");) someUrl = "Login.aspx"; } ......more code // MOVE your Response.Redirect to HERE (the end of the method): Response.Redirect(someUrl); return;
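Framework aside, the constraint every answer above describes is generic to HTTP: the status line and headers are the first bytes written to the socket, so once body output has forced them out, a Location header can no longer be added. Here is a toy model of that buffering behavior (Python, purely illustrative; the class and method names are invented for this sketch and are not part of any real framework's API):

```python
class Response:
    """Toy model of a buffered HTTP response (illustrative only)."""

    def __init__(self):
        self.status = 200
        self.headers = {}
        self.body = b""
        self.headers_sent = False

    def write(self, data):
        # Body bytes accumulate in the buffer.
        self.body += data

    def flush(self):
        # A real server writes the status line and header block before any
        # body bytes; after this point they are on the wire and immutable.
        self.headers_sent = True

    def redirect(self, url):
        # A redirect is just a 3xx status plus a Location header, so it is
        # only possible while the header block is still unsent.
        if self.headers_sent:
            raise RuntimeError("Cannot redirect after HTTP headers have been sent")
        self.status = 302
        self.headers["Location"] = url


resp = Response()
resp.write(b"<html>partial page...")
resp.flush()                 # first flush pushes the headers out
try:
    resp.redirect("/login")  # too late: the 200 status already went out
except RuntimeError as e:
    print(e)                 # Cannot redirect after HTTP headers have been sent
```

This is why the suggestions above all amount to the same thing: either redirect before anything is flushed, or keep buffering on so nothing is flushed until the page finishes.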
{ "language": "en", "url": "https://stackoverflow.com/questions/159523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: How to get SQL Profiler to monitor trigger execution I have a trace setup for SQL Server Profiler to monitor SQL that is executed on a database. I recently discovered that trigger execution is not included in the trace. After looking through available events for a trace, I do not see any that look like they would include trigger execution. Does anyone know how to setup a trace to monitor the execution of triggers? A: Stored procedures: - SP:StmtStarting - SP:StmtCompleted A: In SQL Server Profiler 2008, when starting/configuring the trace, go to the "Events Selection" tab, click on the "Show all events" checkbox, and then in the list under the Stored Procedures section select the SP:StmtStarting and SP:StmtCompleted events to be included in the trace.
{ "language": "en", "url": "https://stackoverflow.com/questions/159526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Nginx + fastcgi truncation problem I'm running a Django site using the fastcgi interface to nginx. However, some pages are being served truncated (i.e. the page source just stops, sometimes in the middle of a tag). How do I fix this (let me know what extra information is needed, and I'll post it) Details: I'm using flup, and spawning the fastcgi server with the following command: python ./manage.py runfcgi umask=000 maxchildren=5 maxspare=1 minspare=0 method=prefork socket=/path/to/runfiles/django.sock pidfile=/path/to/runfiles/django.pid The nginx config is as follows: # search and replace this: {project_location} pid /path/to/runfiles/nginx.pid; worker_processes 2; error_log /path/to/runfiles/error_log; events { worker_connections 1024; use epoll; } http { # default nginx location include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $bytes_sent ' '"$http_referer" "$http_user_agent" ' '"$gzip_ratio"'; client_header_timeout 3m; client_body_timeout 3m; send_timeout 3m; connection_pool_size 256; client_header_buffer_size 1k; large_client_header_buffers 4 2k; request_pool_size 4k; output_buffers 4 32k; postpone_output 1460; sendfile on; tcp_nopush on; keepalive_timeout 75 20; tcp_nodelay on; client_max_body_size 10m; client_body_buffer_size 256k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; client_body_temp_path /path/to/runfiles/client_body_temp; proxy_temp_path /path/to/runfiles/proxy_temp; fastcgi_temp_path /path/to/runfiles/fastcgi_temp; gzip on; gzip_min_length 1100; gzip_buffers 4 32k; gzip_types text/plain text/html application/x-javascript text/xml text/css; ignore_invalid_headers on; server { listen 80; server_name alpha2.sonyalabs.com; index index.html; root /path/to/django-root/static; # static resources location ~* ^/static/.*$ { root /path/to/django-root; expires 30d; break; } location / { # host and port to fastcgi server 
fastcgi_pass unix:/path/to/runfiles/django.sock; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param QUERY_STRING $query_string; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass_header Authorization; fastcgi_intercept_errors off; } location /403.html { root /usr/local/nginx; access_log off; } location /401.html { root /usr/local/nginx; access_log off; } location /404.html { root /usr/local/nginx; access_log off; } location = /_.gif { empty_gif; access_log off; } access_log /path/to/runfiles/localhost.access_log main; error_log /path/to/runfiles/localhost.error_log; } } A: I had the same exact problem running Nagios on nginx. I stumbled upon your question while googling for an answer, and reading "permission denied" related answers it struck me (and perhaps it will help you) : * *Nginx error.log was reporting : 2011/03/07 11:36:02 [crit] 30977#0: *225952 open() "/var/lib/nginx/fastcgi/2/65/0000002652" failed (13: Permission denied) *so I just ran # chown -R www-data:www-data /var/lib/nginx/fastcgi *Fixed ! (and thank you for your indirect help) A: Check your error logs for "Permission denied" errors writing to .../nginx/tmp/... files. Nginx will work fine unless it needs temporary space, and that typically happens at 32K boundaries. If you find these errors, make sure the tmp directory is writable by the user nginx runs as. A: What fastcgi interface are you using and how. Is it flup? If yes, paste the way you spawn the server and how it's hooked into nginx. Without that information it's just guessing what could go wrong. Possible problems: * *nginx is buggy. At least lighttpd has horrible fastcgi bugs, I wouldn't wonder if nginx has some too :) *Django is dying with a traceback in an internal system that is not properly catched and closes the fastcgi server which you can't see from the client. 
In that situation, wrap the fastcgi server application call in a try/except to print the exception. But the server log and config would be great. A: Raising "gzip_buffers" may help. See here: http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl A: FastCGI is not to blame for this. I ran into exactly the same issue using nginx/gunicorn. Reducing the response size to less than 32k (in the specific case using the spaceless tag in the template) solved it. As dwc says, it's probably a hard limit due to the way nginx uses address space. A: I'm running very similar configurations to this both on my webhost (Webfaction) and on a local Ubuntu dev server and I don't see any problems. I'm guessing it's a time-out or full buffer that's causing this. Can you post the output of the nginx error log? Also, what version of nginx are you using? As a side note, it may be worth looking at django-logging to find out what your fastcgi process is doing.
{ "language": "en", "url": "https://stackoverflow.com/questions/159541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Mahjong - Arrange tiles to ensure at least one path to victory, regardless of layout Regardless of the layout being used for the tiles, is there any good way to divvy out the tiles so that you can guarantee the user that, at the beginning of the game, there exists at least one path to completing the puzzle and winning the game? Obviously, depending on the user's moves, they can cut themselves off from winning. I just want to be able to always tell the user that the puzzle is winnable if they play well. If you randomly place tiles at the beginning of the game, it's possible that the user could make a few moves and not be able to do any more. The knowledge that a puzzle is at least solvable should make it more fun to play. A: I know this is an old question, but I came across this when solving the problem myself. None of the answers here are quite perfect, and several of them have complicated caveats or will break on pathological layouts. Here is my solution: Solve the board (forward, not backward) with unmarked tiles. Remove two free tiles at a time. Push each pair you remove onto a "matched pair" stack. Often, this is all you need to do. If you run into a dead end (numFreeTiles == 1), just reset your generator :) I have found I usually don't hit dead ends, and have so far seen a max retry count of 3 for the 10-or-so layouts I have tried. Once I hit 8 retries, I give up and just randomly assign the rest of the tiles. This allows me to use the same generator for both setting up the board, and the shuffle feature, even if the player screwed up and made a 100% unsolvable state. Another solution when you hit a dead end is to back out (pop off the stack, replacing tiles on the board) until you can take a different path. Take a different path by making sure you match pairs that will remove the original blocking tile. Unfortunately, depending on the board, this may loop forever. 
If you end up removing a pair that resembles a "no outlet" road, where all subsequent "roads" are dead ends, and there are multiple dead ends, your algorithm will never complete. I don't know if it is possible to design a board where this would be the case, but if so, there is still a solution. To solve that bigger problem, treat each possible board state as a node in a DAG, with each selected pair being an edge on that graph. Do a random traversal, until you find a leaf node at depth 72. Keep track of your traversal history so that you never repeat a descent. Since dead ends are rarer than first-try solutions in the layouts I have used, what immediately comes to mind is a hybrid solution. First try to solve it with minimal memory (store selected pairs on your stack). Once you've hit the first dead end, degrade to doing full marking/edge generation when visiting each node (lazy evaluation where possible). I've done very little study of graph theory, though, so maybe there's a better solution to the DAG random traversal/search problem :) Edit: You actually could use any of my solutions w/ generating the board in reverse, ala the Oct 13th 2008 post. You still have the same caveats, because you can still end up with dead ends. Generating a board in reverse has more complicated rules, though. E.g., you are guaranteed to fail your setup if you don't start at least SOME of your rows w/ the first piece in the middle, such as in a layout w/ 1 long row. Picking a completely random (legal) first move in a forward-solving generator is more likely to lead to a solvable board. 
If the tiles are placed in matching pairs in a reverse game, it should always result in at least one forward path to solve the game. I'd love to hear other ideas. A: Place all the tiles in reverse (i.e. lay out the board starting in the middle, working outward). To tease the player further, you could do it visibly but at very high speed. A: Play the game in reverse. Randomly lay out pieces pair by pair, in places where you could slide them into the heap. You'll need a way to know where you're allowed to place pieces in order to end up with a heap that matches some preset pattern, but you'd need that anyway. A: I believe the best answer has already been pushed up: creating a set by solving it "in reverse" - i.e. starting with a blank board, then adding a pair somewhere, add another pair in a solvable position, and so on... If you a prefer "Big Bang" approach (generating the whole set randomly at the beginning), are a very macho developer or just feel masochistic today, you could represent all the pairs you can take out from the given set and how they depend on each other via a directed graph. From there, you'd only have to get the transitive closure of that set and determine if there's at least one path from at least one of the initial legal pairs that leads to the desired end (no tile pairs left). Implementing this solution is left as an exercise to the reader :D A: Here are rules i used in my implementation. 
When building the heap, for each tile in a pair separately, find cells (places) which: * *have all cells at lower levels already filled *for the second tile of the pair, do not block the first, considering whether the first tile is already on the board *are "at the edges" of the already built heap: * *EITHER have at least one neighbour at the left or right side *OR are the first tile in a row (all cells to the right and left are recursively free) These rules do not guarantee a build will always succeed - it sometimes leaves the last 2 free cells self-blocking, and the build should be retried (or at least the last few tiles). In practice, the "turtle" layout built in no more than 6 retries. Most existing games seem to restrict putting the first ("first on row") tiles somewhere in the middle. This comes up with more convenient configurations, where there are no tiles at the edges of very long rows staying up until the last player moves. However, "middle" is different for different configurations. Good luck :) P.S. If you've found an algorithm that builds a solvable heap in one pass - please let me know. 
of the 2 LE block, find an EARLY that blocks ANY other EARLY that (except rightblocking a left side piece) Once youve got a valid game play around with the ordering. A: Solitaire? Just a guess, but I would assume that your computer would need to beat the game(or close to it) to determine this. Another option might be to have several preset layouts(that allow winning, mixed in with your current level. To some degree you could try making sure that one of the 4 tiles is no more than X layers below another X. Most games I see have the shuffle command for when someone gets stuck. I would try a mix of things and see what works best.
{ "language": "en", "url": "https://stackoverflow.com/questions/159547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Javascript Commands Only Executed When Unminimized I want to log to the console when I'm using un-minimized JavaScript files. Comments are taken out already when I minimize JavaScript. I'm wondering if there's a way I can write a command that isn't commented out but will still be taken out when I minimize the JavaScript file. A: I think I'd be pretty upset if a Javascript minimizer changed the behaviour of my code based on some funny/clever/odd code construct. How could you ever be sure that code construct isn't there intentionally? As has been suggested, have a variable that disables logging. Then as part of your minimize script or batch job, you can swap that variable to its non-logging state using sed (for example) before minimizing. A: Unless whatever you're using to minimize your JS supports conditional statements, I don't think you can do this. Why not just log things if a certain variable is set? A: If your goal is just to reduce the js size, you can separate your logging functions into a separate file. In your "main" js, add an empty function: function doLogging(object){} Then in your separate logging functions file, replace the function with: function doLogging(object){/*your logging code*/}; Just remember to include your main js before the logging js. When you minify, just comment out the logging script tags in the html. This way you will only have one (or a couple of) empty function definitions in the minified js and one line of code calling those functions per logging action.
{ "language": "en", "url": "https://stackoverflow.com/questions/159549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: String.Format like functionality in T-SQL? I'm looking for a built-in function/extended function in T-SQL for string manipulation similar to the String.Format method in .NET. A: If you are using SQL Server 2012 and above, you can use FORMATMESSAGE. eg. DECLARE @s NVARCHAR(50) = 'World'; DECLARE @d INT = 123; SELECT FORMATMESSAGE('Hello %s, %d', @s, @d) -- RETURNS 'Hello World, 123' More examples from MSDN: FORMATMESSAGE SELECT FORMATMESSAGE('Signed int %i, %d %i, %d, %+i, %+d, %+i, %+d', 5, -5, 50, -50, -11, -11, 11, 11); SELECT FORMATMESSAGE('Signed int with leading zero %020i', 5); SELECT FORMATMESSAGE('Signed int with leading zero 0 %020i', -55); SELECT FORMATMESSAGE('Unsigned int %u, %u', 50, -50); SELECT FORMATMESSAGE('Unsigned octal %o, %o', 50, -50); SELECT FORMATMESSAGE('Unsigned hexadecimal %x, %X, %X, %X, %x', 11, 11, -11, 50, -50); SELECT FORMATMESSAGE('Unsigned octal with prefix: %#o, %#o', 50, -50); SELECT FORMATMESSAGE('Unsigned hexadecimal with prefix: %#x, %#X, %#X, %X, %x', 11, 11, -11, 50, -50); SELECT FORMATMESSAGE('Hello %s!', 'TEST'); SELECT FORMATMESSAGE('Hello %20s!', 'TEST'); SELECT FORMATMESSAGE('Hello %-20s!', 'TEST'); SELECT FORMATMESSAGE('Hello %20s!', 'TEST'); NOTES: * *Undocumented in 2012 *Limited to 2044 characters *To escape the % sign, you need to double it. *If you are logging errors in extended events, calling FORMATMESSAGE comes up as a (harmless) error A: take a look at xp_sprintf. example below. 
DECLARE @ret_string varchar (255) EXEC xp_sprintf @ret_string OUTPUT, 'INSERT INTO %s VALUES (%s, %s)', 'table1', '1', '2' PRINT @ret_string Result looks like this: INSERT INTO table1 VALUES (1, 2) Just found an issue with the max size (255 char limit) of the string with this so there is an alternative function you can use: create function dbo.fnSprintf (@s varchar(MAX), @params varchar(MAX), @separator char(1) = ',') returns varchar(MAX) as begin declare @p varchar(MAX) declare @paramlen int set @params = @params + @separator set @paramlen = len(@params) while not @params = '' begin set @p = left(@params+@separator, charindex(@separator, @params)-1) set @s = STUFF(@s, charindex('%s', @s), 2, @p) set @params = substring(@params, len(@p)+2, @paramlen) end return @s end To get the same result as above you call the function as follows: print dbo.fnSprintf('INSERT INTO %s VALUES (%s, %s)', 'table1,1,2', default) A: There is a way, but it has its limitations. You can use the FORMATMESSAGE() function. It allows you to format a string using formatting similar to the printf() function in C. However, the biggest limitation is that it will only work with messages in the sys.messages table. Here's an article about it: microsoft_library_ms186788 It's kind of a shame there isn't an easier way to do this, because there are times when you want to format a string/varchar in the database. Hopefully you are only looking to format a string in a standard way and can use the sys.messages table. Coincidentally, you could also use the RAISERROR() function with a very low severity, the documentation for raiseerror even mentions doing this, but the results are only printed. So you wouldn't be able to do anything with the resulting value (from what I understand). Good luck! A: Raw t-sql is limited to CHARINDEX(), PATINDEX(), REPLACE(), and SUBSTRING() for string manipulation. 
But with sql server 2005 and later you can set up user defined functions that run in .Net, which means setting up a string.format() UDF shouldn't be too tough. A: I think there is small correction while calculating end position. Here is correct function **>>**IF OBJECT_ID( N'[dbo].[FormatString]', 'FN' ) IS NOT NULL DROP FUNCTION [dbo].[FormatString] GO /*************************************************** Object Name : FormatString Purpose : Returns the formatted string. Original Author : Karthik D V http://stringformat-in-sql.blogspot.com/ Sample Call: SELECT dbo.FormatString ( N'Format {0} {1} {2} {0}', N'1,2,3' ) *******************************************/ CREATE FUNCTION [dbo].[FormatString]( @Format NVARCHAR(4000) , @Parameters NVARCHAR(4000) ) RETURNS NVARCHAR(4000) AS BEGIN --DECLARE @Format NVARCHAR(4000), @Parameters NVARCHAR(4000) select @format='{0}{1}', @Parameters='hello,world' DECLARE @Message NVARCHAR(400), @Delimiter CHAR(1) DECLARE @ParamTable TABLE ( ID INT IDENTITY(0,1), Parameter VARCHAR(1000) ) Declare @startPos int, @endPos int SELECT @Message = @Format, @Delimiter = ','**>>** --handle first parameter set @endPos=CHARINDEX(@Delimiter,@Parameters) if (@endPos=0 and @Parameters is not null) --there is only one parameter insert into @ParamTable (Parameter) values(@Parameters) else begin insert into @ParamTable (Parameter) select substring(@Parameters,0,@endPos) end while @endPos>0 Begin --insert a row for each parameter in the set @startPos = @endPos + LEN(@Delimiter) set @endPos = CHARINDEX(@Delimiter,@Parameters, @startPos) if (@endPos>0) insert into @ParamTable (Parameter) select substring(@Parameters,@startPos,@endPos - @startPos) else insert into @ParamTable (Parameter) select substring(@Parameters,@startPos,4000) End UPDATE @ParamTable SET @Message = REPLACE ( @Message, '{'+CONVERT(VARCHAR,ID) + '}', Parameter ) RETURN @Message END Go grant execute,references on dbo.formatString to public A: One more idea. 
Although this is not a universal solution - it is simple and works, at least for me :) For one placeholder {0}: create function dbo.Format1 ( @String nvarchar(4000), @Param0 sql_variant ) returns nvarchar(4000) as begin declare @Null nvarchar(4) = N'NULL'; return replace(@String, N'{0}', cast(isnull(@Param0, @Null) as nvarchar(4000))); end For two placeholders {0} and {1}: create function dbo.Format2 ( @String nvarchar(4000), @Param0 sql_variant, @Param1 sql_variant ) returns nvarchar(4000) as begin declare @Null nvarchar(4) = N'NULL'; set @String = replace(@String, N'{0}', cast(isnull(@Param0, @Null) as nvarchar(4000))); return replace(@String, N'{1}', cast(isnull(@Param1, @Null) as nvarchar(4000))); end For three placeholders {0}, {1} and {2}: create function dbo.Format3 ( @String nvarchar(4000), @Param0 sql_variant, @Param1 sql_variant, @Param2 sql_variant ) returns nvarchar(4000) as begin declare @Null nvarchar(4) = N'NULL'; set @String = replace(@String, N'{0}', cast(isnull(@Param0, @Null) as nvarchar(4000))); set @String = replace(@String, N'{1}', cast(isnull(@Param1, @Null) as nvarchar(4000))); return replace(@String, N'{2}', cast(isnull(@Param2, @Null) as nvarchar(4000))); end and so on... Such an approach allows us to use these functions in SELECT statement and with parameters of nvarchar, number, bit and datetime datatypes. For example: declare @Param0 nvarchar(10) = N'IPSUM' , @Param1 int = 1234567 , @Param2 datetime2(0) = getdate(); select dbo.Format3(N'Lorem {0} dolor, {1} elit at {2}', @Param0, @Param1, @Param2); A: Actually there is no built in function similar to string.Format function of .NET is available in SQL server. There is a function FORMATMESSAGE() in SQL server but it mimics to printf() function of C not string.Format function of .NET. SELECT FORMATMESSAGE('This is the %s and this is the %s.', 'first variable', 'second variable') AS Result A: I have created a user defined function to mimic the string.format functionality. You can use it. 
stringformat-in-sql UPDATE: This version allows the user to change the delimitter. -- DROP function will loose the security settings. IF object_id('[dbo].[svfn_FormatString]') IS NOT NULL DROP FUNCTION [dbo].[svfn_FormatString] GO CREATE FUNCTION [dbo].[svfn_FormatString] ( @Format NVARCHAR(4000), @Parameters NVARCHAR(4000), @Delimiter CHAR(1) = ',' ) RETURNS NVARCHAR(MAX) AS BEGIN /* Name: [dbo].[svfn_FormatString] Creation Date: 12/18/2020 Purpose: Returns the formatted string (Just like in C-Sharp) Input Parameters: @Format = The string to be Formatted @Parameters = The comma separated list of parameters @Delimiter = The delimitter to be used in the formatting process Format: @Format = N'Hi {0}, Welcome to our site {1}. Thank you {0}' @Parameters = N'Karthik,google.com' @Delimiter = ',' Examples: SELECT dbo.svfn_FormatString(N'Hi {0}, Welcome to our site {1}. Thank you {0}', N'Karthik,google.com', default) SELECT dbo.svfn_FormatString(N'Hi {0}, Welcome to our site {1}. Thank you {0}', N'Karthik;google.com', ';') */ DECLARE @Message NVARCHAR(400) DECLARE @ParamTable TABLE ( Id INT IDENTITY(0,1), Paramter VARCHAR(1000)) SELECT @Message = @Format ;WITH CTE (StartPos, EndPos) AS ( SELECT 1, CHARINDEX(@Delimiter, @Parameters) UNION ALL SELECT EndPos + (LEN(@Delimiter)), CHARINDEX(@Delimiter, @Parameters, EndPos + (LEN(@Delimiter))) FROM CTE WHERE EndPos > 0 ) INSERT INTO @ParamTable ( Paramter ) SELECT [Id] = SUBSTRING(@Parameters, StartPos, CASE WHEN EndPos > 0 THEN EndPos - StartPos ELSE 4000 END ) FROM CTE UPDATE @ParamTable SET @Message = REPLACE(@Message, '{'+ CONVERT(VARCHAR, Id) + '}', Paramter ) RETURN @Message END A: Here is my version. Can be extended to accommodate more number of parameters and can extend formatting based on type. Currently only date and datetime types are formatted. 
Example: select dbo.FormatString('some string %s some int %s date %s','"abcd"',100,cast(getdate() as date),DEFAULT,DEFAULT) select dbo.FormatString('some string %s some int %s date time %s','"abcd"',100,getdate(),DEFAULT,DEFAULT) Output: some string "abcd" some int 100 date 29-Apr-2017 some string "abcd" some int 100 date time 29-Apr-2017 19:40 Functions: create function dbo.FormatValue(@param sql_variant) returns nvarchar(100) begin /* Tejasvi Hegde, 29-April-2017 Can extend formatting here. */ declare @result nvarchar(100) if (SQL_VARIANT_PROPERTY(@param,'BaseType') in ('date')) begin select @result = REPLACE(CONVERT(CHAR(11), @param, 106), ' ', '-') end else if (SQL_VARIANT_PROPERTY(@param,'BaseType') in ('datetime','datetime2')) begin select @result = REPLACE(CONVERT(CHAR(11), @param, 106), ' ', '-')+' '+CONVERT(VARCHAR(5),@param,108) end else begin select @result = cast(@param as nvarchar(100)) end return @result /* BaseType: bigint binary char date datetime datetime2 datetimeoffset decimal float int money nchar numeric nvarchar real smalldatetime smallint smallmoney time tinyint uniqueidentifier varbinary varchar */ end; create function dbo.FormatString( @format nvarchar(4000) ,@param1 sql_variant = null ,@param2 sql_variant = null ,@param3 sql_variant = null ,@param4 sql_variant = null ,@param5 sql_variant = null ) returns nvarchar(4000) begin /* Tejasvi Hegde, 29-April-2017 select dbo.FormatString('some string value %s some int %s date %s','"abcd"',100,cast(getdate() as date),DEFAULT,DEFAULT) select dbo.FormatString('some string value %s some int %s date time %s','"abcd"',100,getdate(),DEFAULT,DEFAULT) */ declare @result nvarchar(4000) select @param1 = dbo.formatValue(@param1) ,@param2 = dbo.formatValue(@param2) ,@param3 = dbo.formatValue(@param3) ,@param4 = dbo.formatValue(@param4) ,@param5 = dbo.formatValue(@param5) select @param2 = cast(@param2 as nvarchar) EXEC xp_sprintf @result OUTPUT,@format , @param1, @param2, @param3, @param4, @param5 return 
@result end; A: here's what I found with my experiments using the built-in FORMATMESSAGE() function sp_addmessage @msgnum=50001,@severity=1,@msgText='Hello %s you are #%d',@replace='replace' SELECT FORMATMESSAGE(50001, 'Table1', 5) when you call sp_addmessage, your message template gets stored into the system table master.dbo.sysmessages (verified on SQLServer 2000). You must manage addition and removal of template strings from the table yourself, which is awkward if all you really want is to output a quick message to the results screen. The solution provided by Karthik D V looks interesting but doesn't work with SQL Server 2000, so I altered it a bit, and this version should work with all versions of SQL Server: IF OBJECT_ID( N'[dbo].[FormatString]', 'FN' ) IS NOT NULL DROP FUNCTION [dbo].[FormatString] GO /*************************************************** Object Name : FormatString Purpose : Returns the formatted string. Original Author : Karthik D V http://stringformat-in-sql.blogspot.com/ Sample Call: SELECT dbo.FormatString ( N'Format {0} {1} {2} {0}', N'1,2,3' ) *******************************************/ CREATE FUNCTION [dbo].[FormatString]( @Format NVARCHAR(4000) , @Parameters NVARCHAR(4000) ) RETURNS NVARCHAR(4000) AS BEGIN --DECLARE @Format NVARCHAR(4000), @Parameters NVARCHAR(4000) --select @format='{0}{1}', @Parameters='hello,world' DECLARE @Message NVARCHAR(400), @Delimiter CHAR(1) DECLARE @ParamTable TABLE ( ID INT IDENTITY(0,1), Parameter VARCHAR(1000) ) Declare @startPos int, @endPos int SELECT @Message = @Format, @Delimiter = ',' --handle first parameter set @endPos=CHARINDEX(@Delimiter,@Parameters) if (@endPos=0 and @Parameters is not null) --there is only one parameter insert into @ParamTable (Parameter) values(@Parameters) else begin insert into @ParamTable (Parameter) select substring(@Parameters,0,@endPos) end while @endPos>0 Begin --insert a row for each parameter in the set @startPos = @endPos + LEN(@Delimiter) set @endPos = 
CHARINDEX(@Delimiter,@Parameters, @startPos) if (@endPos>0) insert into @ParamTable (Parameter) select substring(@Parameters,@startPos,@endPos) else insert into @ParamTable (Parameter) select substring(@Parameters,@startPos,4000) End UPDATE @ParamTable SET @Message = REPLACE ( @Message, '{'+CONVERT(VARCHAR,ID) + '}', Parameter ) RETURN @Message END Go grant execute,references on dbo.formatString to public Usage: print dbo.formatString('hello {0}... you are {1}','world,good') --result: hello world... you are good A: At the moment this doesn't really exist (although you can of course write your own). There is an open connect bug for it: https://connect.microsoft.com/SQLServer/Feedback/Details/3130221, which as of this writing has just 1 vote. A: This is a bad approach. You should work with assembly DLLs, which will do the same for you with better performance. A: Not exactly, but I would check out some of the articles on string handling (amongst other things) by "Phil Factor" (geddit?) on Simple Talk.
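Outside of T-SQL, the `{0}`/`{1}` substitution loop that the function above performs against its parameter table can be sketched in a few lines of Python; the function name `format_string` here is just an illustrative stand-in for the same algorithm, not part of any of the SQL solutions:

```python
def format_string(template, parameters):
    # Split the comma-separated parameter list, then substitute each
    # {0}, {1}, ... placeholder in turn, mirroring the T-SQL
    # UPDATE ... REPLACE(@Message, '{' + ID + '}', Parameter) loop.
    for i, param in enumerate(parameters.split(',')):
        template = template.replace('{%d}' % i, param)
    return template

print(format_string('hello {0}... you are {1}', 'world,good'))
# prints: hello world... you are good
```

Like the T-SQL version, this naive comma split breaks if a parameter value itself contains a comma.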
{ "language": "en", "url": "https://stackoverflow.com/questions/159554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "104" }
Q: iPhone user's mobile number Can I get the user's mobile number (CLI number) in Objective-C on the iPhone from its SIM card? A: Not with the official SDK. You could ask the user to point to their own contact entry using the AddressBookUI framework's pickers, or a UI of your own design. A: You can get the user's phone # from NSUserDefaults. And then look up their address book entry. This method is completely undocumented and liable to break at a moment's notice. Also it is fragile - the user might have a bad address book with the same # used multiple times, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/159556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SQL: parse the first, middle and last name from a fullname field How do I parse the first, middle, and last name out of a fullname field with SQL? I need to try to match up on names that are not a direct match on full name. I'd like to be able to take the full name field and break it up into first, middle and last name. The data does not include any prefixes or suffixes. The middle name is optional. The data is formatted 'First Middle Last'. I'm interested in some practical solutions to get me 90% of the way there. As it has been stated, this is a complex problem, so I'll handle special cases individually. A: An alternative simple way is to use parsename: select full_name, parsename(replace(full_name, ' ', '.'), 3) as FirstName, parsename(replace(full_name, ' ', '.'), 2) as MiddleName, parsename(replace(full_name, ' ', '.'), 1) as LastName from YourTableName source A: Reverse the problem, add columns to hold the individual pieces and combine them to get the full name. The reason this will be the best answer is that there is no guaranteed way to figure out what a person has registered as their first name, and what is their middle name. For instance, how would you split this? Jan Olav Olsen Heggelien This, while being fictitious, is a legal name in Norway, and could, but would not have to, be split like this: First name: Jan Olav Middle name: Olsen Last name: Heggelien or, like this: First name: Jan Olav Last name: Olsen Heggelien or, like this: First name: Jan Middle name: Olav Last name: Olsen Heggelien I would imagine similar occurrences can be found in most languages. So instead of trying to interpret data which does not have enough information to get it right, store the correct interpretation, and combine to get the full name. A: Unless you have very, very well-behaved data, this is a non-trivial challenge.
A naive approach would be to tokenize on whitespace and assume that a three-token result is [first, middle, last] and a two-token result is [first, last], but you're going to have to deal with multi-word surnames (e.g. "Van Buren") and multiple middle names. A: This query is working fine. SELECT name ,Ltrim(SubString(name, 1, Isnull(Nullif(CHARINDEX(' ', name), 0), 1000))) AS FirstName ,Ltrim(SUBSTRING(name, CharIndex(' ', name), CASE WHEN (CHARINDEX(' ', name, CHARINDEX(' ', name) + 1) - CHARINDEX(' ', name)) <= 0 THEN 0 ELSE CHARINDEX(' ', name, CHARINDEX(' ', name) + 1) - CHARINDEX(' ', name) END)) AS MiddleName ,Ltrim(SUBSTRING(name, Isnull(Nullif(CHARINDEX(' ', name, Charindex(' ', name) + 1), 0), CHARINDEX(' ', name)), CASE WHEN Charindex(' ', name) = 0 THEN 0 ELSE LEN(name) END)) AS LastName FROM yourtableName A: Are you sure the Full Legal Name will always include First, Middle and Last? I know people that have only one name as Full Legal Name, and honestly I am not sure if that's their First or Last Name. :-) I also know people that have more than one first name in their legal name, but don't have a Middle name. And there are some people that have multiple Middle names. Then there's also the order of the names in the Full Legal Name. As far as I know, in some Asian cultures the Last Name comes first in the Full Legal Name. On a more practical note, you could split the Full Name on whitespace and treat the first token as First name and the last token (or the only token in case of only one name) as Last name. Though this assumes that the order will always be the same.
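A quick way to see how far the naive whitespace-split heuristic described above gets you (and where it fails) is to prototype it outside SQL first; a rough Python sketch, purely illustrative:

```python
def naive_split(fullname):
    # First token = first name, last token = last name,
    # everything in between = middle name. Multi-word surnames
    # such as "Van Buren" are mis-handled, as noted above.
    tokens = fullname.split()
    if not tokens:
        return ('', '', '')
    if len(tokens) == 1:
        return (tokens[0], '', '')
    return (tokens[0], ' '.join(tokens[1:-1]), tokens[-1])

print(naive_split('John Jacob Smith'))  # ('John', 'Jacob', 'Smith')
print(naive_split('Martin Van Buren'))  # ('Martin', 'Van', 'Buren') -- surname split incorrectly
```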
A: This Will Work in Case String Is FirstName/MiddleName/LastName Select DISTINCT NAMES , SUBSTRING(NAMES , 1, CHARINDEX(' ', NAMES) - 1) as FirstName, RTRIM(LTRIM(REPLACE(REPLACE(NAMES,SUBSTRING(NAMES , 1, CHARINDEX(' ', NAMES) - 1),''),REVERSE( LEFT( REVERSE(NAMES), CHARINDEX(' ', REVERSE(NAMES))-1 ) ),'')))as MiddleName, REVERSE( LEFT( REVERSE(NAMES), CHARINDEX(' ', REVERSE(NAMES))-1 ) ) as LastName From TABLENAME A: Here is a self-contained example, with easily manipulated test data. With this example, if you have a name with more than three parts, then all the "extra" stuff will get put in the LAST_NAME field. An exception is made for specific strings that are identified as "titles", such as "DR", "MRS", and "MR". If the middle name is missing, then you just get FIRST_NAME and LAST_NAME (MIDDLE_NAME will be NULL). You could smash it into a giant nested blob of SUBSTRINGs, but readability is hard enough as it is when you do this in SQL. Edit-- Handle the following special cases: 1 - The NAME field is NULL 2 - The NAME field contains leading / trailing spaces 3 - The NAME field has > 1 consecutive space within the name 4 - The NAME field contains ONLY the first name 5 - Include the original full name in the final output as a separate column, for readability 6 - Handle a specific list of prefixes as a separate "title" column SELECT FIRST_NAME.ORIGINAL_INPUT_DATA ,FIRST_NAME.TITLE ,FIRST_NAME.FIRST_NAME ,CASE WHEN 0 = CHARINDEX(' ',FIRST_NAME.REST_OF_NAME) THEN NULL --no more spaces? assume rest is the last name ELSE SUBSTRING( FIRST_NAME.REST_OF_NAME ,1 ,CHARINDEX(' ',FIRST_NAME.REST_OF_NAME)-1 ) END AS MIDDLE_NAME ,SUBSTRING( FIRST_NAME.REST_OF_NAME ,1 + CHARINDEX(' ',FIRST_NAME.REST_OF_NAME) ,LEN(FIRST_NAME.REST_OF_NAME) ) AS LAST_NAME FROM ( SELECT TITLE.TITLE ,CASE WHEN 0 = CHARINDEX(' ',TITLE.REST_OF_NAME) THEN TITLE.REST_OF_NAME --No space? 
return the whole thing ELSE SUBSTRING( TITLE.REST_OF_NAME ,1 ,CHARINDEX(' ',TITLE.REST_OF_NAME)-1 ) END AS FIRST_NAME ,CASE WHEN 0 = CHARINDEX(' ',TITLE.REST_OF_NAME) THEN NULL --no spaces @ all? then 1st name is all we have ELSE SUBSTRING( TITLE.REST_OF_NAME ,CHARINDEX(' ',TITLE.REST_OF_NAME)+1 ,LEN(TITLE.REST_OF_NAME) ) END AS REST_OF_NAME ,TITLE.ORIGINAL_INPUT_DATA FROM ( SELECT --if the first three characters are in this list, --then pull it as a "title". otherwise return NULL for title. CASE WHEN SUBSTRING(TEST_DATA.FULL_NAME,1,3) IN ('MR ','MS ','DR ','MRS') THEN LTRIM(RTRIM(SUBSTRING(TEST_DATA.FULL_NAME,1,3))) ELSE NULL END AS TITLE --if you change the list, don't forget to change it here, too. --so much for the DRY prinicple... ,CASE WHEN SUBSTRING(TEST_DATA.FULL_NAME,1,3) IN ('MR ','MS ','DR ','MRS') THEN LTRIM(RTRIM(SUBSTRING(TEST_DATA.FULL_NAME,4,LEN(TEST_DATA.FULL_NAME)))) ELSE LTRIM(RTRIM(TEST_DATA.FULL_NAME)) END AS REST_OF_NAME ,TEST_DATA.ORIGINAL_INPUT_DATA FROM ( SELECT --trim leading & trailing spaces before trying to process --disallow extra spaces *within* the name REPLACE(REPLACE(LTRIM(RTRIM(FULL_NAME)),' ',' '),' ',' ') AS FULL_NAME ,FULL_NAME AS ORIGINAL_INPUT_DATA FROM ( --if you use this, then replace the following --block with your actual table SELECT 'GEORGE W BUSH' AS FULL_NAME UNION SELECT 'SUSAN B ANTHONY' AS FULL_NAME UNION SELECT 'ALEXANDER HAMILTON' AS FULL_NAME UNION SELECT 'OSAMA BIN LADEN JR' AS FULL_NAME UNION SELECT 'MARTIN J VAN BUREN SENIOR III' AS FULL_NAME UNION SELECT 'TOMMY' AS FULL_NAME UNION SELECT 'BILLY' AS FULL_NAME UNION SELECT NULL AS FULL_NAME UNION SELECT ' ' AS FULL_NAME UNION SELECT ' JOHN JACOB SMITH' AS FULL_NAME UNION SELECT ' DR SANJAY GUPTA' AS FULL_NAME UNION SELECT 'DR JOHN S HOPKINS' AS FULL_NAME UNION SELECT ' MRS SUSAN ADAMS' AS FULL_NAME UNION SELECT ' MS AUGUSTA ADA KING ' AS FULL_NAME ) RAW_DATA ) TEST_DATA ) TITLE ) FIRST_NAME A: It's difficult to answer without knowing how the "full name" is 
formatted. It could be "Last Name, First Name Middle Name" or "First Name Middle Name Last Name", etc. Basically you'll have to use the SUBSTRING function SUBSTRING ( expression , start , length ) And probably the CHARINDEX function CHARINDEX (substr, expression) To figure out the start and length for each part you want to extract. So let's say the format is "First Name Last Name" you could (untested.. but should be close) : SELECT SUBSTRING(fullname, 1, CHARINDEX(' ', fullname) - 1) AS FirstName, SUBSTRING(fullname, CHARINDEX(' ', fullname) + 1, len(fullname)) AS LastName FROM YourTable A: Like #1 said, it's not trivial. Hyphenated last names, initials, double names, inverse name sequence and a variety of other anomalies can ruin your carefully crafted function. You could use a 3rd party library (plug/disclaimer - I worked on this product): http://www.melissadata.com/nameobject/nameobject.htm A: I would do this as an iterative process. 1) Dump the table to a flat file to work with. 2) Write a simple program to break up your Names using a space as separator where firsts token is the first name, if there are 3 token then token 2 is middle name and token 3 is last name. If there are 2 tokens then the second token is the last name. (Perl, Java, or C/C++, language doesn't matter) 3) Eyeball the results. Look for names that don't fit this rule. 4) Using that example, create a new rule to handle that exception... 5) Rinse and Repeat Eventually you will get a program that fixes all your data. A: Here's a stored procedure that will put the first word found into First Name, the last word into Last Name and everything in between into Middle Name. 
create procedure [dbo].[import_ParseName] ( @FullName nvarchar(max), @FirstName nvarchar(255) output, @MiddleName nvarchar(255) output, @LastName nvarchar(255) output ) as begin set @FirstName = '' set @MiddleName = '' set @LastName = '' set @FullName = ltrim(rtrim(@FullName)) declare @ReverseFullName nvarchar(max) set @ReverseFullName = reverse(@FullName) declare @lengthOfFullName int declare @endOfFirstName int declare @beginningOfLastName int set @lengthOfFullName = len(@FullName) set @endOfFirstName = charindex(' ', @FullName) set @beginningOfLastName = @lengthOfFullName - charindex(' ', @ReverseFullName) + 1 set @FirstName = case when @endOfFirstName <> 0 then substring(@FullName, 1, @endOfFirstName - 1) else '' end set @MiddleName = case when (@endOfFirstName <> 0 and @beginningOfLastName <> 0 and @beginningOfLastName > @endOfFirstName) then ltrim(rtrim(substring(@FullName, @endOfFirstName , @beginningOfLastName - @endOfFirstName))) else '' end set @LastName = case when @beginningOfLastName <> 0 then substring(@FullName, @beginningOfLastName + 1 , @lengthOfFullName - @beginningOfLastName) else '' end return end And here's me calling it. DECLARE @FirstName nvarchar(255), @MiddleName nvarchar(255), @LastName nvarchar(255) EXEC [dbo].[import_ParseName] @FullName = N'Scott The Other Scott Kowalczyk', @FirstName = @FirstName OUTPUT, @MiddleName = @MiddleName OUTPUT, @LastName = @LastName OUTPUT print @FirstName print @MiddleName print @LastName output: Scott The Other Scott Kowalczyk A: If you are trying to parse apart a human name in PHP, I recommend Keith Beckman's nameparse.php script. Copy in case site goes down: <? /* Name: nameparse.php Version: 0.2a Date: 030507 First: 030407 License: GNU General Public License v2 Bugs: If one of the words in the middle name is Ben (or St., for that matter), or any other possible last-name prefix, the name MUST be entered in last-name-first format. 
If the last-name parsing routines get ahold of any prefix, they tie up the rest of the name up to the suffix. i.e.: William Ben Carey would yield 'Ben Carey' as the last name, while, Carey, William Ben would yield 'Carey' as last and 'Ben' as middle. This is a problem inherent in the prefix-parsing routines algorithm, and probably will not be fixed. It's not my fault that there's some odd overlap between various languages. Just don't name your kids 'Something Ben Something', and you should be alright. */ function norm_str($string) { return trim(strtolower( str_replace('.','',$string))); } function in_array_norm($needle,$haystack) { return in_array(norm_str($needle),$haystack); } function parse_name($fullname) { $titles = array('dr','miss','mr','mrs','ms','judge'); $prefices = array('ben','bin','da','dal','de','del','der','de','e', 'la','le','san','st','ste','van','vel','von'); $suffices = array('esq','esquire','jr','sr','2','ii','iii','iv'); $pieces = explode(',',preg_replace('/\s+/',' ',trim($fullname))); $n_pieces = count($pieces); switch($n_pieces) { case 1: // array(title first middles last suffix) $subp = explode(' ',trim($pieces[0])); $n_subp = count($subp); for($i = 0; $i < $n_subp; $i++) { $curr = trim($subp[$i]); $next = trim($subp[$i+1]); if($i == 0 && in_array_norm($curr,$titles)) { $out['title'] = $curr; continue; } if(!$out['first']) { $out['first'] = $curr; continue; } if($i == $n_subp-2 && $next && in_array_norm($next,$suffices)) { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } $out['suffix'] = $next; break; } if($i == $n_subp-1) { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } continue; } if(in_array_norm($curr,$prefices)) { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } continue; } if($next == 'y' || $next == 'Y') { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } continue; } if($out['last']) { $out['last'] .= " $curr"; 
continue; } if($out['middle']) { $out['middle'] .= " $curr"; } else { $out['middle'] = $curr; } } break; case 2: switch(in_array_norm($pieces[1],$suffices)) { case TRUE: // array(title first middles last,suffix) $subp = explode(' ',trim($pieces[0])); $n_subp = count($subp); for($i = 0; $i < $n_subp; $i++) { $curr = trim($subp[$i]); $next = trim($subp[$i+1]); if($i == 0 && in_array_norm($curr,$titles)) { $out['title'] = $curr; continue; } if(!$out['first']) { $out['first'] = $curr; continue; } if($i == $n_subp-1) { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } continue; } if(in_array_norm($curr,$prefices)) { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } continue; } if($next == 'y' || $next == 'Y') { if($out['last']) { $out['last'] .= " $curr"; } else { $out['last'] = $curr; } continue; } if($out['last']) { $out['last'] .= " $curr"; continue; } if($out['middle']) { $out['middle'] .= " $curr"; } else { $out['middle'] = $curr; } } $out['suffix'] = trim($pieces[1]); break; case FALSE: // array(last,title first middles suffix) $subp = explode(' ',trim($pieces[1])); $n_subp = count($subp); for($i = 0; $i < $n_subp; $i++) { $curr = trim($subp[$i]); $next = trim($subp[$i+1]); if($i == 0 && in_array_norm($curr,$titles)) { $out['title'] = $curr; continue; } if(!$out['first']) { $out['first'] = $curr; continue; } if($i == $n_subp-2 && $next && in_array_norm($next,$suffices)) { if($out['middle']) { $out['middle'] .= " $curr"; } else { $out['middle'] = $curr; } $out['suffix'] = $next; break; } if($i == $n_subp-1 && in_array_norm($curr,$suffices)) { $out['suffix'] = $curr; continue; } if($out['middle']) { $out['middle'] .= " $curr"; } else { $out['middle'] = $curr; } } $out['last'] = $pieces[0]; break; } unset($pieces); break; case 3: // array(last,title first middles,suffix) $subp = explode(' ',trim($pieces[1])); $n_subp = count($subp); for($i = 0; $i < $n_subp; $i++) { $curr = trim($subp[$i]); $next = 
trim($subp[$i+1]); if($i == 0 && in_array_norm($curr,$titles)) { $out['title'] = $curr; continue; } if(!$out['first']) { $out['first'] = $curr; continue; } if($out['middle']) { $out['middle'] .= " $curr"; } else { $out['middle'] = $curr; } } $out['last'] = trim($pieces[0]); $out['suffix'] = trim($pieces[2]); break; default: // unparseable unset($pieces); break; } return $out; } ?> A: * *Get a sql regex function. Sample: http://msdn.microsoft.com/en-us/magazine/cc163473.aspx *Extract names using regular expressions. I recommend Expresso for learnin/building/testing regular expressions. Old free version, new commercial version A: I'm not sure about SQL server, but in postgres you could do something like this: SELECT SUBSTRING(fullname, '(\\w+)') as firstname, SUBSTRING(fullname, '\\w+\\s(\\w+)\\s\\w+') as middle, COALESCE(SUBSTRING(fullname, '\\w+\\s\\w+\\s(\\w+)'), SUBSTRING(fullname, '\\w+\\s(\\w+)')) as lastname FROM public.person The regex expressions could probably be a bit more concise; but you get the point. This does by the way not work for persons having two double names (in the Netherlands we have this a lot 'Jan van der Ploeg') so I'd be very careful with the results. A: I once made a 500 character regular expression to parse first, last and middle names from an arbitrary string. Even with that honking regex, it only got around 97% accuracy due to the complete inconsistency of the input. Still, better than nothing. A: Subject to the caveats that have already been raised regarding spaces in names and other anomalies, the following code will at least handle 98% of names. (Note: messy SQL because I don't have a regex option in the database I use.) 
Warning: messy SQL follows: create table parsname (fullname char(50), name1 char(30), name2 char(30), name3 char(30), name4 char(40)); insert into parsname (fullname) select fullname from ImportTable; update parsname set name1 = substring(fullname, 1, locate(' ', fullname)), fullname = ltrim(substring(fullname, locate(' ', fullname), length(fullname))) where locate(' ', rtrim(fullname)) > 0; update parsname set name2 = substring(fullname, 1, locate(' ', fullname)), fullname = ltrim(substring(fullname, locate(' ', fullname), length(fullname))) where locate(' ', rtrim(fullname)) > 0; update parsname set name3 = substring(fullname, 1, locate(' ', fullname)), fullname = ltrim(substring(fullname, locate(' ', fullname), length(fullname))) where locate(' ', rtrim(fullname)) > 0; update parsname set name4 = substring(fullname, 1, locate(' ', fullname)), fullname = ltrim(substring(fullname, locate(' ', fullname), length(fullname))) where locate(' ', rtrim(fullname)) > 0; // fullname now contains the last word in the string. select fullname as FirstName, '' as MiddleName, '' as LastName from parsname where fullname is not null and name1 is null and name2 is null union all select name1 as FirstName, name2 as MiddleName, fullname as LastName from parsname where name1 is not null and name3 is null The code works by creating a temporary table (parsname) and tokenizing the fullname by spaces. Any names ending up with values in name3 or name4 are non-conforming and will need to be dealt with differently. A: As everyone else says, you can't do this in a simple programmatic way.
Consider these examples: * *President "George Herbert Walker Bush" (First Middle Middle Last) *Presidential assassin "John Wilkes Booth" (First Middle Last) *Guitarist "Eddie Van Halen" (First Last Last) *And his mom probably calls him Edward Lodewijk Van Halen (First Middle Last Last) *Famed castaway "Mary Ann Summers" (First First Last) *New Mexico GOP chairman "Fernando C de Baca" (First Last Last Last) A: We of course all understand that there's no perfect way to solve this problem, but some solutions can get you farther than others. In particular, it's pretty easy to go beyond simple whitespace-splitters if you just have some lists of common prefixes (Mr, Dr, Mrs, etc.), infixes (von, de, del, etc.), suffixes (Jr, III, Sr, etc.) and so on. It's also helpful if you have some lists of common first names (in various languages/cultures, if your names are diverse) so that you can guess whether a word in the middle is likely to be part of the last name or not. BibTeX also implements some heuristics that get you part of the way there; they're encapsulated in the Text::BibTeX::Name perl module. Here's a quick code sample that does a reasonable job. use Text::BibTeX; use Text::BibTeX::Name; $name = "Dr. Mario Luis de Luigi Jr."; $name =~ s/^\s*([dm]rs?.?|miss)\s+//i; $dr=$1; $n=Text::BibTeX::Name->new($name); print join("\t", $dr, map "@{[ $n->part($_) ]}", qw(first von last jr)), "\n"; A: The biggest problem I ran into doing this was cases like "Bob R. Smith, Jr.". The algorithm I used is posted at http://www.blackbeltcoder.com/Articles/strings/splitting-a-name-into-first-and-last-names. My code is in C# but you could port it if you must have in SQL. A: The work by @JosephStyons and @Digs is great! I used parts of their work to create a new function for SQL Server 2016 and newer. This one also handles suffixes, as well as prefixes. 
CREATE FUNCTION [dbo].[NameParser] ( @name nvarchar(100) ) RETURNS TABLE AS RETURN ( WITH prep AS ( SELECT original = @name, cleanName = REPLACE(REPLACE(REPLACE(REPLACE(LTRIM(RTRIM(@name)),' ',' '),' ',' '), '.', ''), ',', '') ) SELECT prep.original, aux.prefix, firstName.firstName, middleName.middleName, lastName.lastName, aux.suffix FROM prep CROSS APPLY ( SELECT prefix = CASE WHEN LEFT(prep.cleanName, 3) IN ('MR ', 'MS ', 'DR ', 'FR ') THEN LEFT(prep.cleanName, 2) WHEN LEFT(prep.cleanName, 4) IN ('MRS ', 'LRD ', 'SIR ') THEN LEFT(prep.cleanName, 3) WHEN LEFT(prep.cleanName, 5) IN ('LORD ', 'LADY ', 'MISS ', 'PROF ') THEN LEFT(prep.cleanName, 4) ELSE '' END, suffix = CASE WHEN RIGHT(prep.cleanName, 3) IN (' JR', ' SR', ' II', ' IV') THEN RIGHT(prep.cleanName, 2) WHEN RIGHT(prep.cleanName, 4) IN (' III', ' ESQ') THEN RIGHT(prep.cleanName, 3) ELSE '' END ) aux CROSS APPLY ( SELECT baseName = LTRIM(RTRIM(SUBSTRING(prep.cleanName, LEN(aux.prefix) + 1, LEN(prep.cleanName) - LEN(aux.prefix) - LEN(aux.suffix)))), numParts = (SELECT COUNT(1) FROM STRING_SPLIT(LTRIM(RTRIM(SUBSTRING(prep.cleanName, LEN(aux.prefix) + 1, LEN(prep.cleanName) - LEN(aux.prefix) - LEN(aux.suffix)))), ' ')) ) core CROSS APPLY ( SELECT firstName = CASE WHEN core.numParts <= 1 THEN core.baseName ELSE LEFT(core.baseName, CHARINDEX(' ', core.baseName, 1) - 1) END ) firstName CROSS APPLY ( SELECT remainder = CASE WHEN core.numParts <= 1 THEN '' ELSE LTRIM(SUBSTRING(core.baseName, LEN(firstName.firstName) + 1, 999999)) END ) work1 CROSS APPLY ( SELECT middleName = CASE WHEN core.numParts <= 2 THEN '' ELSE LEFT(work1.remainder, CHARINDEX(' ', work1.remainder, 1) - 1) END ) middleName CROSS APPLY ( SELECT lastName = CASE WHEN core.numParts <= 1 THEN '' ELSE LTRIM(SUBSTRING(work1.remainder, LEN(middleName.middleName) + 1, 999999)) END ) lastName ) GO SELECT * FROM dbo.NameParser('Madonna') SELECT * FROM dbo.NameParser('Will Smith') SELECT * FROM dbo.NameParser('Neil Degrasse Tyson') SELECT * FROM 
dbo.NameParser('Dr. Neil Degrasse Tyson') SELECT * FROM dbo.NameParser('Mr. Hyde') SELECT * FROM dbo.NameParser('Mrs. Thurston Howell, III') A: Check this query in Athena for only one-space separated string (e.g. first name and middle name combination): SELECT name, REVERSE( SUBSTR( REVERSE(name), 1, STRPOS(REVERSE(name), ' ') ) ) AS middle_name FROM name_table If you expect to have two or more spaces, you can easily extend the above query. A: Based on @hajili's contribution (which is a creative use of the parsename function, intended to parse the name of an object that is period-separated), I modified it so it can handle cases where the data doesn't containt a middle name or when the name is "John and Jane Doe". It's not 100% perfect but it's compact and might do the trick depending on the business case. SELECT NAME, CASE WHEN parsename(replace(NAME, ' ', '.'), 4) IS NOT NULL THEN parsename(replace(NAME, ' ', '.'), 4) ELSE CASE WHEN parsename(replace(NAME, ' ', '.'), 3) IS NOT NULL THEN parsename(replace(NAME, ' ', '.'), 3) ELSE parsename(replace(NAME, ' ', '.'), 2) end END as FirstName , CASE WHEN parsename(replace(NAME, ' ', '.'), 3) IS NOT NULL THEN parsename(replace(NAME, ' ', '.'), 2) ELSE NULL END as MiddleName, parsename(replace(NAME, ' ', '.'), 1) as LastName from {@YourTableName} A: Employee table has column "Name" and we had to split it into First, Middle and Last Name. This query will handle to keep middle name as null if name column has value of two words like 'James Thomas'. 
UPDATE Employees SET [First Name] = CASE WHEN (len(name) - len(Replace(name, '.', ''))) = 2 THEN PARSENAME(Name, 3) WHEN (len(name) - len(Replace(name, '.', ''))) = 1 THEN PARSENAME(Name, 2) ELSE PARSENAME(Name, 1) END ,[Middle Name] = CASE WHEN (len(name) - len(Replace(name, '.', ''))) = 2 THEN PARSENAME(Name, 2) ELSE NULL END ,[Last Name] = CASE WHEN (len(name) - len(Replace(name, '.', ''))) = 2 THEN PARSENAME(Name, 1) WHEN (len(name) - len(Replace(name, '.', ''))) = 1 THEN PARSENAME(Name, 1) ELSE NULL END GO UPDATE Employee SET [Name] = Replace([Name], '.', ' ') GO A: I wanted to post an update to the suggestion by hajili, but this response was too long for a comment on that suggestion. Our issue was "Lastname,Firstname Middlename" with some last name's with a space in them. So we came up with: ,FullName = CUST.FULLNAME ,LastName = PARSENAME(REPLACE(CUST.FULLNAME, ',', '.'),2) ,FirstName = (CASE WHEN PARSENAME(REPLACE(CUST.FULLNAME, ',', '.'),1) LIKE '% %' THEN PARSENAME(REPLACE(PARSENAME(REPLACE(CUST.FULLNAME, ',', '.'),1), ' ', '.'),2) ELSE PARSENAME(REPLACE(CUST.FULLNAME, ',', '.'),1) END) ,MiddleName = (CASE WHEN PARSENAME(REPLACE(CUST.FULLNAME, ' ', '.'),1) LIKE '%,%' THEN NULL ELSE PARSENAME(REPLACE(CUST.FULLNAME, ' ', '.'),1) END) A: SELECT SUBSTRING_INDEX(name, ' ', 1) as fname, SUBSTRING_INDEX(SUBSTRING_INDEX(name, ' ', 2), ' ', -1) as mname, SUBSTRING_INDEX(name, ' ', -1) as lname FROM Person A: If the “fullname” column is in “Last, First - Middle” format (it usually isn’t, but let’s imagine it is), then this works. Done in My SQL. In the first line, the “inner” SUBSTRING_INDEX() gets everything from the left up to ‘ - ‘, which is “Last, First”; then the “outer” SUBSTRING_INDEX() gets everything from the right up to ‘, ‘ from this new “Last, First” string, which is “First”. The second line gets the piece from the right up to ‘ - ‘, which is “Middle”. The third line gets the first string from the left up to the ‘, ‘. 
SUBSTRING_INDEX(SUBSTRING_INDEX(fullname, ' - ', 1), ', ', -1) AS First, SUBSTRING_INDEX(fullname, ' - ', -1) AS Middle, SUBSTRING_INDEX(fullname, ', ', 1) AS Last, A: Name, Case when (DATALENGTH(NAME)-DATALENGTH(REPLACE(NAME,' ','')))=2 then SUBSTRING(Name,CharIndex(' ',NAME,(CharIndex(' ',NAME)+1)),LEN(NAME)) else SUBSTRING(Name,CharIndex(' ',NAME,(CharIndex(' ',NAME))),LEN(NAME)) end As Last_name, Case when (DATALENGTH(NAME)-DATALENGTH(REPLACE(NAME,' ','')))=2 then SUBSTRING(Name,CharIndex(' ',NAME,(CharIndex(' ',NAME))),(CharIndex(' ',NAME)+1)) else '' end As Middle_name
{ "language": "en", "url": "https://stackoverflow.com/questions/159567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: LINQ to XML for a small application: can it replace a small database? I am creating a small application that will be deployed on Windows. The database will have less than 10 tables. Instead of installing a database on the client box, is using XML documents for the database with LINQ going to cost in performance of queries, waiting for the XML file to be loaded and written? If I use a database I will use LINQ to SQL. A: I would avoid it. I personally would use something like SqlExpress for the DB, or an .mdb file. The problem becomes when that Xml file starts getting large, or requires a change to the format (i.e. an update to a table's structure), processing that becomes a PITA. A: You can use an embedded database like SQLite or the portable version of SQL server (can't remember what it's called); that way you can still use SQL and LINQ but you don't need to install a database server. A: I would prefer that if you choose the database route. One of the main reasons is that you can perform many different functions easily when using a database. These functions include sorting, paging, grouping etc. You can also use the power of OR mappers to simplify your coding and achieve the persistence and retrieval operations with very few lines of code. The bottom line is go with the database! A: Adding some references to MagicKat's answer: Not very portable, but free and limited - SQL Server 2008 Express Microsoft JET is a better fit to Nir's compact requirement. It is embedded (installed as a DLL), and you can move the DB as a single file (.mdb). From the Wikipedia article, I learn that the current version is Microsoft Access Engine.
{ "language": "en", "url": "https://stackoverflow.com/questions/159568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Way to go from recursion to iteration I've used recursion quite a lot in my many years of programming to solve simple problems, but I'm fully aware that sometimes you need iteration due to memory/speed problems. So, sometime in the very far past I went to try and find if there existed any "pattern" or text-book way of transforming a common recursion approach to iteration and found nothing. Or at least nothing that I can remember would have helped. * *Are there general rules? *Is there a "pattern"? A: Really, the most common way to do it is to keep your own stack. Here's a recursive quicksort function in C: void quicksort(int* array, int left, int right) { if(left >= right) return; int index = partition(array, left, right); quicksort(array, left, index - 1); quicksort(array, index + 1, right); } Here's how we could make it iterative by keeping our own stack: void quicksort(int *array, int left, int right) { int stack[1024]; int i=0; stack[i++] = left; stack[i++] = right; while (i > 0) { right = stack[--i]; left = stack[--i]; if (left >= right) continue; int index = partition(array, left, right); stack[i++] = left; stack[i++] = index - 1; stack[i++] = index + 1; stack[i++] = right; } } Obviously, this example doesn't check stack boundaries... and really you could size the stack based on the worst case given left and right values. But you get the idea. A: Just killing time... A recursive function void foo(Node* node) { if(node == NULL) return; // Do something with node... foo(node->left); foo(node->right); } can be converted to void foo(Node* node) { if(node == NULL) return; // Do something with node... stack.push(node->right); stack.push(node->left); while(!stack.empty()) { node1 = stack.pop(); if(node1 == NULL) continue; // Do something with node1...
stack.push(node1->right); stack.push(node1->left); } } A: Thinking of things that actually need a stack: If we consider the pattern of recursion as: if(task can be done directly) { return result of doing task directly } else { split task into two or more parts solve for each part (possibly by recursing) return result constructed by combining these solutions } For example, the classic Tower of Hanoi if(the number of discs to move is 1) { just move it } else { move n-1 discs to the spare peg move the remaining disc to the target peg move n-1 discs from the spare peg to the target peg, using the current peg as a spare } This can be translated into a loop working on an explicit stack, by restating it as: place seed task on stack while stack is not empty take a task off the stack if(task can be done directly) { Do it } else { Split task into two or more parts Place task to consolidate results on stack Place each task on stack } } For Tower of Hanoi this becomes: stack.push(new Task(size, from, to, spare)); while(! stack.isEmpty()) { task = stack.pop(); if(task.size() == 1) { just move it } else { stack.push(new Task(task.size() -1, task.spare(), task.to(), task.from())); stack.push(new Task(1, task.from(), task.to(), task.spare())); stack.push(new Task(task.size() -1, task.from(), task.spare(), task.to())); } } There is considerable flexibility here as to how you define your stack. You can make your stack a list of Command objects that do sophisticated things. Or you can go the opposite direction and make it a list of simpler types (e.g. a "task" might be 4 elements on a stack of int, rather than one element on a stack of Task). All this means is that the memory for the stack is in the heap rather than in the Java execution stack, but this can be useful in that you have more control over it. A: It seems nobody has addressed where the recursive function calls itself more than once in the body, and handles returning to a specific point in the recursion (i.e.
not primitive-recursive). It is said that every recursion can be turned into iteration, so it appears that this should be possible. I just came up with a C# example of how to do this. Suppose you have the following recursive function, which acts like a postorder traversal, and that AbcTreeNode is a 3-ary tree with pointers a, b, c. public static void AbcRecursiveTraversal(this AbcTreeNode x, List<int> list) { if (x != null) { AbcRecursiveTraversal(x.a, list); AbcRecursiveTraversal(x.b, list); AbcRecursiveTraversal(x.c, list); list.Add(x.key);//finally visit root } } The iterative solution: int? address = null; AbcTreeNode x = null; x = root; address = A; stack.Push(x); stack.Push(null); while (stack.Count > 0) { bool @return = x == null; if (@return == false) { switch (address) { case A:// stack.Push(x); stack.Push(B); x = x.a; address = A; break; case B: stack.Push(x); stack.Push(C); x = x.b; address = A; break; case C: stack.Push(x); stack.Push(null); x = x.c; address = A; break; case null: list_iterative.Add(x.key); @return = true; break; } } if (@return == true) { address = (int?)stack.Pop(); x = (AbcTreeNode)stack.Pop(); } } A: Generally, the technique to avoid stack overflow in recursive functions is called the trampoline technique, which is widely adopted by Java devs. However, for C# there is a little helper method here that turns your recursive function to iterative without requiring you to change the logic or make the code incomprehensible. C# is such a nice language that amazing stuff is possible with it. It works by wrapping parts of the method by a helper method. 
For example the following recursive function: int Sum(int index, int[] array) { //This is the termination condition if (index >= array.Length) //This is the returning value when termination condition is true return 0; //This is the recursive call var sumofrest = Sum(index+1, array); //This is the work to do with the current item and the //result of recursive call return array[index]+sumofrest; } Turns into: int Sum(int[] ar) { return RecursionHelper<int>.CreateSingular(i => i >= ar.Length, i => 0) .RecursiveCall((i, rv) => i + 1) .Do((i, rv) => ar[i] + rv) .Execute(0); } A: Usually, I replace a recursive algorithm by an iterative algorithm by pushing the parameters that would normally be passed to the recursive function onto a stack. In fact, you are replacing the program stack by one of your own. var stack = []; stack.push(firstObject); // while not empty while (stack.length) { // Pop off end of stack. obj = stack.pop(); // Do stuff. // Push other objects on the stack as needed. ... } Note: if you have more than one recursive call inside and you want to preserve the order of the calls, you have to add them in the reverse order to the stack: foo(first); foo(second); has to be replaced by stack.push(second); stack.push(first); Edit: The article Stacks and Recursion Elimination (or Article Backup link) goes into more details on this subject. A: One pattern to look for is a recursion call at the end of the function (so called tail-recursion). This can easily be replaced with a while. For example, the function foo: void foo(Node* node) { if(node == NULL) return; // Do something with node... foo(node->left); foo(node->right); } ends with a call to foo. This can be replaced with: void foo(Node* node) { while(node != NULL) { // Do something with node... foo(node->left); node = node->right; } } which eliminates the second recursive call. A: Strive to make your recursive call Tail Recursion (recursion where the last statement is the recursive call). 
Once you have that, converting it to iteration is generally pretty easy. A: A question that had been closed as a duplicate of this one had a very specific data structure: The node had the following structure: typedef struct cNODE { int32_t type; int32_t valueint; double valuedouble; struct cNODE *next; struct cNODE *prev; struct cNODE *child; } cNODE; The recursive deletion function looked like: void cNODE_Delete(cNODE *c) { cNODE *next; while (c) { next=c->next; if (c->child) { cNODE_Delete(c->child); } free(c); c=next; } } In general, it is not always possible to avoid a stack for recursive functions that invoke themselves more than one time (or even once). However, for this particular structure, it is possible. The idea is to flatten all the nodes into a single list. This is accomplished by putting the current node's child at the end of the top row's list. void cNODE_Delete (cNODE *c) { cNODE *tmp, *last = c; while (c) { while (last->next) { last = last->next; /* find last */ } if ((tmp = c->child)) { c->child = NULL; /* append child to last */ last->next = tmp; tmp->prev = last; } tmp = c->next; /* remove current */ free(c); c = tmp; } } This technique can be applied to any linked data structure that can be reduced to a DAG with a deterministic topological ordering. The current node's children are rearranged so that the last child adopts all of the other children. Then the current node can be deleted and traversal can then iterate to the remaining child. A: Well, in general, recursion can be mimicked as iteration by simply using a storage variable. Note that recursion and iteration are generally equivalent; one can almost always be converted to the other. A tail-recursive function is very easily converted to an iterative one. Just make the accumulator variable a local one, and iterate instead of recurse. 
Here's an example in C++ (C were it not for the use of a default argument): // tail-recursive int factorial (int n, int acc = 1) { if (n == 1) return acc; else return factorial(n - 1, acc * n); } // iterative int factorial (int n) { int acc = 1; for (; n > 1; --n) acc *= n; return acc; } Knowing me, I probably made a mistake in the code, but the idea is there. A: Recursion is nothing but one function calling another, where the called function happens to be the caller itself. As we know, when one function calls another, the first function saves its state (its variables) and then passes control to the called function. The called function can use the same variable names, e.g. fun1(a) can call fun2(a). When we make a recursive call, nothing new happens: a function calls itself, passing variables of the same type and name to itself (though obviously the values stored in those variables differ). But before every call the function saves its state, and this process of saving continues. The SAVING IS DONE ON A STACK. NOW THE STACK COMES INTO PLAY. So if you write an iterative program and save the state on a stack each time, then pop the values from the stack when needed, you have successfully converted a recursive program into an iterative one! The proof is simple and analytical. In recursion the computer maintains the stack, and in the iterative version you will have to maintain the stack manually. Think it over: just convert a recursive depth-first search (on graphs) program into an iterative DFS program. All the best! A: TLDR You can compare the source code below, before and after, to intuitively understand the approach without reading this whole answer. I ran into issues with some multi-key quicksort code I was using to process very large blocks of text to produce suffix arrays. The code would abort due to the extreme depth of recursion required. 
With this approach, the termination issues were resolved. After conversion the maximum number of frames required for some jobs could be captured, which was between 10K and 100K, taking from 1M to 6M of memory. Not an optimal solution; there are more effective ways to produce suffix arrays. But anyway, here's the approach used. The approach A general way to convert a recursive function to an iterative solution that will apply to any case is to mimic the process natively compiled code uses during a function call and the return from the call. Taking an example that requires a somewhat involved approach, we have the multi-key quicksort algorithm. This function has three successive recursive calls, and after each call, execution begins at the next line. The state of the function is captured in the stack frame, which is pushed onto the execution stack. When sort() is called from within itself and returns, the stack frame present at the time of the call is restored. In that way all the variables have the same values as they did before the call - unless they were modified by the call. Recursive function def sort(a: list_view, d: int): if len(a) <= 1: return p = pivot(a, d) i, j = partition(a, d, p) sort(a[0:i], d) sort(a[i:j], d + 1) sort(a[j:len(a)], d) Taking this model, and mimicking it, a list is set up to act as the stack. In this example tuples are used to mimic frames. If this were encoded in C, structs could be used. The data can be contained within a data structure instead of just pushing one value at a time. Reimplemented as "iterative" # Assume `a` is a view-like object where slices reference # the same internal list of strings. def sort(a: list_view): stack = [] stack.append((LEFT, a, 0)) # Initial frame. while len(stack) > 0: frame = stack.pop() if len(frame[1]) <= 1: # Guard. continue stage = frame[0] # Where to jump to. if stage == LEFT: _, a, d = frame # a - array/list, d - depth. 
p = pivot(a, d) i, j = partition(a, d, p) stack.append((MID, a, i, j, d)) # Where to go after "return". stack.append((LEFT, a[0:i], d)) # Simulate function call. elif stage == MID: # Picking up here after "call" _, a, i, j, d = frame # State before "call" restored. stack.append((RIGHT, a, i, j, d)) # Set up for next "return". stack.append((LEFT, a[i:j], d + 1)) # Split list and "recurse". elif stage == RIGHT: _, a, _, j, d = frame stack.append((LEFT, a[j:len(a)], d)) else: pass When a function call is made, information on where to begin execution after the function returns is included in the stack frame. In this example, if/elif/else blocks represent the points where execution begins after return from a call. In C this could be implemented as a switch statement. In the example, the blocks are given labels; they're arbitrarily labeled by how the list is partitioned within each block. The first block, "LEFT" splits the list on the left side. The "MID" section represents the block that splits the list in the middle, etc. With this approach, mimicking a call takes two steps. First a frame is pushed onto the stack that will cause execution to resume in the block following the current one after the "call" "returns". A value in the frame indicates which if/elif/else section to fall into on the loop that follows the "call". Then the "call" frame is pushed onto the stack. This sends execution to the first, "LEFT", block in most cases for this specific example. This is where the actual sorting is done regardless of which section of the list was split to get there. Before the looping begins, the primary frame pushed at the top of the function represents the initial call. Then on each iteration, a frame is popped. The "LEFT/MID/RIGHT" value/label from the frame is used to fall into the correct block of the if/elif/else statement. 
The frame is used to restore the state of the variables needed for the current operation, then on the next iteration the return frame is popped, sending execution to the subsequent section. Return values If the recursive function returns a value used by itself, it can be treated the same way as other variables. Just create a field in the stack frame for it. If a "callee" is returning a value, it checks the stack to see if it has any entries; and if so, updates the return value in the frame on the top of the stack. For an example of this you can check this other example of this same approach to recursive to iterative conversion. Conclusion Methods like this that convert recursive functions to iterative functions are essentially also "recursive". Instead of the process stack being utilized for actual function calls, another programmatically implemented stack takes its place. What is gained? Perhaps some marginal improvements in speed. Or it could serve as a way to get around stack limitations imposed by some compilers and/or execution environments (stack pointer hitting the guard page). In some cases, the amount of data pushed onto the stack can be reduced. Do the gains offset the complexity introduced in the code by mimicking something that we get automatically with the recursive implementation? In the case of the sorting algorithm, finding a way to implement this particular one without a stack could be challenging, plus there are so many iterative sorting algorithms available that are much faster. It's been said that any recursive algorithm can be implemented iteratively. Sure... but some algorithms don't convert well without being modified to such a degree that they're no longer the same algorithm. It may not be such a great idea to convert recursive algorithms just for the sake of converting them. Anyway, for what it's worth, the above approach is a generic way of converting that should apply to just about anything. 
If you find you really need an iterative version of a recursive function that doesn't use a memory-eating stack of its own, the best approach may be to scrap the code and write your own using the description from a scholarly article, or work it out on paper and then code it from scratch, or other ground-up approach. A: Even using a stack will not convert a recursive algorithm into an iterative one. Normal recursion is function-based recursion, and if we use a stack then it becomes stack-based recursion. But it's still recursion. For recursive algorithms, space complexity is O(N) and time complexity is O(N). For iterative algorithms, space complexity is O(1) and time complexity is O(N). But if we use a stack, the complexity remains the same. I think only tail recursion can be converted into iteration. A: The stacks and recursion elimination article captures the idea of externalizing the stack frame on the heap, but does not provide a straightforward and repeatable way to convert. Below is one. While converting to iterative code, one must be aware that the recursive call may happen from an arbitrarily deep code block. It's not just the parameters, but also the point to return to in the logic that remains to be executed and the state of the variables which participate in subsequent conditionals, which matter. Below is a very simple way to convert to iterative code with the least changes. 
Consider this recursive code: struct tnode { tnode(int n) : data(n), left(0), right(0) {} tnode *left, *right; int data; }; void insertnode_recur(tnode *node, int num) { if(node->data <= num) { if(node->right == NULL) node->right = new tnode(num); else insertnode_recur(node->right, num); } else { if(node->left == NULL) node->left = new tnode(num); else insertnode_recur(node->left, num); } } Iterative code: // Identify the stack variables that need to be preserved across stack // invocations, that is, across iterations and wrap them in an object struct stackitem { stackitem(tnode *t, int n) : node(t), num(n), ra(0) {} tnode *node; int num; int ra; //to point of return }; void insertnode_iter(tnode *node, int num) { vector<stackitem> v; //pushing a stackitem is equivalent to making a recursive call. v.push_back(stackitem(node, num)); while(v.size()) { // taking a modifiable reference to the stack item makes prepending // 'si.' to auto variables in recursive logic suffice // e.g., instead of num, replace with si.num. stackitem &si = v.back(); switch(si.ra) { // this jump simulates resuming execution after return from recursive // call case 1: goto ra1; case 2: goto ra2; default: break; } if(si.node->data <= si.num) { if(si.node->right == NULL) si.node->right = new tnode(si.num); else { // replace a recursive call with below statements // (a) save return point, // (b) push stack item with new stackitem, // (c) continue statement to make loop pick up and start // processing new stack item, // (d) a return point label // (e) optional semi-colon, if resume point is an end // of a block. si.ra=1; v.push_back(stackitem(si.node->right, si.num)); continue; ra1: ; } } else { if(si.node->left == NULL) si.node->left = new tnode(si.num); else { si.ra=2; v.push_back(stackitem(si.node->left, si.num)); continue; ra2: ; } } v.pop_back(); } } Notice how the structure of the code still remains true to the recursive logic and modifications are minimal, resulting in fewer bugs. 
For comparison, I have marked the changes with ++ and --. Most of the newly inserted blocks, except v.push_back, are common to any converted iterative logic void insertnode_iter(tnode *node, int num) { +++++++++++++++++++++++++ vector<stackitem> v; v.push_back(stackitem(node, num)); while(v.size()) { stackitem &si = v.back(); switch(si.ra) { case 1: goto ra1; case 2: goto ra2; default: break; } ------------------------ if(si.node->data <= si.num) { if(si.node->right == NULL) si.node->right = new tnode(si.num); else { +++++++++++++++++++++++++ si.ra=1; v.push_back(stackitem(si.node->right, si.num)); continue; ra1: ; ------------------------- } } else { if(si.node->left == NULL) si.node->left = new tnode(si.num); else { +++++++++++++++++++++++++ si.ra=2; v.push_back(stackitem(si.node->left, si.num)); continue; ra2: ; ------------------------- } } +++++++++++++++++++++++++ v.pop_back(); } ------------------------- } A: Search google for "Continuation passing style." There is a general procedure for converting to a tail recursive style; there is also a general procedure for turning tail recursive functions into loops. A: There is a general way of converting recursive traversal to iterator by using a lazy iterator which concatenates multiple iterator suppliers (lambda expression which returns an iterator). See my Converting Recursive Traversal to Iterator. A: Another simple and complete example of turning a recursive function into an iterative one using a stack. #include <iostream> #include <stack> using namespace std; int GCD(int a, int b) { return b == 0 ? 
a : GCD(b, a % b); } struct Par { int a, b; Par() : Par(0, 0) {} Par(int _a, int _b) : a(_a), b(_b) {} }; int GCDIter(int a, int b) { stack<Par> rcstack; if (b == 0) return a; rcstack.push(Par(b, a % b)); Par p; while (!rcstack.empty()) { p = rcstack.top(); rcstack.pop(); if (p.b == 0) continue; rcstack.push(Par(p.b, p.a % p.b)); } return p.a; } int main() { //cout << GCD(24, 36) << endl; cout << GCDIter(81, 36) << endl; cin.get(); return 0; } A: My examples are in Clojure, but should be fairly easy to translate to any language. Given this function that StackOverflows for large values of n: (defn factorial [n] (if (< n 2) 1 (*' n (factorial (dec n))))) we can define a version that uses its own stack in the following manner: (defn factorial [n] (loop [n n stack []] (if (< n 2) (return 1 stack) ;; else loop with new values (recur (dec n) ;; push function onto stack (cons (fn [n-1!] (*' n n-1!)) stack))))) where return is defined as: (defn return [v stack] (reduce (fn [acc f] (f acc)) v stack)) This works for more complex functions too, for example the ackermann function: (defn ackermann [m n] (cond (zero? m) (inc n) (zero? n) (recur (dec m) 1) :else (recur (dec m) (ackermann m (dec n))))) can be transformed into: (defn ackermann [m n] (loop [m m n n stack []] (cond (zero? m) (return (inc n) stack) (zero? n) (recur (dec m) 1 stack) :else (recur m (dec n) (cons #(ackermann (dec m) %) stack))))) A: A rough description of how a system takes any recursive function and executes it using a stack: This is intended to show the idea without details. Consider this function that would print out nodes of a graph: function show(node) 0. if isleaf(node): 1. print node.name 2. else: 3. show(node.left) 4. print node.name 5. show(node.right) For example graph: A->B A->C show(A) would print B, A, C Function calls mean save the local state and the continuation point so you can come back, and then jump to the function you want to call. For example, suppose show(A) begins to run. 
The function call on line 3, show(B), means - Add item to the stack meaning "you'll need to continue at line 4 with local variable state node=A" - Goto line 0 with node=B. To execute code, the system runs through the instructions. When a function call is encountered, the system pushes information it needs to come back to where it was, runs the function code, and when the function completes, pops the information about where it needs to go to continue. A: This link provides some explanation and proposes the idea of keeping "location" to be able to get to the exact place between several recursive calls: However, all these examples describe scenarios in which a recursive call is made a fixed number of times. Things get trickier when you have something like: function rec(...) { for/while loop { var x = rec(...) // make a side effect involving return value x } } A: This is an old question but I want to add a different aspect as a solution. I'm currently working on a project in which I used the flood fill algorithm using C#. Normally, I implemented this algorithm with recursion at first, but obviously, it caused a stack overflow. After that, I changed the method from recursion to iteration. Yes, it worked and I was no longer getting the stack overflow error. But this time, since I applied the flood fill method to very large structures, the program was going into an infinite loop. For this reason, it occurred to me that the function may have re-entered the places it had already visited. As a definitive solution to this, I decided to use a dictionary for visited points. If that node(x,y) has already been added to the stack structure for the first time, that node(x,y) will be saved in the dictionary as the key. Even if the same node is tried to be added again later, it won't be added to the stack structure because the node is already in the dictionary. 
Let's see it in pseudo-code: startNode = pos(x,y) Stack stack = new Stack(); Dictionary visited<pos, bool> = new Dictionary(); stack.Push(startNode); while(stack.count != 0){ currentNode = stack.Pop(); if "currentNode is not available" continue; if "currentNode has already been handled" continue; else "handle currentNode" // do something with pos currentNode.X and currentNode.Y // then add its neighbor nodes to the stack to iterate, // but first check whether each one has already been visited. if(!visited.Contains(pos(x-1,y))) { visited[pos(x-1,y)] = true; stack.Push(pos(x-1,y)); } if(!visited.Contains(pos(x+1,y))) ... if(!visited.Contains(pos(x,y+1))) ... if(!visited.Contains(pos(x,y-1))) ... }
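For concreteness, here is a runnable version of the same visited-set idea in Python rather than C#; the grid-of-ints representation, the fill rule, and the 4-neighbour order are illustrative assumptions, not part of the answer above:

```python
# Iterative flood fill with an explicit stack plus a visited set,
# mirroring the pseudo-code above: a node is marked visited when pushed,
# so it can never be pushed (and re-processed) twice.
def flood_fill(grid, start, new_value):
    rows, cols = len(grid), len(grid[0])
    x0, y0 = start
    target = grid[x0][y0]
    if target == new_value:          # nothing to do; also avoids an infinite loop
        return grid
    stack = [start]
    visited = {start}                # every node ever pushed onto the stack
    while stack:
        x, y = stack.pop()
        if grid[x][y] != target:     # "currentNode is not available"
            continue
        grid[x][y] = new_value       # handle the node
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and (nx, ny) not in visited:
                visited.add((nx, ny))
                stack.append((nx, ny))
    return grid

grid = [[0, 0, 1],
        [0, 1, 1],
        [0, 0, 0]]
flood_fill(grid, (0, 0), 2)
# grid is now [[2, 2, 1], [2, 1, 1], [2, 2, 2]]
```

Note that marking a node visited at push time (not pop time) is what prevents the runaway growth described above: a neighbour reachable from two sides is queued only once.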
{ "language": "en", "url": "https://stackoverflow.com/questions/159590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "441" }
Q: Is there anything inherently wrong with long object invocation chains? I've organized my code hierarchically and I find myself crawling up the tree using code like the following. File clientFolder = task.getActionPlan().getClientFile().getClient().getDocumentsFolder(); I'm not drilling down into the task object; I'm drilling up to its parents, so I don't think I'm losing anything in terms of encapsulation; but a flag is going off in the back of my mind telling me there's something dirty about doing it this way. Is this wrong? A: the flag is red, and it says two things in bold: * *to follow the chain it is necessary for the calling code to know the entire tree structure, which is not good encapsulation, and *if the hierarchy ever changes you will have a lot of code to edit and one thing in parentheses: * *use a property, i.e. task.ActionPlan instead of task.getActionPlan() a better solution might be - assuming you need to expose all of the parent properties up the tree at the child level - to go ahead and implement direct properties on the children, i.e. File clientFolder = task.DocumentsFolder; this will at least hide the tree structure from the calling code. Internally the properties may look like: class Task { public File DocumentsFolder { get { return ActionPlan.DocumentsFolder; } } ... } class ActionPlan { public File DocumentsFolder { get { return ClientFile.DocumentsFolder; } } ... } class ClientFile { public File DocumentsFolder { get { return Client.DocumentsFolder; } } ... } class Client { public File DocumentsFolder { get { return ...; } //whatever it really is } ... } but if the tree structure changes in the future you will only need to change the accessor functions in the classes involved in the tree, and not every place where you called up the chain. [plus it will be easier to trap and report nulls properly in the property functions, which was omitted from the example above] A: Well, every indirection adds one point where it could go wrong. 
In particular, any of the methods in this chain could return a null (in theory, in your case you might have methods that cannot possibly do that), and when that happens you'll know it happened to one of those methods, but not which one. So if there is any chance any of the methods could return a null, I'd at least split the chain at those points, and store in intermediate variables, and break it up into individual lines, so that a crash report would give me a line number to look at. Apart from that I can't see any obvious problems with it. If you have, or can make, guarantees that the null-reference won't be a problem, it would do what you want. What about readability? Would it be clearer if you added named variables? Always write code like you intend it to be read by a fellow programmer, and only incidentally be interpreted by a compiler. In this case I would have to read the chain of method calls and figure out... ok, it gets a document, it's the document of a client, the client is coming from a ... file... right, and the file is from an action plan, etc. Long chains might make it less readable than, say, this: ActionPlan taskPlan = task.GetActionPlan(); ClientFile clientFileOfTaskPlan = taskPlan.GetClientFile(); Client clientOfTaskPlan = clientFileOfTaskPlan.GetClient(); File clientFolder = clientOfTaskPlan.getDocumentsFolder(); I guess it comes down to personal opinion on this matter. A: Getters and setters are evil. Generally, avoid getting an object to do something with it. Instead delegate the task itself. Instead of Object a = b.getA(); doSomething(a); do b.doSomething(); As with all design principles, do not follow this blindly. I have never been able to write anything remotely complicated without getters and setters, but it is a nice guideline. If you have a lot of getters and setters, it probably means you are doing it wrong. 
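To make the "tell, don't ask" advice concrete, here is a small sketch in Python. The Task/ActionPlan/ClientFile/Client names come from the question; the `save_document` operation and the class layout are invented purely for illustration:

```python
# Instead of asking for the folder and acting on it from the outside...
#   folder = task.getActionPlan().getClientFile().getClient().getDocumentsFolder()
#   folder.save(report)
# ...tell the object to do the work, and let each level delegate upward.
class Client:
    def __init__(self, folder):
        self._folder = folder

    def save_document(self, doc):
        self._folder.append(doc)      # the only class that touches the folder

class ClientFile:
    def __init__(self, client):
        self._client = client

    def save_document(self, doc):
        self._client.save_document(doc)

class ActionPlan:
    def __init__(self, client_file):
        self._client_file = client_file

    def save_document(self, doc):
        self._client_file.save_document(doc)

class Task:
    def __init__(self, action_plan):
        self._action_plan = action_plan

    def save_document(self, doc):
        self._action_plan.save_document(doc)

folder = []                           # stand-in for a real documents folder
task = Task(ActionPlan(ClientFile(Client(folder))))
task.save_document("report.pdf")      # the caller never learns the hierarchy
# folder == ["report.pdf"]
```

The delegation methods are boilerplate, but each one couples a class only to its immediate parent, so reshaping the hierarchy means touching one link rather than every call site.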
A: First of all, stacking code like that can make it annoying to analyze NullPointerExceptions and check references while stepping in a debugger. Apart from that, I think it all boils down to this: Does the caller need to have all that knowledge? Perhaps its functionality could be made more generic; the File could then be passed as a parameter instead. Or, perhaps the ActionPlan should not even reveal that its implementation is based on a ClientFile? A: I agree with the poster that mentioned the Law of Demeter. What you're doing is creating unnecessary dependencies on the implementations of a lot of these classes, and on the structure of the hierarchy itself. It will make it very difficult to test your code in isolation, since you will need to initialize a dozen other objects just to get a working instance of the class you want to test. A: How timely. I am going to write a post on my blog tonight about this smell, Message Chains, versus its inverse, Middle Man. Anyhow, a deeper question is why you have "get" methods on what appears to be a domain object. If you closely follow the contours of the problem, you will either find out that it doesn't make sense to tell a task to get something, or that what you are doing is really a non-business-logic concern like preparing for UI display, persistence, object reconstruction, etc. In the latter case, then the "get" methods are ok as long as they're used by authorized classes. How you enforce that policy is platform- and process-dependent. So in the case where the "get" methods are deemed ok, you still have to face the problem. And unfortunately, I think it depends on the class that is navigating the chain. If it is appropriate for that class to be coupled to the structure (say, a factory), then let it be. Otherwise, you should try to Hide Delegate. Edit: click here for my post. A: Are you realistically going to ever use each and every one of those functions independently? 
Why not just make task have a GetDocumentsFolder() method that does all the dirty work of calling all those methods for you? Then you can make that do all the dirty work of null-checking everything without crufting up your code in places where it doesn't need to be crufted up. A: The biggest flag in the world. You cannot easily check whether any of those invocations returns a null object, thus making tracking any sort of error next to impossible! getClientFile() may return null and then getClient() will fail, and when you are catching this, assuming you are try-catching, you won't have a clue as to which one failed. A: How likely is it to get nulls or invalid results? That code is dependent on the successful return of many functions and it could be harder to sort out errors like a null pointer exception. It's also bad for the debugger: less informative since you have to run the functions rather than just watching a few local variables, and awkward to step into the later functions in the chain. A: Yes. It's not best practice. For one thing, if there's a bug, it's harder to find it. For example, your exception handler might display a stack trace that shows that you have a NullReferenceException on line 121, but which of these methods is returning null? You'd have to dig into the code to find out. A: This is a subjective question but I don't think there's anything wrong with it up to a point. For instance if the chain extends beyond the readable area of the editor then you should introduce some locals. For instance, on my browser I can't see the last 3 calls so I have no idea what you're doing :). A: Well it depends. You shouldn't have to reach through an object like that. If you control the implementations of those methods, I'd recommend refactoring so that you don't have to do that. Otherwise, I see no harm in doing what you're doing. 
It's certainly better than ActionPlan AP = task.getActionPlan(); ClientFile CF = AP.getClientFile(); Client C = CF.getClient(); DocFolder DF = C.getDocumentsFolder(); A: It is not bad as such, but you might have problems reading this in 6 months. Or a co-worker might have problems maintaining / writing code, because the chain of your objects is quite... long. And I reckon that you do not have to introduce variables in your code. So the objects do know all they need to jump from method to method. (Here arises the question if you did not overengineer a little bit, but who am I to tell?) I would introduce a kind of "convenience methods". Imagine you got a method in your "task" - object something like task.getClientFromActionPlan(); You then surely could use task.getClientFromActionPlan().getDocumentsFolder(); Much more readable and in case you do these "convenience methods" right (i.e. for heavily used object chains), much less to type ;-) . Edith says: These convenience methods I suggest often do contain Nullpointer-checking when I write them. This way you can even throw Nullpointers with good error messages (e.g. "ActionPlan was null while trying to retrieve the client from it"). A: Related discussion: Function Chaining - How many is too many? A: This question is very close to https://stackoverflow.com/questions/154864/function-chaining-how-many-is-too-many#155407 In general, it seems that people agree that too long chains are not good and you should stick to one or two chained calls at most. Though I hear that Python fans consider chaining to be a lot of fun. That might be just a rumor...:-) A: Depending on your end goal you would probably want to use The Principle of Least Knowledge to avoid heavy coupling and costing you in the end. As head first likes to put it.. "Only talk to your friends." A: Another important byproduct of chaining is performance. 
It's not a big deal in most cases, but especially in a loop you can see a reasonable boost in performance by reducing indirection. Chaining also makes it harder to estimate performance; you can't tell which of those methods may or may not do something complex. A: I'd point to the Law of Demeter, too. And add an article about Tell, Don't Ask. A: OK, as others point out, the code isn't great because you're locking the code into a specific hierarchy. It can present problems debugging, and it's not nice to read, but the major point is that the code that takes a task knows way too much about traversing to get some folder thing. Dollars to donuts, somebody's going to want to insert something in the middle. (all tasks are in a task list, etc.) Going out on a limb, are all of these classes just special names for the same thing? i.e. are they hierarchical, but each level has maybe a few extra properties? So, from a different angle, I'm going to simplify to an enum and an interface, where the child classes delegate up the chain if they aren't the requested thing. For the sake of argument, I'm calling them folders.

enum FolderType { ActionPlan, ClientFile, Client /* etc. */ }

interface IFolder
{
    IFolder FindTypeViaParent( FolderType folderType );
}

and each class that implements IFolder probably just does

IFolder FindTypeViaParent( FolderType folderType )
{
    if( myFolderType == folderType ) return this;
    if( parent == null ) return null;
    IFolder parentIFolder = (IFolder)parent;
    return parentIFolder.FindTypeViaParent( folderType );
}

A variation is to make the IFolder interface:

interface IFolder
{
    FolderType FolderType { get; }
    IFolder Parent { get; }
}

This allows you to externalize the traversal code. However this takes control away from the classes (maybe they have multiple parents) and exposes implementation. Good and bad. [ramblings] At a glance this appears to be a pretty expensive hierarchy to set up. Do I need to instantiate top-down every time? i.e. 
if something just needs a task, do you have to instantiate everything bottom-up to ensure all those back-pointers work? Even if it's lazy-load, do I need to walk up the hierarchy just to get the root? Then again, is the hierarchy really a part of object identity? If it's not, perhaps you could externalize the hierarchy as an n-ary tree. As a side-note, you may want to consider the DDD (Domain Driven Design) concept of aggregate and determine who the major players are. What is the ultimate owner object that is responsible? e.g. wheels of a car. In a design that models a car, the wheels are also objects, but they are owned by the car. Maybe it works for you, maybe it doesn't. Like I said, this is just a shot in the dark. A: On the other hand, the Law of Demeter isn't universally applicable, nor a hard rule (arrr, it be more of a guideline!).
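The convenience method suggested in a couple of the answers above is easy to sketch. The following is a hypothetical illustration (the class names come from the question, the null-checking bodies are invented, and Python is used only to keep the sketch short and language-neutral):

```python
class DocFolder:
    def __init__(self, name):
        self.name = name

class Client:
    def __init__(self, documents_folder=None):
        self.documents_folder = documents_folder

class ClientFile:
    def __init__(self, client=None):
        self.client = client

class ActionPlan:
    def __init__(self, client_file=None):
        self.client_file = client_file

class Task:
    def __init__(self, action_plan=None):
        self.action_plan = action_plan

    def get_documents_folder(self):
        """Convenience method: one place that walks the whole chain.

        Each hop is checked, so a broken link raises a descriptive error
        instead of an anonymous null dereference halfway down a one-liner.
        """
        if self.action_plan is None:
            raise ValueError("Task has no ActionPlan")
        client_file = self.action_plan.client_file
        if client_file is None:
            raise ValueError("ActionPlan has no ClientFile")
        client = client_file.client
        if client is None:
            raise ValueError("ClientFile has no Client")
        return client.documents_folder
```

Callers then write task.get_documents_folder() instead of the four-link chain, which also gives the hierarchy one place to change if a level is inserted later.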
{ "language": "en", "url": "https://stackoverflow.com/questions/159594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Can't copy file with appropriate permissions using FileIOPermission This snippet works well if I try to write in a user directory, but as soon as I try to write in Program Files, it just executes silently and the file has not been copied (no exception). If I try to copy the file to C:\ or to C:\Windows I catch an UnauthorizedAccessException. Do you know another way to get the permissions to write in that directory, or to make it work another way? Any help greatly appreciated! Thanks

using(FileStream fs = File.Open(source, FileMode.Open)){ }
try
{
    FileIOPermission fp = new FileIOPermission(FileIOPermissionAccess.Write, AccessControlActions.Change, "C:\\Program Files\\MyPath");
    fp.Demand(); //<-- no exception but file is not copied
    File.Copy("C:\\Users\\teebot\\Documents\\File.xml","C:\\Program Files\\MyPath\\File.xml",true);
}
catch(SecurityException s)
{
    throw s;
}
catch(UnauthorizedAccessException unauthorizedException)
{
    throw unauthorizedException;
}

A: Don't write in the Program Files folder. That's a big no-no, and will especially cause problems when the day comes where your code runs in Vista or on a machine at a company where users only get standard security rather than admin rights. Use the Application Data folder instead. A: Are you running on Vista? If so then you may be running into file system virtualization. This is a feature in 32-bit versions of Vista which allows a normal user to write to protected parts of the file system. It's a shim introduced to reduce the pain of the LUA features of Vista. The short version is that the operating system will create a virtual file system for certain protected roots (such as Program Files). When a non-admin attempts to write to it, a copy will be created and edited instead of the original. When your user account attempts to look at the file it will see the edits. Other user accounts will only see the original. 
Longer Version: http://thelazyadmin.com/blogs/thelazyadmin/archive/2007/04/26/file-system-virtualization.aspx A: If you are running under Vista then the system just redirects writes to the Program Files folder; this is done so old programs that keep their configuration in the program directory will continue to work when the user is not an Admin (or UAC is enabled). All you have to do is add a manifest to your program that specifies the required access level; then the system assumes your program is Vista-aware and turns off all those compatibility patches. You can see an example of a manifest file on my blog at: http://www.nbdtech.com/blog/archive/2008/06/16/The-Application-Manifest-Needed-for-XP-and-Vista-Style-File.aspx (the focus of the post is on getting the right version of the common controls, but the Vista security declarations are also there) A: Code access security grants or denies permissions to your code. It can't be used to override permissions that are granted/denied to the current user.
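For reference, the kind of manifest the answer above describes is a small XML file embedded in (or placed next to) the executable. A minimal sketch looks something like this; the requestedExecutionLevel element is what marks the program as UAC-aware, and the exact manifest in the linked post may differ:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- Declaring an execution level disables file system virtualization;
             a program that genuinely needs to write under Program Files would
             request "requireAdministrator" instead of "asInvoker". -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```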
{ "language": "en", "url": "https://stackoverflow.com/questions/159598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I automatically update a web reference at build time? I have a .NET project that has a web reference to a service. I would like to update that web reference as part of every build. Is that possible? A: You can use an MSBuild script with a task that calls wsdl.exe:

<Target Name="UpdateWebReference">
    <Message Text="Updating Web Reference..."/>
    <Exec Command="wsdl.exe /o &quot;$(OutDir)&quot; /n &quot;$(WebServiceNamespace)&quot; &quot;$(PathToWebServiceURL)&quot;"/>
</Target>

A: Also, when you are deploying your web services to production, make sure that they are set as Dynamic and not Static. A: You can do it using the methods provided by the other answerers, but you have to know that doing this could cause your build to fail. If the WSDL was changed, the generated code is also going to change and your code may no longer compile. A: You can use the svcutil (http://msdn.microsoft.com/en-us/library/aa347733.aspx) tool to generate the web reference for you. The tool will generate the proper client proxy classes and the proper config (and it can even merge it into your application config). Keep in mind that the tool requires .NET 3.0 and will generate WCF-style client proxies and configuration.
{ "language": "en", "url": "https://stackoverflow.com/questions/159599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: What is the best client side browser library to upload multiple files over http? What is the best client side HTTP library to upload multiple files? If it can handle directories that's a huge bonus. I'm looking for something that is open source or free. I'm looking for something like FTP, but that works over HTTP, through the browser. Uploading multiple files through a normal HTML 4.x form is a bit of a hassle when it comes to uploading more than 5-6 files. Feel free to share your personal experiences. A: Uploadify is another great multiple file uploader. It was built off of SWFUpload and they added new features to it. Some of the features that I have found most helpful are: the user can upload all the files at once using Ctrl + clicking on all of the files; as the files are being uploaded, a queue is displayed which shows the files being uploaded, including a completion bar; as files are completed, they are removed from the queue. It also allows you to specify which file types the user is able to upload (they can only see the ones you choose). A: I'd recommend something like SWFUpload for that. Its main feature is its support for progress bars, but it also allows for queuing files which is particularly handy (this is actually the second time I've recommended it today). A: Just to make sure other options are documented (SWFUpload is great) - another good solution is FancyUpload2. A: You could use a Java based solution. I've been using JumpLoader on one of my web pages and haven't had any problems with it. It can also upload directories, which other solutions mentioned here do not support. A: Another option that I have used before is uploading and then extracting ZIP files. I have used PEAR::Archive_Zip to extract. Requires more knowledge on the user's side, but supports directories and unlimited files (depending on the memory allotted to PHP). A: Take a look at jquery-html5-upload; it doesn't require Flash, and has a sexy jQuery API.
{ "language": "en", "url": "https://stackoverflow.com/questions/159600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: DB2 CLI result output When running command-line queries in MySQL you can optionally use '\G' as a statement terminator, and instead of the result set columns being listed horizontally across the screen, it will list each column vertically, with the corresponding data to the right. Is there a way to do the same or a similar thing with the DB2 command line utility? Example regular MySQL result mysql> select * from tagmap limit 2; +----+---------+--------+ | id | blog_id | tag_id | +----+---------+--------+ | 16 | 8 | 1 | | 17 | 8 | 4 | +----+---------+--------+ Example Alternate MySQL result: mysql> select * from tagmap limit 2\G *************************** 1. row *************************** id: 16 blog_id: 8 tag_id: 1 *************************** 2. row *************************** id: 17 blog_id: 8 tag_id: 4 2 rows in set (0.00 sec) Obviously, this is much more useful when the columns are large strings, or when there are many columns in a result set, but this demonstrates the formatting better than I can probably explain it. A: I don't think such an option is available with the DB2 command line client. See http://www.dbforums.com/showthread.php?t=708079 for some suggestions. For a more general set of information about the DB2 command line client you might check out the IBM DeveloperWorks article DB2's Command Line Processor and Scripting. A: Little bit late, but found this post when I searched for an option to retrieve only the selected data. So db2 -x <query> gives only the result back. More options can be found here: https://www.ibm.com/docs/en/db2/11.1?topic=clp-options Example: [db2inst1@a21c-db2 db2]$ db2 -n select postschemaver from files.product POSTSCHEMAVER -------------------------------- 147.3 1 record(s) selected. [db2inst1@a21c-db2 db2]$ db2 -x select postschemaver from files.product 147.3 A: The DB2 command line utility always displays data in tabular format, i.e. rows horizontally and columns vertically. 
It does not support any other output format in the way the \G statement terminator does for MySQL. But yes, you can store column-organized data in DB2 tables when DB2_WORKLOAD=ANALYTICS is set.

db2 => connect to coldb

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.5.5
 SQL authorization ID   = BIMALJHA
 Local database alias   = COLDB

db2 => create table testtable (c1 int, c2 varchar(10)) organize by column
DB20000I  The SQL command completed successfully.
db2 => insert into testtable values (2, 'bimal'),(3, 'kumar')
DB20000I  The SQL command completed successfully.
db2 => select * from testtable

C1          C2
----------- ----------
          2 bimal
          3 kumar

  2 record(s) selected.

db2 => terminate
DB20000I  The TERMINATE command completed successfully.
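Since the CLP itself only prints tabular output, another workaround (beyond db2 -x) is to post-process a fetched result set into MySQL-style vertical form yourself. A minimal, hypothetical sketch of such a formatter:

```python
def vertical(columns, rows):
    """Render rows one column per line, mimicking MySQL's \\G output."""
    width = max(len(c) for c in columns)  # right-align the column names
    lines = []
    for i, row in enumerate(rows, 1):
        lines.append(f"*************************** {i}. row ***************************")
        for col, val in zip(columns, row):
            lines.append(f"{col.rjust(width)}: {val}")
    return "\n".join(lines)

# Reproducing the tagmap example from the question:
print(vertical(["id", "blog_id", "tag_id"], [(16, 8, 1), (17, 8, 4)]))
```

In practice the rows would come from a database driver cursor rather than literals; the formatting logic is the same.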
{ "language": "en", "url": "https://stackoverflow.com/questions/159615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you implement a multiculture web application I believe several of us have already worked on a project where not only the UI, but also data has to be supported in different languages. Such as - being able to provide and store a translation for what I'm writing here, for instance. What's more, I also believe several of us have some time-triggered events (such as when expiring membership access) where user location should be taken into account to calculate, like, midnight according to the right time zone. Finally there's also the need to support Right to Left user interfaces according to certain languages and the use of different encodings when reading submitted data files (parsing text and Excel data, for instance) Currently I'm storing all my translations for all my entities on a single table (not so practical, as it is very hard to find yourself when doing SQL queries to look into a problem), setting UI translations mainly on satellite assemblies and supporting neither time zones nor right to left design. What are your experiences when dealing with these challenges? [Edit] I assume most people think that this level of multiculture requirement is just like building a huge project. As a matter of fact, if you think about an online survey where:
* Answers will be collected only until midnight
* Questionnaire definition and part of the answers come from a text file (in any language) as well as translations
* Questions and response options must be displayed in several languages, according to who is accessing it
* Reports also have to be shown and generated in several different languages
As one can see, we do not have to go too far in an application to have this kind of requirement. [Edit2] Just found out my question is a duplicate i18n in your projects The first answer (when ordering by vote) is so comprehensive I have to get at least a part of it implemented someday. A: Be very very cautious. 
From what you say about the i18n features you're trying to implement, I wonder if you're over-reaching. Notice that the big boy (e.g. eBay, amazon.com, yahoo, bbc) web applications actually deliver separate apps in each language they want to support. Each of these web applications does consume a common core set of services. Don't be surprised if the business needs of two different countries that even speak the same language (e.g. UK & US) are different enough that you do need a separate app for each. On the other hand, you might need to become like the next amazon.com. It's difficult to deliver a successful web application in one language, much less many. You should not be afraid to favor one user population (say, your Asian-language speakers) over others if this makes sense for your web app's business needs. A: Go slow. Think everything through, then really think about what you're doing again. Bear in mind that the more you add (like Right to Left) the longer your QA cycle will be. A: The primary piece to your puzzle will be extensive use of interfaces on the code side, and either one data source that gets passed through a translator to whichever languages need to be supported, or separate data sources for each language. The time issues can be handled by the interfaces, because presumably you will want things to function in the same fashion, but differ in the implementation details. To a large extent, a similar thought process can be applied to the creation of the interface when adjusting it to support differing languages. When you get down to it, skinning is exactly this, where the content being skinned is the interface, and the look/feel is the implementation. A: Do what your users need. For instance, most programmers understand English, so there is no sense in translating posts on this site. If many of your users need a translation, add a new table column with the language id, and another column to link a translated row to its original. 
If your target audience includes users from the Middle East, implement Right to Left. If time precision is critical up to an hour, add a time zone column to the user table, and so on. A: If you're on *NIX, use gettext. Most languages I've used have some level of support; PHP's is pretty good, for instance. A: I'll describe what has been done in my project (it wasn't my original architecture but I liked it anyway) Providing Translation Support Text which needs to be translated has been divided into three different categories:
* Error text: Like errors which happen deep in the application business layer
* UI Text: Text which is shown in the User interface (labels, buttons, grid titles, menus)
* User-defined Text: text which needs to be translatable according to the final user's preferences (that is - the user creates a question in a survey and he can also create a translated version of that survey)
For each different category the schema used to provide the translation service is different - so that we have:
* Error Text: A library with static functions which access resource files
* UI Text: A "Helper" class which, linked to the view engine, provides translations from remote assemblies
* User-defined Text: A table in the database which provides translations (according to the typeID of the translated entity and the object id) and is linked to the entity via a 1 x N relationship
I haven't, however, attacked the other obvious problems such as dealing with time zones, different layouts and picture translation (if this is really necessary). Has anyone tackled this problem in a different way? Has anyone ever tackled the other i18n problems?
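The single translations table described for the user-defined text above can be sketched concretely. This is a hypothetical illustration in Python with SQLite (the real project is .NET; the table and column names here are invented), showing the (entity type, entity id, language) key plus a fallback-language lookup:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE translation (
        entity_type TEXT NOT NULL,      -- e.g. 'question', 'response_option'
        entity_id   INTEGER NOT NULL,   -- id of the translated entity (1 x N link)
        lang        TEXT NOT NULL,      -- e.g. 'en', 'pt'
        text        TEXT NOT NULL,
        PRIMARY KEY (entity_type, entity_id, lang)
    )
""")
conn.executemany(
    "INSERT INTO translation VALUES (?, ?, ?, ?)",
    [("question", 1, "en", "How satisfied are you?"),
     ("question", 1, "pt", "Qual o seu grau de satisfacao?")],
)

def translate(entity_type, entity_id, lang, fallback="en"):
    """Look up a translation, falling back to a default language."""
    for candidate in (lang, fallback):
        row = conn.execute(
            "SELECT text FROM translation"
            " WHERE entity_type = ? AND entity_id = ? AND lang = ?",
            (entity_type, entity_id, candidate),
        ).fetchone()
        if row:
            return row[0]
    return None
```

Keeping the entity type in the key is what lets one table serve every translatable entity, at the cost of the harder-to-query rows the question complains about.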
{ "language": "en", "url": "https://stackoverflow.com/questions/159625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Keeping Visual Studio Projects on a Network Drive We just did a move from storing all files locally to a network drive. Problem is, that is where my VS projects are also stored now. (No versioning system yet, working on that.) I know I've heard of problems with doing this in the past, but never heard of a workaround. Is there a workaround? So my VS is installed locally. The files are on a network drive. How can I get this to work? EDIT: I know what SHOULD be done, but is there a band-aid I can put on right now to fix this and maintain the network drive? EDIT 2: I am sure I am not understanding something, but Bob King has the right idea. I'll work with the lead web developer when he gets back into the office to figure out a temporary solution until we get some sort of version control set up. Thanks for the ideas. A: I would not recommend doing that if you have (or even if you don't have) multiple people who are working on the projects. You're just asking for trouble. If you're the only one working on it, on the other hand, you'll avoid much of the trouble. Performance is going to go out the window, though. As far as how to get it to work, you just open the solution file from VS. You'll likely run into security issues, but can correct that using CASPOL. As I said, though, performance is going to be terrible. Again, not recommended at all. Do yourself and your team a favor and install SVN or some other form of source control and put the code in there ASAP. EDIT: I'll partially retract my comments. Bob King explains below the reason they run VS projects from a network drive and it makes sense. I would say unless you're doing it for a specific reason like Bob, stay away from it. Otherwise, get your ducks in a row before setting up such a development environment. A: So I was having a similar issue. Visual Studio wouldn't recognize a network location I had mapped for a drive letter for anything. The funny thing is, it worked for a day. 
I set up my project and began working on it and had no issues. Then, I shut down and the next day nothing works. I couldn't read/write files in code, output my executables or anything. My project is local but my output was intended to be thrown up on the network. Anyways, the problem is probably about the administrator context, but one way to fix it which I found while digging around online is to get Visual Studio to browse to the drive in question somehow. There are plenty of ways to do this but VS will magically be able to recognize mapped drive letters. My solution is to go to the Debug Output Location in the Project Properties, click browse and go to my previously made output location on my network drive and Voila!!! I wanted to put this up because I spent half a day trying to figure this out and figured it might save someone else some time. Thanks much and good luck!!! Erik A: I understand this is an older thread, but this was the best thread I found when looking to solve a similar issue. I had Visual Studio 2013 on a virtual box (using Win 8.1) and the code on the host machine (Win 7). Although I could open the solution, I could not compile. All of the other answers on this relate to older software, so I am adding this answer to update this frequently found question with the solution that worked for me. Here's what I did: Made a registry entry to be able to use a UNC path as the current directory. WARNING: Using Registry Editor incorrectly can cause serious, system-wide problems that may require you to reinstall Windows NT to correct them. Microsoft cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at your own risk. Under the registry path: HKEY_CURRENT_USER\Software\Microsoft\Command Processor add the value DisableUNCCheck of type REG_DWORD and set it to 0x1 (hex). 
WARNING: If you enable this feature and start a Console that has a current directory of a UNC name, start applications from that Console, and then close the Console, it could cause problems in the applications started from that Console. Found this information at link: http://support.microsoft.com/kb/156276 A: While we do use Source Control, we do also run all our projects from Network Drives (not shared directories, private directories on network drives). The network drives are backed up nightly, and also use Volume Shadow Copy, so if you need to revert to something before it made its way to SC, then you can. To get projects to run correctly with the right permission, follow these steps. Basically, you've just got to map the shared directory to a drive, and then grant permission, based on that URL, to all code. Say you map to "N:\", then use "N:\*" as your URL pattern. It isn't obvious you need to wildcard, but you do. A: How about we rephrase this into a question that everyone can answer? I have the exact same problem as the initial poster. I have a copy of VB 2008 (recently upgraded from VB6). If I store my solutions on the backed up network drive, then it won't run a single thing ever. It gives "partially trusted caller" errors for accessing a module, even when "allowpartiallytrustedcallers" is set in the assembly. If I store the files on my (not backed up) C:, then it will run wonderfully, until I put it on the share drive for everyone to use, and I'm back to my same problem. This isn't a big request. I just want to be able to put a solution and executable on the share drive and run it without an absurd amount of nonsense about security. I shouldn't have to cram all my work into form files. -Edit: I found the problem with why it was ignoring the AllowPartiallyTrustedCallers command. I'm trying to reference ADODB, which doesn't allow partially trusted callers. So, no network executable can access a database? What does Microsoft have against intranets anyway? 
A: The question is rather generic so I'll give an answer to one issue I was facing. I run Visual Studio 2010 using a Parallels virtual machine on my Mac while keeping all my projects on the Mac side via a network share. Visual Studio, however, wouldn't load the project's assembly files from there. Trying to set the rights using "caspol" alone didn't help in my case. What finally worked for me to allow Visual Studio to load assemblies from a network share was to edit the file "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe.config" (assuming a default installation). In the XML "<runtime>" section you have to add <loadFromRemoteSources enabled="true"/> You may have to change the permissions on that file to allow write access. Save the file. Restart Visual Studio. A: I was facing the same issue just recently so this answer is more for the sake of keeping track of my own knowledge. Anyway, should someone find it useful, below is the issue and the solution. Issue: .NET 4.0 projects, SVN repo, checkout folders are on local drives, referenced assemblies are built by the build server and available on a network drive. Visual Studio on W7 is able to add the reference but unable to build projects. Solution: Since .NET 4.0 does not automatically provide a sandbox anymore for network assemblies, you have to make those full-trusted via a machine.config update. http://msdn.microsoft.com/en-us/library/dd409252.aspx A: I had a similar problem with opening Visual Studio projects on a network drive, and I fixed it by creating a symbolic link on my local C:\ drive that points to the UNC directory, e.g. mklink /D "C:\Users\Self\Documents" "\\domain.net\users\self\My Documents" then you can just open the project using the C:\Users\Self\Documents\ path, instead of the UNC path (You have to be careful, because Visual Studio will automatically redirect you to the '\\domain.net..' path if you double click the symlink when you're browsing for the project. 
I had to copy paste the 'C:\Users\' path to get it to open with the drive letter path) A: In the interests of actually answering the question, I copied this comment from jcarle.com: Trusting Network Shares with Visual Studio 2010 / .NET Framework v4.0 January 20, 2011, 4:10 pm If you are like me and you store all your code on a server, you will have likely learned about trusting a network share using CasPol.exe. However, when moving from Visual Studio 2008 (.NET Framework 2.0/3.0/3.5) over to Visual Studio 2010 (.NET Framework 4.0), you may find yourself scratching your head. If you are used to using the Visual Studio Command Prompt to quickly get to CasPol, you may find that some of your projects will not seem to respect your new FullTrust settings. The reason is that, unless you are carefully paying attention, the Visual Studio Command Prompt defaults to adding the .NET Framework 4.0 folder to its path. If your project is still running under .NET Framework 2.0/3.0/3.5, it will require setting CasPol for those versions as well. Just a note, I have also personally had more success with using 1 as a code group instead of 1.2. To trust a network share for all versions of the .NET Framework, simply call CasPol for each version using the full path as below: C:\Windows\Microsoft.NET\Framework\v2.0.50727\CasPol -m -ag 1 -url file://YourSharePath* FullTrust C:\Windows\Microsoft.NET\Framework\v4.0.30319\CasPol -m -ag 1 -url file://YourSharePath* FullTrust A: Don't do it. If you have source control (versioning), you do not want your files on a network drive. It totally bypasses all you want to achieve by using source control, because once your files are on a network drive, anyone can modify them .... even while you're currently building your project. Ka-boooom! PS: this sounds like a typical case of over-engineering to me. A: Are you having any specific problems? 
If you allow more than one person to open the solution, your first problem will be that the .NCB file (Intellisense) will be locked exclusively and only one user will be able to browse the class tree. And of course you have the potential for one user's changes to overwrite the other user's changes. A: You should be warned that some features in Visual Studio will refuse to work with network drives. For example, the .mdf file of a SQL Express user instance must be located on a local drive. For another example, if you use UNC paths, you have to make sure they are short enough. A: I found this helpful while trying to use VC11 with Parallels, which runs on the Mac: http://social.msdn.microsoft.com/Forums/en-US/toolsforwinapps/thread/2ffdcb01-c511-4961-834b-afd5f2fbb8e1, and specifically: 1) You can switch from local debugging to remote debugging and set the machine name as 'localhost'. This will do a remote deployment on your local machine (thus not using the project's directory). You don't need to install the Remote Debugger tools, nor start msvsmon for this to work on localhost. A: In case this helps anyone else, I had to do the steps outlined here to add the network share location to the Windows intranet zone. In particular, I was having trouble with Visual Studio hanging on load when opening a solution on a network share (i.e. using VMware Fusion and opening a solution from my Mac's hard drive). I also had problems with PostSharp running in this scenario. A: If I understand you correctly, your Visual Studio project files are stored on the network drive and you are running them from there. This is what I do and I don't have any problems. You will need to make sure that you have set the security policy. You can use Caspol to do this, or via the Control Panel admin tools menu. A: Well, my question would be why you are asking this. Is it not working when you are storing it on a network drive? 
I haven't tried this myself, and one problem I could envision would be that .NET code running from a network drive (i.e. from the bin\Debug directory, also located on the network drive) would be running in a sandbox mode, unless you mess around with CASPOL (or use 3.5 SP1, which I hear has removed that obstacle). If you have specific problems, ask about them. Never ask "Why is doing X not working?". You're not saying if you're just one person or multiple persons accessing the same remote drive, but I'm assuming you're just one for each network directory. Is this correct? If not, no, there is no band-aid. Get version control, move the files back to a local disk. A: "How can I get this to work?" You have a couple of choices:
Choice A:
1. Move all files back to your local hard drive
2. Implement some type of backup software on your machine
3. Test said backup solution
4. Keep on coding
Choice B:
1. Get a copy of one of the FREE source control products and implement it.
2. Make sure it's being backed up
3. Test it
Choice C: Use one of the many ONLINE source control repositories available. Google, SourceForge, CodePlex, something.
{ "language": "en", "url": "https://stackoverflow.com/questions/159627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: linuXploit_crew hit my webserver We run an old Windows NT machine, fully patched, running IIS 4.0. Today we were hit by "linuXploit_crew", and they took down our websites for a minute or two. (luckily we were quick to notice a change on the websites and fix it within minutes of the attack). However -- after fixing the website, I'm left with trying to figure out HOW this happened. Looking in our FTP logs, there are no changes in our default.asp files, and I see nothing out of the ordinary in the web logs. Any ideas on how to pinpoint how they got in? We've only got 3 ports open, FTP, HTTP, and HTTPS (21, 80, 443) on a Cisco firewall. A: NT/IIS4 no longer get security updates. Any new exploits will remain unpatched. Time to upgrade. Once you've been "owned" enough to change your site, you can't necessarily trust your logs anymore - they could have been "cleaned" by the attacker. A: IIS 7 + .NET 3.5 SP1 should be a nice upgrade :) A: They appear to be using some form of Injection Attack: See http://msdn.microsoft.com/en-us/library/bb355989.aspx?ppud=4 A: A wide array of attacks are possible through just port 80. What applications are you running on the server? The number of ASP and PHP security holes is an order of magnitude higher than the number of OS/server application holes. A: Stay away from Windows NT-class systems. IIS 7 might be okay for security, but the price is not up to standard. Use BSD instead, or Linux with Apache. CentOS if Linux, and OpenBSD if BSD, are my suggestions.
{ "language": "en", "url": "https://stackoverflow.com/questions/159633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to implement custom JSON serialization from ASP.NET web service? What options are there for serialization when returning instances of custom classes from a WebService? We have some classes with a number of child collection class properties as well as other properties that may or may not be set depending on usage. These objects are returned from an ASP.NET .asmx WebService decorated with the ScriptService attribute, so are serialized via JSON serialization when returned by the various WebMethods. The problem is that the out-of-the-box serialization returns all public properties, regardless of whether or not they are used, as well as returning class name and other information in a more verbose manner than would be desired if you wanted to limit the amount of traffic. Currently, for the classes being returned we have added custom JavaScript converters that handle the JSON serialization, and added them to the web.config as below: <system.web.extensions> <scripting> <webServices> <jsonSerialization> <converters> <add name="CustomClassConverter" type="Namespace.CustomClassConverter" /> </converters> </jsonSerialization> </webServices> </scripting> </system.web.extensions> But this requires a custom converter for each class. Is there any other way to change the out-of-the-box JSON serialization, either through extending the service, creating a custom serializer or the like? Follow Up @marxidad: We are using the DataContractJsonSerializer class in other applications; however, I have been unable to figure out how to apply it to these services. Here's an example of how the services are set up: [ScriptService] public class MyService : System.Web.Services.WebService { [WebMethod] public CustomClass GetCustomClassMethod() { return new CustomClass(); } } The WebMethods are called by JavaScript and return data serialized in JSON. The only way we have been able to change the serialization is to use the JavaScript converters as referenced above.
Is there a way to tell the WebService to use a custom DataContractJsonSerializer? Whether it be by web.config configuration, decorating the service with attributes, etc.? Update Well, we couldn't find any way to switch the out-of-the-box JavaScriptSerializer except for creating individual JavaScriptConverters as above. What we did on that end to prevent having to create a separate converter was create a generic JavaScriptConverter. We added an empty interface to the classes we wanted handled, and the SupportedTypes property, which is called on web-service start-up, uses reflection to find any types that implement the interface, kind of like this: public override IEnumerable<Type> SupportedTypes { get { foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies()) { AssemblyBuilder dynamicAssemblyCheck = assembly as AssemblyBuilder; if (dynamicAssemblyCheck == null) { foreach (Type type in assembly.GetExportedTypes()) { if (typeof(ICustomClass).IsAssignableFrom(type)) { yield return type; } } } } } } The actual implementation is a bit different so that the types are cached, and we will likely refactor it to use custom attributes rather than an empty interface. However, with this we ran into a slightly different problem when dealing with custom collections. These typically just extend a generic list, but the custom classes are used instead of the List<> itself because there is generally custom logic, sorting, etc. in the collection classes. The problem is that the Serialize method for a JavaScriptConverter returns a dictionary, which is serialized into JSON as name/value pairs with the associated type, whereas a list is returned as an array. So the collection classes could not be easily serialized using the converter. The solution for this was to just not include those types in the converter's SupportedTypes, and they serialize perfectly as lists.
So, serialization works, but when you try to pass these objects the other way, as a parameter for a web service call, the deserialization breaks, because the input is treated as a list of string/object dictionaries, which can't be converted to a list of whatever custom class the collection contains. The only way we could find to deal with this is to create a generic class that is a list of string/object dictionaries, which then converts the list to the appropriate custom collection class, and then to change any web service parameters to use the generic class instead. I'm sure there are tons of issues and violations of "best practices" here, but it gets the job done for us without creating a ton of custom converter classes. A: If you don't use code-generated classes, you can decorate your properties with the ScriptIgnoreAttribute to tell the serializer to ignore certain properties. Xml serialization has a similar attribute. Of course, you cannot use this approach if you want to return some properties of a class on one service method call and different properties of the same class on a different service method call. If you want to do that, return an anonymous type in the service method. [WebMethod] [ScriptMethod] public object GimmieData() { var dalEntity = dal.GimmieEntity(); //However yours works... return new { id = dalEntity.Id, description = dalEntity.Desc }; } The serializer couldn't care less about the type of the object you send to it, since it just turns it into text anyway. I also believe that you could implement ISerializable on your data entity (as a partial class if you have code-gen'd data entities) to gain fine-grained control over the serialization process, but I haven't tried it. A: I know this thread has been quiet for a while, but I thought I'd offer that if you override the SupportedTypes property of JavaScriptConverter in your custom converter, you can add the types that should use the converter.
This could go into a config file if necessary. That way you wouldn't need a custom converter for each class. I tried to create a generic converter but couldn't figure out how to identify it in the web.config. Would love to find out if anyone else has managed it. I got the idea when trying to solve the above issue and stumbled on Nick Berardi's "Creating a more accurate JSON .NET Serializer" (google it). Worked for me:) Thanks to all. A: If you're using .NET 3.x (or can), a WCF service is going to be your best bet. You can selectively control which properties are serialized to the client with the [DataMember] attribute. WCF also allows more fine-grained control over the JSON serialization and deserialization, if you desire it. This is a good example to get started: http://blogs.msdn.com/kaevans/archive/2007/09/04/using-wcf-json-linq-and-ajax-passing-complex-types-to-wcf-services-with-json-encoding.aspx A: You can use the System.Runtime.Serialization.Json.DataContractJsonSerializer class in the System.ServiceModel.Web.dll assembly. A: Don't quote me on this working for certain, but I believe this is what you are looking for. [WebMethod] [ScriptMethod(ResponseFormat = ResponseFormat.Json)] public XmlDocument GetXmlDocument() { XmlDocument xmlDoc = new XmlDocument(); xmlDoc.LoadXml(_xmlString); return xmlDoc; }
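The anonymous-type trick above (hand the serializer only the fields you actually want on the wire) is not specific to .NET. Here is a minimal sketch of the same idea in Python; the `Group` class and its fields are invented for illustration:

```python
import json

class Group:
    """Toy data class; only fields that were actually set should go on the wire."""
    def __init__(self, id=None, description=None, members=None):
        self.id = id
        self.description = description
        self.members = members

def to_wire(obj):
    # Emit only public attributes that are set, mirroring the anonymous-type
    # trick: the serializer never sees unused or private fields.
    return {k: v for k, v in vars(obj).items()
            if v is not None and not k.startswith("_")}

g = Group(id=7, description="admins")          # 'members' left unset
payload = json.dumps(to_wire(g), sort_keys=True)
print(payload)  # {"description": "admins", "id": 7}
```

Anything left unset never reaches the serializer, which is exactly how the verbose out-of-the-box output gets trimmed down.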
{ "language": "en", "url": "https://stackoverflow.com/questions/159704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Probably BAD coding style ... please comment I am checking whether the new name already exists or not. Code 1 if(cmbxExistingGroups.Properties.Items.Cast<string>().ToList().Exists(txt => txt==txtNewGroup.Text.Trim())) { MessageBox.Show("already exists.", "Add new group"); } Otherwise I could have written: Code 2 foreach(var str in cmbxExistingGroups.Properties.Items) { if(str==txtNewGroup.Text) { MessageBox.Show("already exists.", "Add new group"); break; } } I wrote these two and thought I was exploiting language features in code 1. ...and yes: both of them work for me ... I am wondering about the performance :-/ A: I've quoted it before but I'll do it again: Write your code as if the person maintaining it is a homicidal maniac who knows where you live. A: would cmbxExistingGroups.Properties.Items.Contains(text) not work instead? A: There are a few things wrong here: 1) The two bits of code don't do the same thing - the first looks for the trimmed version of txtNewGroup, the second just looks for txtNewGroup 2) There's no point in calling ToList() - that just makes things less efficient 3) Using Exists with a predicate is overkill - Contains is all you need here So, the first could easily come down to: if (cmbxExistingGroups.Properties.Items.Cast<string>().Contains(txtNewGroup.Text)) { // Stuff } I'd probably create a variable to give "cmbxExistingGroups.Properties.Items.Cast" a meaningful, simple name - but then I'd say it's easier to understand than the explicit foreach loop. A: The first code bit is fine, except instead of calling Enumerable.ToList() and List<T>.Exists(), you should just call Enumerable.Any() -- it does a lazy evaluation, so it never allocates the memory for the List<T>, and it will stop enumerating cmbxExistingGroups.Properties.Items and casting them to string. Also, calling the trim from inside that predicate means it happens for every item it looks at.
It would be best to move it out to the outer scope: string match = txtNewGroup.Text.Trim(); if(cmbxExistingGroups.Properties.Items.Cast<string>().Any(txt => txt==match)) { MessageBox.Show("already exists.", "Add new group"); } A: I appreciate the cleverness of the first sample (assuming it works), but the second one is a lot easier for the next person who has to maintain the code to figure out. A: Sometimes just a little indentation makes a world of difference: if (cmbxExistingGroups.Properties.Items .Cast<string>().ToList() .Exists ( txt => txt==txtNewGroup.Text.Trim() )) { MessageBox.Show("already exists.", "Add new group"); } Since you're using a List<string>, you might as well just drop the Exists predicate and use Contains...use Exists when comparing complex objects by unique values.
So, to be fair, the closest equivalent to the 2nd sample in the LINQ style would be: if (cmbxExistingGroups.Properties.Items.Cast<string>().Contains(txtNewGroup.Text)) { ... } which isn't too bad. But, since you seem to be working with old-style IEnumerable instead of newfangled IEnumerable<T>, why don't we give you another extension method: public static bool Contains<T>(this IEnumerable e, T value) { return e.Cast<T>().Contains(value); } And now we have: if (cmbxExistingGroups.Properties.Items.Contains(txtNewGroup.Text)) { ... } which is pretty readable IMO. A: I would agree, go with the second one because it will be easier to maintain for anybody else who works on it, and when you come back to it in 6-12 months, it will be easier to remember what you were doing. A: "both of them work for me ... I am wondering about the performance" I see no one read the question :) I think I see what you're doing (I don't use this language). The first tries to generate the list and test it in one shot. The second does an explicit iteration and can "short circuit" itself (exit early) if it finds the duplicate early on. The question is whether the "all at once" is more efficient due to the language implementation. A: The second of the two would perform better, and it would perform the same as other people's samples that use Contains. The reason: the first one does an extra trim plus a conversion to a list, so it iterates once for the conversion, then starts again to check using Exists, and does a trim each time, though it will exit the iteration if the item is found. The second starts iterating once, has no trim, and will exit if the item is found. So, in short, the answer to your question is that the second performs much better. A: From a performance point of view: txtNewGroup.Text.Trim() Do your control interaction/string manipulation outside of the loop - one time, instead of n times. A: I imagine that on the WTF's per minute scale, the first would be off the chart.
Count the dots: any more than two per line is a potential problem.
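The lazy-evaluation point made above (Any() stops at the first match, while ToList() materializes everything first) is easy to demonstrate outside C# as well. A small Python sketch, with a counting generator standing in for the combo-box items:

```python
def items(counter):
    # Stand-in for the combo-box entries; counts how many items are consumed.
    for name in ["alpha", "beta", "gamma", "delta"]:
        counter[0] += 1
        yield name

# Eager: build the whole list first, then search (all 4 items consumed).
eager = [0]
found_eager = "beta" in list(items(eager))

# Lazy: any() short-circuits at the first match (only 2 items consumed).
lazy = [0]
found_lazy = any(name == "beta" for name in items(lazy))

print(found_eager, eager[0])  # True 4
print(found_lazy, lazy[0])    # True 2
```

Both searches find the item, but the eager version always pays for the full enumeration, which is exactly the ToList() objection raised in the answers.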
{ "language": "en", "url": "https://stackoverflow.com/questions/159705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the naming convention in Python for variable and function? Coming from a C# background, the naming convention for variables and method names is usually either camelCase or PascalCase: // C# example string thisIsMyVariable = "a" public void ThisIsMyMethod() In Python, I have seen the above but I have also seen underscores being used: # python example this_is_my_variable = 'a' def this_is_my_function(): Is there a more preferable, definitive coding style for Python? A: As the Style Guide for Python Code admits, The naming conventions of Python's library are a bit of a mess, so we'll never get this completely consistent Note that this refers just to Python's standard library. If they can't get that consistent, then there hardly is much hope of having a generally-adhered-to convention for all Python code, is there? From that, and the discussion here, I would deduce that it's not a horrible sin if one keeps using e.g. Java's or C#'s (clear and well-established) naming conventions for variables and functions when crossing over to Python. Keeping in mind, of course, that it is best to abide with whatever the prevailing style for a codebase / project / team happens to be. As the Python Style Guide points out, internal consistency matters most. Feel free to dismiss me as a heretic. :-) Like the OP, I'm not a "Pythonista", not yet anyway. A: The coding style is usually part of an organization's internal policy/convention standards, but I think in general, the all_lower_case_underscore_separator style (also called snake_case) is most common in Python. A: As mentioned, PEP 8 says to use lower_case_with_underscores for variables, methods and functions. I prefer using lower_case_with_underscores for variables and mixedCase for methods and functions; it makes the code more explicit and readable.
Thus following the Zen of Python's "explicit is better than implicit" and "Readability counts" A: There is PEP 8, as other answers show, but PEP 8 is only the styleguide for the standard library, and it's only taken as gospel therein. One of the most frequent deviations of PEP 8 for other pieces of code is the variable naming, specifically for methods. There is no single predominate style, although considering the volume of code that uses mixedCase, if one were to make a strict census one would probably end up with a version of PEP 8 with mixedCase. There is little other deviation from PEP 8 that is quite as common. A: further to what @JohnTESlade has answered. Google's python style guide has some pretty neat recommendations, Names to Avoid * *single character names except for counters or iterators *dashes (-) in any package/module name *\__double_leading_and_trailing_underscore__ names (reserved by Python) Naming Convention * *"Internal" means internal to a module or protected or private within a class. *Prepending a single underscore (_) has some support for protecting module variables and functions (not included with import * from). Prepending a double underscore (__) to an instance variable or method effectively serves to make the variable or method private to its class (using name mangling). *Place related classes and top-level functions together in a module. Unlike Java, there is no need to limit yourself to one class per module. *Use CapWords for class names, but lower_with_under.py for module names. Although there are many existing modules named CapWords.py, this is now discouraged because it's confusing when the module happens to be named after a class. ("wait -- did I write import StringIO or from StringIO import StringIO?") Guidelines derived from Guido's Recommendations A: I personally use Java's naming conventions when developing in other programming languages as it is consistent and easy to follow. 
That way I am not continuously struggling over what conventions to use which shouldn't be the hardest part of my project! A: Whether or not being in class or out of class: A variable and function are lowercase as shown below: name = "John" def display(name): print("John") And if they're more than one word, they're separated with underscore "_" as shown below: first_name = "John" def display_first_name(first_name): print(first_name) And, if a variable is a constant, it's uppercase as shown below: FIRST_NAME = "John" A: David Goodger (in "Code Like a Pythonista" here) describes the PEP 8 recommendations as follows: * *joined_lower for functions, methods, attributes, variables *joined_lower or ALL_CAPS for constants *StudlyCaps for classes *camelCase only to conform to pre-existing conventions A: Most python people prefer underscores, but even I am using python since more than 5 years right now, I still do not like them. They just look ugly to me, but maybe that's all the Java in my head. I simply like CamelCase better since it fits better with the way classes are named, It feels more logical to have SomeClass.doSomething() than SomeClass.do_something(). If you look around in the global module index in python, you will find both, which is due to the fact that it's a collection of libraries from various sources that grew overtime and not something that was developed by one company like Sun with strict coding rules. I would say the bottom line is: Use whatever you like better, it's just a question of personal taste. A: Personally I try to use CamelCase for classes, mixedCase methods and functions. Variables are usually underscore separated (when I can remember). This way I can tell at a glance what exactly I'm calling, rather than everything looking the same. A: There is a paper about this: http://www.cs.kent.edu/~jmaletic/papers/ICPC2010-CamelCaseUnderScoreClouds.pdf TL;DR It says that snake_case is more readable than camelCase. 
That's why modern languages use (or should use) snake_case wherever they can. A: See Python PEP 8: Function and Variable Names: Function names should be lowercase, with words separated by underscores as necessary to improve readability. Variable names follow the same convention as function names. mixedCase is allowed only in contexts where that's already the prevailing style (e.g. threading.py), to retain backwards compatibility. A: The Google Python Style Guide has the following convention: module_name, package_name, ClassName, method_name, ExceptionName, function_name, GLOBAL_CONSTANT_NAME, global_var_name, instance_var_name, function_parameter_name, local_var_name. A similar naming scheme should be applied to a CLASS_CONSTANT_NAME. A: Lenin has told... I'm from the Java/C# world too. And SQL as well. I have scrutinized myself in attempts to find at-first-sight understandable examples of complex constructions, like a list in a dictionary of lists where everything is an object. As for me, camelCase or its variants should become the standard for any language. Underscores should be preserved for complex sentences. A: Typically, one follows the conventions used in the language's standard library.
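Pulling the PEP 8 and Google-style advice above into one place, here is a small, self-consistent Python example; the names themselves (ConnectionPool and so on) are invented for illustration:

```python
MAX_RETRIES = 3                      # constant: ALL_CAPS_WITH_UNDERSCORES

class ConnectionPool:                # class name: CapWords
    def __init__(self):
        self._idle_count = 0         # "internal" attribute: single leading underscore

    def acquire_connection(self, timeout_seconds):   # method + args: lower_with_under
        retry_count = 0              # local variable: lower_with_under
        return f"conn(timeout={timeout_seconds}, retry={retry_count})"

pool = ConnectionPool()
print(pool.acquire_connection(30))   # conn(timeout=30, retry=0)
```

Module files holding such a class would themselves be named lower_with_under.py, per the guideline quoted above.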
{ "language": "en", "url": "https://stackoverflow.com/questions/159720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1018" }
Q: How to find oracle error codes that could result from a connection error? I would like to handle an OracleException thrown when my network/database connection is interrupted; where can I find out what error codes I might receive? I guess since we are talking about a connection interruption these would technically be TNS errors such as ORA-12560 "TNS:protocol adapter error." But I have noticed a couple of others depending on where exactly the connection is lost and would like to get a full list. A: Take a look at Oracle Database Error Messages 11g Release 1 (11.1). And here are the search results for TNS errors. A: There's a full list here: http://ora-code.com But note that some of them, like "TNS:protocol adapter error", can actually mean many different things. A: ORA-12154 "TNS:could not resolve service name", ORA-12203 "TNS:unable to connect to destination", ORA-12500 "TNS:listener failed to start a dedicated server process", ORA-12545 "TNS:name lookup failure", ORA-12560 "TNS:protocol adapter error"
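Since driver exceptions usually carry the ORA- code in their message text, one pragmatic option is to extract the code and compare it against a known connection-related set. This is a sketch in Python rather than C#, and the code list below is illustrative, not exhaustive:

```python
import re

# Illustrative subset of TNS/connection-related codes; extend from the
# official Oracle error-message reference for your environment.
CONNECTION_ERROR_CODES = {"ORA-12154", "ORA-12203", "ORA-12500",
                          "ORA-12545", "ORA-12560"}

def is_connection_error(message):
    # Oracle codes have the fixed shape ORA-NNNNN, so a regex is enough.
    match = re.search(r"ORA-\d{5}", message)
    return bool(match) and match.group(0) in CONNECTION_ERROR_CODES

print(is_connection_error("ORA-12560: TNS:protocol adapter error"))    # True
print(is_connection_error("ORA-00942: table or view does not exist"))  # False
```

The same code-set check translates directly to catching OracleException and inspecting its error number in .NET.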
{ "language": "en", "url": "https://stackoverflow.com/questions/159721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Remove “Save target as…” from IE Is there any way to remove "save target as" from Internet Explorer's right-click menu by using group policies or registry hacks? Failing that, is there a simple programmatic way? A: Update: Aside from all my preachings below, here's the answer you are looking for: There is a Group Policy to turn off the "Save As..." menu item from the File menu in Internet Explorer (IE5+), which can be deployed per-machine or per-user. However, that group policy does not control the "Save Target As..." from the context menu. Unless you are part of the IT department of the company the user is working for, attempting to limit the user actions in the HTTP agent is never a good idea for multiple reasons: * *You should not mess with the user's computer *You might be breaking other applications *You don't know what HTTP agent the user is using *Relying on limiting the user actions is at best futile, as any sophisticated user will probably find a way to circumvent your limitation *It's the quickest way to alienate your users And even if you are part of the IT department, you should try to limit your control over the user's actions as much as possible. A: If you are trying to implement some kind of DRM scheme for websites - just don't. They never work and just annoy your users. A: @1800 INFORMATION Maybe we could give Stan the benefit of the doubt? Maybe he's working for some corporate IT department that's trying to prevent end-users from downloading virus-laden apps? I mean there are non-evil reasons why someone might try to implement this sort of functionality. If this is the case (you're working for some Corporate IT department and you've been tasked with preventing people from downloading files from the internet) as others have pointed out there may be better approaches to what you're trying to achieve. Assuming he has been tasked with this chore by his bosses, it's a relevant question.
One option (but one that would probably not be palatable in most IT environments) would be to dump IE and use an open source browser where you could simply modify the source to remove the "Save As..." option. But, as I said, unless things have changed dramatically, most corporate IT departments would never consider dropping IE in favor of another browser. A: Nope. (this part is just because of the answer minimum size limit) A: A better description of why you're trying to do this would be helpful. Disabling the menu item won't keep people from downloading documents, for example. Any file links they click are still going to end up in Temporary Internet Files. On a page-by-page basis, you can use Javascript to trap the right-click event, and refuse to show the menu, but that's easily worked around by even moderately-sophisticated users. A: Why don't you try removing the right-click altogether? Or if you need your own menu, you can create your own popup easily using divs. There is no way to modify the default menu and to disable a part of the menu as such. See how I disabled the right-click context menu here: http://www.codeproject.com/tips/42554/Javascript-hack-to-disable-Right-Click-and-Text-Se.aspx I hope this will help you.
{ "language": "en", "url": "https://stackoverflow.com/questions/159732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Converting MS Word Documents to PDF in ASP.NET Similar questions have been asked, but nothing exactly like mine, so here goes. We have a collection of Microsoft Word documents on an ASP.NET web server with merge fields whose values are filled in as a result of user form submissions. After the field merge, the server must convert the document to PDF and stream it down to the browser. Our first inclination was to use the Visual Studio Tools for Office API; however, we ran into this warning from Microsoft: Microsoft does not currently recommend, and does not support, Automation of Microsoft Office applications from any unattended, non-interactive client application or component (including ASP, ASP.NET, DCOM, and NT Services), because Office may exhibit unstable behavior and/or deadlock when Office is run in this environment. It looks like the field manipulation can be done using the Open XML SDK, but what's the best way to convert Word 2007 documents to PDF without opening Word? The optimal solution would be low-cost, scalable, have a low memory footprint, be easy to deploy, and have a .NET API. A: Check out Microsoft's resource on Saving Word 2007 Documents to PDF and XPS Formats using C# or VB. A: It's not exactly Open Source, but Aspose has a couple products which can do that, Aspose.Pdf.Kit Aspose.Pdf.Kit is a non-graphical PDF® document manipulation component that enables both .NET and Java developers to manage existing PDF files as well as manage form fields embedded within PDF files. Aspose.Pdf is perfect for creating new PDF files; however, developers often need to edit already existing PDF documents. Aspose.Pdf.Kit allows them to do just that. Aspose.Pdf.Kit allows developers to create powerful applications for merging data directly into PDF documents as well as for updating and managing PDF documents. Aspose.Pdf.Kit is a wonderful product and works great with the rest of our PDF products. 
and Aspose.pdf Aspose.Pdf is a non-graphical PDF® document reporting component that enables either .NET or Java applications to create PDF documents from scratch without utilizing Adobe Acrobat®. Aspose.Pdf is very affordably priced and offers a wealth of strong features including: compression, tables, graphs, images, hyperlinks, security and custom fonts. Aspose.Pdf supports the creation of PDF files through API, XML templates and XSL-FO files. Aspose.Pdf is very easy to use and is provided with 14 fully featured demos written in both C# and Visual Basic. Check out the API and demos. You can download a DLL for free to try it out. I've used both before and they work out great. There's also iTextSharp, which is a C# port of iText, a Java PDF converter. I've heard some people try it with mixed results. A: The question is "MS Word Documents to PDF in ASP.NET" so I am very puzzled why Aspose.Pdf and Aspose.Pdf.Kit are recommended above. You need to use Aspose.Words because that's the component that supports Microsoft Word documents to PDF conversion. A: ActivePdf DocConverter - http://www.activepdf.com/ But it requires Office installed on the server for good quality conversion. A: Aspose.Words may be the best option for you, but it doesn't convert all visual elements perfectly. Have a look at the Muhimbi PDF Converter Web Services. It runs on Windows as a service, but can be accessed from any non-Windows web-services-capable environment including Java and .NET. Although this solution requires MS-Office to be installed on a server (not necessarily the same server as your application), it is very robust and provides perfect conversion fidelity. It goes to great lengths to get around the deadlock problems Microsoft refer to in their KB article. To generate or modify MS-Word files I recommend using the free Open XML SDK for Microsoft Office. Eric White maintains a really good blog about it. Disclaimer: I worked on this product. Having said that, it works great.
A: You should try using OpenOffice for this. It is free and supports a whole range of file conversions. I have used it to convert DOC & DOCX files to HTML format with fantastic results. A: ABCpdf is another popular component that'll let you convert Word documents to PDF under ASP.NET; however, I believe it too makes use of Microsoft Office or OpenOffice. http://www.websupergoo.com/abcpdf-office-docs.htm A: The Microsoft PDF add-in for Word seems to be the best solution for now, but you should take into consideration that it does not convert all Word documents correctly to PDF, and in some cases you will see a huge difference between the Word document and the output PDF. Unfortunately, I couldn't find any API that would convert all Word documents correctly. The only solution I found to ensure the conversion was 100% correct was converting the documents through a printer driver. The downside is that documents are queued and converted one by one, but you can be sure the resulting PDF is exactly like the Word document. I personally preferred using UDC (Universal Document Converter) and installed Foxit Reader (free version) on the server too, then printed the documents by starting a "Process" and setting its Verb property to "print". You can also use a FileSystemWatcher to set a signal when the conversion has completed.
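If you go the OpenOffice route suggested above, recent OpenOffice/LibreOffice builds expose a headless batch-convert mode on the soffice binary. Below is a hedged sketch (in Python) that only builds the command line; the soffice path, and the double-dash flag spelling used by modern LibreOffice (older OpenOffice builds used single-dash flags), are assumptions about your server:

```python
def build_convert_command(soffice_path, input_doc, output_dir):
    # Headless batch conversion; run one conversion at a time, since
    # soffice does not cope well with concurrent instances.
    return [soffice_path, "--headless", "--convert-to", "pdf",
            "--outdir", output_dir, input_doc]

cmd = build_convert_command("/usr/bin/soffice", "/tmp/merge_result.doc", "/tmp/pdf")
print(" ".join(cmd))
# /usr/bin/soffice --headless --convert-to pdf --outdir /tmp/pdf /tmp/merge_result.doc
```

You would then execute it with something like subprocess.run(cmd, check=True) and pick up the resulting PDF from the output directory to stream to the browser.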
{ "language": "en", "url": "https://stackoverflow.com/questions/159744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Using a subdomain to identify a client I'm working on building a Silverlight application where we want to be able to have a client hit a URL like: http://{client}.domain.com/ and log in, where the {client} part is their business name. So, for example, Google's would be: http://google.domain.com/ What I was wondering was whether anyone has been able, in Silverlight, to use this subdomain model to make decisions on the call to the web server, so that you can switch to a specific database to run a query? Unfortunately, it's something that is quite necessary for the project, as we are trying to make it easy for their employees to get their company-specific information from our software. A: On the server side you can check the HTTP 1.1 Host header to see how the user came to your server and do the necessary customization based on that. A: I think you cannot do this with Silverlight alone; I know you cannot do this without problems with JavaScript, Ajax, etc. That is because a subdomain is - for security reasons - treated differently from a sub-page by the browsers. What about the following idea: insert a rewrite rule into your web server software. So if http://google.domain.com is called, the web server itself rewrites the URL to something like http://www.domain.com/google/ (or better: http://www.domain.com/customers/google/). Would that help? A: Georgi: That would help if it were static, but alas, it's all going to be dynamic. My hope was to have one deployment for the application, and to use the http://google.domain.com/ idea to switch to the correct database for the user. I recall doing this once when we built an ASP.NET website, using the domain context to figure out what skin to use, etc. Ates: Can you explain more about what you are saying... sounds like you are close to what I am trying to come up with. Have you seen such a tutorial for this?
The only other way I have come up with to make this work is to have a metabase so that, when the user logs in, it switches them to the appropriate database as required... I was also thinking that telling client X to hit http://ClientX.domain.com/ would have been sweeter than saying to hit http://www.domain.com/ and log in. Having clients hit their own name, and seeing it personalized for them right from the login screen, would have been much more appealing for the client base. A: @Richard B: No, I can't think of any such tutorial that I've seen before. I'll try to be more verbose. The server-side approach in more detail: * *Direct *.example.com to the same IP in your DNS settings. *The backend app that handles login checks the Host HTTP header (e.g. the "HTTP_HOST" server variable on some platforms). That would contain the exact subdomain.example.com that the client used to reach your server. Extract the subdomain part and continue... There can also be a client-side-only approach. I don't know much about Silverlight, but I'm assuming that you should be able to interface Silverlight with JavaScript. You could read document.location with JavaScript and pass it to your Silverlight applet, whereupon further data-fetching logic would rely on the subdomain that was passed in by JavaScript.
I recall a method by which an application can "spoof" another domain name in certain instances. I take it that in this case I would need to do such a configuration? Much to research yet, I believe. A: Wouldn't it work to put the service on a specific subdomain itself, such as wcf.example.com, and then set up a cross-domain policy file on the service to allow access to it? As long as this works, you could just load the Silverlight app in the proper subdomain and then pass that subdomain to your service and let it do its thing. Some examples of this below: * *Silverlight Cross Domain Services *Silverlight Cross Domain Policy Helpers
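The subdomain-extraction step discussed in this thread can be sketched in plain JavaScript - for the client-side approach that reads the location and hands the result to the applet or service. The helper name and host values below are made up for illustration:

```javascript
// Hypothetical helper: pull the {client} subdomain out of a host name.
// Assumes hosts of the form {client}.example.com.
function clientFromHost(host) {
  var parts = host.split('.');
  // Need at least {client}.domain.tld; otherwise there is no subdomain.
  return parts.length >= 3 ? parts[0] : null;
}

// In a browser you would feed it window.location.hostname.
var client = clientFromHost('google.example.com'); // "google"
```

The same split could equally be done server-side against the Host header; the client-side version only makes sense if the value is then passed to a service that re-validates it.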
{ "language": "en", "url": "https://stackoverflow.com/questions/159768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Oracle ROWNUM pseudocolumn I have a complex query with GROUP BY and ORDER BY clauses, and I need a sorted row number (1...2...(n-1)...n) returned with every row. Using ROWNUM (whose value is assigned to a row after it passes the predicate phase of the query but before the query does any sorting or aggregation) gives me a non-sorted list (4...567...123...45...). I cannot use the application for counting and assigning numbers to each row. A: You could do it as a subquery, so have: select q.*, rownum from (select... group by etc..) q That would probably work... don't know if there is anything better than that. A: Is there a reason that you can't just do SELECT rownum, a.* FROM (<<your complex query including GROUP BY and ORDER BY>>) a A: Can you use an in-line query? ie SELECT cols, ROWNUM FROM (your query) A: Assuming that your query is already ordered in the manner you desire and you just want a number to indicate what row in the order it is: SELECT ROWNUM AS RowOrderNumber, Col1, Col2, Col3... FROM ( [Your Original Query Here] ) and replace "Colx" with the names of the columns in your query. A: I also sometimes do something like: SELECT * FROM (SELECT X,Y FROM MY_TABLE WHERE Z=16 ORDER BY MY_DATE DESC) WHERE ROWNUM=1 A: If you want to use ROWNUM to do anything more than limit the total number of rows returned in a query (e.g. AND ROWNUM < 10) you'll need to alias ROWNUM: select * from (select rownum rn, a.* from (<sorted query>) a) where rn between 500 and 1000
{ "language": "en", "url": "https://stackoverflow.com/questions/159769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Do objects added to the SqlException.Data collection need to be [Serializable]? Do objects added to the SqlException.Data collection need to be [Serializable]? A: Yes, they need to be. The Exception type's ISerializable implementation includes the Data property in the serialized state, so all objects in the Data property (which is an IDictionary) must be serializable. Making exception classes and instances serializable is good practice even if you aren't going to build a distributed app. A: Well, strictly speaking, no... you can add any type to an Exception's Data dictionary... but why would you ask? What boundary do you think they are going to be serialized across? A: If the exception is going to be propagated across appdomain boundaries, the exceptions and the data they contain need to be serializable. One such scenario would be a client-server application communicating over remoting. If the server throws an exception and it needs to be handled at the client side, the framework will have to serialize/deserialize it.
{ "language": "en", "url": "https://stackoverflow.com/questions/159773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is Ruby a functional language? Wikipedia says Ruby is a functional language, but I'm not convinced. Why or why not? A: Ruby is a multi-paradigm language that supports a functional style of programming. A: Whether a language is or is not a functional language is unimportant. Functional Programming is a thesis, best explained by Philip Wadler (The Essence of Functional Programming) and John Hughes (Why Functional Programming Matters). A meaningful question is, 'How amenable is Ruby to achieving the thesis of functional programming?' The answer is 'very poorly'. I gave a talk on this just recently. Here are the slides. A: Ruby is an object-oriented language that can support other paradigms (functional, imperative, etc). However, since everything in Ruby is an object, it's primarily an OO language. Example: "hello".reverse() = "olleh"; every string is a string object instance, and so on and so forth. Read up here or here A: It depends on your definition of a “functional language”. Personally, I think the term is itself quite problematic when used as an absolute. There are more aspects to being a “functional language” than mere language features, and most depend on where you're looking from. For instance, the culture surrounding the language is quite important in this regard. Does it encourage a functional style? What about the available libraries? Do they encourage you to use them in a functional way? Most people would call Scheme a functional language, for example. But what about Common Lisp? Apart from the multiple-/single-namespace issue and guaranteed tail-call elimination (which some CL implementations support as well, depending on the compiler settings), there isn't much that makes Scheme as a language more suited to functional programming than Common Lisp, and still, most Lispers wouldn't call CL a functional language. Why? 
Because the culture surrounding it heavily depends on CL's imperative features (like the LOOP macro, for example, which most Schemers would probably frown upon). On the other hand, a C programmer may well consider CL a functional language. Most code written in any Lisp dialect is certainly much more functional in style than your usual block of C code, after all. Likewise, Scheme is very much an imperative language as compared to Haskell. Therefore, I don't think there can ever be a definite yes/no answer. Whether to call a language functional or not heavily depends on your viewpoint. A: Ruby does support higher-order functions (see Array#map, inject, & select), but it is still an imperative, object-oriented language. One of the key characteristics of a functional language is that it avoids mutable state. Functional languages do not have the concept of a variable as you would have in Ruby, C, Java, or any other imperative language. Another key characteristic of a functional language is that it focuses on defining a program in terms of "what", rather than "how". When programming in an OO language, we write classes & methods to hide the implementation (the "how") from the "what" (the class/method name), but in the end these methods are still written using a sequence of statements. In a functional language, you do not specify a sequence of execution, even at the lowest level. A: I most definitely think you can use functional style in Ruby. One of the most critical aspects of being able to program in a functional style is whether the language supports higher-order functions... which Ruby does. That said, it's easy to program in Ruby in a non-functional style as well. Another key aspect of functional style is to not have state, and to have real mathematical functions that always return the same value for a given set of inputs. This can be done in Ruby, but it is not enforced in the language, as it is in something more strictly functional like Haskell. 
So, yeah, it supports functional style, but it also will let you program in a non-functional style as well. A: Ruby isn't really much of a multi-paradigm language either, I think. Multi-paradigm tends to be used by people wanting to label their favorite language as something which is useful in many different areas. I'd describe Ruby as an object-oriented scripting language. Yes, functions are first-class objects (sort of), but that doesn't really make it a functional language. IMO, I might add. A: Recursion is common in functional programming. Almost any language does support recursion, but recursive algorithms are often inefficient if there is no tail call optimization (TCO). Functional programming languages are capable of optimizing tail recursion and can execute such code in constant space. Some Ruby implementations do optimize tail recursion, others don't, but in general Ruby implementations are not required to do TCO. See Does Ruby perform Tail Call Optimization? So, if you write some Ruby in a functional style and rely on the TCO of some particular implementation, your code may be very inefficient in another Ruby interpreter. I think this is why Ruby is not a functional language (neither is Python). A: Strictly speaking, it doesn't make sense to describe a language as "functional"; most languages are capable of functional programming. Even C++ is. Functional style is more or less a subset of imperative language features, supported with syntactic sugar and some compiler optimizations like immutability and tail-recursion flattening. The latter arguably is a minor implementation-specific technicality and has nothing to do with the actual language. The x64 C# 4.0 compiler does tail-recursion optimization, whereas the x86 one doesn't for whatever stupid reason. Syntactic sugar can usually be worked around to some extent or another, especially if the language has a programmable precompiler (i.e. C's #define). 
It might be slightly more meaningful to ask, "does language __ support imperative programming?", and the answer, for instance with Lisp, is "no". A: I submit that supporting, or having the ability to program in, a functional style does not a functional language make. I can even write Java code in a functional style if I want to hurt my colleagues, and myself a few weeks on. Having a functional language is not only about what you can do, such as higher-order functions, first-class functions and currying. It is also about what you cannot do, like side-effects in pure functions. This is important because it is a big part of the reason why functional programs are, or functional code in general is, easier to reason about. And when code is easier to reason about, bugs become shallower and float to the conceptual surface where they can be fixed, which in turn gives less buggy code. Ruby is object-oriented at its core, so even though it has reasonably good support for a functional style, it is not itself a functional language. That's my non-scientific opinion anyway. Edit: In retrospect, and with consideration for the fine comments I have received to this answer thus far, I think the object-oriented versus functional comparison is one of apples and oranges. The real differentiator is that of being imperative in execution, or not. Functional languages have the expression as their primary linguistic construct, and the order of execution is often undefined or defined as being lazy. Strict execution is possible but only used when needed. In an imperative language, strict execution is the default, and while lazy execution is possible, it is often kludgy to do and can have unpredictable results in many edge cases. Now, that's my non-scientific opinion. A: Ruby will have to meet the following requirements in order to be "TRULY" functional. Immutable values: once a “variable” is set, it cannot be changed. 
In Ruby, this means you effectively have to treat variables like constants. This is not fully supported in the language; you will have to freeze each variable manually. No side-effects: when passed a given value, a function must always return the same result. This goes hand in hand with having immutable values; a function can never take a value and change it, as this would be causing a side-effect that is tangential to returning a result. Higher-order functions: these are functions that allow functions as arguments, or use functions as the return value. This is, arguably, one of the most critical features of any functional language. Currying: enabled by higher-order functions, currying is transforming a function that takes multiple arguments into a function that takes one argument. This goes hand in hand with partial function application, which is transforming a multi-argument function into a function that takes fewer arguments than it did originally. Recursion: looping by calling a function from within itself. When you don't have access to mutable data, recursion is used to build up and chain data construction. This is because looping is not a functional concept, as it requires variables to be passed around to store the state of the loop at a given time. Lazy evaluation, or delayed evaluation: delaying processing of values until the moment when it is actually needed. If, as an example, you have some code that generated a list of Fibonacci numbers with lazy evaluation enabled, this would not actually be processed and calculated until one of the values in the result was required by another function, such as puts. Proposal (just a thought): it would be great to have some kind of mode directive to declare files as using the functional paradigm, for example: mode 'functional' A: Please have a look at the beginning of the book "A-Great-Ruby-eBook". It discusses the very specific topic you are asking about. You can do different types of programming in Ruby. 
If you want to program functionally, you can do it. If you want to program imperatively, you can do it. How functional Ruby is, in the end, is a question of definition. Please see the reply by the user camflan.
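To make the contrast concrete, here is a small sketch of what a functional style looks like in plain Ruby - frozen (immutable) data, higher-order functions, and currying, with no variables mutated along the way:

```ruby
# Functional-style Ruby: the input is frozen, nothing below mutates it,
# and every result is produced by a pure transformation.
nums = [1, 2, 3, 4, 5].freeze

doubled = nums.map { |n| n * 2 }                # higher-order function
total   = nums.inject(0) { |sum, n| sum + n }   # fold instead of a loop

add       = ->(a, b) { a + b }                  # first-class function (lambda)
increment = add.curry[1]                        # partial application via currying

increment.call(4)  # => 5
```

None of this is enforced by the language - dropping the `freeze` and mutating `nums` in place is equally idiomatic Ruby, which is the thread's point.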
{ "language": "en", "url": "https://stackoverflow.com/questions/159797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: VB.Net Automating MS Word for Spell Check Capabilities An application currently in development has a requirement to use MS Word to perform spell check on certain textareas within the application. I currently have Office 2007 (which uses the Office 12 COM objects). My question is, if I add in the Office 12 objects, what will happen on boxes which have Office 2003? This applies both to other development boxes and to end users. Am I correct in believing that the end result will be that the spell-check capabilities will not be available for those users? And if I used an Office 11 object, would that mean that the users would be unable to perform the spell checks if they have Office 07 installed? A: We gave up on trying to use a dependency on Word, as users may have differing versions installed, or no Office installation at all! We opted for NetSpell instead. A: I am guessing here, but if it is the case that you can't use the 2007 PIA (Primary Interop Assembly) with a 2003 installation, you could try accessing the PIA via reflection, as I would guess the calls you want won't change between the two, and then it won't matter - you'll use whichever is installed. If you are installing the PIA as well, you can then either get the user to tell you which they have, or be a little more clever and just try 2007 and, if it fails, try 2003. Like I said, I'm guessing here, but it might be worth a try. EDIT: I found this link about Office PIAs. This refers to Excel but actually covers Office in general. I don't envy the task you have. Looks like you'll need to detect the PIA (which may or may not be installed) and act accordingly. Sounds like a job for reflection to me. A: Newer versions of Office will maintain most if not all compatibility with older versions of COM objects. Meaning that if you want to program against Office 2003 and 2007, you will need to use the Office 11 COM objects as a dependency, as they were the newest available when 2003 was released. 
As long as you verify that the methods you need exist in both versions of the COM objects, you should have no problems, as long as you use the older COM objects. Unfortunately, although I have used this solution for my own work, I have not tested it with Spell Check. In the end, make sure that you test your code with all versions of Office that you wish to integrate with. A: My gut reaction to this question is to simply suggest you go another route. Try using a 3rd-party spell-check control. They are relatively inexpensive (and you may find some free controls). At least that way you can control the version of the control included with your app and be able to rely on its functionality. Quite frankly, I'm surprised this library isn't built into Windows already. Sure it's complicated with all of the languages Windows supports, but these days it's similar to copy/paste in terms of user expectations. A: You can actually package both the Office 11 and Office 12 interops needed to work in BOTH versions. It takes some minor work, but I managed to do it. I do a check in the registry to see which interop to call and then execute the spell checking with the correct version. It even goes so far as to check whether you have Word installed and throws an error alert that you can't spell check without having Word. We're tied to using Word due to the medical dictionary, tied into Office, that we're required to use. Do a search on interop or COM wrappers and I think you'll find you can use both fairly easily.
{ "language": "en", "url": "https://stackoverflow.com/questions/159799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Python Web Services ElementTree has become the accepted standard for interacting with XML. What is the prevalent web service/SOAP library in use today? A: I'm not sure about an accepted standard, but I've found SOAPpy to be a fairly straightforward and useful library for handling SOAP-based web services. SOAPy is a SOAP/XML Schema Library for Python. Given either a WSDL or SDL document, SOAPy discovers the published API for a web service and exposes it to Python applications as transparently as possible. IBM provide a good walk-through and example on their site for getting started with SOAPpy. SOAPpy's no longer under active development, but is instead being folded into Zolera SOAP Infrastructure (ZSI) at the Python Web Services Project. This project, however, has also not seen much activity since November last year. A: soaplib is very easy to use and seems to be active. http://web.archive.org/web/20090729125144/https://wiki.github.com/jkp/soaplib/ A: Old question, but for anyone else who is asking this question, I've had good success with suds: https://fedorahosted.org/suds/
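Whichever SOAP library you pick, the envelope itself is just XML, so ElementTree (mentioned in the question) is enough to sketch one by hand. The method name, namespace, and parameters below are made-up placeholders, not a real service's API:

```python
# A minimal sketch: building a SOAP 1.1 envelope with the standard
# library's ElementTree, without any third-party SOAP toolkit.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def make_envelope(method, ns, params):
    """Wrap a method call and its parameters in a SOAP Body."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

xml = make_envelope("GetQuote", "urn:example", {"symbol": "GOOG"})
```

A real client would then POST this string over HTTP with a `SOAPAction` header - which is roughly the plumbing that SOAPpy, ZSI, soaplib, and suds hide from you.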
{ "language": "en", "url": "https://stackoverflow.com/questions/159802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I scroll a UITableView to a section that contains no rows? In an app I'm working on, I have a plain-style UITableView that can contain a section containing zero rows. I want to be able to scroll to this section using scrollToRowAtIndexPath:atScrollPosition:animated: but I get an error when I try to scroll to this section due to the lack of child rows. Apple's calendar application is able to do this: if you look at your calendar in list view, and there are no events in your calendar for today, an empty section is inserted for today, and you can scroll to it using the Today button in the toolbar at the bottom of the screen. As far as I can tell, Apple may be using a customized UITableView, or they're using a private API... The only workaround I can think of is to insert an empty UITableViewCell that's 0 pixels high and scroll to that. But it's my understanding that having cells of varying heights is really bad for scrolling performance. Still, I'll try it anyway; maybe the performance hit won't be too bad. Update Since there seems to be no solution to this, I've filed a bug report with Apple. If this affects you too, file a duplicate of rdar://problem/6263339 (Open Radar link) if you want to get this fixed faster. Update #2 I have a decent workaround to this issue; take a look at my answer below. A: If your section has no rows, use this: 
let indexPath = IndexPath(row: NSNotFound, section: section)
tableView.scrollToRow(at: indexPath, at: .middle, animated: true)
A: This is an old question, but Apple still haven't added anything which helps or fixed the crash bug where the section has no rows. 
For me, I really needed to make a new section scroll to the middle when added, so I now use this code:

if (rowCount > 0) {
    [self.tableView scrollToRowAtIndexPath: [NSIndexPath indexPathForRow: 0 inSection: sectionIndexForNewFolder]
                          atScrollPosition: UITableViewScrollPositionMiddle
                                  animated: TRUE];
} else {
    CGRect sectionRect = [self.tableView rectForSection: sectionIndexForNewFolder];
    // Try to get a full-height rect which is centred on the sectionRect.
    // This produces a very similar effect to UITableViewScrollPositionMiddle.
    CGFloat extraHeightToAdd = sectionRect.size.height - self.tableView.frame.size.height;
    sectionRect.origin.y -= extraHeightToAdd * 0.5f;
    sectionRect.size.height += extraHeightToAdd;
    [self.tableView scrollRectToVisible:sectionRect animated:YES];
}

Hope you like it - it's based on Mike Akers' code as you can see, but does the calculation for scrolling to the middle instead of the top. Thanks Mike - you're a star. A: A Swift approach to the same:

if rows > 0 {
    let indexPath = IndexPath(row: 0, section: section)
    self.tableView.setContentOffset(CGPoint.zero, animated: true)
    self.tableView.scrollToRow(at: indexPath, at: .top, animated: true)
} else {
    let sectionRect : CGRect = tableView.rect(forSection: section)
    tableView.scrollRectToVisible(sectionRect, animated: true)
}

A: Since using [NSIndexPath indexPathForRow:NSNotFound inSection:EXAMPLE] broke for me in Xcode 11.3.1 (iOS simulator - 13.3), I decided to use:

NSUInteger index = [self.sectionTypes indexOfObject:@(EXAMPLE)];
if (index != NSNotFound) {
    CGRect rect = [self.tableView rectForSection:index];
    [self.tableView scrollRectToVisible:rect animated:YES];
}

A: UPDATE: Looks like this bug is fixed in iOS 3.0. You can use the following NSIndexPath to scroll to a section containing 0 rows: [NSIndexPath indexPathForRow:NSNotFound inSection:section] I'll leave my original workaround here for anyone still maintaining a project using the 2.x SDK. 
Found a decent workaround:

CGRect sectionRect = [tableView rectForSection:indexOfSectionToScrollTo];
[tableView scrollRectToVisible:sectionRect animated:YES];

The code above will scroll the tableview so the desired section is visible, but not necessarily at the top or bottom of the visible area. If you want to scroll so the section is at the top, do this:

CGRect sectionRect = [tableView rectForSection:indexOfSectionToScrollTo];
sectionRect.size.height = tableView.frame.size.height;
[tableView scrollRectToVisible:sectionRect animated:YES];

Modify sectionRect as desired to scroll the desired section to the bottom or middle of the visible area. A: I think a blank row is probably the only way to go there. Is it possible to redesign the UI such that the "empty" row can display something useful? I say try it out, and see what the performance is like. They give pretty dire warnings about using transparent sub-views in your list items, and I didn't find that it mattered all that much in my application.
{ "language": "en", "url": "https://stackoverflow.com/questions/159821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Determine logged on user on a remote Windows machine Is there a way to determine who is logged on to a particular (remote) machine given the IP address (or the workstation name) of the machine? * *The machines in question are on an Active Directory domain *The user running the script probably won't have any special rights on either their local or the remote machine *Operating system is Windows XP Any programming language is fine, but ideally * *VBScript (yeah I know) *C# *Java *DOS Batch file A: PsLoggedOn from SysInternals will provide this from a batch file; however, the user would require admin access on the remote machine. I doubt you can get this information without Administrator access. A: This is difficult to do, depending on the permissions on the machine. One way is to query WMI on the remote machine and check the owner of the explorer.exe process. A: You don't need admin access. Just use the Net APIs. Ask on news://194.177.96.26/comp.os.ms-windows.programmer.win32 where it's a FAQ.
{ "language": "en", "url": "https://stackoverflow.com/questions/159837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is it possible to interpolate my angle bracket, percent, equals <%= %> syntax in external javascript files? Oftentimes when mixing jQuery with ASP.NET, I need to use the ASP.NET angle-bracket-percent, <% %>, syntax within a jQuery selector. If I would like to separate the JavaScript from markup into different files, is there still a way to evaluate my JavaScript file so the angle-bracket percents are interpolated before reaching the client browser? A: No, you'll need to refactor your JavaScript to accept that information as parameters. So, instead of

jQuery('#<%=MainPanel.ClientId%>').hide('slow');

do something like this:

function hidePanel(panelId) {
    jQuery('#' + panelId).hide('slow');
}

which you can call from your page with

hidePanel('<%=MainPanel.ClientId%>');

A: I made an attempt to separate the JavaScript on the search grid user control from the HTML in the .ascx file. In the first iteration I used the jQuery(document).ready function to attach my initialization. The problem with this is that <%= %> tags used within jQuery selectors were not interpolated correctly, and the controls the JavaScript acted on were not found with the jQuery selectors. Next, I attempted to create a JSON object in the Page initialization and write that out using the ASP.NET method Page.ClientScript.RegisterClientScriptBlock. This worked OK, but with drawbacks: it hard-wired the JSON object's name and keys in both the ASP.NET file and the JavaScript file. This is disadvantageous because now there exist "two points of truth" to maintain, and furthermore there is the potential for name collision in the final rendered page. The most elegant solution within ASP.NET while utilizing jQuery is to create an Ajax script behavior in JavaScript. Then, within the ASP.NET code-behind, register the script behavior's properties in the GetScriptDescriptors() method of the IScriptControl interface, adding the server-side control's ClientID as a property to the script descriptor. 
// Ajax JavaScript code below:

Type.registerNamespace('SearchGrid');

// Define the behavior and its properties.
SearchGrid.ButtonBehavior = function() {
    SearchGrid.ButtonBehavior.initializeBase(this);
    this._lnkSearchID = null;
}

// Create the prototype for the behavior.
SearchGrid.ButtonBehavior.prototype = {
    initialize: function() {
        SearchGrid.ButtonBehavior.callBaseMethod(this, 'initialize');
        jQuery('#' + this._lnkSearchID).click(function() {
            alert('We clicked!');
        });
    },
    dispose: function() {
        SearchGrid.ButtonBehavior.callBaseMethod(this, 'dispose');
        jQuery('#' + this._lnkSearchID).unbind();
    }
}

// Register the class as a type that inherits from Sys.Component.
SearchGrid.ButtonBehavior.registerClass('SearchGrid.ButtonBehavior', Sys.Component);

if (typeof (Sys) !== 'undefined')
    Sys.Application.notifyScriptLoaded();

ASP.NET code below:

public partial class SearchGrid : System.Web.UI.UserControl, IScriptControl
{
    private ScriptManager sm;

    // Initialization
    protected override void OnPreRender(EventArgs e)
    {
        if (!this.DesignMode)
        {
            // Test for a ScriptManager and register with it if it exists.
            sm = ScriptManager.GetCurrent(Page);
            if (sm == null)
                throw new ApplicationException("A ScriptManager control must exist on the current page.");
            sm.RegisterScriptControl(this);
        }
        base.OnPreRender(e);
    }

    protected override void Render(HtmlTextWriter writer)
    {
        if (!this.DesignMode)
            sm.RegisterScriptDescriptors(this);
        base.Render(writer);
    }

    // IScriptControl Members
    public IEnumerable<ScriptDescriptor> GetScriptDescriptors()
    {
        ScriptBehaviorDescriptor desc = new ScriptBehaviorDescriptor("SearchGrid.ButtonBehavior", this.ClientID);
        desc.AddProperty("lnkSearchID", this.lnkSearch.ClientID);
        yield return desc;
    }

    public IEnumerable<ScriptReference> GetScriptReferences()
    {
        ScriptReference reference = new ScriptReference();
        reference.Path = ResolveClientUrl("SearchGrid.ButtonBehavior.js");
        return new ScriptReference[] { reference };
    }
}

The advantage here is that you may create stand-alone reusable controls with their JavaScript behavior 
contained in its own separate file (or as a web resource), while passing the state and context - which would otherwise be interpolated with angle-bracket-percent-equals syntax - necessary for jQuery to do its work. A: If you want to evaluate <% code blocks %> as ASP.NET code in a JavaScript file, you can just put the JavaScript in an ASPX file and reference it from a SCRIPT element. script.js.aspx:

function hideElements() {
<% foreach(var elementId in Request.QueryString["hide"].Split(',')) { %>
    jQuery('#<%= elementId %>').hide('slow');
<% } %>
}

page.aspx:

<script src="script.js.aspx?hide=<%= GetElementIds() %>" type='text/javascript'></script>

page.aspx.cs:

public string GetElementIds()
{
    return string.Join(",", new []{control1.ClientID, control2.ClientID});
}

A: You could also handle .js files as .aspx files; this way you won't lose IntelliSense and code formatting while you're editing them. Just add this to web.config:

<system.webServer>
  <handlers>
    <add name="Dynamic JS" path="*.js" verb="*" type="System.Web.UI.PageHandlerFactory" resourceType="Unspecified"/>
  </handlers>
</system.webServer>
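The parameter-passing refactor suggested at the top of this thread generalizes: instead of sprinkling <%= %> through an external script, hand all the server-generated IDs over once as a plain object. A minimal sketch - the type name and the sample control ID are invented for illustration:

```javascript
// Hypothetical pattern: the external .js stays free of <%= %> by taking
// server-generated control IDs as a parameter object at construction time.
function SearchBehavior(ids) {
  this.ids = ids; // e.g. { searchLink: 'ctl00_Main_lnkSearch' }
}

SearchBehavior.prototype.selectorFor = function (key) {
  return '#' + this.ids[key];
};

// The page itself would emit something like:
//   new SearchBehavior({ searchLink: '<%= lnkSearch.ClientID %>' });
var behavior = new SearchBehavior({ searchLink: 'ctl00_Main_lnkSearch' });
```

This keeps a single point of truth on the server (the code-behind that builds the object) and lets the external file stay a static, cacheable script.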
{ "language": "en", "url": "https://stackoverflow.com/questions/159842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: UnsatisfiedLinkError: The specified procedure could not be found I'm writing some JNI code in C++ to be called from an applet on Windows XP. I've been able to successfully run the applet and have the JNI library loaded and called, even going so far as having it call functions in other DLLs. I got this working by setting up the PATH system environment variable to include the directory all of my DLLs are in. So, the problem is that when I add another call that uses a new external DLL, suddenly an UnsatisfiedLinkError is thrown when loading the library. The message is: 'The specified procedure could not be found'. This doesn't seem to be a problem with a missing dependent DLL, because I can remove a dependent DLL and get a different message about the dependent DLL missing. From what I've been able to find online, it appears that this message means that a native Java function implementation is missing from the DLL, but it's odd that it works fine without this extra bit of code. Does anyone know what might be causing this? What kinds of things can give a 'The specified procedure could not be found' message for an UnsatisfiedLinkError? A: There is a chance that the DLL was built using C++ (as opposed to C); unless you took care to declare the procedure extern "C", this is one possible reason. Try listing all the functions exported from the DLL. If the list includes your function, then you're good. A: I figured out the problem. This was a doozy. The message "The specified procedure could not be found" for UnsatisfiedLinkError indicates that a function in the root DLL or in a dependent DLL could not be found. The most likely cause of this in a JNI situation is that the native JNI function is not exported correctly. But this can apparently happen if a dependent DLL is loaded and that DLL is missing a function required by its parent. By way of example, we have a library named input.dll. 
The DLL search order is to always look in the application directory first and the PATH directories last. In the past, we always ran executables from the same directory as input.dll. However, there is another input.dll in the windows system directory (which is in the middle of the DLL search order). So when running this from a java applet, if I include the code described above in the applet, which causes input.dll to be loaded, it loads the input.dll from the system directory. Because our code is expecting certain functions in input.dll which aren't there (because it's a different DLL) the load fails with an error message about missing procedures. Not because the JNI functions are exported wrong, but because the wrong dependent DLL was loaded and it didn't have the expected functions in it. A: Usually, when linking to other libraries, you need to link to the relevant .lib file. It sounds like you aren't referencing all the lib files you need. Check what isn't linking and make sure you add its lib to the list for the linker. A: Did you create the new external DLL using the standard JNI procedure? I.e., using javah and so forth? If so, then I am not sure what is wrong. If not, then the procedure you're trying to call hasn't been exported (as mentioned by anjanb). I am aware of two ways of exporting functions: a separate export list and marking specific functions with __declspec(dllexport). Can't access variable in C++ DLL from a C app has a little more information on the topic of DLLs. A: Compile your c++ code in debug mode. Then insert the DebugBreak(); statement where you would like to start debugging. Run the java code. When the DebugBreak() statement is encountered you will get a popup with a Debug button on it. Click on it. Dev Studio will open with your program in machine code. Step over with the debugger twice and you should be able to step over your source code. 
A: If you have followed the JNI manuals and examples but are still getting the same missing-procedure error, the problem may be with your PATH variable. Do the steps below and run again: * *Be sure you set the JAVA_HOME variable to your JDK folder (not the JRE, because the JRE doesn't contain the JNI headers) Example: in the environment variable settings panel, define var: JAVA_HOME val: C:\Program Files\Java\jdk1.7.0_11 *Add %JAVA_HOME%\bin to your PATH variable After doing those steps, your application can find the JNI procedure names and link to the JNI DLL correctly. So I hope you don't get this missing procedure error again.
{ "language": "en", "url": "https://stackoverflow.com/questions/159846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: subversion diff including new files I have some local changes to an open source project which uses Subversion as its source control. (I do not have commit access on the original project repository.) My change adds a file, but this file is not included in the output of "svn diff". (It may be worth noting that the new file is a binary, not plain text.) How can I make a patch which includes the new files? $ svn st A tests/foo.zip $ svn diff $ A: The fact that your file is binary is exactly why it is not displayed I'm afraid. Subversion's diff command only does textual diffs/patches (even though Subversion internally can handle binary file differences efficiently between versions). A: I experienced similar behavior to Pozsar. And his answer worked for me better than the normal svn diff --force. However, if running on a DOS machine (e.g. via Cygwin), you may need to modify his answer slightly. The following diff + patch worked for patching my text + binary files in Cygwin using the --binary arg: svn diff --force --diff-cmd /usr/bin/diff -x "-au --binary" OLD-URL NEW-URL > mybinarydiff.diff patch -p0 --binary -i mybinarydiff.diff A: There is a --force option to the diff command, but it produces an incorrect patch file for binaries on my machine. Using it with the --diff-cmd option works for me though: svn diff --force --diff-cmd /usr/bin/diff -x -au I think this produces exactly what you wanted. A: If you're building a patch, you might want to use plain old 'diff' with the --new-file option which treats the missing file as empty. Note that the syntax for this option may actually vary depending on what version of plain old diff you're using.
{ "language": "en", "url": "https://stackoverflow.com/questions/159853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: log4net Configuration Section for NUnit Test Project I am running NUnit with the project named AssemblyTest.nunit. The test calls another assembly which uses the log4net assembly. This is using nunit version 2.4.3 with the .net 2.0 framework. In TestFixtureSetup I am calling log4net.Config.XmlConfigurator.Configure( ) and am getting the following error: System.Configuration.ConfigurationErrorsException: Configuration system failed to initialize ---> System.Configuration.ConfigurationErrorsException: Unrecognized configuration section log4net. (C:\path\to\assembly.dll.config line 7) Is there a way to fix this without renaming the config file to 'AssemblyTest.config'? A: I don't know why you guys are trapped in config files; for NUnit, if you'd like to see the logs in the Text Output window of the NUnit test runner, all you need is the following line of code: BasicConfigurator.Configure(); The best place to add this line is the constructor of the test class, e.g. [TestFixture] public class MyTest { log4net.ILog log = log4net.LogManager.GetLogger(typeof(MyTest)); public MyTest() { BasicConfigurator.Configure(); } [SetUp] public void SetUp() { log.Debug(">SetUp"); } [TearDown] public void TearDown() { log.Debug(">TearDown"); } [Test] public void TestNothing() { log.Debug(">TestNothing"); } } A: Create a separate config file for log4net with root element log4net. In TestFixtureSetup create a FileInfo object for this config file and give it as argument to log4net.Config.XmlConfigurator.Configure( ). A: I had the same problem because I forgot to add the log4net definition in the configSections element. So, if you want to put log4net elements into the app.config, you need to include the configSections element (which tells where log4net elements are defined) at the top of the config file. Try it like this: <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <log4net> ... </log4net> </configuration>
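The separate-config-file approach mentioned above (create a log4net-only file and hand a FileInfo to Configure) looks roughly like this. This is a sketch, not code from the question: the file name log4net.config and the fixture name are made up, and it assumes NUnit 2.x and log4net are referenced.

```csharp
using System.IO;
using log4net;
using log4net.Config;
using NUnit.Framework;

[TestFixture]
public class LoggingFixture
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(LoggingFixture));

    [TestFixtureSetUp]
    public void FixtureSetUp()
    {
        // Point log4net at a standalone file whose root element is <log4net>.
        // Because this bypasses the assembly's .config file entirely, the
        // "Unrecognized configuration section" error cannot occur.
        XmlConfigurator.Configure(new FileInfo("log4net.config"));
    }

    [Test]
    public void LogsSomething()
    {
        Log.Debug("configured from a standalone log4net.config");
    }
}
```

With this in place, the test assembly's .config file no longer needs a log4net section at all.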
{ "language": "en", "url": "https://stackoverflow.com/questions/159856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I sort my code (by method name) in Visual Studio 2008? Short of cutting and pasting, is there a way to sort the methods in my classes in Visual Studio 2008? I like orderly code. A: If you are using Resharper, you can change the Type Members Layout template so that it orders your code however you like. See under Resharper>Options>Languages>C#>Type Members Layout. alt text http://www.jetbrains.com/resharper/features/screenshots/40/automatic_member_layout_full.png You can, for example, put methods with particular attributes first in your file... e.g. methods marked with NUnit's [Setup] and [TearDown] could come before methods marked with [Test] by placing a block like: <!--Fixture Setup/Teardown--> <Entry> <Match> <And> <Kind Is="method"/> <Or> <HasAttribute CLRName="NUnit.Framework.TestFixtureSetUpAttribute" Inherit="true"/> <HasAttribute CLRName="NUnit.Framework.TestFixtureTearDownAttribute" Inherit="true"/> </Or> </And> </Match> </Entry> before: <!--Test methods--> <Entry> <Match> <And Weight="100"> <Kind Is="method"/> <HasAttribute CLRName="NUnit.Framework.TestAttribute" Inherit="false"/> </And> </Match> <Sort> <Name/> </Sort> </Entry> and then have a catch-all for everything else: <!--All other members--> <Entry> <Sort> <Name/> </Sort> </Entry> The template system is very powerful and should meet your needs. A: This is a free plug-in that does what you are asking: http://www.visualstudiogallery.com/ExtensionDetails.aspx?ExtensionID=800978aa-2aac-4440-8bdf-6d1a76a5c23c Update Unfortunately the link is outdated. You can download Regionerate at http://www.rauchy.net/regionerate/docs/2007/05/download.html A: You may find or be able to make a macro to do this, but there is no built in functionality of VS to sort your methods. Some third party productivity tools like Resharper and CodeRush provide some functionality to reorder your code. A: ReSharper has Code Reordering functionality and a File Structure view that lets you do drag and drop reordering. 
A: Resharper will do a good job in a limited way. It depends on how much you want. For example, it won't go and reorder your overrides in an asp.net page based on lifecycle, or anything like that, but it will keep properties, fields, methods and whatnot clearly grouped. EDIT: By the way, I was referring to auto reordering, aka reformatting.
{ "language": "en", "url": "https://stackoverflow.com/questions/159862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Synchronized ListViews in .Net I'm working on a control to tie together the view from one ListView to another so that when the master ListView is scrolled, the child ListView view is updated to match. So far I've been able to get the child ListViews to update their view when the master scrollbar buttons are clicked. The problem is that when clicking and dragging the ScrollBar itself, the child ListViews are not updated. I've looked at the messages being sent using Spy++ and the correct messages are getting sent. Here is my current code: public partial class LinkedListViewControl : ListView { [DllImport("User32.dll")] private static extern bool SendMessage(IntPtr hwnd, UInt32 msg, IntPtr wParam, IntPtr lParam); [DllImport("User32.dll")] private static extern bool ShowScrollBar(IntPtr hwnd, int wBar, bool bShow); [DllImport("user32.dll")] private static extern int SetScrollPos(IntPtr hWnd, int wBar, int nPos, bool bRedraw); private const int WM_HSCROLL = 0x114; private const int SB_HORZ = 0; private const int SB_VERT = 1; private const int SB_CTL = 2; private const int SB_BOTH = 3; private const int SB_THUMBPOSITION = 4; private const int SB_THUMBTRACK = 5; private const int SB_ENDSCROLL = 8; public LinkedListViewControl() { InitializeComponent(); } private readonly List<ListView> _linkedListViews = new List<ListView>(); public void AddLinkedView(ListView listView) { if (!_linkedListViews.Contains(listView)) { _linkedListViews.Add(listView); HideScrollBar(listView); } } public bool RemoveLinkedView(ListView listView) { return _linkedListViews.Remove(listView); } private void HideScrollBar(ListView listView) { //Make sure the list view is scrollable listView.Scrollable = true; //Then hide the scroll bar ShowScrollBar(listView.Handle, SB_BOTH, false); } protected override void WndProc(ref Message msg) { if (_linkedListViews.Count > 0) { //Look for WM_HSCROLL messages if (msg.Msg == WM_HSCROLL) { foreach (ListView view in _linkedListViews) { SendMessage(view.Handle, 
WM_HSCROLL, msg.WParam, IntPtr.Zero); } } } } } Based on this post on the MS Tech Forums I tried to capture and process the SB_THUMBTRACK event: protected override void WndProc(ref Message msg) { if (_linkedListViews.Count > 0) { //Look for WM_HSCROLL messages if (msg.Msg == WM_HSCROLL) { Int16 hi = (Int16)((int)msg.WParam >> 16); Int16 lo = (Int16)msg.WParam; foreach (ListView view in _linkedListViews) { if (lo == SB_THUMBTRACK) { SetScrollPos(view.Handle, SB_HORZ, hi, true); int wParam = 4 + 0x10000 * hi; SendMessage(view.Handle, WM_HSCROLL, (IntPtr)(wParam), IntPtr.Zero); } else { SendMessage(view.Handle, WM_HSCROLL, msg.WParam, IntPtr.Zero); } } } } // Pass message to default handler. base.WndProc(ref msg); } This will update the location of the child ListView ScrollBar but does not change the actual view in the child. So my questions are: * *Is it possible to update the child ListViews when the master ListView ScrollBar is dragged? *If so, how? A: I wanted to do the same thing, and after searching around I found your code here, which helped, but of course didn't solve the problem. But after playing around with it, I have found a solution. The key came when I realized that since the scroll buttons work, that you can use that to make the slider work. In other words, when the SB_THUMBTRACK event comes in, I issue repeated SB_LINELEFT and SB_LINERIGHT events until my child ListView gets close to where the master is. Yes, this isn't perfect, but it works close enough. In my case, my master ListView is called "reportView", while my child ListView is called "summaryView". 
Here's my pertinent code: public class MyListView : ListView { public event ScrollEventHandler HScrollEvent; protected override void WndProc(ref System.Windows.Forms.Message msg) { if (msg.Msg==WM_HSCROLL && HScrollEvent != null) HScrollEvent(this,new ScrollEventArgs(ScrollEventType.ThumbTrack, (int)msg.WParam)); base.WndProc(ref msg); } } And then the event handler itself: reportView.HScrollEvent += new ScrollEventHandler((sender,e) => { if ((ushort) e.NewValue != SB_THUMBTRACK) SendMessage(summaryView.Handle, WM_HSCROLL, (IntPtr) e.NewValue, IntPtr.Zero); else { int newPos = e.NewValue >> 16; int oldPos = GetScrollPos(reportView .Handle, SB_HORZ); int pos = GetScrollPos(summaryView.Handle, SB_HORZ); int lst; if (pos != newPos) if (pos<newPos && oldPos<newPos) do { lst=pos; SendMessage(summaryView.Handle,WM_HSCROLL,(IntPtr)SB_LINERIGHT,IntPtr.Zero); } while ((pos=GetScrollPos(summaryView.Handle,SB_HORZ)) < newPos && pos!=lst); else if (pos>newPos && oldPos>newPos) do { lst=pos; SendMessage(summaryView.Handle,WM_HSCROLL,(IntPtr)SB_LINELEFT, IntPtr.Zero); } while ((pos=GetScrollPos(summaryView.Handle,SB_HORZ)) > newPos && pos!=lst); } }); Sorry about the odd formatting of the while loops there, but that's how I prefer to code things like that. The next problem was getting rid of the scroll bars in the child ListView. I noticed you had a method called HideScrollBar. This didn't really work for me. I found a better solution in my case was leaving the scroll bar there, but "covering" it up instead. I do this with the column header as well. I just slide my child control up under the master control to cover the column header. And then I stretch the child to fall out of the panel that contains it. And then to provide a bit of a border along the edge of my containing panel, I throw in a control to cover the visible bottom edge of my child ListView. It ends up looking rather nice. 
I also added an event handler to sync changing column widths, as in: reportView.ColumnWidthChanging += new ColumnWidthChangingEventHandler((sender,e) => { summaryView.Columns[e.ColumnIndex].Width = e.NewWidth; }); While this all seems a bit of a kludge, it works for me. A: This is conjecture just to get the mental juices flowing so take it as you will: In the scroll handler for the master list, can you call the scroll handler for the child list (passing the sender and eventargs from the master)? Add this to your Form load: masterList.Scroll += new ScrollEventHandler(this.masterList_scroll); Which references this: private void masterList_scroll(Object sender, System.ScrollEventArgs e) { childList_scroll(sender, e); } private void childList_scroll(Object sender, System.ScrollEventArgs e) { childList.value = e.NewValue; } A: I would create my own class, inheriting from ListView to expose the Vertical and Horizontal scroll events. Then I would create scroll handlers in my form to synchronize the two controls. This is sample code which should allow a listview to publish scroll events: public class MyListView : System.Windows.Forms.ListView { const int WM_HSCROLL = 0x0114; const int WM_VSCROLL = 0x0115; private ScrollEventHandler evtHScroll_m; private ScrollEventHandler evtVScroll_m; public event ScrollEventHandler OnHScroll { add { evtHScroll_m += value; } remove { evtHScroll_m -= value; } } public event ScrollEventHandler OnVScroll { add { evtVScroll_m += value; } remove { evtVScroll_m -= value; } } protected override void WndProc(ref System.Windows.Forms.Message msg) { if (msg.Msg == WM_HSCROLL && evtHScroll_m != null) { evtHScroll_m(this, new ScrollEventArgs(ScrollEventType.ThumbTrack, msg.WParam.ToInt32())); } if (msg.Msg == WM_VSCROLL && evtVScroll_m != null) { evtVScroll_m(this, new ScrollEventArgs(ScrollEventType.ThumbTrack, msg.WParam.ToInt32())); } base.WndProc(ref msg); } } Now handle the scroll events in your form: Set up a PInvoke method to be able to send a 
windows message to a control: [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern int SendMessage(IntPtr hWnd, [MarshalAs(UnmanagedType.U4)] int iMsg, int iWParam, int iLParam); Set up your event handlers (lstMaster and lstChild are two ListViews): lstMaster.OnVScroll += new ScrollEventHandler(this.lstMaster_OnVScroll); lstMaster.OnHScroll += new ScrollEventHandler(this.lstMaster_OnHScroll); const int WM_HSCROLL = 0x0114; const int WM_VSCROLL = 0x0115; private void lstMaster_OnVScroll(Object sender, System.ScrollEventArgs e) { SendMessage(lstChild.Handle, WM_VSCROLL, e.NewValue, 0); } private void lstMaster_OnHScroll(Object sender, System.ScrollEventArgs e) { SendMessage(lstChild.Handle, WM_HSCROLL, e.NewValue, 0); } A: A naive solution to your problem can be handling the paint message in the parent list view and checking if the linked list views are displaying the correct data. If they don't, then update them to display the correct data by calling the EnsureVisible method.
{ "language": "en", "url": "https://stackoverflow.com/questions/159864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: From SourceSafe to Team Foundation Server Our team would like to move from the Visual SourceSafe (VSS) to the Team Foundation Server (TFS). I know that the TFS is much more than just a version control system, but for the first time I would like to use it this way. Currently our projects are organized within the single solution that consists of the shared part (common library) and many customer projects. Is there some kind of migration guide that would describe such a challenge? Or TFS enforces its own usage scenarios (versioning of projects, releases, etc.)? A: TFS certainly has much more potential than just as a source repository, but it's quite understandable why you would want to migrate source control first. The migration utility of choice is generally VSSConverter.exe which allows you to map VSS paths to Team Project source control paths and is pretty well documented in this walkthrough here. There's another tool (TFS Migration and Synchronization Toolkit) available over on CodePlex, but when I compared the two, I determined that VSSConverter has been more widely used and I think is generally accepted as being the tool of choice for VSS migrations. It seems there are a few more answers on this thread here also. Now, the question I think you are really asking is more about guidance on creating Team Projects and structuring? This is a little harder to answer without knowing more about your specific circumstance. Patterns and Practices published a book on CodePlex called the TFS Guide which might help - it describes amongst many things, a suggested Team Project source control structure. It might help in giving you some guidance around how to migrate and/or remap your solution structure. Regards to versioning and branching, check out this site here on branching guidance - it's not a bad overview of some common branching/release management techniques using TFS. If you get through all that reading, you'll really be on top of most of the essential TFS groundwork! 
A: (Feel free to downvote me but...) If you're after better source control then TFS is IMHO overkill. I recommend you look into Subversion. VisualSVN is a superb ($49) plug-in to Visual Studio that works seamlessly alongside arguably the best SVN client TortoiseSVN. In addition they provide a free, easy to set up, Windows package of the Subversion server-side stuff called VisualSVN Server. To learn all about the Subversion way of working there's the great Red Bean book. (Not affiliated with VisualSVN, just a Subversion fanboy) A: TFS and VSS are radically different beasts. That said, the major problems with moving from VSS to TFS is generally in the developer's mind. Check out the following blogs: TFS from a VSS User's perspective: http://blogs.msdn.com/robcaron/archive/2006/10/29/901115.aspx And of course, the original http://sstjean.blogspot.com/2006/10/document-from-vss-to-tfs-introduction.html A: When we switched from Sourcesafe to TFS2005 the biggest hurdle were Sourcesafe's shared files, the "Get latest on checkout" approach and the branch/merge "support" in Sourcesafe. Everybody feared branching and merging in Sourcesafe and it took some time convincing all colleagues that it is not that bad with TFS. We decided to not migrate files from Sourcesafe. We used TFS2005 for a new project and kept the old stuff in Sourcesafe. We didn't want to keep the project and folder structure which had grown over the years and was rather unorganized. The old stuff is history now and we do all development work with TFS2008.
{ "language": "en", "url": "https://stackoverflow.com/questions/159869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: SqlMembershipProvider initialize method not being called I have done a custom implementation of MembershipProvider, but for some reason the Initialize method is not being invoked, and thus my provider is not setting itself up properly from the config parameters. Who invokes it in the first place, and how do I get it to work? A: I assume this is an ASP.NET application. Do you have a reference to your membership provider in your web.config (it can also be in your machine.config, but this is lesser used)? You should have something like the following in the system.web section of your web.config: <membership defaultProvider="MyCustomMembershipProvider"> <providers> <clear/> <add name="MyCustomMembershipProvider" type="MyNamespace.MyCustomMembershipProvider" connectionStringName="..." ... /> </providers> </membership> Make sure also that your provider is inheriting from the System.Web.Security.MembershipProvider abstract class. See this MSDN article for more detail and examples.
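For reference, ASP.NET itself calls Initialize the first time the provider is used, passing the attributes of the <add> element as a NameValueCollection. Below is a hedged sketch of just that contract, showing where connectionStringName arrives; a real provider must derive from System.Web.Security.MembershipProvider and override its many abstract members (omitted here), and the property names are made up for illustration:

```csharp
using System.Collections.Specialized;

public class MyCustomMembershipProvider // a real one derives from MembershipProvider
{
    public string Name { get; private set; }
    public string ConnectionStringName { get; private set; }

    // ASP.NET calls this once, with 'name' and the remaining attributes of
    // <add name="..." type="..." connectionStringName="..."/> from web.config.
    public void Initialize(string name, NameValueCollection config)
    {
        Name = name;
        ConnectionStringName = config["connectionStringName"];
    }
}
```

If the <membership> section is missing, or defaultProvider doesn't point at your entry, this method is simply never called, which matches the symptom in the question.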
{ "language": "en", "url": "https://stackoverflow.com/questions/159875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to make cffile.oldFileSize return a correct value? When working with cffile in ColdFusion, after an upload of a file to a webserver, the cffile structure is created that is supposed to have a value in it called "oldFileSize". Every time I do an upload and examine that value, it has the new file's size, not the overwritten file's size. Is there some setting somewhere to correct that or is this a bug in cffile in cf8? Clarification: If you use the cffile command to upload a file to a server, it will attempt to store that file in the location you tell it in the command. If the destination already has a file there with the same name and path, then one of the options in your cffile command can be to overwrite any existing file. If you do that, a structure is returned called cffile with an attribute called "oldFileSize". The documentation states that oldFileSize should be the size of the file that was overwritten. Instead, it's returning the size of the file being uploaded. A: If the oldfilesize attribute is not returning correctly, I would use nameconflict=unique to preserve the old file. Then, you can use cfdirectory to check the old filesize, and cffile action="delete" and action="rename" to replace the old file, so that you have essentially overwritten the old file, only manually. A bit of work, but if you need the information.... A: Ben Doom is correct about the work-around to the problem, but if you're not seeing the documented behavior, that's a bug and you should report it! Currently, there is no public bug tracker you can submit to (although there is a push for one and we should probably see it soon-ish), so the defacto standard is to post it as a comment on the documentation page. Adobe staff does read and respond to comments and they will likely either respond that it will be fixed, or acknowledge that it is a bug but indicate there is no plan to fix it at this time. Either way, the responsible thing to do is to report the bug. 
A: What overwritten file? It seems you are talking about two files when you only refer to one.
{ "language": "en", "url": "https://stackoverflow.com/questions/159881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it OK to put a database initialization call in a C# constructor? I've seen this in various codebases, and wanted to know if this is generally frowned upon or not. For example: public class MyClass { public int Id; public MyClass() { Id = new Database().GetIdFor(typeof(MyClass)); } } A: You can use the disposable pattern if you refer to a DB connection: public class MyClass : IDisposable { private Database db; private int? _id; public MyClass() { db = new Database(); } public int Id { get { if (_id == null) _id = db.GetIdFor(typeof(MyClass)); return _id.Value; } } public void Dispose() { db.Close(); } } Usage: using (var x = new MyClass()) { /* ... */ } //closes DB by calling IDisposable.Dispose() when going out of "using" scope A: It will also make it difficult to write unit tests for the class as you won't be able to force the class to use a Mock/Stub version of the db class. See here: http://en.wikipedia.org/wiki/Dependency_injection A: Yea, you CAN do it, but it's not the best design, and error handling in constructors isn't as tidy as elsewhere. A: Well.. I wouldn't. But then again my approach usually involves the class NOT being responsible for retrieving its own data. A: There are several reasons this is not generally considered good design, some of which (like difficult unit testing and awkward error handling) have already been mentioned. The main reason I would choose not to do so is that your object and the data access layer are now very tightly coupled, which means that any use of that object outside of its original design requires significant rework. As an example, what if you came across an instance where you needed to use that object without any values assigned, for instance to persist a new instance of that class? You now either have to overload the constructor and then make sure all of your other logic handles this new case, or inherit and override. 
If the object and the data access were decoupled then you could create an instance and then not hydrate it. Or if you have a different project that uses the same entities but uses a different persistence layer then the objects are reusable. Having said that I have taken the easier path of coupling in projects in the past :) A: The only problem I can think of with this approach is that any errors from the DB initialization will be propagated as exceptions from the constructor. A: Why would anyone want to use a mock object/stub instead of the real thing? Would you agree that car manufacturers should use paperboard models for crashtests?
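The decoupling argument above can be sketched like this. The IIdGenerator interface and factory are hypothetical names invented for illustration; the point is that the entity no longer news up a Database in its constructor, so tests can inject a stand-in and an "empty" instance costs nothing:

```csharp
using System;

// The entity is now just data; it does not know the database exists.
public class MyClass
{
    public int Id { get; set; }
}

// Hypothetical seam standing in for the data access layer.
public interface IIdGenerator
{
    int GetIdFor(Type type);
}

// A test double needs no database at all.
public class FakeIdGenerator : IIdGenerator
{
    public int GetIdFor(Type type) { return 42; }
}

public class MyClassFactory
{
    private readonly IIdGenerator _ids;

    public MyClassFactory(IIdGenerator ids) { _ids = ids; }

    // Hydrated instance: the id comes from whatever generator was injected.
    public MyClass CreateNew()
    {
        return new MyClass { Id = _ids.GetIdFor(typeof(MyClass)) };
    }

    // Unhydrated instance: no database round trip required.
    public MyClass CreateEmpty() { return new MyClass(); }
}
```

In production you would inject a real database-backed IIdGenerator; in unit tests, the fake.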
{ "language": "en", "url": "https://stackoverflow.com/questions/159886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to execute GetLastError() while debugging in Visual Studio You're stepping through C/C++ code and have just called a Win32 API that has failed (typically by returning some unhelpful generic error code, like 0). Your code doesn't make a subsequent GetLastError() call whose return value you could inspect for further error information. How can you get the error value without recompiling and reproducing the failure? Entering "GetLastError()" in the Watch window doesn't work ("syntax error"). A: As mentioned a couple times, the @err pseudo-register will show the last error value, and @err,hr will show the error as a string (if it can). According to Andy Pennell, a member of the Visual Studio team, starting with VS 7 (Visual Studio .NET 2002), using the '@' character to indicate pseudo-registers is deprecated - they prefer to use '$' (as in $err,hr). Both $ and @ are supported for the time being. You can also use the $err pseudo-register in a conditional breakpoint; so you can break on a line of code only if the last error is non-zero. This can be a very handy trick. Some other pseudo registers that you may find handy (from John Robbins' outstanding book, "Debugging Applications for Microsoft .NET and Microsoft Windows"): * *$tib - shows the thread information block *$clk - shows a clock count (useful for timing functions). To more easily use this, place a $clk watch then an additional $clk=0 watch. The second watch will clear the pseudo register after the display of the current value, so the next step or step over you do gives you the time for that action only. Note that this is a rough timing that includes a fair bit of debugger overhead, but it can still be very useful. A: ERR,hr in a watch window usually does the trick A: "edit and continue" add the code so you can see the error (just don't create a new global variable to store it). It works really well if you can quickly put a call to a pre-existing function that executes this kind of error handling code. 
As a bonus, you can leave the new code there for the future too. If you can't do this, then QBziZ is right "ERR,hr" does it.
{ "language": "en", "url": "https://stackoverflow.com/questions/159888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How do I access SQLite database instance on iPhone? I'm developing an iPhone app that uses the built-in SQLite database. I'm trying to view and open the database via the sqlite3 command line tool so I can execute arbitrary SQL against it. When I run my app in the simulator, the .sqlite file it creates is located at ~/Library/Application Support/iPhone Simulator/User/Applications/. How can I see that file on the physical iPhone? A: Instructions for Xcode 6.0.1 * *Xcode > Open > YourProject *Xcode > Product > Run *Xcode > Window > Devices *(Column 1 - select) Devices > YourDeviceName *(Column 2 - select) Installed Apps > YourAppName *(Column 2 - select) Cog under 'Installed Apps' list *(Pop-Up - select) Download Container... *Save to location *Right click on 'YourAppName.xcappdata' *Select 'Show Package Contents' *AppData > Documents > YourDatabase.sqlite A: In Xcode select window->organizer and expand the node next to your application in the applications section on your phone. Select the black downward pointing arrow next to application data and save the file anywhere on your desktop. Your sqlite database should be in there somewhere. As for how to go about getting it back on the phone once your done i have no clue. A: In XCode 4, you do the same as Lounges suggested, which will save the whole file structure for your app to your destination of choice. Rename the .xcappdata file which is saved to .sqlite so you can open it by double clicking, then you can find the file from the device. A: Exactly in the same way you do on the simulator. There are very few (important) differences between the device and simulator, and file access and library loading are for the most part not part of them. A: Your question remains a little vague. "See" in what sense? Do you create the SQLite database? How? Have you placed it manually in the Simulator's filesystem area? Are you perhaps asking how to do that on the iPhone? 
The easiest way is to precreate an empty database with the sqlite3 command-line tool, have it as a resource in your application, then copy it into your application sandbox's documents folder. You can get the path to your resources folder via NSBundle's pathForResource:ofType: method, then grab the path to your Documents folder via NSSearchPathForDirectoriesInDomains() for the NSDocumentDirectory folder in the NSUserDomainMask, then copy the file via NSFileManager's methods. Otherwise, you can use SQLite's functions to create a new database from scratch by supplying appropriate SQL commands to define its schema. A: This one works if you jailbreak your iPhone.. I don't know why anyone would have any issues with jailbreaking their phone as I've been using it for development for quite some time and found no problems, also it is not uncommon for sqlite to perform differently on the device vs simulator: * *Jail break your phone (there are tutorials all over the web) *Set your cydia user level to developer *Install sqlite3 into your phone: go to cydia > manage > sources > cydia/telesphoreo > sqlite3 *ssh into your phone using iphone tunnel root ssh password: "alpine" *Type which sqlite3 to ensure you have it installed *browse to the location of your db.. a breakpoint in your code should tell you where it is located.. in my code it looks something like this NSArray *paths = NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex: 0]; NSString *pathName = [documentsDirectory stringByAppendingPathComponent:filename]; return pathName; Notice that if you run this on the simulator.. 
you'll get a location like the following: /Users/admin/Library/Application Support/iPhone Simulator/6.0/Applications/42302574-7722-48C1-BE00-91800443DA7C/Documents/email-524200.edb On the device it will look like this: /var/mobile/Applications/FB73857F-A822-497D-A4B8-FBFB269A8699/Documents/email-523600.edb Then just type sqlite3 %dbname% and you can execute sql statements right on your phone.. without copying it over or whatever. A: The Easiest way to do it by far is using iExplorer to download the file from your app. and then use SQLite Professional read-only to read the file. Even thought it is not realtime but at least it is free. :-) A: The Download Container method is the one I found to be the best. However, you have to be careful that some times, if you try to e-mail it or attach it from within the app then the file that is sent out would be empty.
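Once you have pulled the container to your desktop by any of the routes above, any SQLite client can open the file, not just the sqlite3 command-line tool. As a minimal sketch, here is the "execute arbitrary SQL" step using Python's built-in sqlite3 module; the file name YourDatabase.sqlite is a placeholder for whatever your app actually creates:

```python
import sqlite3

# Placeholder path: point this at the .sqlite file saved out of the
# app container (the name here is an assumption, not your app's real name).
db_path = "YourDatabase.sqlite"

conn = sqlite3.connect(db_path)

# Arbitrary SQL works exactly as it would at the sqlite3 command line;
# this lists the tables the app has created.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print(tables)
conn.close()
```

From here you can run any SELECT/UPDATE against the copy without touching the device.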
{ "language": "en", "url": "https://stackoverflow.com/questions/159889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Determine if a program is running on a Remote Desktop Is there a way my program can determine when it's running on a Remote Desktop (Terminal Services)? I'd like to enable an "inactivity timeout" on the program when it's running on a Remote Desktop session. Since users are notorious for leaving Remote Desktop sessions open, I want my program to terminate after a specified period of inactivity. But, I don't want the inactivity timeout enabled for non-RD users. A: The following works if you want to know about YOUR application which is running in YOUR session: BOOL IsRemoteSession(void) { return GetSystemMetrics( SM_REMOTESESSION ); } But not in general for any process ID. If you want to know about any arbitrary process which could be running in any arbitrary session then you can use the below method. You can first convert the process ID to a session ID by calling ProcessIdToSessionId. Once you have the session ID you can use it to call: WTSQuerySessionInformation. You can specify WTSInfoClass as value WTSIsRemoteSession and this will give you the information about if that application is a remote desktop connection or not. BOOL IsRemoteSession(DWORD sessionID) { //In case WTSIsRemoteSession is not defined for you it is value 29 return WTSQuerySessionInformation(WTS_CURRENT_SERVER_HANDLE, sessionID, WTSIsRemoteSession, NULL, NULL); } A: GetSystemMetrics(SM_REMOTESESSION) (as described in http://msdn.microsoft.com/en-us/library/aa380798.aspx) A: Here's the C# managed code i use: /// <summary> /// Indicates if we're running in a remote desktop session. /// If we are, then you MUST disable animations and double buffering i.e. Pay your taxes! /// /// </summary> /// <returns></returns> public static Boolean IsRemoteSession { //This is just a friendly wrapper around the built-in way get { return System.Windows.Forms.SystemInformation.TerminalServerSession; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/159910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: CSS/JavaScript Use Div to grey out section of page Does anybody know a way with JavaScript or CSS to basically grey out a certain part of a form/div in HTML? I have a 'User Profile' form where I want to disable part of it for a 'Non-Premium' member, but want the user to see what is behind the form and place a 'Call to Action' on top of it. Does anybody know an easy way to do this either via CSS or JavaScript? Edit: I will make sure that the form doesn't work on the server side, so CSS or JavaScript will suffice. A: Add this to your HTML: <div id="darkLayer" class="darkClass" style="display:none"></div> And this to your CSS: .darkClass { background-color: white; filter:alpha(opacity=50); /* IE */ opacity: 0.5; /* Safari, Opera */ -moz-opacity:0.50; /* FireFox */ z-index: 20; height: 100%; width: 100%; background-repeat:no-repeat; background-position:center; position:absolute; top: 0px; left: 0px; } And finally this to turn it off and on with JavaScript: function dimOff() { document.getElementById("darkLayer").style.display = "none"; } function dimOn() { document.getElementById("darkLayer").style.display = ""; } Change the dimensions of the darkClass to suit your purposes. A: You might try the jQuery BlockUI plugin. It's quite flexible and is very easy to use, if you don't mind the dependency on jQuery. It supports element-level blocking as well as an overlay message, which seems to be what you need. The code to use it is as simple as: $('div.profileform').block({ message: '<h1>Premium Users only</h1>' }); You should also keep in mind that you may still need some sort of server-side protection to make sure that Non-Premium users can't use your form, since it'll be easy for people to access the form elements if they use something like Firebug. A: If you rely on CSS or JavaScript to prevent a user from editing part of a form, then this can easily be circumvented by disabling CSS or JavaScript.
A better solution might be to present the non-editable information outside of the form for non-premium members, but include the relevant form fields for premium members. A: With opacity //function to grey out the screen $(function() { // Create overlay and append to body: $('<div id="ajax-busy"/>').css({ opacity: 0.5, position: 'fixed', top: 0, left: 0, width: '100%', height: $(window).height() + 'px', background: 'white url(../images/loading.gif) no-repeat center' }).hide().appendTo('body'); }); $.ajax({ type: "POST", url: "Page", data: JSON.stringify({ parameters: XXXXXXXX }), contentType: "application/json; charset=utf-8", dataType: "json", beforeSend: function() { $('#ajax-busy').show(); }, success: function(msg) { $('#ajax-busy').hide(); }, error: function() { $(document).ajaxError(function(xhr, ajaxOptions, thrownError) { alert('status: ' + ajaxOptions.status + '-' + ajaxOptions.statusText + ' \n' + 'error:\n' + ajaxOptions.responseText); }); } });
{ "language": "en", "url": "https://stackoverflow.com/questions/159914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Confused on count(*) and self joins I want to return all application dates for the current month and for the current year. This must be simple, however I cannot figure it out. I know I have 2 dates for the current month and 90 dates for the current year. Right, Left, Outer, Inner: I have tried them all, just throwing code at the wall trying to see what will stick, and none of it works. I either get 2 for both columns or 180 for both columns. Here is my latest select statement. SELECT count(a.evdtApplication) AS monthApplicationEntered, count(b.evdtApplication) AS yearApplicationEntered FROM tblEventDates a RIGHT OUTER JOIN tblEventDates b ON a.LOANid = b.loanid WHERE datediff(mm,a.evdtApplication,getdate()) = 0 AND datediff(yy,a.evdtApplication, getdate()) = 0 AND datediff(yy,b.evdtApplication,getdate()) = 0 A: You don't need any joins at all. You want to count the loanID column from tblEventDates, and you want to do it conditionally, based on the date matching the current month or the current year. So: SELECT SUM(CASE WHEN MONTH(a.evdtApplication) = MONTH(GETDATE()) THEN 1 ELSE 0 END) AS monthTotal, COUNT(*) FROM tblEventDates a WHERE a.evdtApplication BETWEEN '2008-01-01' AND '2008-12-31' What that does is select all the event dates this year, and add up the ones which match your condition. If a date doesn't match the current month, it doesn't add 1. You don't even need a condition for the year, because you're already querying only the rows for that year.
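To see the conditional-count pattern from the answer in action, here is a runnable sketch using SQLite (SQLite has no MONTH()/GETDATE(), so the month test is written with strftime against a fixed "current" date of 2008-09; the table and column names follow the question, and the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblEventDates (LOANid INTEGER, evdtApplication TEXT)")
conn.executemany(
    "INSERT INTO tblEventDates VALUES (?, ?)",
    [(1, "2008-09-05"), (2, "2008-09-20"), (3, "2008-03-14"), (4, "2007-11-02")],
)

# One pass over the year's rows: COUNT(*) gives the year total, and the
# SUM(CASE ...) only adds 1 for rows that also fall in the current month.
month_total, year_total = conn.execute(
    """
    SELECT SUM(CASE WHEN strftime('%Y-%m', evdtApplication) = '2008-09'
               THEN 1 ELSE 0 END),
           COUNT(*)
      FROM tblEventDates
     WHERE strftime('%Y', evdtApplication) = '2008'
    """
).fetchone()
print(month_total, year_total)  # 2 3
```

The 2007 row is excluded by the WHERE clause, so no self join is needed to get both counts.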
{ "language": "en", "url": "https://stackoverflow.com/questions/159916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I loop through a MySQL query via PDO in PHP? I'm slowly moving all of my LAMP websites from mysql_ functions to PDO functions and I've hit my first brick wall. I don't know how to loop through results with a parameter. I am fine with the following: foreach ($database->query("SELECT * FROM widgets") as $results) { echo $results["widget_name"]; } However, if I want to do something like this: foreach ($database->query("SELECT * FROM widgets WHERE something='something else'") as $results) { echo $results["widget_name"]; } Obviously the 'something else' will be dynamic. A: According to the PHP documentation, you should be able to do the following: $sql = "SELECT * FROM widgets WHERE something='something else'"; foreach ($database->query($sql) as $row) { echo $row["widget_name"]; } A: Here is an example of using PDO to connect to a DB, telling it to throw exceptions instead of PHP errors (which will help with your debugging), and using parameterised statements instead of substituting dynamic values into the query yourself (highly recommended): // connect to PDO $pdo = new PDO("mysql:host=localhost;dbname=test", "user", "password"); // the following tells PDO we want it to throw Exceptions for every error. // this is far more useful than the default mode of raising PHP errors $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // prepare the statement. the placeholders allow PDO to handle substituting // the values, which also prevents SQL injection $stmt = $pdo->prepare("SELECT * FROM product WHERE productTypeId=:productTypeId AND brand=:brand"); // bind the parameters $stmt->bindValue(":productTypeId", 6); $stmt->bindValue(":brand", "Slurm"); // initialise an array for the results $products = array(); $stmt->execute(); while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { $products[] = $row; } A: If you like the foreach syntax, you can use the following class: // Wrap a PDOStatement to iterate through all result rows.
// Uses a local cache to allow rewinding. class PDOStatementIterator implements Iterator { public $stmt, $cache, $next; public function __construct($stmt) { $this->cache = array(); $this->stmt = $stmt; } public function rewind() { reset($this->cache); $this->next(); } public function valid() { return (FALSE !== $this->next); } public function current() { return $this->next[1]; } public function key() { return $this->next[0]; } public function next() { // Try to get the next element in our data cache. $this->next = each($this->cache); // Past the end of the data cache if (FALSE === $this->next) { // Fetch the next row of data $row = $this->stmt->fetch(PDO::FETCH_ASSOC); // Fetch successful if ($row) { // Add row to data cache $this->cache[] = $row; } $this->next = each($this->cache); } } } Then to use it: foreach (new PDOStatementIterator($stmt) as $col => $val) { ... } (Note that each() is deprecated as of PHP 7.2, so this class would need reworking on modern PHP.)
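The key idea in the second answer (placeholders plus a fetch loop, instead of interpolating values into the SQL string) is not PHP-specific. As a point of comparison only, here is the same prepare-bind-loop shape in Python's sqlite3 module, with a table and sample data invented to mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows behave like PDO's FETCH_ASSOC arrays
conn.execute("CREATE TABLE widgets (widget_name TEXT, something TEXT)")
conn.executemany(
    "INSERT INTO widgets VALUES (?, ?)",
    [("sprocket", "something else"), ("cog", "other"), ("flange", "something else")],
)

# The ? placeholder keeps the dynamic value out of the SQL string itself,
# exactly as PDO's :named parameters do, so injection is not possible.
names = [
    row["widget_name"]
    for row in conn.execute(
        "SELECT * FROM widgets WHERE something = ?", ("something else",)
    )
]
print(names)  # ['sprocket', 'flange']
```

Whatever the language, the driver sends the value separately from the query text, which is why this pattern is safe for dynamic input.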
{ "language": "en", "url": "https://stackoverflow.com/questions/159924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: How do I make a ListView row draggable in .NET? I am not the most experienced GUI programmer, so bear with me here. I have a custom list view. I would like to be able to drag a row from the ListView to another control on a form. I know how to catch events that are fired when an object is dragged to a control, but I am not sure how to make a row itself draggable. I could always hack together my own solution, but I am hoping that there is a better (read: easier) way of doing this. EDIT: i would really like to drag a copy of the row, but I can always work out the details myself. A: http://www.c-sharpcorner.com/UploadFile/skulkarni/ImlementingDragandDropinListViewControls11252005035642AM/ImlementingDragandDropinListViewControls.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/159925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there any reasonable way to migrate from subversion to cvs? My company unwittingly switched from cvs to subversion and now we're all wishing we had cvs back. I know there are tools to migrate history and changes from cvs to svn, and there's no equivalent to do the reverse. Any suggestions or ideas on how to do this? A: So what is it about SVN that your company dislikes so much and that CVS does better? The designers of SVN went out of their way to make the SVN experience fairly similar to CVS. If you use the Tortoise client as a front end, the experience is very similar. SVN gives you atomic commits, which, while not quite up to the standard of Perforce, is miles in front of CVS. I do have to sympathise with your plight. I upgraded our development team & IT team from CVS to SVN. I got all the right Python scripts to upgrade all the version history, and we have been using SVN happily for nearly 4 years. About three months ago the IT team leader decided to "upgrade" all his projects from SVN to guess what? That's right, the heavy lifter of the version control systems: SourceSafe! I would definitely stick with SVN, or even look at some of the newer distributed systems such as Mercurial. With these systems there is no central server. They rely on being able to branch & merge across dozens or hundreds of peers. You define your own topology, so, for example, you would specify a particular peer as being the one that performs daily builds. A: I don't think the tools exist to go in the other direction, because there's not much demand for it. If you really must do it, it shouldn't be very hard to write a script that walks through the history of the SVN repo, getting each revision and committing it to CVS. BTW, I'm genuinely interested to know what problems you have with SVN. A: Not an upgrade. Do not do this. Seriously, why would you prefer CVS to SVN? CVS is literally a toy that pretends to allow teams to work without explicit communication. It really is terrible.
If you need something other than SVN for whatever reason, look at other version control systems. There are many, and they are almost all better than CVS (in fact, only Visual Source Safe is as poor). A: SVN is not great. SVN is better than CVS. If you want a change, check out Mercurial, Git, or Bazaar. A: One aspect of git has not been discussed in all these other answers: git provides a CVS server emulation, so you might migrate to git (svn to git is easy and well supported) and later use a CVS server interface for accessing the repository in a centralized manner. Nobody has to know you use git in the background, and you don't have to deal with distributed backup issues. A: Your options are probably relatively limited. Remember that active development of CVS stopped a while ago, so there are probably no tools for you from the CVS developers. And since one of the main goals of svn was to be a better CVS, those developers will probably not have expected anyone to move backwards either. But if you don't like subversion, why not have a look at the more modern distributed systems (git, mercurial etc.)? A: When all you have is a hammer, everything looks like a nail. Your best bet is to learn svn; it will make you more knowledgeable. A: Agree with Corporal Touchy. SVN is better than CVS, because it was designed to be - it's roughly the same thing, with some simplifications and new features. With SVN, you can move/rename a file without losing its history; you get safer commits (commits are atomic operations) and global revisions. Anyway, try to get to know it better before switching back to CVS, and even better, try to really understand your needs as a team for a repository. PS: I think Corporal was talking about Mercurial. A: svn was supposed to be better than cvs, but in some areas that didn't work well.
The other distributed tools are a lot faster (svn is slow as hell; even cvs can be faster sometimes), have much more useful features than svn, and are developing rapidly (while seeing any new feature in svn takes YEARS). On the other hand, svn is quite easy to learn and centralized (this is important for some people). The svn team is focused on its own agenda, it's very hard to get support from the developers (compared to other open source projects), and some bug reports exist for a long time without any interest from the developers. I'm disappointed by how the svn project looks and how it's developed, but well, maybe that will change in the future. A: I originally added this as a comment to someone else's answer, but then realized that it was an answer, of sorts. I have done these sorts of transitions before, where there was no existing way to convert from one SCM system to another. It's not rocket science to write a script that takes the list of commits from your SVN repository and iterates through them one at a time, merging them into a newly-created CVS repository. Getting all the branches and tags exactly correct might be a bit more work, but if you want to just save revision history for a few branches, it should be pretty easy. I'm also of the opinion that you won't really gain anything by switching back to CVS, but if you want to do so, then you'll likely be writing your own script. The "svn export" command will undoubtedly be useful in this endeavor. A: The only 2 drawbacks of subversion I can think of for users coming from CVS are * *the speed of checkouts over http(s) *the lack of module aliases The first one can be solved by using svn(+ssh), which is the more comparable setup, as CVS uses its own protocol as well. The second one is a little trickier, but can be emulated by svn:externals (which have their own nasty side effects). If you encountered any other additional drawbacks, I am all ears. A: Just pay attention to one point: Bazaar, Mercurial etc.
(who were advised by some people here) are all distributed version control systems. I found it almost impossible to manage big groups of programmers working on the same source code using these kinds of tools. In my company we use SVN and it's doing a wonderful job. A: No idea why you'd want to do this, but going from SVN -> GIT -> CVS might work. You'd run: git svn clone http://thesvnserver ourrepo Then use the following guide to export back to CVS (not entirely sure this will work): http://issaris.blogspot.com/2005/11/cvs-to-git-and-back.html git cvsexportcommit 4a20cbafdf25a141b31a8333284a332d1a4d6072 There's also git cvsserver.
{ "language": "en", "url": "https://stackoverflow.com/questions/159926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Animating a custom Button ControlTemplate Foreground I want to change/animate the Foreground property of a custom button control template depending on the control's state. Pre-RC0, I set the Foreground of the ContentPresenter, gave it an x:Name, and referenced it in the VisualStateManager transitions. Now, ContentPresenter no longer has a Foreground, since it doesn't inherit from Control anymore. Usually, I would set the Foreground in the Style which is applied to the templated control. But I cannot reference that from the VisualStateManager transitions / states. I also cannot wrap it in a TextBlock which has the Foreground property set, and (edit:) Border has no Foreground property. Help is greatly appreciated. Update: I can solve the problem for some of the removed properties with a Border, but not those relating to font/text, including Foreground. Since it doesn't seem possible, in my particular case I was able to replace the ContentPresenter with a TextBlock. A: There is a post from Jesse Liberty dealing with this issue. In a few words, the idea is that you can't, because you would be forcing any content in the button to have a specific foreground colour, and that decision should be left to the content itself. Anyway, perhaps you may want to take a look at the concept of hijacking dependency properties, which is using another property of the same type for what you want. It isn't a nice practice, but will certainly work. A: replacing the ContentPresenter with a TextBlock works well as long as the button content is not complex. I have an example where the button content has an image and a textblock. In that case, no content is displayed. Replacing the ContentPresenter with a ContentControl, you have your Foreground property back. 
<ControlTemplate TargetType="{x:Type ButtonBase}"> <ContentControl Content="{TemplateBinding Content}" Foreground="{Binding Foreground}" /> </ControlTemplate> A: Put a Border around your ContentControl and make your VSM work for that border control. A: I came up with a solution for this problem, similar to an existing response here, I just noticed. If you're willing to restrict the possible Content types that can be inserted into your template to text, then it will work quite nicely: http://storypodders.com:8081/bodhiSoftware/node/14
{ "language": "en", "url": "https://stackoverflow.com/questions/159928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: One To Many To Itself How would one structure a table for an entity that can have a one to many relationship to itself? Specifically, I'm working on an app to track animal breeding. Each animal has an ID; it's also got a sire ID and a dame ID. So it's possible to have a one to many from the sire or dame to its offspring. I would be inclined to something like this: ID INT NOT NULL PRIMARY KEY SIRE_ID INT DAME_ID INT and record a null value for those animals which were purchased and added to the breeding stock and an ID in the table for the rest. So: * *Can someone point me to an article/web page that discusses modeling this sort of relationship? *Should the ID be an INT or some sort of String? A NULL in the INT would indicate that the animal has no parents in the database but a String with special flag values could be used to indicate the same thing. *Would this possibly be best modeled via two tables? I mean one table for the animals and a separate table solely indicating kinship e. g.: Animal ID INT NOT NULL PRIMARY KEY Kinship ID INT NOT NULL PRIMARY KEY FOREIGN KEY SIRE_ID INT PRIMARY KEY FOREIGN KEY DAME_ID INT PRIMARY KEY FOREIGN KEY I apologize for the above: my SQL is rusty. I hope it sort of conveys what I'm thinking about. A: Well, this is a "normal" one-to-many relationship and the method you suggest is the classical one for solving it. Note that two tables are denormalized (I can't point out exactly where the superkey-is-not-well-should-be-subset-of-other-key-fsck-I-forgot part is, but I'm pretty sure it's there somewhere); the intuitive reason is that a tuple in the first one matches at most a tuple in the second one, so unless you have lots of animals with null sire and dame IDs, it's not a good solution in any prospect (it worsens performance -- need a join -- and does not reduce storage requirements). A: I think your layout using just one table is fine. You definitely want to keep SIRE_ID and DAME_ID in the same data type as ID. 
You also want to declare them as FOREIGN KEYs (it is possible to have a foreign key point back to the same table, and a foreign key can also be null). ID INT NOT NULL PRIMARY KEY SIRE_ID INT REFERENCES TABLENAME (ID) DAME_ID INT REFERENCES TABLENAME (ID) Using this layout, you can easily look up the parent animals, and you could also build an offspring tree for a given animal (for Oracle there is CONNECT BY) A: I asked a similar question a number of months ago on the MySQL website. I would recommend that you take a look at the response that I received from Peter Brawley regarding this type of relationship: http://forums.mysql.com/read.php?135,187196,187196#msg-187196 If you want to research the topic further then I would recommend that you look into Tree Hierarchies on Wikipedia. An alternate suggested architecture (that would be fully normalized) would look something like the following: Table: animal ID | Name | Breed Table: pedigree animal_id | parent_id | parentType (either sire or dame) A: INT is the better choice for the ID column and better suited if you should use a sequence to generate the unique IDs. I don't see any benefit in splitting the design into two tables. A: I don't know about animal breeding, but it sounds like your Sire_ID is the father and Dame_ID is the mother? No problem. One row per animal, null sire_ and dame_ID's for purchased animals, I don't forsee any problems. [ID],[Sire_ID],[Dame_ID]; 0,null,null (male) 1,null,null (female) 2,null,null (female) 3,0,1 (male) 4,0,2 (male) 5,null,null (female) 6,3,5 7,4,5 and so forth. You would likely populate a TreeView or XmlNodeList in a while loop... While (myAnimal.HasChildren) { Animal[] children = GetChildren(Animal.ID) for (int x=0; x<children.length; x++) myAnimal.Children.Add(children[x]); } In this case, Animal.Children is a Collection of Animals. Therefore, myAnimal.Children[0].Father would return myAnimal. 
.Parent[] could be a collection of its two parents, which should work as long as [0] is always one parent (father) and [1] is always the other (mother). Make ID an Autonumber PK and assign Sire_ID and Dame_ID programmatically by returning the IDs of its parents. No foreign key relationships should be necessary, though both parent IDs could reference back to ID if you really want to. A: Use the "connect by" clause with SQL to tell it which hierarchy to follow. A: It's not really a one to many relationship, unless an animal can have many parents. I would leave it as a single table with the unique key ID for the animal, one int field for each of the parents, and probably a text field to use for general notes about the animal, like where it was purchased, if that's the case. A: I think that since it is clear that an animal only has one sire and one dam, using a single table would make the most sense. My preference is to use int or bigint as the row identifier, with a null value signifying no relationship. I would probably then use some other method to uniquely identify animals so they don't end up in the table twice, and create a unique index on that column as well. A: Seems like you want to build something like a tree. What about something like this?: ID Primary Key, Parent_ID Foreign_Key ( data ) There is some functionality for doing queries on tables with relations to themselves. See the syntax of Connect By: http://www.adp-gmbh.ch/ora/sql/connect_by.html
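Here is a runnable sketch of the single-table design using SQLite: NULL parents mark purchased breeding stock, and a recursive common table expression (the portable cousin of the Oracle CONNECT BY mentioned in the answers) walks all descendants of one animal. The sample pedigree reuses the IDs from the earlier answer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE animal (
    id      INTEGER PRIMARY KEY,
    sire_id INTEGER REFERENCES animal(id),
    dame_id INTEGER REFERENCES animal(id)
);
-- NULL parents mark purchased breeding stock.
INSERT INTO animal VALUES (0, NULL, NULL), (1, NULL, NULL), (2, NULL, NULL),
                          (3, 0, 1), (4, 0, 2), (5, NULL, NULL),
                          (6, 3, 5), (7, 4, 5);
""")

# All descendants of animal 0: seed with its direct offspring, then keep
# joining children of anything already found.
rows = conn.execute("""
    WITH RECURSIVE offspring(id) AS (
        SELECT id FROM animal WHERE sire_id = 0 OR dame_id = 0
        UNION
        SELECT a.id FROM animal a
        JOIN offspring o ON a.sire_id = o.id OR a.dame_id = o.id
    )
    SELECT id FROM offspring ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # [3, 4, 6, 7]
```

The same one-table schema answers both directions: parents are a direct lookup of sire_id/dame_id, and offspring trees come from the recursive query.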
{ "language": "en", "url": "https://stackoverflow.com/questions/159934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Changing short date format in Ubuntu How do I change the system-wide short date format in Ubuntu? For example, Thunderbird is showing dates in the DD/MM/YY format, and I would like to change it to MM/DD/YY or YYYY-MM-DD. The best information I can find so far is in this thread: http://ubuntuforums.org/showthread.php?t=193916 Edit: I want to change the system-wide date format, so that all my applications use this new date format. A: * *Install and launch "dconf Editor", navigate to com -> canonical -> indicator -> datetime. *Set the value of time-format to custom. *Customize the Time & Date format by editing the value of custom-time-format, e.g. set it to %Y-%m-%d %H:%M:%S for "2017-12-31 23:59:59" format. *Re-login to see effect of the changes. You can also do this via a command in terminal: gsettings set com.canonical.indicator.datetime time-format 'custom' gsettings set com.canonical.indicator.datetime custom-time-format '%Y-%m-%d %H:%M:%S' Source: http://ubuntuhandbook.org/index.php/2015/12/time-date-format-ubuntu-panel/ A: How to do this in 2017 with Ubuntu 16.04 (Xenial Xerus) is described here. Cut/Paste follows below in case that site goes away: Change date and measurement formats You can control the formats that are used for dates, times, numbers, currency, and measurement to match the local customs of your region. * *Click the icon at the very right of the menu bar and select System Settings. *Open Language Support and select the Regional Formats tab. *Select the region that most closely matches the formats you'd like to use. By default, the list only shows regions that use the language set on the Language tab. *You have to log out and back in for these changes to take effect. Click the icon at the very right of the menu bar and select Log Out to log out. *After you've selected a region, the area below the list shows various examples of how dates and other values are shown. 
Although not shown in the examples, your region also controls the starting day of the week in calendars. A: Thunderbird uses the system's date format, and that format depends on the system's locale settings. You have two options: * *modify the system locale, the instructions are in the forum thread you linked above, or *set LC_TIME to a locale that uses the format you want. The article linked by Craig H suggests en_DK. A: The instructions here worked for me to create a custom locale based on en_US. Then Thunderbird showed the date/time format how I want (I prefer YYYY-MM-DD over MM/DD/YY). Some time later, the date/time format in Thunderbird changed back to what was set in en_US (MM/DD/YY), because I had inadvertently set $LC_ALL to en_US.UTF-8. So, I sudo gedit /etc/environment and changed LC_ALL="en_US.UTF-8" to LC_ALL="custom.UTF-8". Finally, Thunderbird is showing the dates how I want them.
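The custom-time-format value is an ordinary strftime pattern, so you can preview a candidate format before writing it into dconf. A quick sketch in Python, which uses the same % codes:

```python
from datetime import datetime

# The format string from the dconf example above: %Y-%m-%d gives the
# ISO-style short date, %H:%M:%S a 24-hour clock.
fmt = "%Y-%m-%d %H:%M:%S"

# A fixed timestamp so the preview is deterministic.
stamp = datetime(2017, 12, 31, 23, 59, 59)
print(stamp.strftime(fmt))  # 2017-12-31 23:59:59
```

Swap in %d/%m/%y or %m/%d/%y to compare the DD/MM/YY and MM/DD/YY renderings before changing the panel setting.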
{ "language": "en", "url": "https://stackoverflow.com/questions/159950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Subsonic and sp_help_job Is it possible for SubSonic to access dbo.sp_help_job? A: SubSonic is an ORM tool that also wraps stored procedures, but it does not generate code for system SPs. The project is open source, though, so you can edit the code to suit your needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/159961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# - Keyword usage virtual+override vs. new What are the differences between declaring a method in a base type "virtual" and then overriding it in a child type using the "override" keyword, as opposed to simply using the "new" keyword when declaring the matching method in the child type? A: Here's some code to understand the difference in the behavior of virtual and non-virtual methods: class A { public void foo() { Console.WriteLine("A::foo()"); } public virtual void bar() { Console.WriteLine("A::bar()"); } } class B : A { public new void foo() { Console.WriteLine("B::foo()"); } public override void bar() { Console.WriteLine("B::bar()"); } } class Program { static int Main(string[] args) { B b = new B(); A a = b; a.foo(); // Prints A::foo b.foo(); // Prints B::foo a.bar(); // Prints B::bar b.bar(); // Prints B::bar return 0; } } A: The difference between the override keyword and the new keyword is that the former does method overriding and the latter does method hiding. Check out the following links for more information... MSDN and Other A: * *The new keyword is for hiding - it means you are hiding the base class method. When called through a base class reference, the output is based on the base class method. *override is for overriding - it means your derived class method is invoked even through a reference of the base class type, so the output is based on the derived class method. A: I always find things like this more easily understood with pictures: Again, taking joseph daigle's code, public class Foo { public /*virtual*/ bool DoSomething() { return false; } } public class Bar : Foo { public /*override or new*/ bool DoSomething() { return true; } } If you then call the code like this: Foo a = new Bar(); a.DoSomething(); NOTE: The important thing is that our object is actually a Bar, but we are storing it in a variable of type Foo (this is similar to casting it). Then the result will be as follows, depending on whether you used virtual/override or new when declaring your classes.
A: The new keyword actually creates a completely new member that only exists on that specific type. For instance public class Foo { public bool DoSomething() { return false; } } public class Bar : Foo { public new bool DoSomething() { return true; } } The method exists on both types. When you use reflection and get the members of type Bar, you will actually find 2 methods called DoSomething() that look exactly the same. By using new you effectively hide the implementation in the base class, so that when classes derive from Bar (in my example) the method call to base.DoSomething() goes to Bar and not Foo. A: My version of explanation comes from using properties to help understand the differences. override is simple enough, right ? The underlying type overrides the parent's. new is perhaps the misleading (for me it was). With properties it's easier to understand: public class Foo { public bool GetSomething => false; } public class Bar : Foo { public new bool GetSomething => true; } public static void Main(string[] args) { Foo foo = new Bar(); Console.WriteLine(foo.GetSomething); Bar bar = new Bar(); Console.WriteLine(bar.GetSomething); } Using a debugger you can notice that Foo foo has 2 GetSomething properties, as it actually has 2 versions of the property, Foo's and Bar's, and to know which one to use, c# "picks" the property for the current type. If you wanted to use the Bar's version, you would have used override or use Foo foo instead. Bar bar has only 1, as it wants completely new behavior for GetSomething. A: The "new" keyword doesn't override, it signifies a new method that has nothing to do with the base class method. public class Foo { public bool DoSomething() { return false; } } public class Bar : Foo { public new bool DoSomething() { return true; } } public class Test { public static void Main () { Foo test = new Bar (); Console.WriteLine (test.DoSomething ()); } } This prints false, if you used override it would have printed true. 
(Base code taken from Joseph Daigle) So, if you are doing real polymorphism you SHOULD ALWAYS OVERRIDE. The only place where you need to use "new" is when the method is not related in any way to the base class version. A: Beyond just the technical details, I think using virtual/override communicates a lot of semantic information on the design. When you declare a method virtual, you indicate that you expect that implementing classes may want to provide their own, non-default implementations. Omitting this in a base class, likewise, declares the expectation that the default method ought to suffice for all implementing classes. Similarly, one can use abstract declarations to force implementing classes to provide their own implementation. Again, I think this communicates a lot about how the programmer expects the code to be used. If I were writing both the base and implementing classes and found myself using new I'd seriously rethink the decision not to make the method virtual in the parent and declare my intent specifically. A: virtual / override tells the compiler that the two methods are related and that in some circumstances when you would think you are calling the first (virtual) method it's actually correct to call the second (overridden) method instead. This is the foundation of polymorphism. (new SubClass() as BaseClass).VirtualFoo() Will call the SubClass's overridden VirtualFoo() method. new tells the compiler that you are adding a method to a derived class with the same name as a method in the base class, but they have no relationship to each other. (new SubClass() as BaseClass).NewBar() Will call the BaseClass's NewBar() method, whereas: (new SubClass()).NewBar() Will call the SubClass's NewBar() method. A: Not marking a method with anything means: Bind this method using the object's compile-time type, not runtime type (static binding). Marking a method with virtual means: Bind this method using the object's runtime type, not compile-time type (dynamic binding). 
Marking a base class virtual method with override in derived class means: This is the method to be bound using the object's runtime type (dynamic binding). Marking a base class virtual method with new in derived class means: This is a new method, that has no relation to the one with the same name in the base class and it should be bound using object's compile time type (static binding). Not marking a base class virtual method in the derived class means: This method is marked as new (static binding). Marking a method abstract means: This method is virtual, but I will not declare a body for it and its class is also abstract (dynamic binding). A: using System; using System.Text; namespace OverrideAndNew { class Program { static void Main(string[] args) { BaseClass bc = new BaseClass(); DerivedClass dc = new DerivedClass(); BaseClass bcdc = new DerivedClass(); // The following two calls do what you would expect. They call // the methods that are defined in BaseClass. bc.Method1(); bc.Method2(); // Output: // Base - Method1 // Base - Method2 // The following two calls do what you would expect. They call // the methods that are defined in DerivedClass. dc.Method1(); dc.Method2(); // Output: // Derived - Method1 // Derived - Method2 // The following two calls produce different results, depending // on whether override (Method1) or new (Method2) is used. bcdc.Method1(); bcdc.Method2(); // Output: // Derived - Method1 // Base - Method2 } } class BaseClass { public virtual void Method1() { Console.WriteLine("Base - Method1"); } public virtual void Method2() { Console.WriteLine("Base - Method2"); } } class DerivedClass : BaseClass { public override void Method1() { Console.WriteLine("Derived - Method1"); } public new void Method2() { Console.WriteLine("Derived - Method2"); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/159978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "220" }
Q: How can I create a temporary file for writing in C++ on a Linux platform? In C++, on Linux, how can I write a function to return a temporary filename that I can then open for writing? The filename should be as unique as possible, so that another process using the same function won't get the same name. A: tmpnam(), or anything else that gives you a name, is going to be vulnerable to race conditions. Use something designed for this purpose that returns a handle, such as tmpfile(): #include <stdio.h> FILE *tmpfile(void); A: The GNU libc manual discusses the various options available and their caveats: http://www.gnu.org/s/libc/manual/html_node/Temporary-Files.html Long story short, only mkstemp() or tmpfile() should be used, as others have mentioned. A: Use one of the standard library "mktemp" functions: mktemp/mkstemp/mkstemps/mkdtemp. Edit: plain mktemp can be insecure - mkstemp is preferred. A: man tmpfile The tmpfile() function opens a unique temporary file in binary read/write (w+b) mode. The file will be automatically deleted when it is closed or the program terminates. A: mktemp should work, or else use one of the many available libraries to generate a UUID. A: The tmpnam() function in the C standard library is designed to solve just this problem. There's also tmpfile(), which returns an open file handle (and automatically deletes it when you close it). A: You should simply check if the file you're trying to write to already exists. This is a locking problem. Files also have owners, so if you're doing it right the wrong process will not be able to write to it.
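As a concrete illustration of the mkstemp() approach recommended above, here is a minimal sketch; the helper name and the /tmp prefix are my own choices for the example, not from the original answers:

```cpp
#include <unistd.h>     // mkstemp, close, unlink
#include <stdexcept>
#include <string>

// Creates and opens a unique temporary file, returning the open file
// descriptor and filling 'path' with the generated name. mkstemp() both
// generates the name and opens the file atomically (with O_EXCL), so no
// other process can race us for the same name.
int make_temp_file(std::string& path) {
    char tmpl[] = "/tmp/myapp-XXXXXX";  // trailing XXXXXX is replaced
    int fd = mkstemp(tmpl);
    if (fd == -1)
        throw std::runtime_error("mkstemp failed");
    path = tmpl;
    return fd;
}
```

The caller is responsible for close()-ing the descriptor and unlink()-ing the file when done; unlinking immediately after opening gives the same auto-cleanup behavior as tmpfile().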
{ "language": "en", "url": "https://stackoverflow.com/questions/159983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the difference between Ruby and Python versions of "self"? I've done some Python but have just now started to use Ruby. I could use a good explanation of the difference between "self" in these two languages. Obvious on first glance: Self is not a keyword in Python, but there is a "self-like" value no matter what you call it. Python methods receive self as an explicit argument, whereas Ruby does not. Ruby sometimes has methods explicitly defined as part of self using dot notation. Initial Googling reveals http://rubylearning.com/satishtalim/ruby_self.html http://www.ibiblio.org/g2swap/byteofpython/read/self.html A: Python is designed to support more than just object-oriented programming. Preserving the same interface between methods and functions lets the two styles interoperate more cleanly. Ruby was built from the ground up to be object-oriented. Even the literals are objects (evaluate 1.class and you get Fixnum). The language was built such that self is a reserved keyword that returns the current instance wherever you are. If you're inside an instance method of one of your classes, self is a reference to said instance. If you're in the definition of the class itself (not in a method), self is the class itself: class C puts "I am a #{self}" def instance_method puts 'instance_method' end def self.class_method puts 'class_method' end end At class definition time, 'I am a C' will be printed. The straight 'def' defines an instance method, whereas 'def self.xxx' defines a class method. c=C.new c.instance_method #=> instance_method C.class_method #=> class_method A: Despite webmat's claim, Guido wrote that explicit self is "not an implementation hack -- it is a semantic device". The reason for explicit self in method definition signatures is semantic consistency. If you write class C: def foo(self, x, y): ... This really is the same as writing class C: pass def foo(self, x, y): ... 
C.foo = foo This was an intentional design decision, not a result of introducing OO behaviour at a later date. Everything in Python -is- an object, including literals. See also Why must 'self' be used explicitly in method definitions and calls? A: Well, I don't know much about Ruby. But the obvious point about Python's "self" is that it's not a "keyword" ...it's just the name of an argument that's sent to your method. You can use any name you like for this argument. "Self" is just a convention. For example: class X : def __init__(a,val) : a.x = val def p(b) : print b.x x = X(6) x.p() Prints the number 6 on the terminal. In the constructor the self object is actually called a. But in the p() method, it's called b. Update: In October 2008, Guido pointed out that having an explicit self was also necessary to allow Python decorators to be general enough to work on pure functions, methods or classmethods: http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay.html A: self is used only by convention; you can use spam, bacon or sausage instead of self and get the same result. It's just the first argument passed to bound methods. But stick to using self, as anything else will confuse other people and some editors.
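The "C.foo = foo" equivalence described above can be demonstrated directly: a method is just a function whose first parameter receives the instance. (The class and function names below are made up for illustration.)

```python
class C:
    def __init__(self, x):
        self.x = x

# A plain function; 'self' is nothing special, just its first parameter.
def double(self):
    return self.x * 2

# Attaching the function to the class turns it into an ordinary method.
C.double = double

c = C(21)
print(c.double())    # 42 -- c is passed as 'self' automatically
print(C.double(c))   # 42 -- the explicit, exactly equivalent call
```

This is why a Python method defined inside a class body and one assigned afterward behave identically: both are plain functions stored as class attributes.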
{ "language": "en", "url": "https://stackoverflow.com/questions/159990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: iMacros is good but unreliable. Is there any alternative? iMacros is a very nice tool which allows you to automatically fill HTML forms and extract content, and includes loops and many other features. The problem is that it is quite tricky to make it extract content properly. For example, I have failed to extract all London-to-Tokyo flight prices for all the dates between 1/10/08 and 1/12/08 to find the cheapest one on Expedia. Sometimes it just crashes. Does anyone know any good alternative? A: Bah, I installed it but never really used it: I am happy enough with Greasemonkey. Chickenfoot can make it more edible... Searching for URLs, I also found DéjàClick and Selenium IDE, but I don't really know them. There are lots of other tools for Web automation, most of them professional (read "payware"...). Alternatively, for just data extraction, I would use cURL or wget and a good HTML parser... A: I have heard good things about Selenium IDE also, and my limited testing indicates it is pretty capable, and works in Firefox and IE. For most any macro-based testing tool, you will need to do some programming if you need to support multiple, repeatable test cases. That said, in your example you mention running an Expedia macro... presumably to scrape results. You will want to make sure that you don't hammer Expedia's servers, and/or expect to be booted once they discover you are (effectively) a bot. A: I agree iMacros is quite unreliable. It crashes quite easily if you're using a complex algorithm or running it continuously. The trick is to close it and open it again after loops. That will decrease the number of crashes you see, though not eliminate them completely.
{ "language": "en", "url": "https://stackoverflow.com/questions/160001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Model limit_choices_to={'user': user} I went through all the documentation and also asked in the IRC channel (BTW a great community), and they told me that it is not possible to create a model and limit choices in a field where the 'current user' is in a ForeignKey. I will try to explain this with an example: class Project(models.Model): name = models.CharField(max_length=100) employees = models.ManyToManyField(Profile, limit_choices_to={'active': '1'}) class TimeWorked(models.Model): project = models.ForeignKey(Project, limit_choices_to={'user': user}) hours = models.PositiveIntegerField() Of course that code doesn't work because there is no 'user' object, but that was my idea: I was trying to send the 'user' object to the model to limit the choices to projects the current user is on; I don't want to see projects I'm not in. Thank you very much if you can help me or give me any advice; I don't want you to write the whole app, just a tip on how to deal with this. I've had this in my head for 2 days and I can't figure it out :( UPDATE: The solution is here: http://collingrady.wordpress.com/2008/07/24/useful-form-tricks-in-django/ sending request.user to a model. A: This limiting of choices to the current user is a kind of validation that needs to happen dynamically in the request cycle, not in the static Model definition. In other words: at the point where you are creating an instance of this model you will be in a View, and at that point you will have access to the current user and can limit the choices. 
Then you just need a custom ModelForm that you pass request.user into; see the example here: http://collingrady.wordpress.com/2008/07/24/useful-form-tricks-in-django/ from datetime import datetime, timedelta from django import forms from mysite.models import Project, TimeWorked class TimeWorkedForm(forms.ModelForm): def __init__(self, user, *args, **kwargs): super(TimeWorkedForm, self).__init__(*args, **kwargs) self.fields['project'].queryset = Project.objects.filter(user=user) class Meta: model = TimeWorked then in your view: def time_worked(request): form = TimeWorkedForm(request.user, request.POST or None) if form.is_valid(): obj = form.save() # redirect somewhere return render_to_response('time_worked.html', {'form': form}) A: The model itself doesn't know anything about the current user, but you can pass that user from a view to the form that operates on the model's objects (and in the form, reset the choices for the field in question). If you need this in the admin site, you can try raw_id_admin along with django-granular-permissions (http://code.google.com/p/django-granular-permissions/ - I couldn't rapidly get it working on my Django, but it seems to be fresh enough for 1.0, so...). Finally, if you really need a selectbox in the admin, then you'll need to hack django.contrib.admin itself. A: Using class-based generic Views in Django 1.8.x / Python 2.7.x, here is what my colleagues and I came up with: In models.py: # ... class Proposal(models.Model): # ... # Soft foreign key reference to customer customer_id = models.PositiveIntegerField() # ... 
In forms.py: # -*- coding: utf-8 -*- from __future__ import unicode_literals from django.forms import ModelForm, ChoiceField, Select from django import forms from django.forms.utils import ErrorList from django.core.exceptions import ValidationError from django.utils.translation import ugettext as _ from .models import Proposal from account.models import User from customers.models import customer def get_customers_by_user(curUser=None): customerSet = None # Users with userType '1' or '2' are superusers; they should be able to see # all the customers regardless. Users with userType '3' or '4' are limited # users; they should only be able to see the customers associated with them # in the customized user admin. # # (I know, that's probably a terrible system, but it's one that I # inherited, and am keeping for now.) if curUser and (curUser.userType in ['1', '2']): customerSet = customer.objects.all().order_by('company_name') elif curUser: customerSet = curUser.customers.all().order_by('company_name') else: customerSet = customer.objects.all().order_by('company_name') return customerSet def get_customer_choices(customerSet): retVal = [] for customer in customerSet: retVal.append((customer.customer_number, '%d: %s' % (customer.customer_number, customer.company_name))) return tuple(retVal) class CustomerFilterTestForm(ModelForm): class Meta: model = Proposal fields = ['customer_id'] def __init__(self, user=None, *args, **kwargs): super(CustomerFilterTestForm, self).__init__(*args, **kwargs) self.fields['customer_id'].widget = Select(choices=get_customer_choices(get_customers_by_user(user))) # ... In views.py: # ... 
class CustomerFilterTestView(generic.UpdateView): model = Proposal form_class = CustomerFilterTestForm template_name = 'proposals/customer_filter_test.html' context_object_name = 'my_context' success_url = "/proposals/" def get_form_kwargs(self): kwargs = super(CustomerFilterTestView, self).get_form_kwargs() kwargs.update({ 'user': self.request.user, }) return kwargs In templates/proposals/customer_filter_test.html: {% extends "base/base.html" %} {% block title_block %} <title>Customer Filter Test</title> {% endblock title_block %} {% block header_add %} <style> label { min-width: 300px; } </style> {% endblock header_add %} {% block content_body %} <form action="" method="POST"> {% csrf_token %} <table> {{ form.as_table }} </table> <input type="submit" value="Save" class="btn btn-default" /> </form> {% endblock content_body %} A: I'm not sure that I fully understand exactly what you want to do, but I think that there's a good chance that you'll get at least part of the way there using a custom Manager. In particular, don't try to define your models with restrictions to the current user, but create a manager that only returns objects that match the current user. A: Use thread-locals if you want to get the current user that edits this model. Threadlocals middleware puts the current user into a thread-local variable. Take this middleware from threading import local _thread_locals = local() def get_current_user(): return getattr(getattr(_thread_locals, 'user', None),'id',None) class ThreadLocals(object): """Middleware that gets various objects from the request object and saves them in thread local storage.""" def process_request(self, request): _thread_locals.user = getattr(request, 'user', None) Check the documentation on how to use middleware classes. Then anywhere in code you can call user = threadlocals.get_current_user() A: Hmmm, I don't fully understand your question. 
But if you can't do it when you declare the model, maybe you can achieve the same thing by overriding methods of the class of objects where you "send" the user object - maybe start with the constructor.
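The thread-locals middleware above hinges on threading.local, which gives each thread its own independent copy of a stored attribute. Here is a stdlib-only sketch of that isolation; the helper names mirror the answer, while the worker-thread setup (standing in for Django serving two requests on two threads) is my own illustration:

```python
import threading

_thread_locals = threading.local()

def set_current_user(user):
    _thread_locals.user = user

def get_current_user():
    # Each thread sees only what it stored itself; values set by other
    # threads (or by the main thread) are invisible here.
    return getattr(_thread_locals, 'user', None)

def handle_request(name, results):
    # Stands in for the middleware's process_request() followed by a view,
    # both running on the same worker thread.
    set_current_user(name)
    results[name] = get_current_user()

results = {}
threads = [threading.Thread(target=handle_request, args=(n, results))
           for n in ('alice', 'bob')]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))  # [('alice', 'alice'), ('bob', 'bob')]
print(get_current_user())       # None -- the main thread never set a user
```

Because each request is served on one thread, a value stored in process_request() is safely readable later in the same request without leaking across concurrent requests.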
{ "language": "en", "url": "https://stackoverflow.com/questions/160009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }