Q: How to upgrade JRE Is there any way to upgrade the installed JRE in the system? We have 1.5.0_08 installed on our HP-UX system, and we have to upgrade it to 1.5.0_15. Is there a way to patch the existing JRE and upgrade it to a newer version, or can this only be achieved by installing the newer JRE and setting it in the system PATH? A: What's usually done is to have multiple JREs installed in separate directories (JRE_1_5_10, JRE_1_5_16, JRE_1_6_3, ...) and use symlinks as references: JRE_1_5 points to the latest version of JRE 1.5, the same for JRE_1_6, and eventually JRE points to the latest version of the JRE overall. Doing so, you just need to update the symlinks when you choose to use a newer version, and always refer to the symlinks for systems that are "upgradable". A: I'd always just upgrade and change the one in the path. But if it was only class library changes you needed, and not the JVM, you could replace the contents of the lib directory. But this is asking for disaster.
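As an illustration of the symlink scheme described in the first answer, here is a minimal sketch (the directory names and the /tmp location are made up for the example; on a real HP-UX box the JREs would live under something like /opt):

```shell
# Each JRE version lives in its own directory; a stable symlink points at
# the one currently in use, so "upgrading" is just re-pointing the link.
mkdir -p /tmp/jre_demo/jre1.5.0_08 /tmp/jre_demo/jre1.5.0_15
cd /tmp/jre_demo

ln -sfn jre1.5.0_08 JRE_1_5   # current 1.5 JRE
readlink JRE_1_5              # -> jre1.5.0_08

ln -sfn jre1.5.0_15 JRE_1_5   # "upgrade": re-point the symlink
readlink JRE_1_5              # -> jre1.5.0_15
```

Anything that refers to JRE_1_5 in its PATH picks up the new version without any further changes, and rolling back is just re-pointing the link again.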
{ "language": "en", "url": "https://stackoverflow.com/questions/173960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I use the new SVN merge-tracking? In my existing (pre-SVN 1.5) merge strategy, we create a copy of the Trunk (called BasePoint) at the moment of branch creation for referencing later during the merge. When we need to merge a branch back into the trunk, we perform 2 operations. * *Merge from BasePoint to LatestTrunk (Trunk has likely moved on since the original branch) into the working copy of the Branch and then commit. At this point we typically check that the merge into the branch has not damaged anything *Merge from LatestTrunk to LatestBranch back into the working copy of the trunk and then commit. Documentation suggests that I use the new reintegrate merge on the trunk and merge from the branch. Do I need to merge from the trunk into the dev branch first, or is this included in the new reintegrate option? To put it another way, does the new merge --reintegrate functionality represent 'each of my previous merges' or 'the whole operation'? (FWIW I am using TortoiseSVN 1.5.1) A: The short answer is: you still have to do both steps. The SVN book explains the process for merging as: * *svn merge http://trunk/path while in a branch working copy *svn merge --reintegrate http://branch/path while in a trunk working copy Notice the lack of revision numbers. This probably doesn't feel like a huge win. The new coolness is the ability to re-run the merge as you are coding in your branch, allowing you to keep the branch up to date with changes in trunk (without recording revision numbers by hand!). SVN keeps track of what needs to be merged in from trunk and what changes are unique to the branch. When you are done with the branch, --reintegrate uses that data to automatically merge only the branch changes back to trunk. A: I believe reintegrate does not actually do the two operations, but instead is used to merge back into trunk from an updated branch. You will still need to do the first merge/commit operation to update the branch first. 
Here is a link to the Subversion Book. It is possible to get this book in dead tree format. From the link, it sounds like using --reintegrate handles some weird cases, probably like merge usually does compared to just using straight patches (read the section "Why Not Use Patches Instead?").
{ "language": "en", "url": "https://stackoverflow.com/questions/173974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Is abstracting data type (sometimes) a good idea? There are numerous times you have an interface that accepts similar type arguments that have a separate domain logic meaning: public static class Database { public static bool HasAccess(string userId, string documentId) { return true; } } Now it's quite easy for someone to key in documentId instead of userId and vice versa. One could prevent that by abstracting the data type of the arguments: public class UserId { public string Value { get; internal set; } public static implicit operator string(UserId id) { return id.Value; } } public class DocumentId { public string Value { get; internal set; } public static implicit operator string(DocumentId id) { return id.Value; } } public static class Database { public static bool HasAccess(UserId userId, DocumentId documentId) { return true; } } This way you get a nice compiler error if you pass arguments out of order: UserId e = new UserId() { Value = "a" }; DocumentId d = new DocumentId() { Value = "b" }; Database.HasAccess(d, e); You also get the ability to change the abstracted type without affecting the rest of the system, but that's not very likely. Would abstracting the type be a good idea to get more type safety? The question relates to C# solutions but short descriptions in other languages are welcome. EDIT: Removed the implicit cast from string and pointed a finger at the C# tag. A: I think you answered your own question - better data integrity and validation, better system A: Interesting, but I suspect that in many cases (particularly serialization / RPC APIs) this will only add confusion/overhead. Also - a minor implementation detail, but given this approach I'd make the wrappers fully immutable, not just "internal set" immutable. TBH - I'd probably rather use unit tests for most of this... sometimes simple is beautiful. 
The other problem is that since you have implicit operators, it won't stop you doing the much more likely: string user = "fred"; SomeMethodThatWantsADocument(user); That should compile; the implicit operator undoes all your good work... A: This is where typedef becomes useful in C++. You can have UserID and DocumentID as typedef'd types and thus they are not interchangeable without a cast, but don't require anything more than a quick note to the compiler saying 'this should be a separate type distinct from other types even though it is really just type X'. A: In this case, it doesn't look worth it to me. You've added 12 lines, spread across two extra classes. In some languages you're looking at having to manage two new files for that. (I'm not sure about C#.) You've introduced a lot of extra cognitive load. Those classes appear whenever you navigate your class list; they appear in your automatically generated documentation; they're there as something that newcomers to your codebase see whenever they're trying to learn their way around; they're in the dependency graph of the compiler, etc. Programmers have to know the types and create two new objects whenever they call HasAccess. And for what? To prevent you accidentally mixing up the username and document id when checking if someone has a right to access the database. That check should probably be written two, maybe three times in a normal system. (If you're writing it a lot, you probably haven't got enough reuse in your database access code.) So, I'd say that this is excess astronautics. My rule of thumb is that classes or types should encapsulate variant behaviour, not variant use of passive data. A: Yes, it is sometimes a good idea. But if you get too obsessed with this you become an architecture astronaut. As regards the type safety argument - it does increase type safety, but lots of languages manage fine without it. 
In my opinion the best way to go is to leave it as a String to start with, and then when you find yourself reusing the interface, refactor to a more abstract type at that point. Predicting the future is too hard to waste time trying. A: Seems to be a lot of overhead for something your unit tests ought to prevent anyway, at least in this case. A: What you don't ask and don't answer are the questions that best determine if the new types are important: * *What is the projected, realistic lifetime of this system? If the answer is 2+ years, you should have at least one level of abstraction for the database and for the user id. In other words, your database should be abstract and your user and credentials should be abstract. Then you implement your database and userid in terms of the abstract definition. That way, should the needs change, your changes will be local to the places that need it most. *What are the gains and losses from having a userid data type? This question should be answered in terms of usability, expressiveness, and type safety. The number of created classes or extra lines is largely immaterial if there are clear gains in usability and expressiveness - hooray, you win. Let me give you an example of a clear loss - I worked with a class hierarchy that contained an abstract base class with several concrete child types. Rather than provide constructors for the child classes and appropriate accessors, they made a factory method that took an XML string or stream as an argument and constructed the appropriate concrete class from that. It was such a loss in usability that it made this library painful - even their sample code reeked of loss. While I could construct everything they offered, it felt heinous and generated run-time instead of compile-time errors for typical issues. A: While at the end of the day you may not care, the more abstraction, the harder the maintenance (especially for other people). 
If in six months you have to start digging through this code to find or fix a bug, or even add a new feature, it will take you that much longer to remember what you did and why. If someone else is doing it, multiply that time. Elegant code is always nice when you're writing new code, but I always like to weigh that with the needs of maintainers down the road. A: This feels like a YAGNI issue to me. If you're just doing it because it MIGHT be useful, that's usually not a good enough reason for the extra complexity. Also, as others have noted, this is the sort of thing that unit tests should catch. Another thing I try to keep in mind is whether an abstraction is meant to protect programmers from themselves. Yes, it is theoretically easy to flip-flop two string arguments. But in all seriousness, paying attention to argument ordering has been a fundamental aspect of programming in most languages for decades. It's not something that should happen often, and it's certainly something that tests should be catching. If this type of thing is a common problem in your organization, I'd say you have bigger problems to worry about.
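The question is about C#, but it explicitly welcomes sketches in other languages. A minimal Java version of the same idea (class and method names are made up for the example) illustrates the point several answers circle around: Java has no user-defined implicit conversions, so the wrapper types can never silently decay back to String, avoiding the pitfall the first comment describes.

```java
// Hypothetical Java sketch of the typed-ID idea from the question.
// Without implicit conversions, a UserId can never silently become a
// String or a DocumentId: swapped arguments fail to compile.
final class UserId {
    private final String value;
    UserId(String value) { this.value = value; }
    String value() { return value; }
}

final class DocumentId {
    private final String value;
    DocumentId(String value) { this.value = value; }
    String value() { return value; }
}

class Database {
    // Passing (DocumentId, UserId) here is a compile-time error.
    static boolean hasAccess(UserId user, DocumentId doc) {
        return true; // placeholder body, as in the question
    }
}
```

Whether the extra ceremony is worth it is exactly the trade-off debated in the answers; the sketch only shows that the mechanism works without any operator overloading.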
{ "language": "en", "url": "https://stackoverflow.com/questions/173980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: (re)initialise a vector to a certain length with initial values As a function argument I get a vector<double>& vec (an output vector, hence non-const) with unknown length and values. I want to initialise this vector to a specific length n with all zeroes. This will work vec.clear(); vec.resize( n, 0.0 ); And this will work as well: vec.resize( n ); vec.assign( n, 0.0 ); Is the second more efficient (because no memory deallocation/allocation is involved)? Is there a more efficient way to do this? A: std::vector<double>(n).swap(vec); This has the advantage of actually compacting your vector too. (In your first example, clear() does not guarantee to compact your vector.) A: Well let's round out the ways to do this :) vec.swap(std::vector<double>(n)); std::vector<double>(n).swap(vec); std::swap(vector<double>(n), vec); std::swap(vec, vector<double>(n)); A: Neither of the code snippets that you posted do any memory deallocation, so they are roughly equal. The swap trick that everyone else keeps posting will take longer to execute, because it will deallocate the memory originally used by the vector. This may or may not be desirable. A: std::vector<double>(n).swap(vec); After this, vec is guaranteed to have size and capacity n, with all values 0.0. Perhaps the more idiomatic way since C++11 is vec.assign(n, 0.); vec.shrink_to_fit(); with the second line optional. In the case where vec starts off with more than n elements, whether to call shrink_to_fit is a trade-off between holding onto more memory than is required vs performing a re-allocation.
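The options from the question and answers can be collected into two small helper functions (names are made up for the example). Note that the standard only guarantees capacity() >= size(), so the "compaction" effect of the swap trick and of shrink_to_fit is what major implementations do in practice rather than a hard guarantee:

```cpp
#include <cstddef>
#include <vector>

// The temporary-and-swap trick from the answers: a freshly constructed
// vector of n zeroes is swapped in, which also drops any excess capacity
// the old buffer had.
void reinit_compact(std::vector<double>& vec, std::size_t n) {
    std::vector<double>(n).swap(vec);
}

// The simpler C++11 route from the last answer: assign alone sets both
// size and values (the resize in the question's second snippet is
// redundant); shrink_to_fit is a non-binding request to free extra memory.
void reinit_simple(std::vector<double>& vec, std::size_t n) {
    vec.assign(n, 0.0);
    vec.shrink_to_fit();
}
```

If the vector is immediately refilled to the same size every cycle, keeping the old capacity (plain assign, no shrink) avoids a reallocation per cycle, which is the trade-off the last answer mentions.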
{ "language": "en", "url": "https://stackoverflow.com/questions/173995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to format a currency datafield in Flex I have an xml file providing data for a datagrid in Flex 2 that includes an unformatted Price field (i.e. it is just a number). Can anyone tell me how I take that datafield and format it - add a currency symbol, put in thousand separators, etc.? Thanks. S. A: Thanks a lot for your answers... they helped a great deal. In the end I went for a solution that involved the following three elements: <mx:DataGridColumn headerText="Price" textAlign="right" labelFunction="formatCcy" width="60"/> public function formatCcy(item:Object, column:DataGridColumn):String { return euroPrice.format(item.price); } <mx:CurrencyFormatter id="euroPrice" precision="0" rounding="none" decimalSeparatorTo="." thousandsSeparatorTo="," useThousandsSeparator="true" useNegativeSign="true" currencySymbol="€" alignSymbol="left"/> I don't know whether this is the correct solution, but it seems to work (at the moment). Thanks again, S... A: As stated above, an easy way to do this would be to add a labelFunction to the specified column and format the data in there. Frequently I find that it's much easier to work with objects than straight XML, so normally if I am receiving XML from a function I would create an object and a parser for that XML, and you can format the data inside the parser too if you like. Another way to handle this would be inside an itemRenderer. Example: <mx:DataGridColumn id="dgc" headerText="Money" editable="false"> <mx:itemRenderer> <mx:Component> <mx:HBox horizontalAlign="right"> <mx:CurrencyFormatter id="cFormat" precision="2" currencySymbol="$" useThousandsSeparator="true"/> <mx:Label id="lbl" text="{cFormat.format(data)}" /> </mx:HBox> </mx:Component> </mx:itemRenderer> </mx:DataGridColumn> A: How about the CurrencyFormatter class? See here for docs from Flex 2. It's pretty easy to use. You can use one of these in a labelFunction on a DataGridColumn to format your numbers.
{ "language": "en", "url": "https://stackoverflow.com/questions/174005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it worth converting my functional JavaScript code to an object-oriented design? I'm currently building a small web application that includes a fair amount of JavaScript. When I was prototyping the initial idea, I just hacked together a few functions to demonstrate how the application would eventually behave, intending to go forward re-writing the JavaScript in an object-oriented manner. Now that I'm getting into the implementation phase, I'm finding that creating object-oriented JavaScript for the sake of being object-oriented seems overkill - the project isn't likely to require any major modifications in the future that would warrant an object-oriented design. Instead, I'm finding that a set of concise, cohesive functions is working well. So, with that said and with attempting to adhere to the KISS principle, when a set of functions is providing a suitable solution to a problem, are there any other reasons worth considering to convert my code to an object-oriented design? A: If your code is well structured, well laid out, and well commented, and does the job that is required of it, then messing with it for any reason other than to add features is ill-advised. While it might be nice to say that the program is nicely OOP etc., if it doesn't need to be changed to work then I would definitely leave it as it is. If it ain't broke, don't fidget with it :) A: If this code is already implemented and won't require maintenance or - better yet - upgrades, stick with it. If you are going to implement it now and it could get complex, consider the OO approach. Experience has shown me that it's pretty easy to write and maintain procedural code while complexity is low, but after a certain threshold it starts getting exponentially more difficult to increase complexity while using procedural programming, whereas OOP, although harder to begin with, keeps complexity much more manageable. Bottom line: if the task is simple enough or has already been implemented, keep it simple. 
If it might grow more complex, consider OOP. A: I would say that it is still worth reviewing your code before making a decision. The obvious downside to "re-writing" code is that there is a testing cost to ensure that your code works the same as before. Do you have any unit tests? If not, then your testing cost is even higher. So in general, I'm against re-writing working code unless it serves another end, which is to allow you to more easily write new functionality that is now required (i.e. refactoring common functions, etc.). HOWEVER, any time a person says "I hacked together", I suggest it is always worth a second look at your code. Why was it hacked together in the first place? I know plenty of people say that object-oriented code isn't an end in and of itself, but it is a methodology that after a while doesn't have to be thought about either. You just sort of naturally start doing it. Maybe your js is relatively simple, and therefore OO scaffolding is truly extra overhead. Fine. But I still suggest that you should always code review (and especially have someone else review) any code you call "hacked". Perhaps it was a Freudian slip... but it did slip. A: No; although I personally find OOP tastier, it is a means to an end, and not an end in itself. There are many cases where procedural programming makes more sense than OOP, and converting for the sake of converting could be, as you said, overkill. A: No, let it be and move forward - that is more productive in my view. A: Treat it as legacy code from now on. When you want to change something, refactor it so the code becomes easier on the mind. If you need a bit of OOP, use it. If you don't, don't. OOP is a hammer; please don't treat a screw problem as a nail. A: If it works, and it's easy to maintain, I wouldn't bother converting it for converting's sake. There must be more interesting things to do. A: Just bear in mind that objects are rather expensive to create in JavaScript. 
Keep construction of objects to a bare minimum.
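A middle ground the answers hint at, between loose functions and a full class hierarchy, is the module pattern: one object namespaces the related functions and a closure hides their shared state, without committing to constructors or inheritance. A minimal sketch (the cart example and its names are made up for illustration):

```javascript
// Hypothetical sketch: related functions grouped behind one module object.
// The closure keeps `items` private; only add() and total() are exposed.
var cart = (function () {
    var items = [];  // private state, invisible outside the closure

    function add(name, price) {
        items.push({ name: name, price: price });
    }

    function total() {
        var sum = 0;
        for (var i = 0; i < items.length; i++) sum += items[i].price;
        return sum;
    }

    return { add: add, total: total };  // public API
}());
```

This keeps the KISS spirit of the question: if the app later does need polymorphism, the module can be promoted to a constructor, but nothing forces that decision up front.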
{ "language": "en", "url": "https://stackoverflow.com/questions/174008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: linker out of memory LNK1102 My colleagues and I have tried to build a project containing several thousand classes, but we're getting a LNK1102 error (linker out of memory). I've seen several tips on the internet, such as increasing the virtual memory. We tried, but this didn't help. We've also seen some such as enabling different warning levels when compiling the code. A guy suggested enabling level 4 for warnings. How could that be done? Are there other suggestions? A: I just had the same problem when compiling plain C: "*LINK : fatal error LNK1102: out of memory*" The solution for me was to delete all *.pdb (debug) files around. After that, everything linked without problems. So probably a PDB file was defective in my case - defective in a funny way, to cause this linker error. A: Project (right click) → Properties → Configuration Properties → Linker → Optimization → References → change to Keep Unreferenced Data. Worked on my machine! A: If the project is too large, then split it up into several components. This might also help with maintenance. A: If you are running this on a Windows machine, open up Task Manager while linking and go to the performance page. If you see the page file usage increasing until it's full, then increase its size to at least double your RAM. If the page file is not filling up before it throws the error, then ensure there is enough disk space on the machine. A: I suspect that the linker also takes a lot of time to finish. Since you are saying there are thousands of C++ classes, my first thought was to check if there are many inlined class methods. Try this: pick a bunch of classes that are used the most, and make all inlined methods non-inline by moving them from the header file to the implementation file. I've experienced drastic changes in linking time. One project we had went from 15 minutes of pure linking to just 30 seconds. This should also affect the memory of the linking process. Good luck! 
//Magnus A: If using Visual C++ 6.0, avoid loading the workspace from a "subst" drive or network-mapped drive. Copying the project to the "C:\temp\MyProject\" folder and loading the workspace from this location made the "LNK1102" error go away for me this time. Hope it helps! A: I had this fatal error LNK1102: out of memory and solved it by using the 64-bit compiler and linker. You set an environment variable: set PreferredToolArchitecture=x64 and then run Visual Studio. A: Run the 64-bit version of the linker? Downside: you'll get an amd64 executable. (Unlike the 32->64 cross-compilation toolset, there is no 64->32 bit toolset.) A: Definitely monitor the actual memory usage through Task Manager while linking. Close other programs to increase your available physical memory and set your page file to 4092 MB in size, if possible. Also, it might help to create a link repro. This will allow other people to try to reproduce your link issues on other machines. A: I got the same error while incrementally building a big project in VS 2008. I just cleaned the project, deleted all *.ilk, *.dll, *.exe and *.pdb files, and built it all again. A: The solution mentioned here a couple of times is to use the 64-bit host compiler tools. For a CMake-generated project this can be achieved by setting the variable CMAKE_GENERATOR_TOOLSET to the value host=x64, either in CMakeLists.txt: set(CMAKE_GENERATOR_TOOLSET "host=x64") or on the cmake command line: add -T host=x64 Note: this setting is only applicable to Visual Studio generators. For any other generator it will cause a fatal error. A: Alternatively, use clang-cl as a drop-in replacement. This was the only way I found to work around this issue with VS16 and VS17 on a GitHub-hosted runner. For CMake-based projects provide the -T ClangCL argument. Otherwise, read up on https://learn.microsoft.com/en-us/cpp/build/clang-support-msbuild?view=msvc-170
{ "language": "en", "url": "https://stackoverflow.com/questions/174013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Object arrays in method signatures Consider the following method signatures: public fooMethod (Foo[] foos) { /*...*/ } and public fooMethod (Foo... foos) { /*...*/ } Explanation: The former takes an array of Foo objects as an argument - fooMethod(new Foo[]{..}) - while the latter takes an arbitrary number of arguments of type Foo and presents them as an array of Foo objects within the method - fooMethod(fooObject1, fooObject2, etc...). Java throws a fit if both are defined, claiming that they are duplicate methods. I did some detective work, and found out that the first declaration really requires an explicit array of Foo objects, and that's the only way to call that method. The second way accepts both an arbitrary number of Foo arguments AND an array of Foo objects. So, the question is: since the latter method seems more flexible, are there any reasons to use the first example, or have I missed anything vital? A: I'd like to add to Shimi's explanation that another restriction of the varargs syntax is that the vararg must be the last declared parameter. So you can't do this: void myMethod(String... values, int num); This means that any given method can only have a single vararg parameter. In cases where you want to pass multiple arrays, you can use varargs for only one of them. In practice, varargs are at their best when you are treating the args as an arbitrary number of distinct values, rather than as an array. Java 5 maps them to an array simply because that was the most convenient thing to do. A good example is String.format(). Here, the varargs are matched against the format placeholders in the first argument. A: These methods are actually the same. This feature is called varargs and it is a compiler feature. Behind the scenes it is translated to the former version. There is a pitfall if you define a method that accepts Object... and you send one parameter of type Object[]! 
A: The latter was introduced in Java 5 and existing libraries are gradually being reworked to support it. You might still use the former to stress that the method requires 2+ inputs; plus, there are restrictions on where the ... can be used. A: There are certainly no performance issues or things like that to consider, so it comes down to semantics. Do you expect the caller of your method to have an array of Foos ready at hand? Then use the Foo[] version. Use the varargs variant to emphasize the possibility of having a "bunch" of Foos instead of an array. A classical example for varargs is string formatting (in C#; I don't know what it's called in Java): string Format(string formatString, object... args) Here you expect the args to be of different types, so having an array of arguments would be quite unusual, hence the varargs variant. On the other hand, in something like string Join(string[] substrings, char concatenationCharacter) using an array is perfectly reasonable. Also note that you can have multiple array parameters but only one vararg parameter, at the end of the parameter list.
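The behaviour the question describes can be checked with a tiny sketch (the class and method names are made up; String stands in for the Foo type): the varargs method accepts both a bare argument list and an explicit array, because the compiler rewrites the former into the latter.

```java
// Sketch of the varargs behaviour discussed above. A single String...
// parameter - which must be the last (and here only) parameter - can be
// called with zero or more bare arguments OR with an explicit array.
class FooDemo {
    static int countFoos(String... foos) {
        // Inside the method, foos is an ordinary String[].
        return foos.length;
    }
}
```

Both FooDemo.countFoos("a", "b", "c") and FooDemo.countFoos(new String[] {"a", "b"}) compile and behave identically to the corresponding array call, which is why Java treats the two signatures in the question as duplicates.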
{ "language": "en", "url": "https://stackoverflow.com/questions/174024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you trigger JavaScript functions from Flash? How do you trigger a JavaScript function using ActionScript in Flash? The goal is to trigger jQuery functionality from a Flash movie. A: As Jochen said, ExternalInterface is the way to go, and I can confirm that it works with AS2. If you plan to trigger navigation or anything that affects the area where the Flash sits, don't do it directly from the function you call from Flash. Flash expects a return value from the function it calls, and if the Flash object does not exist when the function is completed, the Flash plugin will crash. If you need to do navigation or alter the content, you can add a setTimeout call (into your js function). That lets the call return immediately, the deferred code runs later, and Flash gets the return value it expects. A: Take a look at the ExternalInterface class. From the AS3 Language Reference: The ExternalInterface class is the External API, an application programming interface that enables straightforward communication between ActionScript and the Flash Player container - for example, an HTML page with JavaScript. Adobe recommends using ExternalInterface for all JavaScript-ActionScript communication. And it works like this: ExternalInterface.addCallback("sendToActionScript", receivedFromJavaScript); ExternalInterface.call("sendToJavaScript", input.text); You can submit parameters and receive callbacks... pretty cool, right? ;) As far as I know it will also work in AS2...
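A sketch of the JavaScript side of the setTimeout trick from the first answer (the function name matches the one used in the AS3 snippet above; doRiskyUpdate is a made-up placeholder for the jQuery work):

```javascript
// Flash calls sendToJavaScript via ExternalInterface and expects a prompt
// return value. Anything that might remove or reload the Flash object is
// deferred with setTimeout, so the call returns immediately and the risky
// work runs in a later tick, after Flash has its return value.
function sendToJavaScript(text) {
    setTimeout(function () {
        doRiskyUpdate(text);  // e.g. jQuery DOM work or navigation
    }, 0);
    return true;  // Flash gets this right away
}

function doRiskyUpdate(text) {
    // placeholder for the jQuery functionality the question wants to trigger
}
```

The key point is only the ordering: the return happens before the deferred function runs, so the Flash plugin is never left waiting on a page that has already changed underneath it.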
{ "language": "en", "url": "https://stackoverflow.com/questions/174025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Which database would you recommend to use with a C# (.NET) application? I'm developing a little project plan and I came to a point where I need to decide which local database system to use. The input data is going to be stored on a webserver (hosting - MySQL DB). The idea is to build a process to download all necessary data (for example at midnight) and process it. However, there are going to be many inputs and stages of processing, so I need to use some kind of local database to store the intermediate products of the application. What local database system would you recommend to work with a C# (.NET) application? edit: The final product (information) should be easy to export back to the hosting MySQL DB. As Will mentioned in his answer - yes, I'm after both performance AND comfort of use. A: For quick and dirty I'd go with SQL Server Compact Edition. It's an in-process implementation of SQL Server, so it doesn't require you to install any other applications. Back in the day, you'd use an Access database for this kind of thing. But Access databases kinda blow. It wouldn't take much to upload your finished data back to the production server. If you're looking for a solution that automates that process, you'll probably need to look at hosting an instance of MySQL locally and using whatever replication services it provides. A: I say go with SQL Server Compact Edition. It's really similar to the full-blown version of SQL Server, and VS2008 has built-in support for designing tables, querying, etc. (Management Studio 2008 also has support for it). The biggest downside is that you lose out on stored procedures, but the upside is great, as there's no need to install anything on the local user's machine, and it works really fast for selecting data. Even cooler is that with SqlMetal you can create a DBML file and use LINQ just as you would with SQL Server. A: I want to say Microsoft SQL Server 2005 Express, as it (almost) comes as the obvious choice when developing in .NET. 
But it all depends on what previous db skills you have. If you already know MySQL and, as you already said, the data should be exported back to MySQL, why not use MySQL all the way? A: How about using db4o? It's an OODB you can embed in your application. It also supports replication. Edit: As a side note - in my current pet project using db4o I have a line (C# 3.5): IList<Users> list = Persistence.Database.Query<Users>(u => u.Name == "Admin"); using a strongly typed lambda expression to get a (lazy) list of objects from the database. It also uses indexes to retrieve the list quickly. A: MS SQL Server support comes out of the box without any other drivers or setup required. Also, MS SQL Server Express is free. You can generate scripts that will export the data to/from MySQL. A: The "obvious" choice would be MS SQL Server Express. VS and .NET both support it natively, and if you've got experience with it already (for the main DB), I'd certainly be tempted to stick with it (or its Express version). But that's certainly not the end of your options. I use SQLite a lot for cross-platform applications and web apps. It's eye-wateringly fast and it integrates pretty well through System.Data.SQLite - albeit not as tightly as MS SQL Server. There's also a Compact edition of SQL Server that compares quite well to SQLite. A: I don't know of a good in-process database that is fully syntax- and type-compatible with MySQL. With that in mind, you have three options: * *Choose something like SQLite, Access, or SQL Server Compact. The problem is that you'll end up writing some complex conversion logic with any of those and have to write all your queries twice. *Install MySQL locally. Then you have to put up with having a full database server running on your local system. You definitely want to avoid this for anything you'll ship to a customer, but for your own use it might be OK. 
Fortunately, MySQL doesn't use as many resources as some other modern database servers, but this is still less than ideal. *Switch to SQL Server Express edition at the server and use SQL Server Compact at the client. It's just as cheap as MySQL (maybe even cheaper, since you're supposed to purchase MySQL for any commercial use). Considering that you're using C# at the client end, you might want to use it with ASP.NET at the server side as well. And if you're using ASP.NET server side, then it's not difficult to find a host that offers SQL Server Express. Now your databases are type-compatible and any query you write for your client is guaranteed to work for your server as well. IMO, one of the big strengths of the MS database stack (excluding Access) is that they have a compatible solution for whatever you're doing, all the way from the desktop up to multi-datacenter clusters. If your app's scale changes or you need to ship data between two different classes of app, your database layer is taken care of. A: Pick anything that's available, but code against interfaces only; that way you can easily switch between them. For production, I'd say either MS SQL for large projects (or Express for mid-level), simply because of the close integration with VS, and SQLite for smaller projects. Given the description, I would think that SQLite would be a good choice, since it's the simplest/lowest overhead. A: I've been doing some testing as of late and thought I'd also recommend SQL Server 2005 (Express if needed) because it works out of the box, though SQL Server 2008 is new and RTM'd now. A: SQL Server, as most have mentioned... My reason would be because you can use source control to integrate the test cases from C# to the database... Team Foundation Server (TFS) is one such offering from Microsoft, with a GUI... A: Since this is already answered, I have to mention that when working with CLR languages, CLR/.NET Framework integration sets MS SQL Server 2005/2008 apart from the rest. The following excerpt is from here. 
By using languages such as Visual Basic .NET and C#, you can capitalize on CLR integration to write code that has more complex logic and is more suited for computation tasks. Additionally, Visual Basic .NET and C# offer object-oriented capabilities such as encapsulation, inheritance, and polymorphism. You can easily organize related code into classes and namespaces, which means that you can more easily organize and maintain your code investments when you are working with large amounts of code. The ability to logically and physically organize code into assemblies and namespaces is a huge benefit that enables you to better find and relate different pieces of code in a large database implementation. A: For what you described, definitely MS SQL Server. Good performance, good tools. Free.
{ "language": "en", "url": "https://stackoverflow.com/questions/174059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Adding a new application group in Linux How can I, in Java or using some other programming language, add a new program group in the applications menu in both KDE and Gnome? I am testing with Ubuntu and Kubuntu 8. Putting a simple .menu file in ~/.config/menus/applications-merged worked in Kubuntu, but the same procedure does nothing in Ubuntu. The content of my file is as follows:
<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
 "http://www.freedesktop.org/standards/menu-spec/1.0/menu.dtd">
<Menu>
  <Menu>
    <Name>My Program Group</Name>
    <Include>
      <Filename>shortcut.desktop</Filename>
    </Include>
  </Menu>
</Menu>
Note that the .desktop file is correctly placed in ~/.local/share/applications.
PS: The original question did not specify that I wanted a programmatic solution.
A: Maybe xdg-desktop-menu does that? See man xdg-desktop-menu or http://manpages.ubuntu.com/manpages/hardy/en/man1/xdg-desktop-menu.html .
A: Thanks, oliver. I used xdg-desktop-menu and then analyzed its output. The correct menu file needs to explicitly name the outer menu (Applications), as follows:
<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
 "http://www.freedesktop.org/standards/menu-spec/menu-1.0.dtd">
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>My Program Group</Name>
    <Include>
      <Filename>shortcut.desktop</Filename>
    </Include>
  </Menu>
</Menu>
This worked fine in Kubuntu, Ubuntu and Fedora Core 9. Couldn't make it work on openSUSE, though.
A: I recommend you look into the freedesktop.org standards that cover this. An up-to-date list is available here: http://www.freedesktop.org/wiki/Specifications/menu-spec The latest one is currently 1.0: http://standards.freedesktop.org/menu-spec/1.0/ FreeDesktop.org standards are followed by Gnome, KDE and XFCE, so it should work on any distribution.
A: In Gnome use System -> Settings -> Menu, then just choose New Menu or New Entry.
A: Not sure what you meant exactly with "in openSUSE the .directory file is mandatory or else the program group does not shows up"; generally I suppose you have to call xdg-desktop-menu twice (once for the program group and once for the program itself), and so you have to supply two different .directory files as well. If the program group is empty, it makes sense that the desktop hides it. (But maybe I completely misunderstood you here :-) and I've never used xdg-desktop-menu myself anyway).
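For scripting this setup, the files discussed above can be generated directly. The sketch below writes into a temporary directory purely for illustration, and the group, program and file names are placeholders; in real use you would target ~/.config/menus/applications-merged and ~/.local/share/applications, or let xdg-desktop-menu generate equivalent files for you:

```shell
# Placeholder names throughout; writes under a temp dir for illustration.
BASE="$(mktemp -d)"
MENU_DIR="$BASE/.config/menus/applications-merged"
APP_DIR="$BASE/.local/share/applications"
mkdir -p "$MENU_DIR" "$APP_DIR"

# The launcher the menu entry points at.
cat > "$APP_DIR/shortcut.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=My Program
Exec=myprogram
Categories=Utility;
EOF

# The merged menu file, naming the outer Applications menu explicitly,
# as the accepted answer above found was required.
cat > "$MENU_DIR/my-program-group.menu" <<'EOF'
<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
 "http://www.freedesktop.org/standards/menu-spec/menu-1.0.dtd">
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>My Program Group</Name>
    <Include>
      <Filename>shortcut.desktop</Filename>
    </Include>
  </Menu>
</Menu>
EOF
```

Where available, `xdg-desktop-menu install group.directory shortcut.desktop` handles the same job portably and also takes care of the .directory entry mentioned in the last answer.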
{ "language": "en", "url": "https://stackoverflow.com/questions/174069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .toArray(new MyClass[0]) or .toArray(new MyClass[myList.size()])? Assuming I have an ArrayList
ArrayList<MyClass> myList;
and I want to call toArray, is there a performance reason to use
MyClass[] arr = myList.toArray(new MyClass[myList.size()]);
over
MyClass[] arr = myList.toArray(new MyClass[0]);
? I prefer the second style, since it's less verbose, and I assumed that the compiler will make sure the empty array doesn't really get created, but I've been wondering if that's true. Of course, in 99% of the cases it doesn't make a difference one way or the other, but I'd like to keep a consistent style between my normal code and my optimized inner loops...
A: From the JetBrains IntelliJ IDEA inspection: There are two styles to convert a collection to an array: either using a pre-sized array (like c.toArray(new String[c.size()])) or using an empty array (like c.toArray(new String[0])). In older Java versions using a pre-sized array was recommended, as the reflection call which is necessary to create an array of proper size was quite slow. However, since late updates of OpenJDK 6 this call was intrinsified, making the performance of the empty array version the same, and sometimes even better, compared to the pre-sized version. Also, passing a pre-sized array is dangerous for a concurrent or synchronized collection, as a data race is possible between the size and toArray calls, which may result in extra nulls at the end of the array if the collection was concurrently shrunk during the operation. This inspection allows you to follow a uniform style: either using an empty array (which is recommended in modern Java) or using a pre-sized array (which might be faster in older Java versions or non-HotSpot based JVMs).
A: toArray checks that the array passed is of the right size (that is, large enough to fit the elements from your list) and, if so, uses that. Consequently, if the size of the array provided is smaller than required, a new array will be created reflectively.
In your case, an array of size zero is immutable, so it could safely be elevated to a static final variable, which might make your code a little cleaner and avoids creating the array on each invocation. A new array will be created inside the method anyway, so it's a readability optimisation. Arguably the faster version is to pass an array of the correct size, but unless you can prove this code is a performance bottleneck, prefer readability to runtime performance until proven otherwise.
A: The first case is more efficient. That is because in the second case:
MyClass[] arr = myList.toArray(new MyClass[0]);
the runtime actually creates an empty array (with zero size) and then inside the toArray method creates another array to fit the actual data. This creation is done using reflection, with the following code (taken from jdk1.5.0_10):
public <T> T[] toArray(T[] a) {
    if (a.length < size)
        a = (T[])java.lang.reflect.Array.
            newInstance(a.getClass().getComponentType(), size);
    System.arraycopy(elementData, 0, a, 0, size);
    if (a.length > size)
        a[size] = null;
    return a;
}
By using the first form, you avoid the creation of a second array and also avoid the reflection code.
A: Modern JVMs optimise reflective array construction in this case, so the performance difference is tiny. Naming the collection twice in such boilerplate code is not a great idea, so I'd avoid the first method. Another advantage of the second is that it works with synchronised and concurrent collections. If you want to make an optimisation, reuse the empty array (empty arrays are immutable and can be shared), or use a profiler(!).
A: Counterintuitively, the fastest version, on HotSpot 8, is:
MyClass[] arr = myList.toArray(new MyClass[0]);
I have run a micro-benchmark using jmh; the results and code are below, showing that the version with an empty array consistently outperforms the version with a pre-sized array. Note that if you can reuse an existing array of the correct size, the result may be different.
Benchmark results (score in microseconds, smaller = better):

Benchmark                      (n)  Mode  Samples    Score    Error  Units
c.a.p.SO29378922.preSize         1  avgt       30    0.025 ±  0.001  us/op
c.a.p.SO29378922.preSize       100  avgt       30    0.155 ±  0.004  us/op
c.a.p.SO29378922.preSize      1000  avgt       30    1.512 ±  0.031  us/op
c.a.p.SO29378922.preSize      5000  avgt       30    6.884 ±  0.130  us/op
c.a.p.SO29378922.preSize     10000  avgt       30   13.147 ±  0.199  us/op
c.a.p.SO29378922.preSize    100000  avgt       30  159.977 ±  5.292  us/op
c.a.p.SO29378922.resize          1  avgt       30    0.019 ±  0.000  us/op
c.a.p.SO29378922.resize        100  avgt       30    0.133 ±  0.003  us/op
c.a.p.SO29378922.resize       1000  avgt       30    1.075 ±  0.022  us/op
c.a.p.SO29378922.resize       5000  avgt       30    5.318 ±  0.121  us/op
c.a.p.SO29378922.resize      10000  avgt       30   10.652 ±  0.227  us/op
c.a.p.SO29378922.resize     100000  avgt       30  139.692 ±  8.957  us/op

For reference, the code:
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
public class SO29378922 {
    @Param({"1", "100", "1000", "5000", "10000", "100000"})
    int n;

    private final List<Integer> list = new ArrayList<>();

    @Setup
    public void populateList() {
        for (int i = 0; i < n; i++) list.add(0);
    }

    @Benchmark
    public Integer[] preSize() {
        return list.toArray(new Integer[n]);
    }

    @Benchmark
    public Integer[] resize() {
        return list.toArray(new Integer[0]);
    }
}
You can find similar results, full analysis, and discussion in the blog post Arrays of Wisdom of the Ancients. To summarize: the JVM and JIT compiler contain several optimizations that enable them to cheaply create and initialize a new, correctly sized array, and those optimizations cannot be used if you create the array yourself.
A: As of ArrayList in Java 5, the array will be filled already if it has the right size (or is bigger). Consequently
MyClass[] arr = myList.toArray(new MyClass[myList.size()]);
will create one array object, fill it and return it to "arr". On the other hand
MyClass[] arr = myList.toArray(new MyClass[0]);
will create two arrays. The second one is an array of MyClass with length 0.
So there is an object creation for an object that will be thrown away immediately. As far as the source code suggests, the compiler/JIT cannot optimize this so that it is not created. Additionally, using the zero-length object results in casting(s) within the toArray() method. See the source of ArrayList.toArray():
public <T> T[] toArray(T[] a) {
    if (a.length < size)
        // Make a new array of a's runtime type, but my contents:
        return (T[]) Arrays.copyOf(elementData, size, a.getClass());
    System.arraycopy(elementData, 0, a, 0, size);
    if (a.length > size)
        a[size] = null;
    return a;
}
Use the first method so that only one object is created, and avoid the (implicit but nevertheless expensive) castings.
A: Using 'toArray' with the array of the correct size will perform better, as the alternative will first create the zero-sized array and then the array of the correct size. However, as you say, the difference is likely to be negligible. Also, note that the javac compiler does not perform any optimization. These days all optimizations are performed by the JIT/HotSpot compilers at runtime. I am not aware of any optimizations around 'toArray' in any JVMs. The answer to your question, then, is largely a matter of style, but for consistency's sake it should form part of any coding standards you adhere to (whether documented or otherwise).
A: The second one is marginally more readable, but the improvement is so small that it's not worth it. The first method is faster, with no disadvantages at runtime, so that's what I use. But I write it the second way, because it's faster to type. Then my IDE flags it as a warning and offers to fix it. With a single keystroke, it converts the code from the second form to the first one.
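One answer above suggests hoisting the zero-length array into a static final variable, since empty arrays are immutable and can be shared. A minimal sketch of that pattern (the class and field names are my own, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class ToArrayDemo {
    // A zero-length array is immutable, so one shared instance is safe
    // and avoids an allocation on every call.
    private static final String[] EMPTY_STRING_ARRAY = new String[0];

    public static String[] snapshot(List<String> list) {
        // toArray sees the argument is too small and allocates a
        // correctly sized String[] internally.
        return list.toArray(EMPTY_STRING_ARRAY);
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("alpha");
        names.add("beta");
        String[] arr = snapshot(names);
        System.out.println(arr.length + " " + arr[0]); // 2 alpha
    }
}
```

This keeps the "empty array" style from the question while removing even the tiny per-call allocation the older answers worry about.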
{ "language": "en", "url": "https://stackoverflow.com/questions/174093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "213" }
Q: Scaling an application I have an application (an IP conferencing service) that I need to scale. It has quite a few independent components/applications, written in different languages (mainly C++ and PHP, some Perl). Currently a single installation runs on 5 machines, with 1-2 components sharing a single box. The configuration of each box therefore is different, so it's a pain to scale the whole thing, not to mention maintenance. The individual components vary from media proxies to message proxies and databases, so load is everything but even. I was thinking about putting all components on a single machine and adding more boxes as I go along, with some sort of a load balancer in front. Others advised me to scale the other way - add specialized boxes where the app load gets high - but this leads back to the maintenance nightmare. Do you know resources where I should start? Ideally I'd need some benchmarks showing which approach is better in terms of performance. (Thinking aloud: when I have X amount of load and Y amount of memory and processing power, where does it matter how I allocate it?)
A: I'd go for virtualization. That way you can have various similarly configured machines, and distribute load among them as needed, even if you keep modules in separate VMs.
A: You need to determine where the highest load or bottleneck will be; it will be difficult to plan this properly without that. I would second looking at virtualization. It makes your app:
* quick to deploy
* easy to back up
* quick to restore in the event of failure
{ "language": "en", "url": "https://stackoverflow.com/questions/174106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do shell script comparisons often use x$VAR = xyes? I see this often in the build scripts of projects that use autotools (autoconf, automake). When somebody wants to check the value of a shell variable, they frequently use this idiom: if test "x$SHELL_VAR" = "xyes"; then ... What is the advantage to this over simply checking the value like this: if test $SHELL_VAR = "yes"; then ... I figure there must be some reason that I see this so often, but I can't figure out what it is. A: I recommend instead: if test "yes" = "$SHELL_VAR"; then since it does away with the ugly x, and still solves the problem mentioned by https://stackoverflow.com/a/174288/895245 that $SHELL_VAR may start with - and be read as an option. A: The other reason that no-one else has yet mentioned is in relation to option processing. If you write: if [ "$1" = "abc" ]; then ... and $1 has the value '-n', the syntax of the test command is ambiguous; it is not clear what you were testing. The 'x' at the front prevents a leading dash from causing trouble. You have to be looking at really ancient shells to find one where the test command does not have support for -n or -z; the Version 7 (1978) test command included them. It isn't quite irrelevant - some Version 6 UNIX stuff escaped into BSD, but these days, you'd be extremely hard pressed to find anything that ancient in current use. Not using double quotes around values is dangerous, as a number of other people pointed out. Indeed, if there's a chance that file names might contain spaces (MacOS X and Windows both encourage that to some extent, and Unix has always supported it, though tools like xargs make it harder), then you should enclose file names in double quotes every time you use them too. Unless you are in charge of the value (e.g. 
during option handling, and you set the variable to 'no' at startup and 'yes' when a flag is included in the command line) then it is not safe to use unquoted forms of variables until you've proved them safe -- and you may as well do it all the time for many purposes. Or document that your scripts will fail horribly if users attempt to process files with blanks in the names. (And there are other characters to worry about too -- backticks could be rather nasty too, for instance.)
A: I believe it's due to
SHELLVAR=$(true)
if test $SHELLVAR = "yes" ; then echo "yep" ; fi
# bash: test: =: unary operator expected
as well as
if test $UNDEFINEDED = "yes" ; then echo "yep" ; fi
# bash: test: =: unary operator expected
and
SHELLVAR=" hello"
if test $SHELLVAR = "hello" ; then echo "yep" ; fi
# yep
however, this should usually work:
SHELLVAR=" hello"
if test "$SHELLVAR" = "hello" ; then echo "yep" ; fi
#<no output>
but when it complains in output somewhere else, it's hard to tell what it's complaining about, I guess, so
SHELLVAR=" hello"
if test "x$SHELLVAR" = "xhello" ; then echo "yep" ; fi
works just as well, but would be easier to debug.
A: There are two reasons that I know of for this convention: http://tldp.org/LDP/abs/html/comparison-ops.html In a compound test, even quoting the string variable might not suffice.
[ -n "$string" -o "$a" = "$b" ]
may cause an error with some versions of Bash if $string is empty. The safe way is to append an extra character to possibly empty variables,
[ "x$string" != x -o "x$a" = "x$b" ]
(the "x's" cancel out).
Second, in shells other than Bash, especially older ones, the test conditions like '-z' to test for an empty variable did not exist, so while this:
if [ -z "$SOME_VAR" ]; then
  echo "this variable is not defined"
fi
will work fine in Bash, if you're aiming for portability across various UNIX environments where you can't be sure that the default shell will be Bash and whether it supports the -z test condition, it's safer to use the form
if [ "x$SOME_VAR" = "x" ]
since that will always have the intended effect. Essentially this is an old shell scripting trick for finding an empty variable, and it's still used today for backwards compatibility, despite there being cleaner methods available.
A: If you're using a shell that does simple substitution and the SHELL_VAR variable does not exist (or is blank), then you need to watch out for the edge cases. The following translations will happen:
if test $SHELL_VAR = yes; then    -->  if test = yes; then
if test x$SHELL_VAR = xyes; then  -->  if test x = xyes; then
The first of these will generate an error, since the first argument to test has gone missing. The second does not have that problem. Your case translates as follows:
if test "x$SHELL_VAR" = "xyes"; then  -->  if test "x" = "xyes"; then
The x, at least for POSIX-compliant shells, is actually redundant, since the quotes ensure that both an empty argument and one containing spaces are interpreted as a single object.
A: I used to do that in DOS, when the SHELL_VAR might be undefined.
A: If you don't do the "x$SHELL_VAR" thing, then if $SHELL_VAR is undefined, you get an error about "=" not being a monadic operator or something like that.
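The translations described above are easy to reproduce. In this sketch (the variable names are throwaway examples), the empty/unset case takes the else branch under both the x-prefix style and plain quoting, while the set case matches as expected:

```shell
# An unset variable disappears entirely after substitution if unquoted,
# so test receives only `= yes` and complains; the x-prefix (or plain
# quoting) keeps both operands present.
unset MAYBE_EMPTY

# Old-style guard: both sides keep at least the literal x.
if test "x$MAYBE_EMPTY" = "xyes"; then result1=then; else result1=else; fi

# Modern equivalent: plain quoting is enough in POSIX shells.
if test "$MAYBE_EMPTY" = "yes"; then result2=then; else result2=else; fi

MAYBE_EMPTY=yes
if test "x$MAYBE_EMPTY" = "xyes"; then result3=then; else result3=else; fi

echo "$result1 $result2 $result3"
# prints: else else then
```

Both guarded styles agree in every case; only the unquoted, un-prefixed form breaks when the variable is unset.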
{ "language": "en", "url": "https://stackoverflow.com/questions/174119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: JKS protection Are JKS (Java Key Store) files encrypted? Do they provide full protection for encryption keys, or do I need to rely solely on access control? Is there a way to ensure that the keys are protected? I'm interested in the gritty details, including algorithm, key management, etc. Is any of this configurable? A: To be more precise: * *PrivateKeys and SecretKeys within a JKS file are encrypted with their own password. *Integrity of trusted certificates is protected with a MAC using the key store password. *The file as a whole is not encrypted, and an attacker can list its entries without the key store password. A: They are encrypted. The algorithm is provider dependent. The provider will return the key/certificate based on a password. If you need strong security, find a keystore provider that uses a strong encryption.
{ "language": "en", "url": "https://stackoverflow.com/questions/174131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Counting the number of deleted rows in a SQL Server stored procedure In SQL Server 2005, is there a way of deleting rows and being told how many were actually deleted? I could do a select count(*) with the same conditions, but I need this to be utterly trustworthy. My first guess was to use the @@ROWCOUNT variable - but that isn't set, e.g.
delete from mytable where datefield = '5-Oct-2008'
select @@ROWCOUNT
always returns a 0. MSDN suggests the OUTPUT construction, e.g.
delete from mytable where datefield = '5-Oct-2008'
output datefield into #doomed

select count(*) from #doomed
this actually fails with a syntax error. Any ideas?
A: Just do this:
SET NOCOUNT off ;
SELECT @p1 = @@ROWCOUNT
where p1 is the output parameter you set in the stored procedure. Hope it helps.
A: In your example @@ROWCOUNT should work - it's the proper way to find out the number of deleted rows. If you're trying to delete something from your application, then you'll need to use SET NOCOUNT ON. According to MSDN, the @@ROWCOUNT function is updated even when SET NOCOUNT is ON, as SET NOCOUNT only affects the message you get after the execution. So if you're trying to work with the result of @@ROWCOUNT from, for example, ADO.NET, then SET NOCOUNT ON should definitely help.
A: Have you tried SET NOCOUNT OFF?
A: I use @@ROWCOUNT for this exact purpose in SQL 2000 with no issues. Make sure that you're not inadvertently resetting this count before checking it, though (BOL: 'This variable is set to 0 by any statement that does not return rows, such as an IF statement').
A: I found a case where you can't use @@rowcount: when you want to know the distinct count of the values that were deleted instead of the total count. In this case you would have to do the following:
delete from mytable where datefield = '5-Oct-2008'
output deleted.datefield into #doomed

select count(distinct datefield) from #doomed
The syntax error in the OP was because output did not include deleted before the datefield field name.
A: Out of curiosity, how are you calling the procedure? (I'm assuming it is a stored procedure?). The reason I ask is that there is a difference between a stored procedure's return value (which would be 0 in this case), and a rowset result -- which in this case would be a single row with a single column. In ADO.Net, the former would be accessed by a parameter and the latter with a SqlDataReader. Are you, perhaps, mistaking the procedure's return value as the rowcount? A: Create temp table with one column, id. Insert into temp table selecting the ids you want to delete. That gives you your count. Delete from your table where id in (select id from temp table)
{ "language": "en", "url": "https://stackoverflow.com/questions/174143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Storing X509 certificates in DB - Yea or Nay? I find myself needing to store public key certificates, and a single private key certificate, for an in-house application. A member of our team suggested storing the X509 certificates in the database, instead of in the Windows certificate store, as we have been doing up until now. I don't like re-inventing the wheel, but I have to at least consider the idea. It would mean keeping our data more centralized, which is good, I suppose. The initial barriers that I can see are:
* The private key still needs to be stored somewhere, and I don't know if shoehorning it into a 'public key' table is a good idea. I don't like the idea of setting up a table for a single element, either. Perhaps just keep the private key as a local file? (A .pfx file, for instance.)
* Revocation lists. We would probably have to set up a process to deal with revoked public keys.
I don't have a lot of experience with X509 certificates, so my question is: are there any other problems we are likely to encounter storing public key certificates in a database, instead of going with the Windows certificate store? It's worth bearing in mind that this application is going to be rolled out onto several business clients' servers, so keeping all the data in a single place will make for easier backups. Oh, and the in-house app in question is being developed with C#. Thanks!
A: What is the purpose of your application? If you are handling all the crypto in your application, and can reference a PKCS#12 cert + private key file, then going the database route is probably fine. If you need to use the Windows Crypto API to access the certs, then you'll probably want to keep using the built-in certificate store. You gain some advantages here, as you can protect the private key on an external device, like a smart card or Hardware Security Module (HSM).
You'll want to make sure that you go through a significant effort to protect the private key if you're storing everything on the local disk. Be sure to use a strong passphrase and use best practices to protect this passphrase in your app.
A: I would be reluctant to move the private key to any other location unless really necessary. It's not required if the key is being used for signing; it would only be required if the key is being used for decrypting and you wish to archive it for the future. Even in this instance, the certificate authority that issued the certificate would commonly be able to handle archival and recovery. This is certainly the case for the more popular CAs, such as Microsoft and Entrust. If you must store it, then encrypt it using AES and a key that you are able to protect, either in an HSM (Hardware Security Module) or on a smart card. Do not leave this key in plain text (in a file or the registry). You would also wish to protect this key in transit between its generation location and the central database (SSL, a VPN, etc.). Revocation lists are published by the certificate authority in most environments, usually to an LDAP directory or similar.
{ "language": "en", "url": "https://stackoverflow.com/questions/174149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Pre and post increment/decrement operators in C# In C#, does anybody know why the following will compile:
int i = 1;
++i;
i++;
but this will not compile?
int i = 1;
++i++;
(Compiler error: The operand of an increment or decrement operator must be a variable, property or indexer.)
A: My guess would be that ++i returns an integer value type, to which you then try to apply the ++ operator. Seeing as you can't write to a value type (think about 0++ and whether that would make sense), the compiler will issue an error. In other words, those statements are parsed as this sequence:
++i  (i = 2, returns 2)
2++  (nothing can happen here, because you can't write a value back into '2')
A: You are running one of the operands on the result of the other, and the result of an increment/decrement is a value - you cannot use increment/decrement on a value; it has to be a variable that can be set.
A: For the same reason you can't say
5++;
or
f(i)++;
A function returns a value, not a variable. The increment operators also return values, but cannot be applied to values.
A: My guess: to avoid such ugly and unnecessary constructs. Also, it would use 2 operations (2x INC) instead of one (1x ADD 2). Yes, I know ... "but i want to increase by two and i'm a l33t g33k!" Well, don't be a geek; write something that doesn't look like an inadvertent mistake, like this:
i += 2;
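C# is not alone here: Java rejects ++i++ with the same kind of "variable expected" complaint, and its increment semantics match the explanation above, so the value-versus-variable distinction can be sketched in Java (the class name is mine):

```java
public class IncrementDemo {
    public static void main(String[] args) {
        int i = 1;
        int post = i++; // post gets the OLD value, 1; i becomes 2
        int pre = ++i;  // i becomes 3 first; pre gets the NEW value, 3
        // `++i++` would mean applying ++ to the VALUE produced by ++i,
        // which is no more writable than the literal 2, hence the
        // compiler error quoted in the question.
        System.out.println(i + " " + post + " " + pre); // 3 1 3
    }
}
```

Both operators produce a value; only their operand must be a writable variable, which is exactly why chaining them fails.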
{ "language": "en", "url": "https://stackoverflow.com/questions/174153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Switch statement fallthrough in C#? Switch statement fallthrough is one of my personal major reasons for loving switch vs. if/else if constructs. An example is in order here:
static string NumberToWords(int number)
{
    string[] numbers = new string[] { "", "one", "two", "three", "four",
        "five", "six", "seven", "eight", "nine" };
    string[] tens = new string[] { "", "", "twenty", "thirty", "forty",
        "fifty", "sixty", "seventy", "eighty", "ninety" };
    string[] teens = new string[] { "ten", "eleven", "twelve", "thirteen",
        "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen" };

    string ans = "";
    switch (number.ToString().Length)
    {
        case 3:
            ans += string.Format("{0} hundred and ", numbers[number / 100]);
        case 2:
            int t = (number / 10) % 10;
            if (t == 1)
            {
                ans += teens[number % 10];
                break;
            }
            else if (t > 1)
                ans += string.Format("{0}-", tens[t]);
        case 1:
            int o = number % 10;
            ans += numbers[o];
            break;
        default:
            throw new ArgumentException("number");
    }
    return ans;
}
The smart people are cringing because the string[]s should be declared outside the function: well, they are, this is just an example. The compiler fails with the following errors:
Control cannot fall through from one case label ('case 3:') to another
Control cannot fall through from one case label ('case 2:') to another
Why? And is there any way to get this sort of behaviour without having three ifs?
A: (Copy/paste of an answer I provided elsewhere) Falling through switch-cases can be achieved by having no code in a case (see case 0), or using the special goto case (see case 1) or goto default (see case 2) forms:
switch (/*...*/)
{
    case 0: // shares the exact same code as case 1
    case 1:
        // do something
        goto case 2;
    case 2:
        // do something else
        goto default;
    default:
        // do something entirely different
        break;
}
A: They left this behaviour out by design, to avoid the problems caused when fallthrough was not intentional.
It can be used only if there is no statement in the case part, like:
switch (whatever)
{
    case 1:
    case 2:
    case 3:
        boo;
        break;
}
A: The "why" is to avoid accidental fall-through, for which I'm grateful. This is a not uncommon source of bugs in C and Java. The workaround is to use goto, e.g.
switch (number.ToString().Length)
{
    case 3:
        ans += string.Format("{0} hundred and ", numbers[number / 100]);
        goto case 2;
    case 2:
        // Etc
}
The general design of switch/case is a little bit unfortunate in my view. It stuck too close to C - there are some useful changes which could be made in terms of scoping etc. Arguably a smarter switch which could do pattern matching etc. would be helpful, but that's really changing from switch to "check a sequence of conditions" - at which point a different name would perhaps be called for.
A: They changed the switch statement (from C/Java/C++) behavior for C#. I guess the reasoning was that people forgot about the fall-through and errors were caused. One book I read said to use goto to simulate it, but this doesn't sound like a good solution to me.
A: To add to the answers here, I think it's worth considering the opposite question in conjunction with this, viz. why did C allow fall-through in the first place? Any programming language of course serves two goals:
* Provide instructions to the computer.
* Leave a record of the intentions of the programmer.
The creation of any programming language is therefore a balance between how best to serve these two goals. On the one hand, the easier it is to turn into computer instructions (whether those are machine code, bytecode like IL, or the instructions are interpreted on execution), the more able that process of compilation or interpretation will be to be efficient, reliable and compact in output. Taken to its extreme, this goal results in our just writing in assembly, IL, or even raw op-codes, because the easiest compilation is where there is no compilation at all.
Conversely, the more the language expresses the intention of the programmer, rather than the means taken to that end, the more understandable the program, both when writing and during maintenance. Now, switch could always have been compiled by converting it into the equivalent chain of if-else blocks or similar, but it was designed as allowing compilation into a particular common assembly pattern where one takes a value and computes an offset from it (whether by looking up a table indexed by a perfect hash of the value, or by actual arithmetic on the value*). It's worth noting at this point that today, C# compilation will sometimes turn switch into the equivalent if-else, and sometimes use a hash-based jump approach (and likewise with C, C++, and other languages with comparable syntax). In this case there are two good reasons for allowing fall-through:
* It just happens naturally anyway: if you build a jump table into a set of instructions, and one of the earlier batches of instructions doesn't contain some sort of jump or return, then execution will just naturally progress into the next batch. Allowing fall-through was what would "just happen" if you turned the switch-using C into jump-table–using machine code.
* Coders who wrote in assembly were already used to the equivalent: when writing a jump table by hand in assembly, they would have to consider whether a given block of code would end with a return, a jump outside of the table, or just continue on to the next block. As such, having the coder add an explicit break when necessary was "natural" for the coder too.
At the time, therefore, it was a reasonable attempt to balance the two goals of a computer language as it relates to both the produced machine code and the expressiveness of the source code. Four decades later, though, things are not quite the same, for a few reasons:
* Coders in C today may have little or no assembly experience.
Coders in many other C-style languages are even less likely to (especially Javascript!). Any concept of "what people are used to from assembly" is no longer relevant. *Improvements in optimisations mean that the likelihood of switch either being turned into if-else because it was deemed the approach likely to be most efficient, or else turned into a particularly esoteric variant of the jump-table approach are higher. The mapping between the higher- and lower-level approaches is not as strong as it once was. *Experience has shown that fall-through tends to be the minority case rather than the norm (a study of Sun's compiler found 3% of switch blocks used a fall-through other than multiple labels on the same block, and it was thought that the use-case here meant that this 3% was in fact much higher than normal). So the language as studied make the unusual more readily catered-to than the common. *Experience has shown that fall-through tends to be the source of problems both in cases where it is accidentally done, and also in cases where correct fall-through is missed by someone maintaining the code. This latter is a subtle addition to the bugs associated with fall-through, because even if your code is perfectly bug-free, your fall-through can still cause problems. Related to those last two points, consider the following quote from the current edition of K&R: Falling through from one case to another is not robust, being prone to disintegration when the program is modified. With the exception of multiple labels for a single computation, fall-throughs should be used sparingly, and commented. As a matter of good form, put a break after the last case (the default here) even though it's logically unnecessary. Some day when another case gets added at the end, this bit of defensive programming will save you. So, from the horse's mouth, fall-through in C is problematic. 
It's considered good practice to always document fall-throughs with comments, which is an application of the general principle that one should document where one does something unusual, because that's what will trip later examination of the code and/or make your code look like it has a novice's bug in it when it is in fact correct. And when you think about it, code like this: switch(x) { case 1: foo(); /* FALLTHRU */ case 2: bar(); break; } Is adding something to make the fall-through explicit in the code, it's just not something that can be detected (or whose absence can be detected) by the compiler. As such, the fact that on has to be explicit with fall-through in C# doesn't add any penalty to people who wrote well in other C-style languages anyway, since they would already be explicit in their fall-throughs.† Finally, the use of goto here is already a norm from C and other such languages: switch(x) { case 0: case 1: case 2: foo(); goto below_six; case 3: bar(); goto below_six; case 4: baz(); /* FALLTHRU */ case 5: below_six: qux(); break; default: quux(); } In this sort of case where we want a block to be included in the code executed for a value other than just that which brings one to the preceding block, then we're already having to use goto. (Of course, there are means and ways of avoiding this with different conditionals but that's true of just about everything relating to this question). As such C# built on the already normal way to deal with one situation where we want to hit more than one block of code in a switch, and just generalised it to cover fall-through as well. It also made both cases more convenient and self-documenting, since we have to add a new label in C but can use the case as a label in C#. In C# we can get rid of the below_six label and use goto case 5 which is clearer as to what we are doing. (We'd also have to add break for the default, which I left out just to make the above C code clearly not C# code). 
In summary therefore: * *C# no longer relates to unoptimised compiler output as directly as C code did 40 years ago (nor does C these days), which makes one of the inspirations of fall-through irrelevant. *C# remains compatible with C in not just having implicit break, for easier learning of the language by those familiar with similar languages, and easier porting. *C# removes a possible source of bugs or misunderstood code that has been well-documented as causing problems for the last four decades. *C# makes existing best-practice with C (document fall through) enforceable by the compiler. *C# makes the unusual case the one with more explicit code, the usual case the one with the code one just writes automatically. *C# uses the same goto-based approach for hitting the same block from different case labels as is used in C. It just generalises it to some other cases. *C# makes that goto-based approach more convenient, and clearer, than it is in C, by allowing case statements to act as labels. All in all, a pretty reasonable design decision *Some forms of BASIC would allow one to do the likes of GOTO (x AND 7) * 50 + 240 which while brittle and hence a particularly persuasive case for banning goto, does serve to show a higher-language equivalent of the sort of way that lower-level code can make a jump based on arithmetic upon a value, which is much more reasonable when it's the result of compilation rather than something that has to be maintained manually. Implementations of Duff's Device in particular lend themselves well to the equivalent machine code or IL because each block of instructions will often be the same length without needing the addition of nop fillers. †Duff's Device comes up here again, as a reasonable exception. The fact that with that and similar patterns there's a repetition of operations serves to make the use of fall-through relatively clear even without an explicit comment to that effect. 
A: Switch fallthrough is historically one of the major source of bugs in modern softwares. The language designer decided to make it mandatory to jump at the end of the case, unless you are defaulting to the next case directly without processing. switch(value) { case 1:// this is still legal case 2: } A: You can 'goto case label' http://www.blackwasp.co.uk/CSharpGoto.aspx The goto statement is a simple command that unconditionally transfers the control of the program to another statement. The command is often criticised with some developers advocating its removal from all high-level programming languages because it can lead to spaghetti code. This occurs when there are so many goto statements or similar jump statements that the code becomes difficult to read and maintain. However, there are programmers who point out that the goto statement, when used carefully, provides an elegant solution to some problems... A: A jump statement such as a break is required after each case block, including the last block whether it is a case statement or a default statement. With one exception, (unlike the C++ switch statement), C# does not support an implicit fall through from one case label to another. The one exception is if a case statement has no code. -- C# switch() documentation A: After each case statement require break or goto statement even if it is a default case. A: You can achieve fall through like c++ by the goto keyword. EX: switch(num) { case 1: goto case 3; case 2: goto case 3; case 3: //do something break; case 4: //do something else break; case default: break; } A: Just a quick note to add that the compiler for Xamarin actually got this wrong and it allows fallthrough. It has supposedly been fixed, but has not been released. Discovered this in some code that actually was falling through and the compiler did not complain. 
A: switch (C# Reference) says C# requires the end of switch sections, including the final one, So you also need to add a break; to your default section, otherwise there will still will be a compiler error. A: You forgot to add the "break;" statement into case 3. In case 2 you wrote it into the if block. Therefore try this: case 3: { ans += string.Format("{0} hundred and ", numbers[number / 100]); break; } case 2: { int t = (number / 10) % 10; if (t == 1) { ans += teens[number % 10]; } else if (t > 1) { ans += string.Format("{0}-", tens[t]); } break; } case 1: { int o = number % 10; ans += numbers[o]; break; } default: { throw new ArgumentException("number"); }
{ "language": "en", "url": "https://stackoverflow.com/questions/174155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "410" }
Q: Is the Entity Framework basically another CRUD code generator? Is entity framework just a fancy name for another CRUD code generator? Or is there more to it? A: Thats sort of like saying object oriented programming is basically proceedural with a few modifications. While EF is NOT considered the best example of object relational mapping, the principles it attempts to cover have been in use for almost 30 years. I recommend reading Dr. Raymond Chen on the Entity Relationship Model ( he developed it and has a paper on it.) Wikipedia has some info as well. http://en.wikipedia.org/wiki/Entity_relationship_model. The best tool on the market for this approach is LLBLGen. It has 5 years of maturity and runs circles around MS EF. A: The Entity Framework is suitable for all applications which would benefit from having an ORM (object relational mapping) layer. Daniel Simmons post goes into detail on this. http://blogs.msdn.com/dsimmons/archive/2008/05/17/why-use-the-entity-framework.aspx The EF allows you to have classes which contain business logic not related to persistence through usage of partial classes (this approach is not specific to the EF however). We have complex domain objects which do validation and support complex business rules which are also persisted in part via EF so this is more than CRUD at heart.
{ "language": "en", "url": "https://stackoverflow.com/questions/174163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Python - Py2exe can't build .exe using the 'email' module py2exe does not work with the standard email module Hello. I am trying to use py2exe for converting a script into an exe. The build process shows this: The following modules appear to be missing ['email.Encoders', 'email.Generator', 'email.Iterators', 'email.MIMEBase', 'email.MIMEMultipart', 'email.MIMEText', 'email.Utils', 'email.base64MIME'] The executable does not work. The referenced modules are not included. I researched this on the Internet and I found out that py2exe has a problem with the Lazy import used in the standard lib email module. Unfortunately I have not succeeded in finding a workaround for this problem. Can anyone help? Thank you, P.S. Imports in the script look like this: Code: Select all import string,time,sys,os,smtplib from email.MIMEMultipart import MIMEMultipart from email.MIMEBase import MIMEBase from email.MIMEText import MIMEText from email import Encoders A: Have a look at this question how-to-package-twisted-program-with-py2exe it seems to be the same problem. The answer given there is to explicitly include the modules on the command line to py2exe. A: What version of Python are you using? If you are using 2.5 or 2.6, then you should be doing your import like: import string,time,sys,os,smtplib from email.mime.multipart import MIMEMultipart from email.mime.base import MIMEBase from email.mime.text import MIMEText from email import Encoders I'm pretty certain that py2exe's modulefinder can correctly find the email package if you use it correctly (i.e. use the above names in Python 2.5+, or use the old names in Python 2.4-). Certainly the SpamBayes setup script does not need to explicitly include the email package, and it includes the email modules without problem. The other answers are correct in that if you do need to specifically include a module, you use the "includes" option, either via the command-line, or passing them in when you call setup. A: Use the "includes" option. 
See: http://www.py2exe.org/index.cgi/ListOfOptions A: If you don't have to work with py2exe, bbfreeze works better, and I've tried it with the email module. http://pypi.python.org/pypi/bbfreeze/0.95.4 A: I got it working by explicitly including missing modules in setup.py: OLD setup.py: setup(console = ['main.py']) New setup.py: setup(console = ['main.py'], options={"py2exe":{"includes":["email.mime.multipart","email.mime.text"]}}) A: while porting my app from py24 to 26 I had the same problem. After reading http://www.py2exe.org/index.cgi/ExeWithEggs if found finaly following solution: in my application.py: import email import email.mime.text import email.mime.base import email.mime.multipart import email.iterators import email.generator import email.utils try: from email.MIMEText import MIMEText except: from email.mime import text as MIMEText in setup.py: import modulefinder modulefinder.AddPackagePath("mail.mime", "base") modulefinder.AddPackagePath("mail.mime", "multipart") modulefinder.AddPackagePath("mail.mime", "nonmultipart") modulefinder.AddPackagePath("mail.mime", "audio") modulefinder.AddPackagePath("mail.mime", "image") modulefinder.AddPackagePath("mail.mime", "message") modulefinder.AddPackagePath("mail.mime", "application") For py2exe to work with packages loaded during runtime, the main thing seems to be that u explicitly import the modules needed by your app somewhere in your app. And then give py2exe in setup.py with moudlefinder.AddPackagePath( , ) the hint, where to search for modules it couldn't find by std. introspection. in the app A: This solve my problem: in setup.py edit includes = ["email"] A: Please try this. This works on my py2exe build. Just replace "project_name.py" with your main script. The EXTRA_INCLUDES are packages that you need to include in your build like email package. I this works with you also. 
from distutils.core import setup import py2exe, sys, os sys.argv.append('py2exe') EXTRA_INCLUDES = [ "email.iterators", "email.generator", "email.utils", "email.base64mime", "email", "email.mime", "email.mime.multipart", "email.mime.text", "email.mime.base", "lxml.etree", "lxml._elementpath", "gzip" ] setup( options = {'py2exe': {'bundle_files': 1, 'compressed': True, 'includes': EXTRA_INCLUDES, 'dll_excludes': ['w9xpopen.exe','MSVCR71.dll']}}, console = [{'script': "project_name.py"}], zipfile = None, )
{ "language": "en", "url": "https://stackoverflow.com/questions/174170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Including non-Java sources in a Maven project I'm starting on a project that I expect will include a substantial amount of non-Java code (mostly shell and SQL scripts). I would still like to manage this project with Maven. What are the best practices wrt non-Java source code and Maven? Where should the source go? What happens to them during the different lifecycle phases? Any pointers or links to more information would be greatly appreciated. A: You must not put the non-Java code into resources, if you don't want to include these files into your JAR files like heckj has suggested. Everything that is located in resources is automatically copied into the JAR file and I guess you don't want shell scripts and SQL scripts be included in a JAR file, right? So the Maven way would be to create additional folders under src/main. E.g. create a sql folder for your SQL scripts, an sh folder for your shell scripts and so on. This is the location where other Maven plugins also expect sources, e.g. for C++, Groovy and so on. A: I keep in a separate folder src/main/sql. Check this link for more info.
{ "language": "en", "url": "https://stackoverflow.com/questions/174171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Oracle APEX access Is Oracle Application Express suitable for Intranet client/server application? If so, what should I do to enable client access to application? Well, I am working as a PowerBuilder/Oracle developer, so I am familiar with client/server architecture. I have recently read an article about APEX so I would like to develop APEX variation of my PowerBuilder/Oracle app, which is pretty much HR app. It should not be Internet accessible app, just a couple of windows boxes in a small network. I have no problem with developing app in PL/SQL and SQL (will have to read and ask a lot, though). I would just like to know is APEX suitable for Intranet app - it should be as it is suitable for Internet app :) - and how should I enable client's browser to access an application since there would be nothing like http:/www.appdomain.com ? I know next to nothing about win networks :) A: APEX is NOT a client/server application development environment, hence the answer is clearly "no". Apex is an n-tier web application architecture where n=3: Your browser, the Apex web application server, and the database. The app server in this case is a bit of a funny hybrid because it actually executes almost all the code in the database, not in the middle tier. Client/server is where you deploy an application on the user side (as opposed to using the browswer as noted above) and you connect to a server (e.g. oracle db server). Very common back in the day was VB front end connecting to an Oracle backend. Not so much of that these days ;) APEX is GREAT if you have an Oracle shop. If it is a small application, you can use "Oracle Database XE", the free version that comes bundled with APEX with limitations on CPU and storage. I'm guessing you might be asking if the XE database version from Oracle is good/stable and ready for use in a client server application setting? 
IMHO, absolutely a great place to start, or stay with small, simple applications, but it it stuck where it is in terms of fixes to known problems, bugs, etc.. While these tend to be very very specific situations where the right combination of factors appears, you don't want to set expectations that license and support will be free to find out later on that you will have to go back to the full enterprise version of the database. Also not the best bargaining position to be in with Oracle sales people :) Maybe you could phrase your question a bit differently? A: "since there would be nothing like http:/www.appdomain.com" Given you are familiar with client/server technology, I guess you know about TNSNAMES.ORA Your Apex host would be defined in a similar manner to the way the database host is defined in tnsnames.ora If your tnsnames.ora says that your database is at 192.168.0.255, then your Apex host would have a similar (ugly) name. If it says it is defined as dbhost, then whoever in your organisation has mapped dbhost to a particular machine can do the same for your webserver. The only caveat is that sometimes you have a specific proxy defined in your browser and your apex webserver may need to be added as an exception. A: Igor, I'm coming really late to this party, but you didn't seem to get an answer you liked. Apex is absolutely a great tool for developing a small in-house web application such as you're describing. It'll be miles easier than doing the same thing in PB (which I also use). If Oracle is already installed on your network then ask the DBAs to also install Apex, which is installed only within Oracle (no external stuff needed), which can be done fairly quickly. (I'm running Apex on my home PC on top of Oracle XE.) After the installation, the DBAs will have to tell you the URL for Apex. After they've done that, walk yourself through the 2-day Developer Guide to get an idea of how Apex apps are developed, then try it out. 
It'll take a bit to get the hang of it, but once you do, it's really efficient. But if you understand the data and business logic for your application, you shouldn't take long. There are also a lot of sample applications out there you can install and then check out their source code for methods. Once you get started, join Oracle Apex User Forum, which is a great community for developers helping eachother. You'll find me there regularly. Good luck! Stew A: Application Express applications are accessed via a URL in a browser, with a URL something like: http://www.mydomain.com/pls/mydad/f?p=MYAPP A client/server application would have to launch a browser window and pass in the appropriate URL.
{ "language": "en", "url": "https://stackoverflow.com/questions/174174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In which layer are you putting your REST api? Our application is structured something like: UI <--> REST API <--> Workflow <--> Business Logic <--> DAL <--> DB However, I am seeing a few examples where it looks like people are doing UI <--> Workflow <--> REST API <--> Business Logic <--> DAL <--> DB Is this my imagination? Or is the second option considered a viable alternative? A: It really is relative to what you mean workflow. Hypermedia as the engine of application state will give you a directed graph of states/resources. It is not necessary that these graphs form a workflow (e.g have a specific start and end point). They may well form a cycle, have bidirectional links and whatnot. I assume this graph is somehow derrived from the business logic. If you include your workflow (a specific path from one point to another via the graph) in you UI, you make some assumptions about the REST API therefore tightly coupling your UI with the business logic, therefore throwing the discoverability of REST away. In general mixing workflows (imperative programming) with REST (declarative programming) is very problematic. The best approach would be to have an adaptive UI that can allow the user to navigate the network of states instead of constraining them through bespoke, predetermined workflows. That is how a browser works, anyways. If you really need to have some workflows though, you could implement them by creating a chain of interconnected resources and guiding the user to the first one. In this sense, your first option would be valid although I find the seperation of business logic and workflow to be a grey area. Workflows are part of the business logic or, to state it better, are derrived from the business logic. 
These opinions are my own, however a good, relevant article on the topic can be found here: http://www.infoq.com/articles/webber-rest-workflow A: I am just getting exposed to what ReST really is now and hopefully I'm not way off base here, but as I understand it, the client should be responsible for choosing what states to transfer to (workflow), so yes I think #2 is definitely valid. In fact I'd be interested to know how you implement workflow in your ReST API. A: REST is access to resources. The question is "What's a resource"? Most answers are that it's a pretty low-level piece of information. A composite application or workflow depends on one or more resources. It's hard to say that a resource depends on a workflow. Not imspossible. But hard. When designing a RESTful interface, you've only got the CRUD rules available to you. The most common expectation is that the response is totally married to your request. When you POST an X, you expect that the only state change is to create a new X. Not create an X and and Y with an optional pair of Z's. I'd suggest that your second alternative puts REST in a better context -- access to stateful objects.
{ "language": "en", "url": "https://stackoverflow.com/questions/174181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Recursive Rails Nested Resources I have a Rails application for project management where there are Project and Task models. A project can have many tasks, but a task can also have many tasks, ad infinitum. Using nested resources, we can have /projects/1/tasks, /projects/1/tasks/new, /projects/1/tasks/3/edit etc. However, how do you represent the recursive nature of tasks RESTfully? I don't want go another level deep, so perhaps the following would do: map.resources :tasks do |t| t.resources :tasks end That would give me the following urls: /tasks/3/tasks/new /tasks/3/tasks/45/edit Or perhaps when it comes to an individual task I can just use /tasks/45/edit Is this a reasonable design? Cam A: Going anywhere beyond a single nested route is generally considered a bad idea. From page 108 of The Rails Way: "Jamis Busk a very influential figure in the Rails community, almost as much as David himself. In February 2007, vis his blog, he basically told us that deep nesting was a _bad_ thing, and proposed the following rule of thumb: Resources should never be nested more than one level deep." Now some would argue with this (which is discussed on page 109) but when you're talking about nesting tasks with tasks it just doesn't seem to make much sense. I would approach your solution a different way and like it was mentioned above, a project should have many tasks but for a task to have many tasks doesn't seem correct and maybe those should be re-named as sub-tasks or something along those lines. A: there's no reason they should have decendant URLS. 
logically: /projects/1 --> project 1 /projects/1/edit ( etc ) /tasks/1 --> task 1 /project/1/tasks --> task list for project 1 /project/1/tasks/new /project/1/tasks/1/edit -> /tasks/5/edit ( redundancy ) /project/1/tasks/1 -> redirect to /tasks/1 /tasks/1/project -> redirect to /projects/1 /tasks/3/tasks --> list of tasks that are children tasks of task 3 /tasks/3/tasks/5 -> redirect /tasks/5/ ( because you don't really need to have a recursive URL ) /tasks/5/parents -> list of tasks that are parents of tasks 3 /tasks/5/parents/3 -> redirect /tasks/3/ there's no reason IMHO to require the URLs be associative, you're not going to need to know that task 5 is a child of task 3 to edit task 5. A: I'm currently on a project that does something similar. The answer I used that was very elegant was I added a parent_id column that pointed to another task. When doing your model, make sure to do the following: belongs_to :project belongs_to :parent, :class_name => "Task" has_many :children, :class_name => "Task", :foreign_key => "parent_id" ...and then you can do recursion by: def do_something(task) task.children.each do |child| puts "Something!" do_something(child) end end This way, you can reference your tasks by its parent or by its children. When doing your routes, you'll access a single task always by /project/:project_id/task/:task_id even though it may have a parent or children. Just make sure that you don't have a task that has its parent the same as its child or else you'll go into an infinite loop when you do your recursion to find all the children. You can add the condition to your validation scripts to make sure it doesn't. See also: acts_as_tree
{ "language": "en", "url": "https://stackoverflow.com/questions/174190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Visual Studio: Is there a "move class to different namespace" refactoring? I'm doing some architectural cleanup that involves moving a bunch of classes into different projects and/or namespaces. Currently I'm moving the files by hand, building, and then manually adding using Foo statements as needed to resolve compilation errors. Anyone know of a smarter way of doing this? (We're a CodeRush and Refactor! shop, but I'd be interested to hear if Resharper has support for this) A: There are partial solutions for VS 2015 & VS 2017 without Resharper using free extensions. One extension which I like today (end of 2017) is the Fix Namespace VS Extension: https://marketplace.visualstudio.com/items?itemName=vs-publisher-599079.FixNamespace#overview It analyses the folder structure of your solution and offers namespace refactoring using that. Unfortunately it isn't perfect: It doesn't track dependencies that well, but solved the lion's share of the work for me. A: Visual Studio 2019 provides at least 2 built-in options: 'Move to namespace...' refactoring can be triggered on any class, and VS will prompt for the target namespace. 'Change namespace to...' refactoring is provided for when the current file namespace doesn't match with the folder structure. This can be used to move individual classes to a different namespace by: * *creating the desired folder structure *moving the file *applying the mentioned refactoring (CTRL+. with the cursor over the namespace) These operation ensures that all references are updated accordingly. A: Visual Studio 2010 has the possibility to rename a namespace. Place the cursor over the namespace name and press F2. Or simply rename it in the code and press Shift+Alt+F10, Enter after seeing the red squiggle appear. Reharper can also rename namespaces. Quote: The Rename Namespace refactoring allows users to rename a specific namespace and automatically correct all references to the namespace in the code. 
The following usages are renamed: * *Namespace statements *Using directives *Qualified names of types A: As mentioned in the comments, this answer is now outdated. Please see the up-to-date answer below Resharper is the only tool I am aware of what has this ability. There is also a lot of other functionality that it has that is missing in CodeRush and Refactor! A: With Resharper: CTRL+R+O Then press the down arrow key twice to select Move Type To Another Namespace. A: This answer applies to at least Visual Studio 2013 and 2015 with no resharper required * *Move class files to new folder *Open 'Find and replace' *Select 'Replace in Files' *Type the original namespace definition in the 'Find what' field eg. MyCorp.AppStuff.Api *Type the new namespace definition in the 'Replace with' field eg. MyCorp.AppStuff.Api.Extensions *Select the new folder using the 'Look in' field's browse button ..., or type the folder path *Press the Replace All button A: Since the answer above was provided (I'm guessing) this feature has been added to CodeRush. Just place the carat on the Type to be moved and you'll see a Move Type to Namespace option on the Refactor! context menu. This will move the type to the new namespace and update references. You may still want to move the file to a solution folder that matches the name of the namespace though. A: It's not the best outcome but can be done without plugins or tools, only with Visual Studio. Find and replace in Entire Solution, Match case, Match whole word. Find what: class name, Replace with: New.Namespace.ClassName (fully qualified class name). If you have 100+ references of the moved class and other classes in old namespace what are not moved this is the only foolproof and free solution I found. The only case when it leads to errors is when you have same class name in other namespace. 
A: If you cannot, or do not want to use Re$harper, Notepad++ is your friend: * *Make sure you don't have usaved changes inside Visual Studio for the files you need to move to the new namespace *Open all the files that contain the namespace that needs to be changed in Notepad++ *Open Find & Replace (CTRL + H) *Fill the Find what and Replace with fields *Press Replace All in All Opened Documents *Save all changes in all documents (CTRL + SHIFT + S) *Switch to Visual Studio and reload all the documents (Yes to all at the prompt) DONE
{ "language": "en", "url": "https://stackoverflow.com/questions/174193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "84" }
Q: C#3.0 Automatic properties, why not access the field directly? With the new approach of having the get/set within the attribut of the class like that : public string FirstName { get; set; } Why simply not simply put the attribute FirstName public without accessor? A: Two of the big problems with direct access to variable inside class (field/attribute) are: 1) You can't easily databind against fields. 2) If you expose public fields from your classes you can't later change them to properties (for example: to add validation logic to the setters) A: This mostly comes down to it having become a common coding convention. It makes it easy to then add custom processing code if you desire. But you are correct, there is technically no real need for this. Though, if you do add custom processing later, this will help you to not break the interface. A: This style of notation is more useful when you mix acessibility for the getter and setter. For example, you can write: public int Foo { get; private set; } You could also put an internal setter, or even make the getter private and the setter public. This notation avoids the need to explicitly write a private variable just to handle the classic problem of the internally writable/externally readable value. A: Because, in the future, if you change the implementation, code using the current interface won't break. For instance, you implement a simple class with a public field and start using your class in some external modules. A month later you discover you need to implement lazy loading in that class. You would then need to transform the field to a property. From the external module point of ciew, it might look the same syntaxicaly, but it is not. A property is a set of functions, while a field is an offset in a class instance. By using a property, you effectively reduce the risk the interface will change. 
A: The key is that the compiler translates the property 'under the hood' into a function pair, and when you have code that looks like it's using the property it's actually calling functions when compiled down to IL. So let's say you build this as a field and have code in a separate assembly that uses this field. If later on the implementation changes and you decide to make it a property to hide the changes from the rest of your code, you still need to re-compile and re-deploy the other assembly. If it's a property from the get-go then things will just work. A: I think the questioner is asking why not do the following... public string FirstName { } Why bother with the accessors when you could shorten it to the above. I think the answer is that requiring the accessors makes it obvious to the person reading the code that it is a standard get/set. Without them, as you can see above, it is hard to spot that this is being implemented automatically. A: For 99% of cases, exposing a public field is fine. The common advice against fields is: "If you expose public fields from your classes you can't later change them to properties". I know that we all want our code to be future-proof, but there are some problems with this thinking: * The consumers of your class can probably recompile when you change your interface. * 99% of your data members will never need to become non-trivial properties. It's speculative generality. You're writing a lot of code that will probably never be useful. * If you need binary compatibility across versions, making data members into properties probably isn't enough. At the very least, you should only expose interfaces and hide all constructors, and expose factories (see code below). public class MyClass : IMyClass { public static IMyClass New(...) { return new MyClass(...); } } It's a hard problem, trying to make code that will work in an uncertain future. Really hard. Does anyone have an example of a time when using trivial properties saved their bacon? 
A: When you get a bug and you need to find out which methods are modifying your fields and when, you'll care a lot more. Putting lightweight property accessors in up front saves a phenomenal amount of heartache should a bug arise involving the field you wrapped. I've been in that situation a couple of times and it isn't pleasant, especially when it turns out to be re-entrancy related. Doing this work upfront and sticking a breakpoint on an accessor is a lot easier than the alternatives. A: It preserves the encapsulation of the object and keeps the code simpler to read.
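The recurring claim in the answers above, that an accessor pair lets you add validation later without touching call sites, is language-independent. Here is a quick illustrative sketch using Python's property mechanism (Python rather than C# only so the example can run standalone; the class and validation rule are invented for the illustration):

```python
class PersonV1:
    """Version 1: a plain public attribute (the 'public field' approach)."""
    def __init__(self, first_name):
        self.first_name = first_name


class PersonV2:
    """Version 2: same call-site syntax, but the setter now validates."""
    def __init__(self, first_name):
        self.first_name = first_name   # routed through the setter below

    @property
    def first_name(self):
        return self._first_name

    @first_name.setter
    def first_name(self, value):
        if not value:
            raise ValueError("first_name must be non-empty")
        self._first_name = value


# Code written against V1 keeps working unchanged against V2:
p = PersonV2("Ada")
p.first_name = "Grace"
print(p.first_name)            # Grace
```

Callers never see whether first_name is a raw field or an accessor pair, which is exactly the flexibility the C# accessor syntax buys you up front.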
{ "language": "en", "url": "https://stackoverflow.com/questions/174198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Unlock Windows workstation programmatically I would like to write a small application that unlocks the workstation. To put the specs of what I need very simply: have an exe that runs and, at a defined time (let's say midnight), unlocks the workstation. Of course the application knows the user name and password of the logged-on account. I know of the LogonUser API and have tried using it but failed. Does anyone have a solution or code excerpt that actually works for this issue? I am targeting NT5 OSes. Well, since people started asking what the reason is: I am working on a desktop sharing application and I want to add the feature of unlocking the workstation. Having the very small and simple app to unlock the station at a defined time is in order to separate the problem and to avoid the integration details. A: Just so you have an answer for this instead of people leaving answers which might be better off as comments. This is certainly possible, though as many people have already said it's not really wise: what happens if someone is walking by the computer as it unlocks, how long do they have to play around, etc.? Anyway, apart from suggesting you have a service do any work that you want on the computer, or even connect remotely to the computer to do the work, you might be able to make use of the following information. (As for 'code excerpts', you can make those yourself.) http://www.paralint.com/projects/aucun/ is a GINA implementation which will be able to give you rights to log on even if someone else has already logged on. Look into it; it might be what you're looking for and is a lot safer than unlocking the workstation at a certain time. As an alternative, look into writing your own GINA implementation that will do an unlock at a scheduled time. Information on GINA http://msdn.microsoft.com/en-gb/magazine/cc163803.aspx http://msdn.microsoft.com/en-us/magazine/cc163786.aspx After you've unlocked the desktop I'm pretty sure you're going to want to lock it again. 
http://www.codeproject.com/win32/AntonioWinLock.asp A: Just to add another lead (and not to pass any judgment), AutoIt (a Windows scripting language) may have an answer, as described in this thread: How to unlock an Xp desktop And I just found another scenario where one might want to unlock a desktop ;) A: Let your app run as a service, then you do not need user/password as that is set up with the service. A: For my situation I needed to be able to restore the console session after I've disconnected my terminal session (I'm connecting to a WinXPe kiosk with a touchscreen, but no keyboard). Here's a command line solution I found to work. Instead of closing my session window to disconnect, I run a batch file with the following line. My session is closed and the console session is restored unlocked. * automatically unlock workstation after terminal session on WinXP tscon.exe 0 /dest:console * for Windows Vista/7 the console session number has changed from 0 to 1, so you need to use tscon.exe 1 /dest:console Source link: http://arstechnica.com/civis/viewtopic.php?f=15&t=69113
{ "language": "en", "url": "https://stackoverflow.com/questions/174225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How to open a new email, and assign subject, using .NET Compact Framework Basically I'm trying to accomplish the same thing that "mailto:bgates@microsoft.com" does in Internet Explorer Mobile. But I want to be able to do it from a managed Windows Mobile application. I don't want to send an email programmatically in the background. I want to be able to create the email in Pocket Outlook and then let the user do the rest. Hopefully that helps you hopefully help me! A: I assume you use C#. You add a reference to System.Diagnostics and then write the following code: ProcessStartInfo psi = new ProcessStartInfo("mailto:bla@bla.com?subject=MySubject", ""); Process.Start(psi); This will start the default email client on your mobile device. The mailto protocol definition might come in handy too. A: You can also use Microsoft.WindowsMobile.PocketOutlook.MessagingApplication.DisplayComposeForm like so: OutlookSession sess = new OutlookSession(); EmailAccountCollection accounts = sess.EmailAccounts; //Contains all accounts on the device //I'll just choose the first one -- you might want to ask them MessagingApplication.DisplayComposeForm(accounts[0], "someone@somewhere.com", "The Subject", "The Body"); The DisplayComposeForm method has a lot of overloads with options for attachments and more.
{ "language": "en", "url": "https://stackoverflow.com/questions/174232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Best way to send data to aspx page in ASP.NET 2.0/3.5? Which is the best way to send data to an aspx page, and why? * using query string * using session * using cross page postback * Something else Thanks. What are you trying to achieve? More info, please. For example search form and advanced search form or multi-step user registration. A: It really depends greatly on your uses. * using query string Query string data is a good way of sending data which is not important to keep secure. It is probably the best and easiest way of passing data which users should be able to see and it isn't a problem if they try to change the querystring data. Paging data and sorting information is good here. Search parameters and user-requested info can go here pretty nicely. * using session Session is the best place for user-specific information which will be needed more than once while the user is using the site. It is great for information which doesn't need to be very secure, but needs to be associated with a user for the length of the user's visit to the site. * using cross page postback One danger with using postback is that it sends this information behind the scenes. It is a great way of passing information, but is probably not the best. Cross-page postbacks require that the next page handle the information passed to it. This creates fragile connections between pages as well as the problem that users must resend the posted data if they refresh the page. Does the page still work without the posted data? * Something else Cookies work, but they should never ever ever contain any information which needs to be secure. There are plenty of others which should be used at different times. You can read entire articles on each of these ways of passing data. A: Depends on what kind of data, and what kind of action is taken when the data is received. A query string is the simplest and most standard and most used way of sending data to the server. 
The next question is which method to use: POST or GET. As a general rule of thumb: use POST when the request will cause an action that changes state on the server, and use GET when you only retrieve data from the server.
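To make the query-string option above concrete, here is a small sketch of the round trip: building a search URL and reading the values back on the server side. It is shown in Python so it runs standalone; in ASP.NET the read side would be Request.QueryString["q"], and the page name and parameters here are made up.

```python
from urllib.parse import urlencode, parse_qs, urlsplit

# User-visible, bookmarkable state (search terms, paging, sorting)
# belongs in the query string of a GET request.
params = {"q": "advanced search", "page": "2", "sort": "date"}
url = "http://example.com/Search.aspx?" + urlencode(params)

# The receiving page parses the values back out of the URL.
query = parse_qs(urlsplit(url).query)
print(query["q"][0], query["page"][0])   # advanced search 2
```

Because the whole state lives in the URL, the user can refresh or bookmark the page without the resend-the-form problem that cross-page postbacks have.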
{ "language": "en", "url": "https://stackoverflow.com/questions/174238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# Default scope resolution I have inherited a c# class 'Button' (which I can't change) which clashes with the BCL class 'Windows.Forms.Button'. Normally, I'd be very happy to go: MyPackage.MyClass.Button; But there are a large number of references to this class which is a pain to have to re-type. Is there any way to get the compiler (linker?) to default to using the customised version of Button over the BCL version? A: if you want to use it by default, replace using Windows.Forms; with using MyPackage.MyClass; If you do that, you'll need to fully qualify all the buttons from Windows.Forms. Or, if you want to, you can alias the namespace using My = MyPackage.MyClass; //... then My.Button b = ... Or alias the button using MyButton = MyPackage.MyClass.Button; A: Add this to the top of the file: using MyButton = MyPackage.MyClass.Button; Now you can reference your custom button using a distinct name. You may need to do something similar for the stock button if you use that anywhere in the same file. A: You could remove using Windows.Forms; from the top of the code. That would of course mean that you would have to reference all Windows.Forms items specifically. A: You can at least make it a small bit less painful/wordy with "using": using MPMC = MyPackage.MyClass; then you can say: MPMC.Button A: It appears that I can do the following: using Button = MyPackage.MyClass.Button; This works and preserves all references within the code to Button. Although I'm tempted not to go down this route as it is still ambiguous (at least to the reader) which Button is being used.
{ "language": "en", "url": "https://stackoverflow.com/questions/174239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Writing a language for the Windows Scripting Host (WSH) Has anyone had any experience targeting WSH in the way that VBScript, JScript, PerlScript, xbScript and ForthScript (among others) do, such that the language can be used from the command line and embedded in server-side web pages? Where do I go to get that kind of information? A: These are called Windows Script Engines and are implemented by exposing the engine via COM. There is a lot of documentation on MSDN, and the actual interfaces are fairly straightforward.
{ "language": "en", "url": "https://stackoverflow.com/questions/174240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I create a Firebug-like bottom window Firefox extension Several extensions offer a "bottom window" for viewing their content. Firebug and ScribeFire are good examples where the main content appears at the bottom of the browser. This appears to be very similar to the sidebar functionality in the browser. Is there a best practice/method to create a bottom window in an extension, since there is no "sidebar for the bottom" of the browser? A: You would create your extensions UI using an Overlay. In the overlay you specify the insertion point of your UI with regards to the main browser page, browser.xul. Excerpted from Firefox's main pages browser.xul we have <vbox id="appcontent" flex="1"> <tabbrowser id="content" disablehistory="true" flex="1" contenttooltip="aHTMLTooltip" contentcontextmenu="contentAreaContextMenu" onnewtab="BrowserOpenTab();" autocompletepopup="PopupAutoComplete" ondragdrop="nsDragAndDrop.drop(event, contentAreaDNDObserver);" onclick="return contentAreaClick(event, false);" /> </vbox> and excerpted from a previous version of Firebug file browserOverlay.xul we have <vbox id="appcontent"> <splitter id="fbContentSplitter" collapsed="true"/> <vbox id="fbContentBox" collapsed="true" persist="height"> <toolbox id="fbToolbox"> <toolbar id="fbToolbar"> <toolbarbutton id="fbFirebugMenu" type="menu"> <menupopup onpopupshowing="return FirebugChrome.onOptionsShowing(this);"> <menuitem label="&firebug.DisableFirebug;" type="checkbox" oncommand="FirebugChrome.onToggleOption(this)" option="disabledAlways"/> <menuitem type="checkbox" oncommand="FirebugChrome.onToggleOption(this)" option="disabledForSite"/> <menuitem label="&firebug.AllowedSites;" command="cmd_openFirebugPermissions"/> <menuseparator/> <menu label="&firebug.TextSize;"> <menupopup> <menuitem label="&firebug.IncreaseTextSize;" oncommand="Firebug.increaseTextSize(1)"/> <menuitem label="&firebug.DecreaseTextSize;" oncommand="Firebug.increaseTextSize(-1)"/> <menuitem 
label="&firebug.NormalTextSize;" oncommand="Firebug.setTextSize(0)"/> </menupopup> </menu> <menu label="&firebug.Options;"> <menupopup onpopupshowing="return FirebugChrome.onOptionsShowing(this);"> <menuitem type="checkbox" label="&firebug.AlwaysOpenInWindow;" oncommand="FirebugChrome.onToggleOption(this)" option="openInWindow"/> <menuitem type="checkbox" label="&firebug.ShowTooltips;" oncommand="FirebugChrome.onToggleOption(this)" option="showInfoTips"/> <menuitem type="checkbox" label="&firebug.ShadeBoxModel;" oncommand="FirebugChrome.onToggleOption(this)" option="shadeBoxModel"/> </menupopup> </menu> <menuseparator/> <menuitem label="&firebug.Website;" oncommand="Firebug.visitWebsite('main')"/> <menuitem label="&firebug.Documentation;" oncommand="Firebug.visitWebsite('docs')"/> <menuitem label="&firebug.Forums;" oncommand="Firebug.visitWebsite('discuss')"/> <menuseparator/> <menuitem label="&firebug.Donate;" oncommand="Firebug.visitWebsite('donate')"/> </menupopup> </toolbarbutton> <toolbarbutton id="fbDetachButton" class="toolbarbutton-iconic" tooltiptext="&firebug.DetachFirebug;" command="cmd_detachFirebug"/> <toolbarbutton id="fbCloseButton" class="toolbarbutton-iconic" tooltiptext="&firebug.CloseFirebug;" command="cmd_toggleFirebug"/> </toolbar> </toolbox> <hbox id="fbPanelBox" flex="1"/> <hbox id="fbCommandBox"/> </vbox> </vbox> Notice that both blocks of XUL markup start with <vbox id="appcontent".../> This is what the Gecko engine uses to determine how an overlay fits together with the page being overlayed. If you look at browserOverlay.xul you'll also see other insertion points for commandset, statusbar, etc. For more information refer to the Mozilla Developer Center.
{ "language": "en", "url": "https://stackoverflow.com/questions/174243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Strategy for tracking user recent activity Our customer would like to know who is online and currently using the custom application we wrote for them. I discussed it with them and this does not need to be exact, more of a guestimate will work. So my thought is a 15 minute time interval to determine user activity. Some ideas I have for doing this are as follows: * *Stamp their user record with a date and time of their last activity every time they do something that hits the database, or requests a web page ... this though could be quite database intensive. *Send out a "who is online request" from our software, looking for responses, this could be done at a scheduled interval, and then stamp the user record with the current date and time for each response I received. What are your thoughts? And how would you handle this situation? Clarification I would like to use the same architecture for both Windows or the Web if possible. I have a single business logic layer that multiple user interfaces interact with, could be Windows or the Web. By Windows I would mean client-server. Clarification I am using an n-tier architecture so my business objects handle all the interaction with the presentation layer. That presentation layer could be feeding a client-server Windows application, Web application, Web Service and so on. It is not a high traffic application, as it was developed for a customer of ours, maybe 100 users at most. A: Our solution is to maintain a "Transaction" table (which follows what was done), in addition to our "Session" table (which follows who was here). UPDATE, INSERT and DELETE instructions are all managed through a "Transaction" object and each of these SQL instruction is stored in the "Transaction" table once it has been successfully executed on the database (depending on tables updated: we have the possibility to specifically follow some tables and ignore others). 
This "Transaction" table has other fields such as transactionType (I for INSERT, D for DELETE, U for UPDATE), transactionDateTime, etc, and a foreign key "sessionId", telling us finally who sent the instruction. It is even possible, through some code, to identify who did what and when (Gus created the record on Monday, Tim changed the Unit Price on Tuesday, Liz added an extra discount on Thursday, etc). Pros for this solution are: * you're able to tell "who, what and when", and to show it to your users! (you'll need some code to analyse SQL statements) * if your data is replicated, and replication fails, you can rebuild your database through this table Cons are * 100 000 data updates per month mean 100 000 records in Tbl_Transaction * Finally, this table tends to be 99% of your database volume Our choice: all records older than 90 days are automatically deleted every morning A: I've seen strategy 1 work before. Of course the site was a small one. A: I wonder how a site like stackoverflow does it? They must target a specific event, as I just tooled around the site, took a look at my profile, and it still says something like last seen 8 minutes ago. A: I'd just drop a log record table in the db. UserId int FK Action char(3) ('in' or 'out') Time DateTime You can drop a new record in the table when somebody logs in or out, or alternatively update the last record for the user. A: I have worked with many systems that have utilized the first method you listed; with a little careful planning it can be done in a manner that really doesn't have much of an effect. It all depends on exactly when/how/what you are trying to track. If you need to track multiple sessions, people typically use a session system tied to a user account, where after a specific elapsed time the session is considered dead. If you are truly looking for currently online, your first option is the best. A: If you have session data just use that. 
Most session systems already have timestamps so they can expire sessions not used for x minutes. A: You can increment a global variable every time a user session is created, and decrement it when it is destroyed. This way you will always know how many users are online at any given moment. If you want to monitor it over time, on the other hand, I think logging session start and end to the database is the best option, and you calculate user activity after the fact with a simple query. A: [DISCLAIMER 1 --- Java solution] If each meaningful user is given a Session, then you could write your own SessionListener implementation to track each session that has been created and destroyed. [DISCLAIMER 2 --- Code not tested or compiled] public class ActiveSessionsListener implements HttpSessionListener { public void sessionCreated(HttpSessionEvent e) { ServletContext ctx = e.getSession().getServletContext(); synchronized (ctx) { Integer count = (Integer) ctx.getAttribute("SESSION_COUNT"); if (count == null) { count = new Integer(0); } ctx.setAttribute("SESSION_COUNT", new Integer(count.intValue() + 1)); } } public void sessionDestroyed(HttpSessionEvent e) { ... similar for decrement ... } } And register this in your web.xml: <listener-class>com.acme.ActiveSessionsListener</listener-class> Hope this helps. A: The only problem with a web application solution is you often don't know when someone signs out. Obviously, if you have a login / authentication requirement, you can capture when a person signs on, and as part of your data access code, you can log when a person hits the database. But you will have to accept that there will be no reliable way of capturing when a person logs off - many will just move away from the site without taking the "log off" action. A: I would imagine that using a trigger would be a reasonable option that would preclude you from having to mess with any logic differences between the web and the non-web environment (or any other environment for that matter). 
However, this only captures changes to the environment and doesn't do anything when select statements are made. This, however, can be overcome if all your commands from your apps are run through stored procedures. A: With a web app, the concept of "online" is a little nebulous. The best you can really do is "made a request in the last X minutes" or maybe "authenticated in the last X minutes". Choose a set of events (made request, performed update, authenticated, ...), and log them to a DB table. Log them to a table in a separate DB A: I've just implemented a last seen system for my website. Your first option is similar, but I only update every +-5 minutes. It works for my situation, but larger scale websites might require something a little extra. <?php function updateLastSeen($user_ref, $session_id, $db) { /*Parameters: The user's primary key, the user's session id, the connection to the database*/ $timestamp = date('Y-m-d H:i:s'); if ($session_id !== '') { /*logged in*/ $sql_check = "SELECT user_id FROM user_last_seen WHERE user_id = ?"; $stmt_check = $db->prepare($sql_check); $stmt_check->bind_param('s', $user_ref); $result_check = $stmt_check->execute(); $stmt_result_check = $stmt_check->get_result(); if ($stmt_result_check->num_rows > 0) { /*If the user's last seen was previously recorded, update his record*/ $sql = "UPDATE user_last_seen SET last_seen = ? 
WHERE user_id = ?"; } else { /*Otherwise, insert a record for him*/ $sql = "INSERT INTO user_last_seen (last_seen, user_id) VALUES (?,?)"; } $stmt = $db->prepare($sql); $stmt->bind_param('ss', $timestamp, $user_ref); $result = $stmt->execute(); } } if( !isset($_SESSION['lastSeen']) ){ /*User logs into the website or lands on the current page, create a lastSeen variable*/ $_SESSION['lastSeen'] = time(); updateLastSeen($user_ref, $session_id, $db); } else { $last_seen_time_difference = (time() - $_SESSION['lastSeen']) / 60; if ($last_seen_time_difference > 5) { //if the difference between now and the lastSeen is 5 minutes or more, record his last seen. updateLastSeen($user_ref, $session_id, $db); $_SESSION['lastSeen'] = time(); /*after updating the database, reset the lastSeen time to now.*/ }/* else { //do nothing. Don't update database if lastSeen is less than 5 minutes ago. This prevents unnecessary database hits. }*/ }
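The pattern shared by the 15-minute interval in the question and the 5-minute throttle in the PHP answer above, namely "stamp activity, but rate-limit the writes", can be stated in a few lines. Here is a language-neutral sketch in Python, where two dictionaries stand in for the users table and the per-session cached timestamp:

```python
import time

THROTTLE_SECONDS = 5 * 60      # write through at most every 5 minutes
ONLINE_WINDOW = 15 * 60        # "online" = active in the last 15 minutes

last_seen = {}                 # stand-in for the last_seen column in the DB
_last_write = {}               # stand-in for the cached session timestamp

def touch(user_id, now=None):
    """Record activity; returns True only when the 'database' was hit."""
    now = time.time() if now is None else now
    if now - _last_write.get(user_id, float("-inf")) >= THROTTLE_SECONDS:
        last_seen[user_id] = now       # the expensive UPDATE
        _last_write[user_id] = now
        return True
    return False

def online_users(now=None):
    """Users whose recorded activity falls inside the online window."""
    now = time.time() if now is None else now
    return {u for u, t in last_seen.items() if now - t <= ONLINE_WINDOW}
```

Calling touch() on every request keeps the "who is online" guesstimate fresh while only hitting storage once per throttle interval per user.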
{ "language": "en", "url": "https://stackoverflow.com/questions/174248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Load and save bitmaps using dotnet This may be a simple one, but 5 mins of Googling didn't give me the answer. How do you save and load bitmaps using .Net libraries? I have an Image object and I need to save it to disk in some format (preferably png) and load it back in later. A C# example would be great. A: Here's a really simple example. Top of code file using System.Drawing; In code Image test = new Bitmap("picture.bmp"); test.Save("picture.png", System.Drawing.Imaging.ImageFormat.Png); Remember to give write permissions to the ASPNET user for the folder where the image is to be saved. A: Hiya, use the Image.Save() method. A better explanation and code sample than I could provide can be found here: MSDN A: About 10 seconds of Google led me to this example for the save method; you can dig around a bit more for the others.
{ "language": "en", "url": "https://stackoverflow.com/questions/174263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to prevent VS 2008 from opening files when opening a solution When I open a solution in VS 2008, I don't want it to open all the files that I had open last time. I just want it to open the solution. Can't see a config option for this, is it possible? A: Delete the .suo file... See Is there anyway to tell Visual Studio not to open all the documents when I load solution? A: You can automate the process of closing all the files prior to closing a solution by adding a handler for the BeforeClosing event of EnvDTE.SolutionEvents -- this will get invoked when VS is exiting. In VS2005, adding the following to the EnvironmentEvents macro module will close all open documents: Private Sub SolutionEvents_BeforeClosing() Handles SolutionEvents.BeforeClosing DTE.ExecuteCommand("Window.CloseAllDocuments") End Sub Visual Studio 2008 appears to support the same events so I'm sure this would work there too. I'm sure you could also delete the .suo file for your project in the handler if you wanted, but you'd probably want the AfterClosing event.
{ "language": "en", "url": "https://stackoverflow.com/questions/174285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Techniques for dynamic (algorithmic) graphics I'm programming an application for a 32 bit processor with limited memory (512k flash, 32k RAM). The display on this device is 128x160 with 16 bit color, which would normally consume 40k of RAM if I were to buffer it on my processor. I don't have that much RAM, so I'm looking for techniques, tips, tricks, ideas for generating screen data on the fly. Things that might help: * Perhaps you know of a resource for this sort of limitation * Maybe you've generated attractive graphics on the fly * Is there a generic algorithm I might use to combine elements in program memory (including alpha blending) on the fly while I scan the display * Simple vector rendering techniques (or free (bsd/mit/apache) source) * ??? I do have a multiplier, but no floating point processor. The display itself has a very simple controller and memory for the display - but reads and writes are expensive so I don't want to use that as my workspace if I can avoid it. -Adam A: In a way, you are in pretty much the same situation game developers were in at the time of the Tandys, Spectrums and early PCs. So, here's my recommendation: You should read Michael Abrash's writings on computer graphics. They were written in a time when a floating point co-processor was an optional piece of hardware, and they describe a lot of the basic techniques (Bresenham lines, etc.) used in the old (supposedly 'bad') software-rendered days. You can read most of his "Black Book" here. Additionally, you can probably find a lot of the old BBS files that most people used back in the day to learn graphics programming here. Just search for Graphics, Lines, and what not. Hope that helps! Update: I also recall using this in my first attempts at drawing things on the screen. Can't tell how much time I spent trying to understand the maths behind it (well, to be fair I was like 15 at the time). 
Very good (and simple) introduction to 3D, and a very nice primer on transformations, polygon-fillers and interpolation. A: What kind of data will you show on the screen? If it is not photographic images, you could consider using a palette. For instance: A 256 color palette using 8 bits per pixel would require 20kb, (plus 256 x 2bytes for the lookup table) which at least is better than 40kb. A: I believe the basic technique for dealing with this kind of situation is to divide the screen into narrow horizontal stripes, and only buffer two such stripes in RAM. One stripe will be displayed, while you render the next one down. When the scanning 'beam' hits the next stripe (and fires off an interrupt for you to catch), you swap the two and start drawing the next stripe down. A nasty side-effect of this is that you have hard timing limits on how long you can spend on rendering each stripe. So I guess it would be tempting to stick with something boring but with predictable performance, like sprites. Slightly offtopic but this is how the Nintendo DS 3D hardware works. You can see it if you try to render too many polygons around the same y-coordinate - polys will randomly flicker and drop out as the screen-refresh overtakes the rendering hardware. Also, I'd second the other poster's suggestion that you use palettised rendering. It's very hard to do fast maths on 16bit pixels, but faster in 8bit if you're clever about how you lay out your palette. A: Some ideas that would combine nice graphics and low memory: * Store backgrounds and sprites in flash. * Store dynamically generated graphics in RAM using a palette to half the bytes. * Use the windowing feature of the LCD driver to only update the part of the screen you need to.
{ "language": "en", "url": "https://stackoverflow.com/questions/174289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the best way to delete a value from an array in Perl? The array has lots of data and I need to delete two elements. Below is the code snippet I am using: my @array = (1,2,3,4,5,5,6,5,4,9); my $element_omitted = 5; @array = grep { $_ != $element_omitted } @array; A: Use splice if you already know the index of the element you want to delete. Grep works if you are searching. If you need to do a lot of these, you will get much better performance if you keep your array in sorted order, since you can then do binary search to find the necessary index. If it makes sense in your context, you may want to consider using a "magic value" for deleted records, rather than deleting them, to save on data movement -- set deleted elements to undef, for example. Naturally, this has its own issues (if you need to know the number of "live" elements, you need to keep track of it separately, etc), but may be worth the trouble depending on your application. Edit Actually now that I take a second look -- don't use the grep code above. It would be more efficient to find the index of the element you want to delete, then use splice to delete it (the code you have accumulates all the non-matching results...) my $index = 0; $index++ until $arr[$index] eq 'foo'; splice(@arr, $index, 1); That will delete the first occurrence. Deleting all occurrences is very similar, except you will want to get all indexes in one pass: my @del_indexes = grep { $arr[$_] eq 'foo' } 0..$#arr; The rest is left as an exercise for the reader -- remember that the array changes as you splice it! Edit2 John Siracusa correctly pointed out I had a bug in my example.. fixed, sorry about that. A: Is this something you are going to be doing a lot? If so, you may want to consider a different data structure. Grep is going to search the entire array every time and for a large array could be quite costly. If speed is an issue then you may want to consider using a Hash instead. 
In your example, the key would be the number and the value would be the count of elements of that number. A: if you change my @del_indexes = grep { $arr[$_] eq 'foo' } 0..$#arr; to my @del_indexes = reverse(grep { $arr[$_] eq 'foo' } 0..$#arr); This avoids the array renumbering issue by removing elements from the back of the array first. Putting a splice() in a foreach loop cleans up @arr. Relatively simple and readable... foreach $item (@del_indexes) { splice (@arr,$item,1); } A: You could use array slicing instead of splicing. Grep to return the indices you want to keep and use slicing: my @arr = ...; # run through each item. my @indicesToKeep = grep { $arr[$_] ne 'foo' } 0..$#arr; @arr = @arr[@indicesToKeep]; A: The best I found was a combination of "undef" and "grep": foreach $index ( @list_of_indexes_to_be_skiped ) { undef($array[$index]); } @array = grep { defined($_) } @array; That does the trick! Federico A: I think your solution is the simplest and most maintainable. The rest of the post documents the difficulty of turning tests on elements into splice offsets. Thus, making it a more complete answer. Look at the gyrations you have to go through to have an efficient (i.e. one-pass) algorithm to turn tests on list items into indexes. And it's not that intuitive at all. sub array_remove ( \@& ) { my ( $arr_ref, $test_block ) = @_; my $sp_start = 0; my $sp_len = 0; for ( my $inx = 0; $inx <= $#$arr_ref; $inx++ ) { local $_ = $arr_ref->[$inx]; next unless $test_block->( $_ ); if ( $sp_len > 0 && $inx > $sp_start + $sp_len ) { splice( @$arr_ref, $sp_start, $sp_len ); $inx = $inx - $sp_len; $sp_len = 0; } $sp_start = $inx if ++$sp_len == 1; } splice( @$arr_ref, $sp_start, $sp_len ) if $sp_len > 0; return; } A: Delete all occurrences of 'something' in an array. 
Based on SquareCog's answer: my @arr = ('1','2','3','4','3','2', '3','4','3'); my @dix = grep { $arr[$_] eq '4' } 0..$#arr; my $o = 0; for (@dix) { splice(@arr, $_-$o, 1); $o++; } print join("\n", @arr); Each time we remove an index from @arr, the next correct index to delete will be $_-current_loop_step. A: I use: delete $array[$index]; Perldoc delete. A: You can use the non-capturing group and a pipe-delimited list of items to remove. perl -le '@ar=(1 .. 20);@x=(8,10,3,17);$x=join("|",@x);@ar=grep{!/^(?:$x)$/o} @ar;print "@ar"' A: splice will remove array element(s) by index. Use grep, as in your example, to search and remove. A: You can simply do this: my $input_Color = 'Green'; my @array = qw(Red Blue Green Yellow Black); @array = grep {!/$input_Color/} @array; print "@array"; A: Just to be sure I have benchmarked grep and map solutions, first searching for indexes of matched elements (those to remove) and then directly removing the elements by grep without searching for the indexes. It appears that the first solution proposed by Sam when asking his question was already the fastest. use Benchmark; my @A=qw(A B C A D E A F G H A I J K L A M N); my @M1; my @G; my @M2; my @Ashrunk; timethese( 1000000, { 'map1' => sub { my $i=0; @M1 = map { $i++; $_ eq 'A' ? $i-1 : ();} @A; }, 'map2' => sub { my $i=0; @M2 = map { $A[$_] eq 'A' ? $_ : () ;} 0..$#A; }, 'grep' => sub { @G = grep { $A[$_] eq 'A' } 0..$#A; }, 'grem' => sub { @Ashrunk = grep { $_ ne 'A' } @A; }, }); The result is: Benchmark: timing 1000000 iterations of grem, grep, map1, map2... 
grem: 4 wallclock secs ( 3.37 usr + 0.00 sys = 3.37 CPU) @ 296823.98/s (n=1000000) grep: 3 wallclock secs ( 2.95 usr + 0.00 sys = 2.95 CPU) @ 339213.03/s (n=1000000) map1: 4 wallclock secs ( 4.01 usr + 0.00 sys = 4.01 CPU) @ 249438.76/s (n=1000000) map2: 2 wallclock secs ( 3.67 usr + 0.00 sys = 3.67 CPU) @ 272702.48/s (n=1000000) M1 = 0 3 6 10 15 M2 = 0 3 6 10 15 G = 0 3 6 10 15 Ashrunk = B C D E F G H I J K L M N As shown by elapsed times, it's useless to try to implement a remove function using either grep or map defined indexes. Just grep-remove directly. Before testing I was thinking "map1" would be the most efficient... I should more often rely on Benchmark I guess. ;-) A: If you know the array index, you can delete() it. The difference between splice() and delete() is that delete() does not renumber the remaining elements of the array. A: A similar code I once wrote to remove strings not starting with SB.1 from an array of strings my @adoSymbols=('SB.1000','RT.10000','PC.10000'); ##Remove items from an array from backward for(my $i=$#adoSymbols;$i>=0;$i--) { unless ($adoSymbols[$i] =~ m/^SB\.1/) {splice(@adoSymbols,$i,1);} } A: This works well too: my @array = (1,2,3,4,5,5,6,5,4,9); my $element_omitted = 5; for( my $i = 0; $i < scalar( @array ); $i++ ) { splice( @array, $i ), $i-- if( $array[$i] == $element_omitted ); } say "@array"; # 1 2 3 4 6 4 9
{ "language": "en", "url": "https://stackoverflow.com/questions/174292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Oracle: ORA-00932 when converting column_expression from user_ind_expressions using to_lob Try running these two simple statements on Oracle 10.2: CREATE TABLE mytest (table_name varchar2(30), index_name varchar2(30), column_expression clob, column_position number); INSERT INTO mytest (table_name, index_name, column_expression, column_position) SELECT table_name, index_name, to_lob(column_expression), column_position FROM user_ind_expressions EXPRA WHERE NOT EXISTS (SELECT 1 FROM user_constraints WHERE constraint_name = EXPRA.index_name AND table_name = EXPRA.table_name); This results in this error: ERROR at line 1: ORA-00932: inconsistent datatypes: expected - got LONG If I omit the WHERE NOT EXISTS like this: INSERT INTO mytest (table_name,index_name,column_expression, column_position) SELECT table_name,index_name, to_lob(column_expression), column_position FROM user_ind_expressions EXPRA; It works: 23 rows created. What is going on? A: If Michel Cadot says it's a bug, then it's almost certainly a bug. A: Yep, seems like it. http://www.orafaq.com/forum/m/352199/130782/#msg_352199
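Since the unfiltered INSERT ... SELECT works, one workaround sketch is to load everything first and then delete the rows the NOT EXISTS clause was meant to exclude. This is untested against this exact bug, so treat it as a sketch rather than a confirmed fix:

```sql
-- Sketch of a workaround: do the unfiltered insert (which works),
-- then remove the unwanted rows in a second statement.
INSERT INTO mytest (table_name, index_name, column_expression, column_position)
SELECT table_name, index_name, to_lob(column_expression), column_position
FROM user_ind_expressions;

DELETE FROM mytest m
WHERE EXISTS (SELECT 1
              FROM user_constraints c
              WHERE c.constraint_name = m.index_name
                AND c.table_name = m.table_name);
```

The end state is the same as the filtered insert, but the to_lob() conversion runs in the statement shape Oracle accepts.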
{ "language": "en", "url": "https://stackoverflow.com/questions/174297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using both Eclipse and NetBeans on the same project Eclipse is a really great editor, which I prefer to use, but the GUI design tools for Eclipse are lacking. On the other hand, NetBeans works really well for GUI design. Are there any tips, tricks or pitfalls for using NetBeans for GUI design and Eclipse for everything else on the same project? EDIT: I tried Maven, and it does not seem to work (too complex for my needs). A: Create your GUI with NetBeans. Copy an Eclipse .project file (like below) into the project folder and change MyProjectName. Open Eclipse and import the project into your workspace, so you can open the project from your Eclipse workspace with NetBeans. Now you are able to use NetBeans to create and change the GUI, and Eclipse to edit the code. <?xml version="1.0" encoding="UTF-8"?> <projectDescription> <name>MyProject</name> <comment></comment> <projects> </projects> <buildSpec> <buildCommand> <name>org.eclipse.jdt.core.javabuilder</name> <arguments> </arguments> </buildCommand> </buildSpec> <natures> <nature>org.eclipse.jdt.core.javanature</nature> </natures> </projectDescription> A: MyEclipse offers an integration of the Netbeans GUI editor (Matisse) with Eclipse. See http://www.myeclipseide.com/module-htmlpages-display-pid-5.html A: Echoing @Tom I'd use an external build tool (Maven 2 would be my pick). I've done this on projects before and as long as you don't walk all over Eclipse's .Xxxx files and folders you'll be fine. Then you get the full power of Netbeans (which integrates with Maven 2 really nicely) or Eclipse and also have the added value of an external build which can also be run by your CI tool. Everybody wins! A: Cloud Garden makes a GUI editor called Jigloo that is quite nice if you are into that sort of thing (and the price is very, very reasonable). If that's all that's missing for you from Eclipse, I'd recommend that you take a look. Netbeans does a ton of stuff with source code that you aren't allowed to edit, etc... 
One other thing that I will mention: I have used GUI editors like Matisse and Jigloo for super rapid prototyping. However, within 3 or 4 iterations, I always find myself dropping back to hand coding the layouts. I also find that when I'm doing rapid prototyping, I am almost always more productive when I change the layout manager to absolute and just place components. Once the design starts to gel, implementing the design by hand coding using a good layout manager (I strongly recommend MiG Layout) is pretty easy, and gives much better results. I know that dragging and dropping a GUI layout is really enticing - but MiG Layout is incredibly productive for hand wiring GUIs, and I suspect that almost any developer will be more productive within a week going down that path. A: Import the project in NetBeans, create the GUI, and then open the project again in Eclipse; it works with no errors. A: Define your project dependencies with Maven, and use it to generate project configuration files for both Netbeans and Eclipse. Try to keep separate classes directories for Eclipse and Netbeans, since Eclipse doesn't like it when external tools touch its classes. A: A few gotchas: * If you try to use both without any plugins/integration, you must be careful not to edit the regions marked "DO NOT EDIT" as Netbeans will overwrite code in those sections quite frequently. * You should use the "Customize..." command to add custom init code for components. * Adding/creating new components on a form using Java code will not be reflected in the GUI editor. * Developers have to be discouraged from going into the code and adding swing customizations, effectively bypassing the GUI editor. Another tip is that you can create Java Beans using Eclipse and drag-and-drop them into the Matisse editor. This allows you to create a custom GUI component or a non-GUI component (models, listeners, etc) and add it to a Matisse form. 
With listeners and models, you can specify a component to use an instance of your custom listener/model instead of the default behavior. You can also drag-and-drop in custom GUI components and manipulate them like any other GUI widget. A: For me using linked source folders works quite well. I build the GUIs in independent NetBeans projects - if they need some simple classes or interfaces, I use the "link source" (right click on project in NetBeans, choose properties), to include these in the NetBeans project. My main projects are in Eclipse. Here I again use the link source feature to link to the NetBeans project (right click on project in Eclipse, select "build path", then "link source"). EDIT (Thx to Milhous :) ): all required JAR files also need to be added to the build path in both the Eclipse and NetBeans projects (including the libraries added by NetBeans: e.g. beansbinding-1.2.1.jar, appframework-1.0.3.jar, swing-worker-1.1.jar, ...) Now the GUI classes can be reused in Eclipse. This also leads to keeping GUI and logic classes quite decoupled, which is no bad thing.
{ "language": "en", "url": "https://stackoverflow.com/questions/174308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: UITableView didSelectRow while editing? I'm building an interface much like the built-in Weather application's flipside view, or the Alarms view of the Clock application in editing mode. The table view is always in editing mode, so the delete icon appears on the left side of each cell. When the table view is in editing mode, my delegate doesn't receive didSelectRowAtIndexPath notifications. It receives accessoryButtonTappedForRowWithIndexPath notifications, but that's not what I want to do. I want my rows to stay selectable, even when the table view is in editing mode. Any ideas on how I can accomplish this? Thanks, P.S. Hooray for the lifted NDA. =) A: You can also set this in the attributes inspector. Make sure you select the table view, not the view controller, and select "Single Selection During Editing" in the Editing dropdown: A: Set table.allowsSelectionDuringEditing to YES.
{ "language": "en", "url": "https://stackoverflow.com/questions/174309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: asp:TextBox ReadOnly=true or Enabled=false? What's the difference between the Enabled and the ReadOnly-properties of an asp:TextBox control? A: If a control is disabled it cannot be edited and its content is excluded when the form is submitted. If a control is readonly it cannot be edited, but its content (if any) is still included with the submission. A: Readonly will not "gray out" the textbox and will still submit the value on a postback. A: Think about it from the browser's point of view. For readonly the browser will send in a variable/value pair. For disabled, it won't. Run this, then look at the URL after you hit submit: <html> <form action=foo.html method=get> <input name=dis type=text disabled value="dis"> <input name=read type=text readonly value="read"> <input name=normal type=text value="normal"> <input type=submit> </form> </html> A: Readonly will allow the user to copy text from it. Disabled will not. A: Readonly textbox in Asp.net <asp:TextBox ID="t" runat="server" Style="margin-left: 20px; margin-top: 24px;" Width="335px" Height="41px" ReadOnly="true"></asp:TextBox> A: Another behaviour is that readonly = 'true' controls will fire events like click, but Enabled = False controls will not. A: I have a child aspx form that does an address lookup server side. The values from the child aspx page are then passed back to the parent textboxes via javascript client side. Although you can see the textboxes have been changed, neither ReadOnly nor Enabled would allow the values to be posted back in the parent form.
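The submission rule can also be sketched outside a browser. This is a toy model (the names and structure are mine, not real browser code) mirroring the HTML demo above: disabled inputs never become part of the query string, readonly ones do:

```python
from urllib.parse import urlencode

# The three inputs from the HTML demo above.
fields = [
    {"name": "dis",    "value": "dis",    "disabled": True,  "readonly": False},
    {"name": "read",   "value": "read",   "disabled": False, "readonly": True},
    {"name": "normal", "value": "normal", "disabled": False, "readonly": False},
]

def submit(fields):
    # Only non-disabled controls are "successful" and get submitted;
    # readonly has no effect on this step, so it is never consulted here.
    return urlencode([(f["name"], f["value"]) for f in fields if not f["disabled"]])

print(submit(fields))  # read=read&normal=normal
```

The disabled field vanishes from the submission entirely, which is exactly why a disabled asp:TextBox comes back empty on postback.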
{ "language": "en", "url": "https://stackoverflow.com/questions/174319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: How do I retrieve the size of a directory from Perforce? I would like to know how much disk space a directory is going to consume before I bring it over from the Perforce server. I don't see any way to do this other than getting the files and looking at the size of the directory in a file manager. This, of course, defeats the purpose. Is there a way to get file size info from Perforce without actually getting the files? A: I don't know how I missed this command, but here's how you do it: p4 sizes -s //depot/directory/... A: p4 fstat
{ "language": "en", "url": "https://stackoverflow.com/questions/174322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Comparative advantages of xdoc and apt formats What are the relative merits of the xdoc and apt formats for writing websites of Maven projects? Is one more expressive than the other? Has one got better tool support than the other? A: The XDOC format is definitely a richer mechanism for creating documents and is required if you want to produce documents with TOC/TOF, headers, footers or footnotes (and other document attributes), since the APT format doesn't support these. That being said, I tend to use the APT format for almost all internal documents as I enjoy writing in the APT format. When compared to writing XDOC (with all its XML loveliness), APT is a breeze. By the same token, when I'm writing a plugin that generates content, I tend to use the XDOC format, since it's pretty easy to write software that creates the required XML. A: APT format is easy to use. However, it does not support custom styling (for example styling a character or a heading). The closest to preformatted text that you can use with APT is the "verbatim text" Like in HTML, verbatim text is preformatted. Unlike HTML, verbatim text is escaped: inside a verbatim display, markup is not interpreted by the APT processor.
{ "language": "en", "url": "https://stackoverflow.com/questions/174325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Will web browsers cache content over https Will content requested over https still be cached by web browsers or do they consider this insecure behaviour? If this is the case is there any way to tell them it's ok to cache? A: As of 2010, all modern, current-ish browsers cache HTTPS content by default, unless explicitly told not to. It is not required to set cache-control:public for this to happen. Source: Chrome, IE, Firefox. A: By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received. This link is a good introduction to setting cache settings in HTTP headers. is there any way to tell them it's ok to cache? This can be achieved by setting the max-age value in the Cache-Control header to a non-zero value, e.g. Cache-Control: max-age=3600 will tell the browser that this page can be cached for 3600 seconds (1 hour) A: HTTPS is cached by default. This is managed by a global setting that cannot be overridden by application-defined cache directives. To override the global setting, select the Internet Options applet in the control panel, and go to the advanced tab. Check the "Do not save encrypted pages to disk" box under the "Security" section, but the use of HTTPS alone has no impact on whether or not IE decides to cache a resource. WinINet only caches HTTP and FTP responses, not HTTPS responses. https://msdn.microsoft.com/en-us/library/windows/desktop/aa383928%28v=vs.85%29.aspx
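To see the Cache-Control header in action you can stand up a throwaway server. Plain HTTP is used here purely for the demo (Python's stdlib server has no TLS wiring out of the box); the header handling on the server side is identical over HTTPS. A sketch, not production code:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CachedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"cache me"
        self.send_response(200)
        # Explicitly permit caching for one hour; the same header works
        # whether the resource is served over HTTP or HTTPS.
        self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), CachedHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port) as resp:
    cache_header = resp.headers["Cache-Control"]

server.shutdown()
print(cache_header)  # public, max-age=3600
```

A browser receiving that response may reuse it for up to an hour without revisiting the server, regardless of the URL scheme (subject to the IE/WinINet caveats discussed above).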
{ "language": "en", "url": "https://stackoverflow.com/questions/174348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "249" }
Q: Forcing single-argument constructors to be explicit in C++? By default, in C++, a single-argument constructor can be used as an implicit conversion operator. This can be suppressed by marking the constructor as explicit. I'd prefer to make "explicit" be the default, so that the compiler cannot silently use these constructors for conversion. Is there a way to do this in standard C++? Failing that, is there a pragma (or similar) that'll work in Microsoft C++ to do this? What about g++ (we don't use it, but it might be useful information)? A: Nope, you have to do it all by hand. It's a pain, but you certainly should get in the habit of making single-argument constructors explicit. I can't imagine the pain you would have if you did find a solution and then had to port the code to another platform. You should usually shy away from compiler extensions like this because it will make the code less portable. A: If there was a pragma or command line option that made constructors explicit by default, how would you declare one that is not explicit? There would have to be another compiler-specific token or pragma to make it possible to declare an implicit conversion constructor. A: It could be rather nasty for any header you have. Like <vector>, or any of the Boost headers. It would also cause quite a few false bug reports. So, no, I don't expect a compiler to add such a #pragma. A: There is no such option in the compilers, as far as I am aware. But there is a Lint warning for such cases (see http://www.gimpel.com/lintinfo.htm). A: I think the answer is no! Sorry, it's not a very constructive answer. I hope somebody else might know more! A: There's no such option available in standard C++, and I don't believe there is in Visual Studio either.
{ "language": "en", "url": "https://stackoverflow.com/questions/174349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: 1-dimensional nesting algorithm What is an effective algorithm for nesting 1 dimensional lengths into predefined stock lengths? For example, if you required steel bars in the following quantities and lengths,

* 5 x 2 metres
* 5 x 3 metres
* 5 x 4 metres

and these can be cut from 10 metre bars. How could you calculate the pattern for cutting the 10m bars so that the minimum number of bars are used? In addition, how could you incorporate multiple stock lengths into the algorithm? I've had a bit of time to work on this so I'm going to write up how I solved it. I hope this will be useful to someone. I'm not sure if it is ok to answer my own question like this. A moderator can change this to an answer if that is more appropriate. First thanks to everyone that answered. This pointed me to the appropriate algorithm; the cutting stock problem. This post was also useful; "Calculating a cutting list with the least amount of off cut waste". Ok, on to the solution. I'll use the following terminology in my solution;

* Stock: a length of material that will be cut into smaller pieces
* Cut: a length of material that has been cut from stock. Multiple cuts may be taken from the same piece of stock
* Waste: the length of material that is left in a piece of stock after all cuts have been made.

There are three main stages to solving the problem,

* Identify all possible cut combinations
* Identify which combinations can be taken from each piece of stock
* Find the optimal mix of cut combinations.

Step 1 With N cuts, there are 2^N-1 unique cut combinations. These combinations can be represented as a binary truth table. Where A,B,C are unique cuts;

A B C | Combination
-------------------
0 0 0 | None
0 0 1 | C
0 1 0 | B
0 1 1 | BC
1 0 0 | A
1 0 1 | AC
1 1 0 | AB
1 1 1 | ABC

A for-loop with some bitwise operators can be used to quickly create groupings of each cut combination. This can get quite time-consuming for large values of N. 
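That bitwise for-loop might look like this (Python here purely for brevity; a sketch rather than the original implementation). Each bit of the mask selects one cut, matching the columns of the truth table above:

```python
def cut_combinations(cuts):
    """Enumerate all 2^N - 1 non-empty cut combinations with a bitmask."""
    n = len(cuts)
    combos = []
    for mask in range(1, 2 ** n):  # 1 .. 2^N - 1, skipping the "None" row
        combos.append([cuts[i] for i in range(n) if mask & (1 << i)])
    return combos

# A, B, C as in the truth table above: 7 combinations in total.
for combo in cut_combinations(["A", "B", "C"]):
    print("".join(combo))
```

The enumeration order differs from the table (bit 0 is A here rather than C), but the set of combinations is the same.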
In my situation there were multiple instances of the same cut. This produced duplicate combinations.

A B B | Combination
-------------------
0 0 0 | None
0 0 1 | B
0 1 0 | B (same as previous)
0 1 1 | BB
1 0 0 | A
1 0 1 | AB
1 1 0 | AB (same as previous)
1 1 1 | ABB

I was able to exploit this redundancy to reduce the time to calculate the combinations. I grouped the duplicate cuts together and calculated the unique combinations of this group. I then appended this list of combinations to each unique combination in a second group to create a new group. For example, with cuts AABBC, the process is as follows.

A A | Combination
-------------------
0 1 | A
1 1 | AA

Call this group X. Append X to unique instances of B,

B B X | Combination
-------------------
0 0 1 | A | AA
0 1 0 | B
0 1 1 | BA | BAA
1 1 0 | BB
1 1 1 | BBA | BBAA

Call this group Y. Append Y to unique instances of C,

C Y | Combination
-----------------
0 1 | A | AA | B | BA | BAA | BB | BBA | BBAA
1 0 | C
1 1 | CA | CAA | CB | CBA | CBAA | CBB | CBBA | CBBAA

This example produces 17 unique combinations instead of 31 (2^5-1). A saving of almost half. Once all combinations are identified it is time to check how this fits into the stock. Step 2 The aim of this step is to map the cut combinations identified in step 1 to the available stock sizes. This is a relatively simple process.

For each cut combination, calculate the sum of all cut lengths.
For each item of stock, if the sum of cuts is less than the stock length, store the stock, cut combination and waste in a data structure.
Add this structure to a list of some sort.

This will result in a list of valid nested cut combinations. It is not strictly necessary to store the waste as this can be calculated from the cut lengths and stock length. However, storing waste reduces processing required in step 3. Step 3 In this step we will identify the combination of cuts that produces the least waste. This is based on the list of valid nests generated in step 2. 
In an ideal world we would calculate all possibilities and select the best one. For any non-trivial set of cuts it would take forever to calculate them all. We will just have to be satisfied with a non-optimal solution. There are various algorithms for accomplishing this task. I chose a method that will look for a nest with the least waste. It will repeat this until all cuts have been accounted for. Start with three lists

* cutList: a list of all required cuts (including duplicates).
* nestList: The list of nests generated in step 2. This is sorted from lowest waste to highest waste.
* finalList: Initially empty, this will store the list of cut combinations that will be output to the user.

Method

pick nest from nestList with the least waste
if EVERY cut in the nest is contained in cutList
    remove cuts from cutList
    copy this nest into the finalList
if some cuts in nest not in cutList
    remove this nest from nestList
repeat until cutList is empty

With this method I managed to get a total waste of around 2-4% for some typical test data. Hopefully I will get to revisit this problem at some point and have a go at implementing the Delayed column generation algorithm. This should give better results. I hope this helped anyone else having this problem. David A: Actually, there's an even more specific problem that applies: The cutting stock problem: The cutting stock problem is an optimization problem, or more specifically, an integer linear programming problem. It arises from many applications in industry. Imagine that you work in a paper mill and you have a number of rolls of paper of fixed width waiting to be cut, yet different customers want different numbers of rolls of various-sized widths. How are you going to cut the rolls so that you minimize the waste (amount of left-overs)? The reason this applies better than the bin packing problem is because you are trying to minimise the waste, rather than minimise the number of 'bins'. 
In a sense, the bin packing problem is the inverse of the cutting stock problem: How would you take lengths of steel and reassemble them into as few bars as possible under a certain size? A: Least Cost Bin Packing edit: Here's a better link: http://en.wikipedia.org/wiki/Bin_packing_problem A: Thanks for suggesting bin packing problem plinth. This led me to the following post, Calculating a cutting list with the least amount of off cut waste. This appears to cover my question well. A: Solved a problem similar to this years ago. I ended up using a genetic algorithm. That would be overkill for small problems. This program was somewhat fun to write, but not fun at the same time, being back in the 16-bit days. First, it made a list of all the ways a 10' piece of raw material could be cut, using the given lengths. For each the amount of wasted material was recorded. (Though it is fast math, it's faster to store these for lookup later.) Then it looked at the list of required pieces. In a loop, it would pick from the way-to-cut list a way of cutting stock that didn't cut more pieces of any size than required. A greedy algorithm would pick one with minimal waste, but sometimes a better solution could be found by loosening up on that. Eventually a genetic algorithm made the choices, the "DNA" being some set of ways-to-cut that did pretty well in past solutions. All this was way back in pre-internet days, hacked up with cleverness and experimentation. These days there's probably some .NET or java library thing to do it already black-boxed - but that would be less fun and less educational, wouldn't it?
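The three-step method from the write-up above can be condensed into a short sketch. This is my reading of it, not the original code: it brute-forces the fitting combinations with itertools rather than the bitwise loop (so it only suits small cut lists), and uses the simple greedy pass from step 3 rather than delayed column generation:

```python
from itertools import combinations

def nestings(cuts, stock_len):
    """Steps 1 and 2: every combination of cuts that fits one stock bar,
    sorted from least waste to most (duplicates collapsed via a set)."""
    found = set()
    ordered = sorted(cuts)
    for r in range(1, len(ordered) + 1):
        for combo in combinations(ordered, r):
            if sum(combo) <= stock_len:
                found.add(combo)
    return sorted(found, key=lambda c: stock_len - sum(c))

def greedy_nest(cuts, stock_len):
    """Step 3: walk the nest list from least waste down, taking each nest
    as many times as the outstanding cuts allow."""
    remaining = sorted(cuts)
    plan = []
    for nest in nestings(cuts, stock_len):
        while all(remaining.count(c) >= nest.count(c) for c in set(nest)):
            for c in nest:
                remaining.remove(c)
            plan.append(nest)
        if not remaining:
            break
    return plan

# The example from the question: five each of 2 m, 3 m and 4 m cuts from 10 m bars.
cuts = [2] * 5 + [3] * 5 + [4] * 5
plan = greedy_nest(cuts, 10)
waste = sum(10 - sum(nest) for nest in plan)
print(len(plan), "bars,", waste, "m waste")
```

For this input the 45 m of cuts land in 5 bars with 5 m total waste, which matches the theoretical minimum (45 m cannot fit in 4 bars of 10 m).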
{ "language": "en", "url": "https://stackoverflow.com/questions/174351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Using ButtonField or HyperLinkField to write a cookie in ASP.NET I currently have a DetailsView in ASP.NET that gets data from the database based on an ID passed through a QueryString. What I've been trying to do now is to then use that same ID in a new cookie that is created when a user clicks either a ButtonField or a HyperLinkField. What I have in the .aspx is this: <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False" DataKeyNames="ArtID" DataSourceID="AccessDataSource1" Height="50px" Width="125px"> <Fields> <asp:ImageField DataAlternateTextField="Title" DataImageUrlField="FileLocation"> </asp:ImageField> <asp:BoundField DataField="ArtID" HeaderText="ArtID" InsertVisible="False" ReadOnly="True" SortExpression="ArtID" /> <asp:BoundField DataField="Title" HeaderText="Title" SortExpression="Title" /> <asp:BoundField DataField="ArtDate" HeaderText="ArtDate" SortExpression="ArtDate" /> <asp:BoundField DataField="Description" HeaderText="Description" SortExpression="Description" /> <asp:BoundField DataField="FileLocation" HeaderText="FileLocation" SortExpression="FileLocation" /> <asp:BoundField DataField="Medium" HeaderText="Medium" SortExpression="Medium" /> <asp:BoundField DataField="Location" HeaderText="Location" SortExpression="Location" /> <asp:BoundField DataField="PageViews" HeaderText="PageViews" SortExpression="PageViews" /> <asp:HyperLinkField DataNavigateUrlFields="ArtID" DataNavigateUrlFormatString="Purchase.aspx?ArtID={0}" NavigateUrl="Purchase.aspx" Text="Add To Cart" /> <asp:ButtonField ButtonType="Button" DataTextField="ArtID" Text="Add to Cart" CommandName="btnAddToCart_Click" /> </Fields> </asp:DetailsView> When using a reguler asp.net button such as: <asp:Button ID="btnAddArt" runat="server" Text="Add To Cart" /> I would have something like this in the VB: Protected Sub btnAddArt_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnAddArt.Click Dim CartArtID As New HttpCookie("CartArtID") CartArtID.Value = 
ArtID.DataField CartArtID.Expires = Date.Today.AddDays(0.5) Response.Cookies.Add(CartArtID) Response.Redirect("Purchase.aspx") End Sub However, I can't figure out how I go about applying this to the ButtonField instead since the ButtonField does not allow me to give it an ID. The ID that I need to add to the cookie is the ArtID in the first BoundField. Any ideas/advice on how I would go about doing this are greatly appreciated! Alternatively, if I could do it with the HyperLinkField or with the regular button, that would be just as good, but I'm having trouble using a regular button to access the ID within the DetailsView. Thanks A: Use the CommandName and the CommandArgument parameters of the Button class. Specifying a CommandName will expose the ItemCommand event. From there you can check for the CommandName, easily grab the CommandArgument (the ID of your row or item) then push whatever data you need into your cookie that way. More formally, you're looking to have your button like this: <asp:Button ID="btnAddArt" CommandName="AddCard" CommandArgument="[ArtID]" runat="server" Text="Add To Cart" /> Then your code behind can function like this: Private Sub ProcessDetailsViewCommand(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewCommandEventArgs) Handles DetailsView1.ItemCommand ' Using Case statement makes it easy to add more custom commands later on. Select Case e.CommandName Case "AddCard" Dim CartArtID As New HttpCookie("CartArtID") CartArtID.Value = Integer.Parse(e.CommandArgument.ToString) CartArtID.Expires = Date.Today.AddDays(0.5) Response.Cookies.Add(CartArtID) Response.Redirect("Purchase.aspx") End Select End Sub A: I noticed you're putting the key in the grid itself (DataKeyNames="ArtID"). You have access to that in your event handler -- the event args will get you the current index for indexing into the datakeys on the grid. Make sense? 
A: Since you set the DataKeyNames property of the DetailsView control, you can access the ArtID of the displayed item using the DetailsView1.DataKey(0). Alternatively you can use DetailsView1.SelectedValue to get the same. As for handling the click event, you'll have to add an ItemCommand event handler to the DetailsView.
{ "language": "en", "url": "https://stackoverflow.com/questions/174352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I resize my panels when resizing the main form in a winforms application? If the user of my winforms application resizes the main form, I want the 2 panels to stretch out also, along with the child controls. How can I achieve this? A: Play around with the Dock and Anchor properties of your panels. A: You can use the TableLayoutPanel and set each column width to x%; this way you will have the screen split in two. The TableLayoutPanel must be docked to fill the form, or anchored. The TableLayoutPanel can contain other panels. Or you can simply use your panels with Anchor (click the panel and go in the Properties panel of VS). A: If the user of my winforms application resizes the main form, I want the 2 panels to stretch out also, along with the child controls. You're the ideal use case for TableLayoutPanel (MSDN). If you were only scaling the panels, Dock and Anchor would be appropriate. But since you want your controls to scale well, you're pretty much in an AutoLayout world, and likely the TableLayoutPanel. (I'm a huge fan of this, by the way, although overuse can have a negative performance impact on laying out your controls.) Some helpful links on using it to configure your layout to scale:

* AutoLayout By Examples
* WinForms AutoLayout Basics: TableLayoutPanel
* Video Training on TableLayoutPanel
{ "language": "en", "url": "https://stackoverflow.com/questions/174355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Ways to ASSERT expressions at build time in C I'm tidying up some older code that uses 'magic numbers' all over the place to set hardware registers, and I would like to use constants instead of these numbers to make the code somewhat more expressive (in fact they will map to the names/values used to document the registers). However, I'm concerned that with the volume of changes I might break the magic numbers. Here is a simplified example (the register set is more complex):

const short mode0 = 0;
const short mode1 = 1;
const short mode2 = 2;
const short state0 = 0;
const short state1 = 4;
const short state2 = 8;

so instead of:

set_register(5);

we have:

set_register(state1|mode1);

What I'm looking for is a build-time version of:

ASSERT(5==(state1|mode1));

Update @Christian, thanks for the quick response, I'm interested in a C / non-boost environment answer too because this is driver/kernel code.

A: 
#define static_assert(expr) \
    int __static_assert(int static_assert_failed[(expr)?1:-1])

It can be used anywhere, any number of times. I think it is the easiest solution. Before usage, test it with your compiler carefully.

A: Any of the techniques listed here should work, and when C++0x becomes available you will be able to use the built-in static_assert keyword.

A: Try:

#define STATIC_ASSERT(x, error) \
    do { \
        static const char error[(x)?1:-1];\
    } while(0)

Then you can write:

STATIC_ASSERT(a == b, a_not_equal_to_b);

Which may give you a better error message (depending on your compiler).

A: If you have Boost then using BOOST_STATIC_ASSERT is the way to go. If you're using C or don't want to get Boost, here's my c_assert.h file that defines (and explains the workings of) a few macros to handle static assertions. It's a bit more convoluted than it should be because in ANSI C code you need 2 different macros - one that can work in the area where you have declarations and one that can work in the area where normal statements go.
There is also a bit of work that goes into making the macro work at global scope or in block scope, and a bunch of gunk to ensure that there are no name collisions. STATIC_ASSERT() can be used in the variable declaration block or global scope. STATIC_ASSERT_EX() can be among regular statements. For C++ code (or C99 code that allows declarations mixed with statements) STATIC_ASSERT() will work anywhere.

/* Define macros to allow compile-time assertions.

   If the expression is false, an error something like

       test.c(9) : error XXXXX: negative subscript

   will be issued (the exact error and its format is dependent on the
   compiler).

   The technique used for C is to declare an extern (which can be used in
   file or block scope) array with a size of 1 if the expr is TRUE and a
   size of -1 if the expr is false (which will result in a compiler
   error). A counter or line number is appended to the name to help make
   it unique. Note that this is not a foolproof technique, but compilers
   are supposed to accept multiple identical extern declarations anyway.

   This technique doesn't work in all cases for C++ because extern
   declarations are not permitted inside classes. To get a CPP_ASSERT(),
   there is an implementation of something similar to Boost's
   BOOST_STATIC_ASSERT(). Boost's approach uses template specialization;
   when expr evaluates to 1, a typedef for the type

       ::interslice::StaticAssert_test< sizeof( ::interslice::StaticAssert_failed<true>) >

   which boils down to

       ::interslice::StaticAssert_test< 1>

   which boils down to

       struct StaticAssert_test

   is declared. If expr is 0, the compiler will be unable to find a
   specialization for ::interslice::StaticAssert_failed<false>.

   STATIC_ASSERT() or C_ASSERT should work in either C or C++ code (and
   they do the same thing). CPP_ASSERT is defined only for C++ code.

   Since declarations can only occur at file scope or at the start of a
   block in standard C, the C_ASSERT() or STATIC_ASSERT() macros will
   only work there.
   For situations where you want to perform compile-time asserts
   elsewhere, use C_ASSERT_EX() or STATIC_ASSERT_EX(), which wrap an enum
   declaration inside its own block. */

#ifndef C_ASSERT_H_3803b949_b422_4377_8713_ce606f29d546
#define C_ASSERT_H_3803b949_b422_4377_8713_ce606f29d546

/* first some utility macros to paste a line number or counter to the end of an identifier
 * this will let us have some chance of generating names that are unique
 * there may be problems if a static assert ends up on the same line number in different headers
 * to avoid that problem in C++ use namespaces
 */
#if !defined( PASTE)
#define PASTE2( x, y) x##y
#define PASTE( x, y) PASTE2( x, y)
#endif /* PASTE */

#if !defined( PASTE_LINE)
#define PASTE_LINE( x) PASTE( x, __LINE__)
#endif /* PASTE_LINE */

#if !defined( PASTE_COUNTER)
#if (_MSC_VER >= 1300) /* __COUNTER__ introduced in VS 7 (VS.NET 2002) */
#define PASTE_COUNTER( x) PASTE( x, __COUNTER__) /* __COUNTER__ is an _MSC_VER >= 1300 non-Ansi extension */
#else
#define PASTE_COUNTER( x) PASTE( x, __LINE__) /* since there's no __COUNTER__ use __LINE__ as a more or less reasonable substitute */
#endif
#endif /* PASTE_COUNTER */

#if __cplusplus
extern "C++" { /* required in case we're included inside an extern "C" block */
namespace interslice {
template<bool b> struct StaticAssert_failed;
template<> struct StaticAssert_failed<true> { enum { val = 1 }; };
template<int x> struct StaticAssert_test { };
}
}

#define CPP_ASSERT( expr) typedef ::interslice::StaticAssert_test< sizeof( ::interslice::StaticAssert_failed< (bool) (expr) >) > PASTE_COUNTER( IntersliceStaticAssertType_)
#define STATIC_ASSERT( expr) CPP_ASSERT( expr)
#define STATIC_ASSERT_EX( expr) CPP_ASSERT( expr)

#else

#define C_ASSERT_STORAGE_CLASS extern /* change to typedef might be needed for some compilers?
*/ #define C_ASSERT_GUID 4964f7ac50fa4661a1377e4c17509495 /* used to make sure our extern name doesn't collide with something else */ #define STATIC_ASSERT( expr) C_ASSERT_STORAGE_CLASS char PASTE( PASTE( c_assert_, C_ASSERT_GUID), [(expr) ? 1 : -1]) #define STATIC_ASSERT_EX(expr) do { enum { c_assert__ = 1/((expr) ? 1 : 0) }; } while (0) #endif /* __cplusplus */ #if !defined( C_ASSERT) /* C_ASSERT() might be defined by winnt.h */ #define C_ASSERT( expr) STATIC_ASSERT( expr) #endif /* !defined( C_ASSERT) */ #define C_ASSERT_EX( expr) STATIC_ASSERT_EX( expr) #ifdef TEST_IMPLEMENTATION C_ASSERT( 1 < 2); C_ASSERT( 1 < 2); int main( ) { C_ASSERT( 1 < 2); C_ASSERT( 1 < 2); int x; x = 1 + 4; C_ASSERT_EX( 1 < 2); C_ASSERT_EX( 1 < 2); return( 0); } #endif /* TEST_IMPLEMENTATION */ #endif /* C_ASSERT_H_3803b949_b422_4377_8713_ce606f29d546 */ A: The common, portable option is #if 5 != (state1|mode1) # error "aaugh!" #endif but it doesn't work in this case, because they're C constants and not #defines. You can see the Linux kernel's BUILD_BUG_ON macro for something that handles your case: #define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)])) When condition is true, this becomes ((void)sizeof(char[-1])), which is illegal and should fail at compile time, and otherwise it becomes ((void)sizeof(char[1])), which is just fine. A: NEW ANSWER : In my original answer (below), I had to have two different macros to support assertions in a function scope and at the global scope. I wondered if it was possible to come up with a single solution that would work in both scopes. I was able to find a solution that worked for Visual Studio and Comeau compilers using extern character arrays. But I was able to find a more complex solution that works for GCC. But GCC's solution doesn't work for Visual Studio. :( But adding a '#ifdef __ GNUC __', it's easy to choose the right set of macros for a given compiler. 
Solution:

#ifdef __GNUC__
#define STATIC_ASSERT_HELPER(expr, msg) \
    (!!sizeof \
     (struct { unsigned int STATIC_ASSERTION__##msg: (expr) ? 1 : -1; }))
#define STATIC_ASSERT(expr, msg) \
    extern int (*assert_function__(void)) [STATIC_ASSERT_HELPER(expr, msg)]
#else
#define STATIC_ASSERT(expr, msg) \
    extern char STATIC_ASSERTION__##msg[1]; \
    extern char STATIC_ASSERTION__##msg[(expr)?1:2]
#endif /* #ifdef __GNUC__ */

Here are the error messages reported for STATIC_ASSERT(1==2, test_message); at line 22 of test.c:

GCC:
line 22: error: negative width in bit-field `STATIC_ASSERTION__test_message'

Visual Studio:
test.c(22) : error C2369: 'STATIC_ASSERTION__test_message' : redefinition; different subscripts
test.c(22) : see declaration of 'STATIC_ASSERTION__test_message'

Comeau:
line 22: error: declaration is incompatible with "char STATIC_ASSERTION__test_message[1]" (declared at line 22)

ORIGINAL ANSWER:

I do something very similar to what Checkers does. But I include a message that'll show up in many compilers:

#define STATIC_ASSERT(expr, msg) \
    { \
        char STATIC_ASSERTION__##msg[(expr)?1:-1]; \
        (void)STATIC_ASSERTION__##msg[0]; \
    }

And for doing something at the global scope (outside a function) use this:

#define GLOBAL_STATIC_ASSERT(expr, msg) \
    extern char STATIC_ASSERTION__##msg[1]; \
    extern char STATIC_ASSERTION__##msg[(expr)?1:2]

A: There is an article by Ralf Holly that examines different options for static asserts in C.
He presents three different approaches:

* switch case values must be unique
* arrays must not have negative dimensions
* division by zero for constant expressions

His conclusion for the best implementation is this:

#define assert_static(e) \
    do { \
        enum { assert_static__ = 1/(e) }; \
    } while (0)

A: Check out Boost's static assert.

A: You can roll your own static assert if you don't have access to a third-party library static assert function (like Boost):

#define STATIC_ASSERT(x) \
    do { \
        const static char dummy[(x)?1:-1] = {0};\
    } while(0)

The downside is, of course, that the error message is not going to be very helpful, but at least it will give you the line number.

A: Ensure you compile with a sufficiently recent compiler (e.g. gcc -std=c11). Then your statement is simply:

_Static_assert((state1|mode1) == 5, "Unexpected change of bitflags");

A: 
#define MODE0 0
#define MODE1 1
#define MODE2 2
#define STATE0 0
#define STATE1 4
#define STATE2 8

set_register(STATE1|MODE1); //set_register(5);

#if (!(5==(STATE1|MODE1))) //MY_ASSERT(5==(state1|mode1)); note the !
#error "error blah blah"
#endif

This is not as elegant as a one-line MY_ASSERT(expr) solution. You could use sed, awk, or an m4 macro processor before compiling your C code to generate the DEBUG code expansion of MY_ASSERT(expr) to multiple lines, or NODEBUG code which removes them for production.
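Pulling the ideas above together, here is a minimal, self-contained sketch of the negative-array-size technique applied to the question's register constants. The macro and function names are my own invention (not from any of the answers or any real header); the failing case is what the compiler rejects, so a clean build is itself the assertion passing:

```c
#include <assert.h>

/* Sketch of the "negative array size" trick: if cond is false the
   typedef'd array gets size -1 and the translation unit fails to
   compile; if cond is true the typedef is harmless. The line number is
   pasted into the name so several asserts can coexist. */
#define CONCAT2(a, b) a##b
#define CONCAT(a, b) CONCAT2(a, b)
#define STATIC_ASSERT(cond) \
    typedef char CONCAT(static_assertion_at_line_, __LINE__)[(cond) ? 1 : -1]

/* The register constants from the question, as enum constants so they
   are usable in constant expressions (unlike const short in C). */
enum { mode0 = 0, mode1 = 1, mode2 = 2, state0 = 0, state1 = 4, state2 = 8 };

/* Checked entirely at compile time: */
STATIC_ASSERT(5 == (state1 | mode1));
STATIC_ASSERT(9 == (state2 | mode1));

/* A stand-in for the register write, so there is something to run. */
int register_value(void)
{
    return state1 | mode1;
}
```

Note the switch from `const short` to `enum` constants in the sketch: C (unlike C++) does not treat a `const` variable as a constant expression, so the original `const short state1` could not appear in an array size or `#if` at all.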
{ "language": "en", "url": "https://stackoverflow.com/questions/174356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How to detect file modifications with TFS? It seems that when I use a tool (such as WinMerge) to update my codebase... my Visual Studio Team System (VSTS) integration with Team Foundation Server (TFS) doesn't seem to pick it up. How do I know which files to check out and check back in? Is there something I am missing? Is this a feature that isn't part of VSTS & TFS?

A: First, this is probably because the files have not yet been checked out. If you do that first before running your update, TFS will see those changes.

Second, you can use TFS Power Tools (available from MS) to review the local repository for changes that are not recognized. If differences are found, Power Tools resets the status of the file so the Pending Changes window sees the change. This does not require you to check out the files; it will do that for you if there are differences. Pretty nifty. Power Tools for 2008 is here: http://www.microsoft.com/en-us/download/details.aspx?id=15836 and you are looking for the "Online" command: "Online Command - Use the online command to create pending edits on writable files that do not have pending edits."

A: I assume you are applying changes across an entire project, outside of VS. You will have to check out the complete project first, then apply the changes and check back in. Unmodified files will not actually be checked in, AFAIK.

A: Your question sounds as if you have not installed the Team Foundation Server client. If you have installed the Visual Studio Team System edition, you are able to connect with the Team Foundation Server. But to have the integration working you need to install the Team Foundation Server client as well. After having done this, your Visual Studio should inform you of file changes and then automatically check out the files.
{ "language": "en", "url": "https://stackoverflow.com/questions/174365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I re-use an existing database connection in phpBB3? I am using my own db for a phpBB3 forum, and I wish to insert some data from the forum into my own tables. Now, I can make my own connection and it runs my query, but in trying to use the $db variable (which I think is what you're meant to use??) it gives me an error. I would like someone to show me the bare bones which I insert my query into to be able to run it.

A: Well.. You haven't given us very much information, but there are two things you need to do to connect to and query a database. For phpBB, you may want to read the documentation they have presented: http://wiki.phpbb.com/Database_Abstraction_Layer Here is a general overview of how you'd execute a query:

include($phpbb_root_path . 'includes/db/mysql.' . $phpEx);
$db = new dbal_mysql();
// we're using bertie and bertiezilla as our example user credentials. You need to fill in your own ;D
$db->sql_connect('localhost', 'bertie', 'bertiezilla', 'phpbb', '', false, false);

$sql = "INSERT INTO (rest of sql statement)";
$result = $db->sql_query($sql);

A: I presumed that phpBB already had a connection to my database. Thus I wasn't going to use a new one? Can I make a new one and call it something else and not get an error? And

$resultid = mysql_query($sql, $db345);

Where $db345 is the name of my database connection

A: 
$db = new dbal_mysql();
// we're using bertie and bertiezilla as our example user credentials. You need to fill in your own ;D
$db->sql_connect('localhost', 'bertie', 'bertiezilla', 'phpbb', '', false, false);

$sql = "INSERT INTO (rest of sql statement)";
$result = $db->sql_query($sql);
{ "language": "en", "url": "https://stackoverflow.com/questions/174375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Spring webflow : Move through view states Within a Spring webflow, I need to implement a navigation bar that will allow the user to "step back" or resume the flow to one of the previous views. For example:

* View 1 = login
* View 2 = My informations
* View 3 = My messages
* View 4 = Close session

For this example, I would like to return back to view 2 from the view 4 page.

A: It depends how you're going about doing this. If you're doing this within a single flow, you'll have something like this:

<view-state id="loginView" view="login.jsp">
    <action-state bean="someBean" method="login" />
    <transition on="success" to="informationView" />
</view-state>

<view-state id="informationView" view="information.jsp">
    <render-actions>
        <action-state bean="someBean" method="retrieveInformation" />
    </render-actions>
    <transition on="forward" to="messageView" />
    <transition on="back" to="loginView" />
</view-state>

<view-state id="messageView" view="message.jsp">
    <render-actions>
        <action-state bean="someBean" method="retrieveMessage" />
    </render-actions>
    <transition on="forward" to="closeView" />
    <transition on="back" to="informationView" />
</view-state>

<view-state id="closeView" view="logout.jsp">
    <transition on="jumpBack" to="informationView" />
</view-state>

The "jumpBack" transition on "closeView" will jump you back to view state #2, which is your information view.

With sub-flows it is tricky. You'd need to chain it: call a subflow, and if an event is signaled that states you need to end your flow with a specific state, immediately do so. For example, say that your flow chain is login->information->message->close. On the close flow, the end-state would be "returnToInformation". The message flow has a transition on="returnToInformation" to="returnToInformation". "returnToInformation" is also an end-state in the message flow. Then, the information flow has a transition on="returnToInformation" to="displayInformationPage", which would then re-display the information page.
A: I did this by defining some global flows that represented the tabs. I then defined an object that represented the tabs on the flows and indicated if the current tab was active. When the user moved through the tabs I updated the tab object as appropriate. When the user went to click on one of the tabs, it used the global flows to allow them to move between the tabs. (For my implementation I found it easier to call actions rather than view states, because the views may change depending on the user interaction used to get there, so you may need to recalculate them.)

For the tab bar itself, I put it in a single JSP that I then placed at the top of each form; this made updating it easier. It's not the nicest solution, but it does work. Good luck.
Q: How can I check if at least one of two subexpressions in a regular expression match? I am trying to match floating-point decimal numbers with a regular expression. There may or may not be a number before the decimal, and the decimal may or may not be present, and if it is present it may or may not have digits after it. (For this application, a leading +/- or a trailing "E123" is not allowed). I have written this regex: /^([\d]*)(\.([\d]*))?$/ Which correctly matches the following: 1 1. 1.23 .23 However, this also matches empty string or a string of just a decimal point, which I do not want. Currently I am checking after running the regex that $1 or $3 has length greater than 0. If not, it is not valid. Is there a way I can do this directly in the regex? A: I think this will do what you want. It either starts with a digit, in which case the decimal point and digits after it are optional, or it starts with a decimal point, in which case at least one digit is mandatory after it. /^\d+(\.\d*)?|\.\d+$/ A: Create a regular expression for each case and OR them. Then you only need test if the expression matches. /^(\d+(\.\d*)?)|(\d*\.\d+)$/ A: A very late answer, but like to answer, taken from regular-expressions.info [-+]?[\d]*\.?[\d]+? Update This [\d]*\.?[\d]+?|[\d]+\. will help you matching 1. http://regex101.com/r/lJ7fF4/7
{ "language": "en", "url": "https://stackoverflow.com/questions/174381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Does anyone know if NetBeans 6.x can be used with Java SE 6 on Leopard? Java SE 6 (64 bit only) is now on OS X and that is a good thing. As I understand it since Eclipse is still Carbon and thus 32 bit, it cannot be used for 1.6 on Leopard, only 1.5. Does anyone know if NetBeans 6.x can be used with Java SE 6 on Leopard utilizing its JVM? A: Yes, you should be able to. A number of blogs have reported running Netbeans on 1.6 as well as the the problems they had with earlier versions of NB. The NB issue tracker also has a number of bugs that have been fixed that affected 1.6 on Mac OS. If you have trouble getting it to run, you might also try the Netbeans forum. A: Eclipse works with java 1.6, kinda. Ecplipse runs using the 1.5 vm, but it can compile code for 1.6 using the 1.6 java compiler. I have used netbeans for 1.6 development and it seems alright. A: I haven't tried it yet, but can't think of any reason why not.
{ "language": "en", "url": "https://stackoverflow.com/questions/174383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why does RSACryptoServiceProvider.VerifyHash need an LDAP check? I recently encountered an odd problem with RSACryptoServiceProvider.VerifyHash. I have a web application using it for decryption. When users running the web service were doing so over our VPN it became very very slow. When they had no connection or a internet connection they were fine. After much digging I found that every time RSACryptoServiceProvider.VerifyHash is called it makes an LDAP request to check MyMachineName\ASPNET. This doesn't happen with our WebDev (cassini based) servers as they run as the current user, and it is only really slow over the VPN, but it shouldn't happen at all. This seems wrong for a couple of reasons: * *Why is it checking the domain controller for a local machine user? *Why does it care? The encryption/decryption works regardless. Does anyone know why this occurs or how best to work around it? A: From this KB it looks like a 'wrinkle' in the code that needs sorting: http://support.microsoft.com/kb/948080 A: Thanks (+1 & ans) Tested and works. From the KB article: The SignData or VerifyData methods always perform an OID lookup query which is sent to the domain controller, even when the application is running in a local user account. This may cause slowness while signing or verifying data. Logon failure audit events occur on the DC because the client machine's local user account is not recognized by the domain. Therefore, the OID lookup fails. This is exactly what we were seeing. We changed this line: rsa.VerifyHash( hashedData, CryptoConfig.MapNameToOID( "SHA1" ), signature ); To this: rsa.VerifyHash( hashedData, null, signature ); And that fixed it.
{ "language": "en", "url": "https://stackoverflow.com/questions/174387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Why is getenv('REMOTE_ADDR') giving me a blank IP address? This PHP code... 207 if (getenv(HTTP_X_FORWARDED_FOR)) { 208 $ip = getenv('HTTP_X_FORWARD_FOR'); 209 $host = gethostbyaddr($ip); 210 } else { 211 $ip = getenv('REMOTE_ADDR'); 212 $host = gethostbyaddr($ip); 213 } Throws this warning... Warning: gethostbyaddr() [function.gethostbyaddr]: Address is not in a.b.c.d form in C:\inetpub...\filename.php on line 212 It seems that $ip is blank. A: Why don't you use $_SERVER['REMOTE_ADDR'] and $_SERVER['HTTP_X_FORWARDED_FOR'] A: A better solution has already been given. But still: getenv('HTTP_X_FORWARD_FOR'); should be getenv('HTTP_X_FORWARDED_FOR'); Yeah... sometimes computers want to have strings they understand ;-) A: on php.net it says the following: The function getenv does not work if your Server API is ASAPI (IIS). So, try to don't use getenv('REMOTE_ADDR'), but $_SERVER["REMOTE_ADDR"]. Did you maybe try to do it with $_SERVER? A: First of all, getenv() takes a string as parameter. On line 207, you should use: getenv('HTTP_X_FORWARDED_FOR') ...instead of: getenv(HTTP_X_FORWARDED_FOR) Secondly, accessing these variables through $_SERVER is a more reliable solution, as getenv() tends to display different behaviour on different platforms. Also, these variables will probably not work if you are running this script through CLI. Try a var_dump($ip); and see what the variable contains.
{ "language": "en", "url": "https://stackoverflow.com/questions/174393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Codify a measure in a database field name I've got a question concerning fields in databases which are measures that might be displayed in different units but are stored only in one, such as "height", for example. Where should the "pattern unit" be stated?. Of course, in the documentation, etc... But we all know nobody reads the documentation and that self-documented things are preferable. From a practical point of view, what do you think of coding it in the database field (such as height_cm for example)?. I find this weird at a first look, but I find it practical to avoid any mistakes when different people deal with the database directly and the "pattern unit" will never change. What do you think? A: What's weird about height_cm? Looks good to me. Sometimes you see measures and units in two separate fields, which is much more painful. As long as you know the units aren't going to change, I think height_cm is a good way to deal with it. A: Most databases support comments on columns. For example in Postgres you could set a comment like this: COMMENT ON COLUMN my_table.my_column IS 'cm'; Storing the unit name this way means your database is self-documenting. I would also strongly recommend using standard scientific units (i.e. the metric system). A: I agree, nothing wrong with adding the unit to the field name. The only thing I'd say is to make the naming convention consistent across your database - i.e. avoid situations where you have both height_cm and mm_width present in the same database! A: Be wary about measures that may change like currencies. In many cases it is not practical rename database field when it's measure changes. It is rather silly to have a field called amount_mk which used to contain money amount in marks but currently actually contains money amount in euros.
{ "language": "en", "url": "https://stackoverflow.com/questions/174394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Using VS 2005 to design abstract forms There's a famous bug in Visual Studio that prevents you from using the form designer on a subclass of an abstract form. This problem has already been elucidated and solved most elegantly by Urban Potato; that's not the part I'm having trouble with. The trouble is, I have duplicated the technique described by Urban Potato, and included it in my project (which happens to be pretty big), and now every time I try to open the designer of my derived form, I get that Microsoft "frightfully sorry, old chap, but I'm going to have to kill you now" message (reminiscent of Otto in A Fish Called Wanda) that says "Microsoft Visual Studio 2005 has encountered a problem and needs to close. We are sorry for the inconvenience." But here's the real kicker: if you just ignore that message, and stuff it away beyond the bottom right corner of the screen, you can carry on working, perfectly normally! Just don't click the "Send Error Report" or "Don't Send" buttons, coz then VS does close. Still, this phenomenon is highly annoying, and I'd very much like to be able to work without the feeling that my IDE is just looking for some really nasty way to get back at me for pooh-poohing its sage advice to quit now - or else. Further useful info: this same behavior can be duplicated on all other computers in my office; it's nothing specific to my machine. Obviously something in the project/code is upsetting the IDE, but at least I know the design pattern works, coz after I ignore the crash message, the designer works perfectly well. I just don't know where to start looking for the thing that is causing this problem. Any ideas? Thanks! A: If it were me, I'd try attaching a debugger (maybe another instance of Visual Studio) to the instance that throws the error dialog, and see if the stack trace gives you any insights into what's causing the error. A: The reason your are getting this problem might be that your base form is an abstracted class. 
The reason why the IDE will crashes is because the IDE tries to create an instance of the the abstract class which it cannot do. It might be that you accidentally marked the internal class as abstract too. Regards, JvR
{ "language": "en", "url": "https://stackoverflow.com/questions/174400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .Net WebBrowser.DocumentText Isn't Changing! In my vb.net program, I am using a webbrowser to show the user an HTML preview. I was previously hitting a server to grab the HTML, then returning on an asynchronous thread and raising an event to populate the WebBrowser.DocumentText with the HTML string I was returning. Now I set it up to grab all of the information on the client, without ever having to hit the server, and I'm trying to raise the same event. I watch the code go through, and it has the HTML string correct and everything, but when I try to do browser.DocumentText = _emailHTML the contents of DocumentText remain as "<HTML></HTML>" I was just wondering why the DocumentText was not being set. Anyone have any suggestions? A: Try the following: browser.Navigate("about:blank"); HtmlDocument doc = browser.Document; doc.Write(String.Empty); browser.DocumentText = _emailHTML; I've found that the WebBrowser control usually needs to be initialized to about:blank anyway. The same needs to be done between navigates to different types of content (like text/xml to text/html) because the renderer is different (mshtml for text/html, something else for text/xml). See Also: C# 2.0 WebBrowser control - bug in DocumentText? A: I found the following and it worked! webBrowser.Navigate("about:blank"); webBrowser.Document.OpenNew(false); webBrowser.Document.Write(html); webBrowser.Refresh(); A: That worked for me: webBrowser.Navigate("about:blank"); webBrowser.Document?.Write(htmlString); A: Make sure that you do not cancel Navigating event of WebBrowser for about:blank page. WebBrowser navigates to about:blank before setting DocumentText. So if you want to handle links by yourself you need to create following handler of Navigating event: private void webBrowser1_Navigating(object sender, WebBrowserNavigatingEventArgs e) { if(e.Url.OriginalString.StartsWith("about:")) { return; } e.Cancel = true; // ... 
} A: I found the best way to handle this, is as follows: if (this.webBrowser1.Document == null) { this.webBrowser1.DocumentText = htmlSource; } else { this.webBrowser1.Document.OpenNew(true); this.webBrowser1.Document.Write(htmlSource); } A: Just spotted this in some of our old code. _webBrowser.DocumentText = builder.WriteToString( ... ); Application.DoEvents(); Apparently a DoEvents also kicks the browser into rendering A: please refer to this answer c# filenotfoundexception on webbrowser? A: While Application.DoEvents() fix it in a WinForms project, it was irrelevant in a WPF project. I finally got it to work by using webBrowser.Write( htmlContent ) (instead of webBrowser.DocumentText = htmlContent). A: This always works using mshtml; private IHTMLDocument2 Document { get { if (Browser.Document != null) { return Browser.Document.DomDocument as IHTMLDocument2; } return null; } } if (Document == null) { Browser.DocumentText = Contents; } else { Document.body.innerHTML = Contents; }
{ "language": "en", "url": "https://stackoverflow.com/questions/174403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Cookies and Objects I'm having trouble figuring out how to access a cookie from a compiled object. I'm trying to make a compiled (DLL) object that will check the users cookie and then compare that to a database to confirm they have the correct access. I can pass in the cookie info fine and the component will work, but I'm trying to have the component check the users cookie as well. I'm not even sure what object to use. I've been searching all weekend and I've seen references to httprequest, httpcookie, cookie, and cookiecollection. I can look up cookie values on the page itself using Request.Cookies("inet")("user_id") but this doesn't work in the component. A: Objects (App_Code/ compiled dlls) can only access Request via the static HttpContext.Current object HttpCookie cookie = HttpContext.Current.Request.Cookies["CookieName"]; (If it's not called from a web app, HttpContext.Current is null, so you may want to check for that when running in unit testing) (If this isn't App_Code, you'll need to reference System.Web) A: If the component is a separate DLL from your web app you'd need to pass in a reference to the Request object. That said why not just read/check the cookie value in your ASP.NET code before calling into your DLL. It's not such a good idea to have your business logic coupled to your web tier like this.
{ "language": "en", "url": "https://stackoverflow.com/questions/174412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Modern equivalent of javadeps? I am looking for a replacement for javadeps, which I used to use to generate sections of a Makefile to specify which classes depended on which source files. Unfortunately javadeps itself has not been updated in a while, and cannot parse generic types or static imports. The closest thing I've found so far is Dependency Finder. It almost does what I need but does not match non-public classes to their source files (as the source filename does not match the class name.) My current project has an interface whose only client is an inner class of a package-private class, so this is a significant problem. Alternatively if you are not aware of a tool that does this, how do you do incremental compilation in large Java projects using command-line tools? Do you compile a whole package at a time instead? Notes: * *javadeps is not to be confused with jdepend, which is for a very different purpose. *This question is a rewrite of "Tool to infer dependencies for a java project" which seemed to be misunderstood by 2 out of 3 responders. A: I use the <depend> task in ant, which is ok, but not 100% trustworthy. Supposedly JavaMake can do this dependency analysis, but it seems to be rarely updated and the download page is only sometimes available.
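The Ant approach from the last answer looks roughly like this — a hypothetical build.xml fragment (target and directory names are illustrative, not from any real project). The <depend> task deletes out-of-date .class files, including transitive dependents when closure is enabled, so the following <javac> recompiles only what is stale:

```xml
<target name="compile">
  <!-- Remove .class files whose sources (or, via closure, dependencies) changed -->
  <depend srcdir="src" destdir="build/classes"
          cache="build/depcache" closure="yes"/>
  <!-- javac then recompiles only the missing/stale classes -->
  <javac srcdir="src" destdir="build/classes"/>
</target>
```

As the answer warns, this is not 100% trustworthy: <depend> works from the compiled class files, so it cannot see, for example, constants that were inlined at compile time.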
{ "language": "en", "url": "https://stackoverflow.com/questions/174417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can you improve this 'lines of code algorithm' in F#? I've written a little script to iterate across files in folders to count lines of code. The heart of the script is this function to count lines of whitespace, comments, and code. (Note that for the moment it is tailored to C# and doesn't know about multi-line comments). It just doesn't look very nice to me - has anyone got a cleaner version? // from list of strings return tuple with count of (whitespace, comments, code) let loc (arr:List<string>) = let innerloc (whitesp, comment, code) (l:string) = let s = l.Trim([|' ';'\t'|]) // remove leading whitespace match s with | "" -> (whitesp + 1, comment, code) //blank lines | "{" -> (whitesp + 1, comment, code) //opening blocks | "}" -> (whitesp + 1, comment, code) //closing blocks | _ when s.StartsWith("#") -> (whitesp + 1, comment, code) //regions | _ when s.StartsWith("//") -> (whitesp, comment + 1, code) //comments | _ -> (whitesp, comment, code + 1) List.fold_left innerloc (0,0,0) arr A: I think what you have is fine, but here's some variety to mix it up. (This solution repeats your problem of ignoring trailing whitespace.) type Line = | Whitespace = 0 | Comment = 1 | Code = 2 let Classify (l:string) = let s = l.TrimStart([|' ';'\t'|]) match s with | "" | "{" | "}" -> Line.Whitespace | _ when s.StartsWith("#") -> Line.Whitespace | _ when s.StartsWith("//") -> Line.Comment | _ -> Line.Code let Loc (arr:list<_>) = let sums = Array.create 3 0 arr |> List.iter (fun line -> let i = Classify line |> int sums.[i] <- sums.[i] + 1) sums "Classify" as a separate entity might be useful in another context. A: A better site for this might be refactormycode - it's tailored exactly for these questions. A: Can't see much wrong with that other than the fact you will count a single brace with trailing spaces as code instead of whitespace.
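The classification rules above are small enough to transliterate. As a cross-check of the logic (blank lines, lone braces and #regions count as whitespace), here is the same counter sketched in Python — a direct port, keeping the original's behavior of trimming spaces and tabs from both ends:

```python
def classify(line):
    """Classify one C# source line the same way the F# innerloc does."""
    s = line.strip(" \t")            # mirror l.Trim([|' ';'\t'|])
    if s in ("", "{", "}") or s.startswith("#"):
        return "whitespace"          # blank lines, lone braces, #regions
    if s.startswith("//"):
        return "comment"
    return "code"

def loc(lines):
    """Return (whitespace, comment, code) counts, like the F# fold."""
    counts = {"whitespace": 0, "comment": 0, "code": 0}
    for line in lines:
        counts[classify(line)] += 1
    return counts["whitespace"], counts["comment"], counts["code"]
```

Like the original, this has no notion of /* ... */ block comments.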
{ "language": "en", "url": "https://stackoverflow.com/questions/174418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I implement java-like synchronization (monitors) using the Win32 API? Each Java object (and its class) has an associated monitor. In pthread terms a Java monitor is equivalent to the combination of a reentrant mutex and a condition variable. For locking, the Win32 API provides Mutex objects (which are reentrant but heavyweight) and Critical Sections (which are non-reentrant but lightweight). It also provides other synchronization constructs such as Semaphores and Events but has no explicit concept of a condition variable. If I were writing a JVM, how could I use these Win32 concepts to implement Java monitors? A: Windows has SignalObjectAndWait() which can be used very much like a wait on a condition variable in a monitor. You can use an Event (that is Reset) and a Mutex and then use PulseEvent() to do the equivalent of signalling the condition variable. A: I suggest you take a look at the OpenJDK source to see how the class ReentrantLock was implemented. (I haven't checked it myself, so I'm not sure of the answer.) The util.concurrent locks are implemented using native APIs.
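To make the target concrete, here is a minimal sketch of the pairing the question describes — a reentrant lock plus a condition variable — written with modern C++ primitives rather than raw Win32 calls (Win32 itself gained a native CONDITION_VARIABLE only in Vista). The class name and methods are illustrative, and wait() is deliberately elided: a real monitor's wait must atomically release the full recursion depth before blocking, which is exactly the hard part a Win32 implementation has to solve with Events and SignalObjectAndWait():

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Illustrative monitor skeleton: a Java monitor behaves like a reentrant
// mutex paired with a condition variable (as the question notes for pthreads).
class Monitor {
public:
    Monitor() : depth_(0) {}

    // Like entering a synchronized block; re-entry by the owner is legal.
    void enter() {
        lock_.lock();
        ++depth_;
    }

    // Like leaving a synchronized block.
    void exit() {
        --depth_;
        lock_.unlock();
    }

    // Like Object.notifyAll(); a no-op when nobody is waiting.
    void notifyAll() { cond_.notify_all(); }

    int depth() const { return depth_; }

private:
    std::recursive_mutex lock_;          // reentrant, like a Java monitor lock
    std::condition_variable_any cond_;   // condition variable usable with it
    int depth_;                          // recursion depth, for illustration
};
```

The reentrancy is what rules out a naive CRITICAL_SECTION-plus-Event port: on wait, the monitor must remember and restore how many times the owner had entered.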
{ "language": "en", "url": "https://stackoverflow.com/questions/174423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Log4Net "Could not find schema information" messages I decided to use log4net as a logger for a new webservice project. Everything is working fine, but I get a lot of messages like the one below, for every log4net tag I am using in my web.config: Could not find schema information for the element 'log4net'... Below are the relevant parts of my web.config: <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <log4net> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="C:\log.txt" /> <appendToFile value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="100KB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date [%thread] %-5level: %message%newline" /> </layout> </appender> <logger name="TIMServerLog"> <level value="DEBUG" /> <appender-ref ref="RollingFileAppender" /> </logger> </log4net> Solved: * Copy every log4net-specific tag to a separate xml file. Make sure to use .xml as the file extension. * Add the following line to AssemblyInfo.cs: [assembly: log4net.Config.XmlConfigurator(ConfigFile = "xmlFile.xml", Watch = true)] nemo added: Just a word of warning to anyone following the advice of the answers in this thread. There is a possible security risk in having the log4net configuration in an xml file off the root of the web service, as it will be accessible to anyone by default. Just be advised that if your configuration contains sensitive data, you may want to put it elsewhere. @wcm: I tried using a separate file. I added the following line to AssemblyInfo.cs [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)] and put everything dealing with log4net in that file, but I still get the same messages. A: You can bind in a schema to the log4net element.
There are a few floating around; most do not fully provide for the various options available. I created the following xsd to provide as much verification as possible: http://csharptest.net/downloads/schema/log4net.xsd You can bind it into the xml easily by modifying the log4net element: <log4net xsi:noNamespaceSchemaLocation="http://csharptest.net/downloads/schema/log4net.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> A: I believe you are seeing the message because Visual Studio doesn't know how to validate the log4net section of the config file. You should be able to fix this by copying the log4net XSD into C:\Program Files\Microsoft Visual Studio 8\XML\Schemas (or wherever your Visual Studio is installed). As an added bonus you should now get intellisense support for log4net. A: In Roger's answer, where he provided a schema, this worked very well for me except where a commenter mentioned: "This XSD is complaining about the use of custom appenders. It only allows for an appender from the default set (defined as an enum) instead of simply making this a string field." I modified the original schema, which had an xs:simpleType named log4netAppenderTypes, and removed the enumerations. I instead restricted it to a basic .NET typing pattern (I say basic because it just supports typename only, or typename, assembly -- however, someone can extend it). Simply replace the log4netAppenderTypes definition with the following in the XSD: <xs:simpleType name="log4netAppenderTypes"> <xs:restriction base="xs:string"> <xs:pattern value="[A-Za-z_]\w*(\.[A-Za-z_]\w*)+(\s*,\s*[A-Za-z_]\w*(\.[A-Za-z_]\w*)+)?"/> </xs:restriction> </xs:simpleType> I'm passing this back on to the original author if he wants to include it in his official version.
Until then you'd have to download and modify the xsd and reference it in a relative manner, for example: <log4net xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../../../Dependencies/log4net/log4net.xsd"> <!-- ... --> </log4net> A: Actually you don't need to stick to the .xml extension. You can specify any other extension in the ConfigFileExtension attribute: [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", ConfigFileExtension=".config", Watch = true)] A: @steve_mtl: Changing the file extensions from .config to .xml solved the problem. Thank you. @Wheelie: I couldn't try your suggestion, because I needed a solution which works with an unmodified Visual Studio installation. To sum it up, here is how to solve the problem: * Copy every log4net-specific tag to a separate xml file. Make sure to use .xml as the file extension. * Add the following line to AssemblyInfo.cs: [assembly: log4net.Config.XmlConfigurator(ConfigFile = "xmlFile.xml", Watch = true)] A: For VS2008 just add the log4net.xsd file to your project; VS looks in the project folder as well as the installation directory that Wheelie mentioned. Also, using a .config extension instead of .xml avoids the security issue, since IIS doesn't serve *.config files by default. A: I had a different take, and needed the following syntax: [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.xml", Watch = true)] which differs from xsl's last post, but made a difference for me. Check out this blog post, it helped me out. A: Just a word of warning to anyone following the advice of the answers in this thread. There is a possible security risk in having the log4net configuration in an xml file off the root of the web service, as it will be accessible to anyone by default. Just be advised that if your configuration contains sensitive data, you may want to put it elsewhere. A: Have you tried using a separate log4net.config file?
A: I got a test asp project to build by puting the xsd file in the visual studio schemas folder as described above (for me it is C:\Program Files\Microsoft Visual Studio 8\XML\Schemas) and then making my web.config look like this: <?xml version="1.0"?> <!-- Note: As an alternative to hand editing this file you can use the web admin tool to configure settings for your application. Use the Website->Asp.Net Configuration option in Visual Studio. A full list of settings and comments can be found in machine.config.comments usually located in \Windows\Microsoft.Net\Framework\v2.x\Config --> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/> </configSections> <appSettings> </appSettings> <connectionStrings> </connectionStrings> <system.web> <trace enabled="true" pageOutput="true" /> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. --> <compilation debug="true" /> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Windows" /> <customErrors mode="Off"/> <!-- <customErrors mode="Off"/> The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. 
<customErrors mode="On" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> </system.web> <log4net xsi:noNamespaceSchemaLocation="http://csharptest.net/downloads/schema/log4net.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <appender name="LogFileAppender" type="log4net.Appender.FileAppender"> <!-- Please make shure the ..\\Logs directory exists! --> <param name="File" value="Logs\\Log4Net.log"/> <!--<param name="AppendToFile" value="true"/>--> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c %m%n"/> </layout> </appender> <appender name="SmtpAppender" type="log4net.Appender.SmtpAppender"> <to value="" /> <from value="" /> <subject value="" /> <smtpHost value="" /> <bufferSize value="512" /> <lossy value="true" /> <evaluator type="log4net.Core.LevelEvaluator"> <threshold value="WARN"/> </evaluator> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%newline%date [%thread] %-5level %logger [%property] - %message%newline%newline%newline" /> </layout> </appender> <logger name="File"> <level value="ALL" /> <appender-ref ref="LogFileAppender" /> </logger> <logger name="EmailLog"> <level value="ALL" /> <appender-ref ref="SmtpAppender" /> </logger> </log4net> </configuration> A: Without modifying your Visual Studio installation, and to take into account proper versioning/etc. amongst the rest of your team, add the .xsd file to your solution (as a 'Solution Item'), or if you only want it for a particular project, just embed it there. 
A: I noticed it a bit late, but if you look into the examples log4net furnishes you can see them put all of the configuration data into an app.config, with one difference: the registration of the config section: <!-- Register a section handler for the log4net section --> <configSections> <section name="log4net" type="System.Configuration.IgnoreSectionHandler" /> </configSections> Could defining it as type "System.Configuration.IgnoreSectionHandler" be the reason Visual Studio does not show any warning/error messages on the log4net stuff? A: I followed Kit's answer https://stackoverflow.com/a/11780781/6139051 and it didn't work for AppenderType values like "log4net.Appender.TraceAppender, log4net". The log4net.dll assembly has the AssemblyTitle of "log4net", i.e. the assembly name does not have a dot inside, which was why the regex in Kit's answer didn't work. I had to add the question mark after the third parenthetical group in the regexp, and after that it worked flawlessly. The modified regex looks like the following: <xs:pattern value="[A-Za-z_]\w*(\.[A-Za-z_]\w*)+(\s*,\s*[A-Za-z_]\w*(\.[A-Za-z_]\w*)?+)?"/>
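For reference, the separate file from the accepted solution contains just the log4net element — no <configSections> registration is needed there, since log4net reads the file directly rather than through the .NET configuration system. A sketch using the appender from the question (the file name matches the xmlFile.xml referenced in AssemblyInfo.cs):

```xml
<?xml version="1.0" encoding="utf-8"?>
<log4net>
  <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="C:\log.txt" />
    <appendToFile value="true" />
    <rollingStyle value="Size" />
    <maxSizeRollBackups value="10" />
    <maximumFileSize value="100KB" />
    <staticLogFileName value="true" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level: %message%newline" />
    </layout>
  </appender>
  <logger name="TIMServerLog">
    <level value="DEBUG" />
    <appender-ref ref="RollingFileAppender" />
  </logger>
</log4net>
```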
{ "language": "en", "url": "https://stackoverflow.com/questions/174430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Query showing list of associations in many-to-many relationship I have two tables, Book and Tag, and books are tagged using the association table BookTag. I want to create a report that contains a list of books, and for each book a list of the book's tags. Tag IDs will suffice, tag names are not necessary. Example: Book table: Book ID | Book Name 28 | Dracula BookTag table: Book ID | Tag ID 28 | 101 28 | 102 In my report, I'd like to show that book #28 has the tags 101 and 102: Book ID | Book Name | Tags 28 | Dracula | 101, 102 Is there a way to do this in-line, without having to resort to functions or stored procedures? I am using SQL Server 2005. Please note that the same question already has been asked in Combine multiple results in a subquery into a single comma-separated value, but the solution involves creating a function. I am asking if there is a way to solve this without having to create a function or a stored procedure. A: You can almost do it. The only problem I haven't resolved is the comma delimiter. Here is a query on a similar structure that separates the tags using a space. SELECT em.Code, (SELECT et.Name + ' ' AS 'data()' FROM tblEmployeeTag et JOIN tblEmployeeTagAssignment eta ON et.Id = eta.EmployeeTag_Id AND eta.Employee_Id = em.id FOR XML PATH('') ) AS Tags FROM tblEmployee em Edit: Here is the complete version using your tables and using a comma delimiter: SELECT bk.Id AS BookId, bk.Name AS BookName, REPLACE((SELECT LTRIM(STR(bt.TagId)) + ', ' AS 'data()' FROM BookTag bt WHERE bt.BookId = bk.Id FOR XML PATH('') ) + 'x', ', x','') AS Tags FROM Book bk I suppose for future reference I should explain a bit about what is going on. The 'data()' column name is a special value that is related to the FOR XML PATH statement. It causes the XML document to be rendered as if you did an .InnerText on the root node of the resulting XML. The REPLACE statement is a trick to remove the trailing comma. 
By appending a unique character (I randomly chose 'x') to the end of the tag list I can search for comma-space-character and replace it with an empty string. That allows me to chop off just the last comma. This assumes that you are never going to have that sequence of characters in your tags. A: Unless you know what the tag ids/names are and can hard-code them into your query, I'm afraid the answer is no. A: If you knew the maximum number of tags for a book, you could use a pivot to get them onto the same row and then use COALESCE, but in general, I don't believe there is a way. A: The cleanest solution is probably to use a custom C# CLR aggregate function. We have found that this works really well. You can find instructions for creating this at http://dotnet-enthusiast.blogspot.com/2007/05/user-defined-aggregate-function-in-sql.html
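The FOR XML PATH contortion is needed because SQL Server 2005 has no built-in string-aggregation function (STRING_AGG only arrived in SQL Server 2017). Engines that do have one make this report a one-liner; as a point of comparison, here is the same query sketched against SQLite's GROUP_CONCAT, with the table and column names taken from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Book    (BookId INTEGER, Name  TEXT);
    CREATE TABLE BookTag (BookId INTEGER, TagId INTEGER);
    INSERT INTO Book    VALUES (28, 'Dracula');
    INSERT INTO BookTag VALUES (28, 101), (28, 102);
""")

# GROUP_CONCAT collapses each book's tag ids into one comma-separated value,
# replacing both the FOR XML PATH subquery and the trailing-comma REPLACE trick.
rows = conn.execute("""
    SELECT b.BookId, b.Name, GROUP_CONCAT(t.TagId, ', ') AS Tags
    FROM Book b
    JOIN BookTag t ON t.BookId = b.BookId
    GROUP BY b.BookId, b.Name
""").fetchall()

print(rows)  # e.g. [(28, 'Dracula', '101, 102')]
```

Note that GROUP_CONCAT does not guarantee the order of the concatenated values.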
{ "language": "en", "url": "https://stackoverflow.com/questions/174438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Complex profiles in maven I've been looking at profiles in maven for selecting different sets of dependencies. This is fine when you want to build say a debug build differently from a release build. My problem is that I want to do a fair bit more than this. For my application (Mobile Java app where J2ME is just one target among many) there may be a large number of possible combinations of variations on a build. Using some made-up command line syntax to illustrate what I'd like to see, I'd imagine typing in something like mvn -Pmidp,debug,local-resources What Maven does in this case is to build three different builds. What I want to do is use those three (Or more, or less) switches to affect just one build. So I'd get a MIDP-targetting debug build with 'local resources' (Whatever that might mean to me - I'm sure you can imagine better examples). The only way I can think of doing this would be to have lots and lots of profiles which becomes quite problematic. In my example, I'd have -Pmidp-debug-localresources -Pmidp-release-localresources -Pmidp-debug-remoteresources -Pmidp-release-remoteresources ... Each with its own frustratingly similar set of dependencies and build tag. I'm not sure I've explained my problem well enough, but I can re-write the question to clarify it if comments are left. UPDATE: The question isn't actually valid since I'd made a false assumption about the way maven works. -Pmidp,debug,local-resources does not do 3 builds. It in fact enables those 3 profiles on one build, which was ironically what I was looking for in the first place. A: The Maven way is to create a lot of artifacts with less complexity. I'd say your best bet is to abstract the common parts of each build into a separate artifact, then create a project for each build that defines the build specific parts. This will leave you with a lot of projects, but each will be much simpler.
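Concretely, the behaviour described in the UPDATE comes from defining each switch as an independent profile; -Pmidp,debug,local-resources then activates all three in a single build, and each profile layers its own dependencies or properties on top. A hypothetical pom.xml fragment — the profile ids and properties are illustrative, not taken from the original project:

```xml
<profiles>
  <profile>
    <id>midp</id>
    <!-- target-platform-specific dependencies go here -->
    <dependencies>
      <!-- e.g. the J2ME/MIDP stubs used for this target -->
    </dependencies>
  </profile>
  <profile>
    <id>debug</id>
    <properties>
      <!-- picked up by the compiler plugin -->
      <maven.compiler.debug>true</maven.compiler.debug>
    </properties>
  </profile>
  <profile>
    <id>local-resources</id>
    <properties>
      <!-- switches which resource directory gets bundled -->
      <resource.dir>${basedir}/src/local-resources</resource.dir>
    </properties>
  </profile>
</profiles>
```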
{ "language": "en", "url": "https://stackoverflow.com/questions/174439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to automate converting Excel xls files to Excel xml format? I have about 200 Excel files that are in standard Excel 2003 format. I need them all to be saved as Excel xml - basically the same as opening each file and choosing Save As... and then choosing Save as type: XML Spreadsheet Would you know any simple way of automating that task? A: Here is a routine that will convert all files in a single directory that have a .xls extension. It takes a straightforward approach. Any VBA code in a workbook is stripped out; the workbook is not saved with a .xlsm extension. Incompatibility warnings are not displayed; instead, the changes are automatically accepted. Sub Convert_xls_Files() Dim strFile As String Dim strPath As String With Application .EnableEvents = False .DisplayAlerts = False .ScreenUpdating = False End With 'Turn off events, alerts & screen updating strPath = "C:\temp\excel\" strFile = Dir(strPath & "*.xls") 'Change the path as required Do While strFile <> "" Workbooks.Open (strPath & strFile) strFile = Mid(strFile, 1, Len(strFile) - 4) & ".xlsx" ActiveWorkbook.SaveAs Filename:=strPath & strFile, FileFormat:=xlOpenXMLWorkbook ActiveWorkbook.Close True strFile = Dir Loop 'Opens the Workbook, set the file name, save in new format and close workbook With Application .EnableEvents = True .DisplayAlerts = True .ScreenUpdating = True End With 'Turn on events, alerts & screen updating End Sub
A: Open them all up, and then press ALT+F11 to get to the macro editor and enter something like: Sub SaveAllAsXml() Dim wbk As Workbook For Each wbk In Application.Workbooks wbk.SaveAs FileFormat:=XlFileFormat.xlXMLSpreadsheet Next End Sub And then press F5 to run it. May need some tweaking as I haven't tested it. A: Sounds like a job for my favorite-most-underrated language of all time: VBScript!! Put this in a text file, and make the extension ".vbs": set xlapp = CreateObject("Excel.Application") set fso = CreateObject("scripting.filesystemobject") set myfolder = fso.GetFolder("YOURFOLDERPATHHERE") set myfiles = myfolder.Files for each f in myfiles set mybook = xlapp.Workbooks.Open(f.Path) mybook.SaveAs f.Name & ".xml", 47 mybook.Close next I haven't tested this, but it should work
{ "language": "en", "url": "https://stackoverflow.com/questions/174446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: To STL or !STL, that is the question Unquestionably, I would choose to use the STL for most C++ programming projects. The question was presented to me recently however, "Are there any cases where you wouldn't use the STL?"... The more I thought about it, the more I realized that perhaps there SHOULD be cases where I choose not to use the STL... For example, a really large, long-term project whose codebase is expected to last years... Perhaps a custom container solution that precisely fits the project's needs is worth the initial overhead? What do you think, are there any cases where you would choose NOT to STL? A: I think it's a typical build vs buy scenario. However, I think that in this case I would almost always 'buy', and use STL - or a better solution (something from Boost perhaps), before rolling my own. You should be focusing most of your effort on what your application does, not the building blocks it uses. A: I don't really think so. In making my own containers, I would even try to make those compatible with the STL because the power of the generic algorithms is too great to give up. The STL should at least be nominally used, even if all you do is write your own container and specialize every algorithm for it. That way, every sort can be invoked as sort(c.begin(), c.end()); you can specialize sort to have the same effect, even if it works differently. A: Coding for Symbian. STLPort does support Symbian 9, so the case against using STL is weaker than it used to be ("it's not available" is a pretty convincing case), but STL is still alien to all the Symbian libraries, so it may be more trouble than just doing things the Symbian way. Of course it might be argued on these grounds that coding for Symbian is not "a C++ programming project". A: The main reasons not to use STL are that: * Your C++ implementation is old and has horrible template support. * You can't use dynamic memory allocation. Both are very uncommon requirements in practice.
For a long-term project, rolling your own containers that overlap in functionality with the STL is just going to increase maintenance and development costs. A: Most of the projects I have worked on had a codebase way older than any really usable version of STL - therefore we chose not to introduce it now. A: Introduction: The STL is a great library, and useful in many cases, but it definitely doesn't solve every situation. Answering STL or !STL is like answering "Does the STL meet your need or does it not?" Pros of STL * In most situations, the STL has a container that fits a given solution. * It is well documented. * It is well known (programmers usually already know it, so getting into a project takes less time). * It is tested and stable. * It is cross-platform. * It is included with every compiler (it does not add a third-party library dependency). * The STL is already implemented and ready. * The STL is shiny, ... Cons of STL It does not matter whether you need a simple graph, a red-black tree, or a very complex database of elements with an AI managing concurrent access through a quantum computer. The fact is, the STL does not, and never will, solve everything. The following aspects are only a few examples, but they are basically consequences of this fact: the STL is a real library, with limits. * Exceptions: the STL relies on exceptions, so if for any reason you cannot accept exceptions (e.g. safety-critical code), you cannot use the STL. True, exceptions may be disabled, but that does not change the fact that the STL's design relies on them, and it will eventually lead to a crash. * Need for a specific data structure not (yet) included: graph, tree, etc. * Special complexity constraints: you could discover that a general-purpose STL container is not optimal for your bottleneck code. * Concurrency considerations: either you need concurrency that the STL does not provide (e.g. a reader-writer lock cannot easily be used because of the bidirectional [] operator).
Or you could design a container that takes advantage of multi-threading for much faster access/searching/insertion/whatever. * The STL needs to fit your needs, but the reverse is also true: you need to fulfill the needs of the STL. Don't try to use std::vector on an embedded micro-controller with 1K of unmanaged RAM. * Compatibility with other libraries: it may be that, for historical reasons, the libraries you use do not accept the STL (e.g. QtWidgets makes intensive use of its own QList). Converting containers in both directions might not be the best solution. Implementing your own container After reading that, you could think: "Well, I am sure I can do something better for my specific case than the STL does." WAIT! Implementing your container correctly becomes a huge task very quickly: it is not only about implementing something that works; you might have to: * Document it deeply, including limitations, algorithmic complexity, etc. * Expect bugs, and fix them. * Handle incoming additional needs: you know, this missing function, this conversion between types, etc. * After a while, you could want to refactor, and change all the dependencies (too late?) * ... Code as deep in a program as a container is definitely something that takes time to implement well, and should be thought through carefully. Using a 3rd-party library Not using the STL does not necessarily mean going custom. There are plenty of good libraries on the net, some even with permissive open-source licenses. Whether to add another 3rd-party library is a separate topic, but it is worth considering. A: Projects with strict memory requirements such as for embedded systems may not be suited for the STL, as it can be difficult to control and manage what's taken from and returned to the heap.
As Evan mentioned, writing proper allocators can help with this, but if you're counting every byte used or concerned with memory fragmentation, it may be wiser to hand-roll a solution that's tailored for your specific problem, as the STL has been optimized for the most general usage. You may also choose not to use the STL for a particular case because more applicable containers exist that are not in the current standard, such as boost::array or boost::unordered_map. A: One situation where this might occur is when you are already using an external library that provides the abilities you need from the STL. For instance, my company develops an application in space-limited areas, and already uses Qt for the windowing toolkit. Since Qt provides STL-like container classes, we use those instead of adding the STL to our project. A: There are just so many advantages to using the STL. For a long-term project the benefits outweigh the costs. * New programmers can understand the containers from day one (assuming they already know the STL, like any competent C++ programmer would), giving them more time to learn the other code in the project. * Fixing bugs in containers sucks and wastes time that could be spent enhancing the business logic. * Most likely you're not going to write them as well as the STL is implemented anyway. That being said, the STL containers don't deal with concurrency at all. So in an environment where you need concurrency I would use other containers, like the Intel TBB concurrent containers. These are far more advanced, using fine-grained locking so that different threads can modify the container concurrently and you don't have to serialize access to it. A: I have found problems in using the STL in multi-threaded code. Even if you do not share STL objects across threads, many implementations use non-thread-safe constructs (like ++ for reference counting instead of an interlocked-increment style, or non-thread-safe allocators).
In each of these cases, I still opted to use the STL and fix the problems (there are enough hooks to get what you want). Even if you opt to make your own collections, it would be a good idea to follow STL style for iterators so that you can use algorithms and other STL functions that operate only on iterators. A: The main issue I've seen is having to integrate with legacy code that relies on non-throwing operator new. A: Usually, I find that the best bet is to use the STL with custom allocators instead of replacing STL containers with hand-rolled ones. The nice thing about the STL is you pay only for what you use. A: I started programming C back in about 1984 or so and have never used the STL. Over the years I have rolled my own function libraries, and they evolved and grew back when the STL was not yet stable and/or lacked cross-platform support. My common library has grown to include code by others (mostly things like libjpeg, libpng, ffmpeg, mysql), and I would rather keep the amount of external code in it to a minimum. I'm sure the STL is great now, but frankly I'm happy with the items in my toolbox and see no need at this point to load it up with more tools. But I certainly see the great leaps and bounds that new programmers can make by using the STL without having to code all that from scratch. A: Standard C++ perversely allows implementations of some iterator operations to throw exceptions. That possibility can be problematic in some cases. You might therefore implement your own simple container that is guaranteed not to throw exceptions for critical operations. A: Since almost everybody who answered before me seemed so keen on STL containers, I thought it would be useful to compile a list of good reasons not to use them, from actual problems I have encountered myself. These can be reasonably grouped into three broad categories: 1) Poor efficiency STL containers typically run slower AND use too much memory for the job.
The reason for this can be partly blamed on too generic implementations of the underlying data structures and algorithms, with additional performance costs deriving from all the extra design constraints required by the tons of API requirements that are irrelevant to the task at hand. Reckless memory use and poor performance go hand in hand, because memory is addressed on the cache by the CPU in lines of 64 bytes, and if you don't use locality of reference to your advantage, you waste cycles AND precious KB of cache memory. For instance, std::list requires 24 bytes per element rather than the optimal 4. https://lemire.me/blog/2016/09/15/the-memory-usage-of-stl-containers-can-be-surprising/ This is because it is implemented by packing two 64-bit pointers, 1 int and 4 bytes of memory padding, rather than doing anything as basic as allocating small amounts of contiguous memory and separately tracking which elements are in use, or using the pointer xor technique to store both iteration directions in one pointer. https://en.wikipedia.org/wiki/XOR_linked_list Depending on your program needs, these inefficiencies can and do add up to large performance hits. 2) Limitations / creeping standards Of course, sometimes the problem is that you need some perfectly common function or slightly different container class that is just not implemented in STL, such as decrease_min() in a priority queue. A common practice is then to wrap the container in a class and implement the missing functionality yourself with extra state external to the container and/or multiple calls to container methods, which may emulate the desired behavior, but with much lower performance and higher O() complexity than a real implementation of the data structure, since there's no way of extending the inner workings of the container. 
Alternatively you end up mashing up two or more different containers together because you simultaneously need two or more things that are fundamentally incompatible in any one given STL container, such as a minmax heap, a trie (since you need to be able to use agnostic pointers), etc. These solutions may be ugly and add on top of the other inefficiencies, and yet the way the language is evolving the tendency is to only add new STL methods to match C++'s feature creep and ignore any of the missing core functionality. 3) Concurrency/parallelism STL containers are not thread-safe, much less concurrent. In the present age of 16-thread consumer CPUs, it's surprising the go-to default container implementation for a modern language still requires you to write mutexes around every memory access like it's 1996. This is, for any non-trivial parallel program, kind of a big deal, because having memory barriers forces threads to serialize their execution, and if these happen with the same frequency as an STL call, you can kiss your parallel performance goodbye. In short, STL is good as long as you don't care about performance, memory usage, functionality or parallelism. STL is of course still perfectly fine for the many times you are not bound by any of these concerns and other priorities like readability, portability, maintainability or coding speed take precedence.
{ "language": "en", "url": "https://stackoverflow.com/questions/174449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Exceptions for flow of control There is an interesting post over here about this, in relation to cross-application flow of control. Well, recently, I've come across an interesting problem. Generating the nth value in a potentially (practically) endless recursive sequence. This particular algorithm WILL be at least 10-15 stack references deep at the point that it succeeds. My first thought was to throw a SuccessException that looked something like this (C#): class SuccessException : Exception { public string Value { get; set; } public SuccessException(string value) : base() { Value = value; } } Then do something like this: try { Walk_r(tree); } catch (SuccessException ex) { result = ex.Value; } Then my thoughts wandered back here, where I've heard over and over to never use Exceptions for flow control. Is there ever an excuse? And how would you structure something like this, if you were to implement it? A: In this case I would be looking at your Walk_r method: it should return a value. Throwing an exception to indicate success is NOT a common practice, and at minimum is going to be VERY confusing to anyone that sees the code. Not to mention the overhead associated with exceptions. A: walk_r should simply return the value when it is hit. It's a pretty standard recursion example. The only potential problem I see is that you said it is potentially endless, which will have to be compensated for in the walk_r code by keeping count of the recursion depth and stopping at some maximum value. The exception actually makes the coding very strange since the method call now throws an exception to return the value, instead of simply returning 'normally'. try { Walk_r(tree); } catch (SuccessException ex) { result = ex.Value; } becomes result = Walk_r(tree); A: I'm going to play devil's advocate here and say stick with the exception to indicate success. 
It might be expensive to throw/catch but that may be insignificant compared with the cost of the search itself and possibly less confusing than an early exit from the method. A: It's not a very good idea to throw exceptions as a part of an algorithm, especially in .net. In some languages/platforms, exceptions are pretty efficient when thrown, and they usually are, when an iterable gets exhausted for instance. A: Why not just return the resulting value? If it returns anything at all, assume it is successful. If it fails to return a value, then it means the loop failed. If you must bring back from a failure, then I'd recommend you throw an exception. A: The issue with using exceptions is that they (in the grand scheme of things) are very inefficient and slow. It would surely be as easy to have an if condition within the recursive function to just return as and when needed. To be honest, with the amount of memory on modern PCs it's unlikely (not impossible though) that you'll get a stack overflow with only a small number of recursive calls (<100). If the stack is a real issue, then it might become necessary to be 'creative' and implement a 'depth limited search strategy', allow the function to return from the recursion and restart the search from the last (deepest) node. To sum up: Exceptions should only be used in exceptional circumstances; the success of a function call I don't believe qualifies as such. A: Using exceptions in normal program flow in my book is one of the worst practices ever. Consider the poor sap who is hunting for swallowed exceptions and is running a debugger set to stop whenever an exception happens. That dude is now getting mad.... and he has an axe. :P
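To make the result = Walk_r(tree); suggestion above concrete, here is a sketch of a walk that signals success through its return value instead of an exception (written in Java for illustration, since it reads almost identically to the C# in the question; the Node type and all names here are invented):

```java
import java.util.function.Predicate;

// Hypothetical tree node; a null child means "no subtree".
final class Node {
    final String value;
    final Node left, right;
    Node(String value, Node left, Node right) {
        this.value = value; this.left = left; this.right = right;
    }
}

final class Walker {
    // Returns the first matching value, or null when the subtree has none:
    // success is simply a non-null return, no SuccessException required.
    static String walk(Node n, Predicate<String> match) {
        if (n == null) return null;
        if (match.test(n.value)) return n.value;
        String found = walk(n.left, match);
        return (found != null) ? found : walk(n.right, match);
    }
}
```

The call site then reads String result = Walker.walk(tree, ...); and an unsuccessful walk is just a null result rather than a caught exception.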
{ "language": "en", "url": "https://stackoverflow.com/questions/174458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are some good 3rd party controls for Windows Mobile? We're developing a windows mobile 6.1 application and would like to make the user interface look better than the standard battleship grey. Has anyone had any experience with 3rd party controls that can make a windows mobile app look like a WPF/Silverlight/Iphone - like interface? A: Please check : http://www.resco.net/developer/mobileformstoolkit/overview.aspx A: There might be some useful-ish replies in this question.
{ "language": "en", "url": "https://stackoverflow.com/questions/174459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Deleting multiline text from multiple files I have a bunch of java files from which I want to remove the javadoc lines with the license [am changing it on my code]. The pattern I am looking for is ^\* \* ProjectName .* USA\.$ but matched across lines Is there a way sed [or a commonly used editor in Windows/Linux] can do a search/replace for a multiline pattern? A: Here's the appropriate reference point in my favorite sed tutorial. A: Probably someone is still looking for such a solution from time to time. Here is one. Use awk to find the lines to be removed. Then use diff to remove the lines and let sed clean up. awk "/^\* \* ProjectName /,/ USA\.$/" input.txt \ | diff - input.txt \ | sed -n -e"s/^> //p" \ >output.txt A warning note: if the first pattern exists while the second does not, you will lose all text below the first pattern - so check that first. A: Yes. Are you using sed, awk, perl, or something else to solve this problem? Most regular expression tools allow you to specify multi-line patterns. Just be careful with regular expressions that are too greedy, or they'll match the code between comments if it exists. Here's an example: /\*(?:.|[\r\n])*?\*/ perl -0777ne 'print m!/\*(?:.|[\r\n])*?\*/!g;' <file> Prints out all the comments run together. The (?: notation must be used for non-capturing parentheses. / does not have to be escaped because ! delimits the expression. -0777 is used to enable slurp mode and -n enables automatic reading. (From: http://ostermiller.org/findcomment.html )
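For the blanket delete-the-block case in the question, sed's two-address range form (/start/,/end/d) already matches across lines, so no awk/diff pipeline is needed. A sketch (the sample file and patterns are adapted from the question — adjust the anchors to your exact comment format, and with GNU sed add -i to edit the files in place):

```shell
# Build a small sample file containing a license block to strip.
cat > Sample.java <<'EOF'
/*
 * ProjectName Foo
 * Licensed under the old terms.
 * Boston, MA 02110 USA.
 */
public class Sample {}
EOF

# /start/,/end/d deletes from each line matching the first pattern
# through the next line matching the second pattern, inclusive.
sed '/\* ProjectName /,/ USA\.$/d' Sample.java
```

Here the range deliberately covers only the lines between the two patterns given in the question; widen the patterns if the surrounding /* and */ lines should be removed as well.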
{ "language": "en", "url": "https://stackoverflow.com/questions/174472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the equivalent to JUnit in C#? I am coming from Java and am currently working on a C# project. What is the recommended way to go about a) unit testing existing C# code and b) accomplishing TDD for C# development? Also is there an equivalent to EMMA / EclEmma (free yet powerful code coverage tool) for Visual Studio and C# code? A: NUnit is patterned after JUnit, but if you're using Visual Studio 2008 then consider the built-in unit testing framework. A: Unit test framework: NUnit Unit test runner: Various, but personally I like the one in ReSharper. (ReSharper costs money, but is easily worth it for the various productivity improvements.) Coverage: NCover (I think this used to be free, but it now costs money. Hmm.) A: I would highly recommend Gallio (formerly mbUnit) for unit testing, and (unfortunately not free) NCover for code coverage. A: Regarding your question about unit test frameworks: NUnit 1.0 was a direct port of JUnit. NUnit 2.0 moved away from JUnit syntax in order to take advantage of the .NET platform. xUnit.net is a newer unit test framework (from Jim Newkirk - one of the NUnit 2.0 developers - and Brad Wilson) that states as a goal exposing "advances in other unit test library implementations that have not really surfaced in .NET," which I read as "keeping up with JUnit." A: NUnit would be it. A: NUnit, but NCover is only part of the answer as it isn't free. I've asked elsewhere about that. A: VS2008 Professional has the Team System unit testing functionality baked in. A: NUnit for sure. A: 1 NUnit 2 NCover or 3 PartCover (I never used it) A: I'd install: * *NUnit for your unit testing framework http://www.nunit.org/index.php *TestDriven.NET for running your tests http://www.testdriven.net/ *Rhino Mocks as your mocking framework http://ayende.com/projects/rhino-mocks.aspx As an aside I find it odd that the NUnit guys seem to be using php to host their homepage...
{ "language": "en", "url": "https://stackoverflow.com/questions/174498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: String to Int in java - Likely bad data, need to avoid exceptions Seeing as Java doesn't have nullable types, nor does it have a TryParse(), how do you handle input validation without throwing exceptions? The usual way: String userdata = /*value from gui*/ int val; try { val = Integer.parseInt(userdata); } catch (NumberFormatException nfe) { // bad data - set to sentinel val = Integer.MIN_VALUE; } I could use a regex to check if it's parseable, but that seems like a lot of overhead as well. What's the best practice for handling this situation? EDIT: Rationale: There's been a lot of talk on SO about exception handling, and the general attitude is that exceptions should be used for unexpected scenarios only. However, I think bad user input is EXPECTED, not rare. Yes, it really is an academic point. Further Edits: Some of the answers demonstrate exactly what is wrong with SO. You ignore the question being asked, and answer another question that has nothing to do with it. The question isn't asking about transition between layers. The question isn't asking what to return if the number is un-parseable. For all you know, val = Integer.MIN_VALUE; is exactly the right option for the application that this completely context-free code snippet was taken from. A: I'm sure it is bad form, but I have a set of static methods on a Utilities class that do things like Utilities.tryParseInt(String value) which returns 0 if the String is unparseable and Utilities.tryParseInt(String value, int defaultValue) which allows you to specify a value to use if parseInt() throws an exception. I believe there are times when returning a known value on bad input is perfectly acceptable. A very contrived example: you ask the user for a date in the format YYYYMMDD and they give you bad input. It may be perfectly acceptable to do something like Utilities.tryParseInt(date, 19000101) or Utilities.tryParseInt(date, 29991231); depending on the program requirements. 
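The Utilities.tryParseInt helpers described in the first answer are only a few lines to write — a sketch (the Utilities class and method names are the answerer's own convention, not part of the JDK):

```java
final class Utilities {
    // Returns 0 if the String is unparseable, per the description above.
    static int tryParseInt(String value) {
        return tryParseInt(value, 0);
    }

    // Returns defaultValue whenever parseInt() would have thrown.
    static int tryParseInt(String value, int defaultValue) {
        if (value == null) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(value.trim());
        } catch (NumberFormatException nfe) {
            return defaultValue;
        }
    }
}
```

With this, the YYYYMMDD example becomes Utilities.tryParseInt(date, 19000101) and bad input silently falls back to the chosen sentinel.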
A: I'm going to restate the point that stinkyminky was making towards the bottom of the post: A generally well accepted approach validating user input (or input from config files, etc...) is to use validation prior to actually processing the data. In most cases, this is a good design move, even though it can result in multiple calls to parsing algorithms. Once you know that you have properly validated the user input, then it is safe to parse it and ignore, log or convert to RuntimeException the NumberFormatException. Note that this approach requires you to consider your model in two pieces: the business model (Where we actually care about having values in int or float format) and the user interface model (where we really want to allow the user to put in whatever they want). In order for the data to migrate from the user interface model to the business model, it must pass through a validation step (this can occur on a field by field basis, but most scenarios call for validation on the entire object that is being configured). If validation fails, then the user is presented with feedback informing them of what they've done wrong and given a chance to fix it. Binding libraries like JGoodies Binding and JSR 295 make this sort of thing a lot easier to implement than it might sound - and many web frameworks provide constructs that separate user input from the actual business model, only populating business objects after validation is complete. In terms of validation of configuration files (the other use case presented in some of the comments), it's one thing to specify a default if a particular value isn't specified at all - but if the data is formatted wrong (someone types an 'oh' instead of a 'zero' - or they copied from MS Word and all the back-ticks got a funky unicode character), then some sort of system feedback is needed (even if it's just failing the app by throwing a runtime exception). 
A: I asked if there were open source utility libraries that had methods to do this parsing for you and the answer is yes! From Apache Commons Lang you can use NumberUtils.toInt: // returns defaultValue if the string cannot be parsed. int i = org.apache.commons.lang.math.NumberUtils.toInt(s, defaultValue); From Google Guava you can use Ints.tryParse: // returns null if the string cannot be parsed // Will throw a NullPointerException if the string is null Integer i = com.google.common.primitives.Ints.tryParse(s); There is no need to write your own methods to parse numbers without throwing exceptions. A: Here's how I do it: public Integer parseInt(String data) { Integer val = null; try { val = Integer.parseInt(data); } catch (NumberFormatException nfe) { } return val; } Then the null signals invalid data. If you want a default value, you could change it to: public Integer parseInt(String data, int defaultValue) { Integer val = defaultValue; try { val = Integer.parseInt(data); } catch (NumberFormatException nfe) { } return val; } A: For user supplied data, Integer.parseInt is usually the wrong method because it doesn't support internationalisation. The java.text package is your (verbose) friend. NumberFormat format = NumberFormat.getIntegerInstance(locale); format.setParseIntegerOnly(true); format.setMaximumIntegerDigits(9); ParsePosition pos = new ParsePosition(0); Number num = format.parse(str, pos); if (num == null || pos.getIndex() != str.length()) { // ... handle unparseable input or extraneous characters after the digits ... } else { int val = num.intValue(); // ... use val ... } (Note that the parse(String, ParsePosition) overload doesn't throw; it reports failure by returning null.) A: That's pretty much it, although returning MIN_VALUE is kind of questionable, unless you're sure it's the right thing to use for what you're essentially using as an error code. At the very least I'd document the error code behavior, though. Might also be useful (depending on the application) to log the bad input so you can trace. 
A: What's the problem with your approach? I don't think doing it that way will hurt your application's performance at all. That's the correct way to do it. Don't optimize prematurely. A: I think the best practice is the code you show. I wouldn't go for the regex alternative because of the overhead. A: Try org.apache.commons.lang.math.NumberUtils.createInteger(String s). That helped me a lot. There are similar methods there for doubles, longs etc. A: You could use a Integer, which can be set to null if you have a bad value. If you are using java 1.6, it will provide auto boxing/unboxing for you. A: Cleaner semantics (Java 8 OptionalInt) For Java 8+, I would probably use RegEx to pre-filter (to avoid the exception as you noted) and then wrap the result in a primitive optional (to deal with the "default" problem): public static OptionalInt toInt(final String input) { return input.matches("[+-]?\\d+") ? OptionalInt.of(Integer.parseInt(input)) : OptionalInt.empty(); } If you have many String inputs, you might consider returning an IntStream instead of OptionalInt so that you can flatMap(). References * *RegEx based on parseInt documentation A: Put some if statements in front of it. if (null != userdata ) A: The exception mechanism is valuable, as it is the only way to get a status indicator in combination with a response value. Furthermore, the status indicator is standardized. If there is an error you get an exception. That way you don't have to think of an error indicator yourself. The controversy is not so much with exceptions, but with Checked Exceptions (e.g. the ones you have to catch or declare). Personally I feel you picked one of the examples where exceptions are really valuable. It is a common problem the user enters the wrong value, and typically you will need to get back to the user for the correct value. You normally don't revert to the default value if you ask the user; that gives the user the impression his input matters. 
If you do not want to deal with the exception, just wrap it in a RuntimeException (or derived class) and it will allow you to ignore the exception in your code (and kill your application when it occurs; that's fine too sometimes). Some examples on how I would handle NumberFormat exceptions: In web app configuration data: int loadCertainProperty(String propVal) { try { int val = Integer.parseInt(propVal); return val; } catch (NumberFormatException nfe) { // RuntimeException need not be declared throw new RuntimeException("Property certainProperty in your configuration is expected to be " + " an integer, but was '" + propVal + "'. Please correct your " + "configuration and start again"); // After starting an enterprise application the sysadmin should always check availability // and can now correct the property value } } In a GUI: public int askValue() { // TODO add opt-out button; see Swing docs for standard dialog handling while (true) { try { String input = dialog("Please enter integer value for FOO"); return Integer.parseInt(input); } catch (NumberFormatException nfe) { // Ignoring this; I don't care how many typos the customer makes } } } In a web form: return the form to the user with a useful error message and a chance to correct. Most frameworks offer a standardized way of validation. A: Using Integer.MIN_VALUE as the NumberFormatException sentinel is a bad idea. You can add a proposal to Project Coin to add this method to Integer @Nullable public static Integer parseInteger (String src)... it will return null for bad input Then put a link to your proposal here and we all will vote for it! PS: Look at this http://msdn.microsoft.com/en-us/library/bb397679.aspx this is how ugly and bloated it could be A: The above code is bad because it is equivalent to the following. // this is bad int val = Integer.MIN_VALUE; try { val = Integer.parseInt(userdata); } catch (NumberFormatException ignoreException) { } The exception is ignored completely. 
Also, the magic token is bad because a user can pass in -2147483648 (Integer.MIN_VALUE). The generic parse-able question is not beneficial. Rather, it should be relevant to the context. Your application has a specific requirement. You can define your method as private boolean isUserValueAcceptable(String userData) { return ( isNumber(userData) && isInteger(userData) && isBetween(userData, Integer.MIN_VALUE, Integer.MAX_VALUE ) ); } where you can document the requirement and you can create well defined and testable rules. A: If you can avoid exceptions by testing beforehand like you said (isParsable()) it might be better--but not all libraries were designed with that in mind. I used your trick and it sucks because stack traces on my embedded system are printed regardless of whether you catch them or not :(
{ "language": "en", "url": "https://stackoverflow.com/questions/174502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How do you copy a MS SQL 2000 database programmatically using C#? I need to copy several tables from one DB to another in SQL Server 2000, using C# (VS 2005). The call needs to be parameterized - I need to be able to pass in the name of the database to which I am going to be copying these tables. I could use DTS with parameters, but I can't find any sample code that does this from C#. Alternatively, I could just use drop table TableName select * into TableName from SourceDB..TableName and then reconstruct the indexes etc - but that is really kludgy. Any other ideas? Thanks! A: For SQL Server 7.0 and 2000, we have SQLDMO for this. For SQL Server 2005 there is SMO. This allows you to do pretty much everything related to administering the database, scripting objects, enumerating databases, and much more. This is better, IMO, than trying a "roll your own" approach. SQL 2000: Developing SQL-DMO Applications Transfer Object SQL 2005: Here is the SMO main page: Microsoft SQL Server Management Objects (SMO) Here is the Transfer functionality: Transferring Data How to: Transfer Schema and Data from One Database to Another in Visual Basic .NET A: If the destination table is being dropped every time then why not do SELECT INTO? Doesn't seem like a kludge at all. If it works just fine and ticks all the requirements boxes why create a day's worth of work growing code to do exactly the same thing? Let SQL do all the heavy lifting for you. A: You could put the scripts (copy db) found here http://www.codeproject.com/KB/database/CreateDatabaseScript.aspx Into an application. Just replace the destination. To actually move the entire database, follow http://support.microsoft.com/kb/314546 But remember, the database has to be taken offline first. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/174515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Selecting a maximum order number in SQL I have a table that records a sequence of actions with a field that records the sequence order: user data sequence 1 foo 0 1 bar 1 1 baz 2 2 foo 0 3 bar 0 3 foo 1 Selecting the first item for each user is easy enough with WHERE sequence = '0' but is there a way to select the last item for each user in SQL? The result I am after should look like this: user data sequence 1 baz 2 2 foo 0 3 foo 1 I'm using MySQL if there are any implementation specific tricksters answering. A: This sql will return the record with the highest sequence value for each user: select a.user, a.data, a.sequence from table as a inner join ( select user, max(sequence) as 'last' from table group by user) as b on a.user = b.user and a.sequence = b.last A: select top 1 user ,data ,sequence from table order by sequence
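The self-join in the first answer can be checked against the sample data from the question (demonstrated here through SQLite from Python purely for convenience; the table name actions is invented, since the question never names the table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE actions (user INTEGER, data TEXT, sequence INTEGER)")
con.executemany(
    "INSERT INTO actions VALUES (?, ?, ?)",
    [(1, "foo", 0), (1, "bar", 1), (1, "baz", 2),
     (2, "foo", 0), (3, "bar", 0), (3, "foo", 1)],
)

# Join every row against its user's maximum sequence number, keeping
# only the rows whose sequence equals that per-user maximum.
rows = con.execute("""
    SELECT a.user, a.data, a.sequence
    FROM actions AS a
    INNER JOIN (SELECT user, MAX(sequence) AS last
                FROM actions GROUP BY user) AS b
      ON a.user = b.user AND a.sequence = b.last
    ORDER BY a.user
""").fetchall()
print(rows)  # [(1, 'baz', 2), (2, 'foo', 0), (3, 'foo', 1)]
```

The result matches the table the question asks for, one "last action" row per user.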
{ "language": "en", "url": "https://stackoverflow.com/questions/174516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can Generic Type Mapper (MSSOAP toolkit) be persuaded to handle empty arrays I'm having the problem described here: http://groups.google.com/group/microsoft.public.xml.soap/browse_thread/thread/029ee5b5d4fa2440/0895d73c5c3720a1 I am consuming a Web Service using Office 2003 Web Services Toolkit. This generates classes for all the data returned by my web service: one of the classes has a property that is an array which may be empty. When I call the web service, the Generic Type Mapper raises an error: array dimensions do not match definition Does anyone know of a solution to this problem that allows me to keep using the generated classes (I know I could just consume the raw XML)? A: Since there are no takers, I'll describe what I've done to date in case anyone else has a similar issue. On my client (using Office 2003 Web Services Toolkit) I want to receive a collection of objects which have a property that itself is a collection of objects. For example, a collection of Customer objects from a C# web service where the Customer class looks something like: public class Customer { public string Name { get; set; } public Collection<Address> Addresses { get; } } The problem I have is that the Addresses property can sometimes be an empty collection, and the SOAP30 GenericTypeMapper is not able to handle this. In my specific case, the client did not actually need the collection of addresses, I just want to be able to get the other properties of the Customer class. So I don't really care what's in the "Addresses" variant property that's created by the Web Services Toolkit. 
What I've done is create a VB6 ActiveX DLL with a class with a minimalist implementation of ISoapMapper that always returns an uninitialized object reference: Implements ISoapTypeMapper Private Function ISoapTypeMapper_Iid() As String End Function Private Sub ISoapTypeMapper_Init(ByVal par_Factory As MSOSOAPLib30.ISoapTypeMapperFactory, ByVal par_Schema As MSXML2.IXMLDOMNode, ByVal par_WSMLNode As MSXML2.IXMLDOMNode, ByVal par_xsdType As MSOSOAPLib30.enXSDType) End Sub Private Function ISoapTypeMapper_Read(ByVal par_soapreader As MSOSOAPLib30.ISoapReader, ByVal par_Node As MSXML2.IXMLDOMNode, ByVal par_encoding As String, ByVal par_encodingMode As MSOSOAPLib30.enEncodingStyle, ByVal par_flags As Long) As Variant Set ISoapTypeMapper_Read = Nothing End Function Private Function ISoapTypeMapper_SchemaNode() As MSXML2.IXMLDOMNode Set ISoapTypeMapper_SchemaNode = Nothing End Function Private Function ISoapTypeMapper_VarType() As Long ISoapTypeMapper_VarType = vbObject End Function Private Sub ISoapTypeMapper_Write(ByVal par_ISoapSerializer As MSOSOAPLib30.ISoapSerializer, ByVal par_encoding As String, ByVal par_encodingMode As MSOSOAPLib30.enEncodingStyle, ByVal par_flags As Long, par_var As Variant) End Sub Private Function ISoapTypeMapper_XsdType() As MSOSOAPLib30.enXSDType ISoapTypeMapper_XsdType = enXSDUndefined End Function Then I modified the WSML generated by the Web Services Toolkit to use this implementation for the appropriate property: Dim str_WSML As String str_WSML = "<servicemapping>" str_WSML = str_WSML & "<service name='MyService'>" str_WSML = str_WSML & "<using PROGID='MSOSOAP.GenericCustomTypeMapper30' cachable='0' ID='GCTM'/>" str_WSML = str_WSML & "<using PROGID='SoapHelper.EmptyArrayMapper' cachable='0' ID='EATM'/>" ' <== Added this line str_WSML = str_WSML & "<types>" ... str_WSML = str_WSML & "<type name='ArrayOfAddress' targetNamespace='http://...' 
uses='EATM' targetClassName='struct_Address'/>" '<== Added this line str_WSML = str_WSML & "<type name='Address' targetNamespace='http://mynamespace.com/myapp/services/data' uses='GCTM' targetClassName='struct_Address'/>" ... This achieved what I needed for this application. It seems to me that it may be possible to achieve support for empty arrays more generally by implementing ISoapMapper in such a way that: * *It detects and handles the case of an empty array. *Or if the array is non-empty it delegates to the standard GenericTypeMapper. I'd still be interested to hear if anyone has solved the general problem. Possibly not as the SOAP client is obsolete and no longer supported by Microsoft.
{ "language": "en", "url": "https://stackoverflow.com/questions/174517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to read the content of a file to a string in C? What is the simplest way (least error-prone, least lines of code, however you want to interpret it) to open a file in C and read its contents into a string (char*, char[], whatever)? A: Note: This is a modification of the accepted answer above. Here's a way to do it, complete with error checking. I've added a size checker to quit when file was bigger than 1 GiB. I did this because the program puts the whole file into a string which may use too much ram and crash a computer. However, if you don't care about that you could just remove it from the code. #include <stdio.h> #include <stdlib.h> #define FILE_OK 0 #define FILE_NOT_EXIST 1 #define FILE_TOO_LARGE 2 #define FILE_READ_ERROR 3 char * c_read_file(const char * f_name, int * err, size_t * f_size) { char * buffer; size_t length; FILE * f = fopen(f_name, "rb"); size_t read_length; if (f) { fseek(f, 0, SEEK_END); length = ftell(f); fseek(f, 0, SEEK_SET); // 1 GiB; best not to load a whole large file in one string if (length > 1073741824) { *err = FILE_TOO_LARGE; return NULL; } buffer = (char *)malloc(length + 1); if (length) { read_length = fread(buffer, 1, length, f); if (length != read_length) { free(buffer); *err = FILE_READ_ERROR; return NULL; } } fclose(f); *err = FILE_OK; buffer[length] = '\0'; *f_size = length; } else { *err = FILE_NOT_EXIST; return NULL; } return buffer; } And to check for errors: int err; size_t f_size; char * f_data; f_data = c_read_file("test.txt", &err, &f_size); if (err) { // process error } else { // process data free(f_data); } A: What is the simplest way (least error-prone, least lines of code, however you want to interpret it) to open a file in C and read its contents into a string ...? Sadly, even after years, answers are error prone and many lack proper string formation and error checking. #include <stdio.h> #include <stdlib.h> // Read the file into allocated memory. // Return NULL on error. 
char* readfile(FILE *f) { // f invalid? fseek() fail? if (f == NULL || fseek(f, 0, SEEK_END)) { return NULL; } long length = ftell(f); rewind(f); // Did ftell() fail? Is the length too long? (SIZE_MAX requires <stdint.h>) if (length == -1 || (unsigned long) length >= SIZE_MAX) { return NULL; } // Convert from long to size_t size_t ulength = (size_t) length; char *buffer = malloc(ulength + 1); // Allocation failed? Read incomplete? if (buffer == NULL || fread(buffer, 1, ulength, f) != ulength) { free(buffer); return NULL; } buffer[ulength] = '\0'; // Now buffer points to a string return buffer; } Note that if the text file contains null characters, the allocated data will contain all the file data, yet the string will appear to be short. Better code would also return the length information so the caller can handle that. char* readfile(FILE *f, size_t *ulength_ptr) { ... if (ulength_ptr) *ulength_ptr = ulength; ... } A: Another, unfortunately highly OS-dependent, solution is memory mapping the file. The benefits generally include performance of the read, and reduced memory use as the application's view and the operating system's file cache can actually share the physical memory. POSIX code would look like this: int fd = open("filename", O_RDONLY); off_t len = lseek(fd, 0, SEEK_END); void *data = mmap(0, len, PROT_READ, MAP_PRIVATE, fd, 0); Windows on the other hand is a little more tricky, and unfortunately I don't have a compiler in front of me to test, but the functionality is provided by CreateFileMapping() and MapViewOfFile(). A: If the file is text, and you want to get the text line by line, the easiest way is to use fgets(). char buffer[100]; FILE *fp = fopen("filename", "r"); // do not use "rb" while (fgets(buffer, sizeof(buffer), fp)) { ...
    do something
}
fclose(fp);
A: If you're using glib, then you can use g_file_get_contents;
gchar *contents;
GError *err = NULL;

g_file_get_contents ("foo.txt", &contents, NULL, &err);
g_assert ((contents == NULL && err != NULL) || (contents != NULL && err == NULL));
if (err != NULL)
  {
    // Report error to user, and free error
    g_assert (contents == NULL);
    fprintf (stderr, "Unable to read file: %s\n", err->message);
    g_error_free (err);
  }
else
  {
    // Use file contents
    g_assert (contents != NULL);
  }
A: Just modified from the accepted answer above.
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

char *readFile(char *filename) {
    FILE *f = fopen(filename, "rt");
    assert(f);
    fseek(f, 0, SEEK_END);
    long length = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buffer = (char *) malloc(length + 1);
    buffer[length] = '\0';
    fread(buffer, 1, length, f);
    fclose(f);
    return buffer;
}

int main() {
    char *content = readFile("../hello.txt");
    printf("%s", content);
}
A: If "read its contents into a string" means that the file does not contain characters with code 0, you can also use the getdelim() function, which either accepts a block of memory and reallocates it if necessary, or just allocates the entire buffer for you, and reads the file into it until it encounters a specified delimiter or end of file. Just pass '\0' as the delimiter to read the entire file.
This function is available in the GNU C Library, http://www.gnu.org/software/libc/manual/html_mono/libc.html#index-getdelim-994
The sample code might look as simple as
char* buffer = NULL;
size_t len = 0;
ssize_t bytes_read = getdelim( &buffer, &len, '\0', fp);
if ( bytes_read != -1) {
    /* Success, now the entire file is in the buffer */
}
A: I tend to just load the entire buffer as a raw memory chunk into memory and do the parsing on my own. That way I have best control over what the standard lib does on multiple platforms.
This is a stub I use for this. You may also want to check the error-codes for fseek, ftell and fread.
(omitted for clarity).
char * buffer = 0;
long length;
FILE * f = fopen (filename, "rb");

if (f)
{
  fseek (f, 0, SEEK_END);
  length = ftell (f);
  fseek (f, 0, SEEK_SET);
  buffer = malloc (length + 1);
  if (buffer)
  {
    fread (buffer, 1, length, f);
    buffer[length] = '\0'; // NUL-terminate so the contents can be used as a string
  }
  fclose (f);
}

if (buffer)
{
  // start to process your data / extract strings here...
}
A: If you are reading special files like stdin or a pipe, you are not going to be able to use fstat to get the file size beforehand. Also, if you are reading a binary file fgets is going to lose the string size information because of embedded '\0' characters. Best way to read a file then is to use read and realloc:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>

int main () {
    char buf[4096];
    ssize_t n;
    char *str = NULL;
    size_t len = 0;

    while ((n = read(STDIN_FILENO, buf, sizeof buf)) != 0) {
        if (n < 0) {
            if (errno == EAGAIN)
                continue;
            perror("read");
            break;
        }
        str = realloc(str, len + n + 1);
        memcpy(str + len, buf, n);
        len += n;
        str[len] = '\0';
    }
    printf("%.*s\n", (int) len, str);
    return 0;
}
A:
// Assumes the file exists and will seg. fault otherwise.
const GLchar *load_shader_source(char *filename) {
  FILE *file = fopen(filename, "r");             // open
  fseek(file, 0L, SEEK_END);                     // find the end
  size_t size = ftell(file);                     // get the size in bytes
  GLchar *shaderSource = calloc(1, size + 1);    // allocate enough bytes, plus the terminating NUL
  rewind(file);                                  // go back to file beginning
  fread(shaderSource, size, sizeof(char), file); // read each char into our block
  fclose(file);                                  // close the stream
  return shaderSource;
}
This is a pretty crude solution because nothing is checked against null.
A: I will add my own version, based on the answers here, just for reference. My code takes into consideration sizeof(char) and adds a few comments to it.
// Open the file in read mode.
FILE *file = fopen(file_name, "r");

// Check if there was an error.
if (file == NULL) { fprintf(stderr, "Error: Can't open file '%s'.", file_name); exit(EXIT_FAILURE); } // Get the file length fseek(file, 0, SEEK_END); long length = ftell(file); fseek(file, 0, SEEK_SET); // Create the string for the file contents. char *buffer = malloc(sizeof(char) * (length + 1)); buffer[length] = '\0'; // Set the contents of the string. fread(buffer, sizeof(char), length, file); // Close the file. fclose(file); // Do something with the data. // ... // Free the allocated string space. free(buffer); A: I just ran a bunch of tests comparing using seek, lseek, stat, and fstat also comparing using file streams and file descriptors to see what seems to be the fastest. For the test I create a 100M file. TL;DR - using file descriptors, fstat and read was the fastest and using file streams and seek was the slowest. For the test I ran this on a small Linux box I have running a headless ArchLinux server. I ran the test: checking the file size, malloc a buffer, read the entire file into the buffer, close the file, free the buffer. I ran the test 3 times with 1000 cycles each time and using clock_gettime to calculate the elapsed time. Just simply comparing JUST the time it takes to get the file size using stat or fstat were at least 30% faster than using seek or lseek. Comparing just the speed of using file streams vs file descriptors, they were pretty nearly the same - descriptors were about 1-3% faster. In comparing getting the file size, opening the file, malloc a buffer, read the entire 100M, close the file and free the buffer -- using file descriptors and fstat were 6-8% faster than using seek or lseek. Probably because the bulk of the time is spent in the file read vs the getting the file size, which dilutes the overall performance benefit. BTW - do not use fgetc and read the file 1 character at a time. This is crazy inefficient and really really slow! Like 1700% slower!!!! Post BTW... here is chat gpt's answer - not bad... 
but as I showed above it should really use stat or fstat #include <stdio.h> #include <stdlib.h> char* read_file(const char* filename) { FILE* file = fopen(filename, "rb"); if (file == NULL) { fprintf(stderr, "Failed to open file '%s'\n", filename); return NULL; } fseek(file, 0L, SEEK_END); long file_size = ftell(file); rewind(file); char* buffer = (char*) malloc(sizeof(char) * (file_size + 1)); if (buffer == NULL) { fclose(file); fprintf(stderr, "Failed to allocate memory for file '%s'\n", filename); return NULL; } size_t bytes_read = fread(buffer, sizeof(char), file_size, file); if (bytes_read != file_size) { fclose(file); free(buffer); fprintf(stderr, "Failed to read file '%s'\n", filename); return NULL; } buffer[file_size] = '\0'; fclose(file); return buffer; } A: easy and neat(assuming contents in the file are less than 10000): void read_whole_file(char fileName[1000], char buffer[10000]) { FILE * file = fopen(fileName, "r"); if(file == NULL) { puts("File not found"); exit(1); } char c; int idx=0; while (fscanf(file , "%c" ,&c) == 1) { buffer[idx] = c; idx++; } buffer[idx] = 0; }
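For reference, the fstat()-based approach that the benchmarking answer above found fastest might look like the following sketch (POSIX-only; the function name and the error-handling choices are my own assumptions, not from any of the original answers):

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

// Sketch: size the buffer with fstat(), then read() the whole file.
// Returns a NUL-terminated buffer the caller must free(), or NULL on error.
char *read_file_fstat(const char *path, size_t *out_len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;

    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size < 0) {
        close(fd);
        return NULL;
    }

    size_t len = (size_t) st.st_size;
    char *buf = malloc(len + 1);
    if (buf == NULL) {
        close(fd);
        return NULL;
    }

    size_t off = 0;
    while (off < len) {                     // read() may return short counts
        ssize_t n = read(fd, buf + off, len - off);
        if (n <= 0) {                       // error or unexpected EOF
            free(buf);
            close(fd);
            return NULL;
        }
        off += (size_t) n;
    }

    buf[len] = '\0';                        // NUL-terminate for string use
    close(fd);
    if (out_len) *out_len = len;
    return buf;
}
```

Like fgets-based code, this still suffers from the embedded-'\0' caveat discussed above, which is why the length is returned separately.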
{ "language": "en", "url": "https://stackoverflow.com/questions/174531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "136" }
Q: How to find "holes" in a table I recently inherited a database on which one of the tables has the primary key composed of encoded values (Part1*1000 + Part2). I normalized that column, but I cannot change the old values.
So now I have
select ID from table order by ID

ID
100001
100002
101001
...
I want to find the "holes" in the table (more precisely, the first "hole" after 100000) for new rows.
I'm using the following select, but is there a better way to do that?
select /* top 1 */ ID+1 as newID from table
where ID > 100000
and ID + 1 not in (select ID from table)
order by ID

newID
100003
101029
...
The database is Microsoft SQL Server 2000. I'm ok with using SQL extensions.
A: select ID +1
From Table t1
where not exists (select * from Table t2 where t1.id +1 = t2.id);
Not sure if this version would be faster than the one you mentioned originally.
A: This solution doesn't give all the holes in the table, only the next free ones + the first available max number in the table - works if you want to fill in gaps in ids, + get a free id number if you don't have a gap..
select numb + 1 from temp
minus
select numb from temp;
A: from How do I find a "gap" in running counter with SQL?
select MIN(ID) from (
select 100001 ID union all
select [YourIdColumn]+1
from [YourTable]
where --Filter the rest of your key--
) foo
left join [YourTable]
on [YourIdColumn]=ID
and --Filter the rest of your key--
where [YourIdColumn] is null
A: The best way is to build a temp table with all IDs, then do a left join.
declare @maxId int
select @maxId = max(YOUR_COLUMN_ID) from YOUR_TABLE_HERE

declare @t table (id int)

declare @i int
set @i = 1
while @i <= @maxId
begin
    insert into @t values (@i)
    set @i = @i + 1
end

select t.id
from @t t
left join YOUR_TABLE_HERE x on x.YOUR_COLUMN_ID = t.id
where x.YOUR_COLUMN_ID is null
A: Have thought about this question recently, and it looks like this is the most elegant way to do it:
SELECT TOP(@MaxNumber) ROW_NUMBER() OVER (ORDER BY t1.number)
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
EXCEPT
SELECT Id FROM <your_table>
A: SELECT (ID+1) FROM table AS t1
LEFT JOIN table as t2 ON t1.ID+1 = t2.ID
WHERE t2.ID IS NULL
A: This solution should give you the first and last ID values of the "holes" you are seeking. I use this in Firebird 1.5 on a table of 500K records, and although it does take a little while, it gives me what I want.
SELECT l.id + 1 start_id, MIN(fr.id) - 1 stop_id
FROM (table l
  LEFT JOIN table r
    ON l.id = r.id - 1)
  LEFT JOIN table fr
    ON l.id < fr.id
WHERE r.id IS NULL AND fr.id IS NOT NULL
GROUP BY l.id, r.id
For example, if your data looks like this:
ID
1001
1002
1005
1006
1007
1009
1011
You would receive this:
start_id  stop_id
1003      1004
1008      1008
1010      1010
I wish I could take full credit for this solution, but I found it at Xaprb.
A: This will give you the complete picture, where 'Bottom' stands for gap start and 'Top' stands for gap end:
select * from
(
 (select <COL>+1 as id, 'Bottom' AS 'Pos' from <TABLENAME> /*where <CONDITION>*/
  except
  select <COL>, 'Bottom' AS 'Pos' from <TABLENAME> /*where <CONDITION>*/)
 union
 (select <COL>-1 as id, 'Top' AS 'Pos' from <TABLENAME> /*where <CONDITION>*/
  except
  select <COL>, 'Top' AS 'Pos' from <TABLENAME> /*where <CONDITION>*/)
) t
order by t.id, t.Pos
Note: First and Last results are WRONG and should not be regarded, but taking them out would make this query a lot more complicated, so this will do for now.
A: Many of the previous answers are quite good.
However, they all fail to return the first value of the sequence and/or fail to consider the lower limit 100000. They all return intermediate holes but not the very first one (100001 if missing).
A full solution to the question is the following one:
select id + 1 as newid
from (select 100000 as id union select id from tbl) t
where (id + 1 not in (select id from tbl)) and (id >= 100000)
order by id
limit 1;
The number 100000 is to be used if the first number of the sequence is 100001 (as in the original question); otherwise it is to be modified accordingly
"limit 1" is used in order to have just the first available number instead of the full sequence
A: For people using Oracle, the following can be used:
select a, b
from (
    select ID + 1 a,
           max(ID) over (order by ID rows between current row and 1 following) - 1 b
    from MY_TABLE
)
where a <= b
order by a desc;
A: The following SQL code works well with SqLite, but should also work without issues on MySQL, MS SQL and so on.
On SqLite this takes only 2 seconds on a table with 1 million rows (and about 100 scattered missing rows)
WITH holes AS (
    SELECT IIF(c2.id IS NULL,c1.id+1,null) as start,
           IIF(c3.id IS NULL,c1.id-1,null) AS stop,
           ROW_NUMBER () OVER ( ORDER BY c1.id ASC ) AS rowNum
    FROM |mytable| AS c1
    LEFT JOIN |mytable| AS c2 ON c1.id+1 = c2.id
    LEFT JOIN |mytable| AS c3 ON c1.id-1 = c3.id
    WHERE c2.id IS NULL OR c3.id IS NULL
)
SELECT h1.start AS start, h2.stop AS stop
FROM holes AS h1
LEFT JOIN holes AS h2 ON h1.rowNum+1 = h2.rowNum
WHERE h1.start IS NOT NULL AND h2.stop IS NOT NULL
UNION ALL
SELECT 1 AS start, h1.stop AS stop
FROM holes AS h1
WHERE h1.rowNum = 1 AND h1.stop > 0
ORDER BY h1.start ASC
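For completeness, on engines that support window functions (SQL Server 2012+, Oracle, PostgreSQL - not the asker's SQL Server 2000), the same gap ranges can be sketched with LEAD(); the table and column names here are illustrative:

```sql
-- Each row whose successor is more than 1 away marks the start of a gap.
SELECT ID + 1      AS gap_start,
       next_id - 1 AS gap_end
FROM (
    SELECT ID, LEAD(ID) OVER (ORDER BY ID) AS next_id
    FROM MyTable
    WHERE ID > 100000
) t
WHERE next_id > ID + 1
ORDER BY gap_start;
```

Like most of the answers above, this misses a hole at the very start of the range, since a gap is only detected between two existing rows.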
{ "language": "en", "url": "https://stackoverflow.com/questions/174532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Google Maps Overlays I'm trying to find something, preferably F/OSS, that can generate a Google Maps overlay from KML and/or KMZ data. We've got an event site we're working on that needed to accommodate ~16,000 place markers last year and will likely have at least that many again this year. Last year, the company that had done the site just fed the KML data directly to the gMaps API and let it place all of the markers client side. Obviously, that became a performance nightmare and tended to make older browsers "freeze" (or at least appear frozen for several minutes at a time). Ideally this server side script would take the KML, the map's lat/lon center, and the map zoom level and appropriately merge all of the visible place markers into a single GIF or PNG overlay. Any guidance or recommendations on this would be greatly appreciated. UPDATE 10/8/2008 - Most of the information I've come across here and other places would seem to indicate that lessening the number of points on the map is the way to go (i.e. using one marker to represent several when viewing from a higher altitude/zoom level). While that's probably a good approach in some cases, it won't work here. We're looking for the visual impact of a US map with many thousand markers on it. One option I've explored is a service called PushPin, which when fed (presumably) KML will create, server side, an overlay that has all of the visible points (based on center lat/lon and zoom level) rendered onto a single image, so instead of performing several thousand DOM manipulations client side, we merge all of those markers into a single image server side and do a single DOM manipulation on the client end. The PushPin service is really slick and would definitely work if not for the associated costs. We're really looking for something F/OSS that we could run server side to generate that overlay ourselves. A: You may want to look into something like Geoserver or Mapserver. They are Google map clones, and a lot more. 
You could generate an overlay that you like, and Geoserver (I think Mapserver does as well) can give you KML, PDF, png, and other output to mix your maps, or you could generate the whole map by yourself, but that takes time.
A: Not sure why you want to go to a GIF/PNG overlay, you can do this directly in KML.
I'm assuming that most of your performance problem was being caused by points outside the user's current view, i.e. the user is looking at New York but you have points in Los Angeles that are wasting memory because they aren't visible. If you really have 16,000 points that are all visible at once for a typical view, then yes, you'll need to pursue a different strategy.
If the above applies, the procedure would be as follows:

*Determine the center & extent of the map
*Given that you should be able to calculate the lat/long of the upper left and lower right corners of the map.
*Iterate through your database of points and check each location against the two corners. Longitude needs to be greater (signed!) than the upper left longitude and less than the lower right longitude. Latitude needs to be less than the upper left latitude (signed!) and greater than the lower right latitude. Just simple comparisons, no fancy calculations required here.
*Output the matching points to a temporary KML for the user.
*You can feed KML directly into Google Maps and let it map it, or you can use the Javascript maps API to load the points via KML.

It might not solve your exact problem here, but for related issues you might also look into the Google Static Maps API. This allows you to create a static image file with placemarkers on it that will load very quickly, but won't have the interactivity of a regular Google map. Because of the way the API is designed, however, it can't handle anywhere near 16,000 points either so you'd still have to filter down to the view.
A: I don't know how far you are with your project but maybe you can take a look at GeoDjango?
This modified Django release includes all kinds of tools to store locations, convert coordinates, and display maps, the easy way. Of course you need some Python experience and a server to run it on, but once you've got the hang of Django it works fast and well.
If you just want a solution for your problem, try grouping your results at lower zoom levels; a good example of this implementation can be found here.
A: This is a tough one. You can use custom tilesets with Google Maps, but you still need some way to generate the tiles (other than manually). I'm afraid that's all I've got =/
A: OpenLayers is a great javascript frontend to multiple mapping services or your own map servers. Version 2.7 was just released, which adds some pretty amazing features and controls.
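Going back to the bounding-box answer above: the per-point corner comparisons it describes can be sketched in plain JavaScript as follows (the bounds object and point format are assumptions for illustration, not part of any Maps API):

```javascript
// Keep only the markers that fall inside the current viewport.
// bounds: { north, south, east, west } in degrees; points: [{ lat, lng, ... }]
function visiblePoints(points, bounds) {
  return points.filter(function (p) {
    return p.lat <= bounds.north &&   // below the upper-left latitude
           p.lat >= bounds.south &&   // above the lower-right latitude
           p.lng >= bounds.west  &&   // right of the upper-left longitude
           p.lng <= bounds.east;      // left of the lower-right longitude
  });
}
```

Note that these simple signed comparisons break down if the viewport crosses the 180° antimeridian, where the west edge has a greater longitude than the east edge.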
{ "language": "en", "url": "https://stackoverflow.com/questions/174535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Sharing Test code in Maven How can you depend on test code from another module in Maven? Example, I have 2 modules: * *Base *Main I would like a test case in Main to extend a base test class in Base. Is this possible? Update: Found an acceptable answer, which involves creating a test jar. A: I recommend using type instead of classifier (see also: classifier). It tells Maven a bit more explicitly what you are doing (and I've found that m2eclipse and q4e both like it better). <dependency> <groupId>com.myco.app</groupId> <artifactId>foo</artifactId> <version>1.0-SNAPSHOT</version> <type>test-jar</type> <scope>test</scope> </dependency> A: Thanks for the base module suggestion. However, I'd rather not create a new module for just this purpose. Found an acceptable answer in the Surefire Maven documentation and a blog. See also "How to create a jar containing test classes". This creates jar file of code from src/test/java using the jar plugin so that modules with tests can share code. <project> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.4</version> <executions> <execution> <goals> <goal>test-jar</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> In order to use the attached test JAR that was created above you simply specify a dependency on the main artifact with a specified classifier of tests: <project> ... <dependencies> <dependency> <groupId>com.myco.app</groupId> <artifactId>foo</artifactId> <version>1.0-SNAPSHOT</version> <type>test-jar</type> <scope>test</scope> </dependency> </dependencies> ... </project> A: We solved this by making a maven project with test code as the src/main/java and adding the following dependency to projects: <dependency> <groupId>foo</groupId> <artifactId>test-base</artifactId> <version>1</version> <scope>test</scope> </dependency> A: Worked for me for 1 project, but I didn't for another after doing exactly the same steps. 
So I debugged:

*After mvn clean install I checked the /target directory: the .jar was there, so that's good
*Ran mvn dependency:tree on a project which should use those test classes. Noticed that the generated jar file with test classes is marked as a dependency, so that's good.
*The conclusion could be only one: I restarted my IntelliJ. At first the class import was still not visible, but after a minute it started to see it!

Note: I only restarted IntelliJ, no cache removal etc
A: Yep ... just include the Base module as a dependency in Main. If you're only inheriting test code, then you can use the scope tag to make sure Maven doesn't include the code in your artifact when deployed. Something like this should work:
<dependency>
    <groupId>BaseGroup</groupId>
    <artifactId>Base</artifactId>
    <version>0.1.0-SNAPSHOT</version>
    <scope>test</scope>
</dependency>
{ "language": "en", "url": "https://stackoverflow.com/questions/174560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "200" }
Q: Help with deploying Crystal Report embedded Visual Studio 2008 website I have been trying to get around this error for a day now and have not had much luck. I have a VS 2008 project that uses the embedded Crystal Reports which of course runs fine locally, but when deploying to my remote server the reports will no longer run. I gathered that it was because I didn't have the right Crystal Reports components installed on my server. So I attempted to add the dll files into my project directly, which did work to resolve some of my errors, but there is still a missing reference.
The missing reference is on the 'CrystalDecsions.ReportAppServer.ClientDoc' which is located in my GAC. Is there any way to get around this problem OTHER THAN installing the msi file on the server?
A: I had a similar problem with 2005 a while ago; the only way I was able to get around the problem was to install the Crystal Reports redistributable on the server.
Is there any reason that you're hesitant to install an MSI on your server?
{ "language": "en", "url": "https://stackoverflow.com/questions/174567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to set property to Guid in aspx page I have the following code in one of my aspx pages: <% foreach (Dependency dep in this.Common.GetDependencies(this.Request.QueryString["Name"])) { %> <ctl:DependencyEditor DependencyKey='<%= dep.Key %>' runat="server" /> <% } %> When I run it, I get the following error: Parser Error Message: Cannot create an object of type 'System.Guid' from its string representation '<%= dep.Key %>' for the 'DependencyKey' property. Is there any way that I can create a control and pass in a Guid in the aspx page? I'd really hate to have to loop through and create these controls in the code behind just because of that... NOTE: The Key property on the Dependency object is a Guid. A: The key property of the Dependency object may be a Guid, but is the DependencyKey Property of the DependencyEditor a Guid too? If not it should be, otherwise the correct TypeConverter won't be invoked upon assignment. If I'm not mistaken, you could also use dep.Key.ToString() also. A: is your control taking the value and assuming its a GUID? Have you tried instantiating a GUID with the value? Looks like this is a cast problem A: ok, here's the deal...try using the # symbol instead of the = symbol. I replicated your problem and that gets past the compile issue. It should look like this "<%# dep.Key %>" good luck! A: Assuming dep.Key is a string representation of a guid... and DependencyKey is a property of type Guid <ctl:DependencyEditor DependencyKey="<%= new Guid(dep.Key) %>" runat="server" />
{ "language": "en", "url": "https://stackoverflow.com/questions/174570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I rename a column in a database table using SQL? If I wish to simply rename a column (not change its type or constraints, just its name) in an SQL database using SQL, how do I do that? Or is it not possible? This is for any database claiming to support SQL, I'm simply looking for an SQL-specific query that will work regardless of actual database implementation. A: In Informix, you can use: RENAME COLUMN TableName.OldName TO NewName; This was implemented before the SQL standard addressed the issue - if it is addressed in the SQL standard. My copy of the SQL 9075:2003 standard does not show it as being standard (amongst other things, RENAME is not one of the keywords). I don't know whether it is actually in SQL 9075:2008. A: You can use the following command to rename the column of any table in SQL Server: exec sp_rename 'TableName.OldColumnName', 'New colunmName' A: In MySQL, the syntax is ALTER TABLE ... CHANGE: ALTER TABLE <table_name> CHANGE <column_name> <new_column_name> <data_type> ... Note that you can't just rename and leave the type and constraints as is; you must retype the data type and constraints after the new name of the column. A: ALTER TABLE is standard SQL. But it's not completely implemented in many database systems. A: Unfortunately, for a database independent solution, you will need to know everything about the column. If it is used in other tables as a foreign key, they will need to be modified as well. ALTER TABLE MyTable ADD MyNewColumn OLD_COLUMN_TYPE; UPDATE MyTable SET MyNewColumn = MyOldColumn; -- add all necessary triggers and constraints to the new column... -- update all foreign key usages to point to the new column... ALTER TABLE MyTable DROP COLUMN MyOldColumn; For the very simplest of cases (no constraints, triggers, indexes or keys), it will take the above 3 lines. For anything more complicated it can get very messy as you fill in the missing parts. 
However, as mentioned above, there are simpler database-specific methods if you know which database you need to modify ahead of time.
A: I think this is the easiest way to change a column name.
SP_RENAME 'TABLE_NAME.OLD_COLUMN_NAME','NEW_COLUMN_NAME'
A: The standard would be ALTER TABLE, but that's not necessarily supported by every DBMS you're likely to encounter, so if you're looking for an all-encompassing syntax, you may be out of luck.
A: Specifically for SQL Server, use sp_rename
USE AdventureWorks;
GO
EXEC sp_rename 'Sales.SalesTerritory.TerritoryID', 'TerrID', 'COLUMN';
GO
A: In SQL Server you can use
exec sp_rename '<TableName.OldColumnName>','<NewColumnName>','COLUMN'
or
sp_rename '<TableName.OldColumnName>','<NewColumnName>','COLUMN'
A: On PostgreSQL (and many other RDBMS), you can do it with a regular ALTER TABLE statement:
=> SELECT * FROM Test1;
 id | foo | bar
----+-----+-----
  2 |   1 |   2

=> ALTER TABLE Test1 RENAME COLUMN foo TO baz;
ALTER TABLE

=> SELECT * FROM Test1;
 id | baz | bar
----+-----+-----
  2 |   1 |   2
A: Alternatively to SQL, you can do this in Microsoft SQL Server Management Studio, from the table Design Panel.
First Way
Slow double-click on the column. The column name will become an editable text box.
Second Way
SqlManagement Studio>>DataBases>>tables>>specificTable>>Column Folder>>Right Click on column>>Rename
Third Way
Table>>RightClick>>Design
A: To rename, you have to change the column. E.g. suppose registration is the table name and newRefereeName is a column name that I want to change to refereeName. So my SQL query will be:
ALTER TABLE 'registration' CHANGE 'newRefereeName' 'refereeName' VARCHAR(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL;
{ "language": "en", "url": "https://stackoverflow.com/questions/174582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "130" }
Q: Best open source LINQ provider What's the best open source LINQ provider (in terms of completeness)? I'm developing an open source LINQ provider myself and I'd like to borrow as many ideas as I can, avoid common pitfalls, etc. Do not restrict yourself to database LINQ providers, any provider suggestion is welcome. A: There is another implementation "re-linq". Have a look here: * *http://www.re-motion.org/blogs/team/archive/2009/04/23/introducing-re-linq-a-general-purpose-linq-provider-infrastructure.aspx *http://www.codeproject.com/KB/linq/relinqish_the_pain.aspx Hope it helps, Patrick A: Our object database db4o comes with an open source LINQ provider. We even provide an implementation for CompactFramework. To my knowledge this is the only LINQ provider available for CompactFramework. A: LINQ to Amazon web services. http://linqinaction.net/files/folders/linqinaction/entry1952.aspx A: Look at LINQExtender for an example of an extendable IQueryable implementation. It not only provides a good open source example, but you may find you could use that instead of developing an IQueryable implementation from scratch. A: I have a pseudo-LINQ provider: "Push LINQ". It's like Parallel Extensions in that it changes how an existing in-memory data source is used, rather than bringing another actual data source into play. The bits are available as part of my MiscUtil project. It's probably best to ping me privately if you get into it and want to know more (or make suggestions). A: The DbLinq project is working on linq2sql support for other databases, and is now working with the Mono project to become a full System.Data.Linq implementation. A: We have a complete linq provider in Signum Framework You can find the source here as well. (All the Linq subtree). I'ld also take a look to Wayward blog A: LinqExtender gives a way to get started with LINQ to anything without doing the complex Expression tree parsing. 
It gives you a more or less simple data structure, without sacrificing things like projection, where, order by, etc. It's still under development, and a starting point could be LinqToFlickr.
Hope you find it useful; it's open to any suggestions.
{ "language": "en", "url": "https://stackoverflow.com/questions/174585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Can you add documents and spreadsheets to a Visual Studio Project? In Eclipse, I often include all project-related material (including documents in PDF, Microsoft, and OpenDocument formats) in the project. Is this possible with Visual Studio, especially to the point where if I attempt to open the file from inside Visual Studio, it will open in the external application? A: Yes, just right-click your project in the solution explorer and goto Add > Existing Item... Though -- I'd recommend making a new folder to keep this in. Yes, you can make sure that when you open it it opens with the correct application. Just right-click the file once it's added into the solution explorer and select Open With... and make sure you set the default application that way from that point forward you can just double-click your files. If you have access to use Sharepoint Services with your source control than that would also make life much easier, thanks for reminding me Chris! A: I don't want to take away from Chad's answer. However, I will add that TFS has specific areas for project documentation to be stored in sharepoint.
{ "language": "en", "url": "https://stackoverflow.com/questions/174593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: In an Oracle database, what is the difference between ROWNUM and ROW_NUMBER? What is the difference between ROWNUM and ROW_NUMBER ?
A: ROWNUM is a "pseudocolumn" that assigns a number to each row returned by a query:
SQL> select rownum, ename, deptno
  2  from emp;

ROWNUM ENAME DEPTNO
---------- ---------- ----------
1 SMITH 99
2 ALLEN 30
3 WARD 30
4 JONES 20
5 MARTIN 30
6 BLAKE 30
7 CLARK 10
8 SCOTT 20
9 KING 10
10 TURNER 30
11 FORD 20
12 MILLER 10
ROW_NUMBER is an analytic function that assigns a number to each row according to its ordering within a group of rows:
SQL> select ename, deptno, row_number() over (partition by deptno order by ename) rn
  2  from emp;

ENAME DEPTNO RN
---------- ---------- ----------
CLARK 10 1
KING 10 2
MILLER 10 3
FORD 20 1
JONES 20 2
SCOTT 20 3
ALLEN 30 1
BLAKE 30 2
MARTIN 30 3
TURNER 30 4
WARD 30 5
SMITH 99 1
A: From a little reading, ROWNUM is a value automatically assigned by Oracle to a rowset (prior to ORDER BY being evaluated, so don't ever ORDER BY ROWNUM or use a WHERE ROWNUM < 10 with an ORDER BY).
ROW_NUMBER() appears to be a function for assigning row numbers to a result set returned by a subquery or partition.
A: Apart from the other differences mentioned in the answers, you should also consider performance. There is a non-authoritative but very interesting report here, comparing various means of pagination, among which the use of ROWNUM compared to ROW_NUMBER() OVER():
http://www.inf.unideb.hu/~gabora/pagination/results.html
A: rownum is a pseudocolumn which can be added to any select query, to number the rows returned (starting with 1). They are ordered according to when they were identified as being part of the final result set. (#ref)
row_number is an analytic function, which can be used to number the rows returned by the query in an order mandated by the row_number() function.
A: Rownum starts with 1 and only increases after the condition evaluates to true. Hence rownum >= 1 returns all rows in the table.
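One practical consequence of ROWNUM being assigned before ORDER BY is evaluated, as the "From a little reading" answer warns: to page through ordered results with ROWNUM you must sort in a subquery first. A sketch, reusing the emp example:

```sql
-- Wrong: ROWNUM is assigned before the sort, so this picks an
-- arbitrary 10 rows and only then sorts those 10.
SELECT *
FROM emp
WHERE ROWNUM <= 10
ORDER BY ename;

-- Right: sort inside an inline view, then filter by ROWNUM.
SELECT *
FROM (SELECT * FROM emp ORDER BY ename)
WHERE ROWNUM <= 10;
```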
{ "language": "en", "url": "https://stackoverflow.com/questions/174595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: rails in_place_edit: how do I pass an authenticity token? I am trying to get in place editing working but I am running into this error: ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken) I understand that rails now wants to protect against forgery and that I need to pass a form authenticity token but I am not clear on how to do this with the in_place_edit plugin. A: This isn't a complete tested answer, but I took a look at the plugin code, and it looks like you could use the :with option to tack the authenticity token onto the end of the request parameters. Something along the lines of: in_place_editor("my_element", :with => "form.serialize() + '&authenticity_token=#{form_authenticity_token}';") (I have not tested the above code). A: I found a solution. I put the instructions here. Take a look at the part on patching in_place_edit.
{ "language": "en", "url": "https://stackoverflow.com/questions/174598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to disable a SQL Server trigger for just a particular scope of execution? In SQL Server 2005, is there a way for a trigger to find out what object is responsible for firing the trigger? I would like to use this to disable the trigger for one stored procedure. Is there any other way to disable the trigger only for the current transaction? I could use the following code, but if I'm not mistaken, it would affect concurrent transactions as well - which would be a bad thing.

    DISABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL }
    ON { object_name | DATABASE | ALL SERVER } [ ; ]

    ENABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL }
    ON { object_name | DATABASE | ALL SERVER } [ ; ]

If possible, I would like to avoid the technique of having a "NoTrigger" field in my table and doing a NoTrigger = null, because I would like to keep the table as small as possible. The reason I would like to avoid the trigger is because it contains logic that is important for manual updates to the table, but my stored procedure will take care of this logic. Because this will be a highly used procedure, I want it to be fast.

Triggers impose additional overhead on the server because they initiate an implicit transaction. As soon as a trigger is executed, a new implicit transaction is started, and any data retrieval within a transaction will hold locks on affected tables.

From: http://searchsqlserver.techtarget.com/tip/1,289483,sid87_gci1170220,00.html#trigger
A: If your trigger is causing performance problems in your application, then the best approach is to remove all manual updates to the table, and require all updates to go through the insert/update stored procedures that contain the correct update logic. Then you may remove the trigger completely. I suggest denying table update permissions if nothing else works. This also solves the problem of duplicate code.
Duplicating code in the update SP and in the trigger is a violation of good software engineering principles and will be a maintenance problem.
A: I just saw this article recently highlighted on the SQL Server Central newsletter and it appears to offer a way which you may find useful using the Context_Info on the connection: http://www.mssqltips.com/tip.asp?tip=1591
EDIT by Terrapin: The above link includes the following code:

    USE AdventureWorks;
    GO
    -- creating the table in AdventureWorks database
    IF OBJECT_ID('dbo.Table1') IS NOT NULL
        DROP TABLE dbo.Table1
    GO
    CREATE TABLE dbo.Table1(ID INT)
    GO
    -- Creating a trigger
    CREATE TRIGGER TR_Test ON dbo.Table1
    FOR INSERT, UPDATE, DELETE
    AS
    DECLARE @Cinfo VARBINARY(128)
    SELECT @Cinfo = Context_Info()
    IF @Cinfo = 0x55555
        RETURN
    PRINT 'Trigger Executed'
    -- Actual code goes here
    -- For simplicity, I did not include any code
    GO

If you want to prevent the trigger from being executed you can do the following:

    SET Context_Info 0x55555
    INSERT dbo.Table1 VALUES(100)

A: ALTER TABLE tbl DISABLE TRIGGER trg
http://doc.ddart.net/mssql/sql70/aa-az_5.htm
I don't understand the meaning of your 1st paragraph though
A: Since you indicate that the trigger contains logic to handle all updates, even manual updates, then that should be where the logic resides. The example you mention, wherein a stored procedure "will take care of this logic," implies duplicate code. Additionally, if you want to be sure that every UPDATE statement has this logic applied regardless of author, then the trigger is the place for it. What happens when someone authors a procedure but forgets to duplicate the logic yet again? What happens when it is time to modify the logic?
A: Not sure if this is a good idea but it seems to work for me. The transaction should prevent inserts to the table from other processes while the trigger is disabled.
    IF OBJECT_ID('dbo.TriggerTest') IS NOT NULL
        DROP PROCEDURE dbo.TriggerTest
    GO
    CREATE PROCEDURE [dbo].[TriggerTest]
    AS
    BEGIN TRANSACTION trnInsertTable1s
    ;DISABLE TRIGGER trg_tblTable1_IU ON tblTable1
    ;BEGIN -- Procedure Code
        PRINT '@@trancount'
        PRINT @@TRANCOUNT
        -- Do Stuff
    END -- Procedure Code
    ;ENABLE TRIGGER trg_tblTable1_IU ON tblTable1
    IF @@ERROR <> 0
        ROLLBACK TRANSACTION
    ELSE
        COMMIT TRANSACTION

A: Do not disable the trigger. You are correct that it will disable the trigger for any concurrent transactions. Why do you want to disable the trigger? What does it do? Why is the trigger causing a problem? It is usually a bad idea to disable a trigger from a data integrity perspective.
A: Consider rewriting the trigger to improve performance if performance is the issue.
A: I waffled a bit on this one. On the one hand I'm very anti-trigger, mostly because it's one more place for me to look for code executing against my table, in addition to the reasons stated in the article linked in the question post. On the other hand, if you have logic to enforce stable and immutable business rules or cross-table actions (like maintaining a history table) then it would be safer to get this into a trigger so procedure authors and programmers don't need to deal with it - it just works. So, my recommendation is to put the necessary logic in your trigger rather than in this one proc which will inevitably grow to several procs with the same exemption.
A: I concur with some other answers. Do not disable the trigger. This is pure opinion, but I avoid triggers like the plague. I have found very few cases where a trigger was used to enforce database rules. There are obvious edge cases in my experience, and I have only my experience on which to make this statement.
I have typically seen triggers used to insert some relational data (which should be done from the business logic), to insert data into reporting tables, i.e. denormalizing the data (which can be done with a process outside the transaction), or to transform the data in some way. There are legitimate uses for triggers, but I think that in everyday business programming they are few and far between. This may not help in your current problem, but you might consider removing the trigger altogether and accomplishing the work the trigger is doing in some other fashion.
A: I just confronted the same problem and came up with the following solution, which works for me.

* Create a permanent DB table that contains one record for each trigger that you want to disable (e.g. refTriggerManager); each row contains the trigger name (e.g. strTriggerName = 'myTrigger') and a bit flag (e.g. blnDisabled, defaulting to 0).
* At the beginning of the trigger body, look up strTriggerName = 'myTrigger' in refTriggerManager. If blnDisabled = 1, then return without executing the rest of the trigger code, else continue the trigger code to completion.
* In the stored proc in which you want to disable the trigger, do the following:

    BEGIN TRANSACTION

    UPDATE refTriggerManager SET blnDisabled = 1
    WHERE strTriggerName = 'myTrigger'

    /* UPDATE the table that owns 'myTrigger,' but which you want
       disabled. Since refTriggerManager.blnDisabled = 1, 'myTrigger'
       returns without executing its code. */

    UPDATE refTriggerManager SET blnDisabled = 0
    WHERE strTriggerName = 'myTrigger'

    /* Optional final UPDATE code that fires the trigger. Since
       refTriggerManager.blnDisabled = 0, 'myTrigger' executes in full. */

    COMMIT TRANSACTION

All of this takes place within a transaction, so it's isolated from the outside world and won't affect other UPDATEs on the target table. Does anyone see any problem with this approach? Bill
A: You can use 'EXEC' to disable and enable triggers from a stored procedure.
Example: EXEC ('ENABLE TRIGGER dbo.TriggerName on dbo.TriggeredTable')
{ "language": "en", "url": "https://stackoverflow.com/questions/174600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: CAML query to locate specific SPFolder nested in document library tree It seems like searching with CAML and SPQuery doesn't work properly against custom metadata, when searching for SPFolders instead of files, or when searching for custom content types. I've been using U2U to test a variety of queries, and just not getting anywhere. The docs aren't very complete on the topic, and google isn't helping either. In one test, I'm trying to locate any SPFolders in the tree that are a specific custom content-type. If I understand CAML correctly, this should work:

    <Query>
      <Where>
        <Eq>
          <FieldRef Name='ContentType' />
          <Value Type='Text'>CustomTypeName</Value>
        </Eq>
      </Where>
    </Query>

In another test, I'm trying to locate any SPFolder that has a custom metadata property set to a specific value.

    <Query>
      <Where>
        <Eq>
          <FieldRef Name='CustomProp' />
          <Value Type='Text'>SpecificPropValue</Value>
        </Eq>
      </Where>
    </Query>

In both cases, I'm setting the root for the search to a document library that contains folders, which contain folders, which contain folders (phew.) Also, I'm setting the SPQuery to search recursively. The folders I'm searching for, two steps down, are the farthest down in the tree. I don't want to iterate all the way in to manually locate the folders in question.
EDIT It might also be helpful to know that I'm using both SPList.GetItems with an SPQuery as an argument, and SPWeb.GetSiteData with an SPSiteDataQuery as an argument. At the moment it appears that folders aren't included in the search-set for either of these queries. Any help would be greatly appreciated.
A: After more research, I'm answering my own question. Apparently the methods that I'm using to query don't return SPFolders as items in the result set. Only list items are returned, basically just documents. My fix was to execute a CAML query for all the documents with a certain metadata tag/value, and then using the parent folder of the first one as the representative folder for the set.
Works well enough for my needs. A: Try adding SharePoint Manager and Stramit CAML Viewer to your toolset. I have found both to be very important for figuring out CAML problems.
{ "language": "en", "url": "https://stackoverflow.com/questions/174602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cross platform format string for variables of type size_t? On a cross platform c/c++ project (Win32, Linux, OSX), I need to use the *printf functions to print some variables of type size_t. In some environments size_t's are 8 bytes and on others they are 4. On glibc I have %zd, and on Win32 I can use %Id. Is there an elegant way to handle this?
A: The only thing I can think of, is the typical:

    #ifdef __WIN32__ // or whatever
    #define SSIZET_FMT "%ld"
    #else
    #define SSIZET_FMT "%zd"
    #endif

and then taking advantage of constant folding:

    fprintf(stream, "Your size_t var has value " SSIZET_FMT ".", your_var);

A: Dan Saks wrote an article in Embedded Systems Design which covered this matter. According to Dan, %zu is the standard way, but few compilers supported this. As an alternative, he recommended using %lu together with an explicit cast of the argument to unsigned long:

    size_t n;
    ...
    printf("%lu", (unsigned long)n);

A: Use boost::format. It's typesafe, so it'll print size_t correctly with %d, also you don't need to remember to put c_str() on std::strings when using it, and even if you pass a number to %s or vice versa, it'll work.
A: The PRIuPTR macro (from <inttypes.h>) defines a decimal format for uintptr_t, which should always be large enough that you can cast a size_t to it without truncating, e.g.

    fprintf(stream, "Your size_t var has value %" PRIuPTR ".", (uintptr_t) your_var);

A: I don't know of any satisfying solution, but you might consider a specialized function to format size_t items to a string, and print the string. (Alternatively, if you can get away with it, boost::format handles this kind of thing with ease.)
A: You just have to find an integer type with the largest storage class, cast the value to it, and then use the appropriate format string for the larger type. Note this solution will work for any type (ptrdiff_t, etc.), not just size_t. What you want to use is uintmax_t and the format macro PRIuMAX.
For Visual C++, you are going to need to download c99-compatible stdint.h and inttypes.h headers, because Microsoft doesn't provide them. Also see http://www.embedded.com/columns/technicalinsights/204700432 This article corrects the mistakes in the article quoted by Frederico.
A: Option 1: Since on most (if not all?) systems, the PRIuPTR printf format string from inttypes.h is also long enough to hold a size_t type, I recommend using the following defines for size_t printf format strings. However, it is important that you verify this will work for your particular architecture (compiler, hardware, etc), as the standard does not enforce this.

    #include <inttypes.h>

    // Printf format strings for `size_t` variable types.
    #define PRIdSZT PRIdPTR
    #define PRIiSZT PRIiPTR
    #define PRIoSZT PRIoPTR
    #define PRIuSZT PRIuPTR
    #define PRIxSZT PRIxPTR
    #define PRIXSZT PRIXPTR

Example usage:

    size_t my_variable;
    printf("%" PRIuSZT "\n", my_variable);

Option 2: Where possible, however, just use the %zu "z" length specifier, as shown here, for size_t types:

Example usage:

    size_t my_variable;
    printf("%zu\n", my_variable);

On some systems, however, such as STM32 microcontrollers using gcc as the compiler, the %z length specifier isn't necessarily implemented, and doing something like printf("%zu\n", my_size_t_num); may simply end up printing out a literal "%zu" (I personally tested this and found it to be true) instead of the value of your size_t variable.

Option 3: Where you need it to be absolutely guaranteed to work, however, or where you aren't sure about your particular architecture, just cast and print as a uint64_t and be done, as this is guaranteed to work, but requires the extra step of casting.

Example usage:

    #include <stdint.h>    // for uint64_t
    #include <inttypes.h>  // for PRIu64

    size_t my_variable;
    printf("%" PRIu64 "\n", (uint64_t)my_variable);

Sources Cited:
* http://www.cplusplus.com/reference/cstdio/printf/
* http://www.cplusplus.com/reference/cinttypes/
* http://www.cplusplus.com/reference/cstdint/

A: There are really two questions here. The first question is what the correct printf specifier string for the three platforms is. Note that size_t is an unsigned type. On Windows, use "%Iu". On Linux and OSX, use "%zu". The second question is how to support multiple platforms, given that things like format strings might be different on each platform. As other people have pointed out, using #ifdef gets ugly quickly. Instead, write a separate makefile or project file for each target platform. Then refer to the specifier by some macro name in your source files, defining the macro appropriately in each makefile. In particular, both GCC and Visual Studio accept a 'D' switch to define macros on the command line. If your build system is very complicated (multiple build options, generated sources, etc.), maintaining 3 separate makefiles might get unwieldy, and you are going to have to use some kind of advanced build system like CMake or the GNU autotools. But the basic principle is the same -- use the build system to define platform-specific macros instead of putting platform-detection logic in your source files.
A: My choice for that problem is to simply cast the size_t argument to unsigned long and use %lu everywhere - this of course only where values are not expected to exceed 2^32-1. If this is too short for you, you could always cast to unsigned long long and format it as %llu. Either way, your strings will never be awkward.
A: size_t is an unsigned type of at least 16 bits. Widths of 32 and 64 are often seen.

    printf("%zu\n", some_size_t_object); // Standard since C99

Above is the best way going forward, yet if code needs to also port to pre-C99 platforms, convert the value to some wide type. unsigned long is a reasonable candidate, yet it may be lacking.

    // OK, yet insufficient with large sizes > ULONG_MAX
    printf("%lu\n", (unsigned long) some_size_t_object);

or with conditional code

    #ifdef ULLONG_MAX
      printf("%llu\n", (unsigned long long) some_size_t_object);
    #else
      printf("%lu\n", (unsigned long) some_size_t_object);
    #endif

Lastly consider double. It is a bit inefficient, yet it should handle all ancient and new platforms until about the years 2030-2040, considering Moore's law, when double may lack a precise result.

    printf("%.0f\n", (double) some_size_t_object);
{ "language": "en", "url": "https://stackoverflow.com/questions/174612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Regular Expression: Match to (aa|bb) (cc)? My regular expression needs to be able to find the strings:

* Visual Studio 2008
* Visual Studio Express 2008
* Visual Basic 2008
* Visual Basic Express 2008
* Visual C++ 2008
* Visual C++ Express 2008

and a host of other similar variants, to be replaced with this one single string Visual Studio 2005 I tried "Visual (Basic|C++|Studio) (Express)? 2008", but it is not working. Any ideas? Edit: Now I have tried "Visual (Basic)|(C++)|(Studio) (Express )?2008", but the replaced line becomes "Visual Studio 2005 Express 2008" for the input "Visual Basic Express 2008".
A: It should be "Visual (Basic|C\+\+|Studio)( Express)? 2008"

    >>> import re
    >>> repl = 'Visual Studio 2005'
    >>> regexp = re.compile('Visual (Studio|Basic|C\+\+)( Express)? 2008')
    >>> test1 = 'Visual Studio 2008'
    >>> test2 = 'Visual Studio Express 2008'
    >>> test3 = 'Visual C++ Express 2008'
    >>> test4 = 'Visual C++ Express 1008'
    >>> re.sub(regexp,repl,test1)
    'Visual Studio 2005'
    >>> re.sub(regexp,repl,test2)
    'Visual Studio 2005'
    >>> re.sub(regexp,repl,test3)
    'Visual Studio 2005'
    >>> re.sub(regexp,repl,test4)
    'Visual C++ Express 1008'

A: In the case without an Express, you are looking for 2 spaces before the year. That is no good. Try this: "Visual (Basic|C\+\+|Studio) (Express )?2008" Depending on the input, it might be enough to use: "Visual [^ ]+ (Express )?2008"
A: You need to escape the special characters (like +). Also the 'express' bit should have a space on either side.
A: How about this: Visual (Basic|C\\+\\+|Studio) (Express )?2008
A: Unless your sample input is riddled with all sorts of permutations of your keywords, you could simplify it immensely with this: Visual .+? 2008
A: I think this should work: /visual (studio|basic|c\+\+)? (express)?\s?2008/i
A: Try with: Visual (Basic|C\+\+|Studio)( Express)? 2008 (that is, quote the '+' of 'C++' and include the space in "Express"). Since it's Python and you don't need the parenthesized parts: Visual (?:Basic|C\+\+|Studio)(?: Express)? 2008
A: This is more explicit with spaces: Visual\s(Basic|C\+\+|Studio)(\sExpress)?\s2008
A: A very late answer, but I'd like to answer. You can simply try this: /Visual.*2008/g http://regex101.com/r/fI0yU1/1
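Pulling the accepted fix together, a small sketch (the PATTERN and normalize names are mine, not from the answers): escape the '+' and fold the space into the optional "Express " group, so the no-Express case doesn't need two consecutive spaces before the year.

```python
import re

# Non-capturing groups keep re.sub simple; the optional group carries its
# own leading space so "Visual C++ 2008" still matches with one space.
PATTERN = re.compile(r"Visual (?:Basic|C\+\+|Studio)(?: Express)? 2008")

def normalize(text):
    """Replace any matched 2008 product name with 'Visual Studio 2005'."""
    return PATTERN.sub("Visual Studio 2005", text)
```

Strings with a different year (e.g. "Visual C++ Express 1008") pass through unchanged.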
{ "language": "en", "url": "https://stackoverflow.com/questions/174633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Wiimote example programs I'd like to use the Wiimote (accelerometers, gyroscopes, infrared camera, etc, etc, etc) on various applications. It's a bluetooth device, and I know others have connected it to their computer.

* What's the easiest way to start using it in my software - are there libraries for C#, for instance?
* I want my software to be usable and easily installable - what's the current easiest way to connect a wiimote to the computer? Can I make that process part of my software installation?

-Adam
A: Coding4Fun had a managed C# library up that would allow you to interface with it but it seems their site is down right now. Though I think they used the Robotics Studio so that may be a place to start. Found it... http://www.codeplex.com/WiimoteLib Oh and I forgot to post a link to these videos I saw quite some time ago. http://www.cs.cmu.edu/~johnny/projects/wii/
A: If you use WiimoteLib (from Coding4Fun as mentioned in another answer), there is an example application called WiimoteTest. This tests all of the Wiimote inputs and outputs, including for multiple Wiimotes at the same time, so it is a good starting point for your own Wiimote code as it gives you an example of how to do pretty much anything you would want to. For the second part of your question, connecting the Wiimote to the computer is pretty much the same as connecting any other Bluetooth device. I don't know that it would be very suitable to have this done at installation as it is likely to be connected and disconnected a lot, especially since the Wiimote will turn itself off if not used for a while. However, it's pretty much a matter of following a standard Windows wizard to connect to it, so it's not too hard. This assumes you have a Bluetooth driver that will work with the Wiimote - more information on that is available at the Coding4Fun website. It worked for me with the default driver that already came with my laptop but if it doesn't for you, the one they recommend is BlueSoleil.
A: Have you seen Johnny Chung Lee's 'Procrastineering' Blog? He's written a lot on the subject of using wii remotes and has some fantastic demonstration videos. [Edit] I just found out Mr Lee did a TED talk which gives a good introduction to the stuff he's done too... There's a wealth of information over on Wiibrew.org - check out their Wiimote Library page for some other APIs if you want to look beyond c#. As an avid Python fan, I'm quite curious to have a play with the pyWiimote library :-)
A: what's the current easiest way to connect a wiimote to the computer?
I haven't found a way to connect the Wiimote from within my own software; you have to pair it manually in Windows. Pairing a Bluetooth device through the standard Windows stack can take a long time; try the Toshiba Bluetooth stack, which is more convenient.
{ "language": "en", "url": "https://stackoverflow.com/questions/174653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Calculating which tiles are lit in a tile-based game ("raytracing") I'm writing a little tile-based game, for which I'd like to support light sources. But my algorithm-fu is too weak, hence I come to you for help. The situation is like this: There is a tile-based map (held as a 2D array), containing a single light source and several items standing around. I want to calculate which tiles are lit up by the light source, and which are in shadow. A visual aid of what it would look like, approximately. The L is the light source, the Xs are items blocking the light, the 0s are lit tiles, and the -s are tiles in shadow.

    0 0 0 0 0 0 - - 0
    0 0 0 0 0 0 - 0 0
    0 0 0 0 0 X 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 0 L 0 0 0 0
    0 0 0 0 0 0 0 0 0
    0 0 0 X X X X 0 0
    0 0 0 - - - - - 0
    0 0 - - - - - - -

A fractional system would be even better, of course, where a tile can be in half-shadow due to being partially obscured. The algorithm wouldn't have to be perfect - just not obviously wrong and reasonably fast. (Of course, there would be multiple light sources, but that's just a loop.) Any takers?
A: The algorithms being presented here seem to me to be doing more calculations than I think are needed. I have not tested this but I think it would work: Initially, mark all pixels as lit. For every pixel on the edge of the map: As Arachnid suggested, use Bresenham to trace a line from the pixel to the light. If that line strikes an obstruction then mark all pixels from the edge to just beyond the obstruction as being in shadow.
A: Quick and dirty: (depending on how big the array is)

* Loop through each tile
* Draw a line to the Light
* If any part of the line hits an X, then it is in shadow
* (Optional): calculate the amount of X the line passes through and do fancy maths to determine the proportion of the tile in shadow.
NB: This could be done by anti-aliasing the line between the tile and the Light (therefore looking at other tiles along the route back to the light source); during the thresholding procedure these will appear as small anomalies. Depending on the logic used you could potentially determine how much (if at all) the tile is in shadow. You could also keep track of which pixels have been tested, and therefore optimize the solution a little and not re-test pixels twice. This could be done pretty well by using image manipulation and drawing straight lines between pixels (tiles), if the lines are semi-transparent and the X blocks are semi-transparent again. You can threshold the image to determine if the line has intersected an 'X'. If you have an option to use a 3rd party tool, then I'd probably take it. In the long run it might turn out to be quicker, but you'd understand less about your game.
A: This is just for fun: You can replicate the Doom 3 approach in 2D if you first do a step to convert your tiles into lines. For instance,

    - - - - -
    - X X X -
    - X X - -
    - X - - -
    - - - - L

...would be reduced into three lines connecting the corners of the solid object in a triangle. Then, do what the Doom 3 engine does: From the perspective of the light source, consider each "wall" that faces the light. (In this scene, only the diagonal line would be considered.) For each such line, project it into a trapezoid whose front edge is the original line, whose sides lie on lines from the light source through each end point, and whose back is far away, past the whole scene. So, it's a trapezoid that "points at" the light. It contains all the space that the wall casts its shadow on. Fill every tile in this trapezoid with darkness. Proceed through all such lines and you will end up with a "stencil" that includes all the tiles visible from the light source. Fill these tiles with the light color.
You may wish to light the tile a little less as you get away from the source ("attenuation") or do other fancy stuff. Repeat for every light source in your scene.
A: To check if a tile is in shadow you need to draw a straight line back to the light source. If the line intersects another tile that's occupied, then the tile you were testing is in shadow. Raytracing algorithms do this for every object (in your case tile) in the view. The Raytracing article on Wikipedia has pseudocode.
A: Here is a very simple but fairly effective approach that uses linear time in the number of tiles on screen. Each tile is either opaque or transparent (that's given to us), and each can be visible or shaded (that's what we're trying to compute). We start by marking the avatar itself as "visible". We then apply this recursive rule to determine the visibility of the remaining tiles.

* If the tile is on the same row or column as the avatar, then it is only visible if the adjacent tile nearer to the avatar is visible and transparent.
* If the tile is on a 45 degree diagonal from the avatar, then it is only visible if the neighboring diagonal tile (towards the avatar) is visible and transparent.
* In all other cases, consider the three neighboring tiles that are closer to the avatar than the tile in question. For example, if this tile is at (x,y) and is above and to the right of the avatar, then the three tiles to consider are (x-1, y), (x, y-1) and (x-1, y-1). The tile in question is visible if any of those three tiles are visible and transparent.

In order to make this work, the tiles must be inspected in a specific order to ensure that the recursive cases are already computed. Here is an example of a working ordering, starting from 0 (which is the avatar itself) and counting up:

    9876789
    8543458
    7421247
    6310136
    7421247
    8543458
    9876789

Tiles with the same number can be inspected in any order amongst themselves.
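The recursive rule above can be sketched in Python. This is a hedged illustration, not code from the answer: the function name, the set-based grid, and the (max, min) inspection ordering are my own choices, chosen to match the ordering diagram shown.

```python
def visibility(opaque, width, height, ax, ay):
    """Compute tile visibility around an avatar at (ax, ay).

    `opaque` is a set of (x, y) blocking tiles. Returns a dict mapping each
    tile to True (visible) or False (shaded). An opaque tile can itself be
    visible (you see the wall), but nothing is visible through it.
    """
    visible = {(ax, ay): True}

    # Inspect tiles so the closer neighbors of each tile come first,
    # mirroring the numbered ordering diagram in the answer.
    def order(t):
        dx, dy = abs(t[0] - ax), abs(t[1] - ay)
        return (max(dx, dy), min(dx, dy))

    def lit_through(t):
        # True when light can pass onward through tile t.
        return visible.get(t, False) and t not in opaque

    tiles = sorted(((x, y) for x in range(width) for y in range(height)), key=order)
    for x, y in tiles:
        if (x, y) == (ax, ay):
            continue
        sx = 1 if x > ax else -1
        sy = 1 if y > ay else -1
        if y == ay:                          # same row as the avatar
            vis = lit_through((x - sx, y))
        elif x == ax:                        # same column
            vis = lit_through((x, y - sy))
        elif abs(x - ax) == abs(y - ay):     # 45-degree diagonal
            vis = lit_through((x - sx, y - sy))
        else:                                # any of the three closer tiles
            vis = (lit_through((x - sx, y)) or lit_through((x, y - sy))
                   or lit_through((x - sx, y - sy)))
        visible[(x, y)] = vis
    return visible
```

Swapping the avatar for a light source gives the lit/shadow map the question asks for.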
The result is not beautiful shadow-casting, but computes believable tile visibility.
A: I know this is a years-old question, but for anyone searching for this style of stuff I'd like to offer a solution I used once for a roguelike of my own; manually "precalculated" FOV. If your light source's field of view has a maximum outer distance, it's really not very much effort to hand-draw the shadows created by blocking objects. You only need to draw 1/8th of the circle (plus the straight and diagonal directions); you can use symmetry for the other eighths. You'll have as many shadowmaps as you have squares in that 1/8th of a circle. Then just OR them together according to objects. The three major pros for this are:

1. It's very quick if implemented right
2. You get to decide how the shadow should be cast, no comparing which algorithm handles which situation the best
3. No weird algorithm-induced edge cases which you have to somehow fix

The con is you don't really get to implement a fun algorithm.
A: The roguelike development community has a bit of an obsession with line-of-sight, field-of-view algorithms. Here's a link to a roguelike wiki article on the subject: http://roguebasin.roguelikedevelopment.org/index.php?title=Field_of_Vision For my roguelike game, I implemented a shadow casting algorithm (http://roguebasin.roguelikedevelopment.org/index.php?title=Shadow_casting) in Python. It was a bit complicated to put together, but ran reasonably efficiently (even in pure Python) and generated nice results. The "Permissive Field of View" seems to be gaining popularity as well: http://roguebasin.roguelikedevelopment.org/index.php?title=Permissive_Field_of_View
A: TK's solution is the one that you would generally use for this sort of thing. For the partial lighting scenario, you could have it so that if a tile results in being in shadow, that tile is then split up into 4 tiles and each one of those is tested. You could then split that up as much as you wanted?
Edit: You can also optimise it out a bit by not testing any of the tiles adjacent to a light - this would be more important to do when you have multiple light sources, I guess... A: I've actually just recently wrote this functionality into one of my projects. void Battle::CheckSensorRange(Unit* unit,bool fog){ int sensorRange = 0; for(int i=0; i < unit->GetSensorSlots(); i++){ if(unit->GetSensorSlot(i)->GetSlotEmpty() == false){ sensorRange += unit->GetSensorSlot(i)->GetSensor()->GetRange()+1; } } int originX = unit->GetUnitX(); int originY = unit->GetUnitY(); float lineLength; vector <Place> maxCircle; //get a circle around the unit for(int i = originX - sensorRange; i < originX + sensorRange; i++){ if(i < 0){ continue; } for(int j = originY - sensorRange; j < originY + sensorRange; j++){ if(j < 0){ continue; } lineLength = sqrt( (float)((originX - i)*(originX - i)) + (float)((originY - j)*(originY - j))); if(lineLength < (float)sensorRange){ Place tmp; tmp.x = i; tmp.y = j; maxCircle.push_back(tmp); } } } //if we're supposed to fog everything we don't have to do any fancy calculations if(fog){ for(int circleI = 0; circleI < (int) maxCircle.size(); circleI++){ Map->GetGrid(maxCircle[circleI].x,maxCircle[circleI].y)->SetFog(fog); } }else{ bool LOSCheck = true; vector <bool> placeCheck; //have to check all of the tiles to begin with for(int circleI = 0; circleI < (int) maxCircle.size(); circleI++){ placeCheck.push_back(true); } //for all tiles in the circle, check LOS for(int circleI = 0; circleI < (int) maxCircle.size(); circleI++){ vector<Place> lineTiles; lineTiles = line(originX, originY, maxCircle[circleI].x, maxCircle[circleI].y); //check each tile in the line for LOS for(int lineI = 0; lineI < (int) lineTiles.size(); lineI++){ if(false == CheckPlaceLOS(lineTiles[lineI], unit)){ LOSCheck = false; //mark this tile not to be checked again placeCheck[circleI] = false; } if(false == LOSCheck){ break; } } if(LOSCheck){ 
                Map->GetGrid(maxCircle[circleI].x,maxCircle[circleI].y)->SetFog(fog);
            }else{
                LOSCheck = true;
            }
        }
    }
} There's some extra stuff in there that you wouldn't need if you're adapting it for your own use. The type Place is just defined as an x and y position for convenience's sake. The line function is taken from Wikipedia with very small modifications. Instead of printing out x y coordinates I changed it to return a place vector with all the points in the line. The CheckPlaceLOS function just returns true or false based on whether the tile has an object on it. There are some more optimizations that could be done with this, but this is fine for my needs. A: You can get into all sorts of complexities with calculating occlusion etc, or you can go for the simple brute force method: For every cell, use a line drawing algorithm such as the Bresenham Line Algorithm to examine every cell between the current one and the light source. If any are filled cells or (if you have only one light source) cells that have already been tested and found to be in shadow, your cell is in shadow. If you encounter a cell known to be lit, your cell will likewise be lit. An easy optimisation to this is to set the state of any cells you encounter along the line to whatever the final outcome is. This is more or less what I used in my 2004 IOCCC winning entry. Obviously that doesn't make good example code, though. ;) Edit: As loren points out, with these optimisations, you only need to pick the pixels along the edge of the map to trace from. A: I have implemented tile-based field of view in a single C function. Here it is: https://gist.github.com/zloedi/9551625 A: If you don't want to spend the time to reinvent/re-implement this, there are plenty of game engines out there. Ogre3D is an open source game engine that fully supports lighting, as well as sound and game controls.
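To make the brute-force Bresenham approach described above concrete, here is an illustrative sketch of my own (not code from any of the answers; the function names and the `grid[y][x]` layout, True meaning a blocking cell, are my choices). It traces a line from the light to each target tile and stops at the first blocker:

```python
def bresenham(x0, y0, x1, y1):
    """Yield the grid cells on the line from (x0, y0) to (x1, y1)."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy


def visible(grid, light, target):
    """True if no blocking cell lies strictly between light and target."""
    for x, y in bresenham(light[0], light[1], target[0], target[1]):
        if (x, y) == target:
            return True   # reached the target without hitting a blocker
        if (x, y) != light and grid[y][x]:
            return False  # a wall shadows everything behind it on this line
    return True
```

The optimisations mentioned above - caching the lit/shadowed result for cells encountered along the line, or only tracing to cells on the map edge - drop straight into this loop.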
{ "language": "en", "url": "https://stackoverflow.com/questions/174659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: A C# to VB.Net conversion utility that handles Automatic properties correctly? I hope this isn't considered a duplicate since it's more pointed than similar questions (I'm curious about a specific weakness in C# to VB.net conversion utilities). I've been looking at using a tool like this .net code converter to convert a class library to VB since I'm the only one in my group comfortable with C#. The problem I've run into is that it doesn't generate proper VB for automatic properties. It creates empty get/set routines. So this: public string TransactionType { get; private set; } Becomes this: Public Property TransactionType() As String
 Get
 End Get
 Private Set(ByVal value As String)
 End Set
End Property The tools linked here and here have similar issues - some create valid properties, but they don't respect the access level of the set routine. Side question - If you were going to fix the converter on DeveloperFusion, would you have it return something like this? Private _TransactionType As String
Public Property TransactionType() As String
 Get
 Return _TransactionType
 End Get
 Private Set(ByVal value As String)
 _TransactionType = value
 End Set
End Property A: We've now updated the code generator to support this scenario. If you spot any others that we're not doing very well, please do drop me a line. A: I'd recommend compiling the code and using something like Red-Gate's reflector http://www.red-gate.com/products/reflector/index.htm to handle the conversion. Now it's not "perfect," and I'm not sure if it handles automatic properties (though I'd imagine it would). What makes this possible is that when you compile a .NET language down to IL they're exactly the same. The language is just another layer on top of that. So 2 properties that would look the same in their native languages compile to the exact same IL code. So reversing this to other languages using something like Reflector is easy and quick.
A: I stumbled on this while looking for a way to automate using reflector to translate code (since there are several plugins for it to generate code in other languages (even PowerShell)), but you made me wonder, so I tried it. With the compatibility set to .Net 3.5, it converts your example to this: Property TransactionType As String
 Public Get
 Private Set(ByVal value As String)
End Property If you dig in, it does report that there are compiler-generated methods which it doesn't export in VB.Net or C# with 3.5 compatibility on ... HOWEVER, if you switch it to 2.0, the same code will generate this: Property TransactionType As String
 Public Get
 Return Me.<TransactionType>k__BackingField
 End Get
 Private Set(ByVal value As String)
 Me.<TransactionType>k__BackingField = value
 End Set
End Property
<CompilerGenerated> _
Private <TransactionType>k__BackingField As String P.S.: if you try using a disassembler like Reflector to generate code, remember to keep the .pdb file around so you get proper names for the variables ;) A: As an answer to your side question: yes, that code is pretty much exactly what I'd get it to produce. You can't get it to do exactly what the C# code does, which is to make the name of the variable "unspeakable" (i.e. impossible to reference in code) but that's probably close enough. A: I would suggest checking out SharpDevelop (sometimes written as #develop). It's an Open Source .NET IDE that, among other things, can convert (with some issues) from C# to VB.NET and vice versa.
{ "language": "en", "url": "https://stackoverflow.com/questions/174662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: operators as strings I need to evaluate a mathematical expression that is presented to me as a string in C#. The example is noddy, but it gets the point across: the string is the expression. I need the evaluation to then populate an int. There is no Eval() in C# like in other languages... String myString = "3*4"; Edit: I am on VS2008 Tried the Microsoft.JScript route. It's a deprecated method (but still compiles - with a warning). However, the Microsoft.JScript dll that I have doesn't work on public object InvokeMember(string name, BindingFlags invokeAttr, Binder binder, object target, object[] args); Complains that there is a missing ";" - go figure... EDIT 2 Solution - was the codeDom one - it worked as there are no security issues; only I am ever going to be running the code. Many thanks for the replies ... And the link to the new Dragon Book is awesome. EDIT 3 Matt's dataTable.Compute() also works - even better for the security conscious. (parameter checking noted) A: You can use the jscript interpreter. A great article for it is here: http://www.odetocode.com/Articles/80.aspx A: All the other answers are possibly overkill. If all you need is simple arithmetic, do this. DataTable dummy = new DataTable(); Console.WriteLine(dummy.Compute("15 / 3",string.Empty)); EDIT: a little more information. Check out the MSDN documentation for the Expression property of the System.Data.DataColumn class. The stuff on "Expression Syntax" outlines a list of commands you can use in addition to the arithmetic operators (e.g. IIF, LEN, etc.). Thanks everyone for voting up my first posted answer! A: The way I see it, you have two options - use an expression evaluator or construct, compile and run C# code on the fly. I would go with an expression evaluator library, as you do not have to worry about any security issues. That is, you might not be able to use code generation in medium trust environments, such as most shared hosting servers.
Here is an example for generating code to evaluate expressions: http://www.vbforums.com/showthread.php?t=397264 A: I did this as a personal exercise in C# a few weeks ago. It is quite a bit of code and is poorly commented in places. But it did work with a lot of test cases. Enjoy! using System; using System.Collections.Generic; using System.Text.RegularExpressions; namespace StackOverflow { class Start { public static void Main(string[] args) { Evaluator ev; string variableValue, eq; Console.Write("Enter equation: "); eq = Console.ReadLine(); while (eq != "quit") { ev = new Evaluator(eq); foreach (Variable v in ev.Variables) { Console.Write(v.Name + " = "); variableValue = Console.ReadLine(); ev.SetVariable(v.Name, Convert.ToDecimal(variableValue)); } Console.WriteLine(ev.Evaluate()); Console.Write("Enter equation: "); eq = Console.ReadLine(); } } } class EvalNode { public virtual decimal Evaluate() { return decimal.Zero; } } class ValueNode : EvalNode { decimal value; public ValueNode(decimal v) { value = v; } public override decimal Evaluate() { return value; } public override string ToString() { return value.ToString(); } } class FunctionNode : EvalNode { EvalNode lhs = new ValueNode(decimal.Zero); EvalNode rhs = new ValueNode(decimal.Zero); string op = "+"; public string Op { get { return op; } set { op = value; } } internal EvalNode Rhs { get { return rhs; } set { rhs = value; } } internal EvalNode Lhs { get { return lhs; } set { lhs = value; } } public override decimal Evaluate() { decimal result = decimal.Zero; switch (op) { case "+": result = lhs.Evaluate() + rhs.Evaluate(); break; case "-": result = lhs.Evaluate() - rhs.Evaluate(); break; case "*": result = lhs.Evaluate() * rhs.Evaluate(); break; case "/": result = lhs.Evaluate() / rhs.Evaluate(); break; case "%": result = lhs.Evaluate() % rhs.Evaluate(); break; case "^": double x = Convert.ToDouble(lhs.Evaluate()); double y = Convert.ToDouble(rhs.Evaluate()); result = Convert.ToDecimal(Math.Pow(x, y)); 
break; case "!": result = Factorial(lhs.Evaluate()); break; } return result; } private decimal Factorial(decimal factor) { if (factor < 1) return 1; return factor * Factorial(factor - 1); } public override string ToString() { return "(" + lhs.ToString() + " " + op + " " + rhs.ToString() + ")"; } } public class Evaluator { string equation = ""; Dictionary<string, Variable> variables = new Dictionary<string, Variable>(); public string Equation { get { return equation; } set { equation = value; } } public Variable[] Variables { get { return new List<Variable>(variables.Values).ToArray(); } } public void SetVariable(string name, decimal value) { if (variables.ContainsKey(name)) { Variable x = variables[name]; x.Value = value; variables[name] = x; } } public Evaluator(string equation) { this.equation = equation; SetVariables(); } public decimal Evaluate() { return Evaluate(equation, new List<Variable>(variables.Values)); } public decimal Evaluate(string text) { decimal result = decimal.Zero; equation = text; EvalNode parsed; equation = equation.Replace(" ", ""); parsed = Parse(equation, "qx"); if (parsed != null) result = parsed.Evaluate(); return result; } public decimal Evaluate(string text, List<Variable> variables) { foreach (Variable v in variables) { text = text.Replace(v.Name, v.Value.ToString()); } return Evaluate(text); } private static bool EquationHasVariables(string equation) { Regex letters = new Regex(@"[A-Za-z]"); return letters.IsMatch(equation); } private void SetVariables() { Regex letters = new Regex(@"([A-Za-z]+)"); Variable v; foreach (Match m in letters.Matches(equation, 0)) { v = new Variable(m.Groups[1].Value, decimal.Zero); if (!variables.ContainsKey(v.Name)) { variables.Add(v.Name, v); } } } #region Parse V2 private Dictionary<string, string> parenthesesText = new Dictionary<string, string>(); /* * 1. All the text in first-level parentheses is replaced with replaceText plus an index value. 
* (All nested parentheses are parsed in recursive calls) * 2. The simple function is parsed given the order of operations (reverse priority to * keep the order of operations correct when evaluating). * a. Addition (+), subtraction (-) -> left to right * b. Multiplication (*), division (/), modulo (%) -> left to right * c. Exponents (^) -> right to left * d. Factorials (!) -> left to right * e. No op (number, replaced parentheses) * 3. When an op is found, a two recursive calls are generated -- parsing the LHS and * parsing the RHS. * 4. An EvalNode representing the root node of the evaluations tree is returned. * * Ex. 3 + 5 (3 + 5) * 8 * + * * / \ / \ * 3 5 + 8 * / \ * 3 + 5 * 8 3 5 * + * / \ * 3 * * / \ * 5 8 */ /// <summary> /// Parses the expression and returns the root node of a tree. /// </summary> /// <param name="eq">Equation to be parsed</param> /// <param name="replaceText">Text base that replaces text in parentheses</param> /// <returns></returns> private EvalNode Parse(string eq, string replaceText) { int randomKeyIndex = 0; eq = eq.Replace(" ", ""); if (eq.Length == 0) { return new ValueNode(decimal.Zero); } int leftParentIndex = -1; int rightParentIndex = -1; SetIndexes(eq, ref leftParentIndex, ref rightParentIndex); //remove extraneous outer parentheses while (leftParentIndex == 0 && rightParentIndex == eq.Length - 1) { eq = eq.Substring(1, eq.Length - 2); SetIndexes(eq, ref leftParentIndex, ref rightParentIndex); } //Pull out all expressions in parentheses replaceText = GetNextReplaceText(replaceText, randomKeyIndex); while (leftParentIndex != -1 && rightParentIndex != -1) { //replace the string with a random set of characters, stored extracted text in dictionary keyed on the random set of chars string p = eq.Substring(leftParentIndex, rightParentIndex - leftParentIndex + 1); eq = eq.Replace(p, replaceText); parenthesesText.Add(replaceText, p); leftParentIndex = 0; rightParentIndex = 0; replaceText = 
replaceText.Remove(replaceText.LastIndexOf(randomKeyIndex.ToString())); randomKeyIndex++; replaceText = GetNextReplaceText(replaceText, randomKeyIndex); SetIndexes(eq, ref leftParentIndex, ref rightParentIndex); } /* * Be sure to implement these operators in the function node class */ char[] ops_order0 = new char[2] { '+', '-' }; char[] ops_order1 = new char[3] { '*', '/', '%' }; char[] ops_order2 = new char[1] { '^' }; char[] ops_order3 = new char[1] { '!' }; /* * In order to evaluate nodes LTR, the right-most node must be the root node * of the tree, which is why we find the last index of LTR ops. The reverse * is the case for RTL ops. */ int order0Index = eq.LastIndexOfAny(ops_order0); if (order0Index > -1) { return CreateFunctionNode(eq, order0Index, replaceText + "0"); } int order1Index = eq.LastIndexOfAny(ops_order1); if (order1Index > -1) { return CreateFunctionNode(eq, order1Index, replaceText + "0"); } int order2Index = eq.IndexOfAny(ops_order2); if (order2Index > -1) { return CreateFunctionNode(eq, order2Index, replaceText + "0"); } int order3Index = eq.LastIndexOfAny(ops_order3); if (order3Index > -1) { return CreateFunctionNode(eq, order3Index, replaceText + "0"); } //no operators... 
 eq = eq.Replace("(", "");
 eq = eq.Replace(")", "");
 if (char.IsLetter(eq[0]))
 {
 return Parse(parenthesesText[eq], replaceText + "0");
 }
 return new ValueNode(decimal.Parse(eq));
 }
 private string GetNextReplaceText(string replaceText, int randomKeyIndex)
 {
 while (parenthesesText.ContainsKey(replaceText))
 {
 replaceText = replaceText + randomKeyIndex.ToString();
 }
 return replaceText;
 }
 private EvalNode CreateFunctionNode(string eq, int index, string randomKey)
 {
 FunctionNode func = new FunctionNode();
 func.Op = eq[index].ToString();
 func.Lhs = Parse(eq.Substring(0, index), randomKey);
 func.Rhs = Parse(eq.Substring(index + 1), randomKey);
 return func;
 }
 #endregion
 /// <summary>
 /// Find the first set of parentheses
 /// </summary>
 /// <param name="eq"></param>
 /// <param name="leftParentIndex"></param>
 /// <param name="rightParentIndex"></param>
 private static void SetIndexes(string eq, ref int leftParentIndex, ref int rightParentIndex)
 {
 leftParentIndex = eq.IndexOf('(');
 rightParentIndex = eq.IndexOf(')');
 int tempIndex = eq.IndexOf('(', leftParentIndex + 1);
 while (tempIndex != -1 && tempIndex < rightParentIndex)
 {
 rightParentIndex = eq.IndexOf(')', rightParentIndex + 1);
 tempIndex = eq.IndexOf('(', tempIndex + 1);
 }
 }
 }
 public struct Variable
 {
 public string Name;
 public decimal Value;
 public Variable(string n, decimal v)
 {
 Name = n;
 Value = v;
 }
 }
} A: When you say, "like in other languages" you should instead say, "like in dynamic languages". For dynamic languages like python, ruby, and many interpreted languages, an Eval() function is a natural element. In fact, it's probably even pretty trivial to implement your own. However, .Net is at its core a static, strongly-typed, compiled platform (at least until the Dynamic Language Runtime gets more support). This has natural advantages like code-injection security and compile-time type checking that are hard to ignore.
But it means an Eval() function isn't such a good fit - it wants to be able to compile the expression ahead of time. In this kind of platform, there are generally other, safer, ways to accomplish the same task. A: Check out Flee. A: MS has a sample called Dynamic Query Library. It is provided by the LINQ team to dynamically construct LINQ queries such as: Dim query = Northwind.Products.Where("CategoryID=2") You might check to see if it offers rudimentary math capabilities. A: In an interpreted language you might have a chance of evaluating the string using the interpreter. In C# you need a parser for the language the string is written in (the language of mathematical expressions). This is a non-trivial exercise. If you want to do it, use a recursive-descent parser. The early chapters of the "Dragon Book" (Compilers: Design, etc. by Aho, Sethi and Ullman - 1st ed 1977 or 2nd ed 2007) have a good explanation of what you need to do. An alternative might be to include in your project a component written in perl, which is supposed to be available for .NET now, and use perl to do the evaluation. A: The jscript interpreter could do, or you can write your own parser if the expression is simple (beware, it becomes complicated really fast). I'm pretty sure there is no direct "Eval(string)" method in C# since it is not interpreted. Keep in mind though that code interpretation is subject to code injection, so be extra careful :) A: Will you need to access the values of other variables when calculating an expression? A: After some googling, I see there's the possibility to create and compile code on the fly using CodeDom. (See a tutorial). I personally don't think that approach is a very good idea, since the user can enter whatever code he wants, but that may be an area to explore (for example by only validating the input, and only allowing numbers and simple math operations). A: Some other suggestions: * *Mono 2.0 (came out today) has an eval method.
*You can easily write a small domain-specific language in boo. *You can create an old-school recursive-descent EBNF parser. A: I have posted the source for an ultra compact (1 class, < 10 KiB) Java Math Evaluator on my web site. It should be trivial to port this to C#. There are other ones out there that might do more, but this is very capable, and it's tiny.
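To make the recursive-descent suggestion concrete, here is an illustrative sketch of my own (not taken from any of the answers, and written in Python rather than C# purely for brevity - the port is mechanical; the function names are invented). It handles the four arithmetic operators, unary minus and parentheses, and precedence falls directly out of the grammar:

```python
# Grammar:  expr   := term (('+' | '-') term)*
#           term   := factor (('*' | '/') factor)*
#           factor := NUMBER | '(' expr ')' | '-' factor
import re

def evaluate(text):
    # Tokenize into numbers and single-character operators.
    tokens = re.findall(r"\d+\.?\d*|[()+\-*/]", text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def expr():
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if take() == "+" else value - term()
        return value

    def term():
        value = factor()
        while peek() in ("*", "/"):
            value = value * factor() if take() == "*" else value / factor()
        return value

    def factor():
        tok = take()  # a sketch: malformed input will simply raise
        if tok == "(":
            value = expr()
            take()  # consume ')'
            return value
        if tok == "-":
            return -factor()
        return float(tok)

    return expr()
```

For example, evaluate("3*4") returns 12.0, and operator precedence comes for free: evaluate("2+3*4") is 14.0 while evaluate("(2+3)*4") is 20.0.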
{ "language": "en", "url": "https://stackoverflow.com/questions/174664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Should I create a JQuery server control for ASP.net to best use it in my apps? I have been rather successful in promoting JQuery within my organization. No small feat on its own. However, one of the ideas being kicked around here to make it part of our app is to create an ASP.net server-side control. (We are going to be sticking with WebForms for the foreseeable future.) I'm not too wild about this approach as it seems like overkill when a couple of script tags will do the job. We found an article on the web, and the amount of code involved really doesn't seem to justify itself. However, I do hear that there is some benefit in the script caching or generation that happens with the server controls. My questions: * *Has anyone else written an ASP.net server control to serve up the JQuery js code? *Does anyone else think that this is a crazy idea just to avoid writing JQuery or Javascript code? A: I know Microsoft (along with Nokia) is "mainstreaming" jQuery and will be integrating it with future versions of Visual Studio. You may want to explore how they'll be officially using it so you can tailor your setup now, and hopefully ease your transition to "official MS jQuery" down the road. A: I agree with you. It is not worth the time and overhead to create a control just to add a jQuery script location. A better solution would be to have 1 .js file that has all the links required to load on the page. That could eliminate a lot of .js links, if that is the issue with the team. The only time I would excuse creating a custom control just to link JavaScript would be if, for whatever reason, you did not want to copy the JavaScript to the server and wanted to embed it into the .dll. However, you will not stop people from seeing the JavaScript on the page, because if you embed the files in the .dll you must register them in the header as the full script file.
A: One reason for using a server control for injecting the JavaScript reference is that it is easier to control which JavaScript files get added to a page. Imagine a scenario where you use jQuery core, plus jQuery UI and a handful of other plugins. Depending on how you coded this control, you could allow a developer to easily choose which features were needed for a specific page without worrying about the specific scripts needed. This approach would allow you lots of flexibility for segmenting your application: for example, the server control might be used by a master page, a child page, user controls or another server control. If the master page registers a requirement for one jQuery library, but the child page or one of the user controls requires additional libraries, then having a unified API makes this simple. Personally, I believe this is best handled by a helper library rather than a server control. The bottom line is how much you want each developer re-inventing the wheel versus using a common, simple-to-use API which enforces uniformity across your apps. A: I found Scott Hanselman's blog post with a sample app that has ASP.net AJAX + JQuery. It's a simple app, but it includes all the javascript with script tags. I don't see any advisement to use a server control to serve up the scripts.
{ "language": "en", "url": "https://stackoverflow.com/questions/174672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Using PHP to write to OS X's log database? Do any apps/packages exist that will support writing to OS X's log database from external sources? I'm not too familiar with the specifics of the database, beyond the fact that you can view its contents from the Console app. I'm not even sure if it's just a version of some simple DB like SQLite or if it is some sort of proprietary/internal/inaccessible kind of thing. My best guess so far is that one of two things might be possible: * *It looks like it will accept log entries from natively run apps, so perhaps using some sort of daemon running in the background that could take text inputs and relay them to the log database would work? *The other alternative that came to mind was if there were some way to access the database directly, in which case a PHP script could simply connect to it to make log entries. If anyone more knowledgeable could fill in the blanks, I would be very grateful! A: I've never used OS X but you might want to look into the syslog function. A: In your PHP do a syslog(LOG_WARNING, "whatever"); In terminal type syslog -w It will output Warning: whatever It will then print out the syslogs. You will get system messages as well in the syslog. The -w option only lists recent stuff. A: The log files are plain text, found in (~)/Library/Logs. OS X, like most *nix systems, uses a syslogd process to which you can log using syslog as mentioned by RoBorg. A: OS X does use SQLite as part of CoreData. I am under the impression that it is the primary means for storing data locally with OS X, so I would expect it logs with it as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/174699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Visual Studio 2005 Not Loading After Visual Studio 2005 displays the splash screen it locks up on me. No error, no CPU utilization, just a frozen splash screen. I've tried it with both /safemode and /resetsettings. I'm sure it's one of the services on my machine; I just wonder if anyone else has had the problem and can help me with the hunt? BTW, it works in a VM on the same machine. Update: I finally tried something new. I started VS2005 in Windows 2000 compatibility mode; it starts then shuts down immediately. I reset it to not run in compatibility mode and it starts right up. grrrrr I think it might be a profile issue, but the root cause is still unresolved. A: Have you tried running it in safe mode - if that doesn't let you sort it out you can try the /resetsettings switch, which has sorted out similar problems for me in the past. /resetuserdata can also help. A: Look at the event log for your machine and see if VS threw any useful info in there; you may have to uninstall and reinstall A: I have VMWare installed on my machine. This was the cause of my problem! I started VS2005 in Windows 2000 compatibility mode as suggested above - it started up then shut down immediately. I then ran without compatibility mode and VS2005 now runs perfectly! I wasted half a day trying to sort this out! Thank you for your post! :) A: Try starting up with the log command: devenv.exe /Log c:\vs.log And see if anything is noted in it. Another thing to try is to run VS in a temporary user account to see if the problem is strictly with your user environment or is system-wide. See this post.
{ "language": "en", "url": "https://stackoverflow.com/questions/174702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Automatic Update and checkin of AssemblyInfo.cs files occasionally causes partial fail We have our TFS 2008 build set up to check out all AssemblyInfo.cs files in the project, update them with AssemblyInfoTask, and then either undo the checkout or check in depending on whether the build passed or not. Unfortunately, when two builds are queued close together this results in a Partially completed build as the AssemblyInfo.cs files seem to be checked out at an earlier version than the previous checkin. In order to get around this I thought that I could use the "Get" task to force the AssemblyInfo.cs files to the latest version before updating them, but this appears to have no effect. Any ideas? <Target Name="AfterGet" Condition="'$(IsDesktopBuild)'!='true'">
 <Message Text="SolutionRoot = $(SolutionRoot)" />
 <Message Text="OutDir = $(OutDir)" />
 <!-- Set the AssemblyInfoFiles items dynamically -->
 <CreateItem Include="$(SolutionRoot)\Main\Source\InputApplicationSln\**\$(AssemblyInfoSpec)">
 <Output ItemName="AssemblyInfoFiles" TaskParameter="Include" />
 </CreateItem>
 <Message Text="$(AssemblyInfoFiles)" />
 <!-- When builds are queued up successively, it is possible for the next build to be set up before the AssemblyInfoSpec is checked in, so we need to force the latest versions of these files to be got before a checkout -->
 <Get Condition=" '$(SkipGet)'!='true' " TeamFoundationServerUrl="$(TeamFoundationServerUrl)" Workspace="$(WorkspaceName)" Filespec="$(AssemblyInfoSpec)" Recursive="$(RecursiveGet)" Force="$(ForceGet)" />
 <Exec WorkingDirectory="$(SolutionRoot)\Main\Source\InputApplicationSln" Command="$(TF) checkout /recursive $(AssemblyInfoSpec)"/>
</Target> A: Does your build re-write the AssemblyInfo files and then check them back in? Or do you just modify the AssemblyInfo files locally?
Personally I prefer the latter approach - as documented over at the TFSBuild recipes site: http://tfsbuild.com/AssemblyVersioning%20.ashx I've never actually sat down and checked, but I was wondering: if you checked in the AssemblyInfo files, then could the following be happening, which might be causing your problems... * *Request a build, current changeset = 42 *Build 1 for changeset 42 starts running *Request a build, current changeset = 42 (still) *Build 2 for changeset 42 queued *Build 1 checks in new assemblyinfo files, current changeset = 43 *Build 1 completes *Build 2 for changeset 42 starts, does a get of changeset 42, meaning the AssemblyInfo files are the old ones. As I say, not exactly sure when the changeset number is determined for the build - at the time of queuing or at the time of running. It would make more sense for it to be at the time of queuing though. A: Changing: <Get Condition=" '$(SkipGet)'!='true' " TeamFoundationServerUrl="$(TeamFoundationServerUrl)" Workspace="$(WorkspaceName)" Filespec="$(AssemblyInfoSpec)" Recursive="$(RecursiveGet)" Force="$(ForceGet)" /> To: <Get Condition=" '$(SkipGet)'!='true' " TeamFoundationServerUrl="$(TeamFoundationServerUrl)" Workspace="$(WorkspaceName)" Filespec="$(AssemblyInfoSpec)" Recursive="True" Force="True" /> Has forced the AssemblyInfo.cs files to be overwritten with top of tree. It's been working so far, but is more of a hack than something elegant.
{ "language": "en", "url": "https://stackoverflow.com/questions/174705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }