Q: What options exist now to implement UTF-8 in Ruby and RoR?

Following the development of Ruby very closely, I learned that detailed character encoding is implemented in Ruby 1.9. My question for now is: how can Ruby be used at the moment to talk to a database that stores all data in UTF-8?

Background: I am involved in a new project where Ruby/RoR is at least an option. But the project needs to rely on an internationalized character set (it's spread over many countries), preferably UTF-8. So how do you deal with that? Thanks in advance.

A: Ruby 1.8 works fine with UTF-8 strings for basic string operations. Depending on your application's needs, some operations will either not work or not work as expected. E.g.:

1) The size of strings will give you bytes, not characters, since multi-byte support is not there yet. But do you need to know the size of your strings in characters?

2) No splitting a string at a character boundary. But do you need this?

3) Sorting order will be funky if sorted in Ruby. The suggestion of using the db to sort is a good idea. Etc.

Re the poster's comment about sorting data after reading from the db: as noted, the results will probably not match users' expectations, so the solution is to sort on the db. It will usually be faster anyhow; databases are designed to sort data.

Summary: my Ruby 1.8.6 RoR app works fine with international Unicode characters processed and stored as UTF-8 on modern browsers. Right-to-left languages work fine too. The main issues: be sure that your db and all web pages are set to use UTF-8. If you already have some data in your db, you'll need to go through a conversion process to change it to UTF-8.

Regards, Larry

A: "Unicode ahoy! While Rails has always been able to store and display unicode with no beef, it's been a little more complicated to truncate, reverse, or get the exact length of a UTF-8 string. You needed to fool around with KCODE yourself and while plenty of people made it work, it wasn't as plug'n'play easy as you could have hoped (or perhaps even expected). So since Ruby won't be multibyte-aware until this time next year, Rails 1.2 introduces ActiveSupport::Multibyte for working with Unicode strings. Call the chars method on your string to start working with characters instead of bytes." Click here for more.

A: Although I haven't tested it, the character-encodings library (currently in alpha) adds methods to the String class to handle UTF-8 and others. Its page on RubyForge is here. It is designed for Ruby 1.8.

It is my experience, however, that with Ruby 1.8, if you store data in your database as UTF-8, Ruby will not get in the way as long as the character encoding in the HTTP header is UTF-8. It may not be able to operate on the strings, but it won't break anything.

Example file.txt:

¡Hola! ¿Como estás? Leí el artículo. ¡Fue muy excellente!

(Pardon my poor Spanish; it was the best example of Unicode I could come up with.)

In irb:

str = File.read("file.txt")
=> "\302\241Hola! \302\277Como est\303\241s? Le\303\255 el art\303\255culo. \302\241Fue muy excellente!\n"
str += "Foo is equal to bar."
=> "\302\241Hola! \302\277Como est\303\241s? Le\303\255 el art\303\255culo. \302\241Fue muy excellente!\nFoo is equal to bar."
str = " " + str + " "
=> " \302\241Hola! \302\277Como est\303\241s? Le\303\255 el art\303\255culo. \302\241Fue muy excellente!\nFoo is equal to bar. "
str.strip
=> "\302\241Hola! \302\277Como est\303\241s? Le\303\255 el art\303\255culo. \302\241Fue muy excellente!\nFoo is equal to bar."

Basically, it will just treat the UTF-8 as ASCII with odd characters in it. It will not sort lexicographically if the code points are out of order; however, it will sort by code point. Example:

"\302" <=> "\301"
=> 1

How much are you planning on operating on the data in the Rails app, anyway? Most sorting etc. is usually done by your database engine.
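The byte-versus-character distinction described above is easy to see for yourself. On a modern Ruby (1.9 or later, where strings carry an encoding), the same operations are character-aware, and sorting still compares code points rather than locale collation order. A small sketch contrasting the two views of one UTF-8 string:

```ruby
# On Ruby 1.9+, strings know their encoding, so the character view and
# byte view of the same UTF-8 data can be compared directly.
str = "está"             # four characters, but "á" is two bytes in UTF-8

chars = str.length       # character count
bytes = str.bytesize     # byte count

# Sorting compares code points, not collation order, so accented
# characters still sort after unaccented ASCII letters.
sorted = ["z", "á"].sort

puts chars    # 4
puts bytes    # 5
p sorted      # ["z", "á"]
```

This is exactly why the answers above recommend sorting in the database, where the collation rules of the locale can apply, rather than in Ruby.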
{ "language": "en", "url": "https://stackoverflow.com/questions/160016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Determine if a resource exists in ResourceManager

Is there any way to determine if a ResourceManager contains a named resource? Currently I am catching the MissingManifestResourceException, but I hate having to use exceptions for non-exceptional situations. There must be some way to enumerate the name/value pairs of a ResourceManager through reflection, or something?

EDIT: A little more detail. The resources are not in the executing assembly, but the ResourceManager is working just fine. If I try _resourceMan.GetResourceSet(_defaultCuture, false, true) I get null, whereas if I try _resourceMan.GetString("StringExists") I get a string back.

A: I think you can use something like Assembly.GetManifestResourceNames to enumerate the list of resources available in the assembly's manifest. It isn't pretty and doesn't solve all of the corner cases, but it works if required.

A: You can use the ResourceSet to do that; the catch is that it loads all the data into memory if you enumerate it. Here y'go:

// At startup.
ResourceManager mgr = Resources.ResourceManager;
List<string> keys = new List<string>();
ResourceSet set = mgr.GetResourceSet(CultureInfo.CurrentCulture, true, true);
foreach (DictionaryEntry o in set)
{
    keys.Add((string)o.Key);
}
mgr.ReleaseAllResources();

Console.WriteLine(Resources.A);
{ "language": "en", "url": "https://stackoverflow.com/questions/160022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Why does Flex Builder fail to connect to the app so often?

Flex Builder often fails to connect to the app that it's supposed to debug. After a minute or so it times out and says that it couldn't connect. The only way I can get it to stop doing this is by restarting Eclipse. Very annoying. Anyone know why this is? I'm using FB 3.1 and Firefox on Win XP. Many thanks!

A: This became an issue with Firefox 3, and the general workaround is to start disabling Firefox extensions until it works. I've found that the AdBlock and IETab extensions interfered with Flex debugging; once those were disabled in Firefox's Add-On Manager, things got better. See this bug report on Adobe's website for much more information.

A: I think it may be because it's full of bugs. I haven't worked with it on Windows, but on OS X it reeks. You sometimes have to restart the whole OS to get it back to normal.
{ "language": "en", "url": "https://stackoverflow.com/questions/160026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to put a breakpoint in every function of a .cpp file?

Is there a macro that does it? Which DTE objects should I use?

A: Here's a quick implementation of 1800 INFORMATION's idea:

Sub TemporaryMacro()
    DTE.ActiveDocument.Selection.StartOfDocument()
    Dim returnValue As vsIncrementalSearchResult
    While True
        DTE.ActiveDocument.ActiveWindow.Object.ActivePane.IncrementalSearch.StartForward()
        returnValue = DTE.ActiveDocument.ActiveWindow.Object.ActivePane.IncrementalSearch.AppendCharAndSearch(AscW("{"))
        DTE.ActiveDocument.ActiveWindow.Object.ActivePane.IncrementalSearch.Exit()
        If Not (returnValue = vsIncrementalSearchResult.vsIncrementalSearchResultFound) Then
            Return
        End If
        DTE.ExecuteCommand("Debug.ToggleBreakpoint")
        DTE.ExecuteCommand("Edit.GotoBrace")
        DTE.ActiveDocument.Selection.CharRight()
    End While
End Sub

A: I don't know what DTE functions to use, but you could very simply record a macro that would pretty much do it:

* Go to the top of the file
* ctrl-shift-R (start recording)
* ctrl-I (incremental search)
* { (search for the first { character)
* F9 (set breakpoint)
* ctrl-] (go to matching } character)
* ctrl-shift-R (stop recording)

Now just run this over and over (ctrl-shift-P repeatedly) until you reach the end of the file. If you have namespaces, then change the fourth step to:

* ( (search for "(" at the start of the function definition)
* esc (stop incremental search)
* ctrl-I (incremental search again)
* { (start of function body)

This kind of thing can be infinitely modified to suit your codebase.

A: Like Constantin's method, this seems like WinDbg territory. Since you have the .cpp (and even if you didn't, you could script something to get by), it should be no problem to use Logger, part of the Debugging Tools for Windows. It's a very handy tool; a shame so few people use it.
Logger debugs C/COM/C++ easily, with rich symbolic info and hooks, profiling, and flexible instrumentation. One way to activate Logger is to start CDB or WinDbg and attach to a user-mode target application as usual. Then use the !logexts.logi or !logexts.loge extension command. This will insert code at the current breakpoint that jumps off to a routine that loads and initializes Logexts.dll in the target application process. This is referred to as "injecting Logger into the target application."

A: (This is not quite what you're asking for, but almost.) You can put a breakpoint on every member function of a class in Visual Studio by bringing up the New Breakpoint dialog and entering:

CMyClass::*

See http://blogs.msdn.com/b/habibh/archive/2009/09/10/class-breakpoint-how-to-set-a-breakpoint-on-a-c-class-in-the-visual-studio-debugger.aspx for more details.

A: Here's how something similar can be achieved in WinDbg:

bm mymodule!CSpam::*

This puts a breakpoint in every method of class (or namespace) CSpam in module mymodule. I'm still looking for anything close to this functionality in Visual Studio.

A: There is a macro, but I tested it only with C#.

Sub BreakAtEveryFunction()
    For Each project In DTE.Solution.Projects
        SetBreakpointOnEveryFunction(project)
    Next project
End Sub

Sub SetBreakpointOnEveryFunction(ByVal project As Project)
    Dim cm = project.CodeModel
    ' Look for all the namespaces and classes in the project.
    Dim list As List(Of CodeFunction)
    list = New List(Of CodeFunction)
    Dim ce As CodeElement
    For Each ce In cm.CodeElements
        If (TypeOf ce Is CodeNamespace) Or (TypeOf ce Is CodeClass) Then
            ' Determine whether that namespace or class contains other classes.
            GetClass(ce, list)
        End If
    Next
    For Each cf As CodeFunction In list
        DTE.Debugger.Breakpoints.Add(cf.FullName)
    Next
End Sub

Sub GetClass(ByVal ct As CodeElement, ByRef list As List(Of CodeFunction))
    ' Determine whether there are nested namespaces or classes that
    ' might contain other classes.
    Dim ce As CodeElement
    Dim cn As CodeNamespace
    Dim cc As CodeClass
    Dim elements As CodeElements
    If (TypeOf ct Is CodeNamespace) Then
        cn = CType(ct, CodeNamespace)
        elements = cn.Members
    Else
        cc = CType(ct, CodeClass)
        elements = cc.Members
    End If
    Try
        For Each ce In elements
            If (TypeOf ce Is CodeNamespace) Or (TypeOf ce Is CodeClass) Then
                GetClass(ce, list)
            End If
            If (TypeOf ce Is CodeFunction) Then
                list.Add(ce)
            End If
        Next
    Catch
    End Try
End Sub

A: Here's one way to do it (I warn you, it is hacky):

EnvDTE.TextSelection textSelection = (EnvDTE.TextSelection)dte.ActiveWindow.Selection;

// I'm sure there's a better way to get the line count than this...
var lines = File.ReadAllLines(dte.ActiveDocument.FullName).Length;

var methods = new List<CodeElement>();
var oldLine = textSelection.AnchorPoint.Line;
var oldLineOffset = textSelection.AnchorPoint.LineCharOffset;
EnvDTE.CodeElement codeElement = null;

for (var i = 0; i < lines; i++)
{
    try
    {
        textSelection.MoveToLineAndOffset(i, 1);
        // I'm sure there's a better way to get a code element by point than this...
        codeElement = textSelection.ActivePoint.CodeElement[vsCMElement.vsCMElementFunction];
        if (codeElement != null)
        {
            if (!methods.Contains(codeElement))
            {
                methods.Add(codeElement);
            }
        }
    }
    catch
    {
        //MessageBox.Show("Add error handling here.");
    }
}

// Restore cursor position
textSelection.MoveToLineAndOffset(oldLine, oldLineOffset);

// This could be in the for-loop above, but it's here instead just for
// clarity of the two separate jobs: find all methods, then add the
// breakpoints.
foreach (var method in methods)
{
    dte.Debugger.Breakpoints.Add(
        Line: method.StartPoint.Line,
        File: dte.ActiveDocument.FullName);
}

A: Put this at the top of the file:

#define WANT_BREAK_IN_EVERY_FUNCTION

#ifdef WANT_BREAK_IN_EVERY_FUNCTION
#define DEBUG_BREAK DebugBreak();
#else
#define DEBUG_BREAK
#endif

Then insert DEBUG_BREAK at the beginning of every function, like this:

void function1()
{
    DEBUG_BREAK
    // the rest of the function
}

void function2()
{
    DEBUG_BREAK
    // the rest of the function
}

When you no longer want the debug breaks, comment out the line

#define WANT_BREAK_IN_EVERY_FUNCTION

at the top of the file.
{ "language": "en", "url": "https://stackoverflow.com/questions/160030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Does dependency injection break the Law of Demeter?

I have been adding dependency injection to my code because it makes my code much easier to unit test through mocking. However, I am requiring objects higher up my call chain to have knowledge of objects further down the call chain. Does this break the Law of Demeter? If so, does it matter?

For example: a class A has a dependency on an interface B, and the implementation of this interface to use is injected into the constructor of class A. Anyone wanting to use class A must now also have a reference to an implementation of B, and can call its methods directly, meaning they have knowledge of A's subcomponents (interface B).

Wikipedia says about the Law of Demeter: "The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents)."

A: How does it break it? DI fits perfectly with the idea of least knowledge. DI gives you low coupling: objects are less dependent on each other. Citing Wikipedia:

...an object A can request a service (call a method) of an object instance B, but object A cannot "reach through" object B to access yet another object...

Usually DI works exactly the same way, i.e. you use services provided by injected components. If your object tries to access some of B's dependencies, i.e. it knows too much about B, that leads to high coupling and breaks the idea of DI.

"However I am requiring objects higher up my call chain to have knowledge of objects further down the call chain"

Some example?

A: If I understand you correctly, this isn't caused by the use of dependency injection; it's caused by using mocking strategies that have you specify the function calls you expect a method to make. That's perfectly acceptable in many situations, but obviously it means you have to know something about the method you're calling, if you've specified what you think it's supposed to do. Writing good software requires balancing tradeoffs.
As the implementation becomes more complete, it becomes more inconsistent. You have to decide what risks those inconsistencies create, and whether they're worth the value created by their presence.

A: Dependency injection CAN break the Law of Demeter, if you force consumers to do the injection of the dependencies. This can be avoided through static factory methods and DI frameworks. You can have both by designing your objects in such a way that they require the dependencies to be passed in, while at the same time having a mechanism for using them without explicitly performing the injection (factory functions and DI frameworks).

A: Does it break the law? Strictly speaking, I think it does. Does it matter? The main danger of breaking the law is that you make your code more brittle. If you really keep it to just the tests, that danger seems not too bad.

Mitigation: my understanding of the Law of Demeter is that it can be followed by having "wrapper methods" which prevent directly calling down into objects.

A: The Law of Demeter specifies that a method M of an object O can call methods on objects created/instantiated inside M. However, nothing specifies how those objects were created. I think it's perfectly fine to use an intermediary object to create them, as long as that object's purpose in life is only that: creating other objects on your behalf. In this sense, DI does not break the Law of Demeter.

A: This also confused me for some time. The wiki also says:

An object A can request a service (call a method) of an object instance B, but object A should not "reach through" object B to access yet another object, C, to request its services. Doing so would mean that object A implicitly requires greater knowledge of object B's internal structure.

And this is the crux of the matter. When you interact with class A, you should not be able to interact with the state or methods of interface B. You simply shouldn't have access to its inner workings.
As for creating class A and knowing about interface B when creating objects: that's a different scenario altogether, and it is not what the Law of Demeter is trying to address in software design. I would agree with the other answers that factories and a dependency injection framework would be best to handle this. Hope that clears it up for anyone else confused by this :)

A: Depends :-) I think the top answer is not correct. Even with a framework, a lot of code uses dependency injection and injects high-level objects. You then get spaghetti code with lots of dependencies.

Dependency injection is best used for the stuff that would otherwise pollute your object model, e.g. an ILogger. If you do inject business objects, ensure it's at the lowest level possible, and try to pass them the traditional way if you can. Only use dependency injection if it gets too messy.

A: Before I add my answer, I must qualify it. Service-oriented programming is built on top of OOP principles and uses OO languages, and SOAs follow inversion of control and SOLID principles to the teeth, so a lot of service-oriented programmers are surely arriving here. This answer is for service-oriented programmers who arrive at this question, because SOA is built on top of OOP. It does not directly address the OP's example, but it does answer the question from an SOA perspective.

In general, the Law of Demeter doesn't apply to service-oriented architectures. For OO, the Law of Demeter is talking about "rich objects" in OOP, which have properties and methods, and whose properties may also have methods. With OOP rich models, it is possible to reach through a chain of objects and access methods, properties, methods of properties, methods of properties' properties, and so on.

But in service-oriented programming, data (properties) are separated from process (methods). Your models (mainly) only have properties (certainly never dependencies), and your services only have methods and dependencies on other services.
In SOP, you can feel free to review the properties of a model, and the properties of its properties. You won't ever be able to access methods you shouldn't; only a tree of data. But what about the services? Does the Law of Demeter apply there?

Yes, the Law of Demeter can be applied to SOP services. But again, the law was originally designed for rich models in OOP. And though the law can be applied to services, proper dependency injection automagically fulfills the Law of Demeter. In that sense, DI could not possibly break the law.

In limited opposition to Mark Roddy, I can't find any situation where you can legitimately talk about dependency injection and "consumers" in the same sentence. If by "consumers" you mean a class that is consuming another class, that doesn't make sense: with DI, you would have a composition root composing your object graph, and one class should never know another class even exists. If by "consumers" you mean a programmer, then how would they not be forced to "do the injection"? The programmer is the one who has to create the composition root, so they must do the injection. A programmer should never "do the injection" as an instantiation within a class to consume another class.

Please review the following example, which shows actual separate solutions, their references, and the implementing code:

In the top-right, we have the "Core." A lot of packages on NuGet and NPM have a "Core" project containing models, interfaces, and possibly even default implementations. The Core should never ever ever depend on anything external.

In the top-left, we have an external implementation of the Core. The implementation depends on the Core, and so has knowledge of it.

In the bottom-left, we have a standalone Domain. The Domain has a dependency on some implementation of the Core, but does not need to know about the implementation. This is where I point out that neither the Domain nor the Implementation knows the other exists.
There is a 0% chance that either could ever reach into (or beyond) the other, because they don't even know the other exists. The Domain only knows that there is a contract, and it can somehow consume the methods of whatever is injected into it.

In the bottom-right is the composition root, or entry point. This is also known as the "front boundary" of the application. The root of an application knows all of its components and does little more than take input, determine whom to call, compose objects, and return outputs. In other words, it can only tell the Domain: "Here, use this to fulfill your contract for ICalculateThings, then give me the result of CalculateTwoThings."

There is indeed a way to smash everything into the same project, do concrete instantiations of services, make your dependencies public properties instead of private fields, STILL do dependency injection (horribly), and then have services call into dependencies of dependencies. But that would be bad, m'kay. You'd have to be trying to be bad to do that.

Side note: I over-complicated this on purpose. These projects could exist in one solution (as long as the architect controls the reference architecture), and there could be a few more simplifications. But the separation in the image really shows how little knowledge the system has to have about its parts. Only the composition root (entry point, front boundary) needs to know about the parts.

Conclusion (TL;DR): In oldskewl OOP, models are rich, and the Law of Demeter can easily be broken by looking into models of models to access their methods. But in newskewl SOP (built on top of OOP principles and languages), data is separated from process, so you can feel free to look into the properties of models. As for services, dependencies are always private, and nothing knows that anything else exists other than what it is told by abstractions, contracts, and interfaces.
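The "only the composition root knows the concrete types" idea discussed above is language-neutral. Here is a minimal sketch in Ruby (class names are invented for illustration): the domain object receives an abstraction through its constructor and calls methods on it directly, never reaching through it; only the root wires the concrete pieces together.

```ruby
# A domain class with an injected collaborator. It calls methods *on*
# the collaborator but never reaches through it to the collaborator's
# own internals -- which is what the Law of Demeter forbids.
class ReportService
  def initialize(formatter)
    @formatter = formatter        # injected; concrete type unknown here
  end

  def render(data)
    @formatter.format(data)       # one hop only: no @formatter.printer.flush
  end
end

# A concrete implementation, known only to the composition root.
class UpcaseFormatter
  def format(data)
    data.upcase
  end
end

# The composition root: the single place that knows both classes exist.
service = ReportService.new(UpcaseFormatter.new)
puts service.render("quarterly numbers")   # QUARTERLY NUMBERS
```

Note that ReportService never mentions UpcaseFormatter by name; swapping in a mock for testing, as the question describes, requires no change to the domain class.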
{ "language": "en", "url": "https://stackoverflow.com/questions/160032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Finding the time taken to send messages with WCF net.tcp

I'm writing a prototype WCF-enabled distributed app, to try to find out any issues I'll have upgrading my existing "sending XML over TCP to communicate" apps. I'm using callback contracts to register clients with a server (Singleton in ServiceHost), and so far all the communications between client and server work. I can connect multiple clients to the server and send a broadcast from the server that is received by all clients. I can block a particular client and the other clients still receive the calls. This is good.

To continue my learning and evaluation of performance, I would like the client to record what time the server sends each message, as well as what time the client receives that message. How should I best go about this? Is there something similar to SOAP extensions, where I can add to the outgoing message on the server and the incoming message on the client? Or would I need to add a "timeSent" parameter to every method that the server calls on the client and record the time received on the client (yuck!)? Is there a better way to accomplish this? I am using net.tcp rather than wsDualHttpBinding (which also works but is less performant).

A: Hmmm... that's a difficult one. The problem here is that you can't even make sure the client and server clocks are in sync. If what you want to do is send some out-of-band data, so that you don't need to modify your methods, you can use the method suggested here. I think it should be enough.

A: David is right about the problems with clock synchronization. However, adding the timestamp information outside of the service/client implementation is not hard at all in WCF. You're right that it doesn't support SoapExtensions, though in fact it has a much richer set of extensibility points. In your specific case, I think a custom behavior that adds a message inspector would probably work.
There are actually two message inspector interfaces: one for the client (IClientMessageInspector) and one for the server (IDispatchMessageInspector). The easiest way to hook up a dispatch inspector on the service side is through a service behavior (IServiceBehavior), since you can attach that to your service implementation as a custom attribute. Here's a simple example of how to do it. You can also hook it up through an IEndpointBehavior, but you need to do that either through code when setting up the service host or through configuration, which requires writing a bit more code.

On the client side, you still use an endpoint behavior, but introducing those through code is a lot easier since you have direct access to the ClientRuntime from the proxy client.

Anyway, I would think that something like a timestamp is better added to the message as a custom header, so that it is not directly part of the message payload.
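The inspector approach amounts to stamping each outgoing message with out-of-band metadata (a header) and reading it back on the receiving side, without touching the operation signatures. A language-neutral sketch of that idea, in Ruby (names invented; this is not the WCF API, just the shape of it — clocks are injected so the clock-skew caveat above stays visible):

```ruby
# Sketch: wrap each outgoing payload in an envelope carrying a send
# timestamp as a header, so the receiver can compute transit time
# without the payload's contract changing.
Envelope = Struct.new(:headers, :body)

def send_message(body, clock: -> { Time.now })
  Envelope.new({ "sent-at" => clock.call }, body)
end

def receive_message(envelope, clock: -> { Time.now })
  latency = clock.call - envelope.headers["sent-at"]
  [envelope.body, latency]
end

# Fixed clocks stand in for the server and client clocks.
env = send_message("ping", clock: -> { Time.at(100) })
body, latency = receive_message(env, clock: -> { Time.at(100.25) })
puts body      # ping
puts latency   # 0.25
```

The computed latency is only as trustworthy as the synchronization between the two clocks, which is exactly the caveat raised in the first answer.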
{ "language": "en", "url": "https://stackoverflow.com/questions/160040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Break when a value changes using the Visual Studio debugger

Is there a way to place a watch on a variable and have Visual Studio break only when that value changes? It would make it so much easier to find tricky state issues. Can this be done? Breakpoint conditions still need a breakpoint set, and I'd rather set a watch and let Visual Studio set the breakpoints at state changes.

A: Update in 2019: This is now officially supported in Visual Studio 2019 Preview 2 for .NET Core 3.0 or higher. Of course, you may have to put some thought into the potential risks of using a preview version of the IDE. I imagine in the near future this will be included in the official Visual Studio. https://blogs.msdn.microsoft.com/visualstudio/2019/02/12/break-when-value-changes-data-breakpoints-for-net-core-in-visual-studio-2019/

"Fortunately, data breakpoints are no longer a C++ exclusive because they are now available for .NET Core (3.0 or higher) in Visual Studio 2019 Preview 2!"

A: If you are using WPF, there is an awesome tool: WPF Inspector. It attaches itself to a WPF app and displays the full tree of controls with all their properties, and it allows you (amongst other things) to break on any property change. Sadly, I didn't find any tool that would allow you to do the same with ANY property or variable.

A: You can also choose to break explicitly in code:

// Assuming C#
if (condition)
{
    System.Diagnostics.Debugger.Break();
}

From MSDN, on Debugger.Break: "If no debugger is attached, users are asked if they want to attach a debugger. If yes, the debugger is started. If a debugger is attached, the debugger is signaled with a user breakpoint event, and the debugger suspends execution of the process just as if a debugger breakpoint had been hit."

This is only a fallback, though. Setting a conditional breakpoint in Visual Studio, as described in other comments, is a better choice.
A: In Visual Studio 2015, you can place a breakpoint on the set accessor of an auto-implemented property, and the debugger will break when the property is updated:

public bool IsUpdated
{
    get;
    set; // set breakpoint on this line
}

Update: alternatively, @AbdulRaufMujahid has pointed out in the comments that if the auto-implemented property is on a single line, you can position your cursor at the get; or set; and hit F9, and a breakpoint will be placed accordingly. Nice!

public bool IsUpdated { get; set; }

A: I remember the way you described it from Visual Basic 6.0. In Visual Studio, the only way I have found so far is by specifying a breakpoint condition.

A: Right-clicking on the breakpoint works fine for me (though mostly I am using it for conditional breakpoints on specific variable values; even breaking on expressions involving a thread name works, which is very useful if you're trying to spot threading issues).

A: As Peter Mortensen wrote: in the Visual Studio 2005 menu, Debug -> New Breakpoint -> New Data Breakpoint, then enter: &myVariable

Additional information: obviously, the system must know which address in memory to watch. So:

* Set a normal breakpoint at the initialization of myVariable (or myClass.m_Variable).
* Run the system and wait until it stops at that breakpoint.
* Now the menu entry is enabled, and you can watch the variable by entering &myVariable, or the instance by entering &myClass.m_Variable.

Now the addresses are well defined. Sorry if I did things wrong by explaining an already-given solution, but I could not add a comment, and there have been some comments regarding this.

A: In the Visual Studio 2005 menu: Debug -> New Breakpoint -> New Data Breakpoint. Enter: &myVariable

A: Imagine you have a class called A with the following declaration:

class A
{
public:
    A();
private:
    int m_value;
};

You want the program to stop when someone modifies the value of "m_value". Go to the class definition and put a breakpoint in the constructor of A:

A::A()
{
    ...
    // set breakpoint here
}

Once the program has stopped: Debug -> New Breakpoint -> New Data Breakpoint...

Address: &(this->m_value)
Byte Count: 4 (because int has 4 bytes)

Now we can resume the program. The debugger will stop when the value is changed. You can do the same with inherited classes or compound classes:

class B
{
private:
    A m_a;
};

Address: &(this->m_a.m_value)

If you don't know the number of bytes of the variable you want to inspect, you can use the sizeof operator. For example:

// To know the word size of the processor,
// if you want to inspect a pointer.
int wordTam = sizeof(void*);

If you look at the "Call Stack" window, you can see the function that changed the value of the variable.

A: Change the variable into a property and add a breakpoint in the set method. Example:

private bool m_Var = false;
protected bool var
{
    get { return m_var; }
    set { m_var = value; }
}

A: You can use a memory watchpoint in unmanaged code. I'm not sure if these are available in managed code, though.

A: You can probably make clever use of the DebugBreak() function.

A: You can optionally overload the = operator for the variable and put a breakpoint inside the overloaded function on a specific condition.
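Outside the debugger, the same "trap the write" trick from the property-setter answers can be done entirely in code: route every assignment through a setter and hook it. A Ruby sketch of a hand-rolled watchpoint (the callback is where you would break, log, or raise):

```ruby
# A hand-rolled "data breakpoint": every write goes through the setter,
# which invokes a hook whenever the value actually changes.
class Watched
  def initialize(value, &on_change)
    @value = value
    @on_change = on_change
  end

  attr_reader :value

  def value=(new_value)
    return if new_value == @value          # no change, nothing to report
    @on_change&.call(@value, new_value)    # break/log/raise here
    @value = new_value
  end
end

changes = []
w = Watched.new(0) { |old, new| changes << [old, new] }
w.value = 0      # same value: hook not fired
w.value = 42     # fires the hook
p changes        # [[0, 42]]
```

This is the software equivalent of the hardware data breakpoint described above: slower and more intrusive, but it works in any environment, debugger or not.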
{ "language": "en", "url": "https://stackoverflow.com/questions/160045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "229" }
Q: Is there a way to keep Visual Studio from modifying the solution file after every test run?

Visual Studio seems to be modifying the list of .vsmdi files in my .sln sometimes when I run a unit test. This is annoying because my source control client thinks the .sln file needs to be checked in even though I don't want to check it in. Is there any way to keep Visual Studio from munging the .sln file after a test run?

Edit: Found a Microsoft Connect issue discussing this, which sucks because things just sort of disappear from there after a little while, and it's a terrible bug tracker.

A: I don't believe a solution exists. A good Connect case, which does a better job of documenting the issue and providing a repro case, is this one. At the very bottom of the page a commenter proposes a workaround, which I've reproduced below. I haven't actually tested this workaround myself yet; I guess I've gotten numb to discarding the changes caused by this bug :(

From the Connect case: "I have been able to repro this problem by having developer A run tests with the .vsmdi file while developer B checks it out and adds unit tests to the .vsmdi. This typically will cause a new one to be generated. The workaround that has worked for me is to create .vsmdi files per dev for unit testing activities that are not checked in to SCC, and to create special .vsmdi files for build testing and automated regression. Yuck, but it works."

A: Edit: Oops, I was confused about the "list of vsmdi files" thing. My suggestion wouldn't have worked.

A: That is a question for the ages... I always check whether the solution has been checked out for whatever reason before I commit changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/160046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Many-to-Many with "Primary" I'm working on a database that needs to represent computers and their users. Each computer can have multiple users and each user can be associated with multiple computers, so it's a classic many-to-many relationship. However, there also needs to be a concept of a "primary" user. I have to be able to join against the primary user to list all computers with their primary users. I'm not sure of the best way to structure this in the database: 1) As I'm currently doing: linking table with a boolean IsPrimary column. Joining requires something like ON (c.computer_id = l.computer_id AND l.is_primary = 1). It works, but it feels wrong because it's not easy to constrain the data to only have one primary user per computer. 2) A field on the computer table that points directly at a user row; all rows in the user table represent non-primary users. This represents the one-primary-per-computer constraint better, but makes getting a list of computer-users harder. 3) A field on the computer table linking to a row in the linking table. Feels strange... 4) Something else? What is the 'relational' way to describe this relationship? EDIT: @Mark Brackett: The third option seems a lot less strange to me now that you've shown how nice it can look. For some reason I didn't even think of using a compound foreign key, so I was thinking I'd have to add an identity column on the linking table to make it work. Looks great, thanks! @j04t: Cool, I'm glad we agree on #3 now. A: Option 3, though it may feel strange, is the closest to what you want to model.
You'd do something like: User { UserId PRIMARY KEY (UserId) } Computer { ComputerId, PrimaryUserId PRIMARY KEY (ComputerId) FOREIGN KEY (ComputerId, PrimaryUserId) REFERENCES Computer_User (ComputerId, UserId) } Computer_User { ComputerId, UserId PRIMARY KEY (ComputerId, UserId) FOREIGN KEY (ComputerId) REFERENCES Computer (ComputerId) FOREIGN KEY (UserId) REFERENCES User (UserId) } Which gives you 0 or 1 primary user (the PrimaryUserId can be nullable if you want), that must be in Computer_User. Edit: If a user can only be primary for 1 computer, then a UNIQUE CONSTRAINT on Computer.PrimaryUserId will enforce that. Note that there is no requirement that all users be a primary on some computer (that would be a 1:1 relationship, and would call for them to be in the same table). Edit: Some queries to show you the simplicity of this design --All users of a computer SELECT User.* FROM User JOIN Computer_User ON User.UserId = Computer_User.UserId WHERE Computer_User.ComputerId = @computerId --Primary user of a computer SELECT User.* FROM User JOIN Computer ON User.UserId = Computer.PrimaryUserId WHERE Computer.ComputerId = @computerId --All computers a user has access to SELECT Computer.* FROM Computer JOIN Computer_User ON Computer.ComputerId = Computer_User.ComputerId WHERE Computer_User.UserId = @userId --Primary computer for a user SELECT Computer.* FROM Computer WHERE PrimaryUserId = @userId A: Edit -- I didn't think properly about it the first 3 times through... I vote for -- (Number 3 solution) Users user id (pk) Computers computer id (pk) primary user id (fk -> computer users id) Computer Users user id (pk) (fk -> user id) computer id (pk) (fk -> computer id) This is the best solution I can think of. Why I like this design. 1) Since this is a relationship involving computers and users I like the idea of being able to associate a user to multiple computers as the primary user. This may not ever occur where this database is being used though.
2) The reason I don't like having the primary_user on the link table (computer_users.primary_user_id fk-> users.user_id) is to prevent a computer from ever having multiple primary users. Given those reasons, the Number 3 solution looks better, since you will never run into the possible problems I see with the other approaches. Solution 1 problem - Possible to have multiple primary users per computer. Solution 2 problem - Computer links to a primary user when the computer and user aren't linked to each other. computer.primaryUser = user.user_id computer_users.user_id != user.user_id Solution 3 problem - It does seem kind of odd doesn't it? Other than that I can't think of anything. Solution 4 problem - I can't think of any other way of doing it. This is the 4th edit so I hope it makes sense still. A: Since the primary user is a function of the computer and the user I would tend to go with your approach of having primaryUser as a column on the linking table. The other alternative that I can think of is to have a primaryUser column directly on the computer table itself. A: I would have made another table PRIMARY_USERS with unique on computer_id and making both computer_id and user_id foreign keys of USERS. A: Either solution 1 or 2 will work. At this point I would ask myself which one will be easier to work with. I've used both methods in different situations though I would generally go with a flag on the linking table and then force a unique constraint on computer_id and isPrimaryUser, that way you ensure that each computer will only have one primary user. A: 2 feels right to me, but I would test out 1, 2 and 3 for performance on the sorts of queries you normally perform and the sorts of data volumes you have. As a general rule of thumb I tend to believe that where there is a choice of implementations you should look to your query requirements and design your schema so you get the best performance and resource utilisation in the most common case.
In the rare situation where you have equally common cases which suggest opposite implementations, use Occam's razor. A: We have a similar situation in the application I work on where we have Accounts that can have many Customers attached but only one should be the Primary customer. We use a link table (as you have) but have a Sequence value on the link table. The Primary user is the one with Sequence = 1. Then, we have an Index on that Link table for AccountID and Sequence to ensure that the combination of AccountID and Sequence is unique (thereby ensuring that no two Customers can be the Primary one on an Account). So you would have: LEFT JOIN ... ON (c.computer_id = l.computer_id AND l.sequence = 1)
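The compound-foreign-key design from the accepted answer can be exercised end to end with nothing but Python's built-in sqlite3; this is an illustrative sketch (lowercased table names, otherwise following the answer's schema), not production DDL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when asked

con.executescript("""
CREATE TABLE user (user_id INTEGER PRIMARY KEY);
CREATE TABLE computer (
    computer_id     INTEGER PRIMARY KEY,
    primary_user_id INTEGER,
    -- the primary user must be one of this computer's linked users
    FOREIGN KEY (computer_id, primary_user_id)
        REFERENCES computer_user (computer_id, user_id)
);
CREATE TABLE computer_user (
    computer_id INTEGER REFERENCES computer (computer_id),
    user_id     INTEGER REFERENCES user (user_id),
    PRIMARY KEY (computer_id, user_id)
);
""")

con.execute("INSERT INTO user VALUES (1)")
con.execute("INSERT INTO computer VALUES (10, NULL)")    # no primary user yet
con.execute("INSERT INTO computer_user VALUES (10, 1)")  # link user 1
# Promoting a linked user to primary succeeds:
con.execute("UPDATE computer SET primary_user_id = 1 WHERE computer_id = 10")
# Promoting a user who is not linked to this computer is rejected:
try:
    con.execute("UPDATE computer SET primary_user_id = 2 WHERE computer_id = 10")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Adding a `UNIQUE` constraint on `computer.primary_user_id`, as the answer notes, would additionally restrict each user to being primary on at most one computer.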
{ "language": "en", "url": "https://stackoverflow.com/questions/160051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you decide which API function documentations to read and how seriously? Suppose that you are writing or maintaining a piece of code that uses some API that you are not 100% familiar with. How do you decide whether to read the documentation of a certain call target, and how much time to spend reading it? How do you decide not to read it? (Let's assume you can read it by opening the HTML documentation, inspecting the source code, or using the hover mechanism in the IDE). A: Ideally you should read all of it, but we know that's a pain in the... you know. What I normally do on those cases (and I did that a lot while I worked as a freelancer) is weight some factors and depending on the result, I read the docs. Factors that tell me I shouldn't read the docs: * *What the function does is easy to guess from the name. *It isn't relevant to the code I'm maintaining: for example, you are checking how some code deletes files, and you have some function that obviously does some UI update. You don't care about that for now. *If debugging: the function didn't change the program state in a way meaningful to the task at hand. As before, you don't want to learn what SetOverlayIcon does, if you are debugging the deletion code because it's dying with a file system error. *The API is just a special case of an API you already know and you can guess what the special case is, and what the special arguments (if any) do. For example, let's say you have WriteToFile(string filename) and WriteToFile(string filename, boolean overwrite). Of course, everything depends on the context, so even those rules have exceptions.
{ "language": "en", "url": "https://stackoverflow.com/questions/160077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the big deal with BUILDING 64-bit versions of binaries? There are a ton of drivers & famous applications that are not available in 64-bit. Adobe for instance does not provide a 64-bit Flash player plugin for Internet Explorer. And because of that, even though I am running 64-bit Vista, I have to run 32-bit IE. Microsoft Office, Visual Studio also don't ship in 64-bit AFAIK. Now personally, I haven't had many problems building my applications in 64-bit. I just have to remember a few rules of thumb, e.g. always use SIZE_T instead of UINT32 for string lengths etc. So my question is, what is preventing people from building for 64-bit? A: In addition to the things in @jvasak's post, the major thing that can cause bugs: * *pointers are larger than ints - a huge amount of code makes the assumption that the sizes are the same. Remember that Windows will not even allow an application (whether 32-bit or 64-bit) to handle pointers that have an address above 0x7FFFFFFF (2GB or above) unless they have been specially marked as "LARGE_ADDRESS_AWARE" because so many applications will treat the pointer as a negative value at some point and fall over. A: The biggest issue that I've run into porting our C/C++ code to 64-bit is support from 3rd-party libraries. E.g. there are currently only 32-bit versions of the Lotus Notes API and also MAPI, so you can't even link against them. Also, since you can't load a 32-bit DLL into your 64-bit process, you get burnt again trying to load things dynamically. We ran into this problem again trying to support Microsoft Access under 64-bit. From wikipedia: The Jet Database Engine will remain 32-bit for the foreseeable future. Microsoft has no plans to natively support Jet under 64-bit versions of Windows A: Just a guess, but I would think a large part of it would be support - If Adobe compiles the 64 bit version, they have to support it.
Even though it may be a simple compile switch, they'd still have to run through a lot of testing, etc., followed by training their support staff to respond correctly. When they do run into issues, fixing them results in either a new version of the 32-bit binary or a branch in the code, etc. So while it seems simple, for a large application it can still end up costing a lot. A: Another reason that a lot of companies have not gone through the effort of creating 64 bit versions is simply that they don't need to. Windows has WoW64 (Windows on Windows 64 bit) and Linux can have the 32 bit libraries available alongside the 64 bit. Both of these allow us to run 32 bit applications in 64 bit environments. As long as the software is able to run in this way, there is not a major incentive to convert to 64 bit. Exceptions to this are things such as device drivers as they are tied in deeper with the operating systems and cannot run in the 32 bit layer that the x86-64/AMD64 based 64-bit operating systems offer (IA64 is unable to do this from what I understand). I agree with you on flash player though, I am very disappointed in Adobe that they have not updated this product. As you have pointed out, it does not work properly in 64-bit, requiring you to run the 32-bit version of Internet Explorer. I think it is a strategic mistake on Adobe's part. Having to run the 32 bit browser for flash player is an inconvenience for users, and many will not understand this solution. This could lead to developers being apprehensive about using flash. The most important thing for a web site is to make sure everyone can view it; solutions that alienate users are typically not popular ones. Flash's popularity was fed by its own popularity: the more sites that used it, the more users had it on their systems; the more users that had it on their systems, the more sites were willing to use it.
The retail market pushes these things forward. When a general consumer goes to buy a new computer, they aren't going to know that they don't need a 64-bit OS; they are going to get it either because they hear it is the latest and greatest thing, the future of computing, or just because they don't know the difference. Vista has been out for about 2 years now, and Windows XP 64-bit was out before that. In my mind that is too long for a major technology such as Flash to not be upgraded if they want to hold on to their market. It may have to do with Adobe taking over Macromedia, and this may be a sign that Adobe does not feel Flash is part of their future. I find that hard to believe, as I think Flash and Dreamweaver were the top parts of what they got out of Macromedia, but then why have they not updated it yet? A: It is not as simple as just flipping a switch on your compiler. At least, not if you want to do it right. The most obvious example is that you need to declare all your pointers using 64-bit datatypes. If you have any code which makes assumptions about the size of these pointers (e.g. a datatype which allocates 4 bytes of memory per pointer), you'll need to change it. All this needs to have been done in any libraries you use, too. Further, if you miss just a few then you'll end up with pointers being down-casted and at the wrong location. Pointers are not the only sticky point but are certainly the most obvious. A: If you are starting from scratch, 64-bit programming is not that hard. However, all the programs you mention are not new. It's a whole lot easier to build a 64-bit application from scratch, rather than port it from an existing code base. There are many gotchas when porting, especially when you get into applications where some level of optimization has been done. Programmers use lots of little assumptions to gain speed, and these are not always easy to quickly port to 64-bit. A few examples I've had to deal with: * *Proper alignment of elements within a struct.
As data sizes change, assumptions that certain fields in a struct will be aligned on an optimal memory boundary may fail *The length of long integers changes, so if you pass values over a socket to another program that may not be 64-bit, you need to refactor your code *Pointer lengths change, so hard-to-decipher code written by a guru who has left the company becomes a little trickier to debug *Underlying libraries will also need to have 64-bit support to properly link. This is a large part of the problem of porting code if you rely on any libraries that are not open source A: Primarily a support and QA issue. The engineering work to build for 64-bit is fairly trivial for most code, but the testing effort, and the support cost, don't scale down the same way. On the testing side, you'll still have to run all the same tests, even though you "know" they should pass. For a lot of applications, converting to a 64-bit memory model doesn't actually give any benefit (since they never need more than a few GB of RAM), and can actually make things slower, due to the larger pointer size (it makes every pointer field twice as large). Add to that the lack of demand (due to the chicken/egg problem), and you can see why it wouldn't be worth it for most developers. A: Their Linux/Flash blog goes some way to explain why there isn't a 64-bit Flash Player as yet. Some is Linux-specific, some is not.
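The "pointers are larger than ints" gotcha from the answers above is easy to demonstrate even from a high-level language; a small illustrative sketch (Python's ctypes is used only to ask the platform for its C type sizes):

```python
import ctypes
import struct

int_size = ctypes.sizeof(ctypes.c_int)     # 4 bytes on mainstream platforms
ptr_size = ctypes.sizeof(ctypes.c_void_p)  # 8 bytes on a 64-bit build

# A 64-bit address silently loses information when squeezed into 32 bits,
# which is exactly what "store the pointer in an int/UINT32" bugs do:
address = 0x1_0000_0000            # needs more than 32 bits to represent
truncated = address & 0xFFFF_FFFF  # what a 32-bit store would keep
```

On an LP64 or LLP64 system `ptr_size` is 8 while `int_size` stays 4, so any code path that round-trips a pointer through an `int` corrupts addresses above 4 GB.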
{ "language": "en", "url": "https://stackoverflow.com/questions/160082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the difference between <%# %> and <%= %>? Pardon my ASP ignorance, but what's the difference? A: See http://weblogs.asp.net/leftslipper/archive/2007/06/29/how-asp-net-databinding-deals-with-eval-and-bind-statements.aspx As Albert says, it's all to do with parsing databinding statements. A: These are somewhat informally referred to as "bee stings". There are 4 types: <%# %> is invoked during the DataBinding phase. <%= %> is used to get values from code to the UI layer. Meant for backward compatibility with ASP applications. Shouldn't use in .NET. <%@ %> represents directives and allow behaviors to be set without resorting to code. <%: %> (introduced in ASP.NET 4) is the same as %=, but with the added functionality of HtmlEncoding the output. The intention is for this to be the default usage (over %=) to help shield against script injection attacks. Directives specify settings that are used by the page and user-control compilers when the compilers process ASP.NET Web Forms pages (.aspx files) and user control (.ascx) files. ASP.NET treats any directive block (<%@ %>) that does not contain an explicit directive name as an @ Page directive (for a page) or as an @ Control directive (for a user control). @Esteban - Added a msdn link to directives. If you need...more explanation, please let me know. A: The # version is used while data binding. <%= is just a simple Response.Write A: Not entirely related to the question, there's another related notation in asp.net called Expression Builder: <asp:SqlDataSource ... Runat="server" ConnectionString="<%$ ConnectionStrings:Northwind %>" /> <asp:Literal Runat="server" Text="<%$ Resources:MyResources, MyText %>" /> and it's extensible, see http://msdn.microsoft.com/en-us/magazine/cc163849.aspx#S4 A: javascript in .aspx that uses a master page. var e = document.getElementById('<%= lblDescription.ClientID %>'); e.innerHTML = 'getElementById(\'lblDescription\') will be null';
{ "language": "en", "url": "https://stackoverflow.com/questions/160097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do you add all untracked files in svn? Something like git add -i? I've been using this long command: svn st | awk '/\?/ {print $2}' | xargs svn add Similarly, to svn rm files I accidentally deleted with normal rm with : svn st | awk '/\!/ {print $2}' | xargs svn rm --force I guess I can write a bash function to do these two, but I'd prefer an interactive add/rm like the one git has. A: there's an easier line... svn add `svn status | grep ?` then you can set it up as an alias in ~/.bashrc such as alias svn-addi='svn add `svn status | grep ?`' A: I use a generalization of the command line that you run, called svnapply.sh. I did not write it, but I don't remember where I found it. Hopefully, the original author will forgive me for reposting it here: #!/bin/bash # # Applies arbitrary commands to any svn status. e.g. # # Delete all non-svn files (escape the ? from the shell): # svnapply \? rm # # List all conflicted files: # svnapply C ls -l APPLY=$1 shift svn st | egrep "^\\${APPLY}[ ]+" | \ sed -e "s|^\\${APPLY}[ ]*||" | \ sed -e "s|\\\\|/|g" | \ xargs -i "$@" '{}' Per the comments, the script allows you to run arbitrary commands against all files with the same status. Update: It would not be too difficult to write a script that takes a file path as an argument and prompts the user for add/delete and then does the appropriate thing for that file. Chaining that together with the above script would get you what you want. A: This adds all svn-untracked and -unversioned files in the current directory, recursing through all subdirectories: svn add --force ./* Works for me in MacOS 10.6+ and Ubuntu 10+, with svn 1.6+. This does not provide any per-file, user-interactivity; I don't know how to do that. This will also add svn-ignored files, for better or worse. A: There is a similar question which contains a nice Ruby script that gives you the option to add, ignore or skip new files. I've tried it and it worked for me. No GUI needed, only Ruby. 
A: Use a GUI that can show you all the untracked files, then select all and add. Any decent SVN gui should provide this functionality. That said, be careful you really want all those files. A: TortoiseSVN has the option of showing unversioned files in the Commit and Show Changes dialogs. You can right click a file to 'Add' it or to mark it as ignored. If you are using Visual Studio: The latest stable version of AnkhSVN has a similar command, but in most cases it only shows the files you should add. (The project provides a list of files to version to the SCC provider; other files are ignored automatically)
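One caveat with the grep/awk/xargs one-liners above: they split on whitespace, so filenames containing spaces get mangled. A more robust parse can be sketched in a scripting language (Python here, purely illustrative; function names are invented, and the status-column layout is taken from typical `svn status` output):

```python
import subprocess

def untracked_paths(status_text):
    """Extract unversioned paths ('?' rows) from `svn status` output.
    Stripping only the leading status columns keeps filenames with
    spaces intact, unlike whitespace-splitting with xargs."""
    paths = []
    for line in status_text.splitlines():
        if line.startswith("?"):
            paths.append(line[1:].strip())
    return paths

def svn_add_untracked(run=subprocess.run):
    """Hypothetical driver: passing paths as a list argument avoids
    any shell word-splitting entirely."""
    out = run(["svn", "status"], capture_output=True, text=True).stdout
    paths = untracked_paths(out)
    if paths:
        run(["svn", "add"] + paths)
    return paths
```

The same idea works for the `!` (missing) rows by swapping the prefix and calling `svn rm --force`.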
{ "language": "en", "url": "https://stackoverflow.com/questions/160104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can you bring a control to front in mfc How do you change controls' Z-order in MFC at design time - i.e. I can't use SetWindowPos or do this at runtime - I want to see the changed z-order in the designer (even if I have to resort to direct-editing the .rc code). I have an MFC dialog to which I am adding controls. If there is overlap between the edges of the controls, I want to bring one to the front of the other. In Windows Forms or WPF, etc. I can Bring to Front, Send to Back, Bring Forward, Send Back. I don't find these options in MFC, nor can I tell how it determines what is in front, as a control just added is often behind a control that was there previously. How can I manipulate the Z-order in MFC? Even if I have to manipulate the .rc file code directly (i.e. end-run around the designer). A: Actually, if you want to do this in the resource editor, you can just cut the item and then paste it back as a quick and dirty solution. Just Ctrl-X then Ctrl-V. Editing the RC file will also work. A: I think the control in front will be the last control that occurs in the rc file. In other words, the dialog editor will draw each control as it is encountered from top to bottom in the rc file, overlapping them when necessary. You can edit the rc file to reorder them, or you can change the tab order in the editor, which does the same thing since tab order is also set based on the order that the controls occur in the file. To my knowledge MFC doesn't offer any other way of layering overlapping controls at design time. A: In Visual Studio 6.0 do the following. Open the dialog screen (in designer view) Press Ctrl + D The tab orders will be shown for each control Start clicking controls in the tab order you expect to see in run-time (ie., the control on which you click first will have tab order set to 1 and so on...) 
A: GetDlgItem(IDC_MYCONTROL)->SetWindowPos(HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE); A: You can use CWnd::SetWindowPos() to control the Z order of your controls, without changing their position in the parent window. A: In the MSVC 2005 dialog resource editor there is an option to set the tab order. In MSVC 2005 it is found on the Format, Tab Order menu. The tab order displayed by this menu option is the same order in which the controls are written to the resource file. A: GetDlgItem(IDC_CONTROL1)->SetWindowPos(&wndTop, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
{ "language": "en", "url": "https://stackoverflow.com/questions/160105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How to implement find as you type on a TComboBox descendant What is the correct way to implement the "find as you type" behavior on a TComboBox descendant component whose style is csOwnerDrawFixed? A: * *Use a TTimer (let's call it timIncSearch). Set (at design time) the following properties: Enabled:=False; Interval:=400; //empirically found - it's the delay used in Windows Explorer ...and in OnTimer you'll write your search routine. Be sure that the first line there is timIncSearch.Enabled:=False; Also, because you use csOwnerDrawFixed, perhaps it's better to enforce a repaint of your control. As an aside - just guessing because you didn't give us many details - perhaps you must hook the OnEnter and OnExit events to custom open and close the DropDown list. (Normally, this is achieved by setting the AutoDropDown property accordingly) *In your ComboBox.KeyPress you'll write with timIncSearch do begin Enabled:=False; Enabled:=True; end; ...also take care here, perhaps you must have a 'case Key of' construct to handle the #13 separately (or whatever). Other hints: * *depending on your situation, perhaps you must hook (also?) the OnKeyDown (if you want to process special keys like e.g. BackSpace, Del, Arrows etc. - taking into account that the event repeats itself while the key is pressed down) and/or OnKeyUp (if you want to do similar processing as above but without taking into account the keyboard's key auto-repeat feature). A: First you need to decide whether you need "*my_string*" or "my_string*" functionality, meaning deciding if you would search inside the strings or just from the beginning. When you have figured that out, then you would have to build the index of all the words entered in the combo box and search it after every keystroke. I don't think that handling OnTimer is the right approach. I would rather use "OnChange" or similar.
You could do it with a sorted (dupIgnore) TStringList, or maybe build the index using hash tables (the implementation is up to you). The architecture depends on the max number of strings your combo could contain. If it is a significant number then you could use hash tables (one hash Cardinal pointing to multiple indexes : array, TList...)
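The timer-plus-prefix mechanics the first answer describes can be sketched language-neutrally; this Python illustration (all names invented) uses injectable timestamps in place of a TTimer so the logic is testable, and follows one common variant: keystrokes arriving within the delay extend the typed prefix, while a longer pause starts a new search:

```python
import time

class IncrementalSearch:
    """Sketch of find-as-you-type keystroke handling with a 400 ms
    window, as in the Windows Explorer behavior the answer cites."""

    def __init__(self, items, delay=0.4):
        self.items = items
        self.delay = delay
        self._prefix = ""
        self._last = float("-inf")

    def key_press(self, ch, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last > self.delay:
            self._prefix = ""          # pause was long enough: restart
        self._prefix += ch
        self._last = now
        return self.find(self._prefix)

    def find(self, prefix):
        """Index of the first item starting with prefix, or -1."""
        p = prefix.lower()
        for i, s in enumerate(self.items):
            if s.lower().startswith(p):
                return i
        return -1
```

In the Delphi component the returned index would drive `ItemIndex` (and a repaint, given csOwnerDrawFixed); a sorted list or hash index, as the second answer suggests, would replace the linear `find` for large item counts.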
{ "language": "en", "url": "https://stackoverflow.com/questions/160106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Static and Instance methods with the same name? I have a class with both a static and a non-static interface in C#. Is it possible to have a static and a non-static method in a class with the same name and signature? I get a compiler error when I try to do this, but for some reason I thought there was a way to do this. Am I wrong or is there no way to have both static and non-static methods in the same class? If this is not possible, is there a good way to implement something like this that can be applied generically to any situation? EDIT From the responses I've received, it's clear that there is no way to do this. I'm going with a different naming system to work around this problem. A: No you can't. The reason for the limitation is that static methods can also be called from non-static contexts without needing to prepend the class name (so MyStaticMethod() instead of MyClass.MyStaticMethod()). The compiler can't tell which you're looking for if you have both. You can have static and non-static methods with the same name, but different parameters, following the same rules as method overloading; they just can't have exactly the same signature. A: Actually, there kind of is a way to accomplish this by explicitly implementing an interface. It is not a perfect solution but it can work in some cases. interface IFoo { void Bar(); } class Foo : IFoo { static void Bar() { } void IFoo.Bar() { Bar(); } } I sometimes run into this situation when I make wrapper classes for P/Invoke calls. A: C# is not well designed when it comes to this... While it is true that you could want the global or non-global, it should pick one by default, and if you want the other then you simply qualify it more. class Logger { public static Logger instance; public static void Log(string message) { instance.Log(message); // currently the compiler thinks this is ambiguous, but really it's not at all. 
Clearly we want the non-static method } public void Log(string message) { } public void DoStuff() { Log("doing instance stuff"); // this could be ambiguous, but in my opinion it should default to a call to this.Log() Logger.Log("doing global stuff"); // if you want the global qualify it explicitly } } A: You can call static methods from instance methods without having to specify the type name: class Foo { static void Bar() { } void Fizz() { Bar(); } } ... so it makes sense that you wouldn't be allowed to have a static method and an instance method with the same signature. What are you trying to accomplish? It's hard to suggest a workaround without knowing specifics. I'd just rename one of the methods. A: OK. The root of this problem is that C# should not let you call a static method from an instance method without specifying the type name. Other full OO languages (like Smalltalk) don't allow this, and it's just confusing to people who understand objects. The separation between instance side and class (or static) side is very important and having a language that promotes confusion in those details is........not a good idea....but typical of the type stuff we expect from MS. Adrian A: You can have static and instance methods with the same name, as long as their declarations differ in the number or type of parameters. It's the same rule as for having two instance methods with the same name in a class. Though technically, in the case of static vs. instance method, they already differ by the presence of the implicit this parameter in the instance method; that difference is not enough for the compiler to determine which of the two you want to call. Update: I made a mistake. Return values are not enough to make a different signature.
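For comparison only: C# rules this out, but languages that expose attribute lookup directly can dispatch one name both ways. Here is a hypothetical Python descriptor (all names invented) that mimics what the Logger example above wishes the compiler would do, choosing the class-level or instance-level implementation based on how the name is accessed:

```python
class hybridmethod:
    """Descriptor holding a class-level and an instance-level
    implementation under one name; __get__ picks one depending on
    whether the attribute is reached via the class or an instance."""

    def __init__(self, fclass):
        self.fclass = fclass
        self.finstance = None

    def instance(self, finstance):
        self.finstance = finstance
        return self

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self.fclass.__get__(objtype, objtype)  # class access
        return self.finstance.__get__(obj, objtype)        # instance access


class Logger:
    @hybridmethod
    def log(cls, message):
        return f"global: {message}"

    @log.instance
    def log(self, message):
        return f"instance: {message}"
```

This works because Python resolves the call site at runtime through the descriptor protocol, which is exactly the information the C# compiler refuses to guess at.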
{ "language": "en", "url": "https://stackoverflow.com/questions/160118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66" }
Q: SQL split/merge of table partitions: What is the best approach to implement? Microsoft in its MSDN entry about altering SQL 2005 partitions, listed a few possible approaches: * *Create a new partitioned table with the desired partition function, and then insert the data from the old table into the new table by using an INSERT INTO...SELECT FROM statement. *Create a partitioned clustered index on a heap *Drop and rebuild an existing partitioned index by using the Transact-SQL CREATE INDEX statement with the DROP EXISTING = ON clause. *Perform a sequence of ALTER PARTITION FUNCTION statements. Any idea what will be the most efficient way for a large scale DB (millions of records) with partitions based on the dates of the records (something like monthly partitions), where data spreads over 1-2 years? Also, if I mostly access (for reading) recent information, will it make sense to keep a partition for the last X days, and all the rest of the data will be another partition? Or is it better to partition the rest of the data too (for any random access based on date range)? A: I'd recommend the first approach - creating a new partitioned table and inserting into it - because it gives you the luxury of comparing your old and new tables. You can test query plans against both styles of tables and see if your queries are indeed faster before cutting over to the new table design. You may find there's no improvement, or you may want to try several different partitioning functions/schemes before settling on your final result. You may want to partition on something other than date range - date isn't always effective. I've done partitioning with 300-500m row tables with data spread over 6-7 years, and that table-insert approach was the one I found most useful. You asked about how to partition - the best answer is to try to design your partitions so that your queries will hit a single partition. 
If you tend to concentrate queries on recent data, AND if you filter on that date field in your where clauses, then yes, have a separate partition for the most recent X days. Be aware that you do have to specify the partitioned field in your where clause. If you aren't specifying that field, then the query is probably going to hit every partition to get the data, and at that point you won't have any performance gains. Hope that helps! I've done a lot of partitioning, and if you want to post a few examples of table structures & queries, that'll help you get a better answer for your environment.
{ "language": "en", "url": "https://stackoverflow.com/questions/160128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Divide by zero error, how do I fix this? C# novice here, when the int 'max' below is 0 I get a divide by zero error, I can see why this happens but how should I handle this when max is 0? position is also an int. private void SetProgressBar(string text, int position, int max) { try { int percent = (100 * position) / max; //when max is 0 bug hits string txt = text + String.Format(". {0}%", percent); SetStatus(txt); } catch { } } A: Well, that entirely depends on the behaviour you want. If the maximum value of your program bar is zero, is it full? Is it empty? This is a design choice, and when you've chosen, just test for max == 0 and deploy your answer. A: * *You can throw an exception. *You can do int percent = ( max > 0 ) ? (100 * position) / max : 0; *You can choose to do nothing instead of assigning a value to percent. *many, many other things... Depends on what you want. A: Check for zero. if ( max == 0 ) { txt = "0%"; } else { // Do the other stuff.... A: This is not a C# problem, it's a math problem. Division by zero is undefined. Have an if statement that checks whether max > 0 and only execute your division then. A: int percent = 0 if (max != 0) percent = (100*position) / max A: Convert your int percent = (100 * position) / max; into int percent; if (max != 0) percent = (100 * position) / max; else percent = 100; // or whatever fits your needs A: Well, if max is zero, then there is no progress to be made. Try catching the exception where this is called. That is probably the place to decide whether there is a problem or if the progress bar should be set at zero or at 100%. A: I guess the root question is: Does it make sense to even call this function where max is '0'? If yes, then I'd add special handling to it i.e.: if (max == 0) { //do special handling here } else { //do normal code here } If 0 doesn't make sense, I'd investigate where it's coming from. A: You would need a guard clause which checks for max == 0. 
    private void SetProgressBar(string text, int position, int max)
    {
        if (max == 0)
            return;
        int percent = (100 * position) / max;
        string txt = text + String.Format(". {0}%", percent);
        SetStatus(txt);
    }

You could also handle the divide-by-zero exception, as your sample showed, but it is generally more costly to handle exceptions than to set up checks for known bad values.

A: If you are using this for a download, you'll probably want to show 0%, as I assume max would == 0 in this case when you don't KNOW the file size yet.

    int percent = 0;
    if (max != 0)
        ...;

If you are using this for some other long task, I'd want to assume 100%. But also, since position can never be between 0 and -1, you'll probably want to drop the 100 *.

A: You can use a ternary operator.

    int percent = max != 0 ? (100 * position) / max : 0;

This means that when max does not equal zero, it performs the calculation. If max equals 0, it sets percent to 0.
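The guard-clause idea running through these answers is language-independent; here is a minimal sketch of it in Python (the function name is hypothetical, not from the thread), treating max == 0 as "no progress":

```python
def percent_complete(position, max_value):
    """Return the integer percentage, treating max_value == 0 as 'no progress'."""
    if max_value == 0:
        # Design choice: show an empty bar when there is nothing to measure,
        # rather than dividing by zero or swallowing an exception.
        return 0
    return (100 * position) // max_value
```

The same check could just as easily return 100 instead of 0; as the first answer says, that is a design decision, not a language one.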
{ "language": "en", "url": "https://stackoverflow.com/questions/160141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Find X/Y of an HTML element with JavaScript

How can I find the X/Y coordinates of an HTML element (DIV) from JavaScript if they were not explicitly set?

A: That can be tricky depending on browser and version. I would suggest using jQuery and the positions plugin.

A: You can use a library such as Prototype or jQuery, or you can use this handy function. It returns an array:

    myPos = findPos(document.getElementById('something'))
    x = myPos[0]
    y = myPos[1]

    function findPos(obj) {
        var curleft = curtop = 0;
        if (obj.offsetParent) {
            curleft = obj.offsetLeft;
            curtop = obj.offsetTop;
            while (obj = obj.offsetParent) {
                curleft += obj.offsetLeft;
                curtop += obj.offsetTop;
            }
        }
        return [curleft, curtop];
    }

A: Here's how I do it:

    // Based on: http://www.quirksmode.org/js/findpos.html
    var getCumulativeOffset = function (obj) {
        var left, top;
        left = top = 0;
        if (obj.offsetParent) {
            do {
                left += obj.offsetLeft;
                top += obj.offsetTop;
            } while (obj = obj.offsetParent);
        }
        return { x : left, y : top };
    };

A: For what it's worth, here's a recursive version (note it returns an array, so the recursive result is indexed, not accessed as .X/.Y):

    function findPos(element) {
        if (element) {
            var parentPos = findPos(element.offsetParent);
            return [
                parentPos[0] + element.offsetLeft,
                parentPos[1] + element.offsetTop
            ];
        } else {
            return [0, 0];
        }
    }

A: You can add two properties to Element.prototype to get the top/left of any element.

    window.Object.defineProperty( Element.prototype, 'documentOffsetTop', {
        get: function () {
            return this.offsetTop +
                ( this.offsetParent ? this.offsetParent.documentOffsetTop : 0 );
        }
    } );

    window.Object.defineProperty( Element.prototype, 'documentOffsetLeft', {
        get: function () {
            return this.offsetLeft +
                ( this.offsetParent ? this.offsetParent.documentOffsetLeft : 0 );
        }
    } );

Here's a demo comparing the results to jQuery's offset().top and .left: http://jsfiddle.net/ThinkingStiff/3G7EZ/

A: I am not sure what you need it for, and such things are always relative (screen, window, document).
But when I needed to figure that out, I found this site helpful: http://www.mattkruse.com/javascript/anchorposition/source.html

And I also found that the tooltip plugin someone made for jQuery had all sorts of interesting insight into x,y positioning tricks (look at its viewport class and the underlying support jQuery provides for it): http://bassistance.de/jquery-plugins/jquery-plugin-tooltip/
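All of the functions above do the same thing: climb the offsetParent chain and sum the offsets. As a language-neutral illustration of that accumulation, here is the algorithm modeled in Python with a hypothetical linked-node stand-in for DOM elements (a sketch of the idea, not browser code):

```python
class Node:
    """Hypothetical stand-in for a DOM element's offset properties."""
    def __init__(self, offset_left, offset_top, offset_parent=None):
        self.offset_left = offset_left
        self.offset_top = offset_top
        self.offset_parent = offset_parent

def cumulative_offset(node):
    # Sum offsetLeft/offsetTop while climbing the offsetParent chain,
    # exactly as getCumulativeOffset does in the JavaScript above.
    x = y = 0
    while node is not None:
        x += node.offset_left
        y += node.offset_top
        node = node.offset_parent
    return x, y
```

The loop terminates because the real offsetParent chain ends at the document root, just as this model ends at None.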
{ "language": "en", "url": "https://stackoverflow.com/questions/160144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Catching exceptions from a constructor's initializer list

Here's a curious one. I have a class A. It has a member of class B, which I want to initialize in the constructor of A using an initializer list, like so:

    class A
    {
    public:
        A(const B& b) : mB(b) { }
    private:
        B mB;
    };

Is there a way to catch exceptions that might be thrown by mB's copy constructor while still using the initializer-list method? Or would I have to initialize mB within the constructor's braces in order to have a try/catch?

A: I know it has been a while since this discussion started, but that try-and-catch construct mentioned by Adam is part of the C++ standard and is supported by Microsoft VC++ and GNU C++. Here is a program that works. By the way, the catch automatically generates another exception to signal the constructor failure.

    #include <iostream>
    #include <exception>
    #include <string>
    using namespace std;

    class my_exception: public exception
    {
        string message;
    public:
        my_exception(const char* message1) { message = message1; }
        virtual const char* what() const throw()
        {
            cout << message << endl;
            return message.c_str();
        }
        virtual ~my_exception() throw() {};
    };

    class E
    {
    public:
        E(const char* message) { throw my_exception(message); }
    };

    class A
    {
        E p;
    public:
        A() try : p("E failure")
        {
            cout << "A constructor" << endl;
        }
        catch (const exception& ex)
        {
            cout << "Inside A. Constructor failure: " << ex.what() << endl;
        }
    };

    int main()
    {
        try
        {
            A z;
        }
        catch (const exception& ex)
        {
            cout << "In main. Constructor failure: " << ex.what() << endl;
        }
        return 0;
    }

A: It's not particularly pretty:

    A::A(const B& b) try : mB(b)
    {
        // constructor stuff
    }
    catch (/* exception type */)
    {
        // handle the exception
    }

A: Have a read of http://weseetips.wordpress.com/tag/exception-from-constructor-initializer-list/

Edit: After more digging, these are called "function try blocks". I confess I didn't know this either until I went looking. You learn something every day!
I don't know if this is an indictment of how little I get to use C++ these days, my lack of C++ knowledge, or the often Byzantine features that litter the language. Ah well, I still like it :)

To ensure people don't have to jump to another site, the syntax of a function try block for constructors turns out to be:

    C::C() try : init1(), ..., initn()
    {
        // Constructor
    }
    catch (...)
    {
        // Handle exception
    }

A: You could work with lazy initialization instead: hold a unique_ptr (to your Reader, in MyClass) and create it with new. That way you do not even need the has_reader flag; you can just check whether the unique_ptr is still null.

    #include <iostream>
    #include <memory>
    #include <stdexcept>
    using namespace std;

    class MyOtherClass
    {
    public:
        MyOtherClass()
        {
            throw std::runtime_error("not working");
        }
    };

    class MyClass
    {
    public:
        typedef std::unique_ptr<MyOtherClass> MyOtherClassPtr;
        MyClass()
        {
            try
            {
                other = std::make_unique<MyOtherClass>();
            }
            catch (...)
            {
                cout << "initialization failed." << endl;
            }
            cout << "other is initialized: " << (other ? "yes" : "no");
        }
    private:
        std::unique_ptr<MyOtherClass> other;
    };

    int main()
    {
        MyClass c;
        return 0;
    }

Of course, there are also solutions without using exceptions at all, but I assumed that this is a prerequisite in your setting.

A: I don't see how you'd do that with initializer-list syntax, but I'm also a bit sceptical that you'll be able to do anything useful by catching the exception in your constructor. It depends on the design of the classes, obviously, but in what case are you going to fail to create mB and still have a useful A object? You might as well let the exception percolate up, and handle it wherever the constructor for A is being called.
{ "language": "en", "url": "https://stackoverflow.com/questions/160147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: "Authorization failed" with SVN 1.5.2 on OS X (PowerPC G4)

I'm trying to commit to an SVN server hosted on my school's network. I have installed SVN 1.5.2 with binaries downloaded from CollabNet here. The error reported is:

    svn: Commit failed (details follow):
    svn: MKACTIVITY of '/opensvn/cs598r/!svn/act/defe271c-f33b-4851-a706-b2906301fed0':
         authorization failed (http://dna.cs.byu.edu)

That's the complete error message; nowhere does it say 403 Forbidden. I have tried deleting my working copy and checking it out again, to no avail. I have checked and double-checked that my password and permissions are correct on the server. I have checked that the URL is correct. I can successfully commit from a remote machine, but not from mine. Other members of my team are able to commit from their computers, but when they try from mine, they get the same error. One of the other members of my team is using the 1.5.1 CollabNet binaries with no trouble. What about my client is broken?

A: Since you can commit from other machines, and your team members can commit but not from your machine, I'd say it's probably an issue with your Subversion client. I'd suggest you uninstall the client you have, then install the version that's being run on the server, just to be safe.

A: I think authorization is required for you to commit your local copy... or maybe you can commit, but the server is not auto-updating? Try updating the server after committing your work through SSH and svn update.

A: Not all forms of accessing a repository allow all forms of access. If you checked out your code via a read-only method, you won't be able to commit. As an example, it isn't uncommon for a WebDAV repository to allow only anonymous checkout on http://... and to allow authentication and commits only on https://... Check that the repository you are checking out from is letter-for-letter identical to the repositories that the other members of your team are checking out from.
A: Make sure you're using the proper capitalization for the entire SVN URL.

A: I think the problem is within the parentheses (http://dna.cs.byu.edu). You can often check out with the http path, but commits usually want https.
{ "language": "en", "url": "https://stackoverflow.com/questions/160162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Required fields in SharePoint data collection not throwing errors

I have created a workflow in SharePoint Designer on our MOSS 2007 dev server. (No one is allowed to have access to write .NET code yet, as company policy.) In this workflow I am collecting data from the user, and both of the custom content types I created are marked as required, but the page isn't throwing errors when I don't supply values; neither client side nor server side.

I checked the ..xoml.wfconfig.xml file and set the properties for required to true, I set the content type to required, and I used SharePoint Designer to mark them as required. Nothing. This is an out-of-the-box installation except for the master page and style sheets. To make sure it wasn't that, I reverted to an out-of-the-box style sheet. Any ideas on what else to check / set?

A: SharePoint can be a bit quirky. I followed this guide successfully to get required fields. I'm not sure it's quite the same thing you are trying to do, but maybe it will get you going down the right track. Basically it uses page layouts and content types to enforce required fields. I'm curious to know if this method is so much different from the way you are doing it.
{ "language": "en", "url": "https://stackoverflow.com/questions/160165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Traversing a multi-dimensional hash in Perl

If you have a hash (or reference to a hash) in Perl with many dimensions and you want to iterate across all values, what's the best way to do it? In other words, if we have $f->{$x}{$y}, I want something like

    foreach ($x, $y) (deep_keys %{$f}) {
    }

instead of

    foreach $x (keys %$f) {
        foreach $y (keys %{$f->{$x}}) {
        }
    }

A: This sounds to me as if Data::Diver or Data::Visitor are good approaches for you.

A: Keep in mind that Perl lists and hashes do not have dimensions and so cannot be multidimensional. What you can have is a hash item that is set to reference another hash or list. This can be used to create fake multidimensional structures. Once you realize this, things become easy. For example:

    sub f($) {
        my $x = shift;
        if( ref $x eq 'HASH' ) {
            foreach( values %$x ) {
                f($_);
            }
        }
        elsif( ref $x eq 'ARRAY' ) {
            foreach( @$x ) {
                f($_);
            }
        }
    }

Add whatever else needs to be done besides traversing the structure, of course. One nifty way to do what you need is to pass a code reference to be called from inside f. By using sub prototyping you could even make the calls look like Perl's grep and map functions.

A: You can also fudge multi-dimensional arrays if you always have all of the key values, or you just don't need to access the individual levels as separate arrays:

    $arr{"foo",1} = "one";
    $arr{"bar",2} = "two";

    while(($key, $value) = each(%arr)) {
        @keyValues = split($;, $key);
        print "key = [", join(",", @keyValues), "] : value = [", $value, "]\n";
    }

This uses the subscript separator $; as the separator for multiple values in the key.

A: Stage one: don't reinvent the wheel :) A quick search on CPAN turns up the incredibly useful Data::Walk. Define a subroutine to process each node, and you're sorted:

    use Data::Walk;

    my $data = { # some complex hash/array mess
    };

    sub process {
        print "current node $_\n";
    }

    walk \&process, $data;

And Bob's your uncle.
Note that if you want to pass it a hash to walk, you'll need to pass a reference to it (see perldoc perlref), as follows (otherwise it'll try and process your hash keys as well!):

    walk \&process, \%hash;

For a more comprehensive solution (but harder to find at first glance on CPAN), use Data::Visitor::Callback or its parent module; this has the advantage of giving you finer control of what you do, and (just for extra street cred) is written using Moose.

A: Here's an option. This works for arbitrarily deep hashes:

    sub deep_keys_foreach {
        my ($hashref, $code, $args) = @_;

        while (my ($k, $v) = each(%$hashref)) {
            my @newargs = defined($args) ? @$args : ();
            push(@newargs, $k);
            if (ref($v) eq 'HASH') {
                deep_keys_foreach($v, $code, \@newargs);
            }
            else {
                $code->(@newargs);
            }
        }
    }

    deep_keys_foreach($f, sub {
        my ($k1, $k2) = @_;
        print "inside deep_keys, k1=$k1, k2=$k2\n";
    });

A: It's easy enough if all you want to do is operate on values, but if you want to operate on keys, you need a specification of how levels will be recoverable.

a. For instance, you could specify keys as "$level1_key.$level2_key.$level3_key" (or with any separator) representing the levels.
b. Or you could have a list of keys.

I recommend the latter.

* The level can be read from @$key_stack,
* the most local key is $key_stack->[-1],
* and the path can be reconstructed by: join( '.', @$key_stack )

Code:

    use constant EMPTY_ARRAY => [];
    use strict;
    use Scalar::Util qw<reftype>;

    sub deep_keys (\%) {
        sub deeper_keys {
            my ( $key_ref, $hash_ref ) = @_;
            return [ $key_ref, $hash_ref ] if reftype( $hash_ref ) ne 'HASH';
            my @results;
            while ( my ( $key, $value ) = each %$hash_ref ) {
                my $k = [ @{ $key_ref || EMPTY_ARRAY }, $key ];
                push @results, deeper_keys( $k, $value );
            }
            return @results;
        }
        return deeper_keys( undef, shift );
    }

    foreach my $kv_pair ( deep_keys %$f ) {
        my ( $key_stack, $value ) = @$kv_pair;
        ...
    }

This has been tested in Perl 5.10.
A: If you are working with tree data going more than two levels deep, and you find yourself wanting to walk that tree, you should first consider that you are going to make a lot of extra work for yourself if you plan on reimplementing everything you need to do manually on hashes of hashes of hashes, when there are a lot of good alternatives available (search CPAN for "Tree"). Not knowing what your data requirements actually are, I'm going to blindly point you at a tutorial for Tree::DAG_Node to get you started.

That said, Axeman is correct: a hash walk is most easily done with recursion. Here's an example to get you started if you feel you absolutely must solve your problem with hashes of hashes of hashes:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %hash = (
        "toplevel-1" => {
            "sublevel1a" => "value-1a",
            "sublevel1b" => "value-1b"
        },
        "toplevel-2" => {
            "sublevel1c" => {
                "value-1c.1" => "replacement-1c.1",
                "value-1c.2" => "replacement-1c.2"
            },
            "sublevel1d" => "value-1d"
        }
    );

    hashwalk( \%hash );

    sub hashwalk {
        my ($element) = @_;
        if( ref($element) =~ /HASH/ ) {
            foreach my $key (keys %$element) {
                print $key," => \n";
                hashwalk($$element{$key});
            }
        }
        else {
            print $element,"\n";
        }
    }

It will output:

    toplevel-2 =>
    sublevel1d =>
    value-1d
    sublevel1c =>
    value-1c.2 =>
    replacement-1c.2
    value-1c.1 =>
    replacement-1c.1
    toplevel-1 =>
    sublevel1a =>
    value-1a
    sublevel1b =>
    value-1b

Note that you CAN NOT predict in what order the hash elements will be traversed unless you tie the hash via Tie::IxHash or similar; again, if you're going to go through that much work, I recommend a tree module.

A: There's no way to get the semantics you describe, because foreach iterates over a list one element at a time. You'd have to have deep_keys return a LoL (list of lists) instead. Even that doesn't work in the general case of an arbitrary data structure. There could be varying levels of sub-hashes, some of the levels could be ARRAY refs, etc.
The Perlish way of doing this would be to write a function that can walk an arbitrary data structure and apply a callback at each "leaf" (that is, non-reference value). bmdhacks' answer is a starting point. The exact function would vary depending on what you wanted to do at each level. It's pretty straightforward if all you care about is the leaf values. Things get more complicated if you care about the keys, indices, etc. that got you to the leaf.
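The recursive descent these Perl answers implement carries over directly to other languages. As a cross-language illustration, here is the same leaf-walk sketched in Python, yielding a (key_path, value) pair per leaf; the function name is hypothetical and nested dicts stand in for Perl's hashes of hashrefs:

```python
def deep_items(data, path=()):
    """Yield (key_path, value) for every leaf of a nested dict structure."""
    if isinstance(data, dict):
        # Recurse into sub-dicts, extending the key path as we go,
        # just as the Perl versions push each key onto a stack.
        for key, value in data.items():
            yield from deep_items(value, path + (key,))
    else:
        yield path, data
```

As with the Perl EMPTY_ARRAY version, carrying the whole key path means the caller can reconstruct "where it is" at each leaf, e.g. with ".".join(path).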
{ "language": "en", "url": "https://stackoverflow.com/questions/160175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Visual Studio 2005: how to find where a project is used in a solution?

I have a solution with 70 projects in it. My question is: how can I know where a given project is used? I do not want to open all 70 projects to verify the references one by one. How can I see which projects use one particular project?

Edit: I do not want to do a one-by-one search in the XML or in the References in VS. I would like a quick way to do it.

A: There's a pretty cool CodePlex project that creates dependency visualizations that I've used before. Although, with 70 projects, you probably won't be able to read it very well unless you only have a few dependencies per project. Anyway, it's still worth checking out; you could probably even repurpose some of the source code to just output the dependencies to a list. It's at the Dependency Visualizer CodePlex project.

A: You could resort to using the search feature in Windows itself. Each of the projects has a file called library_name.csproj.FileListAbsolute.txt. A quick Windows search for the DLL I was looking for, with *FileListAbsolute.txt as the filter, yielded the results I wanted. The FileListAbsolute.txt files list the DLLs and such for the particular projects. I did this for VS 2008, but I would guess it might be available for VS 2005 too.

A: The project files are in XML, so writing something to parse them should be no big deal. If you just want to find which projects reference particular other projects, grep would probably work well enough.

A: You could even use the search feature of Studio itself, if you are looking for a particular project. Search just the XML project files for that particular project. If you're trying to map out everything, this wouldn't work so well.

A: Visual NDepend is a tool that I am trying at this moment, and it looks promising for my original question.
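Since the .csproj files are plain XML, the grep-style approach suggested above can be scripted. Here is a rough Python sketch that scans a directory tree for project files whose ProjectReference entries mention a given project; the ProjectReference element name follows the usual MSBuild schema, but treat the details as an assumption rather than a tested tool:

```python
import os
import re

def projects_referencing(solution_dir, project_name):
    """Return .csproj files under solution_dir whose ProjectReference
    Include attribute mentions project_name."""
    hits = []
    pattern = re.compile(
        r'<ProjectReference\s+Include="[^"]*%s[^"]*"' % re.escape(project_name))
    for root, _dirs, files in os.walk(solution_dir):
        for name in files:
            if name.endswith(".csproj"):
                path = os.path.join(root, name)
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if pattern.search(fh.read()):
                        hits.append(path)
    return hits
```

A proper XML parse (e.g. ElementTree with the MSBuild namespace) would be more robust than a regex, but for a one-off "who references Lib.csproj?" question this is usually enough.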
{ "language": "en", "url": "https://stackoverflow.com/questions/160179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to use ditto on OS X to work like cp -a on Linux

I'm a Linux guy and I'm used to copying directory trees with cp -a. OS X doesn't have the -a option on cp... but it does have the ditto command. I'm reading the man page for ditto now, but is there anything I should specifically be looking out for?

A: Personally I use rsync -a (or whatever rsync params are called for). My two reasons are: I already know how to do this, and I need my scripts to be portable across Linux/BSD/Solaris. There are also some filesystems where rsync is more efficient than cp. Sorry that's not a direct answer; I have used ditto on BSDs but don't have any gotchas for you that aren't in the man page.

A: If you're using ditto, you should be aware that it copies the contents a bit differently from cp -a when it comes to folders:

* ditto foo bar will copy the contents of foo into bar (resulting in bar/file1, bar/file2, ...)
* cp -a foo bar will copy foo/ into bar/ (resulting in bar/foo/file1, bar/foo/file2, ...)

Also: OS X cp now supports cp -a.

A: According to the cp man page, cp -a is the same as cp -dpR, which is:

    -p = preserve mode, ownership, timestamps
    -R = recursive
    -d = no dereference and preserve links

The OS X equivalent would be cp -pPR:

    -p = preserve
    -R = recursive
    -P = no symbolic links are followed -- can be added, but this is the default behavior

The only thing missing is -d, which I think is the default behavior, but I'm not positive. I've never messed with ditto.

Edit -- @SoloBold: -L follows symbolic links. -P does NOT follow symbolic links. OS X (10.4 at least) has no -d option. That is a huge difference.

A: From Linux cp(1):

    -a, --archive
        same as -dpR

which is confusing, since -d appears to be equivalent to -p. Anyway, OS X has -p and -R, so you could just use those.

A: As j04t pointed out, that should be cp -pR. On OS X, you would do

    cp -dRL src target

cp preserves resources in newer versions of OS X (was it 10.3 when that happened?)
Hey, d is kinda like an upside-down p, right? ;)

A: There is a difference between ditto and cp: when the source is a directory, cp creates a directory with that name at the destination, but ditto just copies the contents. Beware!
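For readers who just want "copy recursively, preserve what you can" behavior without worrying about per-platform cp flags, Python's standard library offers an approximation. This is a sketch, not an exact cp -a equivalent: copytree with symlinks=True copies links as links (like cp -P/-d), and the default copy_function (copy2) preserves modes and timestamps, but ownership is only preserved when running with sufficient privileges:

```python
import shutil

def copy_tree_preserving(src, dst):
    """Recursively copy src into dst, keeping symlinks as links and
    preserving modes/timestamps via shutil.copy2 (copytree's default)."""
    shutil.copytree(src, dst, symlinks=True)
```

Note that, like ditto, this copies the *contents* of src into dst (dst/file1, dst/file2, ...) rather than creating dst/src the way cp -a does.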
{ "language": "en", "url": "https://stackoverflow.com/questions/160204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: VisualSVN and class libraries not in the working copy root

We're making the switch from SourceGear Vault to TortoiseSVN with VisualSVN for Visual Studio integration, and we absolutely love it. However, there are multiple class libraries that we reference in multiple different applications that aren't a part of the working copy root in any of the applications. What's the best way to deal with this so that we can continue to utilize Visual Studio integration, but still keep various class libraries located outside of each project/application's root? SourceGear doesn't have an issue with this. It is possible to add class libraries separately just using TortoiseSVN in Explorer, but there's no ability to commit changes to anything outside of the working copy from within Visual Studio; neither are there the VisualSVN "traffic lights" indicating status for these outside-of-working-copy class libraries.

By the way, we're also going with the "one repository with many projects" route as opposed to multiple repositories, especially as that is how we have worked for years to this point.

UPDATE: I re-read some things that I had looked at before and discovered that svn:externals doesn't just refer to using code in different repositories, but can also be used to use multiple working copies in VisualSVN. See http://www.visualsvn.com/support/topic/00007/ and http://svnbook.red-bean.com/en/1.2/svn.advanced.externals.html

However, is this the best way to deal with this issue? There's a good thread that goes through things, but it doesn't completely resolve them. Therefore: use svn:externals or not? Use multiple repositories or not? Again, for years we have referenced the code in shared class libraries amongst multiple solutions/applications and this works for us. Now how best to make this work with VisualSVN?

A: Found the best answers here:

Referenced projects: Sometimes it is useful to construct a working copy that is made out of a number of different checkouts.
For example, you may want different subdirectories to come from different locations in a repository, or perhaps from different repositories altogether. If you want every user to have the same layout, you can define the svn:externals properties.

And here:

Include a common sub-project: Sometimes you will want to include another project within your working copy, perhaps some library code. You don't want to make a duplicate of this code in your repository, because then you would lose the connection with the original (and maintained) code. Or maybe you have several projects which share core code. There are at least three ways of dealing with this.

A: I understand it's been more than ten years since you asked this question, but I am glad to tell you that there has been progress in implementing support for multiple working copies in the VisualSVN plug-in. VisualSVN 7.1 and 6.5 support multiple working copies within a single solution. The new functionality is available to Visual Studio 2019 and 2017 users. Download the latest VisualSVN builds from the main download page. Please also see the article KB7: Using Multiple Working Copies in VisualSVN.
{ "language": "en", "url": "https://stackoverflow.com/questions/160212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What does a ListViewSubItemCollection use for its keys?

I am trying to get the value of some ListViewSubItems, but I have no idea what values it uses for its keys. I have some simple code:

    protected override void OnItemDrag(ItemDragEventArgs e)
    {
        base.OnItemDrag(e);
        ListViewItem item = e.Item as ListViewItem;
        string val = item.SubItems[???].ToString();
    }

The ??? part is where I am having a problem. I cannot figure out what the keys are. I have tried the column names of the ListView with no luck. I would like to use this method instead of using numeric indices.

A: You can only use the column index to add subitems, but you can make it easier to read by making an enumeration containing the index of each of your columns.

A: The key of the ListViewSubItem is the Name property, as described here. Setting the Name equal to the column name would allow you to index into the SubItems by the name of the column. And some code as an example:

    ListViewItem myListViewItem = new ListViewItem();
    ListViewItem.ListViewSubItem myListViewSubItem = new ListViewItem.ListViewSubItem();
    myListViewSubItem.Text = "This will be displayed";
    myListViewSubItem.Name = "my key";
    myListViewItem.SubItems.Add(myListViewSubItem);

    ListViewItem.ListViewSubItem subItem = myListViewItem.SubItems["my key"];

A: Unluckily, subitems are only ordered by column index, so you'd have to access them like:

    protected override void OnItemDrag(ItemDragEventArgs e)
    {
        base.OnItemDrag(e);
        ListViewItem item = e.Item as ListViewItem;
        string val = item.SubItems[0].ToString();
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/160214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C# How to replace system tray clock

How do I replace the standard system tray clock in the taskbar? Thanks!

A: I found an example here in C++: C++ Open Source Project. I hope you are able to find something in C# as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/160215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ForEach loop in Mathematica

I'd like something like this:

    each[i_, {1,2,3},
      Print[i]]

Or, more generally, to destructure arbitrary stuff in the list you're looping over, like:

    each[{i_, j_}, {{1,10}, {2,20}, {3,30}},
      Print[i*j]]

Usually you want to use Map or other purely functional constructs and eschew a non-functional programming style where you use side effects. But here's an example where I think a for-each construct is supremely useful:

Say I have a list of options (rules) that pair symbols with expressions, like

    attrVals = {a -> 7, b -> 8, c -> 9}

Now I want to make a hash table where I do the obvious mapping of those symbols to those numbers. I don't think there's a cleaner way to do that than

    each[a_ -> v_, attrVals, h[a] = v]

Additional test cases. In this example, we transform a list of variables:

    a = 1; b = 2; c = 3;
    each[i_, {a,b,c}, i = f[i]]

After the above, {a,b,c} should evaluate to {f[1],f[2],f[3]}. Note that that means the second argument to each should be held unevaluated if it's a list. If the unevaluated form is not a list, it should evaluate the second argument. For example:

    each[i_, Rest[{a,b,c}], Print[i]]

That should print the values of b and c.

Addendum: To do for-each properly, it should support Break[] and Continue[]. I'm not sure how to implement that. Perhaps it will need to somehow be implemented in terms of For, While, or Do, since those are the only loop constructs that support Break[] and Continue[].

And another problem with the answers so far: they eat Return[]s. That is, if you are using a ForEach loop in a function and want to return from the function from within the loop, you can't. Issuing Return inside the ForEach loop seems to work like Continue[]. This just (wait for it) threw me for a loop.

A: Newer versions of Mathematica (6.0+) have generalized versions of Do[] and Table[] that do almost precisely what you want, by taking an alternate form of iterator argument.
For instance,

    Do[ Print[i], {i, {1, 2, 3}}]

is exactly like your

    ForEach[i_, {1, 2, 3}, Print[i]]

Alternatively, if you really like the specific ForEach syntax, you can make a HoldAll function that implements it, like so:

    Attributes[ForEach] = {HoldAll};
    ForEach[var_Symbol, list_, expr_] :=
      ReleaseHold[
        Hold[
          Scan[
            Block[{var = #}, expr] &,
            list]]];
    ForEach[vars : {__Symbol}, list_, expr_] :=
      ReleaseHold[
        Hold[
          Scan[
            Block[vars, vars = #; expr] &,
            list]]];

This uses symbols as variable names, not patterns, but that's how the various built-in control structures like Do[] and For[] work. HoldAll functions allow you to put together a pretty wide variety of custom control structures. ReleaseHold[Hold[...]] is usually the easiest way to assemble a bunch of Mathematica code to be evaluated later, and Block[{x = #}, ...]& allows variables in your expression body to be bound to whatever values you want.

In response to dreeves' question below, you can modify this approach to allow for more arbitrary destructuring using the DownValues of a unique symbol:

    ForEach[patt_, list_, expr_] :=
      ReleaseHold[Hold[
        Module[{f},
          f[patt] := expr;
          Scan[f, list]]]]

At this point, though, I think you may be better off building something on top of Cases:

    ForEach[patt_, list_, expr_] :=
      With[{bound = list},
        ReleaseHold[Hold[
          Cases[bound, patt :> expr];
          Null]]]

I like making Null explicit when I'm suppressing the return value of a function.

EDIT: I fixed the bug pointed out by dreeves below; I always like using With to interpolate evaluated expressions into Hold* forms.

A: The built-in Scan basically does this, though it's uglier:

    Scan[Print[#]&, {1,2,3}]

It's especially ugly when you want to destructure the elements:

    Scan[Print[#[[1]] * #[[2]]]&, {{1,10}, {2,20}, {3,30}}]

The following function avoids the ugliness by converting pattern to body for each element of list.
    SetAttributes[ForEach, HoldAll];
    ForEach[pat_, lst_, bod_] := Scan[Replace[#, pat :> bod] &, Evaluate@lst]

which can be used as in the example in the question.

PS: The accepted answer induced me to switch to this, which is what I've been using ever since and it seems to work great (except for the caveat I appended to the question):

    SetAttributes[ForEach, HoldAll];           (* ForEach[pattern, list, body]  *)
    ForEach[pat_, lst_, bod_] := ReleaseHold[  (* converts pattern to body for  *)
      Hold[Cases[Evaluate@lst, pat :> bod];]]; (* each element of list.         *)

A: The built-in Map function does exactly what you want. It can be used in long form:

    Map[Print, {1,2,3}]

or shorthand:

    Print /@ {1,2,3}

In your second case, you'd use

    Print[Times@@#]& /@ {{1,10}, {2,20}, {3,30}}

I'd recommend reading the Mathematica help on Map, MapThread, Apply, and Function. They can take a bit of getting used to, but once you are used to them, you'll never want to go back!

A: Here is a slight improvement, based on the last answer of dreeves, that allows the pattern to be specified without Blank (making the syntax similar to other functions like Table or Do) and that uses the level argument of Cases:

    SetAttributes[ForEach, HoldAll];
    ForEach[patt_ /; FreeQ[patt, Pattern], list_, expr_, level_: 1] :=
      Module[{pattWithBlanks, pattern},
        pattWithBlanks = patt /. (x_Symbol /; !MemberQ[{"System`"}, Context[x]] :>
            pattern[x, Blank[]]);
        pattWithBlanks = pattWithBlanks /. pattern -> Pattern;
        Cases[Unevaluated@list, pattWithBlanks :> expr, {level}];
        Null
      ];

Tests:

    ForEach[{i, j}, {{1, 10}, {2, 20}, {3, 30}}, Print[i*j]]
    ForEach[i, {{1, 10}, {2, 20}, {3, 30}}, Print[i], 2]

A: I'm years late to the party here, and this is perhaps more an answer to the "meta-question", but something many people initially have a hard time with when programming in Mathematica (or other functional languages) is approaching a problem from a functional rather than structural viewpoint. The Mathematica language has structural constructs, but it's functional at its core.
Consider your first example: ForEach[i_, {1,2,3}, Print[i] ] As several people pointed out, this can be expressed functionally as Scan[Print, {1,2,3}] or Print /@ {1,2,3} (although you should favor Scan over Map when possible, as previously explained, but that can be annoying at times since there is no infix operator for Scan). In Mathematica, there's usually a dozen ways to do everything, which is sometimes beautiful and sometimes frustrating. With that in mind, consider your second example: ForEach[{i_, j_}, {{1,10}, {2,20}, {3,30}}, Print[i*j] ] ... which is more interesting from a functional point of view. One possible functional solution is to instead use list replacement, e.g.: In[1]:= {{1,10},{2,20},{3,30}}/.{i_,j_}:>i*j Out[1]= {10,40,90} ...but if the list was very large, this would be unnecessarily slow since we are doing so-called "pattern matching" (e.g., looking for instances of {a, b} in the list and assigning them to i and j) unnecessarily. Given a large array of 1,000,000 pairs, array = RandomInteger[{1, 100}, {10^6, 2}], we can look at some timings: Rule-replacement is pretty quick: In[3]:= First[Timing[array /. {i_, j_} :> i*j;]] Out[3]= 1.13844 ... but we can do a little better if we take advantage of the expression structure where each pair is really List[i,j] and apply Times as the head of each pair, turning each {i,j} into Times[i,j]: In[4]:= (* f@@@list is the infix operator form of Apply[f, list, 1] *) First[Timing[Times @@@ array;]] Out[4]= 0.861267 As used in the implementation of ForEach[...] above, Cases is decidedly suboptimal: In[5]:= First[Timing[Cases[array, {i_, j_} :> i*j];]] Out[5]= 2.40212 ... since Cases does more work than just the rule replacement, having to build an output of matching elements one-by-one. It turns out we can do a lot better by decomposing the problem differently, and take advantage of the fact that Times is Listable, and supports vectorized operation.
The Listable attribute means that a function f will automatically thread over any list arguments: In[16]:= SetAttributes[f,Listable] In[17]:= f[{1,2,3},{4,5,6}] Out[17]= {f[1,4],f[2,5],f[3,6]} So, since Times is Listable, if we instead had the pairs of numbers as two separate arrays: In[6]:= a1 = RandomInteger[{1, 100}, 10^6]; a2 = RandomInteger[{1, 100}, 10^6]; In[7]:= First[Timing[a1*a2;]] Out[7]= 0.012661 Wow, quite a bit faster! Even if the input wasn't provided as two separate arrays (or you have more than two elements in each pair), we can still do something optimal: In[8]:= First[Timing[Times@@Transpose[array];]] Out[8]= 0.020391 The moral of this epic is not that ForEach isn't a valuable construct in general, or even in Mathematica, but that you can often obtain the same results more efficiently and more elegantly when you work in a functional mindset, rather than a structural one. A: Mathematica has map functions, so let's say you have a function Func taking one argument. Then just write Func /@ list Print /@ {1, 2, 3, 4, 5} The return value is a list of the function applied to each element in the input list. PrimeQ /@ {10, 2, 123, 555} will return {False,True,False,False} A: Thanks to Pillsy and Leonid Shifrin, here's what I'm now using: SetAttributes[each, HoldAll]; (* each[pattern, list, body] *) each[pat_, lst_List, bod_] := (* converts pattern to body for *) (Cases[Unevaluated@lst, pat:>bod]; Null); (* each element of list. *) each[p_, l_, b_] := (Cases[l, p:>b]; Null); (* (Break/Continue not supported) *)
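For readers coming from other languages, the destructuring ForEach above (binding {i_, j_} to each pair) and the closing "work on whole columns at once" moral both have direct analogues elsewhere. A rough sketch in Python, purely for cross-language comparison (all names are illustrative, and this is of course not Mathematica code):

```python
pairs = [(1, 10), (2, 20), (3, 30)]

# Structural style: ForEach with destructuring, i.e. tuple unpacking.
products_loop = []
for i, j in pairs:
    products_loop.append(i * j)

# Functional style: map over the list, analogous to Times @@@ array.
products_map = [i * j for (i, j) in pairs]

# "Columnar" style, analogous to Times @@ Transpose[array]:
# unzip into two sequences, then combine them elementwise.
xs, ys = zip(*pairs)
products_cols = [x * y for x, y in zip(xs, ys)]

print(products_loop)  # all three styles give [10, 40, 90]
```

With a numeric library the columnar form becomes a single vectorized multiply, which is where the timing difference in the answer above comes from.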
{ "language": "en", "url": "https://stackoverflow.com/questions/160216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: To ternary or not to ternary? I'm personally an advocate of the ternary operator: () ? : I do realize that it has its place, but I have come across many programmers that are completely against ever using it, and some that use it too often. What are your feelings on it? What interesting code have you seen using it? A: I love them, especially in type-safe languages. I don't see how this: int count = (condition) ? 1 : 0; is any harder than this: int count; if (condition) { count = 1; } else { count = 0; } I'd argue that ternary operators make everything less complex and more neat than the alternative. A: I agree with jmulder: it shouldn't be used in place of an if, but it has its place for return expressions or inside an expression: echo "Result: " + n + " meter" + (n != 1 ? "s" : ""); return a == null ? "null" : a; The former is just an example; proper internationalisation and localisation support for plurals should be used! A: If you're using the ternary operator for a simple conditional assignment I think it's fine. I've seen it (ab)used to control program flow without even making an assignment, and I think that should be avoided. Use an if statement in these cases. A: (Hack of the day) #define IF(x) x ? #define ELSE : Then you can do if-then-else as an expression: int b = IF(condition1) res1 ELSE IF(condition2) res2 ELSE IF(conditions3) res3 ELSE res4; A: I think the ternary operator should be used when needed. It is obviously a very subjective choice, but I find that a simple expression (specially as a return expression) is much clearer than a full test. Example in C/C++: return (a>0)?a:0; Compared to: if(a>0) return a; else return 0; You also have the case where the solution is between the ternary operator and creating a function.
For example in Python: l = [ i if i > 0 else 0 for i in lst ] The alternative is: def cap(value): if value > 0: return value return 0 l = [ cap(i) for i in lst ] It is needed enough that in Python (as an example), such an idiom could be seen regularly: l = [ ((i>0 and [i]) or [0])[0] for i in lst ] This line exploits the short-circuiting behaviour of Python's logical operators: and and or return the last operand they evaluate, not a bare boolean. A: I've seen such beasts like (it was actually much worse since it was isValidDate and checked month and day as well, but I couldn't be bothered trying to remember the whole thing): isLeapYear = ((yyyy % 400) == 0) ? 1 : ((yyyy % 100) == 0) ? 0 : ((yyyy % 4) == 0) ? 1 : 0; where, plainly, a series of if-statements would have been better (although this one's still better than the macro version I once saw). I don't mind it for small things like: reportedAge = (isFemale && (Age >= 21)) ? 21 + (Age - 21) / 3 : Age; or even slightly tricky things like: printf ("Deleted %d file%s\n", n, (n == 1) ? "" : "s"); A: Chained I'm fine with - nested, not so much. I tend to use them more in C simply because they're an if statement that has value, so it cuts down on unnecessary repetition or variables: x = (y < 100) ? "dog" : (y < 150) ? "cat" : (y < 300) ? "bar" : "baz"; rather than if (y < 100) { x = "dog"; } else if (y < 150) { x = "cat"; } else if (y < 300) { x = "bar"; } else { x = "baz"; } In assignments like this, I find it's less to refactor, and clearer. When I'm working in Ruby, on the other hand, I'm more likely to use if...else...end because it's an expression too. x = if (y < 100) then "dog" elsif (y < 150) then "cat" elsif (y < 300) then "bar" else "baz" end (Although, admittedly, for something this simple, I might just use the ternary operator anyway.) A: The ternary ?: operator is merely a functional equivalent of the procedural if construct.
So as long as you are not using nested ?: expressions, the arguments for/against the functional representation of any operation apply here. But nesting ternary operations can result in code that is downright confusing (exercise for the reader: try writing a parser that will handle nested ternary conditionals and you will appreciate their complexity). But there are plenty of situations where conservative use of the ?: operator can result in code that is actually easier to read than otherwise. For example: int compareTo(Object object) { if((isLessThan(object) && reverseOrder) || (isGreaterThan(object) && !reverseOrder)) { return 1; } else if((isLessThan(object) && !reverseOrder) || (isGreaterThan(object) && reverseOrder)) { return -1; } else { return 0; } } Now compare that with this: int compareTo(Object object) { if(isLessThan(object)) return reverseOrder ? 1 : -1; else if(isGreaterThan(object)) return reverseOrder ? -1 : 1; else return 0; } As the code is more compact, there is less syntactic noise, and by using the ternary operator judiciously (that is only in relation with the reverseOrder property) the end result isn't particularly terse. A: I almost never use the ternary operator, because whenever I do use it, it always makes me think a lot more than I have to later when I try to maintain it. I like to avoid verbosity, but when it makes the code a lot easier to pick up, I will go for the verbosity. Consider: String name = firstName; if (middleName != null) { name += " " + middleName; } name += " " + lastName; Now, that is a bit verbose, but I find it a lot more readable than: String name = firstName + (middleName == null ? "" : " " + middleName) + " " + lastName; Or: String name = firstName; name += (middleName == null ? "" : " " + middleName); name += " " + lastName; It just seems to compress too much information into too little space, without making it clear what's going on.
Every time I saw the ternary operator used, I have always found an alternative that seemed much easier to read... then again, that is an extremely subjective opinion, so if you and your colleagues find ternary very readable, go for it. A: I like using the operator in debug code to print error values so I don't have to look them up all the time. Usually I do this for debug prints that aren't going to remain once I'm done developing. int result = do_something(); if( result != 0 ) { debug_printf("Error while doing something, code %x (%s)\n", result, result == 7 ? "ERROR_YES" : result == 8 ? "ERROR_NO" : result == 9 ? "ERROR_FILE_NOT_FOUND" : "Unknown"); } A: I like them. I don't know why, but I feel very cool when I use the ternary expression. A: Well, the syntax for it is horrid. I find functional ifs very useful, and they often make code more readable. I would suggest making a macro to make it more readable, but I'm sure someone can come up with a horrible edge case (as there always is with C++). A: I treat ternary operators a lot like GOTO. They have their place, but they are something which you should usually avoid to make the code easier to understand. A: I typically use it in things like this: before: if(isheader) drawtext(x, y, WHITE, string); else drawtext(x, y, BLUE, string); after: drawtext(x, y, isheader == true ? WHITE : BLUE, string); A: As others have pointed out they are nice for short simple conditions. I especially like them for defaults (kind of like the || and or usage in JavaScript and Python), e.g. int repCount = pRepCountIn ? *pRepCountIn : defaultRepCount; Another common use is to initialize a reference in C++. Since references have to be declared and initialized in the same statement you can't use an if statement. SomeType& ref = pInput ? *pInput : somethingElse; A: Use it for simple expressions only: int a = (b > 10) ? c : d; Don't chain or nest ternary operators as it is hard to read and confusing: int a = b > 10 ? c < 20 ? 50 : 80 : e == 2 ?
4 : 8; Moreover, when using the ternary operator, consider formatting the code in a way that improves readability: int a = (b > 10) ? some_value : another_value; A: It's a question of style, really; the subconscious rules I tend to follow are: * *Only evaluate 1 expression - so foo = (bar > baz) ? true : false, but NOT foo = (bar > baz && lotto && someArray.Contains(someValue)) ? true : false *If I'm using it for display logic, e.g. <%= (foo) ? "Yes" : "No" %> *Only really use it for assignment; never flow logic (so never (foo) ? FooIsTrue(foo) : FooIsALie(foo) ) Flow logic in ternary is itself a lie, ignore that last point. I like it because it's concise and elegant for simple assignment operations. A: I recently saw a variation on ternary operators (well, sort of) that makes the standard "() ? :" variant seem to be a paragon of clarity: var Result = [CaseIfFalse, CaseIfTrue][(boolean expression)] or, to give a more tangible example: var Name = ['Jane', 'John'][Gender == 'm']; Mind you, this is JavaScript, so things like that might not be possible in other languages (thankfully). A: Only when: $var = (simple > test ? simple_result_1 : simple_result_2); KISS. A: For simple if cases, I like to use it. It's actually much easier to read/code, for instance as parameters for functions or things like that. It also avoids the extra lines I'd otherwise spend on a full if/else. Nesting it would be a big no-no in my book. So, to sum up, for a single if/else I'll use the ternary operator. For other cases, a regular if/else if/else (or switch). A: I like Groovy's special case of the ternary operator, called the Elvis operator: ?: expr ?: default This code evaluates to expr if it's not null, and default if it is. Technically it's not really a ternary operator, but it's definitely related to it and saves a lot of time/typing. A: For simple tasks, like assigning a different value depending on a condition, they're great.
I wouldn't use them when there are longer expressions depending on the condition though. A: If you and your workmates understand what they do and they aren't created in massive groups, I think they make the code less complex and easier to read because there is simply less code. The only time I think ternary operators make code harder to understand is when you have more than three or four in one line. Most people don't remember that they are right-associative, and when you have a stack of them it makes reading the code a nightmare. A: As so many answers have said, it depends. I find that if the ternary comparison is not visible in a quick scan down the code, then it should not be used. As a side issue, I might also note that its very existence is actually a bit of an anomaly due to the fact that in C, comparison testing is a statement. In Icon, the if construct (like most of Icon) is actually an expression. So you can do things like: x[if y > 5 then 5 else y] := "Y" ... which I find much more readable than a ternary comparison operator. :-) There was a discussion recently about the possibility of adding the ?: operator to Icon, but several people correctly pointed out that there was absolutely no need because of the way if works. Which means that if you could do that in C (or any of the other languages that have the ternary operator), then you wouldn't, in fact, need the ternary operator at all. A: Like so many opinion questions, the answer is inevitably: it depends. For something like: return x ? "Yes" : "No"; I think that is much more concise (and quicker for me to parse) than: if (x) { return "Yes"; } else { return "No"; } Now if your conditional expression is complex, then the ternary operation is not a good choice. Something like: x && y && z >= 10 && s.Length == 0 || !foo is not a good candidate for the ternary operator.
As an aside, if you are a C programmer, GCC actually has an extension that allows you to exclude the if-true portion of the ternary, like this: /* 'y' is a char * */ const char *x = y ? : "Not set"; Which will set x to y assuming y is not NULL. Good stuff. A: It makes debugging slightly more difficult since you can not place breakpoints on each of the sub expressions. I use it rarely. A: In my mind, it only makes sense to use the ternary operator in cases where an expression is needed. In other cases, it seems like the ternary operator decreases clarity. A: I use the ternary operator wherever I can, unless it makes the code extremely hard to read, but then that's usually just an indication that my code could use a little refactoring. It always puzzles me how some people think the ternary operator is a "hidden" feature or is somewhat mysterious. It's one of the first things I learnt when I start programming in C, and I don't think it decreases readability at all. It's a natural part of the language. A: By the measure of cyclomatic complexity, the use of if statements or the ternary operator are equivalent. So by that measure, the answer is no, the complexity would be exactly the same as before. By other measures such as readability, maintainability, and DRY (don't repeat yourself), either choice may prove better than the other. A: I use it quite often in places where I'm constrained to work in a constructor - for example, the new .NET 3.5 LINQ to XML constructs - to define default values when an optional parameter is null. Contrived example: var e = new XElement("Something", param == null ? new XElement("Value", "Default") : new XElement("Value", param.ToString()) ); or (thanks asterite) var e = new XElement("Something", new XElement("Value", param == null ? "Default" : param.ToString() ) ); No matter whether you use the ternary operator or not, making sure your code is readable is the important thing. Any construct can be made unreadable. A: No. 
They are hard to read. If/Else is much easier to read. This is my opinion. Your mileage may vary. A: My recently formulated rule of thumb for determining whether you should use the ternary operator is: * *if your code is choosing between two different values, go ahead and use the ternary operator. *if your code is choosing between two different code paths, stick to an if statement. And be kind to readers of your code. If you are nesting ternary operators, format the code to make that nesting obvious. A: The ternary operator hands down. They aren't complex if you format properly. Take the leap year example from paxdiablo: $isLeapYear = (($year % 400) == 0) ? 1 : ((($year % 100) == 0) ? 0 : ((($year % 4) == 0) ? 1 : 0)); This can be written more concisely and made much more readable with this formatting:
//--------------Test expression-------Result
$isLeapYear = (($year % 400) == 0)  ? 1 :
              ((($year % 100) == 0) ? 0 :
              ((($year % 4) == 0)   ? 1 :
              0));                  // Default result
A: I would say that the number of conditions in a logic expression makes it harder to read. This is true of an if statement and this is true of a ternary operator. In a perfect world, there should be one summarizable reason for taking a branch as opposed to others. Chances are that it really is more of a "business rule" if your explanation is "only when this cluster of states occur". However, in the real world, we don't add intermediate steps to fold states into one expressible state simply to obey the ideal case. We have made inferences about multiple states and have to make a decision on how to handle them. I like ternaries because it's possible to do anything with an if statement. if( object.testSomeCondition()) { System.exec( "format c:" ); } else { a++; } On the other hand: a += ( object.testSomeCondition() ? 0 : 1 ); makes it clear that the goal is to find a value for a. Of course, in line with that, there probably shouldn't be more than reasonable side effects.
* *I use an if for long or complex conditions after I've decided whether I have the time to rework conditions upstream so that I'm answering an easier question. But when I use an if, I still try to do parallel processing, just under a different condition. if ( user.hasRepeatedlyPressedOKWithoutAnswer() && me.gettingTowardMyLunchtime( time ) ) { ... } *Also my goal is near-single-stream processing. So I often try not to do an else and an if is simply a step off the common path. When you do a lot of single-stream processing, it's much harder for bugs to hide in your code waiting for that one condition that will jump out and break things. *As I said above, if you use a ternary to set one thing, or you have a small number of cases you want to test in order to set it to a value, then I just like the readability of a ternary. *With one caveat--> NO COMPLEX true CLAUSES a = b == c ? ( c == d ? ( c == e ? f : g ) : h ) : i; Of course that can be decomposed into: a = b != c ? i : c != d ? h : c == e ? f : g ; And it looks like a (compressed) truth table. Remember that there are more important factors for readability. One of them is block length and another is indentation level. Doing simple things in ternaries doesn't create an impetus to further and further levels of indentation. A: Use it to: * *access object (array) properties: var status = statuses[error == null ? 'working' : 'stopped']; *return statements: function getFullName(){ return this.isMale() ? "Mr. " : "Ms. " + this.name; } *initialize variables: var formMethod = DEBUG_FLAG == true ? "GET" : "POST"; *validate arguments: function(object){ var prop1 = typeof object.property == 'undefined' ? "default prop" : object.property; //... } Code examples are in JavaScript. A: Interesting anecdote: I have seen the optimizer weigh the ternary operator as less "heavy" for the purposes of inlining than the equivalent if. I noticed this with Microsoft compilers, but it could be more widespread. 
In particular functions like this would inline: int getSomething() { return m_t ? m_t->v : 0; } But this wouldn't: int getSomething() { if( m_t ) return m_t->v; return 0; } A: I like it a lot. When I use it, I write it like an if-then-else: one line each for condition, true action, and false action. That way, I can nest them easily. Example: x = (a == b ? (sqrt(a) - 2) : (a*a + b*b) ); laid out as:
x = (a == b
     ? (sqrt(a) - 2)
     : (a*a + b*b) );
x = (a == b
     ? (c > d
        ? (sqrt(a) - 2)
        : (c + cos(d)) )
     : (a*a + b*b) );
To me, this is reasonably easy to read. It also makes it easy to add subcases or change existing cases. A: I use and recommend ternaries to avoid code lines in situations where the logic is trivial. int i; if( piVal ) { i = *piVal; } else { i = *piDefVal; } In the above case I would choose a ternary, because it has less noise: int i = ( piVal ) ? *piVal : *piDefVal; Likewise conditional return values are good candidates: return ( piVal ) ? *piVal : *piDefVal; I think compactness can improve readability which in turn helps to improve the code quality. But readability always depends on the code's audience. The readers must be able to understand the a ? b : c pattern without any mental effort. If you can not presume this, go for the long version. A: If your ternary operator ends up taking the whole screen width, then I wouldn't use it. I keep it to just checking one simple condition and returning single values: int x = something == somethingElse ? 0 : -1; We actually have some nasty code like this in production...not good: int x = something == (someValue == someOtherVal ? string.Empty : "Blah blah") ? (a == b ? 1 : 2 ): (c == d ? 3 : 4); A: The ternary operator is extremely useful for concisely producing comma-separated lists. Here is a Java example: int[] iArr = {1, 2, 3}; StringBuilder sb = new StringBuilder(); for (int i = 0; i < iArr.length; i++) { sb.append(i == 0 ?
iArr[i] : ", " + iArr[i]); } System.out.println(sb.toString()); It produces: "1, 2, 3" Otherwise, special casing for the last comma becomes annoying. A: If you are trying to reduce the amount of lines in your code or are refactoring code, then go for it. If you care about the next programmer that has to take that extra 0.1 millisecond to understand the expression, then go for it anyway. A: No, ternary operators do not increase complexity. Unfortunately, some developers are so oriented to an imperative programming style that they reject (or won't learn) anything else. I do not believe that, for example: int c = a < b ? a : b; is "more complex" than the equivalent (but more verbose): int c; if (a < b) { c = a; } else { c = b; } or the even more awkward (which I've seen): int c = a; if (!a < b) { c = b; } That said, look carefully at your alternatives on a case-by-case basis. Assuming a propoerly-educated developer, ask which most succinctly expresses the intent of your code and go with that one. A: I used to be in the “ternary operators make a line un-readable” camp, but in the last few years I’ve grown to like them when used in moderation. Single line ternary operators can increase readability if everybody on your team understands what’s going on. It’s a concise way of doing something without the overhead of lots of curly braces for the sake of curly braces. The two cases where I don’t like them: if they go too far beyond the 120 column mark or if they are embedded in other ternary operators. If you can’t quickly, easily and readably express what you’re doing in a ternary operator. Then use the if/else equivalent. A: It depends :) They are useful when dealing with possibly null references (BTW: Java really needs a way to easily compare two possibly null strings). The problem begins, when you are nesting many ternary operators in one expression. A: No (unless they're misused). 
Where the expression is part of a larger expression, the use of a ternary operator is often much clearer. A: I think it really depends on the context they are used in. Something like this would be a really confusing, albeit effective, way to use them: __CRT_INLINE int __cdecl getchar (void) { return (--stdin->_cnt >= 0) ? (int) (unsigned char) *stdin->_ptr++ : _filbuf (stdin); } However, this: c = a > b ? a : b; is perfectly reasonable. I personally think they should be used when they cut down on overly verbose IF statements. The problem is people are either petrified of them, or like them so much they get used almost exclusively instead of IF statements. A: string someSay = bCanReadThis ? "No" : "Yes"; A: In small doses they can reduce the number of lines and make code more readable; particularly if the outcome is something like setting a char string to "Yes" or "No" based on the result of a calculation. Example: char* c = NULL; if(x) { c = "true"; }else { c = "false"; } compared with: char* c = x ? "Yes" : "No"; The only bug that can occur in simple tests like that is assigning an incorrect value, but since the conditional is usually simple it's less likely the programmer will get it wrong. Having your program print the wrong output isn't the end of the world, and should be caught in all of code review, bench testing and production testing phases. I'll counter my own argument with now it's more difficult to use code coverage metrics to assist in knowing how good your test cases are. In the first example you can test for coverage on both the assignment lines; if one is not covered then your tests are not exercising all possible code flows. In the second example the line will show as being executed regardless of the value of X, so you can't be certain you've tested the alternate path (YMMV depending on the ability of your coverage tools). This matters more with the increasing complexity of the tests. 
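The coverage caveat above can be made concrete: with a conditional expression, line coverage reports the line as covered after a single call, even if only one of the two outcomes ever ran, so tests have to pin each branch explicitly. A small sketch (Python used here purely for illustration; the names are made up):

```python
def label(x):
    # One line, two hidden branches: a line-coverage tool marks this
    # line as covered after any single call, regardless of which
    # branch of the conditional expression was actually taken.
    return "Yes" if x else "No"

# Branch-aware tests: exercise both outcomes explicitly, which is what
# branch (not line) coverage metrics would demand.
assert label(True) == "Yes"
assert label(False) == "No"
```

The equivalent if/else version has the same behaviour, but each assignment sits on its own line, so even plain line coverage would reveal an untested branch.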
A: One reason no one seems to mention for using the ternary operator, at least in languages like D that support type inference, is to allow type inference to work for amazingly complicated template types. auto myVariable = fun(); // typeof(myVariable) == Foo!(Bar, Baz, Waldo!(Stuff, OtherStuff)). // Now I want to declare a variable and assign a value depending on some // conditional to it. auto myOtherVariable = (someCondition) ? fun() : gun(); // If I didn't use the ternary I'd have to do: Foo!(Bar, Baz, Waldo!(Stuff, OtherStuff)) myLastVariable; // Ugly. if(someCondition) { myLastVariable = fun(); } else { myLastVariable = gun(); } A: I like the operator in some situations, but I think some people tend to overuse it and that it can make the code harder to read. I recently stumbled across this line in some open source code I am working to modify. Where (active == null ? true : ((bool)active ? p.active : !p.active)) &&... Instead of where ( active == null || p.active == active) &&... I wonder if the ternary use adds extra overhead to the LINQ statement in this case. A: I agree with the sentiments of many of the posters here. The ternary operator is perfectly valid as long as it is used correctly and does not introduce ambiguity (to be fair, you can say that about any operator/construct). I use the ternary operator often in embedded code to clarify what my code is doing. Take the following (oversimplified for clarity) code samples: Snippet 1: int direction = read_or_write(io_command); // Send an I/O io_command.size = (direction==WRITE) ? (32 * 1024) : (128 * 1024); io_command.data = &buffer; dispatch_request(io_command); Snippet 2: int direction = read_or_write(io_command); // Send an I/O if (direction == WRITE) { io_command.size = (32 * 1024); io_command.data = &buffer; dispatch_request(io_command); } else { io_command.size = (128 * 1024); io_command.data = &buffer; dispatch_request(io_command); } Here, I am dispatching an input or output request.
The process is the same whether the request is a read or a write, only the default I/O size changes. In the first sample, I use the ternary operator to make it clear that the procedure is the same and that the size field gets a different value depending on the I/O direction. In the second example, it is not as immediately clear that the algorithm for the two cases is the same (especially as the code grows much longer than three lines). The second example would be more difficult to keep the common code in sync. Here, the ternary operator does a better job of expressing the largely parallel nature of the code. The ternary operator has another advantage (albeit one that is normally only an issue with embedded software). Some compilers can only perform certain optimizations if the code is not "nested" past a certain depth (meaning inside a function, you increase the nesting depth by 1 every time you enter an if, loop, or switch statement and decrease it by 1 when you leave it). On occasion, using the ternary operator can minimize the amount of code that needs to be inside a conditional (sometimes to the point where the compiler can optimize away the conditional) and can reduce the nesting depth of your code. In some instances, I was able to re-structure some logic using the ternary operator (as in my example above) and reduce the nested depth of the function enough that the compiler could perform additional optimization steps on it. Admittedly this is a rather narrow use case, but I figured it was worth mentioning anyway. A: Making code smaller doesn't always mean it's easier to parse. It differs from language to language. In PHP for example, whitespace and line-breaks are encouraged since PHP's lexer first breaks the code up in bits starting with line-breaks and then whitespace. So I do not see a performance issue, unless less whitespace is used. Bad: ($var)?1:0; Good: ($var) ? 
1 : 0; It doesn't seem like a big issue, but with lexing code in PHP, whitespace is essential. Plus, it also reads a bit better this way. A: I'm a big fan of it ... when appropriate. Stuff like this is great, and, personally, I don't find it too hard to read/understand: $y = ($x == "a" ? "apple" : ($x == "b" ? "banana" : ($x == "c" ? "carrot" : "default"))); I know that probably makes a lot of people cringe, though. One thing to keep in mind when using it in PHP is how it works with a function that returns a reference. class Foo { var $bar; function Foo() { $this->bar = "original value"; } function &tern() { return true ? $this->bar : false; } function &notTern() { if (true) return $this->bar; else return false; } } $f = new Foo(); $b =& $f->notTern(); $b = "changed"; echo $f->bar; // "changed" $f2 = new Foo(); $b2 =& $f->tern(); $b2 = "changed"; echo $f2->bar; // "original value" A: How would anyone win an obfuscated code contest without the ternary operator?! I'm personally for using it, when appropriate, but I don't think I'd ever nest it. It's very useful, but it has a couple knocks against it in that it makes code harder to read and is in use in some other languages in other operations (like Groovy's null-check).
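A rule of thumb recurs throughout these answers: use the conditional expression when choosing between two values, and a plain if/else when choosing between two code paths. A short sketch of that distinction (Python's `a if cond else b` form, used here only for illustration; all names are made up):

```python
def shipping_label(quantity):
    # Choosing between two VALUES: a conditional expression reads well
    # (mirrors the printf("Deleted %d file%s", ...) example above).
    suffix = "" if quantity == 1 else "s"
    return f"Deleted {quantity} file{suffix}"

def clamp_non_negative(value):
    # Also a value choice, like the "return a > 0 ? a : 0" example.
    return value if value > 0 else 0

def handle(result):
    # Choosing between two CODE PATHS: stick to an explicit if/else,
    # even though it could be contorted into a conditional expression.
    if result == 0:
        return "ok"
    else:
        return f"error {result:#x}"

print(shipping_label(1), "|", shipping_label(3))
print(clamp_non_negative(-5), clamp_non_negative(7))
```

The same split keeps nesting at bay: value choices stay one expression deep, and anything with side effects or multiple statements goes back to if/else.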
{ "language": "en", "url": "https://stackoverflow.com/questions/160218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "199" }
Q: ASP.net Web App undo support I have a simple web form that has several fields and a gridview on it. I also have a save and cancel button. I would like a way to undo what was done to the data on the form when the user presses cancel. This is simple enough with the fields; however, since changes to the gridview happen in real time against the database, I do not know how to get undo functionality. I have thought of storing the changes to the gridview in viewstate, but I would rather not because of the extra space requirement. I have also thought of a temporary table that would store the changes and then roll them back as needed. Does anyone have an idea how to get undo functionality from the form? A: Keep all the data in a session object and write that to the database when you are ready. If you abstract your data layer you can use the ObjectDataSource that interacts with the session objects. I'm currently using this method on a checkout system for an e-commerce site. I store the data in custom objects that mimic the database schema. A: The simplest solution is to not commit the changes in the gridview to the database until the user clicks the "save" button. If you do decide to use viewstate or some such to record changes that you will later undo, don't forget to take the same sorts of precautions re: update collisions that you would when making the initial changes. A: One way would be to store changes to a table in another table together with a timestamp and an identifier for that application instance. If you want to undo changes since a specific time, you just traverse the list backwards until that date for that identifier. A: Hmmm... load the data object(s) into session and bind your controls from the (MyObject)Session["MyObject"] objects. I believe you can hook into an ObjectDataSource to use the session... you can then override the Update events so that the changes are written to the session.
When the user clicks save, take the session objects and save them: MyObject obj = (MyObject)Session["MyObject"]; obj.Save(); It wouldn't give you multiple levels of undo... although I guess you could save multiple session objects if you really needed to.
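The "keep it in session, write on save" pattern the answers describe is language-agnostic. A minimal Python sketch of the same idea (all names hypothetical — in ASP.NET the `pending` dict would live in Session state and `database` would be the real data store):

```python
# Minimal sketch of the "stage changes, commit on save" idea from the answers.
# All names are hypothetical; in ASP.NET the `pending` dict would live in
# Session state and `database` would be the real data store.

class PendingEdits:
    def __init__(self, database):
        self.database = database   # committed state
        self.pending = {}          # staged, uncommitted changes

    def edit(self, key, value):
        self.pending[key] = value  # touches only the staging area

    def save(self):
        self.database.update(self.pending)   # "save" button
        self.pending.clear()

    def cancel(self):
        self.pending.clear()       # "cancel" button: nothing to roll back

db = {"row1": "old"}
edits = PendingEdits(db)
edits.edit("row1", "new")
edits.cancel()
assert db == {"row1": "old"}   # cancel left the database untouched
edits.edit("row1", "new")
edits.save()
assert db == {"row1": "new"}   # save committed the staged change
```

Because nothing touches the database until `save()`, cancel is just discarding the staging area, which is exactly why this avoids the rollback problem the question describes.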
{ "language": "en", "url": "https://stackoverflow.com/questions/160222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What does mysql error 1025 (HY000): Error on rename of './foo' (errorno: 150) mean? I tried this in mysql: mysql> alter table region drop column country_id; And got this: ERROR 1025 (HY000): Error on rename of './product/#sql-14ae_81' to './product/region' (errno: 150) Any ideas? Foreign key stuff? A: Simply run the alter table query using 'KEY' instead of 'FOREIGN KEY' in the drop statement. I hope it will help to solve the issue, and will drop the foreign key constraint and you can change the table columns and drop the table. ALTER TABLE slide_image_sub DROP KEY FK_slide_image_sub; here in DROP KEY instead of DROP FOREIGN KEY, hope it will help. Thanks A: I know, this is an old post, but it's the first hit on everyone's favorite search engine if you are looking for error 1025. However, there is an easy "hack" for fixing this issue: Before you execute your command(s) you first have to disable the foreign key constraints check using this command: SET FOREIGN_KEY_CHECKS = 0; Then you are able to execute your command(s). After you are done, don't forget to enable the foreign key constraints check again, using this command: SET FOREIGN_KEY_CHECKS = 1; Good luck with your endeavor. A: You usually get this error if your tables use the InnoDB engine. In that case you would have to drop the foreign key, and then do the alter table and drop the column. But the tricky part is that you can't drop the foreign key using the column name, but instead you would have to find the name used to index it. To find that, issue the following select: SHOW CREATE TABLE region; This should show you the name of the index, something like this: CONSTRAINT region_ibfk_1 FOREIGN KEY (country_id) REFERENCES country (id) ON DELETE NO ACTION ON UPDATE NO ACTION Now simply issue an: alter table region drop foreign key region_ibfk_1; And finally an: alter table region drop column country_id; And you are good to go! A: I had a similar issues once. 
I deleted the primary key from TABLE A but when I was trying to delete the foreign key column from table B I was shown the above same error. You can't drop the foreign key using the column name and to bypass this in PHPMyAdmin or with MySQL, first remove the foreign key constraint before renaming or deleting the attribute. A: It is indeed a foreign key error, you can find out using perror: shell$ perror 150 MySQL error code 150: Foreign key constraint is incorrectly formed To find out more details about what failed, you can use SHOW ENGINE INNODB STATUS and look for the LATEST FOREIGN KEY ERROR section it contains details about what is wrong. In your case, it is most likely cause something is referencing the country_id column. A: You can get also get this error trying to drop a non-existing foreign key. So when dropping foreign keys, always make sure they actually exist. If the foreign key does exist, and you are still getting this error try the following: SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0; SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0; SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='TRADITIONAL'; // Drop the foreign key here! SET SQL_MODE=@OLD_SQL_MODE; SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS; SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS; This always does the trick for me :) A: Take a look in error file for your mysql database. According to Bug #26305 my sql do not give you the cause. This bug exists since MySQL 4.1 ;-) A: If you are using a client like MySQL Workbench, right click the desired table from where a foreign key is to be deleted, then select the foreign key tab and delete the indexes. Then you can run the query like this: alter table table_name drop foreign_key_col_name; A: There is probably another table with a foreign key referencing the primary key you are trying to change. 
To find out which table caused the error you can run SHOW ENGINE INNODB STATUS and then look at the LATEST FOREIGN KEY ERROR section. Use SHOW CREATE TABLE categories to show the name of the constraint. Most probably it will be categories_ibfk_1. Use the name to drop the foreign key first and then the column: ALTER TABLE categories DROP FOREIGN KEY categories_ibfk_1; ALTER TABLE categories DROP COLUMN assets_id; A: I got this error with MySQL 5.6 but it had nothing to do with foreign keys. This was on a Windows 7 Professional machine acting as a server on a small LAN. The client application was doing a batch operation that creates a table, fills it with some external data, then runs a query joining with permanent tables, then drops the "temporary" table. This batch does this approximately 300 times, and this particular routine had been running week in, week out for several years when suddenly we got the Error 1025 unable-to-rename problem at a random point in the batch. In my case the application was using 4 DDL statements: a CREATE TABLE followed by 3 CREATE INDEX; there is no foreign key. However only 2 of the indexes actually got created and the actual table .frm file was renamed at the point of failure. My solution was to get rid of the separate CREATE INDEX statements and create the indexes in the CREATE TABLE statement. This, at the time of writing, has solved the issue for me and may help someone else scratching their head when they find this thread. A: I'd guess a foreign key constraint problem. Is country_id used as a foreign key in another table? I'm no DB guru but I think I solved a problem like this (where there was a fk constraint) by removing the fk, doing my alter table stuff and then redoing the fk stuff. I'll be interested to hear what the outcome is - sometimes mysql is pretty cryptic. A: In my case, I was using MySQL Workbench and I faced the same issue while dropping one of my columns in a table. I could not find the name of the foreign key.
I followed the following steps to resolve the issue: * *Right-click on your schema and select 'schema inspector'. This gives you various tables, columns, indexes, etc. *Go to the tab named 'Indexes' and search for the name of the column under the column named 'Column'. Once found, check the name of the table for this record under the column named 'Table'. If it matches the name of the table you want, then note down the name of the foreign key from the column named 'Name'. *Now execute the query: ALTER table tableNamexx DROP KEY foreignKeyName; *Now you can execute the drop statement, which shall execute successfully. A: Doing SET FOREIGN_KEY_CHECKS=0; before the operation can also do the trick. A: averageRatings= FOREACH groupedRatings GENERATE group AS movieID, AVG(ratings.rating) AS avgRating, COUNT(ratings.rating) AS numRatings; If you are using any command like the above you must write group in small letters. This may solve your problem; it solved mine. At least in a Pig script.
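The SET FOREIGN_KEY_CHECKS trick above is MySQL-specific, but the principle — temporarily disable enforcement, make the change, re-enable — is easy to see with SQLite's equivalent pragma. A sketch using Python's bundled sqlite3 (not MySQL, so the statements differ, but the workflow is the same):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# PRAGMA toggles below take effect immediately (SQLite ignores them
# inside an open transaction).
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE country (id INTEGER PRIMARY KEY);
    CREATE TABLE region  (id INTEGER PRIMARY KEY,
                          country_id INTEGER REFERENCES country(id));
    INSERT INTO country VALUES (1);
    INSERT INTO region  VALUES (10, 1);
""")

con.execute("PRAGMA foreign_keys = ON")       # enforcement on
try:
    con.execute("DELETE FROM country WHERE id = 1")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True                            # deleting a referenced row fails

con.execute("PRAGMA foreign_keys = OFF")      # the FOREIGN_KEY_CHECKS=0 analogue
con.execute("DELETE FROM country WHERE id = 1")   # now succeeds
con.execute("PRAGMA foreign_keys = ON")       # don't forget to re-enable

print(blocked, con.execute("SELECT COUNT(*) FROM country").fetchone()[0])
# → True 0
```

As the answers warn, the checks should be re-enabled as soon as the DDL is done, otherwise later statements can silently leave dangling references behind.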
{ "language": "en", "url": "https://stackoverflow.com/questions/160233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: Which is the best way to get a list of running processes in unix with python? I'm trying: import commands print commands.getoutput("ps -u 0") But it doesn't work on os x. os instead of commands gives the same output: USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND nothing more A: This works on Mac OS X 10.5.5. Note the capital -U option. Perhaps that's been your problem. import subprocess ps = subprocess.Popen("ps -U 0", shell=True, stdout=subprocess.PIPE) print ps.stdout.read() ps.stdout.close() ps.wait() Here's the Python version Python 2.5.2 (r252:60911, Feb 22 2008, 07:57:53) [GCC 4.0.1 (Apple Computer, Inc. build 5363)] on darwin A: If the OS support the /proc fs you can do: >>> import os >>> pids = [int(x) for x in os.listdir('/proc') if x.isdigit()] >>> pids [1, 2, 3, 6, 7, 9, 11, 12, 13, 15, ... 9406, 9414, 9428, 9444] >>> A cross-platform solution (linux, freebsd, osx, windows) is by using psutil: >>> import psutil >>> psutil.pids() [1, 2, 3, 6, 7, 9, 11, 12, 13, 15, ... 9406, 9414, 9428, 9444] >>> A: The cross-platform replacement for commands is subprocess. See the subprocess module documentation. The 'Replacing older modules' section includes how to get output from a command. Of course, you still have to pass the right arguments to 'ps' for the platform you're on. Python can't help you with that, and though I've seen occasional mention of third-party libraries that try to do this, they usually only work on a few systems (like strictly SysV style, strictly BSD style, or just systems with /proc.) A: I've tried in on OS X (10.5.5) and seems to work just fine: print commands.getoutput("ps -u 0") UID PID TTY TIME CMD 0 1 ?? 0:01.62 /sbin/launchd 0 10 ?? 0:00.57 /usr/libexec/kextd etc. Python 2.5.1 A: any of the above python calls - but try 'pgrep A: It works if you use os instead of commands: import os print os.system("ps -u 0")
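The subprocess and /proc suggestions in the answers can be folded into one helper; a Python 3 sketch (the helper name is hypothetical, and the `ps` fallback assumes a Unix with a BSD-style `ps`, e.g. macOS):

```python
import os
import subprocess

def running_pids():
    """Return the PIDs of all running processes as a set of ints."""
    if os.path.isdir("/proc"):  # Linux: just list the numeric /proc entries
        return {int(name) for name in os.listdir("/proc") if name.isdigit()}
    # macOS and other Unixes without /proc: parse `ps` output instead
    out = subprocess.run(["ps", "-axo", "pid="],
                         capture_output=True, text=True, check=True).stdout
    return {int(field) for field in out.split()}

pids = running_pids()
assert os.getpid() in pids  # the current interpreter must be in the list
```

For anything beyond a list of PIDs (names, memory, CPU), the psutil answer remains the most portable option, since it hides these per-platform differences.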
{ "language": "en", "url": "https://stackoverflow.com/questions/160245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Flex: Modify an embedded icon and use it in a button? Just that, if you embed an icon: [Embed(source='icons/checkmark.png')] private static var CheckMark:Class; You end up with a dynamic class. You can pretty easily assign the icon to a button at runtime by calling the setStyle method: var btn:Button = new Button(); btn.setStyle("icon", CheckMark); But what if you wanted to alter the icon at runtime, like changing its alpha value or even redrawing pixels, before assigning it to the button? So far I can't find a satisfactory answer... A: This is the only answer I could find that seemed close: Dynamic Icons (example with View Source) His solution involves a custom "DynamicIcon" class which is used in the button's icon setting, and a custom Button class which adds one method to the Button class to draw dynamic icons. The end result is that you are able to send BitmapData to the DynamicIcon class, which will show up in the button. So, embed your image, instantiate your asset class, get the bitmapasset and modify it however you need to and send the bitmapData to the icon. It's an interesting problem and it seems like there should be an easier solution, but this works without a lot of hassle. A: The way I'd solve this is to implement a programmatic skin class that draws the icon itself manually. There's probably more work you'll have to do to ensure the button calculates the correct size as if it has an icon even though it doesn't. You may have to poke through the Button source code to look at how the reference to the icon is stored.
I love just creating programmatic skins that do exactly what I want and then using interesting CSS declarations to modify states - for instance: button.setStyle("customIconAlpha", .4); and then of course the skin or the custom button class would have: var alpha:Number = getStyle("customIconAlpha") as Number; (not sure if you have to typecast that one) A: The big problem I found with programmatic skins is that the button refuses to measure the width/height. I easily got around this by overriding the get methods for each: override public function get width():Number { return WIDTH; } override public function get height():Number { return HEIGHT; } In my case I needed to modify buttons in a TabNavigator, hence no easy way to subclass the button. Thankfully, the parent of each skin is the button, so using static methods within your skin, you can identify the instance of the Button to which the icon skins belong. If you're using the cover-all "icon" style, a new skin object will be created for each state. So you'll need to keep this in mind when changing the state of the icons.
{ "language": "en", "url": "https://stackoverflow.com/questions/160250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Compile error in VS.NET 2008 (VB.NET) that I can't get rid of! I can't shake this error when compiling my Visual Studio.NET 2008 solution. The project that's generating the error is a VB.NET Web Application in a 12 project solution (mixed types and languages). I've tried all the tricks I can find on google, and the obvious of removing the directoy and folder manually. I'm running Vista Business 32 with VS.NET 2008 SP1. This just started happening out of the blue today and I've rebooted a bunch and even re-applied SP1 for VS.NET. Any ideas or has anybody seen this? vbc : error BC31019: Unable to write to output file 'G:\Projects\TCA.NET\TcaNet\WebUI\obj\Debug\TcaNet.WebUI.pdb': Unspecified error Update: After thinking about this and not finding any solutions from answers or via the Internet, I went ahead and moved my entire solution to my C:\ drive vs. my G:\ drive (both are local). Doing this fixed my compile problem for some reason. A: I had the same error a few weeks ago when I was compiling on my server from my laptop. Turns out that if G: is a network drive, this could fail. Microsoft have said that fixing this is not a priority, and that there's much better ways of doing things (such as source control). For a one-man project though, it's a pain. A: Restart IIS on local. If that's not the issue then, install Unlocker and try to delete that pdb file when you get the error, Unlocker will tell you which process is holding an open handle to that file. A: I have found a list of thing to try to fix your problem : Zen-turkey Fix list Hope this help! A: maybe it is a dependency problem. check the build order of all the projects.. sysinternals tools should be of help here. using process explorer, are you able to find out if any process is locking this file? another useful tool is process monitor. after applying a filter for the pdb file, capture a trace of all file access activity.. A: It's probably bug in VB.NET compiler. 
The error message is incorrect; the real problem is a missing file referenced from the project file, for example a .vb file. In my case, I found the missing file and added it, then devenv compiled fine again. Someone reported that to MS here A: Although it is a very old thread, I got this error today and the following link solved it. Hope it helps someone reading this. VB.NET .pdb fix A: After thinking about this and not finding any solutions from answers or via the Internet, I went ahead and moved my entire solution to my C:\ drive vs. my G:\ drive (both are local). Doing this fixed my compile problem for some reason. A: I had this in Visual Studio 2005 except it was Error 1. I restarted my machine and it fixed the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/160267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Open Source C++ to C# compiler/converter Even if it requires manual input. Is there any good-enough option available? A: I don't know anything about this site, but a little googling found this. A: If it was in managed C++ or C++/CLI, you could compile it and then disassemble the assembly into C# using a tool like Reflector. Of course, that's not open source but maybe you can find an open source Reflector-style tool? If it's native C++, that's much more difficult.
{ "language": "en", "url": "https://stackoverflow.com/questions/160281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to update object with no data contexts Entity framework provides great flexibility to update data in the same datacontext Dim personA = (from p in datacontext.Person where p.PersonID = 1 select p) personA.name = txtName.value datacontext.savechanges() If I have to move this Update function to Service layer which only takes "Person" in the request, what would be the best way to assign my "Person" request object into the datacontext without doing the deep copying again? A: You need to attach your entity object to a data context. You also need to extend your data context partial class with the AttachUpdated method. When you attach an object to a data context, it does not know that updates have been made. The code below will tell the data context every property has been updated and needs to be written to the database. public static void Save(EntityObject entity) { using(MyContext ctx = new MyContext()) { ctx.AttachUpdated(entity); ctx.SaveChanges(); } } public static void AttachUpdated(this ObjectContext obj, EntityObject objectDetached) { if (objectDetached.EntityState == EntityState.Detached) { object original = null; if (obj.TryGetObjectByKey(objectDetached.EntityKey, out original)) obj.ApplyPropertyChanges(objectDetached.EntityKey.EntitySetName, objectDetached); else throw new ObjectNotFoundException(); } } article 1 article 2
{ "language": "en", "url": "https://stackoverflow.com/questions/160288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: should I free pointer returned by getpwuid() in Linux? After I call getpwuid(uid), I have a reference to a pointer. Should I free that pointer when I don't use it anymore? Reading the man pages, it says that it makes reference to some static area, that may be overwritten by subsequent calls to the same functions, so I'm not sure if I should touch that memory area. Thanks. A: Use the *_r functions (getpwuid_r()) for thread-safe (reentrant) functions that allow you to supply the buffer space to place the returned information in. Be sure to check the return value for success or failure (the *_r functions return 0 on success and an error number otherwise). If you do not use reentrant functions you can safely assume that the function returns data that does not need to be freed, but will also be overwritten by successive calls to the same function. A: No. You do not need to free the result. You can only call free(3) on pointers allocated on the heap with malloc(3), calloc(3) or realloc(3). Static data is part of a program's data or bss segments and will persist until the process exits (or is overwritten by exec(2)). A: Actually it returns a pointer to an already existing structure, so you should not free it.
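As an aside, the "copy it if you need it later" rule is exactly what higher-level wrappers do for you. Python's pwd module (shown for contrast, not as a C answer) copies the fields out of the static buffer into a fresh object before returning, so the lifetime question disappears entirely:

```python
import os
import pwd

# pwd.getpwuid() wraps the C getpwuid(); the interpreter copies the fields
# out of the static buffer into a new object before returning, so there is
# nothing to free, and a later call cannot clobber an earlier result.
me = pwd.getpwuid(os.getuid())
root = pwd.getpwuid(0)          # fetching another entry leaves `me` intact

print(me.pw_name, me.pw_uid)
assert me.pw_uid == os.getuid()
assert root.pw_uid == 0
```

In C the equivalent of that copy is either getpwuid_r() with your own buffer, or memcpy-ing the fields you need out of the static struct before the next call.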
{ "language": "en", "url": "https://stackoverflow.com/questions/160290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I access argc and argv in c++ from a library function I'm writing a library which is to be dynamically loaded in C++. I'd like to read argc and argv (for debugging reasons) from within my code, however I do not have access to the main function. Is there any way to retrieve the command line (both Windows and Linux solution would be nice). Thanks, Dan A: On windows you can access argc/argv via __argc and __argv. __wargv if you want the wide character version. A: May I suggest that this sounds like a weird situation. Are you writing a plugin or something? Perhaps you should not access argv/argc? A: In Windows you can use GetCommandLine() to get a pointer to the command line and then use CommandLineToArgvW() to convert that pointer to argv[] format. There is only a wide (Unicode) version available, though. A: On Linux the pseudo-file /proc/self/cmdline holds the command line for the process. Each argument is terminated with a 0 byte, and the final argument is followed by an additional 0 byte. A: There is the GetCommandLine() function in the Win32 API. On other platforms, you would have to save argc/argv somewhere (external variable?). A: On Windows, I use this type of thing to get the arguments: #include <windows.h> #include <string> #include <vector> #include <cwchar> #include <cstdio> #include <clocale> using namespace std; vector<wstring> getArgs() { int argc; wchar_t** argv = CommandLineToArgvW(GetCommandLineW(), &argc); vector<wstring> args; if (argv) { args.assign(argv, argv + argc); LocalFree(argv); } return args; } int main() { const vector<wstring> argv = getArgs(); setlocale(LC_CTYPE, ".OCP"); for (vector<wstring>::const_iterator i = argv.begin(); i != argv.end(); ++i) { wprintf(L"%s\n", i->c_str()); } } Edit: A getArgs function like that is also useful for mingw as mingw doesn't support a wmain(). 
A: This should work under linux: #include <stdio.h> #include <unistd.h> void findargs(int *argc, char ***argv) { size_t i; char **p = &__environ[-2]; for (i = 1; i != *(size_t*)(p-1); i++) { p--; } *argc = (int)i; *argv = p; } int main(int argc, char **argv) { printf("got argc=%d, argv=%p\n", argc, argv); findargs(&argc, &argv); printf("found argc=%d, argv=%p\n", argc, argv); return 0; } Note: fails if setenv() has been called. A: Use getpid() and the ps command. int pid; FILE *fd; char cmd[80]; pid = getpid(); sprintf(cmd, "ps %d", pid); fd = popen(cmd, "r"); .... lines should be like .... 1358 ./a.out abc def A: Have you given any thought to using environment variables instead of the command line? Might be easier on the user depending on what kinds of applications the library will be used in, and you can use the standard getenv() function. I think, in any case, if your library is going to use argc and argv, the program should be the one to pass them.
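The /proc/self/cmdline layout described above is easy to verify from any language; a Python sketch (Linux only — the file does not exist elsewhere):

```python
import sys

# On Linux, /proc/self/cmdline holds the process's own argv: each argument
# is NUL-terminated, with one extra NUL after the last argument.
with open("/proc/self/cmdline", "rb") as f:
    raw = f.read()

args = raw.split(b"\0")[:-1]   # drop the empty piece after the trailing NUL
print([a.decode() for a in args])

assert raw.endswith(b"\0")     # the trailing NUL is always present
assert len(args) >= 1          # at least argv[0]
```

A C library can do the same thing with fopen/fread and strtok-style splitting on '\0', which avoids both the __environ trick and spawning ps.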
{ "language": "en", "url": "https://stackoverflow.com/questions/160292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Localization in OpenSocial gadget app I'm trying to add multi-language support to an OpenSocial application. I see that I can link to my own message bundles in the gadget XML markup: <Module> <ModulePrefs title="Hello" description="Message bundle demo"> <Require feature="opensocial-0.7"/> <Locale lang="en" messages="http://example.com/hello/en_ALL.xml"/> </ModulePrefs> . . . </Module> But how do I actually extract values from them? A: To extract the values you should use the getMsg method var prefs = new gadgets.Prefs(); var msg = prefs.getMsg('hello_world');
{ "language": "en", "url": "https://stackoverflow.com/questions/160296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Django/Python - Grouping objects by common set from a many-to-many relationships This is a part algorithm-logic question (how to do it), part implementation question (how to do it best!). I'm working with Django, so I thought I'd share with that. In Python, it's worth mentioning that the problem is somewhat related to how-do-i-use-pythons-itertoolsgroupby. Suppose you're given two Django Model-derived classes: from django.db import models class Car(models.Model): mods = models.ManyToManyField(Representative) and from django.db import models class Mods(models.Model): ... How does one get a list of Cars, grouped by Cars with a common set of Mods? I.e. I want to get a class likeso: Cars_by_common_mods = [ { mods: { 'a' }, cars: { 'W1', 'W2' } }, { mods: { 'a', 'b' }, cars: { 'X1', 'X2', 'X3' }, }, { mods: { 'b' }, cars: { 'Y1', 'Y2' } }, { mods: { 'a', 'b', 'c' }, cars: { 'Z1' } }, ] I've been thinking of something like: def cars_by_common_mods(): cars = Cars.objects.all() mod_list = [] for car in cars: mod_list.append( { 'car': car, 'mods': list(car.mods.all()) } ) ret = [] for key, mods_group in groupby(list(mods), lambda x: set(x.mods)): ret.append(mods_group) return ret However, that doesn't work because (perhaps among other reasons) the groupby doesn't seem to group by the mods sets. I guess the mod_list has to be sorted to work with groupby. All to say, I'm confident there's something simple and elegant out there that will be both enlightening and illuminating. Cheers & thanks! A: Have you tried sorting the list first? The algorithm you proposed should work, albeit with lots of database hits.
import itertools cars = [ {'car': 'X2', 'mods': [1,2]}, {'car': 'Y2', 'mods': [2]}, {'car': 'W2', 'mods': [1]}, {'car': 'X1', 'mods': [1,2]}, {'car': 'W1', 'mods': [1]}, {'car': 'Y1', 'mods': [2]}, {'car': 'Z1', 'mods': [1,2,3]}, {'car': 'X3', 'mods': [1,2]}, ] cars.sort(key=lambda car: car['mods']) cars_by_common_mods = {} for k, g in itertools.groupby(cars, lambda car: car['mods']): cars_by_common_mods[frozenset(k)] = [car['car'] for car in g] print cars_by_common_mods Now, about those queries: import collections import itertools from operator import itemgetter from django.db import connection cursor = connection.cursor() cursor.execute('SELECT car_id, mod_id FROM someapp_car_mod ORDER BY 1, 2') cars = collections.defaultdict(list) for row in cursor.fetchall(): cars[row[0]].append(row[1]) # Here's one I prepared earlier, which emulates the sample data we've been working # with so far, but using the car id instead of the previous string. cars = { 1: [1,2], 2: [2], 3: [1], 4: [1,2], 5: [1], 6: [2], 7: [1,2,3], 8: [1,2], } sorted_cars = sorted(cars.iteritems(), key=itemgetter(1)) cars_by_common_mods = [] for k, g in itertools.groupby(sorted_cars, key=itemgetter(1)): cars_by_common_mods.append({'mods': k, 'cars': map(itemgetter(0), g)}) print cars_by_common_mods # Which, for the sample data gives me (reformatted by hand for clarity) [{'cars': [3, 5], 'mods': [1]}, {'cars': [1, 4, 8], 'mods': [1, 2]}, {'cars': [7], 'mods': [1, 2, 3]}, {'cars': [2, 6], 'mods': [2]}] Now that you've got your lists of car ids and mod ids, if you need the complete objects to work with, you could do a single query for each to get a complete list for each model and create a lookup dict for those, keyed by their ids - then, I believe, Bob is your proverbial father's brother. A: check regroup. it's only for templates, but i guess this kind of classification belongs to the presentation layer anyway. A: You have a few problems here. 
You didn't sort your list before calling groupby, and this is required. From itertools documentation: Generally, the iterable needs to already be sorted on the same key function. Then, you don't duplicate the list returned by groupby. Again, documentation states: The returned group is itself an iterator that shares the underlying iterable with groupby(). Because the source is shared, when the groupby object is advanced, the previous group is no longer visible. So, if that data is needed later, it should be stored as a list: groups = [] uniquekeys = [] for k, g in groupby(data, keyfunc): groups.append(list(g)) # Store group iterator as a list uniquekeys.append(k) And final mistake is using sets as keys. They don't work here. A quick fix is to cast them to sorted tuples (there could be a better solution, but I cannot think of it now). So, in your example, the last part should look like this: sortMethod = lambda x: tuple(sorted(set(x.mods))) sortedMods = sorted(list(mods), key=sortMethod) for key, mods_group in groupby(sortedMods, sortMethod): ret.append(list(mods_group)) A: If performance is a concern (i.e. lots of cars on a page, or a high-traffic site), denormalization makes sense, and simplifies your problem as a side effect. Be aware that denormalizing many-to-many relations might be a bit tricky though. I haven't run into any such code examples yet. A: Thank you all for the helpful replies. I've been plugging away at this problem. A 'best' solution still eludes me, but I've some thoughts. I should mention that the statistics of the data-set I'm working with. In 75% of the cases there will be one Mod. In 24% of the cases, two. In 1% of the cases there will be zero, or three or more. For every Mod, there is at least one unique Car, though a Mod may be applied to numerous Cars. 
Having said that, I've considered (but not implemented) something like-so: class ModSet(models.Model): mods = models.ManyToManyField(Mod) and change cars to class Car(models.Model): modset = models.ForeignKey(ModSet) It's trivial to group by Car.modset: I can use regroup, as suggested by Javier, for example. It seems a simpler and reasonably elegant solution; thoughts would be much appreciated.
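For what it's worth, if sorting before groupby feels fragile, the grouping can also be done without itertools at all by keying a dict on a frozenset of each car's mods — a sketch with plain data standing in for the querysets:

```python
from collections import defaultdict

# Plain data standing in for the Car and Mod querysets.
car_mods = {
    "W1": {"a"}, "W2": {"a"},
    "X1": {"a", "b"}, "X2": {"a", "b"}, "X3": {"a", "b"},
    "Y1": {"b"}, "Y2": {"b"},
    "Z1": {"a", "b", "c"},
}

groups = defaultdict(list)
for car, mods in car_mods.items():
    groups[frozenset(mods)].append(car)   # frozenset is hashable; set is not

result = [{"mods": set(k), "cars": sorted(v)} for k, v in groups.items()]
for entry in sorted(result, key=lambda e: sorted(e["mods"])):
    print(entry)
```

This is a single pass with no sort-order pitfalls, and it sidesteps the groupby requirement quoted in the answers that the input already be sorted on the key.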
{ "language": "en", "url": "https://stackoverflow.com/questions/160298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Is there any thing such as SELECT LAST in sql query? I am using sybase database to query the daily transaction report. I had a subquery within my script. Here is how it goes: SELECT orders.accountid ,items.x,etc (SELECT charges.mistotal FROM charges where items.id = charges.id) FROM items,orders WHERE date = '2008-10-02' Here I am getting the error message as: Subquery cannot return more than one values My values are 7.50, 25.00 I want to return the 25.00, but when I use (SELECT TOP 1 charges.mistotal FROM charges where items.id = charges.id) My result is 7.50 but I want to return 25.00 Does anyone have any better suggestion? A: Under what criteria do you choose to select the 25.00 instead of the 7.5? If it's related to the maximum value, you can try using the MAX() function on that field. If it's related to the chronologically last row added, try using the MAX() on the datetime field, if you have details on the hours and minutes it was added. A: SELECT TOP 1 * FROM dbo.YourTable ORDER BY Col DESC In your case, I guess that would be SELECT TOP 1 charges.mistotal FROM charges where items.id = charges.id ORDER BY charges.mistotal DESC A: You could try this: SELECT MAX(charges.mistotal) FROM charges WHERE items.id = charges.id A: So, can you use inverse order: (SELECT TOP 1 charges.mistotal FROM charges WHERE items.id = charges.id ORDER BY charges.mistotal DESC ) Actually, since you didn't give an explicit order, the sequence of the returned results is undefined, and you are just lucky that it gave you the answer you didn't want; it could have given you the answer you wanted, and then you might not have noticed that it was not always correct until after it went into production. Or, can you use: (SELECT MAX(charges.mistotal) FROM charges WHERE charges.id = items.id ) Or did you really want a SUM? A: To get the first row you use SELECT TOP 1 (or FIRST) * FROM table ordered ascending; to get the last, just invert your order.
A: SELECT TOP 1 charges.mistotal FROM charges where items.id = charges.id ORDER BY charges.id DESC The order by clause will make sure it comes back in the order of the id, and the DESC means descending so it will give you the largest (newest) value first. TOP 1 of course makes sure you just get that one. A: Sort your subquery. If you want the "last" value, you need to define how you determine which item comes last (remember, SQL result sets are unordered by default). For example: (SELECT TOP 1 charges.mistotal FROM charges where items.id = charges.id ORDER BY charges.mistotal DESC) This would return 25.00 instead of 7.50 (from your data examples above), but I'm assuming that you want this value to be "last" because it's bigger. There may be some other field that it makes more sense for you to sort on; maybe you have a timestamp column, for example, and you could sort on that to get the most recent value instead of the largest value. The key is just defining what you mean by "last".
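The ORDER BY ... DESC idea is easy to verify outside Sybase; a sketch using Python's bundled SQLite, where TOP 1 is spelled LIMIT 1:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE charges (id INTEGER PRIMARY KEY, mistotal REAL)")
con.executemany("INSERT INTO charges (mistotal) VALUES (?)",
                [(7.50,), (25.00,)])

# "Last" must be defined by an ORDER BY; without one the row order is undefined.
last_by_id = con.execute(
    "SELECT mistotal FROM charges ORDER BY id DESC LIMIT 1").fetchone()[0]
largest = con.execute(
    "SELECT MAX(mistotal) FROM charges").fetchone()[0]

print(last_by_id, largest)   # both 25.0 here, but for different reasons
```

Note the two queries agree on this data but encode different definitions of "last" (most recently inserted vs. largest), which is exactly the distinction the answers are pressing the asker to make.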
{ "language": "en", "url": "https://stackoverflow.com/questions/160304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Enumerated types as constants across web services? I'm working on a project where I'm trying to avoid hard-coding DB IDs in a .NET service-oriented project. There are some instances where I need to set ID values through code but I don't want to just hard code the IDs since I've done that before and it led to DB alignment nightmares when the auto-incrementing IDs were changed when the DB was dumped to a new system. What I want to do is create enumerated constants that store the IDs so that, at the worst, only 1 file has to be updated if the DB is ever changed instead of trying to go through thousands upon thousands of lines of code to replace any ID in the system. This will work on a single system, but in my company's service oriented environment, enumerations don't serialize with their values, just their names. What is the best way to share IDs across a web service? I'd like to use either enumerations (the ideal situation) or constants in some way, but I can't seem to get this to work. I could make a web method that returns the IDs, but sending a web request for every ID and then serializing the response and deserializing on the client machine just sounds like a bad idea. EDIT I wasn't entirely clear about what I was asking, so I'll elaborate. I want to have a group of constants. The enum would only be used because it groups constants together appropriately. I'm mainly interested in seeing if there is a way to share constants across a web service. I need the values the enum represents, not the enum itself. The enum is never sent between the service and the client except as an integer. Internally everything is stored as an ID, not an enum. Having a separate shared library doesn't sound like the ideal solution since I'm almost at the completion point for this project and I'd only be storing 1 enum/class in the library. It seems like a bit of a waste to make for just one class.
A: I've always created a separate assembly that contains the enumerations and any interfaces the client/server need to share. You can then reference it from both the client and the server without leaking any functionality. A: Enums are inherently serializable as a native data type, so it should not be a problem sharing them across services. But you have to use a shared data contract. We use enums for small lookup lists associating tokens with IDs in the database, but then we also share the data contract between services (we use WCF). This allows us to use the enum tokens to refer to the associated integer value in the code of any service. If the values in the database change, we will have to update the enum manually, but only in one place - the data contract. Another possible solution is to create a cache in each service that needs the IDs. During startup of each service, have it fetch the values from the central data service and store them in an appropriate manner. This could be a custom cache object or maybe a static dictionary. When you experience the renumbering issue, just restart the services. I work on a project where this is done for certain user entities where we need the actual IDs and want to avoid constantly calling the data service for something that doesn't change much, if ever. A: You can define the enums in a common library and use it on the client as well as the server side. When you pass an enum through a web service, it gets converted to a string. Write a simple conversion extension method that converts it back to the appropriate enum. For example:

DayOfWeek ConvertToDayOfWeek(this String str)
{
    return (DayOfWeek)Enum.Parse(typeof(DayOfWeek), str, true);
}

(Note: I'm assuming you have a rich client/desktop app consuming the web service.) A: You can put the enum in a shared assembly and use it in your applications. But I think that serving the enum from the web service should also work.
I have some C# web services with enums, and on the client side (PHP) we pass the enum value name and it is parsed back into the enum correctly on the C# side. A: Just create a public enum in your webservice project, and create at least one public WebMethod that uses the enum as its return type. After updating the web reference in your client project, you can then use that enum in the client code as well. This basically lets you define an enum on the server and use it on both the server and the client, without futzing around with a shared library. A: Is the code on the other side of the webservice C# as well, or do you actually need to pass the integer across the web? If you have control over the code on both sides you can just pass the enum and convert it back to an int once it has crossed the wire:

(Int32)objectThatWasPassed.EnumerationValue;

If you don't have access to the code on the other side and need to pass it as an int, you can create an integer property on whatever you are passing and just set it:

objectBeingPassed.ConstantProperty = (Int32)Whatever.Constant1;

A: There are a few options for what you would like to do. You could define a set of constants within a single class that represents the IDs and can help you translate between the ID and something that is more useful to you. This is moderately flexible, and if you want to get really fancy you can even look up some of your magic IDs from the database (as per hurst's suggestion). Pick the type you want to use to send the values, and then just wrap/ignore the fact that it is a readonly constant. As mentioned previously, you can send enumerations across WCF, but they are extremely brittle. Any time you change the enum values you will be forced to recompile the service and update your client references. To expose enums in WCF, add the [DataContract] attribute to the class and the [EnumMember] attribute to each member. You have been warned. A few of the previous suggestions mentioned using a shared set of values.
This is highly recommended, so that you only have to manage and update it in one place. With such a scenario it is very very very (did I say very?) important to make sure that everyone uses the shared set of values and never the values directly. Good luck.
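Since the question stresses that the enum is only ever sent across the wire as an integer, the core pattern can be sketched in a few lines. This is a hedged illustration in Python rather than C#; the CategoryId names and values are hypothetical, but the round trip (transmit the underlying value, rebuild the constant on the other side, keep the mapping in exactly one place) is the idea the answers describe:

```python
from enum import IntEnum

# Hypothetical ID constants; the names and values are illustrative only.
class CategoryId(IntEnum):
    USERS = 101
    ORDERS = 102
    INVOICES = 103

def serialize_for_service(category):
    """Send the underlying integer across the wire, not the enum name."""
    return int(category)

def deserialize_from_service(raw_value):
    """Reconstruct the enum member from the integer the service sent."""
    return CategoryId(raw_value)

wire_value = serialize_for_service(CategoryId.ORDERS)
round_tripped = deserialize_from_service(wire_value)
```

If the database IDs are ever renumbered, only the CategoryId definition changes; every caller on both sides of the service keeps working.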
{ "language": "en", "url": "https://stackoverflow.com/questions/160313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to check for key being held down on startup in Java I'm trying to write a resolution selection dialog that pops up when a program first starts up. To prevent boring the user, I want to implement the fairly standard feature that you can turn off that dialog with a checkbox, but get it back by holding down the Alt key at startup. Unfortunately, there is no obvious way to ask Java whether a given key is currently being pressed. You can only register to be informed of new key presses via a KeyListener, but that doesn't help if the keypress starts before the app launches. A:

import java.awt.*;
import java.awt.event.*;
import javax.swing.JFrame;

public class LockingKeyDemo {
    static Toolkit kit = Toolkit.getDefaultToolkit();

    public static void main(String[] args) {
        JFrame frame = new JFrame();
        frame.addWindowListener(new WindowAdapter() {
            public void windowActivated(WindowEvent e) {
                System.out.println("caps lock1 = " + kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
                try {
                    Robot robot = new Robot();
                    robot.keyPress(KeyEvent.VK_CONTROL);
                    robot.keyRelease(KeyEvent.VK_CONTROL);
                } catch (Exception e2) {
                    System.out.println(e2);
                }
                System.out.println("caps lock2 = " + kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
            }
        });
        frame.addKeyListener(new KeyAdapter() {
            public void keyReleased(KeyEvent e) {
                System.out.println("caps lock3 = " + kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
            }
        });
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(200, 200);
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
    }
}

A: Well, there are two types of key press detection: event based and polling. If you poll the keyboard for the key's pressed state on startup (through a loop with Thread.sleep(timeInMs) that repeatedly checks whether your key is down), then you can detect if it's already pressed on startup. A: The original question seems to be not answered. The proposed method determines the locking-key state of keys like Caps Lock, Scroll Lock, etc., so it would not work for the Alt pressed state.
Consider the following code:

com.sun.jna.platform.KeyboardUtils.isPressed(java.awt.event.KeyEvent.VK_ALT);

The only problem is that this class is an internal Sun class and not likely to be available in any other JVM. Depending on your project, that may or may not be acceptable. Internally it calls into User32.DLL on Windows:

User32.INSTANCE.GetAsyncKeyState(...)

A:

public class LockingKeyDemo {
    static Toolkit kit = Toolkit.getDefaultToolkit();

    public static void main(String[] args) {
        System.out.println("caps lock2 = " + kit.getLockingKeyState(KeyEvent.VK_CAPS_LOCK));
    }
}

A: I don't know much about Java (I mostly code in C#), but what about having a small loader program written in C or something that then launches your Java app with some parameters (like whether or not a certain key is down)? A: So it appears that you can do this, but only for Caps Lock et al. Hence, I've switched to using Caps Lock for this purpose. Not perfect, but OK.
{ "language": "en", "url": "https://stackoverflow.com/questions/160315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Considerations about a simulation game The kind of simulation game that I have in mind is the kind where you have things to build in various locations and workers/transporters that connect such locations. Something more like the Settlers series. Let's assume I don't want any graphics at the moment; that part I think I can manage. So my doubts are the following:

* Should every entity be a class, with each one having a thread?
* Should entities be grouped in lists inside classes, with each list having a thread?

If one takes implementation 1, it's going to be very hard to run on low-spec machines and it does not scale well for large numbers. If one takes implementation 2, it's going to be better in terms of resources, but then... how should I group the entities?

* Have a class for houses in general and an interface list to manage that?
* Have a class for specific groups of houses and an object list to manage that?

And what about threads?

* Should I have the simplistic main game loop?
* Should I have a thread for each class group?
* How do workers/transporters fit in the picture?

A: The MMORPG Eve Online uses Stackless Python and the actor model to emulate a thread-per-entity system without the resource hit. Check out this link for more information: http://harkal.sylphis3d.com/2005/08/10/multithreaded-game-scripting-with-stackless-python/ A: I'm fairly certain you only want to have one thread executing the game logic. Having multiple threads won't speed up anything, and will only make the code confusing. Having a main game loop is perfectly fine, though things get somewhat trickier if the game has multiplayer. I'm a bit confused about the part of your question relating to classes. If I understand your question correctly, my suggestion would be to have a class for each type of house (pig farm, windmill, etc.) deriving from a common abstract base class House. You'd then store all the houses in the game world in a list of Houses.
A: The normal approach does not use threading at all, but rather implements entities as state machines. Then your main loop looks like this:

while( 1 ) {
    foreach( entity in entlist ) {
        entity->update();
    }
    render();
}

A: Think about using Erlang. With Erlang you can spawn a lot more processes (= lightweight threads) than normal system threads. Further, it's distributed, meaning that if your system isn't good enough, you can add another node. Another alternative would be Stackless Python (or the current Python alternative), as it also supports a kind of lightweight thread, which is very useful for game engines. Eve Online uses it for its servers. It isn't distributed, but that can be achieved manually. A: While the answer by @Mike F is mostly correct, you have to bear in mind that iterating over the entities in a foreach cycle makes the order of evaluation deterministic, which has undesirable side effects. On the other hand, introducing threads opens up potential for heisenbugs and concurrency issues, so the best way I have seen and used relies on combining two cycles: the first one collects actions from agents/workers based on the previous state, and the second cycle composes the results of the actions and updates the state of the simulation. To avoid possible bias, at each cycle the order of evaluation is randomized. This, by the way, scales to massively parallel evaluation, subject to a synchronization at the end of each cycle. A: I would avoid making a separate class for each entity, because then you'll have situations where you're either repeating code for shared capabilities, or you'll have a funky inheritance tree. I'd argue that what you want is a single class with functionality composed onto its objects. I saw an article on a blog talking about this very concept in an RTS... wait, I think it was a tour of design patterns that someone was writing.
Use the Visitor pattern spawning a thread on each object's DoEvents (for lack of a better word) method to tell each object to do what it's going to do during this given loop. Sync the threads at the end of your loop because you don't want to have some objects with complex logic still doing its thing from ten loops back when in reality it was destroyed five loops ago.
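To make the single-threaded state-machine approach concrete, here is a minimal sketch (in Python; the entity names and the three-state transition table are invented for illustration). It combines the plain update loop from the pseudocode above with the randomized evaluation order suggested in another answer:

```python
import random

class Entity:
    """Minimal state-machine entity: each tick it advances its own state."""
    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def update(self):
        # Trivial transition table standing in for real game logic.
        self.state = {"idle": "working", "working": "resting", "resting": "idle"}[self.state]

def run_ticks(entities, ticks, rng=None):
    """One thread, one loop; optionally shuffle the order each tick to avoid bias."""
    for _ in range(ticks):
        order = list(entities)
        if rng is not None:
            rng.shuffle(order)
        for entity in order:
            entity.update()

entities = [Entity(f"worker-{i}") for i in range(3)]
run_ticks(entities, ticks=3, rng=random.Random(0))
```

No threads are involved, so there is nothing to synchronize, and the per-tick shuffle removes the fixed evaluation order that a plain foreach would impose.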
{ "language": "en", "url": "https://stackoverflow.com/questions/160318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you localize a database driven website I've been playing with the .NET built-in localization features and they seem to all rely on putting data in resx files. But most systems can't rely on this because they are database driven. So how do you solve this issue? Is there a built-in .NET way, or do you create a translations table in SQL and do it all manually? And if you have to do this on the majority of your sites, is there any reason to even use the resx way of localization? An example of this is an FAQ list on my site. I keep this list in the database so I can easily add/remove entries, but by putting it in the database, I have no good way of translating this information into multiple languages. A: For a given item in your data model, split out the description part into a localized table with a locale ID column (LCID). So the table for Product would not in fact contain the product's description or name, but only its hard and fast values (ProductId, EAN, NumberInStock, NextStockData, IsActive, IsPublished), etc. ProductDescription then contains ProductId, Name, Description, LCID. A: I live in Canada, so multilingualism is a big deal. I've seen two approaches. The first option is to store all localized data for a specific record in a different table, linked to the original table by the primary key and the locale. The second option is similar to the first, except that for each locale there is a different table, with the locale as a suffix for the table name.

Option A
Item (ItemID, ...)
ItemLocal (ItemID, LocaleID, ....)

Option B
Item (ItemID, ...)
Item_ENUS (ItemID, ....)
Item_ENGB (ItemID, ....)
Item_FR (ItemID, ....)

A third option I thought of recently, which would be really nice if a database supported it natively, would be to store the values for all locales in the same field. If the field was set up as a multi-locale varchar, then you would access it by passing a parameter to specify the language.
Nothing like this exists as far as I know, but it's something that I think would really make things a lot easier, and a lot more fluid. A: In my opinion, localizing dynamic content (e.g., your FAQ) should be done by you in your database. Depending on how your questions are stored, I would probably create a "locale" column and use that when selecting the FAQ questions from the database. I'm not sure how well this would scale once you started localizing lots of tables. For static content (e.g., form field labels, static text, icons, etc.) you should probably be just fine using file-based resources. If you really wanted to, however, it looks like it wouldn't be super hard to create a custom resource provider implementation that could handle this. Here's some related links:

* http://channel9.msdn.com/forums/Coffeehouse/250892-Localizing-with-a-database-or-resx-files/
* http://weblogs.asp.net/scottgu/archive/2006/05/30/ASP.NET-2.0-Localization-_2800_Video_2C00_-Whitepaper_2C00_-and-Database-Provider-Support_2900_.aspx
* http://www.arcencus.nl/Blogs/tabid/105/EntryID/20/Default.aspx
* http://msdn.microsoft.com/en-us/library/aa905797.aspx
* http://www.codeproject.com/KB/aspnet/customsqlserverprovider.aspx

A: Currently, translation is not something that can be done automatically. The best way is to get a person to translate, and use Nick's methods to show the proper language. A: We use a mix of RESX files and Option A of Kibbee's response. We created a simple tool to manage the RESX files online: http://blog.lavablast.com/post/2008/02/RESX-file-Web-Editor.aspx
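A minimal sketch of the "Option A" localized-table approach, using an in-memory SQLite database for illustration (the table and column names follow the ProductDescription example above; the fallback-to-default-locale behavior is my own addition, not something the answers prescribe):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Product (ProductId INTEGER PRIMARY KEY, NumberInStock INTEGER);
    CREATE TABLE ProductDescription (
        ProductId INTEGER, LCID TEXT, Name TEXT, Description TEXT,
        PRIMARY KEY (ProductId, LCID));
    INSERT INTO Product VALUES (1, 42);
    INSERT INTO ProductDescription VALUES
        (1, 'en-US', 'Windmill', 'A windmill'),
        (1, 'fr-CA', 'Moulin', 'Un moulin');
""")

def product_name(product_id, lcid, fallback="en-US"):
    """Look up the localized name, falling back to a default locale."""
    row = conn.execute(
        "SELECT Name FROM ProductDescription WHERE ProductId=? AND LCID=?",
        (product_id, lcid)).fetchone()
    if row is None:
        row = conn.execute(
            "SELECT Name FROM ProductDescription WHERE ProductId=? AND LCID=?",
            (product_id, fallback)).fetchone()
    return row[0]
```

The hard values live once in Product; only the per-locale strings are duplicated, one row per (ProductId, LCID) pair.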
{ "language": "en", "url": "https://stackoverflow.com/questions/160335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: What svn command would list all the files modified on a branch? In svn, I have a branch which was created, say, at revision 22334. Commits were then made on the branch. How do I get a list of all files that were changed on the branch compared to what's on the trunk? I do not want to see files that were changed on the trunk between when the branch was created and "now". A: This will do it, I think:

svn diff -r 22334:HEAD --summarize <url of the branch>

A: You can also get a quick list of changed files, if that's all you're looking for, using the status command with the -u option:

svn status -u

This will show you what revision the file is at in the current code base versus the latest revision in the repository. I only use diff when I actually want to see differences in the files themselves. There is a good tutorial on svn commands here that explains a lot of these common scenarios: SVN Command Reference A: You can use the following command:

svn status -q

According to svnbook: With --quiet (-q), it prints only summary information about locally modified items. WARNING: The output of this command only shows your own modifications, so I suggest doing an svn up to get the latest version of the files and then using svn status -q to get the files you have modified. A:

echo You must invoke st from within branch directory
SvnUrl=`svn info | grep URL | sed 's/URL: //'`
SvnVer=`svn info | grep Revision | sed 's/Revision: //'`
svn diff -r $SvnVer --summarize $SvnUrl

A: This will list only modified files:

svn status -u | grep M

A: The -u option will also list unversioned files, such as object files produced during compilation. To filter those out, you can use:

svn status -u | grep -v '\?'

A: svn log -q -v shows paths and hides comments. All the paths are indented, so you can search for lines starting with whitespace.
Then pipe to cut and sort to tidy up: svn log --stop-on-copy -q -v | grep '^[[:space:]]'| cut -c6- | sort -u This gets all the paths mentioned on the branch since its branch point. Note it will list deleted and added, as well as modified files. I just used this to get the stuff I should worry about reviewing on a slightly messy branch from a new dev. A: I do this as a two-step process. First, I find the version that was the origin of the branch. From within the checkout of the branch: svn log --stop-on-copy |tail -4 --stop-on-copy tells SVN to only operate on entries after the branch. tail gets you the last log entry, which is the one that contains the branch information. The number that begins with an 'r' is the revision at which you branched. Then, use svn diff to find changes since that version: svn diff -r <revision at which you branched>:head --summarize the --summarize option shows a file list only, without the actual diff contents, similar to the 'svn status' output. If you want to see the actual diff, just remove the --summarize option.
{ "language": "en", "url": "https://stackoverflow.com/questions/160370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: String problem with SQL Reader?

Function FillAdminAccount() As Boolean
    FillAdminAccount = True
    Try
        SQLconn.ConnectionString = "connect timeout=9999999;" & _
            "data source=" & DefaultIserver & ";" & _
            "initial catalog=" & DefaultIdBase & "; " & _
            "user id=userid;" & _
            "password=userpass;" & _
            "persist security info=True; " & _
            "packet size=4096"
        SQLconn.Open()
        SQLcmd.CommandType = CommandType.Text
        SQLcmd.CommandText = "Select distinct username, cast(convert(varchar,userpassword) as varchar) as 'userpassword' from " & tblUsersList & " where usertype='MainAdmin'"
        SQLcmd.Connection = SQLconn
        SQLreader = SQLcmd.ExecuteReader
        While SQLreader.Read = True
            CurrentAdminUser = SQLreader("username").ToString
            CurrentAdminPass = SQLreader("userpassword").ToString 'PROBLEM'
        End While
    Catch ex As Exception
        ErrorMessage(ex)
    Finally
        If SQLconn.State = ConnectionState.Open Then SQLconn.Close()
        If SQLreader.IsClosed = False Then SQLreader.Close()
    End Try
End Function 'FillAdminAccount

Please see the line with the comment PROBLEM. In this code, the output is equal to "userpassword. As you can see, there is no quotation mark on the right, and I wonder why. By the way, the data type of userpassword in the database is BINARY. I wish you could help me on this. Thank you..x_x A: NEVER store actual passwords in the db. Now, it looks like your passwords might not quite be plain text because of the convert/cast operations, but you still have a problem. At the very least, any encryption used is easily reversible, and if your SQL Server ever ends up on a different machine from the application, then passwords will be traveling over the wire in plain text. If you MUST do this (perhaps because of a legacy system or a mandate from above), then at least do the matching on the server so that the password never comes back to the application. What you should be doing is using something like SQL Server 2005's HashBytes() function to store only a hash of the actual password.
When someone tries to log in, hash their attempted password and match up the hashes. As to your specific question, my guess is that the cast or convert operation failed, resulting in a NULL value coming back to the application. And do you have both a CAST() and a CONVERT() to the same type? That's redundant. A: Could it be that "as varchar) as 'userpassword'" should be "as varchar) as [userpassword]" or "as varchar) as userpassword"? A: @Oglester is right; it's the single quotes around 'userpassword'. This is not a bug, but it's just dumb:

cast(convert(varchar,userpassword) as varchar

You can use cast or convert, but there's no point in using both.
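To illustrate the hash-only approach in a database-agnostic way, here is a sketch in Python (T-SQL's HashBytes() plays roughly the role that pbkdf2_hmac plays here; the 16-byte salt and 100,000-iteration count are illustrative choices, not values from the answer):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only a salted, iterated hash - never the password itself."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)

# At signup: persist (salt, stored) in the users table, not the password.
salt, stored = hash_password("s3cret")
```

At login, only the salt and digest ever leave the database; the submitted password is hashed and compared, so no reversible secret is stored or transmitted.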
{ "language": "en", "url": "https://stackoverflow.com/questions/160373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: TSQL: How do I do a self-join in XML to get a nested document? I have a SQL Server 2005 table like this:

create table Taxonomy(
    CategoryId integer primary key,
    ParentCategoryId integer references Taxonomy(CategoryId),
    CategoryDescription varchar(50)
)

with data looking like

CategoryId  ParentCategoryId  CategoryDescription
123         null              foo
345         123               bar

I'd like to query it into an xml document like this:

<taxonomy>
  <category categoryid="123" categorydescription="foo">
    <category categoryid="345" categorydescription="bar"/>
  </category>
</taxonomy>

Is it possible to do this with FOR XML AUTO, ELEMENTS? Or do I need to use FOR XML EXPLICIT?
{ "language": "en", "url": "https://stackoverflow.com/questions/160374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why move your Javascript files to a different main domain that you also own? I've noticed that just in the last year or so, many major websites have made the same change to the way their pages are structured. Each has moved their Javascript files from being hosted on the same domain as the page itself (or a subdomain of that), to being hosted on a differently named domain. It's not simply parallelization Now, there is a well known technique of spreading the components of your page across multiple domains to parallelize downloading. Yahoo recommends it as do many others. For instance, www.example.com is where your HTML is hosted, then you put images on images.example.com and javascripts on scripts.example.com. This gets around the fact that most browsers limit the number of simultaneous connections per server in order to be good net citizens. The above is not what I am talking about. It's not simply redirection to a content delivery network (or maybe it is--see bottom of question) What I am talking about is hosting Javascripts specifically on an entirely different domain. Let me be specific. Just in the last year or so I've noticed that: youtube.com has moved its .JS files to ytimg.com cnn.com has moved its .JS files to cdn.turner.com weather.com has moved its .JS files to j.imwx.com Now, I know about content delivery networks like Akamai who specialize in outsourcing this for large websites. (The name "cdn" in Turner's special domain clues us in to the importance of this concept here). But note with these examples, each site has its own specifically registered domain for this purpose, and its not the domain of a content delivery network or other infrastructure provider. In fact, if you try to load the home page off most of these script domains, they usually redirect back to the main domain of the company. And if you reverse lookup the IPs involved, they sometimes appear point to a CDN company's servers, sometimes not. Why do I care? 
Having formerly worked at two different security companies, I have been made paranoid of malicious Javascripts. As a result, I follow the practice of whitelisting sites that I will allow Javascript (and other active content such as Java) to run on. As a result, to make a site like cnn.com work properly, I have to manually put cnn.com into a list. It's a pain in the behind, but I prefer it over the alternative. When folks used things like scripts.cnn.com to parallelize, that worked fine with appropriate wildcarding. And when folks used subdomains off the CDN company domains, I could just permit the CDN company's main domain with a wildcard in front as well and kill many birds with one stone (such as *.edgesuite.net and *.akamai.com). Now I have discovered that (as of 2008) this is not enough. Now I have to poke around in the source code of a page I want to whitelist, and figure out what "secret" domain (or domains) that site is using to store their Javascripts on. In some cases I've found I have to permit three different domains to make a site work. Why did all these major sites start doing this? EDIT: OK as "onebyone" pointed out, it does appear to be related to CDN delivery of content. So let me modify the question slightly based on his research... Why is weather.com using j.imwx.com instead of twc.vo.llnwd.net? Why is youtube.com using s.ytimg.com instead of static.cache.l.google.com? There has to a reasoning behind this. A: Limit cookie traffic? After a cookie is set on a specific domain, every request to that domain will have the cookie sent back to the server. Every request! That can add up quickly. 
A: Lots of reasons:

* CDN - a different DNS name makes it easier to shift static assets to a content distribution network.
* Parallelism - images, stylesheets, and static javascript are using two other connections which are not going to block other requests, like ajax callbacks or dynamic images.
* Cookie traffic - exactly correct, especially with sites that have a habit of storing far more than a simple session id in cookies.
* Load shaping - even without a CDN there are still good reasons to host the static assets on fewer web servers optimized to respond extremely quickly to a huge number of file url requests, while the rest of the site is hosted on a larger number of servers responding to more processor-intensive dynamic requests.

Update - two reasons you don't use the CDN's dns name: The client dns name acts as a key to the proper "hive" of assets the CDN is caching. Also, since your CDN is a commodity service, you can change the provider by altering the dns record - so you can avoid any page changes, reconfiguration, or redeployment on your site. A: Your follow-up question is essentially: Assuming a popular website is using a CDN, why would they use their own TLD like imwx.com instead of a subdomain (static.weather.com) or the CDN's domain? Well, the reason for using a domain they control versus the CDN's domain is that they retain control - they could potentially even change CDNs entirely and only have to change a DNS record, versus having to update links in thousands of pages/applications. So, why use nonsense domain names? Well, a big thing with helper files like .js and .css is that you want them to be cached downstream by proxies and people's browsers as much as possible. If a person hits gmail.com and all the .js is loaded out of their browser cache, the site appears much snappier to them, and it also saves bandwidth on the server end (everybody wins). The problem is that once you send HTTP headers for really aggressive caching (i.e.
cache me for a week or a year or forever), these files aren't ever reliably loaded from the server any more, and you can't make changes/fixes to them because things will break in people's browsers. So what companies have to do is stage these changes and actually change the URLs of all of these files to force people's browsers to reload them. Cycling through domains like "a.imwx.com", "b.imwx.com", etc. is how this gets done. By using a nonsense domain name, the Javascript developers and their Javascript sysadmin/CDN liaison counterparts can have their own domain name/DNS through which they push these changes, and for which they are accountable/autonomous. Then, if any sort of cookie-blocking or script-blocking starts happening on the TLD, they just change from one nonsense TLD to kyxmlek.com or whatever. They don't have to worry about accidentally doing something evil that has countermeasure side effects on all of *.google.com. A: I think there's something in the CDN theory. For example:

$ host j.imwx.com
j.imwx.com CNAME twc.vo.llnwd.net
twc.vo.llnwd.net A 87.248.211.218
twc.vo.llnwd.net A 87.248.211.219

$ whois llnwd.net
<snip ...>
Registrant:
Limelight Networks Inc.
2220 W. 14th Street
Tempe, Arizona 85281-6945
United States

Limelight is a CDN. Meanwhile:

$ host s.ytimg.com
s.ytimg.com CNAME static.cache.l.google.com
static.cache.l.google.com A 74.125.100.97

I'm guessing that this is a CDN for static content run internally by Google.

$ host cdn.turner.com
cdn.turner.com A record currently not present

Ah well, can't win 'em all. By the way, if you use Firefox with the NoScript add-on, then it will automate the process of hunting through source and GUI-fy the process of whitelisting. Basically, click on the NoScript icon in the status bar and you're given a list of domains with options to temporarily or permanently whitelist, including "all on this page".
A: I implemented this solution about two to three years ago at a previous employer, when the website started getting overloaded due to a legacy web server implementation. By moving the CSS and layout images off to an Apache server, we reduced the load on the main server and increased the speed no end. However, I've always been under the impression that Javascript functions can only be accessed from within the same domain as the page itself. Newer websites don't seem to have this limitation: as you mention, many have Javascript files on separate subdomains or even completely detached domains altogether. Can anyone give me a pointer on why this is now possible, when it wasn't a couple of years ago? A: It's not just javascript that you can move to different domains; moving as many assets as possible will yield performance improvements. Most browsers have a limit on the number of simultaneous connections you can make to a single domain (I think it's around 4), so when you have a lot of images, js, css, etc., there's often a hold-up in downloading each file. You can use something like YSlow and FireBug to view when each file is downloaded from the server. By having assets on separate domains you lessen the load on your primary domain and can have more simultaneous connections and download more files at any given time. We recently launched a real-estate website which has a lot of images (of the houses, duh :P) which uses this principle for the images, so it's a lot faster to list the data. We've also used this on many other websites which have high asset volume. A: I think you answered your own question. I believe your issue is security-related, rather than the WHY. Perhaps a new META tag is in order that would describe valid CDNs for the page in question; then all we need is a browser add-on to read them and behave accordingly. A: Would it be because of blocking done by spam and content filters?
If they use weird domains then it's harder to figure out, and/or you'll end up blocking something you want. Dunno, just a thought. A: If I were a big-name, multi-brand company, I think this approach would make sense because you want to make the javascript code available as a library. I would want to make as many pages as possible be consistent in handling things like addresses, state names, and zip codes. AJAX probably makes this concern prominent. In the current internet business model, domains are brands, not network names. If you get bought or spin off brands, you end up with a lot of domain changes. This is a problem for even the most prominent sites. There are still links that point to useful documents in *.netscape.com and *.mcom.com that are long gone. Wikipedia for Netscape says: "On October 12, 2004, the popular developer website Netscape DevEdge was shut down by AOL. DevEdge was an important resource for Internet-related technologies, maintaining definitive documentation on the Netscape browser, documentation on associated technologies like HTML and JavaScript, and popular articles written by industry and technology leaders such as Danny Goodman. Some content from DevEdge has been republished at the Mozilla website." So, that would be, in less than a 10-year period:

* Mosaic Communications Corporation
* Netscape Communications Corporation
* AOL
* AOL Time Warner
* Time Warner

If you put the code in a domain that is NOT a brand name, you retain a lot of flexibility, and you don't have to refactor all the entry points, access control, and code references when the web sites are renamed. A: I have worked with a company that does this. They're in a datacenter with fairly good peering, so the CDN reasoning isn't as big for them (maybe it would help, but they don't do it for that reason).
Their reason is that they run several webservers in parallel which collectively handle their dynamic pages (PHP scripts), and they serve images and some javascript off of a separate domain on which they use a fast, lightweight webserver such as lighttpd or thttpd to serve up images and static javascript. Serving PHP requires the PHP interpreter; static Javascript and images do not. A lot can be stripped out of a full-featured webserver when all you need to do is the absolute minimum. Sure, they could probably use a proxy that redirects requests to a specific subdirectory to a different server, but it's easier to just handle all the static content with a different server.
{ "language": "en", "url": "https://stackoverflow.com/questions/160376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Improving the quality of code? So, in reading this site, it seems that the shop in which I work does a lot of things wrong and some things right. How can I improve the code that I work with from my colleagues? The only thing I can think of is to lead by example - start using Boost, etc. Any other thoughts? A: You probably have to look more closely at what it is your shop does wrong and what they do right. What can you actually change there? What can you change about your own practices that will improve your skills or those of your team? It can be difficult to realize change in an entrenched shop. Try proposing code reviews (on your code first), which could lead to discussion. For tangible items, I'd look at Scott Meyers' Effective C++, etc. Develop your skillset and you will either help improve others around you or move on to a shop that will. Also, look at the Gang of Four's Design Patterns book. A: Code reviews are the best way I've found to improve code quality overall. Reviewing code from different individuals helping each other increases general awareness of different techniques and helps propagate best practices. Hiring a person more experienced than you are is also a good tool, but it is a bit more tedious to implement. A: * *Reading good programming books *Learning from others' code - Open source projects are the best place to start *Read good blogs and forums regularly - Sutter's Mill, Coding Horror, Martin Fowler etc. *Code reviews *Unit tests *Using good libraries like Boost, STL. Also understanding their implementation A: Leading by example is always a good thing, though convincing others that your example is better than however they're currently doing it is not so easy. Constructive criticism through code review is probably your best bet for gently suggesting alternative approaches to how your colleagues work. The key point is to convince others that what you're proposing really is better in a tangible way that they can appreciate.
A: Sometimes folks have to see that your way is working better than their way. It is often difficult to make people change. Have you considered writing unit tests, if you don't do that already? I've found it to really improve my production code and give me more confidence that what I'm writing is what I'm supposed to be writing. I like Jason's idea about code reviews. They can be helpful or they can be a place for arguing - really depends on how you set the tone. A: Architect and design the project well so that none of the developers will be able to take a different route that compromises quality. If you set out a great design, people will just follow the route and they will learn automatically. A: Other things to try are adding unit tests and documentation. A: Although this probably isn't as direct of an answer, I recommend you pick up the book Code Complete. I find it to be the best resource for learning how to be a better programmer. If you read through the whole book and understand what it talks about, you'll really learn how to better yourself, and your code. A: I find writing unit tests helps code quality a lot - it means you have to think about how your code will interact with the tests and other parts of the code. Peer code review: checking code quality will also make programmers think about how they write their code. A: It's great that you recognize that there's room for improvement and have the desire to try to enact some change. I suggest reading James Shore's 19-week diary where he documents the steps that he went through to enact agile development at his company. It's a hard fight, but his experience shows that you can make a difference. A: Just asking the question is a good start.
Specifically you can: * *Admit that your code sucks *Start asking others, preferably others with more experience, to review your code *Implement a continuous build server - you have to be the one who uses this first *Have courage because this can be difficult *Be humble *Read Code Complete *Use a software development methodology that encourages teamwork. Some of the agile methodologies are really good at this *Read development blogs *Get involved in a user group Change is hard and you have to be the one who changes first. If you are working in an environment where others are happy with the way it is, you are going to have rough going. Be persistent about wanting to improve code quality. A: I am biased (as a result of my work), but depending on your budget (if it exists) static analysis is a possible option. There are lots of different types of tools, some of which also include coding standard enforcement checking. If you use g++, you might be able to get a basic amount of help from the -Weffc++ option.
{ "language": "en", "url": "https://stackoverflow.com/questions/160379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Extract all strings from a Java project I have a rather big number of source files that I need to parse, extract all string literals from, and put them in a file as plain old Java constants. For example: Label l = new Label("Cat"); Would become: Label l = new Label(Constants.CAT); And in Constants.java I would have: public final static String CAT = "Cat"; I do not want the strings to be externalized in a property text file. One reason is for consistency and code readability. The other is that our client code uses GWT, which does not support the Java property file mechanism. I could write some sort of parser (using an Ant replace task, maybe)? But I wondered if an IDE already does this sort of thing automatically. A: To complete Peter Kelley's answer, for the Eclipse IDE you might consider the AST solution. You might then write an AST program which parses your source code and does what you want. A full example is available in this Eclipse Corner article, with more details in the Eclipse help. And you can find some examples in Listing 5 of the section "Implementation of in-place translation" of Automating the embedding of Domain Specific Languages in Eclipse JDT, alongside multiple examples in GitHub projects. A: Eclipse does do this automatically. Right-click the file, choose "Source", then "Externalize strings" This doesn't do exactly what you requested (having the strings in a Constants.java file as Strings) but the method used is very powerful indeed. It moves them into a properties file which can be loaded dynamically depending on your locale. Having them in a separate Java source file as you suggest means you'll either have ALL languages in your application at once or you'll ship different applications depending on locale. We use it for our applications where even the basic stuff has to ship in English and Japanese - our more complicated applications ship with 12 languages - we're not a small software development company by any means :-).
If you do want them in a Java file, despite the shortcomings already mentioned, it's a lot easier to write a program to morph the properties file into a Java source file than it is to try and extract the strings from free-form Java source. All you then need to do is modify the Accessor class to use the in-built strings (in the separate class) rather than loading them at run time. A: There are some good reasons why you wouldn't want to do this. Aside from the fact that any such generated file (I didn't know about the eclipse function) is not going to distinguish between strings that you're setting, for example, as constructor args in test classes and things you actually want to have as constants, the bigger issue is that all of your public static finals are going to be compiled into your classes, and if you want to alter the classes' behaviour you'll need to alter not only the class holding the constants but everything that references it. A: I fully acknowledge what Pax Diablo said. We're using that function too. When applied to a class file the function "Externalize strings" will create two files, a class Messages.class and a properties file messages.properties. Then it will redirect all direct usages of string literals to a call to Messages.get(String key), using the key you entered for the string in the "Ext. String" wizard. BTW: What's so bad about property files? As he said, you can just change the properties file and don't have to change the class if you need to change the text. Another advantage is this one: The way of extracting the string literals into a property file leaves you free to translate the source language into any language you want without modifying any code. The properties file loader loads the target language file automatically by using the corresponding file with the language ISO code. So you don't have to worry about the platform your code runs on; it will select the appropriate language (nearly) automatically.
See documentation of class ResourceBundle for how this works in detail. A: You may want to check out the Jackpot source transformation engine in NetBeans which would allow you to script your source transformations.
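The batch-parser idea from the question can be sketched quickly with a regex pass. Below is a rough illustration (in Python for brevity, since none of the answers' tools are being reproduced here); it is NOT how Eclipse's AST rewriting or Jackpot works, and it ignores comments, character literals, and name collisions, so treat it only as a starting point:

```python
import re

# Matches a double-quoted Java string literal, allowing escaped characters.
STRING_LITERAL = re.compile(r'"((?:[^"\\]|\\.)*)"')

def to_constant_name(s):
    # "Cat" -> "CAT"; runs of non-alphanumerics become single underscores.
    name = re.sub(r'[^A-Za-z0-9]+', '_', s).strip('_').upper()
    return name or 'EMPTY'

def extract_constants(java_source):
    """Map generated constant names to the string literals found in the source."""
    constants = {}
    for match in STRING_LITERAL.finditer(java_source):
        literal = match.group(1)
        if literal:  # skip empty strings
            constants[to_constant_name(literal)] = literal
    return constants

def render_constants_class(constants):
    # Emit a Constants.java body in the style the question asks for.
    lines = ['public final class Constants {']
    for name, value in sorted(constants.items()):
        lines.append('    public final static String %s = "%s";' % (name, value))
    lines.append('}')
    return '\n'.join(lines)

source = 'Label l = new Label("Cat");'
print(render_constants_class(extract_constants(source)))
# public final class Constants {
#     public final static String CAT = "Cat";
# }
```

A real implementation would also have to rewrite each call site to reference Constants.CAT, which is exactly where an AST-based approach (as in the Eclipse JDT answer) is much safer than a regex.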
{ "language": "en", "url": "https://stackoverflow.com/questions/160382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Apache mod_rewrite to catch XML requests How do I create an Apache RewriteRule that catches any request URL ending in .xml, strips off the .xml and passes it to a specific script? http://www.example.com/document.xml, becomes http://www.example.com/document passed to script.php A: This should do the trick, I believe. RewriteRule ^(.+)\.xml$ script.php?path=$1
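The capture behavior of that rule can be sanity-checked with an equivalent regular expression. This is a rough Python approximation only: Apache adds extra semantics (per-directory prefix stripping, flags, rule chaining) that are not modeled here.

```python
import re

# Mirrors: RewriteRule ^(.+)\.xml$ script.php?path=$1
rule = re.compile(r'^(.+)\.xml$')

def rewrite(url_path):
    match = rule.match(url_path)
    if match:
        # $1 is everything before the trailing ".xml"
        return 'script.php?path=%s' % match.group(1)
    return url_path  # no match: leave the request untouched

print(rewrite('document.xml'))  # script.php?path=document
print(rewrite('index.html'))    # index.html
```

In a real .htaccess or vhost config you would typically precede the rule with RewriteEngine On, and append flags such as [L,QSA] so processing stops there and any existing query string is preserved.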
{ "language": "en", "url": "https://stackoverflow.com/questions/160384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ListBox with Grid as ItemsPanelTemplate produces weird binding errors I've got a ListBox control and I'm presenting a fixed number of ListBoxItem objects in a grid layout. So I've set my ItemsPanelTemplate to be a Grid. I'm accessing the Grid from code behind to configure the RowDefinitions and ColumnDefinitions. So far it's all working as I expect. I've got some custom IValueConverter implementations for returning the Grid.Row and Grid.Column that each ListBoxItem should appear in. However I get weird binding errors sometimes, and I can't figure out exactly why they're happening, or even if they're in my code. Here's the error I get: System.Windows.Data Error: 4 : Cannot find source for binding with reference 'RelativeSource FindAncestor, AncestorType='System.Windows.Controls.ItemsControl', AncestorLevel='1''. BindingExpression:Path=HorizontalContentAlignment; DataItem=null; target element is 'ListBoxItem' (Name=''); target property is 'HorizontalContentAlignment' (type 'HorizontalAlignment') Can anybody explain what's going on? 
Oh, and, here's my XAML: <UserControl.Resources> <!-- Value Converters --> <v:GridRowConverter x:Key="GridRowConverter" /> <v:GridColumnConverter x:Key="GridColumnConverter" /> <v:DevicePositionConverter x:Key="DevicePositionConverter" /> <v:DeviceBackgroundConverter x:Key="DeviceBackgroundConverter" /> <Style x:Key="DeviceContainerStyle" TargetType="{x:Type ListBoxItem}"> <Setter Property="FocusVisualStyle" Value="{x:Null}" /> <Setter Property="Background" Value="Transparent" /> <Setter Property="Grid.Row" Value="{Binding Path=DeviceId, Converter={StaticResource GridRowConverter}}" /> <Setter Property="Grid.Column" Value="{Binding Path=DeviceId, Converter={StaticResource GridColumnConverter}}" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ListBoxItem}"> <Border CornerRadius="2" BorderThickness="1" BorderBrush="White" Margin="2" Name="Bd" Background="{Binding Converter={StaticResource DeviceBackgroundConverter}}"> <TextBlock FontSize="12" HorizontalAlignment="Center" VerticalAlignment="Center" Text="{Binding Path=DeviceId, Converter={StaticResource DevicePositionConverter}}" > <TextBlock.LayoutTransform> <RotateTransform Angle="270" /> </TextBlock.LayoutTransform> </TextBlock> </Border> <ControlTemplate.Triggers> <Trigger Property="IsSelected" Value="true"> <Setter TargetName="Bd" Property="BorderThickness" Value="2" /> <Setter TargetName="Bd" Property="Margin" Value="1" /> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Border CornerRadius="3" BorderThickness="3" Background="#FF333333" BorderBrush="#FF333333" > <Grid ShowGridLines="False"> <Grid.RowDefinitions> <RowDefinition Height="15" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <StackPanel Grid.Row="0" Orientation="Horizontal"> <Image Margin="20,3,3,3" Source="Barcode.GIF" Width="60" Stretch="Fill" /> </StackPanel> <ListBox ItemsSource="{Binding}" x:Name="lstDevices" Grid.Row="1" 
ItemContainerStyle="{StaticResource DeviceContainerStyle}" Background="#FF333333" SelectedItem="{Binding SelectedDeviceResult, ElementName=root, Mode=TwoWay}" > <ListBox.ItemsPanel> <ItemsPanelTemplate> <Grid> <Grid.LayoutTransform> <RotateTransform Angle="90" /> </Grid.LayoutTransform> </Grid> </ItemsPanelTemplate> </ListBox.ItemsPanel> </ListBox> </Grid> </Border> A: This is an amalgam of the other answers here, but for me, I had to apply the Setter in two places to solve the error, although this was when using a custom VirtualizingWrapPanel. If I remove either one of the below Setter declarations, my errors reappear. <ListView> <ListView.Resources> <Style TargetType="ListViewItem"> <Setter Property="HorizontalContentAlignment" Value="Left" /> <Setter Property="VerticalContentAlignment" Value="Top" /> </Style> </ListView.Resources> <ListView.ItemContainerStyle> <Style TargetType="ListViewItem"> <Setter Property="HorizontalContentAlignment" Value="Left" /> <Setter Property="VerticalContentAlignment" Value="Top" /> </Style> </ListView.ItemContainerStyle> <ListView.ItemsPanel> <ItemsPanelTemplate> <controls:VirtualizingWrapPanel /> </ItemsPanelTemplate> </ListView.ItemsPanel> </ListView> I don't really have the time to investigate further at the moment, but I suspect it's related to the default style that JTango mentions in his answer - I'm not really customising my template to a huge degree. I think there's more mileage to be had out of the other answers, but I thought I'd post this on the off chance it helps someone in the same boat. David Schmitt's answer looks like it might describe the root cause. A: This is a common problem with ListBoxItems and other ephemeral *Item containers. They are created asynchronously/on the fly, while the ItemsControl is loaded/rendered. You have to attach to ListBox.ItemContainerGenerator's StatusChanged event and wait for the Status to become ContainersGenerated before trying to access them.
A: I had the same problem as you and I just wanted to share my solution. I have tried all options from this post but the last one was the best for me - thx Chris. So my code: <ListBox.Resources> <Style x:Key="listBoxItemStyle" TargetType="ListBoxItem"> <Setter Property="HorizontalContentAlignment" Value="Center" /> <Setter Property="VerticalContentAlignment" Value="Center" /> <Setter Property="MinWidth" Value="24"/> <Setter Property="IsEnabled" Value="{Binding IsEnabled}"/> </Style> <Style TargetType="ListBoxItem" BasedOn="{StaticResource listBoxItemStyle}"/> </ListBox.Resources> <ListBox.ItemContainerStyle> <Binding Source="{StaticResource listBoxItemStyle}"/> </ListBox.ItemContainerStyle> <ListBox.ItemsPanel> <ItemsPanelTemplate> <WrapPanel Orientation="Horizontal" IsItemsHost="True" MaxWidth="170"/> </ItemsPanelTemplate> </ListBox.ItemsPanel> I have also discovered that this bug does not appear when no custom ItemsPanelTemplate exists. A: The binding problem comes from the default style for ListBoxItem. By default, when applying styles to elements, WPF looks up the default style and applies from it each property that is not specifically set in the custom style. Refer to this great blog post by Ian Griffiths for more details on this behavior. Back to our problem.
Here is the default style for ListBoxItem: <Style xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:s="clr-namespace:System;assembly=mscorlib" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" TargetType="{x:Type ListBoxItem}"> <Style.Resources> <ResourceDictionary/> </Style.Resources> <Setter Property="Panel.Background"> <Setter.Value> <SolidColorBrush> #00FFFFFF </SolidColorBrush> </Setter.Value> </Setter> <Setter Property="Control.HorizontalContentAlignment"> <Setter.Value> <Binding Path="HorizontalContentAlignment" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType=ItemsControl, AncestorLevel=1}"/> </Setter.Value> </Setter> <Setter Property="Control.VerticalContentAlignment"> <Setter.Value> <Binding Path="VerticalContentAlignment" RelativeSource="{RelativeSource Mode=FindAncestor, AncestorType=ItemsControl, AncestorLevel=1}"/> </Setter.Value> </Setter> <Setter Property="Control.Padding"> <Setter.Value> <Thickness> 2,0,0,0 </Thickness> </Setter.Value> </Setter> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ListBoxItem}"> ... </ControlTemplate> </Setter.Value> </Setter> </Style> Note that I have removed the ControlTemplate to make it compact (I used StyleSnooper to retrieve the style). You can see that there is a binding with a relative source set to an ancestor of type ItemsControl. So in your case, the ListBoxItems created during binding did not find their ItemsControl ancestor. Can you provide more info about what the ItemsSource for your ListBox is? P.S.: One way to remove the errors is to create new setters for HorizontalContentAlignment and VerticalContentAlignment in your custom Style. A: This worked for me. Put this in your Application.xaml file. <Application.Resources> <Style TargetType="ListBoxItem"> <Setter Property="HorizontalContentAlignment" Value="Left" /> <Setter Property="VerticalContentAlignment" Value="Center" /> </Style> </Application.Resources> from...
http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/42cd1554-de7a A: Setting OverridesDefaultStyle to True in your ItemContainerStyle will also fix these problems. <Style TargetType="ListBoxItem"> <Setter Property="OverridesDefaultStyle" Value="True"/> <!-- set the rest of your setters, including Template, here --> </Style> A: I just encountered the same type of error: System.Windows.Data Error: 4 : Cannot find source for binding with reference 'RelativeSource FindAncestor, AncestorType='System.Windows.Controls.ItemsControl', AncestorLevel='1''. BindingExpression:Path=HorizontalContentAlignment; DataItem=null; target element is 'ListBoxItem' (Name=''); target property is 'HorizontalContentAlignment' (type 'HorizontalAlignment') This happened while doing a binding like this: <ListBox ItemsSource="{Binding Path=MyListProperty}" /> To this property on my data context object: public IList<ListBoxItem> MyListProperty { get; set; } After some experimenting I discovered that the error was only triggered when the number of items exceeded the visible height of my ListBox (e.g. when vertical scrollbars appear). So I immediately thought about virtualization and tried this: <ListBox ItemsSource="{Binding Path=MyListProperty}" VirtualizingStackPanel.IsVirtualizing="False" /> This solved the problem for me. Although I would prefer to keep virtualization turned on, I did not spend any more time diving into it. My application is a bit on the complex side with multiple levels of grids, dock panels etc. and some asynch method calls. I was not able to reproduce the problem in a simpler application. A: According to the Data Templating Overview on MSDN, DataTemplates should be used as the ItemTemplate to define how the data is presented, while a Style would be used as the ItemContainerStyle to style just the generated container, such as ListBoxItem. However, it appears that you are trying to use the latter to do the job of the former.
I can't recreate your situation without much more code, but I suspect that doing databinding in the container style could be throwing a wrench in the assumed visual/logical tree. I also can't help but think that a custom layout of items based on the item's information calls for creating a custom Panel. It's probably better for the custom Panel to lay out the items than for the items to lay themselves out with a Rube Goldberg assortment of IValueConverters. A: If you want to completely replace the ListBoxItem template such that no selection is visible (perhaps you want the look of ItemsControl with the grouping/etc behaviour of ListBox) then you can use this style: <Style TargetType="ListBoxItem"> <Setter Property="Margin" Value="2" /> <Setter Property="FocusVisualStyle" Value="{x:Null}" /> <Setter Property="OverridesDefaultStyle" Value="True" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type ListBoxItem}"> <ContentPresenter Content="{TemplateBinding ContentControl.Content}" HorizontalAlignment="Stretch" VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" /> </ControlTemplate> </Setter.Value> </Setter> </Style> This template also excludes the standard Border wrapper.
If you need that, you can replace the template with this: <Border BorderThickness="{TemplateBinding Border.BorderThickness}" Padding="{TemplateBinding Control.Padding}" BorderBrush="{TemplateBinding Border.BorderBrush}" Background="{TemplateBinding Panel.Background}" SnapsToDevicePixels="True"> <ContentPresenter Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}" HorizontalAlignment="{TemplateBinding Control.HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding Control.VerticalContentAlignment}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" /> </Border> If you don't need all these TemplateBinding values then you can remove some for performance. A: Another workaround/solution that worked for me was to suppress these errors (actually, it seems more appropriate to call them warnings) by setting the data binding source switch level to critical in the constructor of the class or a top-level window - #if DEBUG System.Diagnostics.PresentationTraceSources.DataBindingSource.Switch.Level = System.Diagnostics.SourceLevels.Critical; #endif Ref.: How to suppress the System.Windows.Data Error warning message A: Simply creating a default style for the type "ComboBoxItem" doesn't work, because it is overwritten by the ComboBox's default "ItemContainerStyle". To really get rid of this, you need to change the default "ItemContainerStyle" for ComboBoxes, like this: <Style TargetType="ComboBox"> <Setter Property="ItemContainerStyle"> <Setter.Value> <Style TargetType="ComboBoxItem"> <Setter Property="HorizontalContentAlignment" Value="Left" /> <Setter Property="VerticalContentAlignment" Value="Center" /> </Style> </Setter.Value> </Setter> </Style> A: I started running into this problem, even though my ListBox had both a Style and an ItemContainerStyle set - and these named styles had already defined HorizontalContentAlignment.
I was using CheckBox controls to turn on/off live filtering on my ListBox and this seemed to be causing the items to pull from the default style instead of my assigned styles. Most errors would occur the first time the live filtering kicked in, but thereafter it would continue to throw 2 errors on each change. I find it interesting that exactly 2 records in my collection were empty and thus had nothing to display in the item. So this seems to have contributed. I plan to create default data to be displayed when a record is empty. Carter's suggestion worked for me. Adding a separate "default" style with no key and a TargetType="ListBoxItem" that defined the HorizontalContentAlignment property solved the problem. I didn't even need to set the OverridesDefaultStyle property for it.
{ "language": "en", "url": "https://stackoverflow.com/questions/160391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Programmatically Sort Start Menu I'm looking to sort the start menu alphabetically using C#. I've read about deleting the registry key HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\MenuOrder but when I tried it on my machine it didn't appear to do much of anything. Does anyone have any other ideas as to what must be done in order to sort the Start Menu? A: Apparently, you can't, and it's on purpose. From Raymond Chen's blog (good read for Windows developers): Because the power would be used for evil far more than it would be used for good. Full entry here.
{ "language": "en", "url": "https://stackoverflow.com/questions/160418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What exactly do you do when your team leader is incompetent? One of your team members has been appointed "technical lead" or "team lead" yet he is technically incompetent and lacks major leadership skills. By technically incompetent, I mean that the person doesn't know the difference between an abstract class and an interface, doesn't understand why coupling should be avoided, doesn't understand the concept of cohesion, provides solutions without taking some time to think, doesn't understand why we should favor composition over inheritance and doesn't get design patterns (except the singleton pattern). Plus that person has over 10 years of "experience" (yes, I did put that word in quotes because he's given a whole different dimension of what experience really is). I'm dealing with such a person at work. It's taking away the passion I have for this profession. How do you react? What do you do? A: Is he incompetent? Or are you just flushed with the arrogance of youth? He may be incompetent, but perhaps he's just not technically as strong as you. Technical ability is but one factor when considering who to make a team leader. Perhaps he questions your technical ability and professional judgement? Perhaps you should engage with him and discuss your different approaches with him. You may actually learn something new from him, or perhaps you can demonstrate to him your strengths, and he can grow to depend on your advice and opinions - maybe even putting in a good word for you to the higher echelons. I've had some awful team leaders and some great ones. And in each case, I thought I was superior to them, professionally and technically. Sometimes I was, but often I simply didn't appreciate their strengths - which was presumably why they were ahead of me in the pecking order. Ultimately, if the guy is a real turkey, you must grin and bear it or leave. 
Hanging him out to dry is unlikely to do you any favours - Good team leaders have to demonstrate sound political, personal & business skills and have to be able to work with a wide range of potentially awkward people. [For the record, one of my line managers was so bad that I left; knowing when you are beaten is a worthy skill too!] A: A team leader leads the team, not the project. * *They protect the team against interruptions. *They take care of the team's problems. You, as a professional, should know the technical stuff, and the team leader will trust you. Perhaps one of the developers should lead the project and take the lead on technical matters. A: In my experience, the members who demonstrate their worth by being engaged in all areas of a development project, who often help others with challenging tasks, and more importantly are willing to take initiative and do what needs to be done to complete a project get rewarded for their efforts. Whether the reward comes in the form of a title, or a promotion after a good review, it will happen. If you are that person your team leader already knows it, and may be intimidated by that; most likely his/her boss also knows. If you are getting favorable reviews and being rewarded at review time, then making your team leader look good now and then will only help your career. You will find that by helping projects to succeed, and taking strategic opportunities to demonstrate "the right way" to do things to your team leader, you will foster an ally, rather than an adversary. And if he/she truly doesn't know how to lead or inspire, and that is what you want to be doing, then do it. Lead your teammates, but do it in such a way that they respect you for your breadth of knowledge. Other managers will see this; you may be pleasantly surprised. If you want to be a leader, you should approach your team lead, and ask if you can take point on the next project.
He/She may be all too happy to allow you to relieve them of some of the burden. A: You have no choice but to continue doing the best job that you can and supplying the team with the best ideas that you can. Why? A failure of the team to succeed will be viewed by management-types as your failure too (unless they are extremely savvy). It is a crappy situation, but one that almost everyone has been in before. If the team lead continually disregards your advice when he/she is clearly wrong, compose a nice in-depth explanation of why you feel the task should be handled differently and submit that to all your team members. Try to move the team towards consensus building and away from the tyranny of the team lead. Only after that fails repeatedly should you consider escalating the issue. I wouldn't recommend escalating the issue unless: * *You are prepared for things to get ugly. I.e. people to get fired or reassigned, including you. For all we know your team lead is right and you're wrong ;) *You know the majority of the other well-respected team members agree with you. *You have told the team lead that you are willing to escalate the issue. Everyone deserves a chance to take corrective action, even your boss. If you haven't said as much to the team lead directly, then you shouldn't escalate the issue. Having done that, continue composing your objections to the team lead's technical decisions clearly and concisely, accompanied by justifications and examples of why you are right and they are wrong, and start CC'ing them to your boss and the team lead's boss. A: * *Do your own work to the best of your ability. *Don't ask advice from people who give bad advice. *Read "How to Win Friends and Influence People" and try to help him "get it". A: In software development, the primary qualification is the ability to withstand stress. Headaches will always be coming one way or another. Just look for the things that are good for you.
A twenty minute walk during lunchtime is currently my favorite part of work. A: Gishu - are you on my team? jk. I think "lacks major leadership skills" is a far worse crime for a team lead than "technically incompetent". A team lead can be quite effective if (s)he relies on the members of the team to provide some technical guidance. But if the team lead does not listen/learn, does not foster communication, and tries to ram constructs down everyone's throats (without understanding them), then there's a problem. A: Brian, This is your team leader. Stop screwing around and get back to work! A: Show competence yourself and don't ever let the project suffer because of a disagreement in your team. Show your boss that you or someone else might be a better choice for that role. Hang in there and try not to lose your passion. Stick to your ethics and do your work well. But if there's absolutely no change in sight, you might want to look for something else. A: I've learned this one the hard way - by essentially being a facilitator for him because I knew him for a long time. Never again! You've got three choices (well, really infinite ones but three that seem like possible positive routes to me): Document all the stuff he does wrong, incidents where he's shown his incompetence and how he's been an anchor to the team. Confront him with this information and explain to him that he can either gracefully ask to be reassigned or you are going to his boss. You won't earn any points with him, but do you care? If his boss goes to bat for him, be prepared to walk. I like this one because it allows him to save face. We don't tend to do a lot of this these days. Ask to be transferred to another group within the organization. You can be completely honest as to why. Be prepared to walk. If you are truly an asset to the company, you can leverage this for a new position outside this dillweed's control. Walk. 
This is what I ultimately ended up doing once it was clear that my PM wasn't going to do anything whatsoever to fix the problem. He was essentially as incompetent as my immediate supervisor. In the end, I'm glad they were incompetent because they forced my hand and I have found a GREAT job as a result. As I see it, you've got three choices: get rid of the guy, get out of his group, get out of the company. There's no reason a talented dev can't get a better job than s/he's in. There are too many of them out there just crying out for talented programmers. All the best! A: I've been there, mate. First try to do your best and stay put; the guy may fail and leave soon. BUT, if you feel like your sanity is in danger, go for another position. Cheers, Ali A: Sometimes you need to look beyond the immediate problem of his incompetence and look at the process that led to him being hired in the first place. If the hiring practices at your company allow an "experienced", yet incompetent, team leader you should consider the following: * *Your company's interviewing & selection processes are not up to scratch. *Maybe, just maybe, you do have a good selection process, but you've just gone with the cheapest. I consider this unlikely, since any company that takes its selection process seriously is probably not going to let a good hire go just for a few bucks more. *You are just unlucky and somehow he managed to slip through the cracks. This can happen if he was an internal hire and did not go through the same selection process as external hires do. You need to ask yourself if there is anything you can do to improve your company's hiring practices. If you think you can, then give it a go. However, if your company's culture is such that this is clearly impossible, then I suggest you dust off your resume. I have encountered exactly your problem. In my case it was obvious from day one that this guy was not up to the challenge (and I'm 99.999% sure he doesn't read StackOverflow!).
I explained the situation to our CEO and he initially gave me some hope that things would change. However, I discovered that he was just paying lip service to my complaints and consequently nothing happened. Two years later our team leader was finally "demoted", which forced his resignation. Basically he was fired, but it took two years for our CEO to act on a problem that should never have festered for that long. Good luck! A: My advice on these situations is always "If you don't like the situation you are in, wait it out for 6 months"; things usually change that fast. I worked for a company for less than 5 years and had 5 different managers. No one likes to suck at their job; if he sucks he will probably regret his decision, change himself, or move on. A: ah, memories... I once worked with a fellow who started every pontification with the phrase "Well, in my ten years of experience..." he didn't really have ten years of accumulated experience, he had one year of experience ten times! if the lead is incompetent, respect the uniform if not the man, and do exactly as he decides - and document everything that he tells you to do, when, and why, as well as the objections that you tactfully raised, so that when he falls he doesn't fall on you. This should balance your duty to the company with your duty to self-preservation. in the meantime, look for a better job! Chances are that if whoever promoted this person did not realize that he/she is incompetent, things won't change for a long time... A: MusiGenesis and Jason Stevenson are right on. Let me add one step past what they said: Make your boss a success.
It may be that your boss is technically incompetent (we don't know, he's not here defending himself), but he might have the skills that smooth things out with the rest of the company, or he may have some skill like marketing that most developers don't, or a solid relationship with a key stakeholder that makes your life easier [not that you necessarily would know it, because those things are largely invisible to most developers]. The key is to understand he has responsibilities that are different from yours, and his burdens are bigger. He could use a helping hand, and an ally. Give it a try. Treat him with respect, and remember people can be worthy of respect even if they are wrong on things. It all comes down to being human and finite. A: Just talk to your boss. Be objective; show with concrete examples when the tech lead was wrong or incompetent. The worst thing you can do is to fight with the tech lead. Just escalate the problem. If your boss is a reasonable person he will find a way to help you, otherwise you're out of luck :) If the tech lead's decisions affect your work you should not be silent. Otherwise you will be responsible for the problems. I had such problems with leads and subordinates. Trying to negotiate with the person himself rarely gives the desired result. Make it a problem of your boss/staff manager. The key to success is to be objective and persuasive. EDIT: +1 for Tobias' answer. Prove that you're not a whining loser and maybe you will become the new team leader. You should always do your job well, no matter how stupid your tech lead is. Believe me, managers appreciate responsible employees. Don't sabotage the project; take active counter measures (and don't forget to check the local jobs list). Of course there might be a rare situation when your boss is a close friend of your tech lead or a lot of managers will support him. There is not much to be said in this situation; be strong or leave.
A: If I were managing both of you, some of your complaints would make me think the problem wasn't 100% his fault. Does he really provide solutions "without taking some time to think", or is he presenting you with solutions derived from his experience, but you think the solutions are a bad idea? Should you favor composition over inheritance? I personally agree, but I would never in a million years accuse someone who favors inheritance of being "technically incompetent". Does he not "get" design patterns, or does he merely not use the same terminology? Accusing a co-worker of technical incompetence is a pretty serious charge. You need to make a case that doesn't rely upon legitimate debates in software. A: you could just quit? A: Assuming you have competent management (which may not be a possibility considering their choice of lead), the fact will invariably become apparent that their choice was a bad one. As has been stated before, he'll eventually dig his own grave, but make sure not to let the project suffer because of it. If management starts becoming concerned by the issues, offer to step in and help fix the problems. This will demonstrate not only that you're capable of doing his job, but of doing it better. It's also your responsibility as a developer to objectively tell your management where technical issues (be they with developers or otherwise) exist. If you feel that he is putting the projects you work on at risk, you have to step in and voice that opinion. Also, one of the most common mistakes is to get into impassioned arguments in a situation like this. Don't let your emotions get the best of you. Remember, no matter how poor a developer he might be, there's still a possibility that he's right about something. A: We had a guy on our last team who was very up on all the latest OO jargon, and wanted to try out all the latest trendy design patterns. He did -- and the app took about twice as long as it should have.
This company just needed a one-off app, and needed it done quickly. So, although the code was beautiful and elegant, the project failed. Ignorance does suck, especially in a lead. But, I'd be careful before judging someone on their knowledge of the latest trendy technical jargon. Sometimes simple ability to get the job done, or other human factors, can be important too. A: Consider leaving or moving to a different department, if things don't change soon. It will be hard to succeed at a company or under a boss that promotes people like that to team lead. A: Sit back and let him hang himself. There is no reason for you to do anything. A: I would allow him to self-destruct. Let your ideas be known but don't unreasonably push them if you don't have to. The idea is to let him hang himself by not following your advice that turns out to be true. If you are tactful, respectful, and not insubordinate, his boss should notice. If that doesn't work you can always find another job. A: The same happened to me... The guy had over 10 years of "experience". After a couple of months the veil came off of my eyes: his "experience" was, more or less, akin to: * *recruit a lot of interns *assign difficult (or impossible) tasks, knowing nothing about them *check after some months which ones were progressing *report to management the successful ones, taking credit and boasting "great tech leader qualities" *repeat My solution? I endured, then moved on at the first opportunity! Edit: After all, he can be described more as a "reaper" than a "leader" A: I have a team leader (in a 2 man team, of him and me) that claims 7 years of .NET/C# experience (same as me), and another number of years before that with other languages (which I do not have).
I don't know under what rock this guy has been sleeping, but when you see code like:

public byte[] ReadBytes(string filename)
{
    FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read);
    BinaryReader br = new BinaryReader(fs);
    FileInfo fi = new FileInfo(filename);
    byte[] buffer = new byte[fi.Length];
    for (int i = 0; i < buffer.Length; i++)
    {
        // optimize this
        buffer[i] = br.ReadByte();
    }
    return buffer;
}

And then he still wastes time on writing unit tests on this trivial stuff (we have an impossible deadline looming already) that any 1-2 year experienced person should know. Besides not knowing how to properly use a FileStream (what's up with the BinaryReader? ;p ), he didn't realize there was File.ReadAllBytes. Anyways, when I saw this code he 'contributed', I told him to use the above-mentioned method. I even sent him the MSDN link via email, which he did not get due to Outlook being closed. I then went outside for a smoke for 10 odd minutes to pick up my jaw from the floor. When I came back, he still could not find the method. He was trying something like 'new File().Rea...', saying the method does not exist. I should have probably kept my mouth shut and waited for him to check in his code, but that could be weeks from now... I have also addressed various issues about him to the manager, and we have had group discussions to resolve other issues. I still work as hard as I can, regardless. It does get frustrating knowing you are the only person in a 2 man team contributing any code (I don't think I have seen him write more than 200 odd lines of code by hand), putting in 60+ hours a week. My current situation. :| Note: The code isn't exactly as I remember seeing it; it was longer, and perhaps had more checking for an existing file and/or closing the streams. A: If your team lead is incompetent it should not frustrate you/your work unless it affects you directly. As simple as that. You know you are better than him. So do your work and excel.
Show your brilliance in making your project a success. Why spend time worrying about the team lead? If he/she has 10 years of experience and doesn't know things he is supposed to know, it's bad for him and not you. And yes, if he is making certain wrong technical decisions, for example in proposing a design or something, you can always put in your suggestions and gracefully convince everybody that your suggestion is better. But do it in a nice way. Speaking in a rude way to someone with 10 years' experience doesn't do you any favors, because the ego factor comes in. Make him realize you are good at your trade (in a nice way) and it will work wonders for you; he might make sure that you are projected well in front of the top boss. A: Manage your manager, especially the micro managers.
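As a footnote to the ReadBytes example a few answers up: the cost difference between a byte-at-a-time loop and a single bulk read is easy to demonstrate in any language. A minimal sketch in Python (not the poster's C#; the file and sizes here are made up purely for illustration) showing that the bulk read returns the same bytes with far less code:

```python
import os
import tempfile

def read_bytes_slow(path):
    """Byte-at-a-time read, analogous to the ReadByte() loop above."""
    buffer = bytearray()
    with open(path, "rb") as f:
        while True:
            b = f.read(1)  # one byte per call, like BinaryReader.ReadByte()
            if not b:
                break
            buffer += b
    return bytes(buffer)

def read_bytes_fast(path):
    """Single bulk read, analogous to File.ReadAllBytes()."""
    with open(path, "rb") as f:
        return f.read()

if __name__ == "__main__":
    # Write a small scratch file and show both reads agree.
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(b"hello world" * 100)
        assert read_bytes_slow(path) == read_bytes_fast(path)
        print("both reads match")
    finally:
        os.remove(path)
```

The slow version also makes one I/O call per byte, which is where the real-world time goes.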
{ "language": "en", "url": "https://stackoverflow.com/questions/160433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Updating two tables in a many-to-one relationship through a view in PostgreSQL I have two tables: foos and bars, and there is a many-to-one relationship between them: each foo can have many bars. I also have a view foobars, which joins these two tables (its query is like select foo.*, bar.id from foos, bars where bar.foo_id=foo.id). EDIT: You would not be wrong if you said that there's a many-to-many relationship between foos and bars. A bar, however, is just a tag (in fact, it is a size), and consists just of its name. The table bars has the same role as a link table would have. I have a rule on inserting to foobars such that the “foo” part is inserted to foos as a new row, and the “bar” part, which consists of a couple of bar-id's separated by commas, is split, and for each such part a link between it and the appropriate foo is created (I use a procedure to do that). This works great for inserts. I have a problem, however, when it comes to updating the whole thing. The foo part of the rule is easy. However, I don't know how to deal with the multiple-bars part. When I try to do something like DELETE FROM bars WHERE foo_id=new.foo_id in the rule, I end up deleting everything from the table bars. What am I doing wrong? Is there a way of achieving what I need? Finally, is my approach to the whole thing sensible? (I do this overcomplicated thing with the view because the data I get is in the form of “foo and all its bars”, but the user must see just foobars.) A: Rysiek, if I understood correctly, you have a text column in the foos table that is parsed to extract foreign keys pointing to the bars table. This approach to building relations may be justified in some cases; however, almost every guide/tutorial to database programming would discourage doing so. Why not use a standard foreign key in bars that would point to a foo in foos? Unless there is a requirement for bars to be assigned to more than one foo. If so, this identifies your relation as many-to-many rather than one-to-many.
In either situation, using a standard foreign-key-based solution seems much more natural for the database. Example db schema for a one-to-many relation:

CREATE TABLE foos (
    id SERIAL PRIMARY KEY,
    ....
);
CREATE TABLE bars (
    id SERIAL PRIMARY KEY,
    foo_id INT REFERENCES foos (id) ON DELETE CASCADE,
    ...
);

And the same for a many-to-many relation:

CREATE TABLE foos (
    id SERIAL PRIMARY KEY,
    ....
);
CREATE TABLE bars (
    id SERIAL PRIMARY KEY,
    ...
);
CREATE TABLE foostobars (
    foo_id INT REFERENCES foos (id) ON DELETE CASCADE,
    bar_id INT REFERENCES bars (id) ON DELETE CASCADE
);

I would also recommend using INNER JOIN instead of table multiplication (SELECT FROM foos, bars).

CREATE VIEW foobars AS
SELECT foos.id AS foo_id, foos.something, bars.id AS bar_id, bars.somethingelse
FROM foos
INNER JOIN bars ON bars.foo_id = foos.id;

The same view for the many-to-many case uses two INNER JOINs:

CREATE VIEW foobars AS
SELECT foos.id AS foo_id, foos.something, bars.id AS bar_id, bars.somethingelse
FROM foos
INNER JOIN foostobars AS ftb ON ftb.foo_id = foos.id
INNER JOIN bars ON bars.id = ftb.bar_id;

A: I don't think new.foo_id is correct in the context of a delete. Shouldn't it be DELETE FROM bars WHERE foo_id=old.foo_id? A: This is how I have actually dealt with it: when I get a unique constraint violation, instead of updating I simply delete the foo and let the cascade take care of the bars. Then I simply try to insert once again. I have to use more than one SQL statement to do it, but it seems to work. A: The deletion problem is that you are deleting on a predicate that is not based on the table you are deleting from. You need to delete based on a join predicate. This would look something like (SQL Server syntax; PostgreSQL expresses the same thing with DELETE ... USING):

delete b
from foo f
join foobar fb on f.FooID = fb.FooID
join bar b on b.BarId = fb.BarID
where f.FooID = 123

This lets you have a table of Foos, a table of Bars, and a join table that records which Bars the Foo has. You don't need to compose lists and split them apart.
This is a bad thing because the query optimiser can't use an index to identify the relevant records - in fact this violates the 1NF 'No repeating groups' rule. The correct schema would look something like:

Create table Foo (
    FooID int
    ,[Other Foo attributes]
)
Create table Bar (
    BarID int
    ,[Other Bar attributes]
)
Create table FooBar (
    FooID int
    ,BarID int
)

With appropriate indexes, the M:M relationship can be stored in FooBar and the DBMS can store and manipulate this efficiently in its native data structures.
{ "language": "en", "url": "https://stackoverflow.com/questions/160453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I create a Jet ODBC link to a SQL Server view with periods in the field names? I need to create an ODBC link from an Access 2003 (Jet) database to a SQL Server hosted view which contains aliased field names containing periods, such as: Seq.Group In the SQL source behind the view, the field names are encased in square brackets... SELECT Table._Group AS [Seq.Group] ...so SQL Server doesn't complain about creating the view, but when I try to create an ODBC link to it from the Jet DB (either programmatically or via the Access 2003 UI) I receive the error message: 'Seq.Group' is not a valid name. Make sure that it does not include invalid characters or punctuation and that it is not too long. Unfortunately, I cannot modify the structure of the view because it's part of another product, so I am stuck with the field names the way that they are. I could add my own view with punctuation-free field names, but I'd really rather not modify the SQL Server at all, because then that becomes another point of maintenance every time there's an upgrade, hotfix, etc. Does anyone know a better workaround? A: Although I didn't technically end up escaping the dot, your suggestion actually did make me realize another alternative. While wondering how I would "pass" the escape code to the SQL server, it dawned on me: Why not use a "SQL Pass-Through Query" instead of an ODBC linked table? Since I only need read access to the SQL Server data, it works fine! Thanks! A: Just guessing here: did you try escaping the dot? Something like "[Seq\.Group]"? A: Another proposal would be to add a new view on your SQL server, not modifying the existing one. Even if your initial view is part of a "solution", nothing forbids you from adding new views: SELECT Table._Group AS [Seq_Group]
{ "language": "en", "url": "https://stackoverflow.com/questions/160467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to check if there is any read/write activity for a specific harddrive with C#? I'm curious how to assess if there is any read or write activity for a specific harddrive at a given moment with .NET / C#. Second, it would be interesting to assess the actual speed of access. Any ideas? A: Look into the Windows Management Instrumentation (WMI) APIs, which are supported in the .NET Framework via the System.Management and System.Management.Instrumentation namespaces.
{ "language": "en", "url": "https://stackoverflow.com/questions/160473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP.NET Deployment For the last couple of months I have been writing an intranet site for my college with lots of reports for staff about students. It is all going well, but once a week (ish) I am having to go to IT and get them to log into IIS, stop the application pool, clear out the website folder, clear out the temporary ASP.NET cache, and replace the website with the new one. Not a big job, but I would prefer to do it myself as and when I want. I don't know much about ASP.NET deployment and IIS. Is there a way for me to update the website myself (keeping the system live if possible)? Last time I looked at this I think I found the files were locked within the website directory. What do the different publish options achieve? A: Actually, I work with 2 forms: 1) Publish from VS to a local directory and then upload (via ftp) to the server. I'm doing it this way to use the FileZilla ftp client and not transfer the web.config file. 2) Precompile the website (from ccnet), zip it, transfer it to the server, connect at the server (Remote Desktop) and execute a .bat file that puts the application in offline mode (App_Offline.htm), backs it up, and unzips the new version. We are planning to create a second admin website that avoids having to manually connect and execute the .bat. A: Why are you stopping the app pool and clearing the temp files? It works just fine if you overwrite the files in the website itself. The only thing I do is go into the bin directory and clear out all the randomly named .dll files. Visual Studio also has its own deployment option, which you can just give a UNC path and it will delete the old files and copy up the new version of the site. It even throws up an app_offline.htm for you. A: You could activate the FrontPage Extensions for this web in IIS. They handle the whole deployment and updating. We have them activated for every website for easy maintenance. Visual Studio can connect and deploy directly to FrontPage-Extension-enabled websites, so there should be no problem.
A: You can just overwrite the files that exist if you are going that route. Personally I have CruiseControl.NET for my deployment and I love it. I have subversion set up so our development and staging systems are automatically updated with each commit to svn. Then for live I tag whatever I need to and change the config to pull from that new tag for the Live site and force a build on it. It works really well; you should check it out. It also compiles your .cs files and doesn't deploy if your code doesn't build. I believe Jeff is using it with stackoverflow. A: You can use a program like Beyond Compare to connect to your server's FTP (which you can point to the website folder) and upload the new files. You can compare your files, see a line by line comparison, and upload only changed files. Works pretty well for me. A: To install an ASP.NET application on our IIS server (without needing Visual Studio), we check out the project code from version control, then run msbuild.exe on the project file. You can find msbuild.exe in the .NET framework folder, eg: C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ The only thing we had to do was copy across the target definitions from one of the development machines, which are usually stored in: C:\Program Files\MSBuild\ It's safe to leave the .csproj/.vbproj files in the web folder as IIS won't serve them up, since there will be no MIME type defined for them on that machine, but you can delete them if you like. This setup is easily scriptable too, which is a bonus. A: I always update my website during scheduled maintenance, usually outside work hours. Evenings or weekends are the best time to shut down the services for a short while and update the website. Schedule that maintenance so everyone knows there will be downtime.
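The offline-backup-swap routine described in the .bat approach above can be sketched roughly like this (Python purely for illustration; the folder layout and file names other than app_offline.htm are hypothetical):

```python
import shutil
from pathlib import Path

def deploy(site_dir: Path, new_version_dir: Path, backup_dir: Path):
    """Take the site offline, back it up, then swap in the new build."""
    offline = site_dir / "app_offline.htm"
    offline.write_text("<html><body>Down for maintenance.</body></html>")
    try:
        # Back up the current site (excluding the offline marker itself).
        shutil.copytree(site_dir, backup_dir,
                        ignore=shutil.ignore_patterns("app_offline.htm"))
        # Copy the new build over the old files.
        shutil.copytree(new_version_dir, site_dir, dirs_exist_ok=True)
    finally:
        # Bring the site back online.
        offline.unlink(missing_ok=True)
```

A real IIS deployment would also need to deal with locked assemblies and the Temporary ASP.NET Files cache, which is what stopping the app pool buys you; this only captures the file-shuffling part.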
{ "language": "en", "url": "https://stackoverflow.com/questions/160475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Java development in a Perl shop: How to select the right tool? My group is a Perl shop in an organization that is very heterogeneous. Although we support the odd Java, PHP, or Python installation, we're using Perl for nearly all of our web applications and systems/data marshalling tasks. All of our boxes are Linux, although we interface with IIS systems as well. We're coming under some pressure from management to move to Java for at least some of our work. I'd like to use the right tool for the right task, but for our work, and due to our collective experience, Perl seems like the right tool for everything. My initial thought is to use Java for applications that are open to the organization at large, since there is more Java mindshare out there than Perl. Does anyone have similar experience? If I'm given the choice, what kinds of tasks should we start applying Java to? What kinds of tasks should we insist on sticking with Perl for? Does it make any difference? Why or why not? A: Is there a specific technical reason for switching to Java? Is there something you can do in Java but not Perl? Is there a performance difference? Is some other group/person all about Java and doesn't want to learn Perl? My experience has been that you should stick with what you know. Your group knows Perl really well. You've had your share of teeth gnashing and some of you have probably attained uber Perl guru status. So I'd say stick with what you know unless you can think of some good reasons. A: If your team is going to be supporting the applications, then stick with what you know. If another team is supporting your apps then you may need to consider Java, as it definitely has greater penetration in today's corporate world. Management are told that Java is the only way, or "real enterprises use Java", and therefore they think they have to use Java.
I know where I work they think that Java is the only language out there, and things like C# are just for 'tactical' projects and not 'strategic' - whatever that means. You should use the best tool for the job. A: I would suggest that if the reason you're being pulled away from Perl is performance issues, then I would push to just rewrite in C, as Perl XS modules, the parts of your application that would benefit from it most, rather than move wholesale to a new development environment. I work in a mostly-Perl environment, but key parts of our system were rewritten in C and C++ to satisfy performance requirements. A: I've been through this where I work. It can be a very painful process. If you have a good Perl team, then I'd recommend resisting. There is very little that decent Java can do that decent Perl can't. The only valid reason I can see for switching is if you are having difficulty hiring decent Perl coders. At the end of the day, if management are pushing it, then there isn't a lot you can do, except try to find out why they are pushing it. The amount of acceptance/push back from your team should depend on the reasons. If you do move to Java, I suggest you make sure you hire someone who knows what they are doing in Java (not just knows the language, but knows the frameworks and application servers). It's not magic, and it's just as easy to make crap applications in Java as any other language; it will just take you longer. A: Management :-( Give them a cost/benefit analysis of switching to Java. Explain that the WHOLE development team feels this way. A: The benefits to Java aren't in the language per se but in the supporting infrastructure around it. The class libraries are one thing, but then you look at the application servers, messaging infrastructure, open source libraries and frameworks; the list goes on. So, pick an area and do some research. Have a look at SourceForge, Apache, Codehaus, java.net and Google.
Find the libraries and frameworks that suit the problem and see if they are going to reduce your development costs. Have a look at Spring, Hibernate and Struts2. Have a look at the IDE options and see if they will make you more productive (Eclipse, NetBeans and IntelliJ IDEA are the frontrunners). Listen to podcasts like the Java Posse to get ideas and read sites like Java World, InfoQ and The Server Side. Sooner or later you will come up with something that you can see is going to save you time and money, and when that happens dip your toe in and give it a go. If it doesn't go as planned, figure out why and do better next time. If you have a candidate and are bewildered by the array of library, product and framework choices, there are plenty of experienced Java developers on Stack Overflow who are willing to provide guidance. I hope that this helps. A: I ran into the same scenario with my former company. A little different in the fact that we wanted to go Java and move away from Perl. Our Perl was having performance issues and not scaling very well. Also our user management was a mess. We moved to Java and used some of the Single Sign On features and we were quite pleased with the results. This may have been more of a case of moving out of legacy code that was written very long ago. A: If you'd be allowed to move to something Java-compatible, yet with a syntax that's at least closer to Perl than Java is, check out Groovy. Groovy is a dynamic language that compiles to Java byte-code. You can code in either Java-ish classes and functions, or Perl/Ruby-style dynamic "scripts"... or both.
{ "language": "en", "url": "https://stackoverflow.com/questions/160482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ASP.NET WSAT (Website Administration Tool) and Custom Membership Providers I'm building an ASP.NET MVC application that will have custom role and membership providers. I have been looking into administration tools to save us some time, and WSAT has crossed my path. It looks good at a glance; it's all open source and very simple, and if it doesn't work I can fix it myself. First question: have any of you used WSAT for a production system in the past? Is it worthwhile, should I consider it, and what reasons are there for not using it? Second question: does anyone know how well WSAT works with custom providers? Thanks for your feedback. A: MVC WSAT appears to be the tool of choice for providing Web site administration functionality to ASP.NET MVC web sites. Also, although the ASPNetWSAT tool is no longer available on Codeplex, it is still available in places. See this thread: ASP.Net WSAT (Web Site Administration) Starter Kit. What happened? and check the last post! To address your specific question, the MyWSAT tool is fairly well written and robust, and has been used by quite a few people within their own websites (many people were quite upset when it was removed from Codeplex!). Short of writing your own website administration tool, or paying for a commercial component, it's about the best out there. The MVC WSAT tool is a rewrite of the original MyWSAT tool, made specifically for ASP.NET MVC-developed websites, and MVC WSAT should be used instead of MyWSAT for MVC sites. EDIT: Since approximately April of 2010, the MyWSAT project has been made available again by the original author on Codeplex! Get it here: http://mywsat.codeplex.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/160488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Generic method call

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace GenericCount
{
    class Program
    {
        static int Count1<T>(T a) where T : IEnumerable<T>
        {
            return a.Count();
        }

        static void Main(string[] args)
        {
            List<string> mystring = new List<string>() { "rob", "tx" };
            int count = Count1<List<string>>(mystring); // ******
            Console.WriteLine(count.ToString());
        }
    }
}

What do I have to change in the above indicated line of code to make it work? I am just trying to pass either a List or an array in order to get the count. A: You want this:

static int Count1<T>(IEnumerable<T> a)
{
    return a.Count();
}

A: Your generic constraint is wrong. You cannot enforce it to implement IEnumerable<T>. A: You have "where T : IEnumerable<T>", which is not what you want. Change it to e.g. "IEnumerable<string>" and it will compile. In this case, "T" is List<string>, which is an IEnumerable<string>. A: Your Count1 method is expecting a type that implements IEnumerable<T>, and you have set T to be List<string>, which means the method will expect an IEnumerable<List<string>>, which is not what you are passing in. Instead you should restrict the parameter type to IEnumerable<T> and you can leave T unconstrained.

namespace GenericCount
{
    class Program
    {
        static int Count1<T>(IEnumerable<T> a)
        {
            return a.Count();
        }

        static void Main(string[] args)
        {
            List<string> mystring = new List<string>() { "rob", "tx" };
            int count = Count1(mystring);
            Console.WriteLine(count.ToString());
        }
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/160494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to remove TFS source control bindings for a solution from the command line Is there a command-line approach for having VS 2008 remove TFS source control bindings from a solution file and the constituent project files? I need something that I can run from a batch file so that I don't have to open the solution and click the 2 prompts to permanently remove the bindings. Edit: After deleting the *.vspscc and *.vssscc files, the project and solution files still have references to TFS source control. So when the solution is opened, you are prompted to remove the solution from source control. This updates the solution and project files to remove the bindings, and that is what I want to automate. A: A colleague wrote this NAnt task to do it http://www.atalasoft.com/cs/blogs/jake/archive/2008/05/21/2custom-nant-task-for-removing-tfs-bindings.aspx A: Try deleting the *.vspscc file. A: I would write a simple C# console app that would iterate through a directory, load any sln or proj files, and strip out the source control configuration. For TFS you should just need to edit the sln file, as I don't think I've seen any source control info stored in the actual projects. The structure of the sln file is very simple to understand, I believe you just need to find the appropriate GlobalSection. Open the sln file in NotePad to find it.
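The "simple console app" idea from the last answer is easy to sketch in script form. Here is an illustrative Python version (not VS tooling): it strips the TeamFoundationVersionControl GlobalSection from .sln files, removes the `<Scc...>` properties from project files, and deletes the *.vspscc/*.vssscc binding files. The exact patterns are assumptions based on typical TFS-bound solutions, and it assumes UTF-8 files, so run it against a copy of your solution first:

```python
import os
import re

# GlobalSection that TFS adds to the .sln (assumed standard name)
SCC_SECTION = re.compile(
    r"^\s*GlobalSection\(TeamFoundationVersionControl\).*?EndGlobalSection\s*?\n",
    re.DOTALL | re.MULTILINE,
)
# <SccProjectName>, <SccLocalPath>, <SccAuxPath>, <SccProvider> in project files
SCC_ELEMENT = re.compile(r"^\s*<Scc\w+>.*?</Scc\w+>\s*?\n", re.MULTILINE)

def strip_bindings(root):
    """Walk a solution folder and remove TFS source control bindings in place."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith((".vspscc", ".vssscc")):
                os.remove(path)  # per-project/per-solution binding files
            elif name.endswith((".sln", ".csproj", ".vbproj")):
                # utf-8-sig tolerates the BOM Visual Studio usually writes
                with open(path, encoding="utf-8-sig") as f:
                    text = f.read()
                cleaned = SCC_SECTION.sub("", SCC_ELEMENT.sub("", text))
                if cleaned != text:
                    with open(path, "w", encoding="utf-8") as f:
                        f.write(cleaned)
```

Called from a batch file as `python strip_bindings.py c:\path\to\solution`, the next open of the solution should no longer prompt about removing source control.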
{ "language": "en", "url": "https://stackoverflow.com/questions/160495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Removing .svn folders from project for deployment I'm using subversion (TortoiseSVN) and I want to remove the .svn folders from my project for deployment, is there an automated way of doing this using subversion or do I have to create a custom script for this? A: But if you don't want to use svn export (for whatever reason)... find /path/to/project/root -name '.svn' -type d -exec rm -rf '{}' \; A: Use svn export <url-to-repo> <dest-path> It gets just the source, nothing else. Look in svn export (in Version Control with Subversion) for more information. A: On a computer: rsync -avz --exclude=".svn" /yourprojectwithsvninside/ /yourprojectwithoutsvninside/ From the repository: svn export http://yourserver/svn/yourproject/ ./yourproject/ A: TortoiseSVN has an export function. This will create the entire SVN tree elsewhere without the .svn folders. Also, a lot of FTP clients have filtering, which you can add .svn to just in case you forget one day. A: No need for a script. As suggested, use the Export command: * *Right click on the top level of your working copy. *Open the TortoiseSVN sub-menu *Select Export *Follow on screen dialogs. A: public static: yes FileZilla has filename filtering. Look under View -> Filename Filters. I checked in v3.1.1 I think most FTP clients have it now. A: Windows Vista / 7: turn on view hidden folders and files. In the search box (top right in a Windows Explorer window), write .svn. All .svn folders will show at top, delete them all and turn hide files back on. A: Do svn export <url> to export a clean copy without .svn folders. A: Use the export feature.
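For completeness, the same cleanup as the `find ... -exec rm -rf` one-liner above can be done with a small Python sketch (handy on Windows, where find/rm aren't available); it deletes every .svn directory under a root while leaving everything else in place:

```python
import os
import shutil

def remove_svn_dirs(root):
    """Delete every .svn directory under root, like the find one-liner above."""
    removed = []
    # topdown=True lets us prune .svn from dirnames, so os.walk never
    # descends into a directory we are about to delete
    for dirpath, dirnames, _filenames in os.walk(root, topdown=True):
        if ".svn" in dirnames:
            target = os.path.join(dirpath, ".svn")
            shutil.rmtree(target)
            dirnames.remove(".svn")
            removed.append(target)
    return removed
```

As with the shell version, run this on a copy intended for deployment, not on your working copy, or you will destroy its Subversion metadata.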
{ "language": "en", "url": "https://stackoverflow.com/questions/160497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I use Google Analytics in ASP.NET? I have a web page in which people go to register for my site. I use server side validation to authenticate them. I need to call some Javascript to tell GA that the user was or was not registered. Later I need stats on success in registration. How can I call the GA function from my server side C# code? A: A project I have released under open source allows for easy integration with Google Analytics from .net native code to fire page views, events, etc. through code. It's called GaDotNet and can be found here: http://www.diaryofaninja.com/projects/details/ga-dot-net A: Rich B is correct, google analytics is triggered by client-side javascript. I saw the comment about Flash demonstrations, but bear in mind that Flash executes on the client. See the Flash example: you will need to emit some javascript to the client on a successful registration that simulates the goal page (like in the Flash example). A: I don't think Google Analytics is set up to do what you are trying to do. You should just be pasting the code into your master page. From there, you should be able to get a good idea of how many people have registered based on how many visited the landing page after they register. A: Paste the code Google gives you into your footer or template that is displayed on every page or each of the individual templates if you don't have a footer. Then you can set up "conversion goals" on the pages where users end up when they are successful or not successful. If you just want to track how many, it would probably be easier to just store it in your own database. BTW, Google Analytics doesn't work like you are thinking. It's all based on hits to pages and/or get/post parameters. A: I don't know about Google Analytics, but I have used "beacons". The info at this URL looks promising: http://code.google.com/apis/analytics/docs/gaJSApiEcommerce.html
{ "language": "en", "url": "https://stackoverflow.com/questions/160509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Partial Classes in C# Are there any good uses of Partial Classes outside the webforms/winforms generated code scenarios? Or is this feature basically to support that? A: Code generation was the driving force behind partial classes. The need comes from having a code-generated class that is constantly changing, while allowing developers to supply custom code as part of the class that will not be overridden every time changes are made that force the class to be regenerated. Take WinForms or Typed-DataSets for example (or any designer for that matter). Every time you make a change to the designer it serializes the corresponding code to a file. Let's say you need to provide a few additional methods that the generator doesn't know anything about. If you added it to the generated file your changes would be lost the next time it was generated. A project that I'm currently working on uses code-generation for all the DAL, BLL, and business entities. However, the generator only gets us 75% of the information. The remaining portion has to be hand coded (custom business logic for instance). I can assume that every BLL class has a SelectAll method, so that's easy to generate. However, my Customer BLL also needs to have a SelectAllByLocation method. I can't put this in my generator because it's not generic to all BLL classes. Therefore I generate all of my classes as partial classes, and then in a separate file I define my custom methods. Now down the road when my structure changes, or I need to regenerate my BLL for some reason, my custom code won't get wiped out. A: I use partial classes as a means of separating out the different sub elements of custom controls that I write. Also, when used with entity creation software, it allows products like LLBLGen to create generated versions of classes, as well as a custom, user-edited version, that won't get replaced if the entities need to be regenerated. A: I often use partial classes to give each nested class its own file.
There have been some architectures I've worked on where most of the implementation was only required by one class and so we nested those classes in that one class. It made sense to keep the files easier to maintain by using the partial class ability and splitting each one into its own file. We've also used them for grouping stock overrides or the hiding of a stock set of properties. Things like that. It's a handy way of mixing in a stock change (just copy the file and change the partial class name to the target class - as long as the target class is made partial too, of course). A: Another possible use for partial classes would be to take advantage of partial methods to make methods selectively disappear using conditional compilation - this would be great for debug-mode diagnostic code or specialized unit testing scenarios. You can declare a partial method kind of like an abstract method, then in the other partial class, when you type the keyword "partial" you can take advantage of the Intellisense to create the implementation of that method. If you surround one part with conditional build statements, then you can easily cut off the debug-only or testing code. In the example below, in DEBUG mode, the LogSomethingDebugOnly method is called, but in the release build, it's like the method doesn't exist at all - a good way to keep diagnostic code away from the production code without a bunch of branching or multiple conditional compilation blocks. // Main Part public partial class Class1 { private partial void LogSomethingDebugOnly(); public void SomeMethod() { LogSomethingDebugOnly(); // do the real work } } // Debug Part - probably in a different file public partial class Class1 { #if DEBUG private partial void LogSomethingDebugOnly() { // Do the logging or diagnostic work } #endif } A: LINQ to SQL makes good use of partial classes to extend designer generated code. I think you will typically find this pattern of partial classes being used by designer-created code. 
A: I find partial classes to be extremely helpful. Usually they are used to be able to extend autogenerated classes. I used them in one project with heavy unit tests. My UT classes had complex dependencies and it was not very practical to separate code across multiple classes. Of course it is better to use inheritance/composition but in some cases partial classes can be really helpful. A: As mentioned earlier, I too think this is a code smell. If a class is so big that it needs to be split into more files, it means that it is breaking the single responsibility principle and doing too many things. The large class could be broken down into smaller classes that cooperate together. If you have to use partial classes or regions to organize code, consider if they should be in their own classes. It increases readability and you'd get more code reuse. A: Generally, I consider it a code smell. If your class is that complicated then it can probably be broken up into smaller reusable components. Or it means that there's no inheritance hierarchy where there should be one. For code generation scenarios it's good but I think code generation is another code smell. A: Maybe it's too late, but please let me add my 2 cents too: * When working on large projects, spreading a class over separate files allows multiple programmers to work on it simultaneously. * You can easily write your code (for extended functionality) for a VS.NET generated class. This will allow you to write the code of your own need without messing with the system-generated code. A: It is in part to support scenarios (WebForms, WinForms, LINQ-to-SQL, etc.) mixing generated code with programmer code. There are more reasons to use it. For example, if you have big classes in large, unwieldy files, but the classes have groups of logically related methods, partial classes may be an option to make your file sizes more manageable. A: Where I'm at we have a program that handles incoming files from clients.
It's set up so that each client's code is in its own class library project, which knows how to handle whatever format that client chooses to use. The main code uses the libraries by defining a fairly extensive interface that a class in the library must implement (probably should be a few distinct interfaces, but it's too late to change it now). Sometimes that involves a lot more code in the same class than we'd normally think prudent. Partial classes allow us to break them up somewhat. A: On UserControls which are relatively complicated, I put the event handling stuff in one file and the painting and properties in another file. Partial classes work great for this. Usually these parts of the class are relatively independent and it's nice to be able to edit painting and event handling side by side. A: I worked on a project a couple years ago where we had a typed DataSet class that had a ton of code in it: methods in the DataTables, methods in the TableAdapters, declarations of TableAdapter instances, you name it. It was a massive central point of the project that everyone had to work on often, and there was a lot of source-control contention over the partial class code file. So I split the code file into five or six partial class files, grouped by function, so that we could work on smaller pieces and not have to lock the whole file every time we had to change some little thing. (Of course, we could also have solved the problem by not using an exclusively-locking source-control system, but that's another issue.) A: I am late in the game... but just my 2 cents... One use could be to refactor an existing god class in an existing legacy code base to multiple partial classes. It could improve the discoverability of code - if a proper naming convention is followed for the file names containing the partial classes. This could also reduce source-control resolve and merge work to an extent.
Ideally, a god class should be broken down into multiple small classes - each having a single responsibility. Sometimes it is disruptive to perform medium to large refactorings. In such cases partial classes could provide temporary relief. A: Correction, as Matt pointed out, both sides of the partial need to be in the same assembly. my bad. A: I use it in a data access layer. The generated classes like the mapper and queries are partial. If I need to add a mapper method, for example to do a fancy load that's not generated, I add it to the custom class. At the end the programmer that uses the data layer in the business layer only sees one class with all the functionality he or she needs. And if the data source changes the generic parts can easily be regenerated without overwriting custom stuff. A: I just found a use for partial classes. I have a [DataContract] class that I use to pass data to the client. I wanted the client to be able to display the class in a specific way (text output), so I created a partial class and overrode the ToString method. A: Sometimes you might find terribly old code at work that may make it close to impossible to refactor out into distinct elements without breaking existing code. When you aren't given the option or the time to create a more genuine architecture, partial classes make it incredibly easy to separate logic where it's needed. This allows existing code to continue using the same architecture while you gain a step closer to a more concrete architecture. A: Anywhere you'd have used #region sections before probably makes more sense as separate files in partial classes. I personally use partial classes for large classes where static members go in one file and instance members go in the other one. A: EDIT: DSL Tools for Visual Studio uses partial classes. Thus, it's a feature that much automatically generated code uses.
Instead of using #region, the automatically generated code goes to one file and the user code (also called custom code) goes to another, even in different directories, so that the developer does not get confused with so many meaningless files. It's good to have this choice, which you can combine - but are not forced to - with inheritance. Also, it can be handy to separate the logic of some classes among several directories. Of course, for machines, it's the same, but it enhances the user readability experience.
{ "language": "en", "url": "https://stackoverflow.com/questions/160514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: linqtosql, timespan, aggregates ... can it be done? Can this be done w/ linqtosql? SELECT City, SUM(DATEDIFF(minute,StartDate,Completed)) AS Downtime FROM Incidents GROUP BY City A: using System.Data.Linq.SqlClient; db.Incidents .GroupBy(i => i.City) .Select(g => new { City = g.Key, DownTime = g.Sum(i => SqlMethods.DateDiffMinute(i.StartDate, i.Completed)) }); A: Yes. You have to use the SqlMethods class. http://msdn.microsoft.com/en-us/library/bb882657.aspx
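To sanity-check what the grouped query computes, the same aggregation can be sketched in plain code. This Python example uses made-up incident data, and note one caveat: SQL Server's DATEDIFF(minute, ...) counts minute boundaries crossed, so it can differ by one from elapsed-seconds division when times don't fall on whole minutes:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical Incidents rows: (City, StartDate, Completed)
incidents = [
    ("Austin", datetime(2008, 9, 1, 8, 0),  datetime(2008, 9, 1, 9, 30)),
    ("Austin", datetime(2008, 9, 2, 10, 0), datetime(2008, 9, 2, 10, 45)),
    ("Dallas", datetime(2008, 9, 1, 12, 0), datetime(2008, 9, 1, 12, 20)),
]

# GROUP BY City with SUM(DATEDIFF(minute, StartDate, Completed))
downtime = defaultdict(int)
for city, start, completed in incidents:
    downtime[city] += int((completed - start).total_seconds() // 60)
```

With the sample rows above, Austin accumulates 90 + 45 minutes and Dallas 20 minutes, matching what the SQL and the SqlMethods.DateDiffMinute translation should return for equivalent data.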
{ "language": "en", "url": "https://stackoverflow.com/questions/160519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Exporting MSAccess Tables as Unicode with Tilde delimiter I want to export the contents of several tables from MSAccess2003. The tables contain unicode Japanese characters. I want to store them as tilde delimited text files. I can do this manually using File/Export and, in the 'Advanced' dialog selecting tilde as Field Delimiter and the Unicode as the Code Page. I can store this as an Export Specification, but this seems to be table specific. I want to export many tables using VBA Code. So far I have tried: Sub ExportTables() Dim lTbl As Long Dim dBase As Database Dim TableName As String Set dBase = CurrentDb For lTbl = 0 To dBase.TableDefs.Count 'If the table name is a temporary or system table then ignore it If Left(dBase.TableDefs(lTbl).Name, 1) = "~" Or _ Left(dBase.TableDefs(lTbl).Name, 4) = "MSYS" Then '~ indicates a temporary table 'MSYS indicates a system level table Else TableName = dBase.TableDefs(lTbl).Name DoCmd.TransferText acExportDelim, "UnicodeTilde", TableName, "c:\" + TableName + ".txt", True End If Next lTbl Set dBase = Nothing End Sub When I run this I get an exception: Run-time error '3011': The Microsoft Jet database engine could not find the object "Allowance1#txt'. Make sure the object exists and that you spell its name and the path name correctly. If I debug at this point, TableName is 'Allowance1', as expected. I guess my UnicodeTilde export specification is table specific, so I can't use it for multiple tables. What is the solution? Should I use something else, other than TransferText, or perhaps create the export specification programatically? Any help appreciated. A: I have eventually solved this. (I am now using Access 2007 but had the same problems as with Access 2003.) First, what didn't work: TransferText would only make the Header Row unicode and tilde delimited, even with a correctly formatted schema.ini. (No, I didn't put it all on one line, that was just a formatting issue with the html on stackoverflow.) 
[MyTable.txt] CharacterSet = Unicode Format = Delimited(~) ColNameHeader = True NumberDigits = 10 Col1= "Col1" Char Width 10 Col2= "Col2" Integer Col3= "Col3" Char Width 2 Just using a select statement: SELECT * INTO [Text;DATABASE=c:\export\;FMT=Delimited(~)].[MyTable.txt] FROM [MyTable] Totally ignored the FMT. I found it very hard to find documentation on the format of the parameters. Whatever I typed in the FMT parameter, the only thing I could get to work was Fixed. Everything else was treated as CSVDelimited. I could check this as the select statement created a schema.ini file like this: [MyTable.txt] ColNameHeader=True CharacterSet=1252 Format=CSVDelimited Col1=Col1 Char Width 10 Col2=Col2 Integer Col3=Col3 Char Width 2 My eventual solution was to create my own schema.ini and then use the select statement. My Module code looks something like this: Option Compare Database Option Explicit Public Function CreateSchemaFile(bIncFldNames As Boolean, _ sPath As String, _ sSectionName As String, _ sTblQryName As String) As Boolean Dim Msg As String On Local Error GoTo CreateSchemaFile_Err Dim ws As Workspace, db As Database Dim tblDef As TableDef, fldDef As Field Dim i As Integer, Handle As Integer Dim fldName As String, fldDataInfo As String ' ----------------------------------------------- ' Set DAO objects. ' ----------------------------------------------- Set db = CurrentDb() ' ----------------------------------------------- ' Open schema file for output. ' ----------------------------------------------- Handle = FreeFile Open sPath & "schema.ini" For Output Access Write As #Handle ' ----------------------------------------------- ' Write schema header.
' ----------------------------------------------- Print #Handle, "[" & sSectionName & "]" Print #Handle, "CharacterSet = Unicode" Print #Handle, "Format = Delimited(~)" Print #Handle, "ColNameHeader = " & _ IIf(bIncFldNames, "True", "False") Print #Handle, "NumberDigits = 10" ' ----------------------------------------------- ' Get data concerning schema file. ' ----------------------------------------------- Set tblDef = db.TableDefs(sTblQryName) With tblDef For i = 0 To .Fields.Count - 1 Set fldDef = .Fields(i) With fldDef fldName = .Name Select Case .Type Case dbBoolean fldDataInfo = "Bit" Case dbByte fldDataInfo = "Byte" Case dbInteger fldDataInfo = "Short" Case dbLong fldDataInfo = "Integer" Case dbCurrency fldDataInfo = "Currency" Case dbSingle fldDataInfo = "Single" Case dbDouble fldDataInfo = "Double" Case dbDate fldDataInfo = "Date" Case dbText fldDataInfo = "Char Width " & Format$(.Size) Case dbLongBinary fldDataInfo = "OLE" Case dbMemo fldDataInfo = "LongChar" Case dbGUID fldDataInfo = "Char Width 16" End Select Print #Handle, "Col" & Format$(i + 1) _ & "= """ & fldName & """" & Space$(1); "" _ & fldDataInfo End With Next i End With CreateSchemaFile = True CreateSchemaFile_End: Close Handle Exit Function CreateSchemaFile_Err: Msg = "Error #: " & Format$(Err.Number) & vbCrLf Msg = Msg & Err.Description MsgBox Msg Resume CreateSchemaFile_End End Function Public Function ExportATable(TableName As String) Dim ThePath As String Dim FileName As String Dim TheQuery As String Dim Exporter As QueryDef ThePath = "c:\export\" FileName = TableName + ".txt" CreateSchemaFile True, ThePath, FileName, TableName On Error GoTo IgnoreDeleteFileErrors FileSystem.Kill ThePath + FileName IgnoreDeleteFileErrors: TheQuery = "SELECT * INTO [Text;DATABASE=" + ThePath + "].[" + FileName + "] FROM [" + TableName + "]" Set Exporter = CurrentDb.CreateQueryDef("", TheQuery) Exporter.Execute End Function Sub ExportTables() Dim lTbl As Long Dim dBase As Database Dim TableName As String 
Set dBase = CurrentDb For lTbl = 0 To dBase.TableDefs.Count - 1 'If the table name is a temporary or system table then ignore it If Left(dBase.TableDefs(lTbl).Name, 1) = "~" Or _ Left(dBase.TableDefs(lTbl).Name, 4) = "MSYS" Then '~ indicates a temporary table 'MSYS indicates a system level table Else TableName = dBase.TableDefs(lTbl).Name ExportATable (TableName) End If Next lTbl Set dBase = Nothing End Sub I make no claims that this is elegant, but it works. Also note that the stackoverflow code formatter doesn't like my \", so it doesn't pretty print my code very nicely. A: In relation to this thread I have stumbled across an incredibly simple solution for being able to use one specification across all table exports whereas normally you would have to create a separate one for each; or use the sub routine provided by Richard A. The process is as follows: Create a specification e.g Pipe | delimited with any table, then open a dynaset query in access using SQL SELECT * FROM MSysIMEXColumns and then simply delete all resulting rows. Now this spec will not give error 3011 when you attempt to use a different table to that which you used to create the original spec and is essentially a universal Pipe export spec for any table/query you wish. This has been discovered/tested in access 2003 so I assume will work for later versions also. Kind Regards, Matt Donnan A: I've got part of the answer: I'm writing a schema.ini file with VBA, then doing my TransferText. This is creating an export format on the fly. The only problem is, although my schema.ini contains: ColNameHeader = True CharacterSet = Unicode Format = Delimited(~) Only the header row is coming out in unicode with tilde delimiters. The rest of the rows are ANSI with commas. A: I've got two suggestions for you: * *Make sure you're putting each setting in your [schema.ini] file on a new line. (You've listed it here all on one line, so I thought I'd make sure.) 
*Don't forget to supply the CodePage argument (last one) when you call your TransferText. Here's a list of supported values if you need it: http://msdn.microsoft.com/en-us/library/aa288104.aspx Other than that, it looks like your approach should work.
{ "language": "en", "url": "https://stackoverflow.com/questions/160532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I get the "td" in a table element with jquery? I need to get the "td" element of a table. I do not have the ability to add a mouseover or onclick event to the "td" element, so I need to add them with JQUERY. I need JQUERY to add the mouseover and onclick event to the all "td" elements in the table. Thats what I need, maybe someone can help me out? A: $(function() { $("table#mytable td").mouseover(function() { //The onmouseover code }).click(function() { //The onclick code }); }); A: Work off of the following code to get you started. It should do just what you need. $("td").hover(function(){ $(this).css("background","#0000ff"); }, function(){ $(this).css("background","#ffffff"); }); You can use this as a reference, which is where I pulled that code.
{ "language": "en", "url": "https://stackoverflow.com/questions/160534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: ZIP Code (US Postal Code) validation I thought people would be working on little code projects together, but I don't see them, so here's an easy one: Code that validates a valid US Zip Code. I know there are ZIP code databases out there, but there are still uses, like web pages, quick validation, and also the fact that zip codes keep getting issued, so you might want to use weak validation. I wrote a little bit about zip codes in a side project on my wiki/blog: https://benc.fogbugz.com/default.asp?W24 There is also a new, weird type of zip code. https://benc.fogbugz.com/default.asp?W42 I can do the javascript code, but it would be interesting to see how many languages we can get here. A: Here's a JavaScript function which validates a ZIP/postal code based on a country code. It allows somewhat liberal formatting. You could add cases for other countries as well. Note that the default case allows empty postal codes since not all countries use them. function isValidPostalCode(postalCode, countryCode) { switch (countryCode) { case "US": postalCodeRegex = /^([0-9]{5})(?:[-\s]*([0-9]{4}))?$/; break; case "CA": postalCodeRegex = /^([A-Z][0-9][A-Z])\s*([0-9][A-Z][0-9])$/; break; default: postalCodeRegex = /^(?:[A-Z0-9]+([- ]?[A-Z0-9]+)*)?$/; } return postalCodeRegex.test(postalCode); } FYI The second link referring to vanity ZIP codes appears to have been an April Fool's joke. A: Javascript Regex Literal: US Zip Codes: /(^\d{5}$)|(^\d{5}-\d{4}$)/ var isValidZip = /(^\d{5}$)|(^\d{5}-\d{4}$)/.test("90210"); Some countries use Postal Codes, which would fail this pattern. A: If you're doing for Canada remember that not all letters are valid These letters are invalid: D, F, I, O, Q, or U And the letters W and Z are not used as the first letter. Also some people use an optional space after the 3rd character. 
Here is a regular expression for Canadian postal code: new RegExp(/^[abceghjklmnprstvxy][0-9][abceghjklmnprstvwxyz]\s?[0-9][abceghjklmnprstvwxyz][0-9]$/i) The last i makes it case insensitive. A: Here's one from jQuery Validate plugin's additional-methods.js file... jQuery.validator.addMethod("zipUS", function(value, element) { return /(^\d{5}$)|(^\d{5}-\d{4}$)/.test(value); }, "Please specify a valid US zip code."); EDIT: Since the above code is part of the jQuery Validate plugin, it depends on the .addMethod() method. Remove dependency on plugins and make it more generic.... function checkZip(value) { return (/(^\d{5}$)|(^\d{5}-\d{4}$)/).test(value); }; Example Usage: http://jsfiddle.net/5PNcJ/ A: function isValidUSZip(sZip) { return /^\d{5}(-\d{4})?$/.test(sZip); } A: This is a good JavaScript solution to the validation issue you have: /\b\d{5}-\d{4}\b/ A: As I work in the mailing industry for 17 years I've seen all kinds of data entry in this area I find it interesting how many people do not know their address as it is defined by the USPS. I still see addresses like this: XYZ College IT Department City, St ZIP The worst part is the mail 99% of the time is delivered, the other 1%, well that is returned for an incomplete address as it should. In an earlier post someone mentioned USPS CASS, that software is not free. To regex a zip code tester is nice, I'm using expressions to determine if US, CA, UK, or AU zip code. I've seen expressions for Japan and others which only add challenges in choosing the correct country that a zip belongs to. By far the best answer is to use Drop Down Lists for State, and Country. Then use tables to further validate if needed. Just to give you an idea there are 84052 acceptable US City St Zip combinations on just the first 5 digits. There are 249 valid countries as per the ISO and there are 65 US State/Territories. There are Military, PO Box only, and Unique zip code classes as well. KISS applies here. 
A: Suggest you have a look at the USPS Address Information APIs. You can validate a zip and obtain standard formatted addresses. https://www.usps.com/business/web-tools-apis/address-information.htm A: One way to check a valid Canada postal code is: function isValidCAPostal(pcVal) { return /^[A-Za-z][0-9][A-Za-z]\s{0,1}[0-9][A-Za-z][0-9]$/.test(pcVal); } Hope this will help someone. A: To further my answer, UPS and FedEx cannot deliver to a PO Box without using the USPS as the final handler. Most shipping software out there will not allow a PO Box zip for their standard services. Examples of PO Box zips are 00604 - RAMEY, PR and 06141 - HARTFORD, CT. The whole need to validate zip codes can really be a question of how far do you go, what is the budget, what is the timeline. Like anything with expressions: test, test, test, and test again. I had an expression for State validation and found that YORK passed when it should fail. The one time in thousands someone entered New York, New York 10279, ugh. Also keep in mind, USPS does not like punctuation such as N. Market St. and also has very specific acceptable abbreviations for things like Lane, Place, North, Corporation and the like. A: Drupal 7 also has an easy solution here; this will allow you to validate against multiple countries. https://drupal.org/project/postal_code_validation You will need this module as well https://drupal.org/project/postal_code Test it in http://simplytest.me/ A: Are you referring to address validation? Like the previous answer by Mike, you need to cater for the other 95%. What you can do is, when the user selects their country, then enable validation. Address validation and zipcode validation are 2 different things. Validating the ZIP is just making sure it's an integer. Address validation is validating the actual address for accuracy, preferably for mailing.
A: To allow a user to enter a Canadian Postal code with lower case letters as well, the regex would need to look like this: /^([a-zA-Z][0-9][a-zA-Z])\s*([0-9][a-zA-Z][0-9])$/
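Since the question invites implementations in other languages, here is a Python version of the same patterns. The Canadian letter restrictions follow the rules quoted in the answers above (D, F, I, O, Q, U never appear; W and Z are additionally excluded from the first position) - verify them against Canada Post before relying on this:

```python
import re

US_ZIP = re.compile(r"^\d{5}(?:-\d{4})?$")  # 12345 or 12345-6789
# Valid Canadian letters per the answers above, with W/Z barred from position 1
CA_POSTAL = re.compile(
    r"^[ABCEGHJ-NPRSTVXY]\d[ABCEGHJ-NPRSTV-Z] ?\d[ABCEGHJ-NPRSTV-Z]\d$",
    re.IGNORECASE,
)

def is_valid_us_zip(s):
    """True for a 5-digit or ZIP+4 code."""
    return US_ZIP.match(s) is not None

def is_valid_ca_postal(s):
    """True for an A1A 1A1 style postal code, space optional, case-insensitive."""
    return CA_POSTAL.match(s.strip()) is not None
```

As with the JavaScript versions, this is format validation only; it says nothing about whether the code is actually assigned, which needs a lookup service such as the USPS API mentioned above.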
{ "language": "en", "url": "https://stackoverflow.com/questions/160550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: My form doesn't properly display when it is launched from another thread Here's the situation: I'm developing a simple application with the following structure: * *FormMain (startup point) *FormNotification *ComplexFunctions Right? Well, in FormMain I have the following function: private void DoItInNewThread(ParameterizedThreadStart pParameterizedThreadStart, object pParameters, ThreadPriority pThreadPriority) { Thread oThread = new Thread(pParameterizedThreadStart); oThread.CurrentUICulture = Settings.Instance.Language; oThread.IsBackground = true; oThread.Priority = pThreadPriority; oThread.Name = "μRemote: Background operation"; oThread.Start(pParameters); } So, every time that I need to call a time consuming method located on ComplexFunctions I do the following: // This is FormMain.cs string strSomeParameter = "lala"; DoItInNewThread(new ParameterizedThreadStart(ComplexFunctions.DoSomething), strSomeParameter, ThreadPriority.Normal); The other class, FormNotification, is a Form that displays some information about the process to the user. This FormNotification could be called from FormMain or ComplexFunctions. Example: // This is ComplexFunctions.cs public void DoSomething(string pSomeParameter) { // Imagine some time consuming task FormNotification formNotif = new FormNotification(); formNotif.Notify(); } FormNotify has a timer, so after 10 seconds it closes the form. I'm not using formNotif.ShowDialog because I don't want to give focus to this Form. You could check this link to see what I'm doing in Notify. Ok, here's the problem: When I call FormNotify from ComplexFunctions, which is called from another Thread in FormMain ... this FormNotify disappears after a few milliseconds. It's the same effect as when you do something like this: using(FormSomething formSomething = new FormSomething) { formSomething.Show(); } How can I avoid this?
These are possible solutions that I don't want to use: * *Using Thread.Sleep(10000) in FormNotify *Using FormNotif.ShowDialog() This is a simplified scenario (FormNotify does some other fancy stuff that just stays up for 10 seconds, but that is irrelevant to the problem). Thanks for your time!!! And please, sorry for my English. A: You aren't allowed to make WinForms calls from other threads. Look at BeginInvoke in the form -- you can call a delegate to show the form from the UI thread. Edit: From the comments (do not set CheckForIllegalCrossThreadCalls to false). More Info Almost every GUI library is designed to only allow calls that change the GUI to be made in a single thread designated for that purpose (called the UI thread). If you are in another thread, you are required to arrange for the call to change the GUI to be made in the UI thread. In .NET, the way to do that is to call Invoke (synchronous) or BeginInvoke (asynchronous). The equivalent Java Swing call is invokeLater() -- there are similar functions in almost every GUI library. A: There is something called thread affinity. There are two threads in a WinForms application, one for rendering and one for managing the user interface. You deal only with the user interface thread. The rendering thread remains hidden - it runs in the background. Only objects created on the UI thread can manipulate the UI - i.e. the objects have thread affinity with the UI thread. 
Since you are trying to update the UI (show a notification) from a thread other than the UI thread, define a delegate in your worker thread and make FormMain listen for this event. In the event handler (defined in FormMain), write code to show the FormNotify. Fire the event from the worker thread when you want to show the notification. When a thread other than the creating thread of a control tries to access one of that control's methods or properties, it often leads to unpredictable results. A common invalid thread activity is a call on the wrong thread that accesses the control's Handle property. Set CheckForIllegalCrossThreadCalls to true to find and diagnose this thread activity more easily while debugging. Note that illegal cross-thread calls will always raise an exception when an application is started outside the debugger. Note: setting CheckForIllegalCrossThreadCalls to true should be done in debugging situations only; without it, unpredictable results will occur and you will wind up chasing bugs that you will have a difficult time finding. A: Use the SetWindowPos API call to ensure that your notify form is the topmost window. 
This post explains how: http://www.pinvoke.net/default.aspx/user32/SetWindowPos.html A: Assuming you have a button on the form and want to open another form, Form1, when the user clicks that button: private void button1_Click(object sender, EventArgs e) { Thread t = new Thread(new ThreadStart(this.ShowForm1)); t.Start(); } All you need to do is check the InvokeRequired property and, if it is true, call the Invoke method of your form passing the ShowForm1 delegate, which will end up in a recursive call where InvokeRequired will be false: delegate void Func(); private void ShowForm1() { if (this.InvokeRequired) { Func f = new Func(ShowForm1); this.Invoke(f); } else { Form1 form1 = new Form1(); form1.Show(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/160555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Defining custom actions in Selenium I have a Selenium test case that enters dates into a date selector made up of three pulldowns (year, month, and day). select validity_Y label=2008 select validity_M label=08 select validity_D label=08 This part gets repeated a lot throughout the test case. I'd like to reduce it by defining my custom action "selectValidity", so that I can have less redundancy, something like selectValidity 2008,08,08 What is the best (easiest, cleanest) way to add macros or subroutines to a test case? A: I take it you're coding your tests in Selenese. If so, have you considered using one of the client drivers in any one of many languages? They've got java, .net, perl, ruby, javascript, php, and python. Each and every one of them have subroutines. Supposedly, the IDE can translate your existing Selenese tests into most of these. A: You may be able to define your helper JS functions in a JS file and include it as a core extension or as part of user-extensions.js (as it is done for UI-Elements). A JS function called selectValidity could then use DOM to select the values.
{ "language": "en", "url": "https://stackoverflow.com/questions/160557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Programming Contests (with prizes) I've had a go at solving the Eternity puzzle (1,000,000 GBP prize) and the Netflix Prize ($1,000,000) in the past. I didn't win either, but they motivated me to find out about a new area. What other contests with prizes do you know about / have competed in yourself? A: http://www.topcoder.com/ A: The Millennium Problems. A: There is an Eternity II project, although it has been out for a bit and the first solutions are being checked on Dec 31. http://www.eternityii.com/ A: The Netflix Prize (for voting) A: I entered the STM32-Primer 2009 programming contest, and came in 5th place with my Solitaire program. A: I have started a website with much smaller-scale programming contests, called Code Competition, that are run each month.
{ "language": "en", "url": "https://stackoverflow.com/questions/160567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: No output to console from a WPF application? I'm using Console.WriteLine() from a very simple WPF test application, but when I execute the application from the command line, I'm seeing nothing being written to the console. Does anyone know what might be going on here? I can reproduce it by creating a WPF application in VS 2008, and simply adding Console.WriteLine("text") anywhere it gets executed. Any ideas? All I need right now is something as simple as Console.WriteLine(). I realize I could use log4net or some other logging solution, but I really don't need that much functionality for this application. Edit: I should have remembered that Console.WriteLine() is for console applications. Oh well, no stupid questions, right? :-) I'll just use System.Diagnostics.Trace.WriteLine() and DebugView for now. A: I use Console.WriteLine() and view the results in the Output window... A: Old post, but I ran into this so if you're trying to output something to Output in a WPF project in Visual Studio, the contemporary method is: Include this: using System.Diagnostics; And then: Debug.WriteLine("something"); A: You can use Trace.WriteLine("text"); This will output to the "Output" window in Visual Studio (when debugging). Make sure the System.Diagnostics namespace is included: using System.Diagnostics; A: I've created a solution, mixing information from various posts. It's a form that contains a label and one textbox. The console output is redirected to the textbox. There is also a class called ConsoleView that implements three public methods: Show(), Close(), and Release(). The last one leaves the console open and activates the Close button so results can be viewed. The form is called FrmConsole. Here are the XAML and the C# code. The use is very simple: ConsoleView.Show("Title of the Console"); to open the console. Use: System.Console.WriteLine("The debug message"); to output text to the console. Use: ConsoleView.Close(); to close the console. 
ConsoleView.Release(); Leaves open the console and enables the Close button XAML <Window x:Class="CustomControls.FrmConsole" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:local="clr-namespace:CustomControls" mc:Ignorable="d" Height="500" Width="600" WindowStyle="None" ResizeMode="NoResize" WindowStartupLocation="CenterScreen" Topmost="True" Icon="Images/icoConsole.png"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="40"/> <RowDefinition Height="*"/> <RowDefinition Height="40"/> </Grid.RowDefinitions> <Label Grid.Row="0" Name="lblTitulo" HorizontalAlignment="Center" HorizontalContentAlignment="Center" VerticalAlignment="Center" VerticalContentAlignment="Center" FontFamily="Arial" FontSize="14" FontWeight="Bold" Content="Titulo"/> <Grid Grid.Row="1"> <Grid.ColumnDefinitions> <ColumnDefinition Width="10"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="10"/> </Grid.ColumnDefinitions> <TextBox Grid.Column="1" Name="txtInner" FontFamily="Arial" FontSize="10" ScrollViewer.CanContentScroll="True" VerticalScrollBarVisibility="Visible" HorizontalScrollBarVisibility="Visible" TextWrapping="Wrap"/> </Grid> <Button Name="btnCerrar" Grid.Row="2" Content="Cerrar" Width="100" Height="30" HorizontalAlignment="Center" HorizontalContentAlignment="Center" VerticalAlignment="Center" VerticalContentAlignment="Center"/> </Grid> The code of the Window: partial class FrmConsole : Window { private class ControlWriter : TextWriter { private TextBox textbox; public ControlWriter(TextBox textbox) { this.textbox = textbox; } public override void WriteLine(char value) { textbox.Dispatcher.Invoke(new Action(() => { textbox.AppendText(value.ToString()); textbox.AppendText(Environment.NewLine); textbox.ScrollToEnd(); })); } public override void WriteLine(string value) { 
textbox.Dispatcher.Invoke(new Action(() => { textbox.AppendText(value); textbox.AppendText(Environment.NewLine); textbox.ScrollToEnd(); })); } public override void Write(char value) { textbox.Dispatcher.Invoke(new Action(() => { textbox.AppendText(value.ToString()); textbox.ScrollToEnd(); })); } public override void Write(string value) { textbox.Dispatcher.Invoke(new Action(() => { textbox.AppendText(value); textbox.ScrollToEnd(); })); } public override Encoding Encoding { get { return Encoding.UTF8; } } } //DEFINICIONES DE LA CLASE #region DEFINICIONES DE LA CLASE #endregion //CONSTRUCTORES DE LA CLASE #region CONSTRUCTORES DE LA CLASE public FrmConsole(string titulo) { InitializeComponent(); lblTitulo.Content = titulo; Clear(); btnCerrar.Click += new RoutedEventHandler(BtnCerrar_Click); Console.SetOut(new ControlWriter(txtInner)); DesactivarCerrar(); } #endregion //PROPIEDADES #region PROPIEDADES #endregion //DELEGADOS #region DELEGADOS private void BtnCerrar_Click(object sender, RoutedEventArgs e) { Close(); } #endregion //METODOS Y FUNCIONES #region METODOS Y FUNCIONES public void ActivarCerrar() { btnCerrar.IsEnabled = true; } public void Clear() { txtInner.Clear(); } public void DesactivarCerrar() { btnCerrar.IsEnabled = false; } #endregion } the code of ConsoleView class static public class ConsoleView { //DEFINICIONES DE LA CLASE #region DEFINICIONES DE LA CLASE static FrmConsole console; static Thread StatusThread; static bool isActive = false; #endregion //CONSTRUCTORES DE LA CLASE #region CONSTRUCTORES DE LA CLASE #endregion //PROPIEDADES #region PROPIEDADES #endregion //DELEGADOS #region DELEGADOS #endregion //METODOS Y FUNCIONES #region METODOS Y FUNCIONES public static void Show(string label) { if (isActive) { return; } isActive = true; //create the thread with its ThreadStart method StatusThread = new Thread(() => { try { console = new FrmConsole(label); console.ShowDialog(); //this call is needed so the thread remains open until the dispatcher is 
closed Dispatcher.Run(); } catch (Exception) { } }); //run the thread in STA mode to make it work correctly StatusThread.SetApartmentState(ApartmentState.STA); StatusThread.Priority = ThreadPriority.Normal; StatusThread.Start(); } public static void Close() { isActive = false; if (console != null) { //need to use the dispatcher to call the Close method, because the window is created in another thread, and this method is called by the main thread console.Dispatcher.InvokeShutdown(); console = null; StatusThread = null; } console = null; } public static void Release() { isActive = false; if (console != null) { console.Dispatcher.Invoke(console.ActivarCerrar); } } #endregion } I hope this result usefull. A: Brian's solution is to always open a console when your WPF application starts. If you want to dynamically enable console output (for example, only when launched with certain commandline arguments) call AttachConsole: [DllImport("kernel32.dll")] static extern bool AttachConsole(uint dwProcessId); const uint ATTACH_PARENT_PROCESS = 0x0ffffffff; Then, when you want to start writing to the console: AttachConsole(ATTACH_PARENT_PROCESS); Console.WriteLine("Hello world!"); Console.WriteLine("Writing to the hosting console!"); A: Although John Leidegren keeps shooting down the idea, Brian is correct. I've just got it working in Visual Studio. To be clear a WPF application does not create a Console window by default. You have to create a WPF Application and then change the OutputType to "Console Application". When you run the project you will see a console window with your WPF window in front of it. It doesn't look very pretty, but I found it helpful as I wanted my app to be run from the command line with feedback in there, and then for certain command options I would display the WPF window. 
A: Right click on the project, "Properties", "Application" tab, change "Output Type" to "Console Application", and then it will also have a console, the WPF Applications still runs as expected (even if the Application output type is switched to "Console Application"). A: It's possible to see output intended for console by using command line redirection. For example: C:\src\bin\Debug\Example.exe > output.txt will write all the content to output.txt file. A: You'll have to create a Console window manually before you actually call any Console.Write methods. That will init the Console to work properly without changing the project type (which for WPF application won't work). Here's a complete source code example, of how a ConsoleManager class might look like, and how it can be used to enable/disable the Console, independently of the project type. With the following class, you just need to write ConsoleManager.Show() somewhere before any call to Console.Write... [SuppressUnmanagedCodeSecurity] public static class ConsoleManager { private const string Kernel32_DllName = "kernel32.dll"; [DllImport(Kernel32_DllName)] private static extern bool AllocConsole(); [DllImport(Kernel32_DllName)] private static extern bool FreeConsole(); [DllImport(Kernel32_DllName)] private static extern IntPtr GetConsoleWindow(); [DllImport(Kernel32_DllName)] private static extern int GetConsoleOutputCP(); public static bool HasConsole { get { return GetConsoleWindow() != IntPtr.Zero; } } /// <summary> /// Creates a new console instance if the process is not attached to a console already. /// </summary> public static void Show() { //#if DEBUG if (!HasConsole) { AllocConsole(); InvalidateOutAndError(); } //#endif } /// <summary> /// If the process has a console attached to it, it will be detached and no longer visible. Writing to the System.Console is still possible, but no output will be shown. 
/// </summary> public static void Hide() { //#if DEBUG if (HasConsole) { SetOutAndErrorNull(); FreeConsole(); } //#endif } public static void Toggle() { if (HasConsole) { Hide(); } else { Show(); } } static void InvalidateOutAndError() { Type type = typeof(System.Console); System.Reflection.FieldInfo _out = type.GetField("_out", System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic); System.Reflection.FieldInfo _error = type.GetField("_error", System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic); System.Reflection.MethodInfo _InitializeStdOutError = type.GetMethod("InitializeStdOutError", System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic); Debug.Assert(_out != null); Debug.Assert(_error != null); Debug.Assert(_InitializeStdOutError != null); _out.SetValue(null, null); _error.SetValue(null, null); _InitializeStdOutError.Invoke(null, new object[] { true }); } static void SetOutAndErrorNull() { Console.SetOut(TextWriter.Null); Console.SetError(TextWriter.Null); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/160587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "153" }
Q: Experience with CSLA in a WPF/WCF application Has anyone used CSLA in an application that has a WPF front end and a WCF wire for entities? If so, which "entity framework" did you use? (nHibernate, Linq, etc...) What were the hang-ups? What did it help you with? I am concerned with implementing this for an application, not knowing how data-binding, validation with UI/entities, or deferred loading will react. We are also worried about the message sizes coming over WCF, especially with the depth of our entities. Are there any stress tests out there? I am trying to figure out what size application/entities this is really designed for. If you can help answer any of these questions it would be greatly appreciated. A: I don't know much about CSLA, but I have built applications with WPF + LINQ + WCF. This combination works well for one-way data binding to WPF, but we needed an intermediate object class (a ViewModel for WPF) to get two-way binding working. These classes might be an exact copy of the WCF classes (in most cases) but with INotifyPropertyChanged implemented.
{ "language": "en", "url": "https://stackoverflow.com/questions/160589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Combine multiple LINQ expressions from an array I'm trying to combine a list of functions like so. I have this: Func<int, bool>[] criteria = new Func<int, bool>[3]; criteria[0] = i => i % 2 == 0; criteria[1] = i => i % 3 == 0; criteria[2] = i => i % 5 == 0; And I want this: Func<int, bool>[] predicates = new Func<int, bool>[3]; predicates[0] = i => i % 2 == 0; predicates[1] = i => i % 2 == 0 && i % 3 == 0; predicates[2] = i => i % 2 == 0 && i % 3 == 0 && i % 5 == 0; So far I've got the following code: Expression<Func<int, bool>>[] results = new Expression<Func<int, bool>>[criteria.Length]; for (int i = 0; i < criteria.Length; i++) { results[i] = f => true; for (int j = 0; j <= i; j++) { Expression<Func<int, bool>> expr = b => criteria[j](b); var invokedExpr = Expression.Invoke( expr, results[i].Parameters.Cast<Expression>()); results[i] = Expression.Lambda<Func<int, bool>>( Expression.And(results[i].Body, invokedExpr), results[i].Parameters); } } var predicates = results.Select(e => e.Compile()).ToArray(); Console.WriteLine(predicates[0](6)); // Returns true Console.WriteLine(predicates[1](6)); // Returns false Console.WriteLine(predicates[2](6)); // Throws an IndexOutOfRangeException Does anyone know what I'm doing wrong? A: No need to pull in Expressions... Func<int, bool>[] criteria = new Func<int, bool>[3]; criteria[0] = i => i % 2 == 0; criteria[1] = i => i % 3 == 0; criteria[2] = i => i % 5 == 0; Func<int, bool>[] predicates = new Func<int, bool>[3]; predicates[0] = criteria[0]; for (int i = 1; i < criteria.Length; i++) { //need j to be an unchanging int, one for each loop execution. 
int j = i; predicates[j] = x => predicates[j - 1](x) && criteria[j](x); } Console.WriteLine(predicates[0](6)); //True Console.WriteLine(predicates[1](6)); //True Console.WriteLine(predicates[2](6)); //False A: This was a guess, as I know little about this stuff, but this seems to fix it: Func<int, bool>[] criteria = new Func<int, bool>[3]; criteria[0] = i => i % 2 == 0; criteria[1] = i => i % 3 == 0; criteria[2] = i => i % 5 == 0; Expression<Func<int, bool>>[] results = new Expression<Func<int, bool>>[criteria.Length]; for (int i = 0; i < criteria.Length; i++) { results[i] = f => true; for (int j = 0; j <= i; j++) { int ii = i; int jj = j; Expression<Func<int, bool>> expr = b => criteria[jj](b); var invokedExpr = Expression.Invoke(expr, results[ii].Parameters.Cast<Expression>()); results[ii] = Expression.Lambda<Func<int, bool>>(Expression.And(results[ii].Body, invokedExpr), results[ii].Parameters); } } var predicates = results.Select(e => e.Compile()).ToArray(); The key is the introduction of 'ii' and 'jj' (maybe only one matters, I didn't try). I think you are capturing a mutable variable inside a lambda, and thus when you finally reference it, you're seeing the later-mutated value rather than the original value.
{ "language": "en", "url": "https://stackoverflow.com/questions/160604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Do a "git export" (like "svn export")? I've been wondering whether there is a good "git export" solution that creates a copy of a tree without the .git repository directory. There are at least three methods I know of: * *git clone followed by removing the .git repository directory. *git checkout-index alludes to this functionality but starts with "Just read the desired tree into the index..." which I'm not entirely sure how to do. *git-export is a third-party script that essentially does a git clone into a temporary location followed by rsync --exclude='.git' into the final destination. None of these solutions really strike me as being satisfactory. The closest one to svn export might be option 1, because both require the target directory to be empty first. But option 2 seems even better, assuming I can figure out what it means to read a tree into the index. A: You can archive a remote repo at any commit as zip file. git archive --format=zip --output=archive.zip --remote=USERNAME@HOSTNAME:PROJECTNAME.git HASHOFGITCOMMIT A: My preference would actually be to have a dist target in your Makefile (or other build system) that exports a distributable archive of your code (.tar.bz2, .zip, .jar, or whatever is appropriate). If you happen to be using GNU autotools or Perl's MakeMaker systems, I think this exists for you automatically. If not, I highly recommend adding it. ETA (2012-09-06): Wow, harsh downvotes. I still believe it is better to build your distributions with your build tools rather than your source code control tool. I believe in building artifacts with build tools. In my current job, our main product is built with an ant target. We are in the midst of switching source code control systems, and the presence of this ant target means one less hassle in migration. A: If you want something that works with submodules this might be worth a go. 
Note: * *MASTER_DIR = a checkout with your submodules checked out also *DEST_DIR = where this export will end up *If you have rsync, I think you'd be able to do the same thing with even less ball ache. Assumptions: * *You need to run this from the parent directory of MASTER_DIR (i.e. from MASTER_DIR, cd ..) *DEST_DIR is assumed to have been created. This is pretty easy to modify to include the creation of DEST_DIR if you wanted to: cd MASTER_DIR && tar -zcvf ../DEST_DIR/export.tar.gz --exclude='.git*' . && cd ../DEST_DIR/ && tar xvfz export.tar.gz && rm export.tar.gz A: As I understand the question, it is more about downloading just a certain state from the server, without history, and without data of other branches, rather than extracting a state from a local repository (as many answers here do). That can be done like this: git clone -b someBranch --depth 1 --single-branch git://somewhere.com/repo.git \ && rm -rf repo/.git/ * *--single-branch is available since Git 1.7.10 (April 2012). *--depth is (was?) reportedly faulty, but for the case of an export, the mentioned issues should not matter. A: A special-case answer if the repository is hosted on GitHub: just use svn export. As far as I know GitHub does not allow archive --remote. However, GitHub is svn compatible, and all git repos are accessible over svn, so you could just use svn export like you normally would, with a few adjustments to your GitHub URL. 
For example to export an entire repository, notice how trunk in the URL replaces master (or whatever the project's HEAD branch is set to): svn export https://github.com/username/repo-name/trunk/ And you can export a single file or even a certain path or folder: svn export https://github.com/username/repo-name/trunk/src/lib/folder Example with jQuery JavaScript Library The HEAD branch or master branch will be available using trunk: svn ls https://github.com/jquery/jquery/trunk The non-HEAD branches will be accessible under /branches/: svn ls https://github.com/jquery/jquery/branches/2.1-stable All tags under /tags/ in the same fashion: svn ls https://github.com/jquery/jquery/tags/2.1.3 A: Bash-implementation of git-export. I have segmented the .empty file creation and removal processes on their own function, with the purpose of re-using them in the 'git-archive' implementation (will be posted later on). I have also added the '.gitattributes' file to the process in order to remove un-wanted files from the target export folder. Included verbosity to the process while making the 'git-export' function more efficient. EMPTY_FILE=".empty"; function create_empty () { ## Processing path (target-dir): TRG_PATH="${1}"; ## Component(s): EXCLUDE_DIR=".git"; echo -en "\nAdding '${EMPTY_FILE}' files to empty folder(s): ..."; find ${TRG_PATH} -not -path "*/${EXCLUDE_DIR}/*" -type d -empty -exec touch {}/${EMPTY_FILE} \; #echo "done."; ## Purging SRC/TRG_DIRs variable(s): unset TRG_PATH EMPTY_FILE EXCLUDE_DIR; return 0; } declare -a GIT_EXCLUDE; function load_exclude () { SRC_PATH="${1}"; ITEMS=0; while read LINE; do # echo -e "Line [${ITEMS}]: '${LINE%%\ *}'"; GIT_EXCLUDE[((ITEMS++))]=${LINE%%\ *}; done < ${SRC_PATH}/.gitattributes; GIT_EXCLUDE[${ITEMS}]="${EMPTY_FILE}"; ## Purging variable(s): unset SRC_PATH ITEMS; return 0; } function purge_empty () { ## Processing path (Source/Target-dir): SRC_PATH="${1}"; TRG_PATH="${2}"; echo -e "\nPurging Git-Specific component(s): ... 
"; find ${SRC_PATH} -type f -name ${EMPTY_FILE} -exec /bin/rm '{}' \; for xRULE in ${GIT_EXCLUDE[@]}; do echo -en " '${TRG_PATH}/{${xRULE}}' files ... "; find ${TRG_PATH} -type f -name "${xRULE}" -exec /bin/rm -rf '{}' \; echo "done.'"; done; echo -e "done.\n" ## Purging SRC/TRG_PATHs variable(s): unset SRC_PATH; unset TRG_PATH; return 0; } function git-export () { TRG_DIR="${1}"; SRC_DIR="${2}"; if [ -z "${SRC_DIR}" ]; then SRC_DIR="${PWD}"; fi load_exclude "${SRC_DIR}"; ## Dynamically added '.empty' files to the Git-Structure: create_empty "${SRC_DIR}"; GIT_COMMIT="Including '${EMPTY_FILE}' files into Git-Index container."; #echo -e "\n${GIT_COMMIT}"; git add .; git commit --quiet --all --verbose --message "${GIT_COMMIT}"; if [ "${?}" -eq 0 ]; then echo " done."; fi /bin/rm -rf ${TRG_DIR} && mkdir -p "${TRG_DIR}"; echo -en "\nChecking-Out Index component(s): ... "; git checkout-index --prefix=${TRG_DIR}/ -q -f -a ## Reset: --mixed = reset HEAD and index: if [ "${?}" -eq 0 ]; then echo "done."; echo -en "Resetting HEAD and Index: ... "; git reset --soft HEAD^; if [ "${?}" -eq 0 ]; then echo "done."; ## Purging Git-specific components and '.empty' files from Target-Dir: purge_empty "${SRC_DIR}" "${TRG_DIR}" else echo "failed."; fi ## Archiving exported-content: echo -en "Archiving Checked-Out component(s): ... "; if [ -f "${TRG_DIR}.tgz" ]; then /bin/rm ${TRG_DIR}.tgz; fi cd ${TRG_DIR} && tar -czf ${TRG_DIR}.tgz ./; cd ${SRC_DIR} echo "done."; ## Listing *.tgz file attributes: ## Warning: Un-TAR this file to a specific directory: ls -al ${TRG_DIR}.tgz else echo "failed."; fi ## Purgin all references to Un-Staged File(s): git reset HEAD; ## Purging SRC/TRG_DIRs variable(s): unset SRC_DIR; unset TRG_DIR; echo ""; return 0; } Output: $ git-export /tmp/rel-1.0.0 Adding '.empty' files to empty folder(s): ... done. Checking-Out Index component(s): ... done. Resetting HEAD and Index: ... done. Purging Git-Specific component(s): ... '/tmp/rel-1.0.0/{.buildpath}' files ... 
done.' '/tmp/rel-1.0.0/{.project}' files ... done.' '/tmp/rel-1.0.0/{.gitignore}' files ... done.' '/tmp/rel-1.0.0/{.git}' files ... done.' '/tmp/rel-1.0.0/{.gitattributes}' files ... done.' '/tmp/rel-1.0.0/{*.mno}' files ... done.' '/tmp/rel-1.0.0/{*~}' files ... done.' '/tmp/rel-1.0.0/{.*~}' files ... done.' '/tmp/rel-1.0.0/{*.swp}' files ... done.' '/tmp/rel-1.0.0/{*.swo}' files ... done.' '/tmp/rel-1.0.0/{.DS_Store}' files ... done.' '/tmp/rel-1.0.0/{.settings}' files ... done.' '/tmp/rel-1.0.0/{.empty}' files ... done.' done. Archiving Checked-Out component(s): ... done. -rw-r--r-- 1 admin wheel 25445901 3 Nov 12:57 /tmp/rel-1.0.0.tgz I have now incorporated the 'git archive' functionality into a single process that makes use of 'create_empty' function and other features. function git-archive () { PREFIX="${1}"; ## sudo mkdir -p ${PREFIX} REPO_PATH="`echo "${2}"|awk -F: '{print $1}'`"; RELEASE="`echo "${2}"|awk -F: '{print $2}'`"; USER_PATH="${PWD}"; echo "$PREFIX $REPO_PATH $RELEASE $USER_PATH"; ## Dynamically added '.empty' files to the Git-Structure: cd "${REPO_PATH}"; populate_empty .; echo -en "\n"; # git archive --prefix=git-1.4.0/ -o git-1.4.0.tar.gz v1.4.0 # e.g.: git-archive /var/www/htdocs /repos/domain.name/website:rel-1.0.0 --explode OUTPUT_FILE="${USER_PATH}/${RELEASE}.tar.gz"; git archive --verbose --prefix=${PREFIX}/ -o ${OUTPUT_FILE} ${RELEASE} cd "${USER_PATH}"; if [[ "${3}" =~ [--explode] ]]; then if [ -d "./${RELEASE}" ]; then /bin/rm -rf "./${RELEASE}"; fi mkdir -p ./${RELEASE}; tar -xzf "${OUTPUT_FILE}" -C ./${RELEASE} fi ## Purging SRC/TRG_DIRs variable(s): unset PREFIX REPO_PATH RELEASE USER_PATH OUTPUT_FILE; return 0; } A: a git export to a zip archive while adding a prefix (e.g. directory name): git archive master --prefix=directoryWithinZip/ --format=zip -o out.zip A: This will copy the files in a range of commits (C to G) to a tar file. Note: this will only get the files commited. Not the entire repository. 
Slightly modified from Here Example Commit History A --> B --> C --> D --> E --> F --> G --> H --> I git diff-tree -r --no-commit-id --name-only --diff-filter=ACMRT C~..G | xargs tar -rf myTarFile.tar git-diff-tree Manual Page -r --> recurse into sub-trees --no-commit-id --> git diff-tree outputs a line with the commit ID when applicable. This flag suppressed the commit ID output. --name-only --> Show only names of changed files. --diff-filter=ACMRT --> Select only these files. See here for full list of files C..G --> Files in this range of commits C~ --> Include files from Commit C. Not just files since Commit C. | xargs tar -rf myTarFile --> outputs to tar A: By far the easiest way i've seen to do it (and works on windows as well) is git bundle: git bundle create /some/bundle/path.bundle --all See this answer for more details: How can I copy my git repository from my windows machine to a linux machine via usb drive? A: From the Git Manual: Using git-checkout-index to "export an entire tree" The prefix ability basically makes it trivial to use git-checkout-index as an "export as tree" function. Just read the desired tree into the index, and do: $ git checkout-index --prefix=git-export-dir/ -a A: I've written a simple wrapper around git-checkout-index that you can use like this: git export ~/the/destination/dir If the destination directory already exists, you'll need to add -f or --force. Installation is simple; just drop the script somewhere in your PATH, and make sure it's executable. The github repository for git-export A: I needed this for a deploy script and I couldn't use any of the above mentioned approaches. 
Instead I figured out a different solution:
#!/bin/sh
[ $# -eq 2 ] || { echo "USAGE $0 REPOSITORY DESTINATION"; exit 1; }
REPOSITORY=$1
DESTINATION=$2
TMPNAME="/tmp/$(basename $REPOSITORY).$$"
git clone $REPOSITORY $TMPNAME
rm -rf $TMPNAME/.git
mkdir -p $DESTINATION
cp -r $TMPNAME/* $DESTINATION
rm -rf $TMPNAME
A: Doing it the easy way, this is a function for .bash_profile; it directly unzips the archive at the current location. Configure your usual [url:path] first. NOTE: With this function you avoid the clone operation; it gets the tree directly from the remote repo.
gitss() {
    URL=[url:path]
    TMPFILE="`/bin/tempfile`"
    if [ "$1" = "" ]; then
        echo -e "Use: gitss repo [tree/commit]\n"
        return
    fi
    if [ "$2" = "" ]; then
        TREEISH="HEAD"
    else
        TREEISH="$2"
    fi
    echo "Getting $1/$TREEISH..."
    git archive --format=zip --remote=$URL/$1 $TREEISH > $TMPFILE && unzip $TMPFILE && echo -e "\nDone\n"
    rm $TMPFILE
}
Alias for .gitconfig, same configuration required (TAKE CARE when executing the command inside .git projects: it ALWAYS jumps to the base dir first, as said here; until this is fixed I personally prefer the function):
ss = !env GIT_TMPFILE="`/bin/tempfile`" sh -c 'git archive --format=zip --remote=[url:path]/$1 $2 \ > $GIT_TMPFILE && unzip $GIT_TMPFILE && rm $GIT_TMPFILE' -
A: I have another solution that works fine if you have a local copy of the repository on the machine where you would like to create the export. In this case move to this repository directory, and enter this command:
GIT_WORK_TREE=outputdirectory git checkout -f
This is particularly useful if you manage a website with a git repository and would like to check out a clean version in /var/www/. In this case, add this command in a .git/hooks/post-receive script (hooks/post-receive on a bare repository, which is more suitable in this situation)
A: It appears that this is less of an issue with Git than SVN. Git only puts a .git folder in the repository root, whereas SVN puts a .svn folder in every subdirectory.
So "svn export" avoids recursive command-line magic, whereas with Git recursion is not necessary. A: I found out what option 2 means. From a repository, you can do: git checkout-index -a -f --prefix=/destination/path/ The slash at the end of the path is important, otherwise it will result in the files being in /destination with a prefix of 'path'. Since in a normal situation the index contains the contents of the repository, there is nothing special to do to "read the desired tree into the index". It's already there. The -a flag is required to check out all files in the index (I'm not sure what it means to omit this flag in this situation, since it doesn't do what I want). The -f flag forces overwriting any existing files in the output, which this command doesn't normally do. This appears to be the sort of "git export" I was looking for. A: The equivalent of svn export . otherpath inside an existing repo is git archive branchname | (cd otherpath; tar x) The equivalent of svn export url otherpath is git archive --remote=url branchname | (cd otherpath; tar x) A: I think @Aredridel's post was closest, but there's a bit more to that - so I will add this here; the thing is, in svn, if you're in a subfolder of a repo, and you do: /media/disk/repo_svn/subdir$ svn export . /media/disk2/repo_svn_B/subdir then svn will export all files that are under revision control (they could have also freshly Added; or Modified status) - and if you have other "junk" in that directory (and I'm not counting .svn subfolders here, but visible stuff like .o files), it will not be exported; only those files registered by the SVN repo will be exported. For me, one nice thing is that this export also includes files with local changes that have not been committed yet; and another nice thing is that the timestamps of the exported files are the same as the original ones. 
Or, as svn help export puts it: *Exports a clean directory tree from the working copy specified by PATH1, at revision REV if it is given, otherwise at WORKING, into PATH2. ... If REV is not specified, all local changes will be preserved. Files not under version control will not be copied. To realize that git will not preserve the timestamps, compare the output of these commands (in a subfolder of a git repo of your choice): /media/disk/git_svn/subdir$ ls -la . ... and: /media/disk/git_svn/subdir$ git archive --format=tar --prefix=junk/ HEAD | (tar -t -v --full-time -f -) ... and I, in any case, notice that git archive causes all the timestamps of the archived file to be the same! git help archive says: git archive behaves differently when given a tree ID versus when given a commit ID or tag ID. In the first case the current time is used as the modification time of each file in the archive. In the latter case the commit time as recorded in the referenced commit object is used instead. ... but apparently both cases set the "modification time of each file"; thereby not preserving the actual timestamps of those files! So, in order to also preserve the timestamps, here is a bash script, which is actually a "one-liner", albeit somewhat complicated - so below it is posted in multiple lines: /media/disk/git_svn/subdir$ git archive --format=tar master | (tar tf -) | (\ DEST="/media/diskC/tmp/subdirB"; \ CWD="$PWD"; \ while read line; do \ DN=$(dirname "$line"); BN=$(basename "$line"); \ SRD="$CWD"; TGD="$DEST"; \ if [ "$DN" != "." ]; then \ SRD="$SRD/$DN" ; TGD="$TGD/$DN" ; \ if [ ! 
-d "$TGD" ] ; then \
CMD="mkdir \"$TGD\"; touch -r \"$SRD\" \"$TGD\""; \
echo "$CMD"; \
eval "$CMD"; \
fi; \
fi; \
CMD="cp -a \"$SRD/$BN\" \"$TGD/\""; \
echo "$CMD"; \
eval "$CMD"; \
done \
)
Note that it is assumed that you're exporting the contents in the "current" directory (above, /media/disk/git_svn/subdir) - and that the destination you're exporting into is somewhat inconveniently placed, but it is in the DEST environment variable. Note that with this script, you must create the DEST directory manually yourself, before running the above script.
After the script is run, you should be able to compare:
ls -la /media/disk/git_svn/subdir
ls -la /media/diskC/tmp/subdirB # DEST
... and hopefully see the same timestamps (for those files that were under version control).
Hope this helps someone, Cheers!
A: git archive also works with a remote repository.
git archive --format=tar \
    --remote=ssh://remote_server/remote_repository master | tar -xf -
To export a particular path inside the repo, add as many paths as you wish as the last arguments to git, e.g.:
git archive --format=tar \
    --remote=ssh://remote_server/remote_repository master path1/ path2/ | tar -xvf -
A: Probably the simplest way to achieve this is with git archive. If you really need just the expanded tree you can do something like this.
git archive master | tar -x -C /somewhere/else
Most of the time that I need to 'export' something from git, I want a compressed archive in any case, so I do something like this.
git archive master | bzip2 >source-tree.tar.bz2
ZIP archive:
git archive --format zip --output /full/path/to/zipfile.zip master
git help archive for more details; it's quite flexible. Be aware that even though the archive will not contain the .git directory, it will, however, contain other hidden git-specific files like .gitignore, .gitattributes, etc. If you don't want them in the archive, make sure you use the export-ignore attribute in a .gitattributes file and commit this before doing your archive. Read more...
Note: If you are interested in exporting the index, the command is
git checkout-index -a -f --prefix=/destination/path/
(See Greg's answer for more details)
A: If you're not excluding files with .gitattributes export-ignore, then try git checkout:
mkdir /path/to/checkout/
git --git-dir=/path/to/repo/.git --work-tree=/path/to/checkout/ checkout -f -q
-f When checking out paths from the index, do not fail upon unmerged entries; instead, unmerged entries are ignored.
and
-q Avoid verbose output
Additionally you can get any branch or tag, or export from a specific commit revision like in SVN, by just adding the SHA1 (the SHA1 in Git is the equivalent of the revision number in SVN):
mkdir /path/to/checkout/
git --git-dir=/path/to/repo/.git --work-tree=/path/to/checkout/ checkout 2ef2e1f2de5f3d4f5e87df7d8 -f -q -- ./
The /path/to/checkout/ must be empty; Git will not delete any file, but will overwrite files with the same name without any warning
UPDATE: To avoid the detached HEAD problem, or to leave the working repository intact when using checkout for export with tags, branches or SHA1, you need to add -- ./ at the end
The double dash -- tells git that everything after the dashes are paths or files, and also in this case tells git checkout not to change the HEAD
Examples:
This command will get just the libs directory and also the readme.txt file from exactly that commit
git --git-dir=/path/to/repo/.git --work-tree=/path/to/checkout/ checkout fef2e1f2de5f3d4f5e87df7d8 -f -q -- ./libs ./docs/readme.txt
This will create (overwrite) my_file_2_behind_HEAD.txt from two commits behind the head, HEAD~2
git --git-dir=/path/to/repo/.git --work-tree=/path/to/checkout/ checkout HEAD~2 -f -q -- ./my_file_2_behind_HEAD.txt
To get the export of another branch
git --git-dir=/path/to/repo/.git --work-tree=/path/to/checkout/ checkout myotherbranch -f -q -- ./
Notice that ./ is relative to the root of the repository
A: I use git-submodules extensively.
This one works for me:
rsync -a ./FROM/ ./TO --exclude='.*'
A: I have hit this page frequently when looking for a way to export a git repository. My answer to this question considers three properties that svn export has by design compared to git, since svn follows a centralized repository approach:
* It minimizes the traffic to a remote repository location by not exporting all revisions
* It does not include meta information in the export directory
* Exporting a certain branch using svn is accomplished by specifying the appropriate path
git clone --depth 1 --branch main git://git.somewhere destination_path
rm -rf destination_path/.git
When building a certain release it is useful to clone a stable branch, as for example --branch stable or --branch release/0.9.
A: If you need submodules as well, this should do the trick: https://github.com/meitar/git-archive-all.sh/wiki
A: This will copy all contents, minus the .dot files. I use this to export git cloned projects into my web app's git repo without the .git stuff.
cp -R ./path-to-git-repo /path/to/destination/
Plain old bash works just great :)
A: As simple as clone then delete the .git folder:
git clone url_of_your_repo path_to_export && rm -rf path_to_export/.git
A: For GitHub users, the git archive --remote method won't work directly, as the export URL is ephemeral. You must ask GitHub for the URL, then download that URL. curl makes that easy:
curl -L https://api.github.com/repos/VENDOR/PROJECT/tarball | tar xzf -
This will give you the exported code in a local directory.
Example:
$ curl -L https://api.github.com/repos/jpic/bashworks/tarball | tar xzf -
$ ls jpic-bashworks-34f4441/
break conf docs hack LICENSE mlog module mpd mtests os README.rst remote todo vcs vps wepcrack
Edit If you want the code put into a specific, existing directory (rather than the random one from github):
curl -L https://api.github.com/repos/VENDOR/PROJECT/tarball | \
    tar xzC /path/you/want --strip 1
A: Yes, this is a clean and neat command to archive your code without any git inclusion in the archive, and it is good to pass around without worrying about any git commit history.
git archive --format zip --output /full/path/to/zipfile.zip master
A: I just want to point out that in the case that you are
* exporting a subfolder of the repository (that's how I used to use the SVN export feature)
* OK with copying everything from that folder to the deployment destination
* and already have a copy of the entire repository in place,
then you can just use cp foo [destination] instead of the mentioned git archive master foo | tar -x -C [destination].
A: I have the following utility function in my .bashrc file: it creates an archive of the current branch in a git repository.
function garchive() {
    if [[ "x$1" == "x-h" || "x$1" == "x" ]]; then
        cat <<EOF
Usage: garchive <archive-name>
create zip archive of the current branch into <archive-name>
EOF
    else
        local oname=$1
        set -x
        local bname=$(git branch | grep -F "*" | sed -e 's#^*##')
        git archive --format zip --output ${oname} ${bname}
        set +x
    fi
}
A: Option 1 doesn't sound too efficient. What if there is no space on the client to do a clone and then remove the .git folder? Today I found myself trying to do this, where the client is a Raspberry Pi with almost no space left. Furthermore, I also want to exclude some heavy folders from the repository. Option 2 and the other answers here do not help in this scenario.
Neither does git archive (because it requires committing a .gitattributes file, and I don't want to save this exclusion in the repository). Here I share my solution, similar to option 3, but without the need for git clone:
tmp=`mktemp`
git ls-tree --name-only -r HEAD > $tmp
rsync -avz --files-from=$tmp --exclude='fonts/*' . raspberry:
Changing the rsync line to an equivalent line that compresses will also work like git archive, but with a sort of exclusion option (as is asked here).
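For the record, the ls-tree trick above also combines nicely with plain tar when rsync isn't available. Here is a self-contained sketch of the same idea: the demo repository, the file names, and the fonts/ exclusion are all invented for illustration, and it assumes git and GNU tar are installed.

```shell
#!/bin/sh
# Sketch: archive only the files git tracks, excluding a heavy directory,
# without a clone and without committing a .gitattributes file.
set -e

# --- throwaway demo repository, just so the sketch runs standalone ---
work=$(mktemp -d)
cd "$work"
git init -q .
git config user.email demo@example.com
git config user.name demo
mkdir fonts
echo "source"     > app.txt
echo "heavy blob" > fonts/huge.bin
echo "scratch"    > untracked.tmp      # never added; ls-tree won't list it
git add app.txt fonts/huge.bin
git commit -qm "initial"

# --- the actual technique: list tracked files, hand them to tar ---
tmp=$(mktemp)
git ls-tree --name-only -r HEAD > "$tmp"
tar -czf export.tgz --exclude='fonts/*' --files-from="$tmp"
rm -f "$tmp"

tar -tzf export.tgz    # only app.txt should make it into the archive
```

The untracked scratch file never appears because git ls-tree only lists committed paths, and the exclusion is applied by tar rather than stored in the repository.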
Q: Cause of No suitable driver found for I'm trying to unit test (JUnit) a DAO i've created. I'm using Spring as my framework, my DAO (JdbcPackageDAO) extends SimpleJdbcDaoSupport. The testing class (JdbcPackageDAOTest) extends AbstractTransactionalDataSourceSpringContextTests. I've overridden the configLocations as follows: protected String[] getConfigLocations(){ return new String[] {"classpath:company/dc/test-context.xml"}; } My test-context.xml file is defined as follows: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd"> <bean id="dataPackageDao" class="company.data.dao.JdbcPackageDAO"> <property name="dataSource" ref="dataSource" /> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:hsql://localhost"/> <property name="username" value="sa" /> <property name="password" value="" /> </bean> <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>company/data/dao/jdbc.properties</value> </list> </property> </bean> <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource" /> </bean> </beans> I'm using HSQL as my backend, it's running in standalone mode. My IDE of choice is eclipse. When I run the class as a JUnit test here's my error (below). I have no clue as to why its happening. hsql.jar is on my build path according to Eclipse. 
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: No suitable driver found for jdbc:hsqldb:hsql://localhost at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:219) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:377) at org.springframework.test.AbstractTransactionalSpringContextTests.startNewTransaction(AbstractTransactionalSpringContextTests.java:387) at org.springframework.test.AbstractTransactionalSpringContextTests.onSetUp(AbstractTransactionalSpringContextTests.java:217) at org.springframework.test.AbstractSingleSpringContextTests.setUp(AbstractSingleSpringContextTests.java:101) at junit.framework.TestCase.runBare(TestCase.java:128) at org.springframework.test.ConditionalTestCase.runBare(ConditionalTestCase.java:76) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) Caused by: java.sql.SQLException: No suitable driver found for jdbc:hsqldb:hsql://localhost at 
java.sql.DriverManager.getConnection(Unknown Source) at java.sql.DriverManager.getConnection(Unknown Source) at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:291) at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:277) at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriverManager(DriverManagerDataSource.java:259) at org.springframework.jdbc.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:241) at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:182) ... 18 more A: Okay so here's the solution. Most everyone made really good points but none solved the problem (THANKS for the help). Here is the solution I found to work. * *Move jars from .../web-inf/lib to PROJECT_ROOT/lib *Alter build path in eclipse to reflect this change. *cleaned and rebuilt my project. *ran the junit test and BOOM it worked! My guess is that it had something to do with how Ganymede reads jars in the /web-inf/lib folder. But who knows... It works now. A: In order to have HSQLDB register itself, you need to access its jdbcDriver class. You can do this the same way as in this example. Class.forName("org.hsqldb.jdbcDriver"); It triggers static initialization of jdbcDriver class, which is: static { try { DriverManager.registerDriver(new jdbcDriver()); } catch (Exception e) {} } A: If you look at your original connection string: <property name="url" value="jdbc:hsqldb:hsql://localhost"/> The Hypersonic docs suggest that you're missing an alias after localhost: http://hsqldb.org/doc/guide/ch04.html A: It looks like you're not specifying a database name to connect to, should go something like jdbc:hsqldb:hsql://serverName:port/DBname A: great I had the similar problem. 
The advice for all is to check the JDBC URL syntax
A: "no suitable driver" usually means that the syntax for the connection URL is incorrect.
A: Can you import the driver (org.hsqldb.jdbcDriver) into one of your source files? (To test that the class is actually on your class path). If you can't import it then you could try including hsqldb.jar in your build path.
A: I had the same problem with spring, commons-dbcp and oracle 10g. Using this URL I got the 'no suitable driver' error: jdbc:oracle:thin@192.168.170.117:1521:kinangop
The above URL is missing a full colon just before the @. After correcting that, the error disappeared.
A: When trying to test datasource connectivity using a static main method, we first need to set up the database connection. We can achieve this in Eclipse as below.
1) Open the IDE (Eclipse or RAD). After opening the workspace, the IDE will by default be opened in the Java perspective. Switch from the Java perspective to the Database perspective in order to create the datasource as well as the virtual database connectivity.
2) In the Database perspective, enter all the details like userName, Password and URL of the particular schema.
3) Then try to run the main method to access the database. This will resolve the "serverName undefined" error.
A: As some answered before, this line of code solved the problem
Class.forName("org.hsqldb.jdbcDriver");
But my app is running in several tomcats, and only in one installation did I have to add this code.
A: It might be that hsql://localhost can't be resolved to a file. Look at the sample program here: Sample HSQLDB program
See if you can get that working first, and then see if you can take that configuration information and use it in the Spring bean configuration. Good luck!
A: I think your HSQL URL is wrong. It should also include the database name, so something like jdbc:hsqldb:hsql://localhost/mydatabase if mydatabase is the name of your DB (file).
Not including this can (I'm not sure if it is the case here) confuse the parsing of the URL, which may lead to the DriverManagerDS thinking that your driver is not suitable (it is found, but it thinks it is not a good one) A: Not sure if it's worth anything, but I had a similar problem where I was getting a "java.sql.SQLException: No suitable driver found" error. I found this thread while researching a solution. The way I ended up solving my problem was to forgo using java.sql.DriverManager to get a connection and instead built up an instance of org.hsqldb.jdbc.jdbcDataSource and used that. The root cause of my problem (I believe) had to do with the classloader hierarchy and the fact that the JRE was running Java 5. Even though I could successfully load the jdbcDriver class, the classloader behind java.sql.DriverManager was higher up, to the point that it couldn't see the hsqldb.jar I needed. Anyway, just putting this note here in case someone else stumbles by with a similar problem. A: I was facing similar problem and to my surprise the problem was in the version of Java. java.sql.DriverManager comes from rt.jar was unable to load my driver "COM.ibm.db2.jdbc.app.DB2Driver". I upgraded from jdk 5 and jdk 6 and it worked. A: In some cases check permissions (ownership).
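Pulling the URL answers together: the dataSource bean from the question only needs a database alias after the host. A sketch of the corrected bean follows — the alias xdb is purely a placeholder, so substitute whatever alias your HSQL server was actually started with:

```xml
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <!-- an alias/database name after the host is the usual fix; "xdb" is an example -->
    <property name="url" value="jdbc:hsqldb:hsql://localhost/xdb"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
</bean>
```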
Q: Using ASP.NET Dynamic Data site on Windows XP IIS? I have a Dynamic Data website built in Visual Studio 2008 using .NET 3.5 SP1. The site works OK on my Vista machine, but I get the following error when running it on a Windows XP machine: Server Error in '/FlixManagerWeb' Application. -------------------------------------------------------------------------------- The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /FlixManagerWeb -------------------------------------------------------------------------------- Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053 I have added the .* -> aspnet_isapi.dll mapping in the site config, made sure that it is an "application," but that did not help. Anyone have any luck running a Dynamic Data website on Windows XP? What (if anything) special is required to get it to work? A: I've been running a Dynamic Data website on Windows XP, without any problems. Nothing special was required to get it to work. Sorry I can't be more helpful! A: IIS 7 handles requests differently than IIS 5/6, and "default" routes are not handled by MVC in the classic mode. While IIS 5/6 will work if you specify a specific page, it will not work out-of-the-box for typical MVC URLs (http://somesite/controller/action/parm). It will only work if a) you include an extension in every request (.aspx or .mvc), or implement a wildcard mapping in IIS to pass EVERY request through the .NET processor. Steve Sanderson has a good writeup on the options available. FYI, we chose the wildcard option
Q: Why do we still program with flat files? Why are flat text files the state of the art for representing source code? Sure - the preprocessor and compiler need to see a flat file representation of the file, but that's easily created. It seems to me that some form of XML or binary data could represent lots of ideas that are very difficult to track, otherwise. For instance, you could embed UML diagrams right into your code. They could be generated semi-automatically, and annotated by the developers to highlight important aspects of the design. Interaction diagrams in particular. Heck, embedding any user drawing might make things more clear. Another idea is to embed comments from code reviews right into the code. There could be all sorts of aids to make merging multiple branches easier. Something I'm passionate about is not just tracking code coverage, but also looking at the parts of code covered by an automated test. The hard part is keeping track of that code, even as the source is modified. For instance, moving a function from one file to another, etc. This can be done with GUIDs, but they're rather intrusive to embed right in the text file. In a rich file format, they could be automatic and unobtrusive. So why are there no IDEs (to my knowledge, anyway) which allow you to work with code in this way? EDIT: On October 7th, 2009. Most of you got very hung up on the word "binary" in my question. I retract it. Picture XML, very minimally marking up your code. The instant before you hand it to your normal preprocessor or compiler, you strip out all of the XML markup, and pass on just the source code. In this form, you could still do all of the normal things to the file: diff, merge, edit, work with in a simple and minimal editor, feed them into thousands of tools. Yes, the diff, merge, and edit, directly with the minimal XML markup, does get a tad more complicated. But I think the value could be enormous. 
If an IDE existed which respected all of the XML, you could add so much more than what we can do today. For instance, your DOxygen comments could actually look like the final DOxygen output. When someone wanted to do a code review, like Code Collaborator, they could mark up the source code, in place. The XML could even be hidden behind comments. // <comment author="mcruikshank" date="2009-10-07"> // Please refactor to Delegate. // </comment> And then if you want to use vi or emacs, you can just skip over the comments. If I want to use a state-of-the-art editor, I can see that in about a dozen different helpful ways. So, that's my rough idea. It's not "building blocks" of pictures that you drag on the screen... I'm not that nuts. :) A: <?xml version="1.0" encoding="UTF-8"?><code>Flat files are easier to read.</code></xml> A: It's a good question. FWIW, I'd love to see a Wiki-style code management tool. Each functional unit would have its own wiki page. The build tools pull together the source code out of the wiki. There would be a "discuss" page linked to that page, where people can argue about algorithms, APIs and such like. Heck, it wouldn't be that hard to hack one up from a pre-existing Wiki implementation. Any takers...? A: Here's why: * *Human readable. That makes a lot easier to spot a mistake, in both the file and the parsing method. Also can be read out loud. That's one that you just cannot get with XML, and might make a difference, specially in customer support. *Insurance against obsolescence. As long as regex exist, it is possible to write a pretty good parser in just a few lines of code. *Leverage. Almost everything there is, from revision control systems to editors to filter, can inspect, merge and operate on flat files. Merging XML can be a mess. *Ability to integrate them rather easily with UNIX tools, such as grep, cut or sed. A: Ironically there ARE programming constructs that use precisely what you describe. 
For example, SQL Server Integration Services, which involve coding logic flow by dragging components onto a visual design surface, are saved as XML files describing precisely that back end. On the other hand SSIS is pretty difficult to source-control. It is also fairly difficult to design any sort of complex logic into it: if you need a little bit more "control", you'll need to code VB.NET code into the component, which brings us back to where we started. I guess that, as a coder, you should consider the fact that for every solution to a problem there are consequences that follow. Not everything could (and some argue, should) be represented in UML. Not everything could be visually represented. Not everything could be simplified enough to have a consistent binary file representation. That being said, I would posit that the disadvantages of relegating code to binary formats (most of which will also tend to be proprietary) far outweigh whatever advantages they offer over plain text. A: People have tried for a long time to create an editing environment that goes beyond the flat file and everyone has failed to some extent. The closest I've seen was a prototype for Charles Simonyi's Intentional Programming but then that got downgraded to a visual DSL creation tool. No matter how the code is stored or represented in memory, in the end it has to be presentable and modifiable as text (without the formatting changing on you) since that's the easiest way we know to express most of the abstract concepts that are needed for solving problems by programming. With flat files you get this for free and any plain old text editor (with the correct character encoding support) will work. A: IMHO, XML and binary formats would be a total mess and wouldn't give any significant benefit. OTOH, a related idea would be to write into a database, maybe one function per record, or maybe a hierarchical structure.
An IDE created around this concept could make navigating source more natural, and easier to hide anything not relevant to the code you're reading at a given moment. A: Steve McConnell has it right, as always - you write programs for other programmers (including yourself), not for computers. That said, Microsoft Visual Studio must internally manage the code you write in a very structured format, or you wouldn't be able to do such things as "Find All References" or rename or re-factor variables and methods so readily. I'd be interested if anyone had links to how this works. A: Actually, roughly 10 years ago, Charles Simonyi's early prototype for intentional programming attempted to move beyond the flat file into a tree representation of code that can be visualized in different ways. Theoretically, a domain expert, a PM, and a software engineer could all see (and piece together) application code in ways that were useful to them, and products could be built on a hierarchy of declarative "intentions", digging down to low-level code only as needed. ETA (per request in the questions) There's a copy of one of his early papers on this at the Microsoft research web site. Unfortunately, since Simonyi left MS to start a separate company several years ago, I don't think the prototype is still available for download. I saw some demos back when I was at Microsoft, but I'm not sure how widely his early prototype was distributed. His company, IntentSoft is still a little quiet about what they're planning to deliver to the market, if anything, but some of the early stuff that came out of MSR was pretty interesting. The storage model was some binary format, but I'm not sure how much of those details were disclosed during the MSR project, and I'm sure some things have changed since the early implementations. A: Old habits die hard, I guess. Until recently, there weren't many good-quality, high-performing, widely-available libraries for general storage of structured data. 
And I would emphatically not put XML in that category even today--too verbose, too intensive to process, too finicky. Nowadays, my favorite thing to use for data that doesn't need to be human-readable is SQLite: just make a database. It's so incredibly easy to embed a full-featured SQL database into any app... there are bindings for C, Perl, Python, PHP, etc... and it's open-source and really fast and reliable and lightweight. I <3 SQLite. A: Why do text files rule? Because of McIlroy's test. It is vital to have the output of one program be acceptable as the source code for another, and text files are the simplest thing that works. A: LabVIEW and Simulink are two graphical programming environments. They are both popular in their fields (interfacing to hardware from a PC, and modeling control systems, respectively), but not used much outside of those fields. I've worked with people who were big fans of both, but never got into them myself. A: In my opinion, any possible benefits are outweighed by being tied to a particular tool. With plain-text source (that seems to be what you're discussing, rather than flat files per se) I can paste chunks into an email, use simple version control systems (very important!), write code into comments on Stack Overflow, use one of a thousand text editors on any number of platforms, etc. With some binary representation of code, I need to use a specialized editor to view or edit it. Even if a text-based representation can be produced, you can't trivially roll back changes into the canonical version. A: Has anyone ever tried Mathematica? The pic above is from an old version but it was the best Google could give me. Anyway... compare the first equation there to Math.Integrate(1/(Math.Pow("x",3)-1), "x") like you would have to write if you were coding with plain text in most common languages. IMO the mathematical representation is much easier to read, and that is still a pretty small equation.
And yes, you can both input and copy-paste the code as plain text if you want. See it as the next generation of syntax highlighting. I bet there are a lot of things other than math that could benefit from this kind of representation. A: * *you can diff them *you can merge them *anyone can edit them *they are simple and easy to deal with *they are universally accessible to thousands of tools A: Smalltalk is an image-based environment. You are no longer working with code in a file on disk. You are working with and modifying the real objects at runtime. It still is text, but classes are not stored in human-readable files. Instead the whole object memory (the image) is stored in a file in binary format. But the biggest complaint from those trying out Smalltalk is that it doesn't use files. Most of the file-based tools that we have (vim, emacs, eclipse, vs.net, unix tools) will have to be abandoned in favor of Smalltalk's own tools. Not that the tools provided in Smalltalk are inferior. They are just different. A: Why are essays written in text? Why are legal documents written in text? Why are fantasy novels written in text? Because text is the single best form - for people - of persisting their thoughts. Text is how people think about, represent, understand, and persist concepts - and their complexities, hierarchies, and interrelationships. A: Lisp programs are not flat files. They are serializations of data structures. This code-as-data is an old idea, and actually one of the greatest ideas in computer science. A: You mention that we should use "some form of XML"? What do you think XHTML and XAML are? Also, XML is still just a flat file. A: The trend we are seeing toward DSLs is the first thing that comes to mind when reading your question. The problem has been that there does not exist a 1-to-1 relationship between models (like UML) and an implementation.
Microsoft, among others, is working on getting there, so that you can create your app as something UML-like, then code can be generated. And the important thing - as you opt to change your code, the model will reflect this again. Windows Workflow Foundation is a pretty good example. Of course there are flat files and/or XML in the background, but you usually end up defining your business logic in the orchestration tool. And that is pretty cool! We need more of the "software factories" thinking, and will see a richer IDE experience in the future, but as long as computers run on zeroes and ones, flat text files can and (probably) will always be an intermediate stage. As stated by several people already, simple text files are very flexible. A: It's pretty obvious why plain text is king. But it is equally obvious why a structured format would be even better. Just one example: If you rename a method, your diff/merge/source control tool would be able to tell that only one thing had changed. The tools we use today would show a long list of changes, one for every place and file that the method was called or declared. (By the way, this post doesn't answer the question as you might have noticed) A: I've wistfully wondered the same thing, as described in the answer to: What tool/application/whatever do you wish existed? While it's easy to imagine a great number of benefits, I think the biggest hurdle that would have to be addressed is that no one has produced a viable alternative. When people think of alternatives to storing source as text they seem to often immediately think in terms of graphical representations (I'm referring here to the commercial products that have been available - e.g. HP VEE). And if we look at the experience of people like the FPGA designers, we see that programming (exclusively) graphically just doesn't work - hence languages like Verilog and VHDL.
But I don't see that the storage of source necessarily needs to be bound to the method of writing it in the first place. Entry of source can be largely done as text - which means that the issues of copying/pasting can still be achieved. But I also see that by allowing merges and rollbacks to be done on the basis of tokenised meta-source we could achieve more accurate and more powerful manipulation tools. A: For an example of a language that does away with traditional text programming, see the Lava Language. Another nifty thing I just recently discovered is subtext2 (video demo). A: Visual FoxPro uses dbf table structures to store code and metadata for forms, reports, class libs, etc. These are binary files. It also stores code in prg files that are actual text files... The only advantage I see is being able to use the built-in VFP data language to do code searches on those files... other than that it is a liability IMO. At least once every few months, one of these files will become corrupted for no apparent reason. Integration with source control and diffs is very painful as well. There are workarounds for this, but they involve converting the file to text temporarily! A: Who works with flat files? Eclipse gives you views into your source so that I can see inner classes, methods and data, all sorted and grouped. If I want to edit the inner class, I click on it. While technically there is a flat file underlying it, I almost never navigate it like that. A: The code of your program defines the structure that would be created with XML or a binary format. Your programming language is a more direct representation of your program's structure than an XML or binary representation would be. Have you ever noticed how Word misbehaves on you as you give structure to your document? WordPerfect at least would 'reveal codes' to allow you to see what lay beneath your document. Flat files do the same thing for your program. A: Neat ideas. I have wondered myself, on a smaller scale ...
much smaller, why can't IDE X generate this or that. I don't know if I am capable as a programmer yet to develop something as cool and complex as what you're talking about, or what I am thinking about, but I would be interested in trying. Maybe start out with some plugins for .NET, Eclipse, NetBeans, and so on? Show off what can be done, and start a new trend in coding. A: I think another aspect of this is that the code is what is important. It is what is going to be executed. For example, in your UML example, I would think that having UML (presumably created in some editor, not directly related to the "code") included in your "source blob" would be almost useless. Much better would be to have the UML generated directly from your code, so it describes the exact state the code is in as a tool for understanding the code, rather than as a reminder of what the code should have been. We've been doing this for years regarding automated doc tools. While the actual programmer-generated comments in the code might get out of sync with the code, tools like JavaDoc and the like faithfully represent the methods on an object, return types, arguments, etc. They represent them as they actually exist, not as some artifact that came out of endless design meetings. It seems to me that if you could arbitrarily add random artifacts to some "source blob", these would likely be out of date and less than useful right away. If you can generate such artifacts directly from the code, then the small effort to get your build process to do so is vastly better than the previously mentioned pitfalls of moving away from plain text source files. Related to this, an explanation of why you'd want to use a plain-text UML tool (UMLGraph) seems to apply nearly equally as well to why you want plain-text source files.
A: This might not answer exactly your question, but here is an editor that allows a higher-level view of code: http://webpages.charter.net/edreamleo/front.html A: I think the reason why text files are used in development is that they are universal across various development tools. You can look inside or even fix some errors using a simple text editor (you can't do that in a binary file because you never know how any fix would destroy other data). It doesn't mean, however, that text files are best for all those purposes. Of course, you can diff and merge them. But it doesn't mean that the diff/merge tool understands the distinct structure of the data encoded by this text file. You can do the diff/merge, but (especially seen in XML files) the diff tool won't show you the differences correctly; that is, it will show you where the files differ and which parts of the data the tool "thinks" are the same. But it will not show you the differences in the structure of the XML file - it will just match lines that look the same. Regardless of whether we're using binary files or text files, it's always better that the diff/merge tools take care of the data structure this file represents rather than the lines and characters. For C++ or Java files, for example, such a tool would report that some identifier changed its name, or that some section was surrounded with an additional if(){}, but, on the other hand, ignore changes in indents or EOL characters. The best approach would be that a file is read into internal structures and dumped using specific format rules. This way the diffing would be done on the internal structures, and the merge result would be generated from the merged internal structure. A: Modern programs are composed of flat pieces, but are they flat? There are usings, and includes, and libraries of objects, etc. An ordinary function call is a peek into a different place. The logic isn't flat, due to having multiple threads, etc. A: I have the same vision! I really wish this existed.
You might want to take a look at Fortress, a research language by Sun. It has special support for formulas in source code. The quote below is from Wikipedia: Fortress is being designed from the outset to have multiple syntactic stylesheets. Source code can be rendered as ASCII text, in Unicode, or as a prettied image. This will allow for support of mathematical symbols and other symbols in the rendered output for easier reading. The major reason for the persistence of text as source is the lack of power tools, e.g. version control, for non-text data. This is based on my experience working with Smalltalk, where plain byte-code is kept in a core dump the whole time. In a non-text system, with today's tools, team development is a nightmare. A: One thing not touched on is that some languages have the concept of a source file built in with respect to things like variable scoping. Changing to something else (like storing functions in a database) would require you to alter the language itself. A: While having a drink one night with my friends (programmers too), one of them told me that they use UML to generate code. But he said that they still need to manually edit the generated code; there are some problem domains that can't be easily described with UML. With all the LINQ goodness, lambdas and all, some problem domains cannot be represented by UML, and we still need to make our way around the generated code for the computer to do our bidding. How could we represent in UML, let alone XML, the following problem? LINQ to SQL using GROUP BY and COUNT(DISTINCT) The number of answers to that simple problem makes it very clear that UML, SQL (the most important assembly language, whatever those ORM guys tell you otherwise), and XML are not an XOR proposition. We will still use combinations of these technologies, not just one of them to the exclusion of others.
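The structure-aware diffing suggested a couple of answers back (report a rename, ignore indentation and EOL changes) can be hinted at with a token-level comparison. This is only an illustration of the idea, with made-up function names, not a real syntax-aware diff:

```javascript
// Compare two code snippets token-by-token so that whitespace-only
// edits (reindenting, re-wrapping lines) are not reported as changes.
function tokenize(source) {
  // identifiers, numbers, or single punctuation characters
  return source.match(/[A-Za-z_]\w*|\d+|[^\s\w]/g) || [];
}

function tokensDiffer(a, b) {
  var ta = tokenize(a), tb = tokenize(b);
  if (ta.length !== tb.length) return true;
  return ta.some(function (tok, i) { return tok !== tb[i]; });
}
```

With this, reformatting a snippet compares as unchanged, while a renamed identifier is flagged; a real tool would diff full parse trees rather than token streams.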
A: It's still flat files because maybe that's how they can sell software tools :D Source code should itself be object oriented, that is, encapsulated as members. There is only one product I know of that does so; it has existed for a very long time (since Windows 3.0) and was designed by Paul Allen himself. It was originally inspired by HyperCard on the Mac, but as Bill Gates put it: http://community.seattletimes.nwsource.com/archive/?date=19900522&slug=1073140 ``It's generations beyond HyperCard,'' says Gates. Unfortunately they didn't target the right people: In pursuing (interests of) software developers,'' says Alsop, Asymetrix may have made ToolBook too complex for the little guy.'' They should have targeted professional programmers instead of hobbyists. Even today, on a conceptual level, it's still beyond other languages, except Rebol of course ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/160633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: FitNesse Wiki Per Project or Company? For a company with many development projects, should you create multiple FitNesse wikis (one for each project/product) or contain them all within a large wiki for everything? The advantages of one large wiki are the ability to easily link to other products in the company and that it is a one-stop location for all the FitNesse tests. Alternatively, the advantages of multiple wikis are that it becomes easier to divide automation of the tests across multiple servers, and easier to branch the wiki along with a project branch / tag. I'm interested in the advantages and disadvantages of these two possibilities or a well-thought-out alternative (e.g. not "just combine the wiki roots together somehow"). A: It seems it all boils down to an administration problem: How many servers are available out there and who manages them? One central server means a server able to take the load, both in terms of requests (all developers from all projects can make many queries on a fairly regular basis), and in terms of "on-server" FitNesse-related computations. Another criterion is visibility. Do your development projects need to see each other's FitNesse indicators? For "political" reasons, some of those indicators may not always be welcome to be seen at all times by the rest of the projects! Some project managers might want to keep them close to the vest and control their official communication. Actually, it is for the latter reason our FitNesse wikis are managed by each team, more as an internal tool. We have another global wiki (based on Confluence) for managing global documentation for each project. Those common wikis may extract some of the internal FitNesse wiki data.
{ "language": "en", "url": "https://stackoverflow.com/questions/160635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Run custom JavaScript whenever a client-side ASP.NET validator is triggered? Is there a way to run some custom JavaScript whenever a client-side ASP.NET validator (RequiredFieldValidator, RangeValidator, etc.) is triggered? Basically, I have a complicated layout that requires I run a custom script whenever a DOM element is shown or hidden. I'm looking for a way to automatically run this script when a validator is displayed. (I'm using validators with Display="dynamic") A: See this comment for how I managed to extend the ASP.Net client side validation. Others have managed to extend it using server side techniques. A: The best solution I've identified for my specific situation is this: * *Create a global JS data structure mapping control IDs to a visibility state. *Register the client IDs of the validators (or anything else, for that matter) in this data structure. *Every 250 milliseconds, loop through the global data structure and compare the cached visibility state with the element's current state. If the states are different, update the cache and run the custom resize script. This is ugly in lots of ways, and it's only a solution for my specific scenario, not the abstract case where we want to piggyback arbitrary code onto the showing/hiding of a validator. I'd love a better suggestion! A: I'm not sure if I got your question right, but here goes... you can add a custom validator (or maybe handle the onblur event). In your JavaScript custom validation, you can call Page_ClientValidate() and check Page_IsValid for errors. Something like the code below:
function customValidation() {
    Page_ClientValidate();
    if (!Page_IsValid) {
        // run your resize script
    }
}
HTH,
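The polling workaround described in the second answer can be sketched as below. The cache-diffing step is pulled into a standalone helper so it can be exercised without a DOM; names like `runResizeScript` are placeholders for your own code, not ASP.NET APIs:

```javascript
// Given the cached visibility map and a function reporting current
// visibility, return the ids whose state changed and update the cache.
function findVisibilityChanges(cache, isVisibleNow) {
  var changed = [];
  for (var id in cache) {
    var nowVisible = isVisibleNow(id);
    if (nowVisible !== cache[id]) {
      cache[id] = nowVisible; // update the cached state
      changed.push(id);
    }
  }
  return changed;
}

// In the page, wire it to the DOM roughly like this:
//   var visibilityCache = { someValidatorClientId: false };
//   setInterval(function () {
//     findVisibilityChanges(visibilityCache, function (id) {
//       var el = document.getElementById(id);
//       return !!el && el.style.display !== "none";
//     }).forEach(runResizeScript); // your custom resize script
//   }, 250);
```

As the answer notes, this is a blunt instrument: it fires at most every 250 ms and detects only the show/hide transition, not the validation event itself.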
{ "language": "en", "url": "https://stackoverflow.com/questions/160650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: HTTPHandler to Retrieve Download File from another server? I would like to provide downloadable files to website users, but want to hide the URL of the files from the user... I'm thinking an HTTPHandler could do the trick, but is it possible to retrieve a file from an external server and stream it to the user? Perhaps somebody can give me a hint at how to accomplish this, or point me to a resource where it's been done before? Just to elaborate on what I'm trying to achieve... I'm building an ASP.NET website, which contains a music download link. I want to protect the actual URLs of the file, and I also want to store them on an external (PHP) server (MUCH MUCH cheaper)... So what I need to do is set up a stream that can grab the file from a URL (points to another server), and stream it to the Response object without the user realising it's coming from another server. Will the TransmitFile method allow streaming of a file from a completely separate server? I don't want the file to be streamed "through" my server, as that defeats the purpose (saving bandwidth)... I want the client (browser) to download the file directly from the other server. Do I need a handler on the file hosting server perhaps? Maybe a PHP script on the other end is the way to go...? A: I recommend you look at the TransmitFile method: http://msdn.microsoft.com/en-us/library/12s31dhy.aspx A: With your clarification of wanting the bandwidth to come from the external server and not yours, it changes the question quite a bit. In order to accomplish that, the external server would have to have a website on it you could send the user to. You cannot stream the file through your site without getting hit with the bandwidth, or control it from your site while it is streamed through the other server, so it must be handled completely by the other site. The problem with that is that a normal URL-based approach would show the user the URL, which violates your second requirement that it not show the URL.
But, couldn't you just have a generic page that serves the files on the external site, and the specifics on which file to stream would be passed through a post from the page on the original site? That would remove the URL pointing to a specific file. It would show the domain, but users would not be able to pull files without knowing the post fields. This would not need to be an HTTPHandler, just a normal page. A: Yes, you can stream from a remote stream (download from another server) to the output stream. Assume serviceUrl is the location of the file to stream:
HttpWebRequest webrequest = (HttpWebRequest)WebRequest.Create(serviceUrl);
webrequest.AllowAutoRedirect = false;
webrequest.Timeout = 30 * 1000;
webrequest.ReadWriteTimeout = 30 * 1000;
webrequest.KeepAlive = false;
Stream remoteStream = null;
byte[] buffer = new byte[4 * 1024];
int bytesRead;
try
{
    WebResponse responce = webrequest.GetResponse();
    remoteStream = responce.GetResponseStream();
    bytesRead = remoteStream.Read(buffer, 0, buffer.Length);
    Server.ScriptTimeout = 30 * 60;
    Response.Buffer = false;
    Response.BufferOutput = false;
    Response.Clear();
    Response.ContentType = "application/octet-stream";
    Response.AppendHeader("Content-Disposition", "attachment; filename=" + Uid + ".EML");
    if (responce.ContentLength != -1)
        Response.AppendHeader("Content-Length", responce.ContentLength.ToString());
    while (bytesRead > 0 && Response.IsClientConnected)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        bytesRead = remoteStream.Read(buffer, 0, buffer.Length);
    }
}
catch (Exception E)
{
    Logger.LogErrorFormat(LogModules.DomainUsers, "Error transfering message from remote host: {0}", E.Message);
    Response.End();
    return;
}
finally
{
    if (remoteStream != null)
        remoteStream.Close();
}
Response.End();
A: I've done this before. First, and obviously, the files have to be in a share on the external server that the user process of the website has access to.
As far as the HTTPHandler goes, I handled this by giving the users zip files containing the files they want to download; this way my handler could intercept any call for .zip files and stream them the zip file I create. Here's the code (quite a chunk; I use MVP, so it is split into Handler and Presenter):
Handler:
public class ZipDownloadModule : IHttpHandler, ICompressFilesView, IErrorView
{
    CompressFilesPresenter _presenter;

    public ZipDownloadModule()
    {
        _presenter = new CompressFilesPresenter(this, this);
    }

    #region IHttpHandler Members

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        OnDownloadFiles();
    }

    private void OnDownloadFiles()
    {
        if (Compress != null)
            Compress(this, EventArgs.Empty);
    }

    #endregion

    #region IFileListDownloadView Members

    public IEnumerable<string> FileNames
    {
        get
        {
            string files = HttpContext.Current.Request["files"] ?? string.Empty;
            return files.Split(new Char[] { ',' });
        }
    }

    public System.IO.Stream Stream
    {
        get
        {
            HttpContext.Current.Response.ContentType = "application/x-zip-compressed";
            HttpContext.Current.Response.AppendHeader("Content-Disposition", "attachment; filename=ads.zip");
            return HttpContext.Current.Response.OutputStream;
        }
    }

    public event EventHandler Compress;

    #endregion

    #region IErrorView Members

    public string errorMessage
    {
        set { }
    }

    #endregion
}
Presenter:
public class CompressFilesPresenter : PresenterBase<ICompressFilesView>
{
    IErrorView _errorView;

    public CompressFilesPresenter(ICompressFilesView view, IErrorView errorView) : base(view)
    {
        _errorView = errorView;
        this.View.Compress += new EventHandler(View_Compress);
    }

    void View_Compress(object sender, EventArgs e)
    {
        CreateZipFile();
    }

    private void CreateZipFile()
    {
        MemoryStream stream = new MemoryStream();
        try
        {
            CreateZip(stream, this.View.FileNames);
            WriteZip(stream);
        }
        catch (Exception ex)
        {
            HandleException(ex);
        }
    }

    private void WriteZip(MemoryStream stream)
    {
        byte[] data = stream.ToArray();
        this.View.Stream.Write(data, 0, data.Length);
    }

    private void CreateZip(MemoryStream stream, IEnumerable<string> filePaths)
    {
        using (ZipOutputStream s = new ZipOutputStream(stream)) // this.View.Stream))
        {
            s.SetLevel(9); // 0 = store only to 9 = best compression
            foreach (string fullPath in filePaths)
                AddFileToZip(fullPath, s);
            s.Finish();
        }
    }

    private static void AddFileToZip(string fullPath, ZipOutputStream s)
    {
        byte[] buffer = new byte[4096];
        ZipEntry entry;
        // Using GetFileName makes the result compatible with XP
        entry = new ZipEntry(Path.GetFileName(fullPath));
        entry.DateTime = DateTime.Now;
        s.PutNextEntry(entry);
        using (FileStream fs = File.OpenRead(fullPath))
        {
            int sourceBytes;
            do
            {
                sourceBytes = fs.Read(buffer, 0, buffer.Length);
                s.Write(buffer, 0, sourceBytes);
            } while (sourceBytes > 0);
        }
    }

    private void HandleException(Exception ex)
    {
        switch (ex.GetType().ToString())
        {
            case "DirectoryNotFoundException":
                _errorView.errorMessage = "The expected directory does not exist.";
                break;
            case "FileNotFoundException":
                _errorView.errorMessage = "The expected file does not exist.";
                break;
            default:
                _errorView.errorMessage = "There has been an error. If this continues please contact AMG IT Support.";
                break;
        }
    }

    private void ClearError()
    {
        _errorView.errorMessage = "";
    }
}
Hope this helps!! A: Okay, it seems my quest to avoid writing/deploying some PHP code is in vain... here's what I'm going to run with on the file (PHP) server: http://www.zubrag.com/scripts/download.php Then the links from my ASP.NET web server will point to that script, which will then download the relevant file (hence avoiding direct downloads, and allowing tracking of downloads via Google Analytics)... I think that'll do the trick. Thanks all, Greg
{ "language": "en", "url": "https://stackoverflow.com/questions/160651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: HTML - display an image as large as possible while preserving aspect ratio I'd like to have an HTML page which displays a single PNG or JPEG image. I want the image to take up the whole screen, but when I do this:
<img src="whatever.jpeg" width="100%" height="100%" />
it just stretches the image and messes up the aspect ratio. How do I solve this so the image has the correct aspect ratio while scaling to the maximum size possible? The solution posted by Wayne almost works except for the case where you have a tall image and a wide window. This code is a slight modification of his code which does what I want:
<html>
<head>
<script>
function resizeToMax(id) {
    myImage = new Image();
    var img = document.getElementById(id);
    myImage.src = img.src;
    if (myImage.width / document.body.clientWidth > myImage.height / document.body.clientHeight) {
        img.style.width = "100%";
    } else {
        img.style.height = "100%";
    }
}
</script>
</head>
<body>
<img id="image" src="test.gif" onload="resizeToMax(this.id)">
</body>
</html>
A: You don't necessarily want to stretch in a certain direction based on which is bigger. For example, I have a widescreen monitor, so even if it's a wider image than it is tall, stretching it left-to-right may still clip the top and bottom edges off. You need to calculate the ratio between the window width and height and the image width and height. The smaller one is your controlling axis - the other is dependent. This is true even if both axes are larger than the respective window length.
<script type="text/javascript">
// <![CDATA[
function resizeToMax(id) {
    var img = document.getElementById(id);
    myImage = new Image();
    myImage.src = img.src;
    if (window.innerWidth / myImage.width < window.innerHeight / myImage.height) {
        img.style.width = "100%";
    } else {
        img.style.height = "100%";
    }
}
// ]]>
</script>
A: It is also possible to do this with pure CSS using a background image and the background-size:contain property:
<head>
<style>
#bigPicture {
    width: 100%;
    height: 100%;
    background: url(http://upload.wikimedia.org/wikipedia/commons/4/44/CatLolCatExample.jpg);
    background-size: contain;
    background-repeat: no-repeat;
    background-position: center;
}
</style>
</head>
<body style="margin:0px">
<div id="bigPicture">
</div>
</body>
This has the benefit of automatically updating if the container changes aspect ratios, without having to respond to resize events (the JavaScript methods, as coded here, can result in cutting off the image when the user resizes the browser). The <embed> method has the same benefit, but CSS is much smoother and has no issues with security warnings. Caveats: * *No <img> element means no context menu and no alt text. *IE support for background-size:contain is 9+ only, and I couldn't even get this to work in IE9 (for unknown reasons). *It seems like all the background-* properties have to be specified in the same CSS block as the background image, so multiple images on the same page will each need their own contain, no-repeat, and center. A: Try this: <img src="whatever.jpeg" width="100%" height="auto" /> A: To piggyback on Franci Penov, yes, you just want to set one of them. If you have a wide picture, you want to set width to 100% and leave the height alone. If you have a tall picture, you want to set height to 100% and leave the width alone.
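The controlling-axis rule from the answers above can be written as a small pure function; this is a sketch (the name fitToWindow is mine, not from any answer) of the arithmetic the resizeToMax variants perform with CSS percentages:

```javascript
// Scale an image to the largest size that fits the window while
// preserving its aspect ratio. The axis with the smaller
// window/image ratio is the controlling one.
function fitToWindow(imgW, imgH, winW, winH) {
  var scale = Math.min(winW / imgW, winH / imgH);
  return {
    width: Math.round(imgW * scale),
    height: Math.round(imgH * scale)
  };
}
```

A wide image in a wide-enough window is limited by width, a tall image by height, which matches the modified resizeToMax in the question.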
A: The easiest way to do so (if you don't need to support IE) is setting the object-fit CSS property to contain:
img { object-fit: contain; }
See also: * *https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit *https://caniuse.com/#search=object-fit A: Here's a quick function that will adjust the height or width to 100% depending on which is bigger. Tested in FF3, IE7 & Chrome:
<html>
<head>
<script>
function resizeToMax(id) {
    myImage = new Image();
    var img = document.getElementById(id);
    myImage.src = img.src;
    if (myImage.width > myImage.height) {
        img.style.width = "100%";
    } else {
        img.style.height = "100%";
    }
}
</script>
</head>
<body>
<img id="image" src="test.gif" onload="resizeToMax(this.id)">
</body>
</html>
A: For this, JavaScript is your friend. What you want to do is, on page load, walk through the DOM, and for every image (or alternatively, pass a function an image id if it's just a single image) check which attribute of the image is greater, its height or width. This is the IMAGE itself, not the tag. Once you have that, set the corresponding height/width on the tag to 100% and the other to auto. Some helpful code--all from off the top of my head, so your mileage may vary on the syntax:
var imgTag = $('myImage');
var imgPath = imgTag.src;
var img = new Image();
img.src = imgPath;
var mywidth = img.width;
var myheight = img.height;
As an aside, this would be a much easier task on the server side of things. On the server, you could literally change the size of the image that's getting streamed down to the browser. A: Tested on IE and Firefox, plus the first line will center the image:
<div align="center">
<embed src="image.gif" height="100%">
... also great to preserve aspect ratio with any other size value, so no more annoying calculations =)
{ "language": "en", "url": "https://stackoverflow.com/questions/160666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How do you know if the HTTP compression is working? How do you know if the HTTP compression setup is working? Is there any tool I can use to see the compressed page before it is uncompressed by the browser? Are there any tools to measure the amount compressed and response speed? A: As well as something like Fiddler to look at the HTTP-level traffic, you can use Firefox with the Firebug and YSlow add-ons. YSlow gives you a lot of useful analysis about why your page might be slow - among these, it gives you the size of the various assets that your request downloads (HTML, CSS, JavaScript, images and other media etc). You can compare the size of pages with and without compression - if the HTML is smaller with the compression turned on, you know it's working. It will also give you values with an empty cache and a primed cache, allowing you to see how much you're saving for both new visitors and returning visitors. A: You can use this website: http://whatsmyip.org/mod_gzip_test/ A: Wireshark, formerly Ethereal, has proved to be the most valuable tool for me. Just choose a network adapter (if there are many), type "tcp port 80" into the filter field, press Capture - and you're all set. A: The easiest, quickest thing is to take a look at the Developer Tools Network tab and see if the Content and Size values for each request are different. If the values differ, then compression is working. Divide size by content to get your compression ratio. I'm not sure how long these values have been visible, but they're there in Chrome 53. Not sure on Firefox. A: For Firefox, have a look at these add-ons: * *Firebug *HttpFox Both can be used to monitor your traffic to/from the browser (you can see the size of each response). I especially like HttpFox, a really nice add-on I use every day. A: This isn't IIS-specific, but you can use cURL: curl -H 'Accept-Encoding: gzip,deflate' -D - http://example.com Then look for a Content-Encoding: gzip header in the output.
A: Use Fiddler to spy on your HTTP transmissions. "Build Request" (create an HTTP GET) and check the "Content-Encoding" header of the HTTP response for your uncompressed page, and check its "Content-Length". Compare those to the same values for your compressed page. "Content-Encoding" should be something like "gzip" for compressed responses, and your "Content-Length" should be shorter. You can use the "Content-Length" fields from both to determine the compression ratio. A: For Windows, I highly recommend Fiddler, which is a client-side tool that proxies your web traffic and lets you examine it. It will show you if compression is on and working. It is also useful for many other client-side HTTP-related debugging and diagnosis tasks. A: If you have Chrome, press F12 and then navigate to the site. Once the site loads, go to the Network tab. Click on the file you are looking at, then find the Response Headers section under Headers and look for the content-encoding entry. Look at the picture below for an example. To see how much data is transferred, hover with the mouse over the Size column in the Network tab. It shows the full file size as well as the size of the data transferred over the network. An example is below; see the tool-tip that shows this info. A: If you want to go really low tech, you can telnet to the HTTP port (80?) on the target server and type in the request manually. If you get plain text back, then it's not gzipped, but if you get gibberish then you're onto something. If you need to see the structure of the headers, you can copy them from Firefox using something like the Live HTTP Headers extension. A: The easiest way is to use this: http://www.whatsmyip.org/http-compression-test/ A: In the Chrome Developer Tools, you can add the response header of your choice—content-encoding in your case—in the columns of the Network tab. Simply right click on a request, then click Header Options, Response Headers, and then select Content-Encoding.
Once you've done that, you'll be able to see the content-encoding in the Network tab without needing to click on any individual request. A: We searched around a bit. Apparently, there are a lot of sites which can verify that our pages are compressed.
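Several of the answers above compare compressed and uncompressed sizes to get a compression ratio. If you just want a feel for how much gzip typically saves on markup, you can measure it locally with a short, self-contained Python sketch (no server or browser needed; the sample HTML string is made up for illustration):

```python
import gzip

def compression_ratio(raw: bytes) -> float:
    """Compressed size divided by raw size; smaller means better savings."""
    return len(gzip.compress(raw)) / len(raw)

# Repetitive markup, like typical HTML, compresses very well.
html = b"<div class='row'><span>item</span></div>" * 200
print(f"raw={len(html)} bytes, ratio={compression_ratio(html):.2f}")
```

This mirrors what the browser tools report: the transferred (compressed) size over the full resource size.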
{ "language": "en", "url": "https://stackoverflow.com/questions/160691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Syntax highlighting code with Javascript What Javascript libraries can you recommend for syntax highlighting <code> blocks in HTML? (One suggestion per answer please.) A: How about: * *SyntaxHighlighter *highlight.js *JSHighlighter A: If you're using jQuery there's Chilli: http://code.google.com/p/jquery-chili-js/ All you have to do is include jquery-chili.js and recipes.js, and do the highlight with $("code").chili(); It should figure out the language by itself. A: I'm very happy with SHJS. It supports a bevy of languages and seems pretty fast and accurate. Here's an example where I use it on my blog. I'm using my own custom CSS file that simulates Coda's syntax highlighting. Email me if you'd like to use it. A: I recently developed one called rainbow. The main design goal was to make the core library really small and make it really easy for developers to extend. See http://rainbowco.de. A: jQuery.Syntax is an extremely fast and lightweight syntax highlighter. It has dynamic loading of syntax source files and integrates cleanly using CSS or modelines. It was developed specifically to fill a gap - that is: a fast, clean, client-side syntax parser. A: I'm not being argumentative, but I thought it worth mentioning that if you're using a CMS or blog platform then a backend highlighter is better for obvious reasons — have a look at GeSHi (http://qbnz.com/highlighter/) if you're interested. Actually, you could set up your server to parse HTML content through a backend technology — so there is no need for the JS highlighters at all. (The only functionality they add is the ability to print/copy [using swf].) A: If you are looking for syntax highlighting in an in-browser editor, try CodeMirror. A: SyntaxHighlighter is available as a GitHub project. A: SyntaxHighlighter A: This article at the Web Resources Depot lists a bunch of options for highlighting code, some of which use Javascript. It was published on 4th May 2009. A: What about Prism by Lea Verou.
From her blog post announcement in June (2012): * *It’s tiny. The core is only 1.5KB minified & gzipped. *It’s incredibly extensible. Not only is it easy to add new languages (that’s a given with every syntax highlighter these days), but also to extend existing ones. *It supports parallelism through Web Workers, for better performance in certain cases. *It doesn’t force you to use any Prism-specific markup, not even a Prism-specific class name, only standard markup you should be using anyway. So, you can just try it for a while, remove it if you don’t like it and leave no traces behind. A: jQuery Syntax Highlighter is a new one based on Google's Prettify - a really, really popular plain javascript syntax highlighter. It supports such things as code and pre blocks, is able to use class names like language-javascript to indicate what we want it to highlight, and supports word wrap. You can copy and paste code by selecting it normally, instead of having to open a raw view like many others. It can be further customised by using the HTML5 data attribute data-sh or via specifying options at initialisation. A great, stable choice which is updated regularly. A: StackOverflow uses the Prettify library.
{ "language": "en", "url": "https://stackoverflow.com/questions/160694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Is it good practice to use the xor operator for boolean checks? I personally like the exclusive or, ^, operator when it makes sense in the context of boolean checks because of its conciseness. I much prefer to write if (boolean1 ^ boolean2) { //do it } than if((boolean1 && !boolean2) || (boolean2 && !boolean1)) { //do it } but I often get confused looks from other experienced Java developers (not just the newbies), and sometimes comments about how it should only be used for bitwise operations. I'm curious as to the best practices regarding the usage of the ^ operator. A: if((boolean1 && !boolean2) || (boolean2 && !boolean1)) { //do it } IMHO this code could be simplified: if(boolean1 != boolean2) { //do it } A: With code clarity in mind, my opinion is that using XOR in boolean checks is not typical usage for the XOR bitwise operator. From my experience, bitwise XOR in Java is typically used to implement a mask flag toggle behavior: flags = flags ^ MASK; This article by Vipan Singla explains the usage case in more detail. If you need to use bitwise XOR as in your example, comment why you use it, since it's likely to require even a bitwise-literate audience to stop in their tracks to understand why you are using it. A: You can simply use != instead. A: I think you've answered your own question - if you get strange looks from people, it's probably safer to go with the more explicit option. If you need to comment it, then you're probably better off replacing it with the more verbose version and not making people ask the question in the first place. A: I find that I have similar conversations a lot. On the one hand, you have a compact, efficient method of achieving your goal. On the other hand, you have something that the rest of your team might not understand, making it hard to maintain in the future. My general rule is to ask if the technique being used is something that it is reasonable to expect programmers in general to know. 
In this case, I think that it is reasonable to expect programmers to know how to use boolean operators, so using xor in an if statement is okay. As an example of something that wouldn't be okay, take the trick of using xor to swap two variables without using a temporary variable. That is a trick that I wouldn't expect everybody to be familiar with, so it wouldn't pass code review. A: I think it'd be okay if you commented it, e.g. // ^ == XOR. A: You could always just wrap it in a function to give it a verbose name: public static boolean XOR(boolean A, boolean B) { return A ^ B; } But, it seems to me that it wouldn't be hard for anyone who didn't know what the ^ operator is for to Google it really quick. It's not going to be hard to remember after the first time. Since you asked for other uses, it's common to use XOR for bit masking. You can also use XOR to swap the values in two variables without using a third temporary variable. // Swap the values in A and B A ^= B; B ^= A; A ^= B; Here's a Stack Overflow question related to XOR swapping. A: I personally prefer the "boolean1 ^ boolean2" expression due to its succinctness. If I was in your situation (working in a team), I would strike a compromise by encapsulating the "boolean1 ^ boolean2" logic in a function with a descriptive name such as "isDifferent(boolean1, boolean2)". For example, instead of using "boolean1 ^ boolean2", you would call "isDifferent(boolean1, boolean2)" like so: if (isDifferent(boolean1, boolean2)) { //do it } Your "isDifferent(boolean1, boolean2)" function would look like: private boolean isDifferent(boolean boolean1, boolean boolean2) { return boolean1 ^ boolean2; } Of course, this solution entails the use of an ostensibly extraneous function call, which in itself is subject to Best Practices scrutiny, but it avoids the verbose (and ugly) expression "(boolean1 && !boolean2) || (boolean2 && !boolean1)"! A: != is OK to compare two variables. It doesn't work, though, with multiple comparisons. 
A: str.contains("!=") ^ str.startsWith("not(") looks better to me than str.contains("!=") != str.startsWith("not(") A: If the usage pattern justifies it, why not? While your team may not recognize the operator right away, with time they could. Humans learn new words all the time. Why not in programming? The only caution I might state is that "^" doesn't have the short-circuit semantics of your second boolean check. If you really need the short-circuit semantics, then a static util method works too. public static boolean xor(boolean a, boolean b) { return (a && !b) || (b && !a); } A: As a bitwise operator, xor is much faster than any other means to replace it. So for performance-critical and scalable calculations, xor is imperative. My subjective personal opinion: it is absolutely forbidden, for any purpose, to use equality (== or !=) for booleans. Using it shows a lack of basic programming ethics and fundamentals. Anyone who gives you confused looks over ^ should be sent back to the basics of boolean algebra (I was tempted to write "to the rivers of belief" here :) ).
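For what it's worth, the equivalence between ^, !=, and the long-hand expression on two booleans can be checked exhaustively in a few lines. This sketch uses Python, where ^ on bools behaves the same way as Java's ^ on booleans:

```python
from itertools import product

# Enumerate all four input pairs and confirm the three spellings agree.
for a, b in product([False, True], repeat=2):
    long_hand = (a and not b) or (b and not a)
    assert (a ^ b) == (a != b) == long_hand
    print(a, b, a ^ b)
```

Running it prints the familiar XOR truth table: false only when both inputs match.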
{ "language": "en", "url": "https://stackoverflow.com/questions/160697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "159" }
Q: How to make ODBC connection in web.Config file I am trying to do Crystal reporting with a Sybase database. I will use ReportViewer to view those reports. I am stuck on how to make an ODBC connection in the web.config file. I had done this with WinForms, but I am still learning. A: have a look at connectionstrings.com: http://www.connectionstrings.com/sybase-advantage and http://www.connectionstrings.com/sybase-adaptive
{ "language": "en", "url": "https://stackoverflow.com/questions/160714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Uploading images along with Google App Engine I'm working on a Google App Engine project. My app is working and looking correct locally, but when I try to upload images in an image directory, they're not being displayed at appspot. As a little troubleshooting test, I put an HTML page in "/images/page2.html" and I can load that page at the appspot, but my pages don't display my images. So, it's not a problem with my path. As another sanity check, I'm also uploading a style sheet directory with .css code in it, and that's being read properly. I have a suspicion that the problem lies in my app.yaml file. Any ideas? I don't want to paste all the code here, but here are some of the key lines. The first two work fine. The third does not work: <link type="text/css" rel="stylesheet" href="/stylesheets/style.css" /> <a href="/images/Page2.html">Page 2</a> <img src="/images/img.gif"> This is my app.yaml file: application: myApp version: 1 runtime: python api_version: 1 handlers: - url: /stylesheets static_dir: stylesheets - url: /images static_dir: images - url: /.* script: helloworld.py A: You have to configure app.yaml for static content such as images and css files. Example: - url: /(.*\.(gif|png|jpg)) static_files: static/\1 upload: static/(.*\.(gif|png|jpg)) For more info check out: http://code.google.com/appengine/docs/configuringanapp.html A: I'll bet your problem is that you're using Windows. If that's the case, I believe you need a preceding slash for your static_dir value. A: I am using the Java version of App Engine, and I faced a similar issue with the server not being able to serve static images. What worked ultimately was to change the App Engine config file "appengine-web.xml" in my case to contain <static-files> <include path="**.*"/> <include path="/images/**.*" /> </static-files> My images are in the /images directory and HTML and CSS are in the . 
directory, which is at the WEB-INF level. A: @jamtoday The preceding slash didn't make a difference, but it did get me started figuring out what each app needs to be told about my directory structure. So, I have nothing very conclusive to add, but I wanted to follow up, because I got it working, but I didn't explore all the issues after I got it working. One change that helped was to stop working from a HelloWorld/src/ directory and start working in the HelloWorld/ directory. It seems like the dev_appserver picked up all the dependencies, but the remote server didn't. Essentially, the relative path of my local links didn't match the relative path of the links after uploading. I also realized that the dev_appserver relies on the .yaml file, as well as the appcfg script. That is... if you add a directory to your project, and then try to link to files in that directory, you need to add the directory to the .yaml file, and then restart the dev_appserver to pick this up. So, there are probably ways to handle what I was originally trying to do if you give the .yaml file the right info, but changing to a different directory structure locally handled it for me. A: <img src="/images/img.gif"> this line can't show you the image. Try this one: 1 - Create a class to handle the image request: class GetImage(webapp.RequestHandler): def get(self): self.response.headers['Content-Type'] = 'image/jpg' self.response.out.write(image_object) 2 - In your page.html: <img src="/image"> 3 - At the main function in your code.py: application = webapp.WSGIApplication([('/image', GetImage)], debug=True) have fun
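The idea in the last answer (route image requests through a handler that sets the right Content-Type) is not specific to the webapp framework. Here is a framework-neutral sketch of the same pattern as a plain WSGI app; the image bytes and the call_app driver are placeholders for illustration, since a real handler would read the file from disk or the datastore:

```python
def serve_image(environ, start_response):
    # Placeholder bytes so the sketch is self-contained; a real handler
    # would load the file from disk or a datastore entity instead.
    image_bytes = b"GIF89a" + b"\x00" * 10
    start_response("200 OK", [
        ("Content-Type", "image/gif"),
        ("Content-Length", str(len(image_bytes))),
    ])
    return [image_bytes]

# Minimal driver that calls the handler the way a WSGI server would.
def call_app(app, path):
    captured = {}
    def start_response(status, headers):
        captured["status"], captured["headers"] = status, headers
    body = b"".join(app({"PATH_INFO": path, "REQUEST_METHOD": "GET"}, start_response))
    return captured["status"], dict(captured["headers"]), body

status, headers, body = call_app(serve_image, "/image")
print(status, headers["Content-Type"], len(body))
```

The key point is the same as in the answer above: the browser only cares that the response for /image arrives with an image Content-Type, not where the bytes came from.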
{ "language": "en", "url": "https://stackoverflow.com/questions/160724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Giving character to your unit tests I have been thinking a lot about unit testing and how to improve the readability of the unit tests. I thought why not give a character to the classes in the unit test to clarify what they do. Here is a simple unit test that I wrote: [TestFixture] public class when_dave_transfers_money_from_wamu_account_to_the_woodforest_account { [Test] public void should_increase_the_amount_in_woodforest_account_when_transaction_successfull() { Dave dave = new Dave(); Wamu wamu = new Wamu(); wamu.Balance = 150; wamu.AddUser(dave); Woodforest woodforest = new Woodforest(); woodforest.AddUser(dave); FundTransferService.Transfer(100, wamu, woodforest); Assert.AreEqual(wamu.Balance, 50); Assert.AreEqual(woodforest.Balance, 100); } } Here is the Dave class: /// <summary> /// This is Dave! /// </summary> public class Dave : User { public Dave() { FirstName = "Dave"; LastName = "Allen"; } } The unit test name clearly serves the purpose. But, maybe I want to dig a little deeper and assign the Wamu and Woodforest accounts to Dave whenever Dave is created. The problem is that it will move away from readability as I will have to use index values to refer to the account. What are your thoughts on making this more readable? 
A: How about a private helper method in the test fixture: private Dave GetDave_With_Wamu_And_Woodforest_AccountsHookedUp() A: Here is another way to run the test: [Test] public void should_increase_the_amount_in_woodforest_account_when_transaction_successfull() { Dave dave = new Dave(); // we know that dave has wamu and woodforest accounts dave.WamuAccount("Wamu").Balance = 150; FundTransferService.Transfer(100, dave.WamuAccount("Wamu"), dave.WoodforestAccount("Woodforest")); Assert.AreEqual(50, dave.WamuAccount("Wamu").Balance); Assert.AreEqual(100, dave.WoodforestAccount("Woodforest").Balance); } A: I can add Dave to the Wamu and the Woodforest account when Dave is created like this: public Dave() { FirstName = "Dave"; LastName = "Allen"; // add accounts for Dave Wamu wamu = new Wamu(); wamu.AddUser(this); Woodforest woodforest = new Woodforest(); woodforest.AddUser(this); } The accounts are added to the List collection in the User object from which Dave inherits. A: When you attempt to instantiate the Wamu instance, shouldn't it throw a WamuNotFoundException?
{ "language": "en", "url": "https://stackoverflow.com/questions/160726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Crystal Reports data source in a different project in my solution? I would like to create a Crystal Reports report using pre-existing LINQ classes that live in a different project than where the report lives. I can't find a way to do this. I'm using VS2008. Whenever I expand the "Project Data" tree, I see only classes in my current project. The "History" tree shows me the last 5 classes in the OTHER project, but I need more than those 5. I found the "Make New Connection" option under "ADO.NET", but it looks like it's looking for XML sources and DLLs. A: You might be able to hack your way to getting all the tables you need in the report. The History information is read from xml files on disk. These usually reside in "C:\Documents and Settings\{UserName}\My Documents\History". Here you will find the five most recently used connections. I haven't tried this, but perhaps you can set up your first five classes, add the necessary tables and save the report and close VStudio. Next, edit the xml files to point to the right classes and then reopen the solution. Go to Database Expert and you should have the new five history connections available. You can then add the necessary tables to the existing report. FYI the registry setting "HKCU\Business Objects\Suite 11.5\Crystal Reports\Crystal Data Source History" gives the location of the history files. A: I don't know if this is related or not, but I have a similar issue with the Visual Studio ReportBuilder (.rdlc reports). What happens is that when I am building a report, my data sources from other projects do not show in the data sources window. What I have to end up doing is highlighting the project in Solution Explorer that the data sources are in. Only after doing this am I able to choose data sources from the other project. A: I'm using Crystal Reports 13.0 and Visual Studio 2010. I was able to set my data source manually by creating a new ADO.NET (XML) connection. 
When the dialog box appears, there is a place to enter the class name, and I just needed to enter the full namespace and class name. A: Steps: * *Add a CR report to the project containing the existing Linq classes *On Database Expert -> Project Data -> .NET Objects: right-click -> Refresh *Go back to the report you are working on *Open the Database Expert: the classes should be there. Note: In my case the 2 projects are in the same solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/160737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you put { and } in a format string I'm trying to generate some code at runtime where I put in some boiler-plate stuff and the user is allowed to enter the actual working code. My boiler-plate code looks something like this: using System; public class ClassName { public double TheFunction(double input) { // user entered code here } } Ideally, I think I want to use string.Format to insert the user code and create a unique class name, but I get an exception on the format string unless it looks like this: string formatString = @" using System; public class ClassName {0} public double TheFunction(double input) {0} {2} {1} {1}"; Then I call string.Format like this: string entireClass = string.Format(formatString, "{", "}", userInput); This is fine and I can deal with the ugliness of using {0} and {1} in the format string in place of my curly braces except that now my user input cannot use curly braces either. Is there a way to either escape the curly braces in my format string, or a good way to turn the curly braces in the user code into {0}'s and {1}'s? BTW, I know that this kind of thing is a security problem waiting to happen, but this is a Windows Forms app that's for internal use on systems that are not connected to the net so the risk is acceptable in this situation. A: "{{" and "}}" A: What I think you want is this... string formatString = @" using System; public class ClassName {{ public double TheFunction(double input) {{ {0} }} }}"; string entireClass = string.Format(formatString, userInput); A: Escape them by doubling them up: string s = String.Format("{{ hello to all }}"); Console.WriteLine(s); //prints '{ hello to all }' From http://msdn.microsoft.com/en-us/netframework/aa569608.aspx#Question1 A: Double the braces: string.Format("{{ {0} }}", "Hello, World"); would produce { Hello, World } A: Be extra extra cautious in who has access to the application. A better solution might be to create a simple parser that only expects a few, limited, commands.
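As a side note, the same doubling rule carries over to Python's str.format, which makes it easy to sanity-check the idea outside of .NET. A small sketch of the boiler-plate template, with {{ and }} standing in for literal braces:

```python
template = (
    "using System;\n"
    "public class {0} {{\n"
    "    public double TheFunction(double input) {{\n"
    "        {1}\n"
    "    }}\n"
    "}}"
)
# The doubled braces come out as literal { and }, and the user code
# argument may itself contain braces without further escaping.
print(template.format("ClassName", "if (input > 0) { return input; } return 0;"))
```

Only braces that belong to the template itself need doubling; braces arriving through the format arguments pass straight through, which is exactly the property the question needs.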
{ "language": "en", "url": "https://stackoverflow.com/questions/160742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How would you compare IP addresses? For my server app, I need to check if an ip address is in our blacklist. What is the most efficient way of comparing ip addresses? Would converting the IP addresses to integers and comparing them be efficient? A: 32-bit integers are the way to go -- until you start dealing with 128-bit IPv6 addresses. A: You mean whether you should compare it as a text string, or convert it to an int and compare as an int? That's not usually the bottleneck in this sort of lookup. You can just try to implement both methods and see which one runs faster. The real issue with IP address lookup is usually making efficient queries, taking advantage of the fact that you are dealing with IP addresses and not just random numbers. To accomplish this you can look up LC-tries. Obviously this should interest you only if your blacklist holds tens of thousands or millions of entries. If it has only 10-20 entries, a linear search should be preferred, and indeed the more interesting question is textual comparison vs. integer comparison. A: Yes, I have found that to be efficient; it will be a long, though, and of course you have to index blacklisted IPs in integer form. A: Depends what language you're using, but an IP address is usually stored as a 32-bit unsigned integer, at least at the network layer, making comparisons quite fast. Even if it's not, unless you're designing a high-performance packet-switching application it's not likely to be a performance bottleneck. Avoid premature optimization - design your program for testability and scalability and if you have performance problems then you can use a profiler to see where the bottlenecks are. Edit: to clarify, IPv4 addresses are stored as 32-bit integers, plus a netmask (which is not necessary for IP address comparisons). If you're using the newer and currently rarer IPv6, then the addresses will be 128 bits long. 
A: Use a tool like PeerGuardian which disallows incoming TCP/IP connections at the driver level to IPs on a blacklist. Highly secure, no code required (arguably: highly secure, because no code required). A: I once inherited code where somebody thought that storing IP addresses as 4 ints was a really good thing, except they spent all their time converting to/from ints. Keeping them as strings in the database was far easier, and it only required a single index. You'd be surprised how well SQL Server can index strings as opposed to 4 columns of integers. But this IP list wasn't for blacklisting. A database round-trip is pretty costly. If a database is overkill, store them in a dictionary in memory, but that's just a guess since we've no idea how many you need to compare. Since most hashcodes are 32-bit ints, and IPv4 addresses are 32 bits, the IP address itself might just be a good hashcode. But as others point out, the best option might be to reduce the load on your server and buy specialized hardware. Maybe you keep recently blacklisted IPs in memory and periodically publish new ones to the router. If you're the one trying to make some software inside a router, then you'll need to fish out your data-structures book and create something like a b-tree. A: The Radix or PATRICIA trie is the optimal structure for this. Check out the C source for flow-tools: http://www.splintered.net/sw/flow-tools/ I worked on this years ago. A: Do you have an existing problem with efficiency? 
If so, then by all means post the code (or pseudo-code) and we can pick at the corpse. If not, then I would suggest trying something simple like storing the entries in a sorted list and using your environment's existing Sort() and Find(). A: Integer comparisons are much faster than string comparisons. If you store the integers in a sorted list, you can find them faster than in an unsorted list. A: If you receive the IP address as a string, comparing it to a string may be more efficient than converting it to an integer representation, but I'd profile both solutions to be certain, if a few milliseconds (nanoseconds!) are going to matter on this operation ;-) A: The following is one I've used in JavaScript (IP_V4_RANGE_REGEX and isValidOctets are validation helpers defined elsewhere; note the explicit radix in parseInt, since a bare .map(parseInt) would pass the array index in as the radix): function isValidIPv4Range(iPv4Range = '') { if (IP_V4_RANGE_REGEX.test(iPv4Range)) { const [fromIp, toIp] = iPv4Range.split('-'); if (!isValidOctets(fromIp) || !isValidOctets(toIp)) { return false; } const convertToNumericWeight = ip => { const [octet1, octet2, octet3, octet4] = ip.split('.').map(s => parseInt(s, 10)); return octet4 + (octet3 * 256) + (octet2 * 256 * 256) + (octet1 * 256 * 256 * 256); }; return convertToNumericWeight(fromIp) < convertToNumericWeight(toIp); } return false; }
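Converting each address to an integer, as several answers suggest, is a one-liner in most standard libraries. A Python sketch using socket and struct, with a set of integers for constant-time blacklist lookups (the sample addresses are made up):

```python
import socket
import struct

def ip_to_int(ip: str) -> int:
    """Pack a dotted-quad IPv4 address into an unsigned 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

# Membership tests against a set of integers are O(1) on average,
# and range checks reduce to two integer comparisons.
blacklist = {ip_to_int(ip) for ip in ("10.0.0.1", "192.168.1.100")}

def is_blacklisted(ip: str) -> bool:
    return ip_to_int(ip) in blacklist

print(is_blacklisted("10.0.0.1"))  # True
print(ip_to_int("0.0.1.0"))        # 256
```

Because the conversion uses network byte order, integer ordering matches the natural ordering of addresses, so sorted lists and range checks behave as you would expect.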
{ "language": "en", "url": "https://stackoverflow.com/questions/160776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Sticky mouse when dragging controls in VS2005 Maybe this is a dumb question, but I have the following behavior in Visual Studio 2005 while designing forms: 1 - Drop a control onto the form (suppose it's a Label, just for discussion) 2 - Drag that label to a specific location (aligning w/other controls, whatever) 3 - Release the mouse button 4 - The control is still stuck to the mouse!!! To get it un-stuck from the mouse, I have to hit ESC, which restores the Label to its original location. This is driving me nuts. I literally have to use the arrow keys to move each control into place, pixel-by-pixel. I don't observe this behavior anywhere else in VS2005, nor do I observe it in the OS in general. I am running on Windows XP inside a Parallels Virtual Machine, hosted on OS X. I don't think there is a driver problem though, b/c as I already said, no other apps demonstrate anything like this. Please tell me there is some tiny checkbox buried somewhere that will turn off this behavior. A: Sounds like you might have ClickLock enabled (or a similar feature). Try this: Go to Control Panel in Windows Open the Mouse control panel Go to the Activities tab Deselect ClickLock If that doesn't work, maybe you have a similar feature in OS X? A: This problem spread to other applications within my VM, so I reinstalled Parallels Tools and it went away.
{ "language": "en", "url": "https://stackoverflow.com/questions/160791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a build farm for checking open source apps against different OS'es? I have an Open Source app and I have it working on Windows, Linux and Macintosh (it's in C++ and built with gcc). I've only tested it on a few different flavors of Linux, so I don't know if it compiles and runs on all different Linux versions. Is there a place where I can upload my code and have it tested across a bunch of different systems, like other Linux flavors and things like Solaris, FreeBSD and other operating systems? What would be great is if I could have it directly connect to my svn repository, grab the latest code, and then email me back any compile errors generated and which OS had the problem. I would be happy just to know it compiles, as it is a GUI-based app, so I wouldn't expect it to actually be run and tested. A: There are a few options but there don't appear to be many (any?) free services like this, which isn't surprising considering the amount of effort and resources it requires. Sourceforge used to operate a compile farm like what you describe but it shut down a year or so ago. You might look into some of the following. If you're inclined to pay for a service or roll your own, then some of these links may be useful. If you're just looking for a free open source compile/build farm that covers multiple platforms, it looks like you're pretty much out of luck. OpenSuse Build Service Mentioned by Ted first, worth repeating - only for Linux currently but does support a number of distros. GCC Compile Farm Mainly focused on testing builds for GCC but does also host a few other projects such as coLinux, the BTG BitTorrent client, ClamAV, and others. May be something you can take advantage of, though I don't see what OSes are in the compile farm (it contains at least Linux and Solaris based on the page notes). BuildLocker BuildLocker is a Web-based continuous integration solution for Java and .NET projects. 
BuildLocker is a virtual dedicated build machine that helps teams find bugs earlier in the development cycle, saving time and money. BuildLocker manages scheduled automated builds of source code in your ProjectLocker Source Control repository. Just check in the source code, and scheduled builds validate the integrity of the code. BuildLocker can even run automated tests, and can alert you anytime a test fails. CruiseControl CruiseControl is a framework for a continuous build process. It includes, but is not limited to, plugins for email notification, Ant, and various source control tools. A web interface is provided to view the details of the current and previous builds. Interesting side note, CruiseControl is actually used by StackOverflow's dev team for automated build testing as well, according to the podcast. Hudson Hudson monitors executions of repeated jobs, such as building a software project or jobs run by cron. RunCodeRun Mentioned in the other linked question, only supports Ruby projects and is in private beta currently. However, if your project is in Ruby, it might be worth keeping an eye on RunCodeRun. CI Feature Matrix There are many Continuous Integration systems available. This page is an attempt to keep an unbiased comparison of as many as possible of them. A: Take a look at the OpenSuSE build service, it includes a fairly wide variety of Linux distros (not just SuSE/OpenSuSE). A: From a software point of view, there's also buildbot (sourceforge project site), which can be used to set up your own build/continous integration server. This was suggested and considered to be used for gcc development (as mentioned on the gcc compile farm wiki page posted above). A: If you are planning to go commercial with your open source product you might consider our Parabuild. It allows you to run a set of builds on multiple platforms and machines in parallel. The build will success only if all platform-specific builds success.
{ "language": "en", "url": "https://stackoverflow.com/questions/160793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Run Sparc binaries without Sparc hardware I've been curious in the past few months about trying my hand at doing some assembly for the SPARC processor (either V8 or V9). My question is this: I have no access to a SPARC machine, so is there a way I can run SPARC binaries on my x86 machine? I've looked at QEMU but I am not too sure how to set it up.

A: SimICS emulates a Sparc platform. Academic and personal licenses are free.

Edit: I didn't do SimICS justice in my initial response; it is a very useful tool for Sparc-based development. You can instrument, profile, and explore the behavior of code in both user space and kernel space. I first became aware of it about 10 years ago, when it was released by the Swedish Institute of Computer Science (SICS). They later spun it out as a commercial product.

A: Ok, here it is:

* qemu emulates user code, not the whole system
* tkisem graphically displays the CPU internals
* Also, there is a thing called "ISEM" (Instructional Sparc Emulator)

Maybe googling will help you with detailed information. My opinion - qemu is good enough for that.

A: As an aside, you can get older secondhand Sun workstations off eBay for about 3/4 of buggerall.

A: This emulates a SparcStation 2 (sun4c, 32-bit SPARCv7) and also an Ultra-1 (sun4u, 64-bit SPARCv9). From what I understand the fidelity is pretty high, as it uses stock dumped ROMs, even for the framebuffer. http://people.csail.mit.edu/fredette/tme/

As for the difference between v7, v8 and v9 of SPARC: v7 and v8 are 32-bit, v9 is 64-bit. Note that QEMU isn't all that faithful an emulation, and since it dynamically recompiles it might be a bit faster (in practice it is pretty slow), but at the price of possibly less compatibility. QEMU for v7-v8 can boot Linux, most BSDs, and experimentally Solaris. QEMU for SPARCv9 is experimental.
I have built TME on Linux, but it will probably go better if you just install NetBSD and build it there; that way you can use the networking support it has, and the older compilers won't complain so much about the code. I have a somewhat patched version of it here: https://github.com/cb88/tme. I would appreciate help working on it; most of the patches are from NetBSD pkgsrc, but I think I cleaned up a few other things.

A: In addition to complete system emulation, QEMU can also emulate SPARC at the user-space level, so you can write a program in SPARC assembly and use standard Linux system calls, and it will call the standard x86 kernel versions... works pretty well!

If you aren't sure you want to learn SPARC assembly specifically, you might look into MIPS instead. Most wireless routers are based on MIPS processors and can run Linux. It is similar to SPARC, of a similar vintage, and along with SPARC it's one of the two original RISC architectures... in my opinion MIPS is actually a little bit more of a clean and elegant architecture than SPARC, but they're both great.

A: Looks like QEMU does enough emulation for you to install a SPARC Linux on it: http://www.bellard.org/qemu/status.html In which case, just grab a SPARC distribution (e.g., Debian), and you're all set!

A: polarhome offers shell access to a Solaris system (which appears to be a Sun Sparc system, not x86). It costs 10 local currency units (dollars, euros, etc.) or $2 US, whichever is greater.

A: Aeroflex Gaisler has commercial simulators for their LEON2, LEON3 and LEON4 processors, which are actually SPARC. There is also a limited evaluation version for LEON3. See http://www.gaisler.com/index.php/downloads/simulators They provide free GCC cross-compilers for Windows (MinGW) and Linux: http://www.gaisler.com/index.php/downloads/compilers

A: Please have a look at http://www.stromasys.com; CHARON-SPK meets your requirements. Also http://www.stromasys.ch/virtualization-solutions/virtual-sparcstation/ could help.
A: Just a note that if you intend to run Solaris later than 2.5.1/5.5, then qemu-sparc won't help you. NetBSD and Linux should run fine, though. This is because qemu-sparc supports only very old SPARC processors. It will either cause the Solaris installer to throw an error when it realizes the architecture is too old, or fault/crash before it gets started if you try to install Solaris 8-11.

Mentioned previously, but some cost details: a fully functional 64-core SPARC 2U capable of running Solaris 11 can be acquired from eBay for about $400 USD if you want to go that route. A T5220 will do Solaris 11. I don't have experience with the other emulators mentioned here, but I have also heard good things about Simics, though it's expensive.
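One concrete difference worth knowing before writing SPARC assembly on an x86 host: SPARC is big-endian while x86 is little-endian, which is one of the things user-mode emulation has to account for when translating system calls and data. A quick illustration of the difference in Python (using the stdlib struct module, no SPARC tooling required):

```python
import struct

value = 0x12345678

# SPARC stores the most significant byte first (big-endian)...
sparc_bytes = struct.pack(">I", value)
# ...while x86 stores the least significant byte first (little-endian).
x86_bytes = struct.pack("<I", value)

# Reading one architecture's bytes with the other's byte order silently
# yields a different number -- one reason raw memory images and on-disk
# structures are not portable between the two.
misread = struct.unpack("<I", sparc_bytes)[0]  # 0x78563412, not 0x12345678
```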
{ "language": "en", "url": "https://stackoverflow.com/questions/160800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Define a .NET extension method with solution scope I have a few 'helper' style extension methods I use quite regularly now (they are mostly quite simple, intuitive, and work for good not evil, so please don't have this descend into a discussion around whether or not I should use them). They are largely extending core .NET CLR classes.

Currently, I have to copy the 'ExtensionMethods.cs' file that holds my extension methods to each new project within a solution to be able to use them in multiple projects. Is it possible to define an extension to work over multiple projects within a solution, or wrap them in an 'extensions' dll, or are they confined to the scope of a project?

EDIT: Whilst the 'dedicated project' answers are perfectly valid, I chose marxidad's as I prefer the approach he gives. Thanks for all the answers so far, and I have upmodded them all, as they were all good answers.

A: If you don't want to create a whole project just for the extension methods, you can link the same file into separate projects without copying the file:

* In Solution Explorer, select the target project.
* Select the Project menu.
* Select Add Existing Item.
* In the Add Existing Item dialog box, select the item you want to link.
* From the Open button drop-down list, select Add As Link.

A: The best approach is to put them all in a single project and create a DLL. You can then include that project as a project reference or include the DLL as a binary reference (probably the better choice).

A: You could put your extensions in a separate project, and include that project in every new solution you're making. Just be careful about versioning: when other applications make changes to your extension project, all the projects that use those methods should be retested. Scott Dorman is correct in his post too: if you don't want them changed, you can compile them as a DLL library which you include in your new projects (as opposed to including an uncompiled project).
A: Create a project for your extensions to the .NET platform, and reference that one project in each of your application projects. It goes without saying: any and all platform stuff, and only platform stuff, goes in that one project; application stuff goes in your application projects. You might also look into various platform libraries out there, such as Umbrella, which offer suites of extensions to the base platform.

A: Several answers suggest putting the extension functions into a common assembly, which is the right answer. But there's a catch for beginners: IntelliSense may not help enough here. Let's say I extended ObservableCollection with a ReplaceRange method. After moving it to the class holding the extension functions, at first the compiler will say:

error CS1061: 'System.Collections.ObjectModel.ObservableCollection<WhateverDto>' does not contain a definition for 'ReplaceRange' and no extension method 'ReplaceRange' accepting a first argument of type 'System.Collections.ObjectModel.ObservableCollection<WhateverDto>' could be found (are you missing a using directive or an assembly reference?)

If you then hover over the problematic ReplaceRange call, you won't get the offer to include the proper using statement automatically. Someone might think at that point that he/she did something wrong. Nothing is wrong, however; you just have to know where your extension method lives and manually type the using statement for its namespace. Once you get that right, your source will compile.
{ "language": "en", "url": "https://stackoverflow.com/questions/160813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I pass data between activities in Windows Workflow? I'm not completely sure I understand the workflow way of doing things, but if it's a pipe-and-filter style model I should be able to pass data (even strings) from one activity to another. Does anyone know how to do this? Bonus points for a video! I hope this is possible. If WF matches my idea of it, then it would be extremely useful.

A: Method 1 (design- and run-time): activity databinding. You can bind the dependency properties of an activity to the properties of another activity. See:

* Activity Binding in Windows Workflow Foundation
* Enabling Activity Data Binding

Method 2 (run-time): find an activity from the activity tree. See:

* The "Workflow Development" paragraph of the article ActivityExecutionContext in Workflows
* Binding to Activity Dependency Property in Code
{ "language": "en", "url": "https://stackoverflow.com/questions/160824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Are there any other good alternatives to zc.buildout and/or virtualenv for installing non-python dependencies? I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc. I'm looking for ways to simplify this process and make it less error-prone.

Both zc.buildout and virtualenv look like they would be good tools for addressing this problem, but both seem to concentrate primarily on the python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous python extensions that have to be compiled natively (lxml, MySQL drivers, etc.). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 Leopard on PPC, 1 Leopard on Intel, 1 Ubuntu and 1 Windows.

Ultimately what would be ideal would be something that works roughly like this, from the dos/unix prompt:

$ git clone [repository url]
...
$ python setup-env.py
...

that then does what zc.buildout/virtualenv does (copy/symlink the python interpreter, provide a clean space to install eggs), then installs all required eggs, including any native shared library dependencies, installs the ruby project, the java project, etc. Obviously this would be useful both for getting development environments up and for deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages. So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases?

A: Setuptools may be capable of more of what you're looking for than you realize -- if you need a custom version of lxml to work correctly on MacOS X, for instance, you can put a URL to an appropriate egg inside your setup.py and have setuptools download and install that inside your developers' environments as necessary; it also can be told to download and install a specific version of a dependency from revision control.

That said, I'd lean towards using a scriptably generated virtual environment. It's pretty straightforward to build a kickstart file which installs whichever packages you depend on and then boot virtual machines (or production hardware!) against it, with puppet or similar software doing other administration (adding users, setting up services [where's your database come from?], etc). This comes in particularly handy when your production environment includes multiple machines -- just script the generation of multiple VMs within their handy little sandboxed subnet (I use libvirt+kvm for this; while kvm isn't available on all the platforms you have developers working on, qemu certainly is, or you can do as I do and have a small number of beefy VM hosts shared by multiple developers). This gets you out of the headaches of supporting N platforms -- you only have a single virtual platform to support -- and means that your deployment process, as defined by the kickstart file and puppet code used for setup, is source-controlled and run through your QA and review processes just like everything else.
A: I always create a develop.py file at the top level of the project, along with a packages directory containing all of the .tar.gz files from PyPI that I want to install, and also an unpacked copy of virtualenv that is ready to run right from that file. All of this goes into version control. Every developer can simply check out the trunk, run develop.py, and a few moments later will have a virtual environment ready to use that includes all of our dependencies at exactly the versions the other developers are using. And it works even if PyPI is down, which is very helpful at this point in that service's history.

A: Basically, you're looking for a cross-platform software/package installer (along the lines of apt-get/yum/etc.). I'm not sure something like that exists. An alternative might be specifying the list of packages that need to be installed via the OS-specific package management system, such as Fink or DarwinPorts for Mac OS X, and having a script that sets up the build environment for the in-house code.

A: I have continued to research this issue since I posted the question. It looks like there are some attempts to address some of the needs I outlined, e.g. Minitage and Puppet, which take different approaches but may each accomplish what I want -- although Minitage does not explicitly state that it supports Windows. Lacking any better options, I will try to make one of these (or just extensive customized use of zc.buildout) work for our needs, but I still feel like there must be better options out there.

A: You might consider creating virtual machine appliances with whatever production OS you are running, and all of the software dependencies pre-built. Code can be edited either remotely or with a shared folder. It worked pretty well for me in a past life that had a fairly complicated development environment.

A: Puppet doesn't (easily) support the Win32 world either.
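The develop.py idea above is easy to sketch. The following is a hypothetical, modernized version using the stdlib venv module in place of the bundled virtualenv copy: it creates an isolated environment and installs any vendored source distributions from a local packages/ directory without touching PyPI. The file names and layout are assumptions for illustration, not the answerer's actual script.

```python
#!/usr/bin/env python3
"""Hypothetical develop.py: build a per-checkout virtual environment and
install pinned dependencies from a vendored packages/ directory."""
import os
import subprocess
import sys

def bootstrap(env_dir="env", packages_dir="packages"):
    # Create the isolated interpreter (stdlib venv standing in for the
    # unpacked virtualenv copy described above).
    subprocess.check_call([sys.executable, "-m", "venv", env_dir])
    bin_dir = os.path.join(env_dir, "Scripts" if os.name == "nt" else "bin")
    pip = os.path.join(bin_dir, "pip")
    # Install every vendored distribution; --no-index keeps the build
    # working even when PyPI is down.
    if os.path.isdir(packages_dir):
        for name in sorted(os.listdir(packages_dir)):
            if name.endswith((".tar.gz", ".whl")):
                subprocess.check_call(
                    [pip, "install", "--no-index",
                     os.path.join(packages_dir, name)])
    return env_dir
```

Checked into version control next to the vendored archives, this gives every developer the same "check out, run one script, start working" flow the answer describes, though unlike zc.buildout it does nothing for non-Python dependencies.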
If you're looking for a deployment mechanism and not just a "dev setup" tool, you might consider looking into ControlTier (http://open.controltier.com/), which has an open-source cross-platform solution. Beyond that you're looking at "enterprise" software such as BladeLogic or OpsWare, and typically an outrageous price tag for the functionality offered (my opinion, obviously).

A lot of folks have been aggressively using a combination of Puppet and Capistrano (even non-Rails developers) as deployment automation tools to pretty good effect. The downside, again, is that it expects a somewhat homogeneous environment.
{ "language": "en", "url": "https://stackoverflow.com/questions/160834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }