Q: What tools are available for TDDD (Test Driven Database Development)?

About a year ago, I picked up Scott Ambler's Refactoring Databases: Evolutionary Database Design. I was won over to the idea that just as you can develop your code with TDD, you probably should be covering your databases with unit tests (at least), or even writing database tests before you make a change to the schema, so you do database work in a TDD style as well. I really like the idea, and I have been doing this (OK, sometimes I do it) by hand for a while now, just writing regular unit tests that happen to connect to the database and check its structure against a given schema file. But I haven't found any good database change management toolkits that might help automate this process. Does anyone know any?

A: I only know of two unit testing frameworks:

* DBUnit
* TSQLUnit

As for change management, these are some recommended tools:

* Redgate SQL Compare
* ApexSQL
* Toad

Although I am not sure if this is really what you're looking for.

A: I've tried most of the tools Jon mentioned, but have mostly settled on writing NUnit tests using SMO and SQL commands. I usually validate the table structure, stored procs, views and functions. Being able to show the boss 14 broken tests due to a developer's datatype change made all the work more than worthwhile.

A: Microsoft's next version of Visual Studio for Database is supposed to have these.

A: UTPLSQL for PL/SQL

A: The tool ounit is to Oracle's PL/SQL what JUnit is to Java. We use it to "harness" our database calls. We can repeat them easily and compare the results before and after changes to the code or underlying data structure.
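A minimal sketch of the hand-rolled approach the question describes - a regular unit test that connects to the database and checks its structure against an expected schema (shown here with Python's built-in sqlite3 and unittest; the table and column names are illustrative, not from any of the tools above):

```python
import sqlite3
import unittest

# Illustrative "schema file" content: the columns we expect to exist
EXPECTED_COLUMNS = {"id", "name", "created_at"}

class UsersTableSchemaTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database stands in for the real connection
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, created_at TEXT)"
        )

    def test_users_table_matches_expected_schema(self):
        # PRAGMA table_info returns one row per column; field 1 is the column name
        rows = self.conn.execute("PRAGMA table_info(users)").fetchall()
        actual = {row[1] for row in rows}
        self.assertEqual(actual, EXPECTED_COLUMNS)

if __name__ == "__main__":
    unittest.main()
```

A test like this breaks loudly when someone drops or renames a column, which is exactly the "14 broken tests" effect described above.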
{ "language": "en", "url": "https://stackoverflow.com/questions/151563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I disable Tortoise BZR?

I'm a huge fan of bzr and I'm glad they're working on Tortoise for it, but currently it's WAY too slow to be useful. The icons are almost always incorrect, and when I load a directory in Explorer with a lot of branches it locks up my entire system for anywhere from 10 seconds to 2 minutes. I look forward to trying it again in the future, but for now I'd like to disable it. Unfortunately I don't see it in Add/Remove Programs and I can't find a way to disable it in the Bazaar config directory. When I right-click the icon in the task panel (by the clock) and choose "Exit Program" it just restarts moments later. I don't see it in the Services panel either. Is there any way to disable it? I'm running Windows XP on the system in question.

A: I think you can do:

    regsvr32 /u tbzrshellext_x86.dll

I also killed tbzrcachew.exe in memory, but since, like enobrev, I couldn't find it with AutoRuns, I will suppose it is the shell extension that runs this cache. Will know for sure when I reboot my computer... I agree that currently these icons are slow, don't update in real time, and the options in the context menu are often limited. I hope all these points will improve in the future. [EDIT] It works! No need to kill the cache too.

A: According to the TortoiseBZR readme, you can disable it by running python tortoise-bzr.py --unregister from the install folder. Not sure where it's installed by default, but it looks like that might be in your Python site-packages folder.

A: You can disable icon overlays (the main thing that makes it slow) via the context menu: right-click on the bzr icon in the tray, Settings, uncheck all drives.

A: I went to the install directory "C:\Program Files\Bazaar", ran unins000.exe and got a nice deinstaller.

A: You can use the utility "Autoruns" by Sysinternals (now part of Microsoft) to disable Windows Explorer extensions (such as extensions which add themselves as right-click menu items). This can come in handy when you can't find the 'proper' way to do it in an app, or the app doesn't offer one.

A: Jason's answer seemed valid, so I spent some time looking for the .py file. It's nowhere to be found. It seems when installing bzr via the setup it also installs the tbzr binaries. I've looked through as many panels as I can find: Process Explorer (Sysinternals), AutoRuns (Sysinternals), some shell extension browser, etc. I couldn't find a formal entry anywhere. I found the registry entries, but I've no idea where they came from or how to "formally" get rid of them. I'm not in the mood to just start killing off registry entries as I actually have to get work done this week. I'm just going to run the uninstall and then install the latest version (with TBZR unchecked). As far as I can tell that's the only way to resolve this.

A: I hear you, enobrev. It's quite annoying that it cannot easily be removed. I too will just uninstall, then reinstall bzr. On another note, "bzr" is a terrible key sequence to have to type for every command. I'll be sure to rename "bzr.exe" something more finger-friendly because my pinky just can't handle that "z" key all the time.
{ "language": "en", "url": "https://stackoverflow.com/questions/151587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to detect a remote side socket close?

How do you detect if Socket#close() has been called on a socket on the remote side?

A: The isConnected method won't help; it will return true even if the remote side has closed the socket. Try this:

    public class MyServer {
        public static final int PORT = 12345;

        public static void main(String[] args) throws IOException, InterruptedException {
            ServerSocket ss = ServerSocketFactory.getDefault().createServerSocket(PORT);
            Socket s = ss.accept();
            Thread.sleep(5000);
            ss.close();
            s.close();
        }
    }

    public class MyClient {
        public static void main(String[] args) throws IOException, InterruptedException {
            Socket s = SocketFactory.getDefault().createSocket("localhost", MyServer.PORT);
            System.out.println(" connected: " + s.isConnected());
            Thread.sleep(10000);
            System.out.println(" connected: " + s.isConnected());
        }
    }

Start the server, start the client. You'll see that it prints "connected: true" twice, even though the socket is closed the second time. The only way to really find out is by reading (you'll get -1 as return value) or writing (an IOException (broken pipe) will be thrown) on the associated Input/OutputStreams.

A: Since the answers deviate, I decided to test this and post the result - including the test example. The server here just writes data to a client and does not expect any input.

The server:

    ServerSocket serverSocket = new ServerSocket(4444);
    Socket clientSocket = serverSocket.accept();
    PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
    while (true) {
        out.println("output");
        if (out.checkError())
            System.out.println("ERROR writing data to socket !!!");
        System.out.println(clientSocket.isConnected());
        System.out.println(clientSocket.getInputStream().read());
        // thread sleep ...
        // break condition , close sockets and the like ...
    }

* clientSocket.isConnected() always returns true once the client connects (and even after the disconnect) - weird!
* getInputStream().read()
  * makes the thread wait for input as long as the client is connected, and therefore makes your program not do anything - except if you get some input
  * returns -1 if the client disconnected
* out.checkError() is true as soon as the client is disconnected, so I recommend this

A: You can also check for a socket output stream error while writing to the client socket.

    out.println(output);
    if (out.checkError()) {
        throw new Exception("Error transmitting data.");
    }

A: The method Socket.Available will immediately throw a SocketException if the remote system has disconnected/closed the connection.
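The same rule holds outside Java: the reliable signal of a remote close is end-of-stream on a read, not any "connected" flag. A minimal sketch in Python, using a connected socket pair to stand in for client and server:

```python
import socket

# A connected pair of sockets stands in for the two ends of a connection
a, b = socket.socketpair()

b.sendall(b"last message")
b.close()  # the "remote side" closes its socket

# Data buffered before the close is still readable
data = a.recv(1024)
print(data)  # b'last message'

# Once the buffer is drained, recv() returns empty bytes: remote side closed
# (the Python equivalent of read() returning -1 in Java)
print(a.recv(1024))  # b''
```

An empty return from recv() is the only portable "closed" signal; writing to a closed peer instead raises an error (the broken-pipe case mentioned above).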
{ "language": "en", "url": "https://stackoverflow.com/questions/151590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88" }
Q: What's a good naming convention for large-scope function variables?

You can have different naming conventions for class members, static objects, global objects, and structs. Some examples are:

    _member
    m_member

or, in Java's case, the usage of this.member. But is there any good technique or naming convention for function-scope variables that conveys when a single variable has complete function scope or a short-lifespan scope?

    void MyFunction()
    {
        int functionScopeVariable;
        if(true)
        {
            //no need for function variable scope naming convention
        }
    }

A: I actually encourage delegating this task to the IDE/editor you use. No, I'm not actually talking about naming variables - that is still best done by a human. But the underlying task of such naming strategies is to show you which type of variable any one name represents. Pretty much every IDE worth its salt can define different styles (colors, fonts, font types, ...) for different variable types (instance member, static member, argument, local variable, ...), so letting the IDE tell you what type of variable it is actually frees you from having to type those (otherwise useless) pre- or suffixes every time. So my suggestion: use meaningful names without any prefix or suffix.

A: One method is to follow the guideline that the larger the scope of the variable, the longer the name. In this way, global variables get long descriptive names while scope-limited things like loop index variables can be as small as single letters.

A: I use prefixes or special naming conventions on global, static and member variables so I don't have to use prefixes on locals. I prefer having the option of using short local variable names, especially for loop variables.

A: There's an argument that you shouldn't have 'large-scope functions' at all, so there shouldn't be a problem with naming - just use the 'small-scope function' variable naming conventions.

A: The guidance from MSFT and other style guides for private instance fields is _memberName (camel-case notation prefixed with "_"). This is also the convention used in the source code of many recent Microsoft tutorials. I use it because it's shorter, not Hungarian, and R# supports it as the default rule for private instance fields. I also like it because it sort of obscures the private fields from Intellisense, as it should, since you should prefer to access your public members first. If I want to access the property Name and I start typing "Na", the first suggestion is the Pascal-cased public instance property Name. In the rare cases that I want to access the private field directly, this forces me to make a conscious decision to start typing "_", and then I get the full list of my private fields in the Intellisense popup. I have also seen guidance that says it should be _MemberName if it is the backing field for a public property named MemberName (Pascal-case notation prefixed with "_"). I personally don't like that because I think the capital M is redundant, adds unnecessary keystrokes and does not add any extra information.

A: We tend to use an l_ prefix in our functions for "local," and that's worked pretty well.

A: It really all comes down to whatever the style guidelines for the language suggest, if there are any.

A: I guess anything is fine as long as it conveys the meaning regarding its use.

A: I prefer to keep it simple. I use:

    m_varname - class member variables
    g_varname - global variables

A: I use the same convention I use for class members. The IDE should take care of finding your declaration. If a function is so large and confusing that this becomes a problem, there is a greater issue that needs to be addressed.
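One way to read the "larger scope, longer name" guideline suggested above, in a short illustrative snippet (the names are made up for the example):

```python
# Module-level (widest scope): long, descriptive name
maximum_connection_retry_count = 5

def retry_delays():
    # Function scope: a shorter name is fine here
    delays = []
    for i in range(maximum_connection_retry_count):  # loop index: single letter
        delays.append(2 ** i)  # exponential backoff: 1, 2, 4, ...
    return delays

print(retry_delays())  # [1, 2, 4, 8, 16]
```

The reader never sees `i` far from its loop, so the short name costs nothing; the module-level name may be read far from its definition, so it earns its length.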
{ "language": "en", "url": "https://stackoverflow.com/questions/151594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: JRuby on Rails vs. Ruby on Rails, what's the difference?

I'm looking to try out JRuby and JRuby on Rails, and I'm having trouble finding information on the differences between JRuby on Rails and Ruby on Rails. What differences do I need to look out for?

A: I may be wrong, but I think you can package a JRuby on Rails app in a way you can't with normal RoR - look at Mingle or similar. Makes it possible to sell without dropping your pants / opening the kimono. That said, I'm not familiar enough with RoR packaging, so don't hold me to it :)

A: I'm surprised there's a crucial thing missing in all answers to this question, related to the GIL. The main difference you should care about, especially in web applications such as ones built with Rails, is true concurrency (being "Global Interpreter Lock" free). When two threads are running (e.g. serving 2 user requests) with JRuby, they are capable of running concurrently within a single process, while in MRI there's the GIL (even with 1.9's native threads) that prevents executing Ruby code in parallel. For an application developer this is the first thing to keep in mind while considering JRuby, as it really shines with config.threadsafe!, but requires you to make sure your code (and your gems' code) is "truly" thread-safe.

A: Mostly it should work the same. In JRoR you can access stuff you wouldn't have in RoR. Usually it's mainly a deployment concern. However, if your RoR app uses native libraries that don't have an equivalent that runs on the JVM, that can be a pain. However, most libs have a non-native version available (at least the popular ones I have come across).

A: There are some great answers here already. eebbesen already covered the basics, and kares (himself!) has told us JRuby has no GIL. I'll add, from a more practical perspective: I've launched apps on Ruby on Rails, and then migrated to JRuby for performance reasons. There were two main performance benefits: one, JRuby is (or was) simply faster than Ruby in some circumstances, and two, the lack of the Global Interpreter Lock kares mentions allowed me to do multithreading, which, while tricky, unlocked orders-of-magnitude performance benefits. A very large Ruby on Rails app ported and ran in an hour, gems and all. The only actual glitch was that Java's regexes are slightly different from Ruby's. That's a monumental achievement on JRuby's part.

A: JRuby is the Ruby implementation that runs on a JVM, whereas Matz's Ruby is a C implementation. Key features to note:

* JRuby runs on Java VMs and is either compiled or interpreted down to Java bytecode.
* JRuby can integrate with Java code. If you have Java class libraries (.jar's), you can reference and use them from within Ruby code with JRuby. In the other direction you can also call JRuby code from within Java. JRuby can also use the JVM and application server capabilities.
* JRuby is usually hosted within Java application servers such as Sun's GlassFish or even the Tomcat web server.
* Although you cannot use native Ruby gems with JRuby, there are JRuby implementations for most of the popular Ruby libraries.

There are other differences, which are listed on the JRuby wiki:

* Differences between JRuby and Ruby (MRI)
* JRuby On Rails
{ "language": "en", "url": "https://stackoverflow.com/questions/151595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "137" }
Q: Yasnippet and pabbrev working together in Emacs

I am trying to get the yasnippet and pabbrev packages working together with Emacs, but I cannot seem to get any love. How can I get them to play nicely together? The crux of the problem is that pabbrev and yasnippet are both binding to the tab key. Both packages seem to do a fallback when a match isn't found, but they don't fall back properly. I am currently using Emacs W32 (the last Emacs 22 release). yasnippet is byte-compiled, but pabbrev is not.

Edit: Thus far neither tabkey2 nor hippie-expand works out of the box, which is why I have yet to mark either solution as a correct answer. I'm hacking away at tabkey2 to make it work though.

A: I use hippie-expand to manage tab expansion packages. The following code will try each package in order to expand your tab key press:

    (require 'hippie-exp)
    (setq hippie-expand-try-functions-list
          '(yas/hippie-try-expand
            try-expand-dabbrev
            try-expand-dabbrev-all-buffers
            try-expand-dabbrev-from-kill
            try-complete-file-name
            try-complete-lisp-symbol))

Note: hippie-expand will probably not work with pabbrev, because pabbrev is an Emacs minor mode.

A: Have a look at tabkey2.el. It looks like it addresses the problem you're having.
{ "language": "en", "url": "https://stackoverflow.com/questions/151639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Flex Popup Button Question

Banging my head against the wall here; I don't want to reinvent the wheel. The default Flex 3 class for PopUpButton is a combination of two buttons: one is a normal button with label and/or icon, and the second is the arrow which opens the popup. My struggle here is that I just want a button with an icon that opens the popup directly, without having to write all the popup handling code all over again. The plan was to override the PopUpButton class with, say, a new class called SimplePopupButton. This class would just hide the arrow and point the button click handler to open the popup. Seems simple, but I don't see an easy way to do this. Suggestions? Alternatives?

[Edit] I want a 16x16 icon button that opens a popup. The PopUpButton shipped with Flex has two buttons: "It contains a main button and a secondary button, called the pop-up button, which pops up any UIComponent object when a user clicks the pop-up button." (source). I want the main button to open the popup and hide the pop-up button (or vice versa).

A: Have you tried setting a new skin? Not sure if it would work, but it would be far easier than trying to write a new control.

A: In Flex 3.4, the PopUpButton control has an attribute called "openAlways" which, if set to true, allows the main button to open the popUp as well. Then, as mentioned before, simply set the skin of the button to hide the down arrow.

A: It's been a while since I did some work with Flex, but here's my idea: create a new component consisting of a classic button and a list. The component should have two view states. The list should not be visible in the base state, but should become visible when the component enters the other state. The other state is, of course, entered on the click of the button. You can set the list to be initially positioned so that its bottom left corner is aligned with the button's bottom left corner. Then create a transition from the base state to the other state that makes the list do a "slide down" as it does in a standard PopUpButton control. You can do this by simultaneously using a wipe-up effect and a move effect in which you move the list on its y axis until its top left corner is where its bottom left corner was. Name the component MyPopupButton or whatever you wish to call it. For returning to the base state simply reverse these effects. As for the handling code - your app, of course, only needs to know what the user chose from the list, so that's no more code than usual. Hope this was helpful.

A: Um, I may be being a total idiot here, but why can't you just use a ComboBox? I mean the action on it is essentially the same as a popup button without the arrow button separation. Or am I being daft here?

A: Try the popup property found here. It should be set to your popup.

    <mx:Script>
        <![CDATA[
            import mx.controls.Alert;
            public var myAlert:Alert = new Alert();
        ]]>
    </mx:Script>
    <mx:PopUpButton popUp="{myAlert}" label="Button"/>

A: This is sort of a designer hack, but I just set the following properties on my popUp button (or you could create a style if you want to reuse it). Assuming you just want a 16x16 icon to pop up a menu when clicked:

    <mx:PopUpButton icon="@Embed(source='pathToIcon.png')"
        arrowButtonWidth="16"
        paddingLeft="0" paddingRight="0"
        width="16" height="16"
        popUp="{menu}"/>
A: Kind of a nasty hack, but I did something much like Matt above that seems to work/look alright. In CSS:

    .camButtons {
        padding-left: 0;
        padding-right: 1;
        up-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControlsOver");
        over-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");
        down-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");
        disabled-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");
        pop-up-up-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControlsOver");
        pop-up-down-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");
        pop-up-over-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");
    }

And in MXML:

    <mx:PopUpButton width="38" popUpGap="0" paddingLeft="37" arrowButtonWidth="38"
        id="flirts_btn" popUp="{flirts_menu}" styleName="camButtons"
        icon="@Embed(source='/assets/images/skins.swf', symbol='Icon_WinkOver')"
        downIcon="@Embed(source='/assets/images/skins.swf', symbol='Icon_WinkOver')"
        disabledIcon="@Embed(source='/assets/images/skins.swf', symbol='Icon_Wink')"
        toolTip="Send Flirt to User" buttonMode="true" useHandCursor="true" />

.... the important parts:

    pop-up-up-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControlsOver");
    pop-up-down-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");
    pop-up-over-skin: Embed(source="/assets/images/skins.swf", symbol="Button_ChatRoomControls");

    width="38" popUpGap="0" paddingLeft="37" arrowButtonWidth="38"
{ "language": "en", "url": "https://stackoverflow.com/questions/151641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does anyone use GoogleData?

I was recently considering using GoogleData for a hobby project to store my service's old data (say, 24+ hours old), while I keep the fresh data on my servers (hobby project == cheap home server). However, I haven't really heard of anyone using GoogleData, so I was wondering what others' experiences have been.

Edit: My brief usage pattern would be to basically store (cached versions of) objects representing (historical) entities, relatively immutable data like past events of these entities, and global prototype data for my objects (also relatively immutable), in order to reduce the load on my server. As for active entities, I'd be storing changes locally and then posting them to GoogleData (after 24 hours). Thanks

A: I did use GoogleData to store data from one of my projects, called TaskList. I use Google Spreadsheets specifically. It's quite hard to start with, but from Google's samples you can pretty much know what to do next. I did that in C#. Here are the sample apps and SDK for google-gdata. My advice: don't bother to read the online documentation about gdata; it explains a lot about the underlying XML structure and the methods to access each level of (private vs public) data. You need a Google account to start with. The way the data is read and written is quite odd compared to standard SQL or a dataset. But overall, the API is well designed and almost everything is taken care of. Do give it a try. PS: No doubt it's a bit slow when accessing, with all the XML overhead plus plenty of redundant tagging in gdata.

A: Since you didn't really get a satisfactory answer to this, I might suggest looking at Amazon SimpleDB. It's not free, but unless you're storing zillions of records you'll probably only spend pennies per month. Like Amazon's other web services, you only pay for what you use. SimpleDB is more generic than the Google Data services, which may suit a wider range of applications.

A: So, I guess nobody uses Google Data, apparently. It does seem nice to store data you can't afford to host yourself, though. Thus I think I'll still give it a try.

A: Haven't had time to get to my computer and clean up the code for posting, but my current solution has been to use Yahoo Pipes to get my query results from Google Data to the browser as JSON instead of XML, without going through a server. And it's all done with client-side JavaScript alone, so I can get and use the data without the need for a server. However, I still haven't made a script to store data on Google Data; that's the next step.

A: There are some nice gdata-based apps listed here. You can treat spreadsheets like basic databases; take a look at this Python wrapper and its .NET port.
{ "language": "en", "url": "https://stackoverflow.com/questions/151651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Creating system-wide Vista keyboard shortcuts and macros

This question might not seem programming related at first, but let me explain. I'm stuck with using a keyboard that doesn't have Home, End, Page Up and Page Down keys. I need those keys for programming. So the question is: what's a good/free utility to define system-wide shortcuts and macros in Vista? Mapping, for example, Ctrl+Left Arrow to Home and Ctrl+Right Arrow to End would solve my problem.

A: AutoHotkey is the most configurable I know... http://www.autohotkey.com

[EDIT] Another neat utility is Microsoft Keyboard Layout Creator: http://www.microsoft.com/globaldev/tools/msklc.mspx

A: AutoHotkey worked out great. Here's the script I'm using:

    #Right::End
    #Left::Home
    #Up::PgUp
    #Down::PgDn
    #BS::Del

This maps "Win"+"right arrow" to "End", etc. This made the Apple Bluetooth wireless keyboard usable for programming. Thanks for the suggestions!

A: I have not used this personally, but I believe the Key Transformation application will help you out: http://softboy.net/key/index.htm

A: KeyTweak http://webpages.charter.net/krumsick/
{ "language": "en", "url": "https://stackoverflow.com/questions/151652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can I trust the PHP __destruct() method to be called?

In PHP5, is the __destruct() method guaranteed to be called for each object instance? Can exceptions in the program prevent this from happening?

A: Use a shutdown function if you want to be sure: register_shutdown_function()

A: It's also worth mentioning that, in the case of a subclass that has its own destructor, the parent destructor is not called automatically. You have to explicitly call parent::__destruct() from the subclass's __destruct() method if the parent class does any required cleanup.

A: The destructor will be called when all references are freed, or when the script terminates. I assume this means when the script terminates properly. I would say that critical exceptions would not guarantee the destructor being called. The PHP documentation is a little thin, but it does say that exceptions in the destructor will cause issues.

A: In my experience destructors will always be called in PHP 5.3, but be warned that if some piece of code calls exit() or if a fatal error occurs, PHP will call destructors in "any" order (I think the actual order is the order in memory, or the order the memory was reserved for the objects; in practice, this order is almost always problematic). This is referred to as the "shutdown sequence" in the PHP documentation. The PHP documentation on destructors says:

PHP 5 introduces a destructor concept similar to that of other object-oriented languages, such as C++. The destructor method will be called as soon as there are no other references to a particular object, or in any order during the shutdown sequence.

As a result, if you have class X which holds a reference to Y, the destructor of X may be called AFTER the destructor of Y has already been called. Hopefully, the reference to Y was not that important... Officially this is not a bug because it has been documented. However, it's very hard to work around this issue, because officially PHP provides no way to know whether destructors are being called normally (in correct order) or in "any" order, where you cannot use data from referenced objects because those might have been already destroyed. One could work around this lack of detection using debug_backtrace() and examining the stack. Lack of a normal stack seems to imply the "shutdown sequence" with PHP 5.3, but this, too, is undefined. If you have circular references, the destructors of those objects will not be called at all with PHP 5.2 or less, and will be called in "any" order during the "shutdown sequence" in PHP 5.3 or greater. For circular references, there does not exist a logically "correct" order, so "any" order is fine for those. There are some exceptions (this is PHP, after all):

* If exit() is called in another destructor, any remaining destructors will not be called (http://php.net/manual/en/language.oop5.decon.php)
* If a FATAL error occurs anywhere (many possible causes; e.g. trying to throw an exception out of any other destructor could be one cause), any remaining destructors will not be called.

Of course, if the PHP engine hits a segmentation fault or some other internal bug occurs, then all bets are off. If you want to understand the current implementation of the "shutdown sequence", see https://stackoverflow.com/a/14101767. Note that this implementation may change in future PHP versions.

A: There is a current bug with circular references that stops the destruct method being called implicitly: http://bugs.php.net/bug.php?id=33595 It should be fixed in 5.3.

A: It's worth noting that, although destructors are likely to be called, there is no guarantee they will be called, as this tale from The Daily WTF illustrates: “All right,” I sighed. As tempting as it was to ask him to describe the just-made-up concept of process marshaling, I decided to concede. 
And just at that moment, I thought of the perfect rebuttal. “But what if you just, say, pull the plug? A Finally block __destruct won’t execute when the computer is turned off!” I expected the developers to jump back and chastise me for an “unreasonable scenario.” But instead, their faces turned bright white. They slowly turned to look at each other. It was clear that they made the same mistake that many before them had made: believing Try-Finally to be as infallible as things like database transactions. “And… umm…” I said slowly, breaking the awkward silence, “that’s why… you should… never put critical… business transaction code in finally blocks.”

It's okay to assume that the destructor will be called if you're just freeing resources or doing some logging, but destructors aren't safe for 'undo-ing' previous operations to try to make some code ACID-compliant.
{ "language": "en", "url": "https://stackoverflow.com/questions/151660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Tool for adding license headers to source files?

I'm looking for a tool that will, in bulk, add a license header to some source files, some of which already have the header. Is there a tool out there that will insert a header if it is not already present?

Edit: I am intentionally not marking an answer to this question, since answers are basically all environment-specific and subjective.

A:

    #!/bin/bash
    for i in *.cc       # or whatever other pattern...
    do
      if ! grep -q Copyright $i
      then
        cat copyright.txt $i >$i.new && mv $i.new $i
      fi
    done

A: Check out license-adder. It supports multiple code files (even custom ones) and handles existing headers correctly. It comes with templates for the most common open source licenses.

A: Here is one I rolled in PHP to modify PHP files. I also had old license information to delete, so it replaces the old text first, then adds the new text immediately after the opening <?php

    class Licenses
    {
        protected $paths = array();
        protected $oldTxt = '/**
     * Old license to delete
     */';
        protected $newTxt = '/**
     * @license http://opensource.org/licenses/osl-3.0.php Open Software License (OSL 3.0)
     */';

        function licensesForDir($path)
        {
            foreach (glob($path.'/*') as $eachPath) {
                if (is_dir($eachPath)) {
                    $this->licensesForDir($eachPath);
                }
                if (preg_match('#\.php#', $eachPath)) {
                    $this->paths[] = $eachPath;
                }
            }
        }

        function exec()
        {
            $this->licensesForDir('.');
            foreach ($this->paths as $path) {
                $this->handleFile($path);
            }
        }

        function handleFile($path)
        {
            $source = file_get_contents($path);
            $source = str_replace($this->oldTxt, '', $source);
            $source = preg_replace('#\<\?php#', "<?php\n".$this->newTxt, $source, 1);
            file_put_contents($path, $source);
            echo $path."\n";
        }
    }

    $licenses = new Licenses;
    $licenses->exec();

A: Here's one I found on the Apache list. It's written in Ruby and seems easy enough to read. You should even be able to call it from rake for extra special niceness. :)

A: Python 2 solution, modify for your own needs. Features:

* handles UTF headers (important for most IDEs)
* recursively updates all files in the target directory matching the given mask (modify the .endswith parameter for the file mask of your language: .c, .java, etc.)
* ability to overwrite previous copyright text (provide the old copyright parameter to do this)
* optionally omits directories given in the excludedir array

    # updates the copyright information for all .cs files
    # usage: call recursive_traversal with the following parameters:
    # parent directory, old copyright text content, new copyright text content

    import os

    excludedir = ["..\\Lib"]

    def update_source(filename, oldcopyright, copyright):
        utfstr = chr(0xef)+chr(0xbb)+chr(0xbf)
        fdata = file(filename,"r+").read()
        isUTF = False
        if (fdata.startswith(utfstr)):
            isUTF = True
            fdata = fdata[3:]
        if (oldcopyright != None):
            if (fdata.startswith(oldcopyright)):
                fdata = fdata[len(oldcopyright):]
        if not (fdata.startswith(copyright)):
            print "updating "+filename
            fdata = copyright + fdata
            if (isUTF):
                file(filename,"w").write(utfstr+fdata)
            else:
                file(filename,"w").write(fdata)

    def recursive_traversal(dir, oldcopyright, copyright):
        global excludedir
        fns = os.listdir(dir)
        print "listing "+dir
        for fn in fns:
            fullfn = os.path.join(dir,fn)
            if (fullfn in excludedir):
                continue
            if (os.path.isdir(fullfn)):
                recursive_traversal(fullfn, oldcopyright, copyright)
            else:
                if (fullfn.endswith(".cs")):
                    update_source(fullfn, oldcopyright, copyright)

    oldcright = file("oldcr.txt","r+").read()
    cright = file("copyrightText.txt","r+").read()
    recursive_traversal("..", oldcright, cright)
    exit()

A: Here's a Bash script that'll do the trick, assuming you have the license header in the file license.txt.

File addlicense.sh:

    #!/bin/bash
    for x in $*; do
      head -$LICENSELEN $x | diff license.txt - || ( ( cat license.txt; echo; cat $x ) > /tmp/file; mv /tmp/file $x )
    done

Now run this in your source directory:

    export LICENSELEN=`wc -l license.txt | cut -f1 -d ' '`
    find . 
-type f \(-name \*.cpp -o -name \*.h \) -print0 | xargs -0 ./addlicense.sh A: Check out the copyright-header RubyGem. It supports files with extensions ending in php, c, h, cpp, hpp, hh, rb, css, js, html. It can also add and remove headers. Install it by typing "sudo gem install copyright-header" After that, can do something like: copyright-header --license GPL3 \ --add-path lib/ \ --copyright-holder 'Dude1 <dude1@host.com>' \ --copyright-holder 'Dude2 <dude2@host.com>' \ --copyright-software 'Super Duper' \ --copyright-software-description "A program that makes life easier" \ --copyright-year 2012 \ --copyright-year 2012 \ --word-wrap 80 --output-dir ./ It also supports custom license files using the --license-file argument. A: Edit: If you're using eclipse, there's a plugin I wrote a simple python script based on Silver Dragon's reply. I needed a more flexible solution so I came up with this. It allows you to add a headerfile to all files in a directory, recursively. You can optionally add a regex which the filenames should match, and a regex wich the directory names should match and a regex which the first line in the file shouldn't match. You can use this last argument to check if the header is already included. This script will automatically skip the first line in a file if this starts with a shebang (#!). This to not break other scripts that rely on this. If you do not wish this behaviour you'll have to comment out 3 lines in writeheader. here it is: #!/usr/bin/python """ This script attempts to add a header to each file in the given directory The header will be put the line after a Shebang (#!) if present. If a line starting with a regular expression 'skip' is present as first line or after the shebang it will ignore that file. 
If filename is given only files matchign the filename regex will be considered for adding the license to, by default this is '*' usage: python addheader.py headerfile directory [filenameregex [dirregex [skip regex]]] easy example: add header to all files in this directory: python addheader.py licenseheader.txt . harder example adding someone as copyrightholder to all python files in a source directory,exept directories named 'includes' where he isn't added yet: python addheader.py licenseheader.txt src/ ".*\.py" "^((?!includes).)*$" "#Copyright .* Jens Timmerman*" where licenseheader.txt contains '#Copyright 2012 Jens Timmerman' """ import os import re import sys def writeheader(filename,header,skip=None): """ write a header to filename, skip files where first line after optional shebang matches the skip regex filename should be the name of the file to write to header should be a list of strings skip should be a regex """ f = open(filename,"r") inpt =f.readlines() f.close() output = [] #comment out the next 3 lines if you don't wish to preserve shebangs if len(inpt) > 0 and inpt[0].startswith("#!"): output.append(inpt[0]) inpt = inpt[1:] if skip and skip.match(inpt[0]): #skip matches, so skip this file return output.extend(header) #add the header for line in inpt: output.append(line) try: f = open(filename,'w') f.writelines(output) f.close() print "added header to %s" %filename except IOError,err: print "something went wrong trying to add header to %s: %s" % (filename,err) def addheader(directory,header,skipreg,filenamereg,dirregex): """ recursively adds a header to all files in a dir arguments: see module docstring """ listing = os.listdir(directory) print "listing: %s " %listing #for each file/dir in this dir for i in listing: #get the full name, this way subsubdirs with the same name don't get ignored fullfn = os.path.join(directory,i) if os.path.isdir(fullfn): #if dir, recursively go in if (dirregex.match(fullfn)): print "going into %s" % fullfn 
addheader(fullfn, header,skipreg,filenamereg,dirregex) else: if (filenamereg.match(fullfn)): #if file matches file regex, write the header writeheader(fullfn, header,skipreg) def main(arguments=sys.argv): """ main function: parses arguments and calls addheader """ ##argument parsing if len(arguments) > 6 or len(arguments) < 3: sys.stderr.write("Usage: %s headerfile directory [filenameregex [dirregex [skip regex]]]\n" \ "Hint: '.*' is a catch all regex\nHint:'^((?!regexp).)*$' negates a regex\n"%sys.argv[0]) sys.exit(1) skipreg = None fileregex = ".*" dirregex = ".*" if len(arguments) > 5: skipreg = re.compile(arguments[5]) if len(arguments) > 3: fileregex = arguments[3] if len(arguments) > 4: dirregex = arguments[4] #compile regex fileregex = re.compile(fileregex) dirregex = re.compile(dirregex) #read in the headerfile just once headerfile = open(arguments[1]) header = headerfile.readlines() headerfile.close() addheader(arguments[2],header,skipreg,fileregex,dirregex) #call the main method main() A: For Java you can use Maven's License plugin: http://code.google.com/p/maven-license-plugin/ A: Ok here is a simple windows-only UI tool that searches for all files of your specified type in a folder, prepends the text you desire to the top (your license text), and copies the result to another directory (avoiding potential overwrite problems). It's also free. Required .Net 4.0. I am actually the author, so feel free to request fixes or new features... no promises on delivery schedule though. ;) more info: License Header tool at Amazify.com A: If you still need one, there is a little tool I have written, named SrcHead. You can find it at http://www.solvasoft.nl/downloads.html A: if you are using sbt, there is https://github.com/Banno/sbt-license-plugin
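The scripts above target Python 2; a rough Python 3 sketch of the same recursive approach — walk a tree, preserve a shebang, and skip files that already carry the header — might look like this (the extensions and header text are placeholders, not taken from any answer above):

```python
import os

def add_header(path, header):
    """Prepend header to the file at path unless it is already present;
    a shebang line, if any, stays first. Returns True if the file changed."""
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    shebang = ""
    if text.startswith("#!"):            # keep e.g. #!/usr/bin/env python first
        cut = text.find("\n") + 1
        shebang, text = text[:cut], text[cut:]
    if text.startswith(header):          # idempotent: never add the header twice
        return False
    with open(path, "w", encoding="utf-8") as f:
        f.write(shebang + header + text)
    return True

def add_headers(root, header, exts=(".py",)):
    """Walk root recursively and apply add_header to files with matching
    extensions. Returns how many files were changed."""
    changed = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(tuple(exts)):
                changed += add_header(os.path.join(dirpath, name), root and name and 0 or header) if False else add_header(os.path.join(dirpath, name), header)
    return changed
```

Running it twice over the same tree changes nothing the second time, which is the property most of the one-liner shell versions above lack.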
{ "language": "en", "url": "https://stackoverflow.com/questions/151677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "97" }
Q: Dynamically setting the Header text of a Silverlight DataGrid Column <my:DataGridTemplateColumn CanUserResize="False" Width="150" Header="{Binding MeetingName, Source={StaticResource LocStrings}}" SortMemberPath="MeetingName"> </my:DataGridTemplateColumn> I have the above column in a Silverlight grid control. But it is giving me a XamlParser error because of how I am trying to set the Header property. Has anyone done this before? I want to do this for multiple languages. Also my syntax for the binding to a resouce is correct because I tried it in a lable outside of the grid. A: You can't Bind to Header because it's not a FrameworkElement. You can make the text dynamic by modifying the Header Template like this: xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data" xmlns:dataprimitives="clr-namespace:System.Windows.Controls.Primitives;assembly=System.Windows.Controls.Data" <data:DataGridTemplateColumn> <data:DataGridTemplateColumn.HeaderStyle> <Style TargetType="dataprimitives:DataGridColumnHeader"> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <TextBlock Text="{Binding MeetingName, Source={StaticResource LocStrings}}" /> </ControlTemplate> </Setter.Value> </Setter> </Style> </data:DataGridTemplateColumn.HeaderStyle> </data:DataGridTemplateColumn> A: Found an interesting workaround that also works with the wpflocalizeaddin.codeplex.com: Created by Slyi It uses an IValueConverter: public class BindingConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { if (value.GetType().Name == "Binding") { ContentControl cc = new ContentControl(); cc.SetBinding(ContentControl.ContentProperty, value as Binding); return cc; } else return value; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return null; } } And a style for the DataGridColumnHeader <UserControl.Resources> <local:BindingConverter 
x:Key="BindCon"/> <Style x:Key="ColBinding" TargetType="dataprimitives:DataGridColumnHeader" > <Setter Property="ContentTemplate" > <Setter.Value> <DataTemplate> <ContentPresenter Content="{Binding Converter={StaticResource BindCon}}" /> </DataTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> so that you can keep your favorite binding syntax on the Header attribute <Grid x:Name="LayoutRoot" Background="White"> <StackPanel> <TextBox Text="binding header" x:Name="tbox" /> <data:DataGrid ItemsSource="{Binding AllPeople,Source={StaticResource folks}}" AutoGenerateColumns="False" ColumnHeaderStyle="{StaticResource ColBinding}" > <data:DataGrid.Columns> <data:DataGridTextColumn Binding="{Binding ID}" Header="{Binding Text, ElementName=tbox}" /> <data:DataGridTextColumn Binding="{Binding Name}" Header="hello" /> </data:DataGrid.Columns> </data:DataGrid> </StackPanel> </Grid> http://cid-289eaf995528b9fd.skydrive.live.com/self.aspx/Public/HeaderBinding.zip A: It does seem much simpler to set the value in code, as mentioned above: dg1.Columns[3].Header = SomeDynamicValue; Avoids using the Setter Property syntax, which in my case seemed to mess up the styling, even though I did try using ContentTemplate as well as Template. One point I slipped up on was that it is better to use the dg1.Columns[3].Header notation rather than trying to reference a named column. I had named one of my columns and tried to reference that in code but got null exceptions. Using the Columns[index] method worked well, and I could assign the Header a text string based on localization resources. 
A: My workaround was to use an attached property to set the binding automatically: public static class DataGridColumnHelper { public static readonly DependencyProperty HeaderBindingProperty = DependencyProperty.RegisterAttached( "HeaderBinding", typeof(object), typeof(DataGridColumnHelper), new PropertyMetadata(null, DataGridColumnHelper.HeaderBinding_PropertyChanged)); public static object GetHeaderBinding(DependencyObject source) { return (object)source.GetValue(DataGridColumnHelper.HeaderBindingProperty); } public static void SetHeaderBinding(DependencyObject target, object value) { target.SetValue(DataGridColumnHelper.HeaderBindingProperty, value); } private static void HeaderBinding_PropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e) { DataGridColumn column = d as DataGridColumn; if (column == null) { return; } column.Header = e.NewValue; } } Then, in the XAML: <data:DataGridTextColumn util:DataGridColumnHelper.HeaderBinding="{Binding MeetingName, Source={StaticResource LocStrings}}" /> A: To keep the visual styling from the original header, use ContentTemplate instead of Template: <Setter Property="ContentTemplate"> <Setter.Value> <DataTemplate> <Image Source="<image url goes here>"/> </DataTemplate> </Setter.Value> A: Why not simply set this in code: dg1.Columns[3].Header = SomeDynamicValue; A: Please note in solution provided by RobSiklos, Source {staticResource...} is the Key, if you plan to pass the RelativeSource like Binding DataContext.SelectedHistoryTypeItem,RelativeSource={RelativeSource AncestorType=sdk:DataGrid}, it may not work A: I got some solution for the binding. Since you use DataGridTemlateColumn, subclass it and add a property of type Binding named for instance "HeaderBinding". Now you can bind to that property from the XAML. Next, you should propagate the binding to the TextBlock in the DataTemplate of your header. For instance, you can do it with OnLoaded event of that TextBlock. 
HeaderTextBlock.SetBinding(TextBlock.TextProperty, HeaderBinding); That's it. If you have more columns and want to use only one DataTemplate then it's a bit more complicated, but the idea is the same.
{ "language": "en", "url": "https://stackoverflow.com/questions/151682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Asynchronous WPF Commands Note: The code in this question is part of deSleeper if you want the full source. One of the things I wanted out of commands was a baked design for asynchronous operations. I wanted the button pressed to disable while the command was executing, and come back when complete. I wanted the actual work to be performed in a ThreadPool work item. And lastly, I wanted a way to handle any errors that occurred during the asynchronous processing. My solution was an AsyncCommand: public abstract class AsyncCommand : ICommand { public event EventHandler CanExecuteChanged; public event EventHandler ExecutionStarting; public event EventHandler<AsyncCommandCompleteEventArgs> ExecutionComplete; public abstract string Text { get; } private bool _isExecuting; public bool IsExecuting { get { return _isExecuting; } private set { _isExecuting = value; if (CanExecuteChanged != null) CanExecuteChanged(this, EventArgs.Empty); } } protected abstract void OnExecute(object parameter); public void Execute(object parameter) { try { IsExecuting = true; if (ExecutionStarting != null) ExecutionStarting(this, EventArgs.Empty); var dispatcher = Dispatcher.CurrentDispatcher; ThreadPool.QueueUserWorkItem( obj => { try { OnExecute(parameter); if (ExecutionComplete != null) dispatcher.Invoke(DispatcherPriority.Normal, ExecutionComplete, this, new AsyncCommandCompleteEventArgs(null)); } catch (Exception ex) { if (ExecutionComplete != null) dispatcher.Invoke(DispatcherPriority.Normal, ExecutionComplete, this, new AsyncCommandCompleteEventArgs(ex)); } finally { dispatcher.Invoke(DispatcherPriority.Normal, new Action(() => IsExecuting = false)); } }); } catch (Exception ex) { IsExecuting = false; if (ExecutionComplete != null) ExecutionComplete(this, new AsyncCommandCompleteEventArgs(ex)); } } public virtual bool CanExecute(object parameter) { return !IsExecuting; } } so the question is: Is all this necessary? 
I've noticed built in asynchronous support for data-binding, so why not command execution? Perhaps it's related to the parameter question, which is my next question. A: I've been able to refine the original sample down and have some advice for anyone else running into similar situations. First, consider if BackgroundWorker will meet the needs. I still use AsyncCommand often to get the automatic disable function, but if many things could be done with BackgroundWorker. But by wrapping BackgroundWorker, AsyncCommand provides command like functionality with asynchronous behavior (I also have a blog entry on this topic) public abstract class AsyncCommand : ICommand { public event EventHandler CanExecuteChanged; public event EventHandler RunWorkerStarting; public event RunWorkerCompletedEventHandler RunWorkerCompleted; public abstract string Text { get; } private bool _isExecuting; public bool IsExecuting { get { return _isExecuting; } private set { _isExecuting = value; if (CanExecuteChanged != null) CanExecuteChanged(this, EventArgs.Empty); } } protected abstract void OnExecute(object parameter); public void Execute(object parameter) { try { onRunWorkerStarting(); var worker = new BackgroundWorker(); worker.DoWork += ((sender, e) => OnExecute(e.Argument)); worker.RunWorkerCompleted += ((sender, e) => onRunWorkerCompleted(e)); worker.RunWorkerAsync(parameter); } catch (Exception ex) { onRunWorkerCompleted(new RunWorkerCompletedEventArgs(null, ex, true)); } } private void onRunWorkerStarting() { IsExecuting = true; if (RunWorkerStarting != null) RunWorkerStarting(this, EventArgs.Empty); } private void onRunWorkerCompleted(RunWorkerCompletedEventArgs e) { IsExecuting = false; if (RunWorkerCompleted != null) RunWorkerCompleted(this, e); } public virtual bool CanExecute(object parameter) { return !IsExecuting; } } A: As I answered in your other question, you probably still want to bind to this synchronously and then launch the commands asynchronously. 
That way you avoid the problems you're having now.
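The shape of the refined AsyncCommand above — a guard flag that disables re-execution while the work runs on a pool thread, plus a completion callback that receives any error — is language-neutral. A rough Python sketch of the same pattern (not the author's C# code; in real WPF the callback would additionally be marshalled back to the UI thread via the Dispatcher, which this sketch omits):

```python
from concurrent.futures import ThreadPoolExecutor

class AsyncCommand:
    """Sketch of the pattern: CanExecute is False while the work runs,
    and on_complete plays the role of RunWorkerCompleted."""

    def __init__(self, work, on_complete=None):
        self._work = work                  # the OnExecute body
        self._on_complete = on_complete    # receives (error, future) when done
        self._pool = ThreadPoolExecutor(max_workers=1)
        self.is_executing = False

    def can_execute(self):
        return not self.is_executing

    def execute(self, parameter=None):
        if not self.can_execute():         # bound button would be disabled here
            return None
        self.is_executing = True
        future = self._pool.submit(self._work, parameter)
        future.add_done_callback(self._finished)
        return future

    def _finished(self, future):
        self.is_executing = False
        if self._on_complete is not None:
            # future.exception() is None on success, mirroring
            # RunWorkerCompletedEventArgs.Error
            self._on_complete(future.exception(), future)
```

The guard flag is what gives the "button disables itself while the command executes" behavior for free once the UI re-queries CanExecute.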
{ "language": "en", "url": "https://stackoverflow.com/questions/151686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Where do you send the kernel console on an embedded system? I'm developing an embedded system which currently boots Linux with console output on serial port 1 (using the console boot param from the boot loader). However, eventually we will be using this serial port. What is the best solution for the kernel console output? /dev/null? Can it be put on a pty somehow so that we could potentially get access to it? A: If you just want to read kernel printk messages from the console, and not actually run getty or a shell on it, you can use netconsole. You can supply the following to your bootloader kernel options (or to modprobe netconsole): netconsole=4444@10.0.0.1/eth1,9353@10.0.0.2/12:34:56:78:9a:bc where 4444 is the local port, 10.0.0.1 is the local ip, eth1 is the local interface to send the messages out of. 9353 is the remote port, 10.0.0.2 is the remote ip to send the messages to, and the final argument is your remote (eg: your desktop) system's MAC address. Then to view the messages run: netcat -u -l -p 9353 You can read more about this in Documentation/networking/netconsole.txt A: You can access the printk message buffer from a shell using dmesg. The kernel buffer is of finite size and will overwrite the oldest entries with the most recent, so you'd need to either check dmesg periodically or hook up netconsole as @bmdhacks suggests. If there is no console you'll miss any oops information printed out by a kernel crash. Even using netconsole may not help there, if the kernel dies and begins to reboot before the UDP packets carrying the output reach the remote socket. We generally modify kernel/panic.c:panic() to save register contents and other state to an area of NOR flash, so there will be at least some information available for post-mortem debugging.
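As an aside, netconsole emits plain UDP datagrams, so the `netcat -u -l -p 9353` listener above can be replaced with a few lines of Python if you want to timestamp or log the messages yourself — a minimal sketch (function names are my own, not part of netconsole):

```python
import socket

def make_listener(port=9353):
    """Bind a UDP socket to receive netconsole datagrams;
    plays the same role as `netcat -u -l -p 9353`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    return sock

def next_message(sock, bufsize=4096):
    """Block until one datagram arrives; return (sender_ip, text)."""
    data, addr = sock.recvfrom(bufsize)
    return addr[0], data.decode("utf-8", errors="replace")
```

A loop around `next_message` writing to a file gives a persistent log that survives the target rebooting, which `dmesg` on the target does not.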
{ "language": "en", "url": "https://stackoverflow.com/questions/151687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: C# Custom Applications that Access TFS We have built a custom application, for internal use, that accesses TFS. We use the Microsoft libraries for this (e.g. Microsoft.TeamFoundation.dll). When this application is deployed to PCs that already have Team Explorer or VS installed, everything is fine. When it’s deployed to PCs that don’t have this installed, it fails. We include all the required DLLs, but the error we get is “Common Language Runtime detected an invalid program”. The error occurs on the moderately innocuous line: TeamFoundationServer myServer = new TeamFoundationServer(“ourserver.ourdomain.com”); Interestingly the popular TFSAdmin tool (when you drop in the required DLLs to the exe directory) gives the same error. I also note that many other custom applications that access TFS (e.g. http://hinshelwood.com/tfsstickybuddy.aspx) also require Team Explorer or VS to be installed to work. Clearly the DLLs are not enough and there is some magic that happens when these installs occur. Anyone know what it is? Anyone know how to make the magic happen? A: The "officially supported" way of writing an application that uses the TFS Object Model is to have Team Explorer installed on the machine. This is especially important for servicing purposes - i.e. making sure that when a service pack for VSTS is applied to the client machine then the TFS API's get upgraded as well. There are no re-distribution rights to the TFS API's therefore they should not be shipped with your application. BTW - Also note that if you are writing an application that uses the TFS OM, then be sure to compile it as "X86" only rather than "Any CPU". The TFS API assemblies are all marked X86, but if your app is marked "Any CPU" then on an x64 machine it will get loaded by the 64bit CLR but when it comes time to dynamically load the TFS Assemblies it will fail. Good luck, Martin. 
A: Try this list: http://geekswithblogs.net/jjulian/archive/2007/06/14/113228.aspx And also try putting them in the GAC. It may be a security trust issue - assemblies in the GAC are granted a higher CAS level.
{ "language": "en", "url": "https://stackoverflow.com/questions/151691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Infragistics V7.3 vs. V8.1 We are using the Infragistics controls for .net (both ASP.Net and WinForms) for a few years. We want to upgrade our current version (v6.3) and are in a pickle. We can upgrade to v7.3 or v8.1 but not to a later one due to licensing limitations, and we don't want to spend more money on licenses unless it is really necessary. Rumor has it that the x.1 version is often less stable than the previous (x-1).3 version because there are more changes. This is only a rumor and I don't know if it is true or not, nor do I claim it to be true. What I want to find out is which option is preferred and why: * *Upgrade to v7.3 *Upgrade to v8.1 *Buy more licenses and upgrade to the latest and greatest (must be really good reason). Any recommendations? Do you have experience with both versions and can compare their stability? Thank you A: I haven't, traditionally, had any "stability" issues with Infragistics NetAdvantage releases. However, I have found breaking changes (particularly with the ASP.NET CSOM side), and smaller, more insidious gotchas. As a result, our company's policy is to not upgrade an Infragistics-enabled project until it is absolutely, critically necessary. (Yes, that means my current dev machine actually has four versions of IG installed. My last one had 8, because it also supported VS2003/.NET 1.1!) IG does tend to fix quite a few bugs when it drops a new release; often I'll find that a hack or workaround I needed on an old project is no longer necessary on a new one. (This is, obviously, control-specific.) If you are going to upgrade the control set for a given project, my official advice would be to go as new as you can. You're going to (potentially) have breaking changes and such no matter what, but at least this way, you get the advantages of the most recent bug fixes as well. This may not be justification for new licensing, mind you -- if you're canceling your IG subscription, this isn't a reason to undo that. 
But go as new as you can -- which for you, sounds like 8.1.
{ "language": "en", "url": "https://stackoverflow.com/questions/151696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WPF Commands and Parameters I'm finding the WPF command parameters to be a limitation. Perhaps that's a sign that I'm using them for the wrong purpose, but I'm still giving it a try before I scrap and take a different tack. I put together a system for executing commands asynchronously, but it's hard to use anything that requires data input. I know one common pattern with WPF commands is to pass in this. But this will not work at all for asynchronous commands because all the dependency properties are then inaccessible. I end up with code like this: <Button Command="{Binding ElementName=servicePage, Path=InstallServiceCommand}"> <Button.CommandParameter> <MultiBinding Converter="{StaticResource InstallServiceParameterConverter}"> <MultiBinding.Bindings> <Binding ElementName="servicePage" Path="IsInstalled"/> <Binding ElementName="localURI" Path="Text"/> <Binding ElementName="meshURI" Path="Text"/> <Binding ElementName="registerWithMesh" Path="IsChecked"/> </MultiBinding.Bindings> </MultiBinding> </Button.CommandParameter> </Button> and also need the InstallServiceParametersConverter class (plus InstallServiceParameters). Anyone see an obvious way to improve upon this? A: Let me point you to my open source project Caliburn. You can find it at here. The feature that would most help solve your problem is documented briefly here A: Commands are for avoid tight coupling between your UI and program logic. Here, you're trying to get around that so you'll find it painful. You want to have your UI bound to some other object (that contains this data) and your command can then simply make a call to that object. Try searching for MV-V-M, or look at the PRISM example. A: Try using something like MVVM: Create a class that stores all the data displayed in the current "view" (window, page, whatever makes sense for your application). Bind your control to an instance of this class. 
Have the class expose some ICommand properties, and bind the button's Command property to the appropriate property in the data class; you don't need to set the command parameter because all the data has already been transferred to the object using normal everyday data binding. Have an ICommand derived class that calls back into your object; look at this link for several implementations: http://dotnet.org.za/rudi/archive/2009/03/05/the-power-of-icommand.aspx Inside the method called by the command, pack all required data and send it off to a background thread. A: You need something that will allow you to request the proper object. Perhaps you need an object just for storing these parameters that your parent object can expose as a property. Really what you should do is leave the commands synchronous and execute them asynchronously by throwing off a new thread or passing them to a command manager (home rolled).
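The MVVM advice above — bind the inputs onto a view-model first, then let a parameterless command read them — is language-neutral, and a rough sketch makes the shape clear. The class and property names below (including the localURI/meshURI mapping from the question) are hypothetical, chosen only to mirror the example:

```python
class RelayCommand:
    """Minimal ICommand stand-in: delegates Execute/CanExecute to callables."""
    def __init__(self, execute, can_execute=lambda: True):
        self._execute = execute
        self._can_execute = can_execute

    def can_execute(self):
        return self._can_execute()

    def execute(self):
        if self.can_execute():
            self._execute()

class ServiceViewModel:
    """View state lives on the object via 'binding'; the command needs
    no CommandParameter and no MultiBinding converter."""
    def __init__(self):
        self.local_uri = ""       # would be bound to the localURI TextBox
        self.mesh_uri = ""        # would be bound to the meshURI TextBox
        self.installed = []
        self.install_command = RelayCommand(
            self._install,
            can_execute=lambda: bool(self.local_uri))

    def _install(self):
        # All inputs are already on the object -- no parameter packing needed,
        # and a background thread could read them from here.
        self.installed.append((self.local_uri, self.mesh_uri))
```

The four-way MultiBinding plus converter from the question collapses into plain property reads once the data lives on the view-model.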
{ "language": "en", "url": "https://stackoverflow.com/questions/151700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Running Google Analytics in iframe? Our company runs a web site (oursite.com) with affiliate partners who send us traffic. In some cases, we set up our affiliates with their own subdomain (affiliate.oursite.com), and they display selected content from our site on their site (affiliate.com) using an iframe. Example of a page on their site: <html> <head></head> <body> <iframe src="affiliate.example.com/example_page.html"> ...content... [google analytics code for affiliate.oursite.com] </iframe> [google analytics code for affiliate.com] </body> </html> We would like to have Google Analytics tracking for affiliate.oursite.com. At present, it does not seem that Google is receiving any data from the affiliate when the page is loaded from the iframe. Now, there are security implications in that Javascript doesn't like accessing information about a page in a different domain, and IE doesn't like setting cookies for a different domain. Does anyone have a solution to this? Will we need to CNAME the affiliate.oursite.com to cname.oursite.com, or is there a cleaner solution? A: In the specific case of iframes, Google doesn't say much. I've been in the same situation but I'm glad I figured it out. I posted a walkthrough here. It's in French but you won't need to speak the language to copy / paste the code. Plus there's a demo file you can download. A: Sorry, but it's not going to work. The reason is because Google Analytics uses first-party cookies. This means the cookies that GA sets are specific to the domain the code is on. In your case, the iFrame is on a third-party domain. This means you're going to have two sets of GA cookies (one set for each domain), and no real way to reconcile the data. A: * *You have to append the Google Analytics tracking code to the end of example_page.html. The content between the <iframe> - </iframe> tag only displays for browsers, which do not support that specific tag. 
*Should you need to merge the results from the subdomains, there's an excellent article on Google's help site: How do I track all of the subdomains for my site in one profile? A: I found this YouTube video for Google Analytics very helpful. The web page can be found here: How to Track Conversions in Iframes with Google Tag Manager
{ "language": "en", "url": "https://stackoverflow.com/questions/151701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Code Promotion: Build or Binary? Given a pretty basic source tree structure like the following: trunk ------- QA |-------- Stage |------- Prod |------ And an environment which mirrors that (Dev, QA, Staging and Production servers) - how do you all manage automated or manual code promotion? Do you use a CI server to build and promote at all stages? CI at Dev to build the binaries which are used throughout? Some other hybrid? I've been kicking around a couple of thoughts. The first being that each promotion would do a get latest, build, and then push the output of the build to the correct server. The second being that at some point - QA or Staging - the binaries that were promoted would be the exact same ones copied to the other stages. The third is keeping a secondary source tree for deployed binaries which would automatically move in lockstep with the code promotion. Any other thoughts or ideas? A: You want absolutely no possibility of the production code not being identical to the one QA tested, so you should use binaries. You should also tag the sources used to create each build, so if needed you can reproduce the build in a dev environment. At least if you make a mistake at this point, the consequences won't be as drastic. A: We make use of CI at the dev stage, and use daily builds that are promoted. These daily builds, if successful, are tagged in SVN so that we don't need to keep a separate copy of the binaries. Any third party libraries referenced are also included so that a tag is an exact source copy of what is compiled.
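One cheap way to enforce the "production binaries are exactly what QA tested" guarantee from the first answer is to record a checksum manifest at the QA promotion and re-verify it at every later stage. A sketch of that idea (the directory layout is hypothetical; any promotion tool could carry the manifest forward):

```python
import hashlib
import os

def digest(path, chunk=1 << 16):
    """SHA-256 of one file, read in chunks so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def manifest(root):
    """Map each file under root (by relative path) to its digest."""
    out = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            out[os.path.relpath(full, root)] = digest(full)
    return out

def verify_promotion(qa_root, prod_root):
    """True only if prod holds byte-identical copies of everything QA tested."""
    return manifest(qa_root) == manifest(prod_root)
```

A rebuild at the production stage — even from the same tag — would change timestamps or embedded version stamps and fail this check, which is exactly why promoting the binaries themselves is the safer policy.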
{ "language": "en", "url": "https://stackoverflow.com/questions/151721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Developing for different platforms individually, does anyone recommend it? I know it is easy to recommend several cross platform libraries. However, are there benefits to treating each platform individually for your product? Yes there will be some base libraries used in all platforms, but UI and some other things would be different on each platform. I have no restriction that the product must be 100% alike on each platform. Mac, Linux, and Windows are the target platforms. Heavy win32 API, MFC is already used for the Windows version. The reason I'm not fully for cross platform libraries is because I feel that the end product will suffer a little in trying to generalize it for all platforms. A: I see the benefits as * *being able to get a totally native behavior in all your platforms which is a good thing for end users The risks being that * *You may end up coding three apps instead of one with a few differences (ie, it's easier to fall in the let's do X differently as well!) *You'll create slight incompatibilities between the different OS versions without noticing it. *The maintenance cost will suffer proportionally to the amount of different code you have So, by taking the risks in consideration, you can develop nice native behaving apps, with the largest common core possible (resisting the temptation of risk 1), a great set of integrated tests for all the platforms (minimizing risk 2) and designing to reduce the amount of code needed to get the native behavior (taking care of risk 3) A: I would say that the benefits of individual development for each platform are: - native look and feel - platform knowledge acquired by your developers -... i'm out of ideas Seriously, the cost of developing and maintaining 3 separate copies of your application could be huge if you're not careful. 
If it's just the GUI code you're worried about then by all means separate out the GUI portion into a per-platform development effort, but you'll regret not keeping the core "business logic" type code common. And given that keeping your GUI separate from your logic is generally considered a good idea, this would force your developers to maintain that separation when the temptation to put 'just a little bit' of business logic into the presentation layer inevitably arises. A: I can't deny that this is attractive, but it certainly raises the question of a middle ground. Obviously, you'll share some backend code, but how much can you share and what will you share in the UI side in terms of both design and code? I think this is an individual case issue. Usually, it probably isn't worth it, but some specific applications on some specific platforms should be targeted to the particulars of that operating system. A: Yes, cross-platform UI libraries are always going to make your program look and/or act a little "weird" on at least one platform. If you have good separation between the UI code and the internals, it isn't very hard to re-use the non-UI code, and create an optimized user interface for each platform. A lot of high-budget cross-platform applications are made just this way. A: It also depends just how different your platforms are, and whether all the functionality on one platform is available on another. I develop tools that have versions available for Win32 platforms, Windows CE and mobile, and various embedded platforms. Some aspects of the product simply aren't pertinent to platforms that don't have the matching hardware. For example, I'm currently working on a field-based land survey product that works with a variety of measurement devices, such as GPS and Total stations, using a variety of communications media, such as Bluetooth, RS232, and radio modems, on a wide range of platforms.
The specific version I'm currently working on will be hosted onboard a measurement device with a relatively small screen and keyboard, and very limited memory and storage. There is no point including the functionality relating to other devices, and it is highly beneficial to the user to keep the interface as simple and streamlined as possible. Streamlined user interfaces, small executables, and a zero tolerance for bloatware are still of paramount importance in some domains. Plenty of common source for sure, but also plenty of target specific source and conditional compilation. A: For client programs (i.e. not web server programs) it's pretty hard to find good Mac and Linux developers, and not really easy to find good Windows developers. The more platform independent code you have, the easier and faster it will be to complete your project on all three platforms. It's going to be expensive and risky to maintain three codebases. Your competitor who uses cross platform tools is going to beat you to the market every time.
{ "language": "en", "url": "https://stackoverflow.com/questions/151728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Custom Sorting of a DataSet Column I have a DataSet that contains a few columns. One of these columns is a number - most of the time. Because it's occasionally a string, in the database it's a varchar(10) column. However, when you sort a varchar column, it sorts it like a string. What I want to do instead is to try and override this somehow so that it sorts the integers like integers; this isn't all that hard, and I've already got a function that does this elsewhere in my code. However, I don't think it's possible to give a typed DataSet like I have a custom type with its own sorting implementation, and from what I can see BindingSource doesn't think about columns at all, which makes it awfully hard to sort on them. I can easily do it using the ListView/DataGridView sorting functionality -- but I'd like the display to be in virtual mode because of the quantity of data I have, and for that I need to provide my own sorting anyway. Is there any way to do what I want to do? A: I think your best bet is to add a new calculated column which converts the varchar(10) to an int, and sort on that. myDataTable.Columns.Add("Sorter", typeof(System.Int32), "Convert(TextColumn, 'System.Int32')"); This will throw exceptions when the string in your varchar column (which I've referred to as "TextColumn" in the code above) can't be converted to an integer, but you may be able to work around that using an Iif() function call in the expression.
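The underlying trick — sorting mostly-numeric strings numerically while still accepting the occasional non-numeric value — is language-agnostic. Here is a minimal sketch of such a sort key in Python (an illustration of the idea, not the ADO.NET expression above):

```python
def mixed_sort_key(value: str):
    """Numeric strings sort first, by integer value; non-numeric
    strings sort after them, alphabetically."""
    try:
        return (0, int(value), "")
    except ValueError:
        return (1, 0, value)

print(sorted(["10", "2", "abc", "1"], key=mixed_sort_key))
# → ['1', '2', '10', 'abc']
```

The same two-tier key (is-it-numeric first, then the value) is what the calculated-column approach approximates on the DataTable side.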
{ "language": "en", "url": "https://stackoverflow.com/questions/151730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Ext.form.FormPanel and form submission I've been trying to submit a form with the FormPanel using the Action class Ext defaults to. However, I'd like it to consider the response as a script, not JSON-encoded. Has anyone had any experience with this? A: The best plan would be to create a custom action by extending Ext.form.Action. You can then eval the response object or the result object in the success callback of your custom action. Your custom action can be called from Ext.form.BasicForm in the usual way. A: Using Form.getForm().submit() as your action response call works great and will automatically submit your form values to your backend as well as any custom values you would want to supply. On the return response you are passed a response object, which could be anything you want. So you could easily eval the return in the success handler. There are also overrides to add this functionality into a normal Ext.Ajax.request seen here. There also exists Ext.data.ScriptTagProxy which does the same thing but for cross domain script tags.
{ "language": "en", "url": "https://stackoverflow.com/questions/151731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Parameterized singleton patterns The link over here (http://www.yoda.arachsys.com/csharp/singleton.html) lists some singleton patterns in C#. The article also makes the obvious point that a singleton is not meant to accept parameters, “as otherwise a second request for an instance but with a different parameter could be problematic”. This means that any parameters you need to get the class working should be induced as a property. I am curious to know if there are any parameterized singleton design patterns out there. Accepting values as a property does not enforce anything on the consumer. A: Based on your question, it seems you may be looking at an Abstract Factory pattern (creates an instance of several families of classes) that keeps an internal list/dictionary of classes that have already been instantiated, thus mimicking the singleton pattern functionality. You would then use this factory class to request an object based on parameters you've passed in, and if it exists in its internal list it gets returned, and if not, a new instance is created and then added to the list and returned. A: "This means that any parameters you need to get the class working should be induced as a property." Ideally, a singleton class should not depend on external code. In case you need to provide additional information to the singleton's constructor, you can just create a pool of objects. It can be a simple list or any other suitable data structure. You will need to make it thread-safe (if it matters) and guarantee that there will not be multiple objects instantiated with the same parameters. Basically you will have a class factory. It will return the same object for the same parameters. In this case you will have N singleton objects - i.e. objects with different state will be treated as completely different instances. You can find examples of such singletons in Inversion of Control containers. For example you can have some service that depends on other services.
When you call container.Get(typeof(Service)), the DI container will automatically initialize the service instance with the required parameters and return it to the caller. But this service instance becomes a singleton - you will not be able to create another instance with the same parameters.
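The "pool of objects keyed by parameters" idea described above is sometimes called the multiton pattern. A minimal, thread-safe sketch in Python (a language-agnostic illustration — the class and key names are made up for the example):

```python
import threading

class Multiton:
    """At most one instance per distinct key: the 'N singletons'
    the answer describes, guarded by a class-level lock."""
    _instances = {}
    _lock = threading.Lock()

    def __new__(cls, key):
        with cls._lock:
            if key not in cls._instances:
                instance = super().__new__(cls)
                instance.key = key
                cls._instances[key] = instance
            return cls._instances[key]
```

Requesting the same parameters always yields the same object, while different parameters yield distinct instances — e.g. `Multiton("db1") is Multiton("db1")` holds, but `Multiton("db1") is Multiton("db2")` does not.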
{ "language": "en", "url": "https://stackoverflow.com/questions/151736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: iPhone tab bar Item image resolution? What is the resolution of the image for the tab bar item? And also, please provide some other useful information regarding that tab item image. Thanks in advance. A: The documentation says that the tab bar image is usually 30x30, but I've found that the best size to set up the images is 48x32 pixels. This size still renders and gives you a bit more space. The image is a PNG with transparency; only the mask is used. The UI renders the mask gray when unselected or blue/chrome when selected. A: Check the "UITabBarItem Class Reference" in the SDK documentation. A: http://developer.apple.com/iphone/library/documentation/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html When possible, you should use the system-provided buttons and icons in navigation bars, toolbars, and tab bars... For a complete list of standard buttons and icons, and guidelines on how to use them, see “System-Provided Buttons and Icons.” Of course, not every task your application performs is a standard one. If your application supports custom tasks users need to perform frequently, you need to create custom icons that represent these tasks in your toolbar or navigation bar. Similarly, if your application displays a tab bar that allows users to switch among custom application modes or custom subsets of data, you need to design tab bar icons that clearly describe these modes or subsets. This section gives you some guidance on how to design icons that work well in navigation bars, toolbars, and tab bars. Before you create the art for your icon, you need to spend some time thinking about what it should convey. As you consider designs, aim for an icon that is: * *Simple and streamlined. Too many details can make an icon appear sloppy or indecipherable. *Not easily mistaken for one of the system-provided icons. Users should be able to distinguish your custom icon from the standard icons at a glance. *Readily understood and widely acceptable.
Strive to create a symbol that most users will interpret correctly and that no users will find offensive. After you’ve decided on the appearance of your icon, follow these guidelines as you create it: * *Use the PNG format. *Use pure white with appropriate alpha. *Do not include a drop shadow. *Use anti-aliasing. *If you decide to add a bevel, be sure that it is 90° (to help you do this, imagine a light source positioned at the top of the icon). *For toolbar and navigation bar icons, create an icon that measures about 20 x 20 pixels. *For tab bar icons, create an icon that measures about 30 x 30 pixels... A: This statement is technically incorrect: "...only the mask is used. The UI renders the mask gray when unselected or blue/chrome when selected..." You are not supplying any type of mask. Rather, the tab image should simply be a monochrome .png image, i.e. only 1 color is used. If you provide a colored image, UIKit will quantize it down to a monochrome image. In the worst case the color image will be ~8000 bytes, which is a waste of ~6k (retina). The file format must be 24-bit .png with transparency for the quantization to work properly. Even though this is a color file format, don't use the color or you're wasting space. Bottom line: to get the right size with the best performance and memory usage, use one of these: Standard display   48x32 .PNG, 24-bit with transparency (but use only 1 color). Worst case size ~500 bytes.   30x30 .PNG, 24-bit with transparency (but use only 1 color). Worst case size ~350 bytes. Retina display   60x60 .PNG, 24-bit with transparency (but use only 1 color). Worst case size ~2000 bytes.
{ "language": "en", "url": "https://stackoverflow.com/questions/151746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Storyboards can't find ControlTemplate elements I've created some fairly simple XAML, and it works perfectly (at least in Kaxaml). The storyboards run perfectly when called from within the XAML, but when I try to access them from outside I get the error: 'buttonGlow' name cannot be found in the name scope of 'System.Windows.Controls.Button'. I am loading the XAML with a stream reader, like this: Button x = (Button)XamlReader.Load(stream); And trying to run the Storyboard with: Storyboard pressedButtonStoryboard = (Storyboard)_xamlButton.Template.Resources["ButtonPressed"]; pressedButtonStoryboard.Begin(_xamlButton); I think that the problem is that fields I am animating are in the template and that storyboard is accessing the button. Here is the XAML: <Button xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:customControls="clr-namespace:pk_rodoment.SkinningEngine;assembly=pk_rodoment" Width="150" Height="55"> <Button.Resources> <Style TargetType="Button"> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Grid Background="#00FFFFFF"> <Grid.BitmapEffect> <BitmapEffectGroup> <OuterGlowBitmapEffect x:Name="buttonGlow" GlowColor="#A0FEDF00" GlowSize="0"/> </BitmapEffectGroup> </Grid.BitmapEffect> <Border x:Name="background" Margin="1,1,1,1" CornerRadius="15"> <Border.Background> <SolidColorBrush Color="#FF0062B6"/> </Border.Background> </Border> <ContentPresenter HorizontalAlignment="Center" Margin="{TemplateBinding Control.Padding}" VerticalAlignment="Center" Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}"/> </Grid> <ControlTemplate.Resources> <Storyboard x:Key="ButtonPressed"> <Storyboard.Children> <DoubleAnimation Duration="0:0:0.4" FillBehavior="HoldEnd" Storyboard.TargetName="buttonGlow" Storyboard.TargetProperty="GlowSize" To="4"/> <ColorAnimation Duration="0:0:0.6"
FillBehavior="HoldEnd" Storyboard.TargetName="background" Storyboard.TargetProperty="(Panel.Background).(SolidColorBrush.Color)" To="#FF844800"/> </Storyboard.Children> </Storyboard> <Storyboard x:Key="ButtonReleased"> <Storyboard.Children> <DoubleAnimation Duration="0:0:0.2" FillBehavior="HoldEnd" Storyboard.TargetName="buttonGlow" Storyboard.TargetProperty="GlowSize" To="0"/> <ColorAnimation Duration="0:0:0.2" FillBehavior="Stop" Storyboard.TargetName="background" Storyboard.TargetProperty="(Panel.Background).(SolidColorBrush.Color)" To="#FF0062B6"/> </Storyboard.Children> </Storyboard> </ControlTemplate.Resources> <ControlTemplate.Triggers> <Trigger Property="ButtonBase.IsPressed" Value="True"> <Trigger.EnterActions> <BeginStoryboard Storyboard="{StaticResource ButtonPressed}"/> </Trigger.EnterActions> <Trigger.ExitActions> <BeginStoryboard Storyboard="{StaticResource ButtonReleased}"/> </Trigger.ExitActions> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> </Button.Resources> <DockPanel> <TextBlock x:Name="TextContent" FontSize="28" Foreground="White" >Test</TextBlock> </DockPanel> </Button> Any suggestions from anyone who understands WPF and XAML a lot better than me? 
Here is the error stacktrace: at System.Windows.Media.Animation.Storyboard.ResolveTargetName(String targetName, INameScope nameScope, DependencyObject element) at System.Windows.Media.Animation.Storyboard.ClockTreeWalkRecursive(Clock currentClock, DependencyObject containingObject, INameScope nameScope, DependencyObject parentObject, String parentObjectName, PropertyPath parentPropertyPath, HandoffBehavior handoffBehavior, HybridDictionary clockMappings, Int64 layer) at System.Windows.Media.Animation.Storyboard.ClockTreeWalkRecursive(Clock currentClock, DependencyObject containingObject, INameScope nameScope, DependencyObject parentObject, String parentObjectName, PropertyPath parentPropertyPath, HandoffBehavior handoffBehavior, HybridDictionary clockMappings, Int64 layer) at System.Windows.Media.Animation.Storyboard.BeginCommon(DependencyObject containingObject, INameScope nameScope, HandoffBehavior handoffBehavior, Boolean isControllable, Int64 layer) at System.Windows.Media.Animation.Storyboard.Begin(FrameworkElement containingObject) at pk_rodoment.SkinningEngine.ButtonControlWPF._button_MouseDown(Object sender, MouseButtonEventArgs e) at System.Windows.Input.MouseButtonEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget) at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target) at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs) at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised) at System.Windows.UIElement.RaiseEventImpl(DependencyObject sender, RoutedEventArgs args) at System.Windows.UIElement.RaiseEvent(RoutedEventArgs args, Boolean trusted) at System.Windows.Input.InputManager.ProcessStagingArea() at System.Windows.Input.InputManager.ProcessInput(InputEventArgs input) at System.Windows.Input.InputProviderSite.ReportInput(InputReport inputReport) at 
System.Windows.Interop.HwndMouseInputProvider.ReportInput(IntPtr hwnd, InputMode mode, Int32 timestamp, RawMouseActions actions, Int32 x, Int32 y, Int32 wheel) at System.Windows.Interop.HwndMouseInputProvider.FilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at System.Windows.Interop.HwndSource.InputFilterMessage(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.WrappedInvoke(Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter) at System.Windows.Threading.Dispatcher.Invoke(DispatcherPriority priority, Delegate method, Object arg) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg) at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.PushFrame(DispatcherFrame frame) at System.Windows.Threading.Dispatcher.Run() at System.Windows.Application.RunDispatcher(Object ignore) at System.Windows.Application.RunInternal(Window window) at System.Windows.Application.Run(Window window) at System.Windows.Application.Run() at ControlTestbed.App.Main() in C:\svnprojects\rodomont\ControlsTestbed\obj\Debug\App.g.cs:line 0 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at 
System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() A: I made it work by restructuring the XAML so that the SolidColorBrush and OuterGlowBitmapEffect were resources of the button, and are thus shared by the Storyboards and the elements they're applied to. I retrieved the Storyboard and called Begin() on it just as you did, but here is the modified XAML for the Button: (Please note the keys "buttonGlow" and "borderBackground" and all StaticResource markup extensions referencing them.) <Button xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="150" Height="55"> <Button.Resources> <OuterGlowBitmapEffect x:Key="buttonGlow" GlowColor="#A0FEDF00" GlowSize="0" /> <SolidColorBrush x:Key="borderBackground" Color="#FF0062B6" /> <Style TargetType="Button"> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Grid Name="outerGrid" Background="#00FFFFFF" BitmapEffect="{StaticResource buttonGlow}"> <Border x:Name="background" Margin="1,1,1,1" CornerRadius="15" Background="{StaticResource borderBackground}"> </Border> <ContentPresenter HorizontalAlignment="Center" Margin="{TemplateBinding Control.Padding}" VerticalAlignment="Center" Content="{TemplateBinding ContentControl.Content}" ContentTemplate="{TemplateBinding ContentControl.ContentTemplate}" /> </Grid> <ControlTemplate.Resources> <Storyboard x:Key="ButtonPressed"> <Storyboard.Children> <DoubleAnimation Duration="0:0:0.4" FillBehavior="HoldEnd" Storyboard.Target="{StaticResource buttonGlow}" Storyboard.TargetProperty="GlowSize" To="4" /> <ColorAnimation
Duration="0:0:0.6" FillBehavior="HoldEnd" Storyboard.Target="{StaticResource borderBackground}" Storyboard.TargetProperty="Color" To="#FF844800" /> </Storyboard.Children> </Storyboard> <Storyboard x:Key="ButtonReleased"> <Storyboard.Children> <DoubleAnimation Duration="0:0:0.2" FillBehavior="HoldEnd" Storyboard.Target="{StaticResource buttonGlow}" Storyboard.TargetProperty="GlowSize" To="0" /> <ColorAnimation Duration="0:0:0.2" FillBehavior="Stop" Storyboard.Target="{StaticResource borderBackground}" Storyboard.TargetProperty="Color" To="#FF0062B6" /> </Storyboard.Children> </Storyboard> </ControlTemplate.Resources> <ControlTemplate.Triggers> <Trigger Property="ButtonBase.IsPressed" Value="True"> <Trigger.EnterActions> <BeginStoryboard Storyboard="{StaticResource ButtonPressed}" /> </Trigger.EnterActions> <Trigger.ExitActions> <BeginStoryboard Storyboard="{StaticResource ButtonReleased}" /> </Trigger.ExitActions> </Trigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> </Button.Resources> <DockPanel> <TextBlock x:Name="TextContent" FontSize="28" Foreground="White">Test</TextBlock> </DockPanel> </Button> A: Finally found it. When you call Begin on storyboards that reference elements in the ControlTemplate, you must pass in the control template as well. Changing: pressedButtonStoryboard.Begin(_xamlButton); To: pressedButtonStoryboard.Begin(_xamlButton, _xamlButton.Template); Fixed everything. A: I think I just had this problem. Let me refer you to my blog entry on the matter: http://www.cplotts.com/2008/09/26/dr-wpf-namescopes/ Basically, the trick is that you need to call Begin with an argument that is an object in the same name scope that the storyboards are targeting. In particular, from your sample above, I would try to call Begin and send in a reference to the _background element in your template. Let me know if this doesn't solve your problem. Update: I like Erickson's solution better than mine ... and it worked for me too.
I don't know how I missed that overload of the Begin method! A: (@ Sam Meldrum) To get Stop working, pass true for "isControllable" when calling Begin: pressedButtonStoryboard.Begin(_xamlButton, _xamlButton.Template); change to pressedButtonStoryboard.Begin(_xamlButton, _xamlButton.Template, true); and now pressedButtonStoryboard.Stop(xamlButton) will work A: I ran into this error as well. My situation is a bit different, perhaps simpler. I have a WPF window that has a template with an animation on it. I then had a separate completely unrelated animation triggered by MouseEnter defined for a button, both on the window itself. I started getting 'button1 cannot be found in the namescope'. After playing around a bit with some of the ideas here and debugging the actual Namescope (put a watch on the result of NameScope.GetNameScope(this)), I finally found the solution was to put: this.RegisterName("button1", this.button1); in a MouseEnter method defined in code and attached to the button. This MouseEnter will be called before the XAML Trigger. Curiously, the register method does not work if it is in the constructor or Window.Activated() method. Hope this helps someone.
{ "language": "en", "url": "https://stackoverflow.com/questions/151752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Identifying the device requesting a response Is it possible for a web server to know which type of device a request has been received from? For example, can I create a website which shows different content if the request came from a computer (Firefox) and something different if it came from an iPhone? A: What Mitch said, with the caveat that it's possible to falsify one's user agent. A: The way to do it is the User-Agent header, as has been said. You'd best use a list like this one to find out which mobile it is. When I had to do something like it I stored the unknown received User Agents in a table to find out later about the ones I didn't have stored and thus wasn't able to know for sure what to serve. A: Check the User-Agent in the Request Header. For full details on HTTP headers, see the specifications at http://www.w3.org/Protocols/.
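The User-Agent check the answers describe can be sketched as a simple substring classifier. This is a deliberately rough illustration in Python — real code should use a maintained device list, and (as the first answer warns) remember the header can be falsified:

```python
def classify_user_agent(ua: str) -> str:
    """Crude device detection by User-Agent substring."""
    ua = ua.lower()
    if "iphone" in ua or "ipod" in ua:
        return "iphone"
    if "firefox" in ua:
        return "desktop-firefox"
    return "unknown"

print(classify_user_agent(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 3_0 like Mac OS X)"))
# → iphone
```

A server would branch on the returned label to decide which variant of the page to render.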
{ "language": "en", "url": "https://stackoverflow.com/questions/151756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are .NET framework updates pushed to every user of windows-updates? Does Microsoft force an update down to all of its users of windows-update? Is it for legal reasons? EDIT(aku): Question was reformulated. If you want to know which version of Windows comes with .NET see this topic: https://stackoverflow.com/questions/71390/which-operating-systems-come-with-net A: Yes, Windows Vista shipped with .NET Framework 3.0. I'm sure future versions of Windows will ship with whichever is the most recent version of the framework. Scott Hanselman recently blogged about the status of .NET 3.5 on Windows Update: http://www.hanselman.com/blog/UpdateOnNETFramework35SP1AndWindowsUpdate.aspx A: Does Microsoft force an update down to all of its users of windows-update? Is it for legal reasons? Probably it would be better to ask Microsoft. I can't remember a case when they forcibly installed a new version of .NET. AFAIK only critical security updates are mandatory. They don't force you to install service packs, etc. But I can be wrong :) A: Some XP OEM installations have some version of .NET installed as it is needed by some included software. Vista comes with .NET 3.0 installed. A: Microsoft would only push down an update as part of the Windows/Microsoft update process, and even then, only if you have them set to automatically install. If you didn't have updates configured to run without asking, then you would never have to update the .NET framework, though these updates would rarely break functionality, and are highly recommended for security reasons.
{ "language": "en", "url": "https://stackoverflow.com/questions/151765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the common way for OOP Pattern design (Data Access) Originally there was the DAL object which my BO's called for info and then passed to UI. Then I started noticing reduced code in UI and there were Controller classes. What's the decent recommendation? I currently structure mine:
Public Class OrderDAL
    Private _id As Integer
    Private _order As Order
    Public Function GetOrder(id As Integer) As Order
        '...return Order
    End Function
End Class
then I have controller classes (recently implemented this style):
Public Class OrderController
    Private Shared _orderDAL As New OrderDAL
    Public Shared Function GetOrder(id) As Order
        Return _orderDAL.GetOrder(id)
    End Function
End Class
Then in my application:
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    MsgBox(OrderController.GetOrder(12345).Customer.Name)
End Sub
I originally found that with the Shared Class I didn't have to keep creating a new instance of the DAL whenever I need to fetch data:
Dim _orderDAL As New OrderDAL
_orderDAL.GetOrder(1234)
.....
What's your take? Thanks A: I think there are several alternatives listed in this excellent book: Patterns of Enterprise Application Architecture. Some patterns that may be of interest to you: * *Active Record *Table Data Gateway *Row Data Gateway *Data Mapper A: Well your application shouldn't be instantiating separate versions of the data access layer, so you have that under control. The pseudo code you posted is really hard to read though. The question is, what is your data access layer, and how much is there? That's going to dictate a good bit of what you do. If you're persisting to a file then I think what you have written is fine, and if you need to share it with other controllers in your system, it's possible to wrap the item into a singleton (shudder). If you really are doing order processing and persisting back to a database, I personally think it's time to look at an ORM.
These packages will handle the CRUD aspects for you and reduce the number of items that you have to maintain. My $.02, and I reserve the right to revise my answer once I see a better example. A: I can't speak to the VB details because I'm not a VB developer, but: What you are doing is a well-established good practice, separating the GUI/presentation layer from the data layer. Including real application code in GUI event methods is a (sadly also well-established) bad practice. Your controller class resembles the bridge pattern which is also a good idea if both layers shall be able to change their form without the other knowing about it. Go ahead! A: It's a good practice -- especially when you get to a point where the Controller will need to do more than simple delegation to the underlying DAL.
{ "language": "en", "url": "https://stackoverflow.com/questions/151769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Are there any tools for converting Managed C++ to C++/CLI? We have an old project written using Managed C++ syntax. I would like to propose to the team a reasonably pain-free (I don't mind some level of human interaction, I think I'm realistic in my expectations that we'll still have to do some work by hand) method of updating the existing code to C++/CLI syntax so that we can also add XML documentation (the project is a library used by other projects and having documentation would be immensely useful). So, are there any good tools out there to help with this? Or is it just a case of switching to the new C++/CLI syntax compiler and fixing errors as we go? A: Microsoft has a tool that will help a little. Visual C++ blog post about it. Here are a couple other resources I found useful when I made our switch: C++/CLI Migration Primer, Managed Extensions for C++ Syntax Upgrade Checklist. The Microsoft tool is just a start. There were many files that it could not convert.
{ "language": "en", "url": "https://stackoverflow.com/questions/151776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I save an activity state using the save instance state? I've been working on the Android SDK platform, and it is a little unclear how to save an application's state. So given this minor re-tooling of the 'Hello, Android' example: package com.android.hello; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; public class HelloAndroid extends Activity { private TextView mTextView = null; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mTextView = new TextView(this); if (savedInstanceState == null) { mTextView.setText("Welcome to HelloAndroid!"); } else { mTextView.setText("Welcome back."); } setContentView(mTextView); } } I thought it would be enough for the simplest case, but it always responds with the first message, no matter how I navigate away from the app. I'm sure the solution is as simple as overriding onPause or something like that, but I've been poking away in the documentation for 30 minutes or so and haven't found anything obvious. 
A: Kotlin You must override onSaveInstanceState and onRestoreInstanceState to store and retrieve the variables you want to persist. Life cycle graph Store variables public override fun onSaveInstanceState(savedInstanceState: Bundle) { super.onSaveInstanceState(savedInstanceState) // prepare variables here savedInstanceState.putInt("kInt", 10) savedInstanceState.putBoolean("kBool", true) savedInstanceState.putDouble("kDouble", 4.5) savedInstanceState.putString("kString", "Hello Kotlin") } Retrieve variables public override fun onRestoreInstanceState(savedInstanceState: Bundle) { super.onRestoreInstanceState(savedInstanceState) val myInt = savedInstanceState.getInt("kInt") val myBoolean = savedInstanceState.getBoolean("kBool") val myDouble = savedInstanceState.getDouble("kDouble") val myString = savedInstanceState.getString("kString") // use variables here } A: Not sure if my solution is frowned upon or not, but I use a bound service to persist ViewModel state. Whether you store it in memory in the service or persist and retrieve it from a SQLite database depends on your requirements. This is what services of any flavor do: they provide services such as maintaining application state and abstracting common business logic. Because of memory and processing constraints inherent on mobile devices, I treat Android views in a similar way to a web page. The page does not maintain state, it is purely a presentation layer component whose only purpose is to present application state and accept user input. Recent trends in web app architecture employ the age-old Model, View, Controller (MVC) pattern, where the page is the View, domain data is the model, and the controller sits behind a web service. The same pattern can be employed in Android with the View being, well ... the View, the model is your domain data, and the Controller is implemented as an Android bound service.
Whenever you want a view to interact with the controller, bind to it on start/resume and unbind on stop/pause. This approach gives you the added bonus of enforcing the Separation of Concerns design principle in that all of your application business logic can be moved into your service, which reduces duplicated logic across multiple views and allows the view to enforce another important design principle, Single Responsibility. A: Both methods are useful and valid and both are best suited for different scenarios: * *The user terminates the application and re-opens it at a later date, but the application needs to reload data from the last session – this requires a persistent storage approach such as using SQLite. *The user switches application and then comes back to the original and wants to pick up where they left off - saving and restoring bundle data (such as application state data) in onSaveInstanceState() and onRestoreInstanceState() is usually adequate. If you save the state data in a persistent manner, it can be reloaded in an onResume() or onCreate() (or actually on any lifecycle call). This may or may not be desired behaviour. If you store it in a bundle in an InstanceState, then it is transient and is only suitable for storing data for use in the same user ‘session’ (I use the term session loosely) but not between ‘sessions’. It is not that one approach is better than the other; like everything, it is just important to understand what behaviour you require and to select the most appropriate approach. A: Saving state is a kludge at best as far as I'm concerned. If you need to save persistent data, just use an SQLite database. Android makes it SOOO easy.
Something like this: import java.util.Date; import android.content.Context; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; public class dataHelper { private static final String DATABASE_NAME = "autoMate.db"; private static final int DATABASE_VERSION = 1; private Context context; private SQLiteDatabase db; private OpenHelper oh; public dataHelper(Context context) { this.context = context; this.oh = new OpenHelper(this.context); this.db = oh.getWritableDatabase(); } public void close() { db.close(); oh.close(); db = null; oh = null; SQLiteDatabase.releaseMemory(); } public void setCode(String codeName, Object codeValue, String codeDataType) { Cursor codeRow = db.rawQuery("SELECT * FROM code WHERE codeName = '"+ codeName + "'", null); String cv = ""; if (codeDataType.toLowerCase().trim().equals("long") == true){ cv = String.valueOf(codeValue); } else if (codeDataType.toLowerCase().trim().equals("int") == true) { cv = String.valueOf(codeValue); } else if (codeDataType.toLowerCase().trim().equals("date") == true) { cv = String.valueOf(((Date)codeValue).getTime()); } else if (codeDataType.toLowerCase().trim().equals("boolean") == true) { cv = String.valueOf(codeValue); } else { cv = String.valueOf(codeValue); } if(codeRow.getCount() > 0) //exists-- update { db.execSQL("update code set codeValue = '" + cv + "' where codeName = '" + codeName + "'"); } else // does not exist, insert { db.execSQL("INSERT INTO code (codeName, codeValue, codeDataType) VALUES(" + "'" + codeName + "'," + "'" + cv + "'," + "'" + codeDataType + "')" ); } } public Object getCode(String codeName, Object defaultValue){ //Check to see if it already exists String codeValue = ""; String codeDataType = ""; boolean found = false; Cursor codeRow = db.rawQuery("SELECT * FROM code WHERE codeName = '"+ codeName + "'", null); if (codeRow.moveToFirst()) { codeValue = codeRow.getString(codeRow.getColumnIndex("codeValue")); codeDataType
= codeRow.getString(codeRow.getColumnIndex("codeDataType")); found = true; } if (found == false) { return defaultValue; } else if (codeDataType.toLowerCase().trim().equals("long") == true) { if (codeValue.equals("") == true) { return (long)0; } return Long.parseLong(codeValue); } else if (codeDataType.toLowerCase().trim().equals("int") == true) { if (codeValue.equals("") == true) { return (int)0; } return Integer.parseInt(codeValue); } else if (codeDataType.toLowerCase().trim().equals("date") == true) { if (codeValue.equals("") == true) { return null; } return new Date(Long.parseLong(codeValue)); } else if (codeDataType.toLowerCase().trim().equals("boolean") == true) { if (codeValue.equals("") == true) { return false; } return Boolean.parseBoolean(codeValue); } else { return (String)codeValue; } } private static class OpenHelper extends SQLiteOpenHelper { OpenHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE IF NOT EXISTS code" + "(id INTEGER PRIMARY KEY, codeName TEXT, codeValue TEXT, codeDataType TEXT)"); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { } } } A simple call after that dataHelper dh = new dataHelper(getBaseContext()); String status = (String) dh.getCode("appState", "safetyDisabled"); Date serviceStart = (Date) dh.getCode("serviceStartTime", null); dh.close(); dh = null; A: A simple, quick way to solve this problem is to use Icepick. First, set up the library in app/build.gradle repositories { maven {url "https://clojars.org/repo/"} } dependencies { compile 'frankiesardo:icepick:3.2.0' provided 'frankiesardo:icepick-processor:3.2.0' } Now, let's check the example below of how to save state in an Activity: public class ExampleActivity extends Activity { @State String username; // This will be automatically saved and restored @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState);
Icepick.restoreInstanceState(this, savedInstanceState); } @Override public void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); Icepick.saveInstanceState(this, outState); } } It works for Activities, Fragments or any object that needs to serialize its state on a Bundle (e.g. mortar's ViewPresenters) Icepick can also generate the instance state code for custom Views: class CustomView extends View { @State int selectedPosition; // This will be automatically saved and restored @Override public Parcelable onSaveInstanceState() { return Icepick.saveInstanceState(this, super.onSaveInstanceState()); } @Override public void onRestoreInstanceState(Parcelable state) { super.onRestoreInstanceState(Icepick.restoreInstanceState(this, state)); } // You can put the calls to Icepick into a BaseCustomView and inherit from it // All Views extending this CustomView automatically have state saved/restored } A: I think I found the answer. Let me explain what I have done in simple words: Suppose I have two activities, activity1 and activity2, and I am navigating from activity1 to activity2 (I have done some work in activity2) and back again to activity1 by clicking on a button in activity1. Now at this stage I want to go back to activity2 and see activity2 in the same condition as when I last left it.
For the above scenario what I have done is that in the manifest I made some changes like this: <activity android:name=".activity2" android:alwaysRetainTaskState="true" android:launchMode="singleInstance"> </activity> And in activity1 on the button click event I have done this: Intent intent = new Intent(); intent.setFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT); intent.setClassName(this,"com.mainscreen.activity2"); startActivity(intent); And in activity2 on the button click event I have done this: Intent intent=new Intent(); intent.setClassName(this,"com.mainscreen.activity1"); startActivity(intent); Now what will happen is that whatever changes we have made in activity2 will not be lost, and we can view activity2 in the same state as we left it previously. I believe this is the answer and this works fine for me. Correct me if I am wrong. A: onSaveInstanceState() for transient data (restored in onCreate()/onRestoreInstanceState()), onPause() for persistent data (restored in onResume()). From Android technical resources: onSaveInstanceState() is called by Android if the Activity is being stopped and may be killed before it is resumed! This means it should store any state necessary to re-initialize to the same condition when the Activity is restarted. It is the counterpart to the onCreate() method, and in fact the savedInstanceState Bundle passed in to onCreate() is the same Bundle that you construct as outState in the onSaveInstanceState() method. onPause() and onResume() are also complementary methods. onPause() is always called when the Activity ends, even if we instigated that (with a finish() call for example). We will use this to save the current note back to the database. Good practice is to release any resources that can be released during onPause() as well, to take up fewer resources when in the passive state. A: Now Android provides ViewModels for saving state; you should try to use that instead of onSaveInstanceState.
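The transient-versus-persistent distinction drawn above can be sketched without the Android framework at all. In this hypothetical plain-Java simulation, one map plays the role of the saved-instance Bundle (which dies with the process) and a static map plays the role of SharedPreferences-style durable storage; all class and method names are invented stand-ins, not Android APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-ins: neither class is part of Android.
class FakeBundle extends HashMap<String, Object> {}                           // instance state
class FakePrefs { static final Map<String, Object> disk = new HashMap<>(); }  // durable storage

class Screen {
    String draftText = "";   // transient UI state -> goes in the "Bundle"
    String savedNote = "";   // persistent data    -> goes in the "preferences"

    // Mirrors onSaveInstanceState(): only transient UI state belongs here.
    FakeBundle saveInstanceState() {
        FakeBundle out = new FakeBundle();
        out.put("draft", draftText);
        return out;
    }

    // Mirrors onPause(): persistent data is written to durable storage.
    void pause() { FakePrefs.disk.put("note", savedNote); }

    // Mirrors onCreate()/onResume(): restore from both sources.
    void restore(FakeBundle state) {
        if (state != null) draftText = (String) state.getOrDefault("draft", "");
        savedNote = (String) FakePrefs.disk.getOrDefault("note", "");
    }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        Screen first = new Screen();
        first.draftText = "half-typed reply";
        first.savedNote = "buy milk";
        FakeBundle bundle = first.saveInstanceState(); // system is about to recreate the Activity
        first.pause();                                 // going to the background

        Screen second = new Screen();                  // fresh instance after recreation
        second.restore(bundle);
        System.out.println(second.draftText + " / " + second.savedNote);
        // -> half-typed reply / buy milk
    }
}
```

The point of the split: if the user never comes back, the bundle is simply dropped and nothing needs cleaning up, while the note written in pause() survives regardless.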
A: onSaveInstanceState() really is called when the Activity goes to the background. Quote from the docs: "This method is called before an activity may be killed so that when it comes back sometime in the future it can restore its state." Source A: The savedInstanceState is only for saving state associated with a current instance of an Activity, for example current navigation or selection info, so that if Android destroys and recreates an Activity, it can come back as it was before. See the documentation for onCreate and onSaveInstanceState. For more long-lived state, consider using a SQLite database, a file, or preferences. See Saving Persistent State. A: Note that it is not safe to use onSaveInstanceState and onRestoreInstanceState for persistent data, according to the documentation on Activity. The document states (in the 'Activity Lifecycle' section): Note that it is important to save persistent data in onPause() instead of onSaveInstanceState(Bundle) because the latter is not part of the lifecycle callbacks, so will not be called in every situation as described in its documentation. In other words, put your save/restore code for persistent data in onPause() and onResume()! For further clarification, here's the onSaveInstanceState() documentation: This method is called before an activity may be killed so that when it comes back some time in the future it can restore its state. For example, if activity B is launched in front of activity A, and at some point activity A is killed to reclaim resources, activity A will have a chance to save the current state of its user interface via this method so that when the user returns to activity A, the state of the user interface can be restored via onCreate(Bundle) or onRestoreInstanceState(Bundle). A: Meanwhile, in general, I no longer use Bundle savedInstanceState & Co. The life cycle is too complicated for most activities, and not necessary. And Google itself states it is NOT even reliable.
My way is to save any changes immediately in the preferences: SharedPreferences p; p.edit().put(..).commit() In some ways, SharedPreferences work similarly to Bundles. And naturally, such values first have to be read from the preferences. In the case of complex data you may use SQLite instead of using preferences. When applying this concept, the activity just continues to use the last saved state, regardless of whether it was an initial open with reboots in between or a reopen due to the back stack. A: To help reduce boilerplate I use the following interface and class to read/write to a Bundle for saving instance state. First, create an interface that will be used to annotate your instance variables: import java.lang.annotation.Documented; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; @Documented @Retention(RetentionPolicy.RUNTIME) @Target({ ElementType.FIELD }) public @interface SaveInstance { } Then, create a class where reflection will be used to save values to the bundle: import android.app.Activity; import android.app.Fragment; import android.os.Bundle; import android.os.Parcelable; import android.util.Log; import java.io.Serializable; import java.lang.reflect.Field; /** * Save and load fields to/from a {@link Bundle}. All fields should be annotated with {@link * SaveInstance}. */ public class Icicle { private static final String TAG = "Icicle"; /** * Find all fields with the {@link SaveInstance} annotation and add them to the {@link Bundle}. * * @param outState * The bundle from {@link Activity#onSaveInstanceState(Bundle)} or {@link * Fragment#onSaveInstanceState(Bundle)} * @param classInstance * The object to access the fields which have the {@link SaveInstance} annotation.
* @see #load(Bundle, Object) */ public static void save(Bundle outState, Object classInstance) { save(outState, classInstance, classInstance.getClass()); } /** * Find all fields with the {@link SaveInstance} annotation and add them to the {@link Bundle}. * * @param outState * The bundle from {@link Activity#onSaveInstanceState(Bundle)} or {@link * Fragment#onSaveInstanceState(Bundle)} * @param classInstance * The object to access the fields which have the {@link SaveInstance} annotation. * @param baseClass * Base class, used to get all superclasses of the instance. * @see #load(Bundle, Object, Class) */ public static void save(Bundle outState, Object classInstance, Class<?> baseClass) { if (outState == null) { return; } Class<?> clazz = classInstance.getClass(); while (baseClass.isAssignableFrom(clazz)) { String className = clazz.getName(); for (Field field : clazz.getDeclaredFields()) { if (field.isAnnotationPresent(SaveInstance.class)) { field.setAccessible(true); String key = className + "#" + field.getName(); try { Object value = field.get(classInstance); if (value instanceof Parcelable) { outState.putParcelable(key, (Parcelable) value); } else if (value instanceof Serializable) { outState.putSerializable(key, (Serializable) value); } } catch (Throwable t) { Log.d(TAG, "The field '" + key + "' was not added to the bundle"); } } } clazz = clazz.getSuperclass(); } } /** * Load all saved fields that have the {@link SaveInstance} annotation. * * @param savedInstanceState * The saved-instance {@link Bundle} from an {@link Activity} or {@link Fragment}. * @param classInstance * The object to access the fields which have the {@link SaveInstance} annotation. * @see #save(Bundle, Object) */ public static void load(Bundle savedInstanceState, Object classInstance) { load(savedInstanceState, classInstance, classInstance.getClass()); } /** * Load all saved fields that have the {@link SaveInstance} annotation. 
* * @param savedInstanceState * The saved-instance {@link Bundle} from an {@link Activity} or {@link Fragment}. * @param classInstance * The object to access the fields which have the {@link SaveInstance} annotation. * @param baseClass * Base class, used to get all superclasses of the instance. * @see #save(Bundle, Object, Class) */ public static void load(Bundle savedInstanceState, Object classInstance, Class<?> baseClass) { if (savedInstanceState == null) { return; } Class<?> clazz = classInstance.getClass(); while (baseClass.isAssignableFrom(clazz)) { String className = clazz.getName(); for (Field field : clazz.getDeclaredFields()) { if (field.isAnnotationPresent(SaveInstance.class)) { String key = className + "#" + field.getName(); field.setAccessible(true); try { Object fieldVal = savedInstanceState.get(key); if (fieldVal != null) { field.set(classInstance, fieldVal); } } catch (Throwable t) { Log.d(TAG, "The field '" + key + "' was not retrieved from the bundle"); } } } clazz = clazz.getSuperclass(); } } } Example usage: public class MainActivity extends Activity { @SaveInstance private String foo; @SaveInstance private int bar; @SaveInstance private Intent baz; @SaveInstance private boolean qux; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Icicle.load(savedInstanceState, this); } @Override public void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); Icicle.save(outState, this); } } Note: This code was adapted from a library project named AndroidAutowire which is licensed under the MIT license. A: There is a way to make Android save the states without implementing any method. Just add this line to your Manifest in Activity declaration: android:configChanges="orientation|screenSize" It should look like this: <activity android:name=".activities.MyActivity" android:configChanges="orientation|screenSize"> </activity> Here you can find more information about this property. 
It's recommended to let Android handle this for you rather than handling it manually. A: Kotlin Solution: To save a custom class in onSaveInstanceState, you can convert your class to a JSON string and restore it with Gson conversion; for single String, Double, Int, and Long values, save and restore as follows. The following example is for Fragment and Activity: For Activity: To put data in savedInstanceState: override fun onSaveInstanceState(outState: Bundle) { super.onSaveInstanceState(outState) //for custom class----- val gson = Gson() val json = gson.toJson(your_custom_class) outState.putString("CUSTOM_CLASS", json) //for single value------ outState.putString("MyString", stringValue) outState.putBoolean("MyBoolean", true) outState.putDouble("myDouble", doubleValue) outState.putInt("MyInt", intValue) } Restore data: override fun onRestoreInstanceState(savedInstanceState: Bundle) { super.onRestoreInstanceState(savedInstanceState) //for custom class restore val json = savedInstanceState?.getString("CUSTOM_CLASS") if (!json!!.isEmpty()) { val gson = Gson() testBundle = gson.fromJson(json, Session::class.java) } //for single value restore val myBoolean: Boolean = savedInstanceState?.getBoolean("MyBoolean") val myDouble: Double = savedInstanceState?.getDouble("myDouble") val myInt: Int = savedInstanceState?.getInt("MyInt") val myString: String = savedInstanceState?.getString("MyString") } You can also restore it in the Activity's onCreate. For fragment: To put a class in savedInstanceState: override fun onSaveInstanceState(outState: Bundle) { super.onSaveInstanceState(outState) val gson = Gson() val json = gson.toJson(customClass) outState.putString("CUSTOM_CLASS", json) } Restore data: override fun onActivityCreated(savedInstanceState: Bundle?)
{ super.onActivityCreated(savedInstanceState) //for custom class restore if (savedInstanceState != null) { val json = savedInstanceState.getString("CUSTOM_CLASS") if (!json!!.isEmpty()) { val gson = Gson() val customClass: CustomClass = gson.fromJson(json, CustomClass::class.java) } // for single value restore (inside the null check so the Bundle is smart-cast) val myBoolean: Boolean = savedInstanceState.getBoolean("MyBoolean") val myDouble: Double = savedInstanceState.getDouble("myDouble") val myInt: Int = savedInstanceState.getInt("MyInt") val myString: String = savedInstanceState.getString("MyString") } } A: Using Android ViewModel & SavedStateHandle to persist serializable data: public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); ActivityMainBinding binding = ActivityMainBinding.inflate(getLayoutInflater()); binding.setViewModel(new ViewModelProvider(this).get(ViewModel.class)); binding.setLifecycleOwner(this); setContentView(binding.getRoot()); } public static class ViewModel extends AndroidViewModel { //This field SURVIVES the background process reclaim/killing & the configuration change public final SavedStateHandle savedStateHandle; //This field does NOT SURVIVE the background process reclaim/killing but SURVIVES the configuration change public final MutableLiveData<String> inputText2 = new MutableLiveData<>(); public ViewModel(@NonNull Application application, SavedStateHandle savedStateHandle) { super(application); this.savedStateHandle = savedStateHandle; } } } in the layout file <?xml version="1.0" encoding="utf-8"?> <layout xmlns:android="http://schemas.android.com/apk/res/android"> <data> <variable name="viewModel" type="com.xxx.viewmodelsavedstatetest.MainActivity.ViewModel" /> </data> <LinearLayout xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" tools:context=".MainActivity"> <EditText
android:layout_width="match_parent" android:layout_height="wrap_content" android:autofillHints="" android:hint="This field SURVIVES the background process reclaim/killing &amp; the configuration change" android:text='@={(String)viewModel.savedStateHandle.getLiveData("activity_main/inputText", "")}' /> <SeekBar android:layout_width="match_parent" android:layout_height="wrap_content" android:max="100" android:progress='@={(Integer)viewModel.savedStateHandle.getLiveData("activity_main/progress", 50)}' /> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:hint="This field SURVIVES the background process reclaim/killing &amp; the configuration change" android:text='@={(String)viewModel.savedStateHandle.getLiveData("activity_main/inputText", "")}' /> <SeekBar android:layout_width="match_parent" android:layout_height="wrap_content" android:max="100" android:progress='@={(Integer)viewModel.savedStateHandle.getLiveData("activity_main/progress", 50)}' /> <EditText android:layout_width="match_parent" android:layout_height="wrap_content" android:hint="This field does NOT SURVIVE the background process reclaim/killing but SURVIVES the configuration change" android:text='@={viewModel.inputText2}' /> </LinearLayout> </layout> Test: 1. start the test activity 2. press the home key to go home 3. adb shell kill <the test activity process> 4. open the recent app list and restart the test activity A: To answer the original question directly: savedInstanceState is null because your Activity is never being re-created. Your Activity will only be re-created with a state bundle when: * *Configuration changes such as changing the orientation or phone language may require a new activity instance to be created. *You return to the app from the background after the OS has destroyed the activity. Android will destroy background activities when under memory pressure or after they've been in the background for an extended period of time.
When testing your hello world example there are a few ways to leave and return to the Activity. * *When you press the back button the Activity is finished. Re-launching the app is a brand new instance. You aren't resuming from the background at all. *When you press the home button or use the task switcher the Activity will go into the background. When navigating back to the application, onCreate will only be called if the Activity had to be destroyed. In most cases, if you're just pressing home and then launching the app again, the activity won't need to be re-created. It already exists in memory so onCreate() won't be called. There is an option under Settings -> Developer Options called "Don't keep activities". When it's enabled, Android will always destroy activities and recreate them when they're backgrounded. This is a great option to leave enabled when developing because it simulates the worst-case scenario (a low-memory device recycling your activities all the time). The other answers are valuable in that they teach you the correct ways to store state, but I didn't feel they really answered WHY your code wasn't working in the way you expected. A: The onSaveInstanceState(bundle) and onRestoreInstanceState(bundle) methods are useful for data persistence merely while rotating the screen (orientation change). They are not even good while switching between applications (since the onSaveInstanceState() method is called but onCreate(bundle) and onRestoreInstanceState(bundle) are not invoked again). For more persistence, use shared preferences. Read this article A: What to save and what not to? Ever wondered why the text in the EditText gets saved automatically during an orientation change? Well, this answer is for you. When an instance of an Activity gets destroyed and the system recreates a new instance (for example, after a configuration change), it tries to recreate it using the saved data of the old Activity's state (instance state).
Instance state is a collection of key-value pairs stored in a Bundle object. By default, the system saves the View objects in the Bundle, for example: * *Text in EditText *Scroll position in a ListView, etc. If you need another variable to be saved as a part of instance state you should OVERRIDE the onSaveInstanceState(Bundle savedInstanceState) method. For example, int currentScore in a GameActivity. More detail about onSaveInstanceState(Bundle savedInstanceState) while saving data: @Override public void onSaveInstanceState(Bundle savedInstanceState) { // Save the user's current game state savedInstanceState.putInt(STATE_SCORE, mCurrentScore); // Always call the superclass so it can save the view hierarchy state super.onSaveInstanceState(savedInstanceState); } If by mistake you forget to call super.onSaveInstanceState(savedInstanceState); the default behavior will not work, i.e. text in the EditText will not be saved. Which to choose for restoring Activity state? onCreate(Bundle savedInstanceState) OR onRestoreInstanceState(Bundle savedInstanceState) Both methods get the same Bundle object, so it does not really matter where you write your restoring logic. The only difference is that in the onCreate(Bundle savedInstanceState) method you will have to do a null check, while it is not needed in the latter case. Other answers already have code snippets. You can refer to them. More detail about onRestoreInstanceState(Bundle savedInstanceState): @Override public void onRestoreInstanceState(Bundle savedInstanceState) { // Always call the superclass so it can restore the view hierarchy super.onRestoreInstanceState(savedInstanceState); // Restore state members from the saved instance mCurrentScore = savedInstanceState.getInt(STATE_SCORE); } Always call super.onRestoreInstanceState(savedInstanceState); so that the system restores the view hierarchy by default. Bonus The onSaveInstanceState(Bundle savedInstanceState) is invoked by the system only when the user intends to come back to the Activity.
For example, you are using App X and suddenly you get a call. You move to the caller app and come back to app X. In this case the onSaveInstanceState(Bundle savedInstanceState) method will be invoked. But consider this: if a user presses the back button, it is assumed that the user does not intend to come back to the Activity, hence in this case onSaveInstanceState(Bundle savedInstanceState) will not be invoked by the system. The point being, you should consider all the scenarios while saving data. Relevant links: Demo on default behavior Android Official Documentation. A: Instead of that, you should use ViewModel, which will retain the data for the activity's lifecycle. A: Now it makes sense to do this in two ways in the ViewModel. If you want the first way, as a saved instance: you can add a state parameter in the ViewModel like this https://developer.android.com/topic/libraries/architecture/viewmodel-savedstate#java or you can save variables or objects in the ViewModel; in this case the ViewModel will hold them until the activity is destroyed. public class HelloAndroidViewModel extends ViewModel { public Boolean firstInit = false; public HelloAndroidViewModel() { firstInit = false; } ... } public class HelloAndroid extends Activity { private TextView mTextView = null; HelloAndroidViewModel viewModel = ViewModelProviders.of(this).get(HelloAndroidViewModel.class); /** Called when the activity is first created.
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mTextView = new TextView(this); // Even if the saved state is deleted, the data in the ViewModel is kept as long as the activity is not destroyed if(!viewModel.firstInit){ viewModel.firstInit = true; mTextView.setText("Welcome to HelloAndroid!"); }else{ mTextView.setText("Welcome back."); } setContentView(mTextView); } } A: In 2020 we have some changes: If you want your Activity to restore its state after the process is killed and started again, you may want to use the “saved state” functionality. Previously, you needed to override two methods in the Activity: onSaveInstanceState and onRestoreInstanceState. You can also access the restored state in the onCreate method. Similarly, in Fragment, you have the onSaveInstanceState method available (and the restored state is available in the onCreate, onCreateView, and onActivityCreated methods). Starting with AndroidX SavedState 1.0.0, which is a dependency of the AndroidX Activity and the AndroidX Fragment, you get access to the SavedStateRegistry. You can obtain the SavedStateRegistry from the Activity/Fragment and then register your SavedStateProvider: class MyActivity : AppCompatActivity() { companion object { private const val MY_SAVED_STATE_KEY = "MY_SAVED_STATE_KEY" private const val SOME_VALUE_KEY = "SOME_VALUE_KEY" } private lateinit var someValue: String private val savedStateProvider = SavedStateRegistry.SavedStateProvider { Bundle().apply { putString(SOME_VALUE_KEY, someValue) } } override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) savedStateRegistry.registerSavedStateProvider(MY_SAVED_STATE_KEY, savedStateProvider) someValue = savedStateRegistry.consumeRestoredStateForKey(MY_SAVED_STATE_KEY)?.getString(SOME_VALUE_KEY) ?: "" } } As you can see, SavedStateRegistry requires you to use a key for your data.
This can prevent your data from being corrupted by another SavedStateProvider attached to the same Activity/Fragment. Also, you can extract your SavedStateProvider into another class and make it work with your data using whatever abstraction you want, and in that way achieve clean saved-state behavior in your application. A: Nowadays you can use live data and Lifecycle-Aware Components https://developer.android.com/topic/libraries/architecture/lifecycle A: You need to override onSaveInstanceState(Bundle savedInstanceState) and write the application state values you want to change to the Bundle parameter like this: @Override public void onSaveInstanceState(Bundle savedInstanceState) { super.onSaveInstanceState(savedInstanceState); // Save UI state changes to the savedInstanceState. // This bundle will be passed to onCreate if the process is // killed and restarted. savedInstanceState.putBoolean("MyBoolean", true); savedInstanceState.putDouble("myDouble", 1.9); savedInstanceState.putInt("MyInt", 1); savedInstanceState.putString("MyString", "Welcome back to Android"); // etc. } The Bundle is essentially a way of storing a NVP ("Name-Value Pair") map, and it will get passed in to onCreate() and also onRestoreInstanceState() where you would then extract the values in the activity like this: @Override public void onRestoreInstanceState(Bundle savedInstanceState) { super.onRestoreInstanceState(savedInstanceState); // Restore UI state from the savedInstanceState. // This bundle has also been passed to onCreate. boolean myBoolean = savedInstanceState.getBoolean("MyBoolean"); double myDouble = savedInstanceState.getDouble("myDouble"); int myInt = savedInstanceState.getInt("MyInt"); String myString = savedInstanceState.getString("MyString"); } Or from a fragment. @Override public void onViewStateRestored(@Nullable Bundle savedInstanceState) { super.onViewStateRestored(savedInstanceState); // Restore UI state from the savedInstanceState.
// This bundle has also been passed to onCreate. boolean myBoolean = savedInstanceState.getBoolean("MyBoolean"); double myDouble = savedInstanceState.getDouble("myDouble"); int myInt = savedInstanceState.getInt("MyInt"); String myString = savedInstanceState.getString("MyString"); } You would usually use this technique to store instance values for your application (selections, unsaved text, etc.). A: My colleague wrote an article explaining application state on Android devices, including explanations on activity lifecycle and state information, how to store state information, and saving to state Bundle and SharedPreferences. Take a look at it here. The article covers three approaches: Store local variable/UI control data for application lifetime (i.e. temporarily) using an instance state bundle [Code sample – Store state in state bundle] @Override public void onSaveInstanceState(Bundle savedInstanceState) { // Store UI state to the savedInstanceState. // This bundle will be passed to onCreate on next call. EditText txtName = (EditText)findViewById(R.id.txtName); String strName = txtName.getText().toString(); EditText txtEmail = (EditText)findViewById(R.id.txtEmail); String strEmail = txtEmail.getText().toString(); CheckBox chkTandC = (CheckBox)findViewById(R.id.chkTandC); boolean blnTandC = chkTandC.isChecked(); savedInstanceState.putString("Name", strName); savedInstanceState.putString("Email", strEmail); savedInstanceState.putBoolean("TandC", blnTandC); super.onSaveInstanceState(savedInstanceState); } Store local variable/UI control data between application instances (i.e.
permanently) using shared preferences [Code sample – store state in SharedPreferences] @Override protected void onPause() { super.onPause(); // Store values between instances here SharedPreferences preferences = getPreferences(MODE_PRIVATE); SharedPreferences.Editor editor = preferences.edit(); // Put the values from the UI EditText txtName = (EditText)findViewById(R.id.txtName); String strName = txtName.getText().toString(); EditText txtEmail = (EditText)findViewById(R.id.txtEmail); String strEmail = txtEmail.getText().toString(); CheckBox chkTandC = (CheckBox)findViewById(R.id.chkTandC); boolean blnTandC = chkTandC.isChecked(); editor.putString("Name", strName); // value to store editor.putString("Email", strEmail); // value to store editor.putBoolean("TandC", blnTandC); // value to store // Commit to storage editor.commit(); } Keeping object instances alive in memory between activities within application lifetime using a retained non-configuration instance [Code sample – store object instance] private cMyClassType moInstanceOfAClass; // Store the instance of an object @Override public Object onRetainNonConfigurationInstance() { if (moInstanceOfAClass != null) // Check that the object exists return(moInstanceOfAClass); return super.onRetainNonConfigurationInstance(); } A: My problem was that I needed persistence only during the application lifetime (i.e. a single execution including starting other sub-activities within the same app and rotating the device etc). I tried various combinations of the above answers but did not get what I wanted in all situations.
In the end what worked for me was to obtain a reference to the savedInstanceState during onCreate: mySavedInstanceState = savedInstanceState; and use that to obtain the contents of my variable when I needed it, along the lines of: if (mySavedInstanceState != null) { boolean myVariable = mySavedInstanceState.getBoolean("MyVariable"); } I use onSaveInstanceState and onRestoreInstanceState as suggested above, but I guess I could also or alternatively use my method to save the variable when it changes (e.g. using putBoolean). A: Although the accepted answer is correct, there is a faster and easier method to save the Activity state on Android using a library called Icepick. Icepick is an annotation processor that takes care of all the boilerplate code used in saving and restoring state for you. Doing something like this with Icepick: class MainActivity extends Activity { @State String username; // These will be automatically saved and restored @State String password; @State int age; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Icepick.restoreInstanceState(this, savedInstanceState); } @Override public void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); Icepick.saveInstanceState(this, outState); } } Is the same as doing this: class MainActivity extends Activity { String username; String password; int age; @Override public void onSaveInstanceState(Bundle savedInstanceState) { super.onSaveInstanceState(savedInstanceState); savedInstanceState.putString("MyString", username); savedInstanceState.putString("MyPassword", password); savedInstanceState.putInt("MyAge", age); /* remember you would need to actually initialize these variables before putting it in the Bundle */ } @Override public void onRestoreInstanceState(Bundle savedInstanceState) { super.onRestoreInstanceState(savedInstanceState); username = savedInstanceState.getString("MyString"); password = savedInstanceState.getString("MyPassword"); age =
savedInstanceState.getInt("MyAge"); } } Icepick will work with any object that saves its state with a Bundle. A: You can use Live Data and ViewModel for lifecycle handling from Jetpack. See this reference: https://developer.android.com/topic/libraries/architecture/livedata A: When an activity is created its onCreate() method is called. @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); } savedInstanceState is an object of the Bundle class which is null the first time, but it contains values when the activity is recreated. To save the Activity's state you have to override onSaveInstanceState(). @Override protected void onSaveInstanceState(Bundle outState) { outState.putString("key","Welcome Back"); super.onSaveInstanceState(outState); //save state } Put your values in the "outState" Bundle object like outState.putString("key","Welcome Back") and save by calling super. When the activity is destroyed its state gets saved in the Bundle object and can be restored after recreation in onCreate() or onRestoreInstanceState(). The Bundle received in onCreate() and onRestoreInstanceState() is the same. @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); //restore activity's state if(savedInstanceState!=null){ String reStoredString=savedInstanceState.getString("key"); } } or //restores activity's saved state @Override protected void onRestoreInstanceState(Bundle savedInstanceState) { String restoredMessage=savedInstanceState.getString("key"); } A: There are basically two ways to implement this change. * *using onSaveInstanceState() and onRestoreInstanceState(). *In the manifest, android:configChanges="orientation|screenSize". I really do not recommend using the second method, since in one of my projects it caused half of the device screen to go black while rotating from portrait to landscape and vice versa.
Using the first method mentioned above, we can persist data when the orientation is changed or any config change happens. I know a way in which you can store any type of data inside the savedInstanceState object. Example: Consider a case where you want to persist a JSON object. Create a model class with getters and setters. class MyModel implements Serializable { JSONObject obj; void setJsonObject(JSONObject obj) { this.obj = obj; } JSONObject getJsonObject() { return this.obj; } } Now in your activity do the following in the onCreate and onSaveInstanceState methods. It will look something like this: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); if (savedInstanceState != null) { MyModel data = (MyModel) savedInstanceState.getSerializable("yourkey"); JSONObject obj = data.getJsonObject(); // Here you have retained the JSONObject and can use it. } } @Override protected void onSaveInstanceState(Bundle outState) { super.onSaveInstanceState(outState); // obj is some JSON object MyModel dataToSave = new MyModel(); dataToSave.setJsonObject(obj); outState.putSerializable("yourkey", dataToSave); } A: This is a classic 'gotcha' of Android development. There are two issues here: * *There is a subtle Android Framework bug which greatly complicates application stack management during development, at least on legacy versions (not entirely sure if/when/how it was fixed). I'll discuss this bug below. *The 'normal' or intended way to manage this issue is, itself, rather complicated with the duality of onPause/onResume and onSaveInstanceState/onRestoreInstanceState. Browsing across all these threads, I suspect that much of the time developers are talking about these two different issues simultaneously ... hence all the confusion and reports of "this doesn't work for me". First, to clarify the 'intended' behavior: onSaveInstance and onRestoreInstance are fragile and only for transient state. The intended usage (as far as I can tell) is to handle Activity recreation when the phone is rotated (orientation change).
In other words, the intended usage is when your Activity is still logically 'on top', but still must be reinstantiated by the system. The saved Bundle is not persisted outside of the process/memory/GC, so you cannot really rely on this if your activity goes to the background. Yes, perhaps your Activity's memory will survive its trip to the background and escape GC, but this is not reliable (nor is it predictable). So if you have a scenario where there is meaningful 'user progress' or state that should be persisted between 'launches' of your application, the guidance is to use onPause and onResume. You must choose and prepare a persistent store yourself. But - there is a very confusing bug which complicates all of this. Details are here: * *Activity stack behaves incorrectly during the first run of an app when started from Eclipse (#36907463) *Marketplace / browser app installer allows second instance off app (#36911210) Basically, if your application is launched with the SingleTask flag, and then later on you launch it from the home screen or launcher menu, then that subsequent invocation will create a NEW task ... you'll effectively have two different instances of your app inhabiting the same stack ... which gets very strange very fast. This seems to happen when you launch your app during development (i.e. from Eclipse or IntelliJ), so developers run into this a lot. But also through some of the app store update mechanisms (so it impacts your users as well). I battled through these threads for hours before I realized that my main issue was this bug, not the intended framework behavior. A great write-up and workaround (UPDATE: see below) seems to be from user @kaciula in this answer: Home key press behaviour UPDATE June 2013: Months later, I have finally found the 'correct' solution. You don't need to manage any stateful startedApp flags yourself. You can detect this from the framework and bail appropriately. 
I use this near the beginning of my LauncherActivity.onCreate: if (!isTaskRoot()) { Intent intent = getIntent(); String action = intent.getAction(); if (intent.hasCategory(Intent.CATEGORY_LAUNCHER) && action != null && action.equals(Intent.ACTION_MAIN)) { finish(); return; } } A: Here is a comment from Steve Moseley's answer (by ToolmakerSteve) that puts things into perspective (in the whole onSaveInstanceState vs onPause, east cost vs west cost saga) @VVK - I partially disagree. Some ways of exiting an app don't trigger onSaveInstanceState (oSIS). This limits the usefulness of oSIS. Its worth supporting, for minimal OS resources, but if an app wants to return the user to the state they were in, no matter how the app was exited, it is necessary to use a persistent storage approach instead. I use onCreate to check for bundle, and if it is missing, then check persistent storage. This centralizes the decision making. I can recover from a crash, or back button exit or custom menu item Exit, or get back to screen user was on many days later. – ToolmakerSteve Sep 19 '15 at 10:38 A: Kotlin code: save: override fun onSaveInstanceState(outState: Bundle) { super.onSaveInstanceState(outState.apply { putInt("intKey", 1) putString("stringKey", "String Value") putParcelable("parcelableKey", parcelableObject) }) } and then in onCreate() or onRestoreInstanceState() val restoredInt = savedInstanceState?.getInt("intKey") ?: 1 //default int val restoredString = savedInstanceState?.getString("stringKey") ?: "default string" val restoredParcelable = savedInstanceState?.getParcelable<ParcelableClass>("parcelableKey") ?: ParcelableClass() //default parcelable Add default values if you don't want to have Optionals A: onSaveInstanceState is called when the system needs memory and kills an application. It is not called when the user just closes the application. So I think application state should also be saved in onPause. 
It should be saved to some persistent storage like Preferences or SQLite. A: To get activity state data back in onCreate(), first you have to save the data in savedInstanceState by overriding the onSaveInstanceState(Bundle savedInstanceState) method. When the activity is destroyed, the onSaveInstanceState(Bundle savedInstanceState) method gets called and there you save the data you want to keep. You get the same bundle back in onCreate() when the activity restarts (savedInstanceState won't be null since you saved some data in it before the activity was destroyed).
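The pattern the answers above keep describing (save to persistent storage in onPause, read it back in onCreate) can be sketched outside Android with a plain java.util.Properties file standing in for SharedPreferences. The StateStore class and the "Name"/"TandC" keys below are made up for illustration; on a device you would call getPreferences(MODE_PRIVATE) instead:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Hypothetical sketch, not Android API: a plain Properties file stands in for
// SharedPreferences so the pattern runs on any JVM.
public class StateStore {
    private final Path file;

    public StateStore(Path file) {
        this.file = file;
    }

    // The onPause half: write the values the user would otherwise lose.
    public void save(String name, boolean accepted) {
        Properties p = new Properties();
        p.setProperty("Name", name);
        p.setProperty("TandC", Boolean.toString(accepted));
        try (OutputStream out = Files.newOutputStream(file)) {
            p.store(out, "saved state");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // The onCreate half: read them back, with a default for the first launch.
    public String restoreName() {
        if (!Files.exists(file)) {
            return "";
        }
        Properties p = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            p.load(in);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return p.getProperty("Name", "");
    }

    public static void main(String[] args) throws IOException {
        StateStore store = new StateStore(Files.createTempFile("state", ".properties"));
        store.save("Alice", true);                // what onPause would do
        System.out.println(store.restoreName());  // prints Alice
    }
}
```

Unlike the Bundle, this survives the process being killed entirely, which is exactly why the onPause route is recommended for anything the user would miss days later.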
{ "language": "en", "url": "https://stackoverflow.com/questions/151777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2848" }
Q: Alternatives to static methods in Java I'm making a mini ORM for a Java program I'm writing... there is a class for each table in my db, all inheriting from ModelBase. ModelBase is abstract & provides a bunch of static methods for finding & binding objects from the db, for example: public static ArrayList findAll(Class cast_to_class) { //build the sql query & execute it } So you can do things like ModelBase.findAll(Albums.class) to get a list of all persisted albums. My problem is that in this static context, I need to get the appropriate sql string from the concrete class Album. I can't have a static method like public class Album extends ModelBase { public static String getSelectSQL() { return "select * from albums.....";} } because there is no polymorphism for static methods in Java. But I don't want to make getSelectSQL() an instance method in Album because then I need to create an instance of it just to get a string that is really static in behavior. At the moment, findAll() uses reflection to get the appropriate sql for the class in question: select_sql = (String)cast_to_class.getDeclaredMethod("getSelectSql", new Class[]{} ).invoke(null, null); But that's pretty gross. So any ideas? It's a general problem I'm having time and time again - the inability to specify abstract static methods in classes or interfaces. I know why static method polymorphism doesn't and can't work, but that doesn't stop me from wanting to use it time and again! Is there any pattern/construct that allows me to ensure that concrete subclasses X and Y implement a class method (or failing that, a class constant!)? A: Static is the wrong thing to be using here. Conceptually static is wrong because it's only for services that don't correspond to an actual object, physical or conceptual. You have a number of tables, and each should be represented by an actual object in the system, not just be a class. That sounds like it's a bit theoretical but it has actual consequences, as we'll see.
Each table is of a different class, and that's OK. Since you can only ever have one of each table, limit the number of instances of each class to one (use a flag - don't make it a Singleton). Make the program create an instance of the class before it accesses the table. Now you have a couple of advantages. You can use the full power of inheritance and overriding since your methods are no longer static. You can use the constructor to do any initialisation, including associating SQL with the table (SQL that your methods can use later). This should make all your problems above go away, or at least get much simpler. It seems like there is extra work in having to create the object, and extra memory, but it's really trivial compared with the advantages. A few bytes of memory for the object won't be noticed, and a handful of constructor calls will take maybe ten minutes to add. Against that is the advantage that code to initialise any tables doesn't need to be run if the table isn't used (the constructor shouldn't be called). You will find it simplifies things a lot. A: Although I totally agree with the point that "Static is the wrong thing to be using here", I kind of understand what you're trying to address here. Instance behavior should still be the way to go, but if you insist, this is what I would do: Starting from your comment "I need to create an instance of it just to get a string that is really static in behaviour": it is not completely correct. If you look closely, you are not changing the behavior of your base class, just changing the parameter for a method. In other words you're changing the data, not the algorithm. Inheritance is more useful when a new subclass wants to change the way a method works; if you just need to change the "data" the class uses to work, probably an approach like this would do the trick.
class ModelBase { // Initialize the queries private static Map<String,String> selectMap = new HashMap<String,String>(); static { selectMap.put( "Album", "select field_1, field_2 from album"); selectMap.put( "Artist", "select field_1, field_2 from artist"); selectMap.put( "Track", "select field_1, field_2 from track"); } // Finds all the objects for the specified class... // Note: it is better to use "List" rather than "ArrayList", I'll explain this later. public static List findAll(Class classToFind ) { String sql = getSelectSQL( classToFind ); results = execute( sql ); //etc... return .... } // Return the correct select sql.. private static String getSelectSQL( Class classToFind ){ String statement = selectMap.get( classToFind.getSimpleName() ); if( statement == null ) { throw new IllegalArgumentException("Class " + classToFind.getSimpleName() + " is not mapped"); } return statement; } } That is, map all the statements with a Map. The "obvious" next step to this is to load the map from an external resource, such as a properties file, or an xml or even ( why not ) a database table, for extra flexibility. This way you can keep your class clients ( and yourself ) happy, because you don't need to "create an instance" to do the work. // Client usage: ... List albums = ModelBase.findAll( Album.class ); ... Another approach is to create the instances behind the scenes, and keep your client interface intact while using instance methods; the methods are marked as "protected" to avoid external invocation. In a similar fashion to the previous sample you can also do this // Second option, instance used under the hood. class ModelBase { // Initialize the queries private static Map<String,ModelBase> daoMap = new HashMap<String,ModelBase>(); static { daoMap.put( "Album", new AlbumModel() ); daoMap.put( "Artist", new ArtistModel()); daoMap.put( "Track", new TrackModel()); } // Finds all the objects for the specified class...
// Note: it is better to use "List" rather than "ArrayList", I'll explain this later. public static List findAll(Class classToFind ) { String sql = getSelectSQL( classToFind ); results = execute( sql ); //etc... return .... } // Return the correct select sql.. private static String getSelectSQL( Class classToFind ){ ModelBase dao = daoMap.get( classToFind.getSimpleName() ); if( dao == null ) { throw new IllegalArgumentException("Class " + classToFind.getSimpleName() + " is not mapped"); } return dao.selectSql(); } // Instance method to be overridden... // this is "protected" ... protected abstract String selectSql(); } class AlbumModel extends ModelBase { public String selectSql(){ return "select ... from album"; } } class ArtistModel extends ModelBase { public String selectSql(){ return "select ... from artist"; } } class TrackModel extends ModelBase { public String selectSql(){ return "select ... from track"; } } And you don't need to change the client code, and still have the power of polymorphism. // Client usage: ... List albums = ModelBase.findAll( Album.class ); // Does not know that, behind the scenes, you use instances. ... I hope this helps. A final note on using List vs. ArrayList. It is always better to program to the interface than to the implementation; this way you make your code more flexible. You can use another List implementation that is faster, or does something else, without changing your client code. A: Why not use annotations? They fit pretty well with what you're doing: adding meta-information (here an SQL query) to a class.
A: As suggested, you could use annotations, or you could move the static methods to factory objects: public abstract class BaseFactory<E> { public abstract String getSelectSQL(); public List<E> findAll(Class<E> clazz) { // Use getSelectSQL(); } } public class AlbumFactory extends BaseFactory<Album> { public String getSelectSQL() { return "select * from albums....."; } } But it is a bit of a smell to have objects without any state. A: If you are passing a Class to findAll, why can't you pass a class to getSelectSQL in ModelBase? A: asterite: do you mean that getSelectSQL exists only in ModelBase, and it uses the passed in class to make a tablename or something like that? I can't do that, because some of the Models have wildly differing select constructs, so I can't use a universal "select * from " + classToTableName();. And any attempt to get information from the Models about their select construct runs into the same problem from the original question - you need an instance of the Model or some fancy reflection. gizmo: I will definitely have a look into annotations. Although I can't help but wonder what people did with these problems before there was reflection? A: You could have your SQL methods as instance methods in a separate class. Then pass the model object into the constructor of this new class and call its methods for getting SQL. A: Wow - this is a far better example of something I asked previously in more general terms - how to implement properties or methods that are Static to each implementing class in a way that avoids duplication, provides Static access without needing to instantiate the class concerned and feels 'Right'. Short answer (Java or .NET): You can't. Longer answer - you can if you don't mind using a Class level annotation (reflection) or instantiating an object (instance method) but neither are truly 'clean'.
See my previous (related) question here: How to handle static fields that vary by implementing class I thought the answers were all really lame and missed the point. Your question is much better worded. A: I agree with Gizmo: you're either looking at annotations or some sort of configuration file. I'd take a look at Hibernate and other ORM frameworks (and maybe even libraries like log4j!) to see how they handle loading of class-level meta-information. Not everything can or should be done programmatically, I feel this may be one of those cases.
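For what it's worth, the annotation suggestion from the answers above can be sketched in plain Java. The @SelectSql annotation and the model classes below are made up for illustration, but the mechanism (a runtime-retained class-level annotation read reflectively by the base class) is what would replace the getDeclaredMethod(...).invoke(null, null) hack from the question:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch: each model class carries its SQL as class-level
// metadata, which the "ModelBase" side can read without instantiating it.
public class AnnotationOrmSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface SelectSql {
        String value();
    }

    @SelectSql("select * from albums")
    public static class Album {}

    @SelectSql("select id, name from artists")
    public static class Artist {}

    // What findAll(...) would call to get the per-class SQL.
    public static String getSelectSql(Class<?> modelClass) {
        SelectSql ann = modelClass.getAnnotation(SelectSql.class);
        if (ann == null) {
            throw new IllegalArgumentException(modelClass.getSimpleName() + " is not mapped");
        }
        return ann.value();
    }

    public static void main(String[] args) {
        System.out.println(getSelectSql(Album.class)); // prints select * from albums
    }
}
```

The annotation cannot be forced by the compiler the way an abstract method can, so unmapped classes only fail at runtime - the same trade-off the reflective-static-method approach already makes.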
{ "language": "en", "url": "https://stackoverflow.com/questions/151778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Enable local scripts and flash in ie7 I'm running an old little app which runs in the browser from local files, and I keep getting the "To help protect your security, Internet Explorer has restricted this webpage from running scripts or ActiveX controls that could access your computer" message. Is there a registry setting or something I can tweak to allow it to run automatically? I'm aware of Mark of the Web but it is not practical in this case (neither is running in Firefox, unfortunately). A: Internet Options > Advanced > Security > Allow active content to run in files on My Computer A: you need to add the local file system (file://*) to the safe websites in the security settings. A: It's in Internet Options > Advanced > Security > Allow active content to run in files on My Computer.
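Since the question asks for a registry setting: the checkbox above corresponds to IE's Local Machine Zone Lockdown feature control. The .reg sketch below is an assumption based on IE's documented feature-control keys (FEATURE_LOCALMACHINE_LOCKDOWN), not something verified against this exact setup; the Internet Options checkbox is the supported way to change it:

```reg
Windows Registry Editor Version 5.00

; Hypothetical sketch - verify on your own machine before relying on it.
; Opts iexplore.exe out of Local Machine Zone Lockdown, which is what blocks
; scripts/ActiveX in pages opened from the local file system.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_LOCALMACHINE_LOCKDOWN]
"iexplore.exe"=dword:00000000
```

Deploying this via script would avoid clicking through Internet Options on every machine, at the cost of weakening the lockdown for all local pages, not just this one app.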
{ "language": "en", "url": "https://stackoverflow.com/questions/151781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Will MS drop support for XP in .Net 4.* or 5.*? Does it matter to developers that the current and newer versions of .Net don't support Windows 2000? It scares me to think that several of my clients still use Windows 2000 and although I may decide to stop supporting Windows 2000 one day, I don't like that Microsoft is pushing it on people's products. Could anyone see Microsoft doing this with XP in the future to spur sales of Vista and later? Just to clarify, this is not a bashing of MS in any way, I love MS, but it is a genuine concern that I would like opinions on. In contrast I can't see C++0x implementors saying "it won't work on Windows 2000". I'm really trying to convince myself that I should be switching to .Net but this is one of my concerns. A: Supporting older operating systems costs money. It's not necessarily a push to spur sales of new systems so much as avoiding the cost of trying to make things work on old systems that they've already ceased supporting. Just as Windows 2000 support has ended, so will Windows XP support, and Vista support, and Windows 7 support, etc etc. Continuing to support the .NET framework on operating systems that are no longer supported in any other way does not seem prudent. EDIT: To address the notion that, since the CLR is the same for .NET 2.0 and the newer framework versions, the restriction was artificial: although it is still working on the same CLR, that doesn't mean that all the support they've added will effectively work on Windows 2000. There are performance and hardware considerations to be made, and I think considering the age of Windows 2000 and some of the more intensive features added to the 3.0 and 3.5 frameworks, it was a reasonable decision to abandon Windows 2k. Whenever we as developers consider supporting a particular user-base, we have to weigh the resources needed to support that additional user-base against the benefits of supporting them.
Testing, bug fixing, and support costs have to be factored in. As Windows 2000 is no longer given any security updates, they would need to resurrect an update mechanism just for .NET updates. I suspect that the benefits do not outweigh the costs in this scenario. It therefore makes sense to me that Microsoft should artificially prevent newer frameworks from running on Windows 2000, as they are then saving themselves these additional costs. A: Considering that Microsoft has a double interest in this matter (selling you the new OS and producing the .NET framework), I would be very suspicious. In actual fact, you will be able to support new .NET versions on older OSes using Mono, which is pretty much designed to be cross-platform and backwards-compatible. A: Since the question changed after my last response, I'll add that 3.0 and 3.5 support for Windows 2k wasn't dropped "without warning". There was plenty of indication this was happening before the betas were ended, so I don't think the question is really worded fairly in this regard. A: I guess it greatly depends upon the company. For example, I've been working with mixed IBM and Microsoft technologies and our customer has this AS400 platform which is very, very old (it doesn't even support transactions or relations in its database), but these big companies have invested a lot of time and money in their systems and they want to keep them like that. What we did was add a layer so they can use this information on a website. I don't see IBM leaving its customers behind; they still develop software components to connect to these old technologies for .Net, for example, and I believe Microsoft will do the same if they do research and find that they have many customers still using Windows 2000. You might not have all the features of the newest technologies, but at least I'm pretty sure they will maintain a layer of compatibility with their newest technologies.
It's not easy to tell a company of more than 10k employees and millions of dollars invested to just switch to the newest OS or database system; for them it wouldn't make sense, and believe me, even though Microsoft wants you to buy the most modern software, they won't stop supporting their old technologies, especially if these big companies pressure them to either keep their legacy systems compatible or buy another company's solution. A: No matter what technology stack you're using, there's always going to be a tension between "supporting the latest features" and "maintaining backwards compatibility". Where to make that tradeoff depends largely on the type of product you're building and the type of customers you have. I used to develop a warehouse management application using C++ and SQL, and we always had to support at least two versions back from the "current version" of SQL Server because our customers were extremely reluctant to upgrade. A: Well here's what I think: * *Windows 2000 is a 9-year-old product that will most likely lose support by next year, so that might be a good excuse to cease support for it *It is very very easy to install the .NET Framework *The .NET Framework has very little impact on disk space (~20 - 30 MB), so I don't think "pushing" it to clients is an issue in terms of HDD space *There are tons of programs out there that do use the .NET Framework, especially in enterprise environments, so there is a fair chance your clients already have it Honestly, I'm not really sure what you're worried about. BTW, there are ways to use the .NET 3.5 Framework features with only .NET 2.0 installed, and it has been covered in some SO questions already. A: If you look at recent technical innovations, notably netbooks based on Atom processors, I think XP will be with us for a while yet, as most of this kit doesn't run Vista. Similarly in the mobile market, outside of Windows CE variants, we have XP Embedded, not Vista.
While major manufacturers, such as Dell, are still introducing new kit that doesn't support Vista, XP is here to stay. A: Since I've gone through this recently, here are Microsoft's stated support guidelines. Lifecycle guidance. FYI, support for XP should go through at least 2010, and if they are willing to pay for support, possibly another few years. Will .Net 3[4].XX work on XP then? Possibly, but who is to know? Win2k is a very old system at this point and there are things that are just missing from the OS. Let it go.
{ "language": "en", "url": "https://stackoverflow.com/questions/151782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Which CPU architectures support Compare And Swap (CAS)? Just curious to know which CPU architectures support compare-and-swap atomic primitives? A: A different and easier way to answer this question may be to list multiprocessor platforms that do NOT support a compare and swap (or a load-link/store-conditional that can be used to write one). The only one I know of is PA-RISC, which only has an atomic clear word instruction. This can be used to construct a mutex (provided one aligns the word on a 16 byte boundary). There is no CAS on this architecture (unlike x86, ia64, ppc, sparc, mips, s390, ...) A: A few people commented/asked about whether the "lock" prefix is needed on x86/x64 for cmpxchg. The answer is yes for multicore machines. The instruction is completely atomic for single core machines without lock. It's been a while since I studied this stuff that deeply, but I seem to remember that the instruction is technically restartable - it can abort the instruction mid-flight (if it hasn't had any side effects yet) to avoid delaying interrupt handling for too long. A: Intel x86 has this support. IBM, in its Solaris to Linux Porting Guide, gives this example: bool_t My_CompareAndSwap(IN int *ptr, IN int old, IN int new) { unsigned char ret; /* Note that sete sets a 'byte' not the word */ __asm__ __volatile__ ( " lock\n" " cmpxchgl %2,%1\n" " sete %0\n" : "=q" (ret), "=m" (*ptr) : "r" (new), "m" (*ptr), "a" (old) : "memory"); return ret; } A: Starting with the ARMv6 architecture, ARM has the LDREX/STREX instructions that can be used to implement an atomic compare-exchange operation. A: Just to complete the list, MIPS has Load Linked (ll) and Store Conditional (sc) instructions which load a value from memory and later conditionally store it if no other CPU has accessed the location. It's true that you can use these instructions to perform swap, increment, and other operations.
However, the disadvantage is that with a large number of CPUs exercising locks very heavily you get into livelock: the conditional store will frequently fail and necessitate another loop to try again, which will fail, etc. The software mutex_lock implementation can become very complicated trying to implement an exponential backoff if these situations are considered important enough to worry about. In one system I worked on with 128 cores, they were. A: The x86 and Itanium have CMPXCHG (compare and exchange). A: Compare and swap was added to IBM mainframes in 1973. It (and compare double and swap) are still on the IBM mainframes (along with more recent multi-processor functions like PLO - perform locked operation). A: Sparc v9 has a cas instruction. The SPARC v9 architecture manual discusses the use of the CAS instruction in Annex J; look specifically at examples J.11 and J.12. I believe the name of the instruction is actually "casa", because it can access either the current address space or an alternate. "cas" is an assembler macro which accesses the current ASI. There is also an article on developers.sun.com discussing the various atomic instructions which Sparc processors have implemented over the years, including cas. A: PowerPC has more powerful primitives available: "lwarx" and "stwcx". lwarx loads a value from memory but remembers the location. Any other thread or cpu that touches that location will cause the "stwcx", a conditional store instruction, to fail. So the lwarx/stwcx combo allows you to implement atomic increment/decrement, compare and swap, and more powerful atomic operations like "atomic increment circular buffer index". A: Sorry for a lot of letters. :( Almost all instructions in the x86 ISA (except so-called string instructions, and maybe a few others), including CMPXCHG, are atomic in the context of a unicore CPU.
This is because, according to the x86 architecture, the CPU checks for pending interrupts after each instruction completes, and never in the middle. As a result, an interrupt request can be detected, and its handling launched, only on the boundary between the execution of two consecutive instructions. Because of this, all memory references made by the CPU during the execution of a single instruction are isolated and can't be interleaved with any other activity. That behavior is common to unicore and multicore CPUs. But while in a unicore CPU there is only one unit of the system that accesses memory, in a multicore CPU there is more than one unit of the system accessing memory simultaneously. Instruction isolation isn't enough for consistency in such an environment, because memory accesses made by different CPUs at the same time can interleave with each other. Because of this, an additional protection layer must be applied to the data-changing protocol. For the x86 this layer is the lock prefix, which initiates an atomic transaction on the system bus. Summary: It is safe and less costly to use sync instructions like CMPXCHG, XADD, BTS, etc. without the lock prefix if you are assured that the data accessed by the instruction can be accessed by only one core. If you are not sure of this, apply the lock prefix to provide safety, trading off performance. There are two major approaches to hardware synchronization support in a CPU: * *Atomic transaction based. *Cache coherence protocol based. Neither is a silver bullet. Both approaches have their advantages and disadvantages. The atomic-transaction-based approach relies on support for a special type of transaction on the memory bus. During such a transaction only one agent (CPU core) connected to the bus is eligible to access memory. As a result, on the one hand, all memory references made by the bus owner during an atomic transaction are assured to be made as a single uninterruptible transaction.
On the other hand, all other bus agents (CPU cores) are forced to wait for the atomic transaction to complete before they get back the ability to access memory. It doesn't matter what memory cells they want to access, even if they want to access a memory region that is not referenced by the bus owner during the atomic transaction. As a result, extensive use of lock-prefixed instructions will slow down the system significantly. Then again, because the bus arbiter gives each bus agent access to the bus according to round-robin scheduling, there is a guarantee that each bus agent will have relatively fair access to memory, and all agents will be able to make progress at the same speed. In addition, the ABA problem comes into play with atomic transactions, because by their nature atomic transactions are very short (a few memory references made by a single instruction), and all actions taken on memory during a transaction rely only on the value of the memory region, without taking into account whether that memory region was accessed by someone else between two transactions. A good example of atomic-transaction-based sync support is the x86 architecture, in which lock-prefixed instructions force the CPU to execute them in atomic transactions. The cache-coherence-protocol-based approach relies on the fact that a memory line can be cached in only one L1 cache at any instant in time. The memory access protocol in a cache coherence system is similar to the following sequence of actions: * *CPU A stores memory line X in its L1 cache. At the same time CPU B desires to access memory line X. (X --> CPU A L1) *CPU B issues a memory line X access transaction on the bus. (X --> CPU A L1) *All bus agents (CPU cores) have a so-called snooping agent that listens to all transactions on the bus and checks whether the memory line whose access was requested by a transaction is stored in its owner CPU's L1 cache. So, CPU A's snooping agent detects that CPU A owns the memory line requested by CPU B.
(X --> CPU A L1) *CPU A suspends the memory access transaction issued by CPU B. (X --> CPU A L1) *CPU A flushes the memory line requested by B from its L1 cache. (X --> memory) *CPU A resumes the previously suspended transaction. (X --> memory) *CPU B fetches memory line X from memory. (X --> CPU B L1) Thanks to that protocol, a CPU core always accesses the actual data in memory, and accesses to memory are serialized in strict order, one access at a time. Cache-coherence-protocol-based sync support relies on the fact that the CPU can easily detect that a particular memory line was accessed between two points in time. During the first memory access to line X, which opens the transaction, the CPU can mark that the memory line in the L1 cache must be watched by the snooping agent. In its turn, the snooping agent can, during a cache line flush, additionally check whether that line is marked for watching, and raise an internal flag if a watched line is flushed. As a result, if the CPU checks the internal flag during the memory access that closes the transaction, it will know whether the watched memory line could have been changed by someone else, and conclude whether the transaction must be considered a success or a failure. This is the way the LL/SC instruction class is implemented. This approach is simpler than atomic transactions and provides much more flexibility in synchronization, because a much greater number of different sync primitives can be built on its basis in comparison with the atomic transactions approach. This approach is more scalable and efficient, because it doesn't block access to memory for all other parts of the system. And as you can see, it solves the ABA problem, because it is based on detecting access to the memory region, not on detecting a change of the memory region's value. Any access to the memory region participating in an ongoing transaction will be considered a transaction failure.
And this can be good and bad at the same time, because a particular algorithm may be interested only in the value of the memory region and not care whether that location was accessed by someone in the middle, as long as that access doesn't change the memory. In that case a read of the memory value in the middle will lead to a false-negative transaction failure. In addition, that approach can lead to huge performance degradation for control flows contending on the same memory line, because they are able to constantly steal the memory line from each other, thereby preventing each other from completing the transaction successfully. That is a really significant problem, because in the terminal case it can put the system into livelock. Cache-coherence-protocol-based sync support is usually used in RISC CPUs, because of its simplicity and flexibility. But it must be noted that Intel decided to support such an approach for synchronization in the x86 architecture too. Last year Intel announced the Transactional Synchronization Extensions to the x86 architecture, which will be implemented in the Haswell generation of Intel processors. As a result, it looks like x86 will have the most powerful support for synchronization and will allow system developers to use the advantages of both approaches.
{ "language": "en", "url": "https://stackoverflow.com/questions/151783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Unit Testing the Views? Any idea on how to unit test the views in ASP.NET MVC? I am sick of encountering the yellow screen of death when I launch my MVC project just because I forget to update the views when one of the Action methods of my controller changes name. A: You could write integration tests using WatiN, but if you just need a quick check to see if you've any errors in your views, you could also try the solution mentioned in this post: How can I compile ASP.NET pages before loading them with a webserver. Prebuild your aspx pages and you're good to go! A: Set <MvcBuildViews> to true in your project file and the compiler will let you know about this sort of problem whenever you build. This assumes that your project file also contains the following section (gets added automatically in ASP.NET MVC 1.0) <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'"> <AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)\..\$(ProjectName)" /> </Target> A: Well, in addition to Stephen Walther's blog entry noted by AugustLights, there are some other options as well... Jim Zimmerman talks on his blog about some code he wrote to dynamically precompile his ASP.NET MVC view pages to find any simple mistakes. You could also use the Spark View Engine, which has the feature of precompilation, instead of using the default ASPX View Engine that is enabled by ASP.NET MVC. This is a pretty dramatic change and might not be an option for you ;) If you want more information about Spark or other View Engines, Scott Hanselman has a great blog post describing them and what they might look like. A: S. Walther has something that addresses this, but it looks a little cumbersome... http://stephenwalther.com/archive/2008/07/26/asp-net-mvc-tip-25-unit-test-your-views-without-a-web-server I'm gonna look into this some more...seems like a reasonable thing to do... A: T4MVC to avoid magic values Your problem boils down to magic strings, doesn't it?
So why not try solving those? T4MVC is about exactly this: avoiding those nasty magic strings. So whenever you'd rename an action method, your compilation would break because old action references wouldn't exist any more. Unit testing a view? Testing static, non-functional things? Unit testing is about testing functional aspects of your application and not static ones (in your case that would be links; in someone else's case it would be, I don't know... CSS classes or particular HTML elements on the page). That just doesn't convince me to unit test views... So instead of unit testing your views, solve your problem at its root.
{ "language": "en", "url": "https://stackoverflow.com/questions/151794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Horizontal lines when clicking the button in Firefox I have a CSS problem in Firefox 3. I have several image buttons on my page, and when I click on them I see two horizontal lines across the screen at the top and bottom button borders. I've seen this issue on other web sites, so I think it's a known problem. A: This is because of the outline style in CSS - the idea is that it highlights the element in focus so that people with disabilities notice it better. You can set the outline style to none in the CSS and this should fix it.
{ "language": "en", "url": "https://stackoverflow.com/questions/151795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Sqlite update field if it contains Given a database field named "widget_ids", containing data like "67/797/124/" or "45/", where the numbers are slash separated widget_ids... how would you make an update statement with SQL that would say: "if the widget_ids of the row with id X contains the text "somenumber/" do nothing, otherwise append "somenumber/" to its current value" Can you do something like that with SQL, or more specifically, sqlite? Is that something that is better done in the program for some reason or is there support for "if-then" like syntax in SQL? A: Updates are kind of like if-thens themselves, and there is also if-then support of some sort in most SQL implementations. A simple solution might be: update <tablename> set widget_ids = widget_ids || 'somenumber/' where row_id = X and widget_ids not like '%/somenumber/%' and widget_ids not like 'somenumber/%'; A: First, get rid of the symbol-separated list. Use another table, with one widget id per row. CREATE TABLE ThingieWidgets ( thingie_id INT REFERENCES Thingies, widget_id INT REFERENCES Widgets, PRIMARY KEY(thingie_id, widget_id) ); Fill the table with values from the slash-separated list: INSERT INTO ThingieWidgets (thingie_id, widget_id) VALUES (1234, 67), (1234, 797), (1234, 124); Now you can test if Thingie 1234 references Widget 45: SELECT * FROM ThingieWidgets WHERE thingie_id = 1234 AND widget_id = 45; You can try to insert and recover if there's a duplicate key error: INSERT INTO ThingieWidgets (thingie_id, widget_id) VALUES (1234, 45);
{ "language": "en", "url": "https://stackoverflow.com/questions/151800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: System.IO.FileSystemWatcher to monitor a network-server folder - Performance considerations I want to watch a folder tree on a network server for changes. The files all have a specific extension. There are about 200 folders in the tree and about 1200 files with the extension I am watching. I can't write a service to run on the server (off-limits!) so the solution has to be local to the client. Timeliness is not particularly important. I can live with a minute or more delay in notifications. I am watching for Create, Delete, Rename and Changes. Would using the .NET System.IO.FileSystemWatcher create much of a load on the server? How about 10 separate watchers to cut down the number of folders/files being watched? (down to 200 from 700 folders, 1200 from 5500 files in total) More network traffic instead of less? My thoughts are a reshuffle on the server to put the watched files under 1 tree. I may not always have this option, hence the team of watchers. I suppose the other solution is a periodic check if the FSW creates an undue load on the server, or if it doesn't work for a whole bunch of SysAdmin type reasons. Is there a better way to do this? A: From a server load point of view, using the IO.FileSystemWatcher for remote change notifications in the scenario you describe is probably the most efficient method possible. It uses the FindFirstChangeNotification and ReadDirectoryChangesW Win32 API functions internally, which in turn communicate with the network redirector in an optimized way (assuming standard Windows networking: if a third-party redirector is used, and it doesn't support the required functionality, things won't work at all). The .NET wrapper also uses async I/O and everything, further assuring maximum efficiency. The only problem with this solution is that it's not very reliable.
Other than having to deal with network connections going away temporarily (which isn't too much of a problem, since IO.FileSystemWatcher will fire an error event in this case which you can handle), the underlying mechanism has certain fundamental limitations. From the MSDN documentation for the Win32 API functions: * *ReadDirectoryChangesW fails with ERROR_INVALID_PARAMETER when the buffer length is greater than 64 KB and the application is monitoring a directory over the network. This is due to a packet size limitation with the underlying file sharing protocols *Notifications may not be returned when calling FindFirstChangeNotification for a remote file system In other words: under high load (when you would need a large buffer) or, worse, under random unspecified circumstances, you may not get the notifications you expect. This is even an issue with local file system watchers, but it's much more of a problem over the network. Another question here on SO details the inherent reliability problems with the API in a bit more detail. When using file system watchers, your application should be able to deal with these limitations. For example: * *If the files you're looking for have sequence numbers, store the last sequence number you got notified about, so you can look for 'gaps' on future notifications and process the files for which you didn't get notified; *On receiving a notification, always do a full directory scan. This may sound really bad, but since the scan is event-driven, it's still much more efficient than dumb polling. Also, as long as you keep the total number of files in a single directory, as well as the number of directories to scan, under a thousand or so, the impact of this operation on performance should be pretty minimal anyway. Setting up multiple listeners is something you should avoid as much as possible: if anything, this will make things even less reliable... 
Anyway, if you absolutely have to use file system watchers, things can work OK as long as you're aware of the limitations, and don't expect 1:1 notification for every file modified/created. So, if you have other options (essentially, having the process writing the files notify you in a non-file system based way: any regular RPC method will be an improvement...), those are definitely worth looking into from a reliability point of view. A: In my experience, an FSW does not create high network traffic. However, if there is a performance problem, your approach of using several watchers and breaking it down to fewer folders being watched sounds reasonable. I had some big problems with FSW on network drives, though: Deleting a file always threw the error event, never the deleted event. I did not find a solution, so I now avoid using FSW if there is a way around it... A: The MSDN documentation indicates that you can use the FileSystemWatcher component to watch for filesystem changes on a network drive. It also indicates that the watcher component listens for file system change notifications rather than periodically interrogating the target drive for changes. Based on that the amount of network traffic depends entirely on how much you expect the contents of that network drive to change. The FSW component will not add to the level of network traffic. A: I've used the file system watchers from C# a number of times. The first time I used them, I had problems with them stopping working, mainly due to the fact that I was processing the changes in the thread that reported the change. Now however, I just push the change onto a queue and process the queue on another thread. This seems to solve the problem I originally had. For your problem, you could have multiple watchers pushing onto the same queue. However, I haven't used this with your sort of scale of problem. A: Watcher looks 100% reliable - just watch the buffer size on the watcher object. 
I've tested thousands of file updates, none lost. I recommend using a multi-threaded approach - the trigger being the file watcher. It can launch a thread for each file change detected. The watcher can then process much faster, with less chance of overflow (use an async thread). A: After using System.IO.FileSystemWatcher for some time, I find it is not stable enough to handle events that come in too fast. To ensure 100% of the files are read, I use simple directory methods to search through the files. After reading a file, I immediately copy it to another folder, to isolate it from new files being added while you are reading. A timer is used to read the folder regularly. By copying an already-read file to the archive folder, this makes sure it will not be read again; subsequent reads will always pick up new files. var fileNames = Directory.GetFiles(srcFolder); foreach (string fileName in fileNames) { string[] lines = File.ReadAllLines(fileName); } A: I wouldn't think there's any sort of active state or communication between the computer with the FSW and the computer whose location is being monitored. In other words, the FSW isn't pinging the networked OS to check on the file. One would imagine that a message or event is only raised/sent to the networked FSW when a change occurs. But this is all just speculation. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/151804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Search for information on building large enterprise systems How do you organize DB layer, business logic and cross-platform API of your information management system, if uploading and processing 500000 data records in one session is a normal operation (C# .NET 3.5 + MS SQL 2005)? I’m specifically interested in production-proven paging patterns that behave well with respect to concurrency, scalability and reliability. Does anybody have any ideas in what direction to dig? * *Open Source Projects (don’t care about the language or platform, as long as it is not Ook) *books *articles *Google keywords *forums or newsgroups Any help would be greatly appreciated! Update: * *simple paging (i.e.: rownumber in SQL 2005) does not work, since there are a lot of concurrent changes to the database. An item that is deleted or inserted between page requests automatically makes the current page index invalid. A: This is a good book to start with: Patterns of Enterprise Application Architecture by Martin Fowler A: When it comes to DB optimization for huge amounts of data you’ll most probably benefit from using the “BigTable” technique. I found the article here very useful. In short, the idea is to use DB denormalization to trade disk space for better performance. For paging in MS SQL 2005 you’ll want to find more info on using the ROW_NUMBER function. Here is just a simple example; you’ll find tons of them using Google (keywords: ROW_NUMBER paging SQL 2005). Do not dig too much though – there is no magic in the implementation, rather in how you are going to use/present the paging itself. Google search is a good example. Note: we found NHibernate framework native paging support not sufficient for our solution. Also you’ll probably be interested in creating a FULLTEXT index and using full text search. Here is an MSDN article on creating a full text index, and some info on full text search. Good luck. A: Done the implementation. I have been informed recently that one of the uploads was about 2148849 records.
The tiers successfully dealt with a couple of broken connections and dozens of deadlocks at the DB level during this upload. In case somebody else needs some info: * *Product page *Exception Handling Action Policies A: dandikas, thank you for mentioning the partial denormalization. Yes, that's the approach I'm considering for improving performance of some queries. Unfortunately, the NHibernate ORM does not fit into the solution, due to the performance overhead it adds. Same with the SQL paging - it does not work in the scenario of numerous concurrent edits (as detected by stress testing). A: I look after an enterprise data warehouse which uploads some feeds of hundreds of thousands of records. I'm not sure if this is your scenario, but we: * *Receive text files which we upload to a Sybase database. *Format the different feeds using awk so they're in a common format. *Load them into a denormalised intermediate table using bcp. *Run stored procedures to populate the normalised database structure. *Delete from the denormalised intermediate table. This runs fairly well, but we force our uploads to be sequential. I.e. when feeds arrive they go into a queue, and we process the feed at the head of the queue entirely before looking at the rest. Is any of that helpful? A: Same with the SQL paging - it does not work in the scenario of numerous concurrent edits (as detected by the stress-testing) As I mentioned, there is no magic in implementing paging – you either use ROW_NUMBER or a temporary table. The magic here is in evaluating what your most common real-world usage scenario is. Using a temporary table along with user tracking might help a bit in overcoming the concurrent edits scenario. Though I sense that you’ll win more by answering these questions: * *How long does the user stay on one page before moving to another one? *How often does the user move from the first page to any other page? *What is the typical number of pages a user will look through?
*How critical is it if some information changes while the user is moving from one page to another and back? *How critical is it if some information gets deleted while the user is on the page that shows it? Try not to concentrate on questions like “How to handle any possible concurrent edits scenario while paging?”; answer the questions above first, and then handle only the situations that really matter. Another note is the UI. Check out as many paging UIs as you can find, as there are much better solutions than just right and left arrows or lined-up page numbers. Some of these solutions help hide or overcome paging scenarios that are not technically solvable. P.S. If this answer is useful I’ll combine it with my first one.
{ "language": "en", "url": "https://stackoverflow.com/questions/151812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is Occam-pi a good language to learn to program LEGO MINDSTORMS & Surveyor Corporation SRV-1? Is Occam-pi a good language for a programming newbie to learn to program LEGO MINDSTORMS & Surveyor Corporation SRV-1 robots? Are there any open-source projects making use of it, so I can read their source code? URL for occam-pi: http://www.transterpreter.org/ A: I have used occam-pi on the Transterpreter and it seems to do a good job. You may want to check this blog out. It is written by one of the developers. If you give Matt an email he may be able to point you in the right direction for material. A: I would echo Modan's earlier comments. Occam is in one sense a very good language - it does explicit concurrency in a reliable, robust way that is quite possibly second to none. But it is not a general purpose programming language, unfortunately. Nor is it simple to learn if you've only ever done languages like C and Java. It requires a different mental approach, and this is part of what makes it so good at concurrency, particularly in embedded systems such as the NXT. The necessary thinking is more akin to that used by hardware designers than by most programmers (in particular, OO programmers may struggle with its rejection of reference aliasing - one of the things that allows Occam to guarantee correct concurrent behaviour; more detail can be found here). The necessary way of thinking is more like that needed for a certain plastic brick construction toy product. So in summary it's a good choice ... but unfortunately one that would frustrate a large number of inexperienced users. Try it if you fancy a challenging adventure! A: I am finding Occam-pi is THE robotics programming language to use, without fail. It is intuitive in the way that other languages are not, when considering active robots that sense and act at the same time--in parallel. Programming in Occam-pi is like wiring up the physical robot.
You know which hardware components do what, so you connect them to the right place. A similar thinking style occurs while programming in concurrent programming languages, like Occam-pi. You figure out how the particular process you need must be written in order to function, and then you connect it to the other processes via Channels (much like wires). In order to do the same things in languages like C, C++ and Java on microcontrollers, it is necessary to fight with such beasts as timer interrupts, volatile variables, and intricately woven 'for' loops. Put simply, Occam-pi simplifies robotics programming immensely. bringFire A: If you're interested in parallel programming on the SRV-1, I can say that yes, Occam-Pi is great. Matt Jadud (one of the developers of Occam-Pi) was a professor of mine a couple of years back, and we worked almost exclusively with the Occam-Pi/SRV-1 combination. It has its quirks (or at least it did at the time) but we were largely able to resolve them. It isn't that bad of a language to learn for a new programmer (it was my first language, and I'm doing alright!) I definitely recommend pinging Matt with any questions; he's very easy to get a response from. I'm also happy to answer any questions you may have, you can email me at: bp at brdpwrs dot com Good luck! A: Occam is definitely not a language for a programming newbie. I would recommend a newbie try and gain skills that will be usable in multiple situations and help you in the future as your career/hobby progresses. The other recommendation I would give is to learn in an area where there is a vibrant community of fellow developers to learn from. If you choose the right language you will find friendly tutorials to ease you into the process. That being said, my experience of Occam is from nearly 10 years ago, and there is still part of me that would like to go back to it and have a play again.
It is a very rewarding experience when it works, although infuriating when you have a bug that slowly degrades in performance as your processes get blocked. I would recommend you take the time to learn Occam only once you are already experienced at programming Lego Mindstorms, and even then only if you have a lot of time and patience. If you do get the chance, it is a great language, definitely the best I have come across for highly parallel programming. I doubt it will ever become mainstream though. A: Probably not. It's a research language, so there won't be the support for what you want. Sorry. The LEGO MINDSTORMS Education NXT Software is probably a better bet for a programming newbie. The SRV-1 is cool - but doesn't really have the supporting resources for beginners. Good luck! A: Well, let's put it this way: it's not like you need to interface with a database ALL THE TIME. I think you might find that it is worth learning a research language in order to master new features in other languages. I should write more, but there is no time. A: I used 'Not Quite C' (NQC) back when the original Mindstorms came out - proper programming syntax but pretty easy to use.
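As a rough illustration of the channel-and-process style described above, here is a Python sketch with threads and queues standing in for Occam processes and channels. This only mimics the model (Python has no compile-time concurrency checks, and the process names here are invented); it simply shows the "wire up processes with channels" way of thinking:

```python
import threading
import queue

def producer(out_ch):
    """Writes values to its output channel, like `out ! v` in Occam."""
    for v in range(3):
        out_ch.put(v)
    out_ch.put(None)  # sentinel: end of stream

def doubler(in_ch, out_ch):
    """Reads from one channel (`in ? v`) and writes doubled values to another."""
    while (v := in_ch.get()) is not None:
        out_ch.put(v * 2)
    out_ch.put(None)

# "Wire up" the processes with channels, much like connecting hardware.
a, b = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b))]
for t in workers:
    t.start()

results = []
while (v := b.get()) is not None:
    results.append(v)
for t in workers:
    t.join()
# results is now [0, 2, 4]
```

Real Occam makes these channels first-class and verifies their use; the sketch only conveys the shape of the design.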
{ "language": "en", "url": "https://stackoverflow.com/questions/151825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: High-level Compare And Swap (CAS) functions? I'd like to document what high-level (i.e., C++, not inline assembler) functions or macros are available for Compare And Swap (CAS) atomic primitives... E.g., WIN32 on x86 has a family of functions _InterlockedCompareExchange in the <intrin.h> header. A: glib, a common system library on Linux and Unix systems (but also supported on Windows and Mac OS X), defines several atomic operations, including g_atomic_int_compare_and_exchange and g_atomic_pointer_compare_and_exchange. A: GCC has some built-ins for atomic accesses, too. A: On Solaris there is "atomic.h" (i.e. <sys/atomic.h>). A: MacOS X has OSAtomic.h A: There have been a series of working group papers on this subject proposing changes to the C++ Standard Library. WG N2427 (C++ Atomic Types and Operations) is the most recent, which contributes to section 29 -- Atomic operations library -- of the pending standard. A: I'll let others list the various platform-specific APIs, but for future reference in C++09 you'll get the atomic_compare_exchange() operation in the new "Atomic operations library". A: Java has this CAS operation too (see here); there are practical uses for this, like a lock-free hashtable used in multiprocessor systems
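For readers unfamiliar with the primitive itself, the compare-and-swap contract can be sketched in a few lines. This Python illustration uses a lock, because Python exposes no native CAS; it shows only the semantics, not a lock-free implementation:

```python
import threading

class AtomicRef:
    """Illustrates the compare-and-swap contract: the stored value is
    replaced only if it still equals the caller's expected value."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._lock:  # real CAS does this step atomically in hardware
            if self._value == expected:
                self._value = new
                return True
            return False

ref = AtomicRef(5)
ok = ref.compare_and_swap(5, 7)     # succeeds: value was 5, becomes 7
stale = ref.compare_and_swap(5, 9)  # fails: value is now 7, not 5
```

The "fails when stale" branch is what lets lock-free algorithms retry safely when another thread got there first.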
{ "language": "en", "url": "https://stackoverflow.com/questions/151841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Unicode Characters that can be used to trick a string sorter? Since Unicode lacks a series of zero width sorting characters, I need to determine equivalent characters that will allow me to force a certain order on a list that is automatically sorted by character values. Unfortunately the list items are not in an alphabetical order, nor is it acceptable to prefix them with visible characters to ensure the result of the sort matches the wanted outcome. What Unicode characters can be thrown in front of regular Latin alphabet text, and will not appear, but still allow me to "spike" the sort in the way I require? (BTW this is being done with Drupal 5 with a user profile list field. Don't bother suggesting changing that to a vocabulary/category.) A: Zero-width space (U+200B) should probably do what you want. From the Unicode spec: Zero Width Space. The U+200B ZERO WIDTH SPACE indicates a line break opportunity, except that it has no width. Zero-width space characters are intended to be used in languages that have no visible word spacing to represent line break opportunities, such as Thai, Khmer, and Japanese. Should be in most fonts you run into, but YMMV. A: Personally, I just prefer to use a primary/secondary sort key. It's less kludgy, and easy to implement in a typical sql query (ORDER BY column_a,column_b). Edited to add: In Php, you could use usort(array, comparisonFunction) with a custom comparison function to add additional logic for sorting, if you can't use SQL to do the trick. However, if you only have one column to work with and that's unfixable, just prefix with a certain number of unlikely characters like underscores for sorting, then strip them just before you display them. (using regexp substitution or similar). 
Unicode-based hacks will depend heavily on what fonts are used, what locale's collation/sorting order you're using, and may produce undesirable side effects on clients you don't have control over (different browsers, different oses, different client locales). Most "unprintable" characters yield the "unknown character" when displayed on systems without support for them, which usually looks like an empty square. There are some zero-width characters used for languages like Arabic, but they shouldn't affect sorting except in applications with very perverse Unicode support.
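As a concrete sketch of the zero-width-space trick (Python, plain code-point ordering; the item names are invented, and as noted above a locale-aware collator may ignore or strip these characters):

```python
ZWSP = "\u200b"  # U+200B ZERO WIDTH SPACE

desired = ["Cherry", "Apple", "Banana"]  # the order we want to force
# Prefix each item with an increasing run of zero-width spaces; since
# U+200B compares greater than any ASCII letter, more prefixes sort later.
spiked = [ZWSP * i + name for i, name in enumerate(desired)]

assert sorted(spiked) == spiked  # the spike survives a plain code-point sort
labels = [s.lstrip(ZWSP) for s in sorted(spiked)]  # strip prefixes for display
# labels is now ["Cherry", "Apple", "Banana"]
```

This works for any sorter that compares raw code points; as warned above, results under locale-aware collation are not guaranteed.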
{ "language": "en", "url": "https://stackoverflow.com/questions/151844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Get other running processes window sizes in Python This isn't as malicious as it sounds, I want to get the current size of their windows, not look at what is in them. The purpose is to figure out that if every other window is fullscreen then I should start up like that too. Or if all the other processes are only 800x600 despite there being a huge resolution then that is probably what the user wants. Why make them waste time and energy resizing my window to match all the others they have? I am primarily a Windows developer but it wouldn't upset me in the least if there was a cross platform way to do this. A: Check out the win32gui module in the Windows extensions for Python. It may provide some of the functionality you're looking for. A: Using hints from WindowMover article and Nattee Niparnan's blog post I managed to create this: import win32con import win32gui def isRealWindow(hWnd): '''Return True iff given window is a real Windows application window.''' if not win32gui.IsWindowVisible(hWnd): return False if win32gui.GetParent(hWnd) != 0: return False hasNoOwner = win32gui.GetWindow(hWnd, win32con.GW_OWNER) == 0 lExStyle = win32gui.GetWindowLong(hWnd, win32con.GWL_EXSTYLE) if (((lExStyle & win32con.WS_EX_TOOLWINDOW) == 0 and hasNoOwner) or ((lExStyle & win32con.WS_EX_APPWINDOW != 0) and not hasNoOwner)): if win32gui.GetWindowText(hWnd): return True return False def getWindowSizes(): ''' Return a list of tuples (handler, (width, height)) for each real window. ''' def callback(hWnd, windows): if not isRealWindow(hWnd): return rect = win32gui.GetWindowRect(hWnd) windows.append((hWnd, (rect[2] - rect[0], rect[3] - rect[1]))) windows = [] win32gui.EnumWindows(callback, windows) return windows for win in getWindowSizes(): print win You need the Win32 Extensions for Python module for this to work. EDIT: I discovered that GetWindowRect gives more correct results than GetClientRect. Source has been updated. A: I'm a big fan of AutoIt.
They have a COM version which allows you to use most of their functions from Python. import win32com.client oAutoItX = win32com.client.Dispatch( "AutoItX3.Control" ) oAutoItX.Opt("WinTitleMatchMode", 2) #Match text anywhere in a window title width = oAutoItX.WinGetClientSizeWidth("Firefox") height = oAutoItX.WinGetClientSizeHeight("Firefox") print width, height A: I updated the GREAT @DZinX code adding the title/text of the windows: import win32con import win32gui def isRealWindow(hWnd): '''Return True iff given window is a real Windows application window.''' if not win32gui.IsWindowVisible(hWnd): return False if win32gui.GetParent(hWnd) != 0: return False hasNoOwner = win32gui.GetWindow(hWnd, win32con.GW_OWNER) == 0 lExStyle = win32gui.GetWindowLong(hWnd, win32con.GWL_EXSTYLE) if (((lExStyle & win32con.WS_EX_TOOLWINDOW) == 0 and hasNoOwner) or ((lExStyle & win32con.WS_EX_APPWINDOW != 0) and not hasNoOwner)): if win32gui.GetWindowText(hWnd): return True return False def getWindowSizes(): '''Return a list of tuples (handler, (width, height), text) for each real window.''' def callback(hWnd, windows): if not isRealWindow(hWnd): return rect = win32gui.GetWindowRect(hWnd) text = win32gui.GetWindowText(hWnd) windows.append((hWnd, (rect[2] - rect[0], rect[3] - rect[1]), text )) windows = [] win32gui.EnumWindows(callback, windows) return windows for win in getWindowSizes(): print(win) A: I modified someone's code; this will help you run another application and get its PID. import win32process import subprocess import win32gui import time def get_hwnds_for_pid (pid): def callback (hwnd, hwnds): if win32gui.IsWindowVisible (hwnd) and win32gui.IsWindowEnabled (hwnd): _, found_pid = win32process.GetWindowThreadProcessId (hwnd) if found_pid == pid: hwnds.append (hwnd) return True hwnds = [] win32gui.EnumWindows (callback, hwnds) return hwnds # This is the process whose window size I want to get.
notepad = subprocess.Popen ([r"C:\\Users\\dniwa\\Adb\\scrcpy.exe"]) time.sleep (2.0) while True: for hwnd in get_hwnds_for_pid (notepad.pid): rect = win32gui.GetWindowRect(hwnd) print(hwnd, "=>", win32gui.GetWindowText (hwnd)) # You need to test whether the resolution really comes out exactly, because mine doesn't. # I use a 16:9 monitor; calculate the percentage using (x * .0204082) and (y * .0115774) print((hwnd, (rect[2] - rect[0], rect[3] - rect[1]))) x = rect[2] - rect[0] y = rect[3] - rect[1] print(type(x), type(y)) time.sleep(1)
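The visibility and ownership checks in isRealWindow need live window handles, but the extended-style test at its heart is pure logic and can be sketched (and unit-tested) without Win32. The flag values below are the standard winuser.h constants; the function name is mine:

```python
# Extended window style bits (from winuser.h)
WS_EX_TOOLWINDOW = 0x00000080
WS_EX_APPWINDOW = 0x00040000

def looks_like_app_window(ex_style, has_owner, has_title):
    """Pure-logic version of the style test used by isRealWindow above."""
    if not has_title:
        return False
    has_no_owner = not has_owner
    if (ex_style & WS_EX_TOOLWINDOW) == 0 and has_no_owner:
        return True  # ordinary unowned top-level window
    if (ex_style & WS_EX_APPWINDOW) != 0 and not has_no_owner:
        return True  # owned window that explicitly asks for a taskbar button
    return False

plain = looks_like_app_window(0, has_owner=False, has_title=True)    # True
tool = looks_like_app_window(WS_EX_TOOLWINDOW, False, True)          # False
forced = looks_like_app_window(WS_EX_APPWINDOW, True, True)          # True
```

Separating the predicate this way also makes it easy to tweak the heuristic (for example, to include tool windows) without touching the EnumWindows plumbing.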
{ "language": "en", "url": "https://stackoverflow.com/questions/151846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Why would you use an assignment in a condition? In many languages, assignments are legal in conditions. I never understood the reason behind this. Why would you write: if (var1 = var2) { ... } instead of: var1 = var2; if (var1) { ... } ? A: The short answer is that expression-oriented programming languages allow more succinct code. They don't force you to separate commands from queries. A: I find it most useful in chains of actions which often involve error detection, etc. if ((rc = first_check(arg1, arg2)) != 0) { report error based on rc } else if ((rc = second_check(arg2, arg3)) != 0) { report error based on new rc } else if ((rc = third_check(arg3, arg4)) != 0) { report error based on new rc } else { do what you really wanted to do } The alternative (not using the assignment in the condition) is: rc = first_check(arg1, arg2); if (rc != 0) { report error based on rc } else { rc = second_check(arg2, arg3); if (rc != 0) { report error based on new rc } else { rc = third_check(arg3, arg4); if (rc != 0) { report error based on new rc } else { do what you really wanted to do } } } With protracted error checking, the alternative can run off the RHS of the page whereas the assignment-in-conditional version does not do that. The error checks could also be 'actions' — first_action(), second_action(), third_action() — of course, rather than just checks. That is, they could be checked steps in the process that the function is managing. (Most often in the code I work with, the functions are along the lines of pre-condition checks, or memory allocations needed for the function to work, or along similar lines). A: It's more useful if you are calling a function: if (n = foo()) { /* foo returned a non-zero value, do something with the return value */ } else { /* foo returned zero, do something else */ } Sure, you can just put the n = foo(); on a separate statement then if (n), but I think the above is a fairly readable idiom. 
A: In PHP, for example, it's useful for looping through SQL database results: while ($row = mysql_fetch_assoc($result)) { // Display row } This looks much better than: $row = mysql_fetch_assoc($result); while ($row) { // Display row $row = mysql_fetch_assoc($result); } A: It can be useful if you're calling a function that returns either data to work on or a flag to indicate an error (or that you're done). Something like: while ((c = getchar()) != EOF) { // process the character } // end of file reached... Personally it's an idiom I'm not hugely fond of, but sometimes the alternative is uglier. A: The other advantage comes during the usage of GDB. In the following code, the error code is not known if we were to single step. while (checkstatus() != -1) { // Process } Rather while (true) { int error = checkstatus(); if (error != -1) // Process else // Fail } Now during single stepping, we can know what was the return error code from the checkstatus(). A: GCC can help you detect (with -Wall) if you unintentionally try to use an assignment as a truth value, in case it recommends you write if ((n = foo())) { ... } I.e. use extra parenthesis to indicate that this is really what you want. A: It's more useful for loops than if statements. while(var = GetNext()) { ...do something with 'var' } Which would otherwise have to be written var = GetNext(); while(var) { ...do something var = GetNext(); } A: The idiom is more useful when you're writing a while loop instead of an if statement. For an if statement, you can break it up as you describe. But without this construct, you would either have to repeat yourself: c = getchar(); while (c != EOF) { // ... c = getchar(); } or use a loop-and-a-half structure: while (true) { c = getchar(); if (c == EOF) break; // ... } I would usually prefer the loop-and-a-half form. 
A: I find it very useful with functions returning optionals (boost::optional or std::optional in C++17): std::optional<int> maybe_int(); // function maybe returns an int if (auto i = maybe_int()) { use_int(*i); } This reduces the scope of my variable, makes code more compact and does not hinder readability (I find). Same with pointers: int* ptr_int(); if (int* i = ptr_int()) { use_int(*i); } A: I used it today while programming in Arduino (a subset of the C++ language). Case: I have a transmitter and a receiver. The transmitter wants to send the data until it is received. I want to set a flag when the process is done. while (!(newtork_joined = transmitter.send(data))) { Serial.println("Not Joined"); } Here: * *"transmitter.send" - returns true when the transmission is successful *"newtork_joined" - is the flag I set when successful Result * *When the transmission is not successful, the flag is not set, the while loop is true, it keeps executing *When successful, the flag is set, the while loop is false, we exit Isn't it beautiful? A: The reason is: * *Performance improvement (sometimes) *Less code (always) Take an example: There is a method someMethod() and in an if condition you want to check whether the return value of the method is null. If not, you are going to use the return value again. if (null != someMethod()) { String s = someMethod(); ...... //Use s } It will hamper the performance since you are calling the same method twice. Instead use: String s; if (null != (s = someMethod())) { ...... //Use s }
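For comparison, Python resisted this idiom for years and then added a restricted form of it in 3.8: the assignment expression (:=), which allows the same bind-and-test pattern as the C examples above. A small sketch with invented data:

```python
import re

log_lines = ["ERROR disk full", "ok", "ERROR timeout"]
errors = []
for line in log_lines:
    # Bind the match object and test it in one step,
    # much like `if ((m = check(line)))` in C.
    if (m := re.match(r"ERROR (.+)", line)):
        errors.append(m.group(1))
# errors is now ["disk full", "timeout"]
```

Notably, Python requires the extra parentheses and forbids plain `=` in a condition entirely, which avoids the classic `if (x = y)` typo that the GCC -Wall warning above guards against.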
{ "language": "en", "url": "https://stackoverflow.com/questions/151850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: cannot access excel file after renaming I have renamed some excel files on my web server and after renaming the users cannot download those files. What could be the problem? They are all excel files. A: Some possibilities I can think of: * *Make sure the extension on the files is still .xls (that's how it associates the mime type) *Check the file protections on the files in case it, or the ownership got changed when you renamed them *Make sure Excel didn't open them on the webserver (if a windows web server) as it will keep them locked until the EXCEL.EXE process is killed. Might be one of those ... A: Another possible problem is that your webserver is case-sensitive when it comes to file names, and when you renamed the files, the name might have different capitalisation to what it did before (think .XLS to .xls) A: One reason is that renaming a file on the web server requires a refresh or reload of the page. Did you try that? A: I renamed an excel file from Columbus to Columbus PPWK by right clicking on the file name in the folder and scrolling to 'rename' and typing what I wanted. I have done this a trillion times with other files and no issues. When I tried to open this one, it would ask how do I want to open it with a bunch of options but nothing worked. In addition the little picture next to the newly renamed file was just a blank white rectangle and not the normal white rectangle with the green X on it. I fixed it by simply renaming it to Columbus PPWK.xls Once I added the .xls, it allowed me to open it again and I re-saved it to make sure it appeared and functioned normally, which it did. A: I was searching for a solution until I found it on my own. This may help: a file name that was too long was the issue I encountered. Hence, make the file name short.
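A small sketch of the rename fix from the answers above (Python; the file names come from the Columbus example): build the new name from the old suffix so the .xls extension, and with it the MIME-type association, is never lost:

```python
from pathlib import Path

original = Path("Columbus.xls")
# Reuse the original suffix instead of retyping it, so a rename
# can never silently drop the .xls extension.
renamed = original.with_name("Columbus PPWK" + original.suffix)
# renamed.name is "Columbus PPWK.xls"
```

For a real file you would then call `original.rename(renamed)`; the sketch only builds the name.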
{ "language": "en", "url": "https://stackoverflow.com/questions/151851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: root folder equivalent in windows Is the C drive treated as the root folder in Windows? That is, when one says \folder1\folder2\ in Linux, is it C:\folder1\folder2 in Windows? A: If you're running Windows CE then \ is the root directory. This resembles Unix's / root directory. This is the only kind of Windows where you can get a simple answer to your question. If you're running Windows NT/2000/XP/2003 then the closest equivalent is the partition containing files NTLDR, NTDETECT.COM, BOOT.INI, and BOOTFONT.BIN. The BIOS and MBR find this partition by finding which drive to start booting, scanning the MBR, and looking for the active partition. Microsoft calls this the system partition. I'm not completely sure how a program can find which partition this was. Anyway, when you find which drive letter this is, say letter L, then you could say that L:\ is the root directory. 99% of the time this will be drive letter C:. Also if you're running Windows NT/2000/XP/2003 then you also have a partition which contains the Windows system files, such as directory \Windows or others. Microsoft calls this the boot partition. You can get the drive letter from the symbol %SystemDrive% as someone else said. If this is drive letter Q then you can say that Q:\ is the root of the system drive. If you're running Vista then things are more complicated. If you installed by booting the DVD, then the boot partition (containing the system files) is C: and your system partition (containing the boot files) is D:, unless they're the same partition and then the partition is C:. But if you installed by having Windows running already, inserting the DVD and starting the installer under that Windows installation, then the drive letters could be almost anything. In Windows 95/98/ME the BIOS and MBR would look for files IO.SYS, COMMAND.COM, and some others, in the active partition. This would usually get drive letter C: so the root partition would be C:.
As always, the Windows system files could be installed in directory \Windows or others on any partition. Some people talk about a desktop. Well sure, each logged in user has a desktop. This is somewhat like each Unix user's home directory. It sure isn't a root directory. Addendum: In the second-to-last paragraph, about Windows 95/98/ME, I typed "so the root partition would be C:." That is, letter C, a colon, a backslash, and then a period for the end of the sentence (not part of the directory name). When viewing the page, the backslash isn't showing. But when editing this answer to add this addendum, the backslash is there exactly as it should be, exactly as I typed it. A: In Windows you do not have a special root node; instead you have several entry points on the filesystem in the form of environment variables: %AppData% %ProgramFiles% %CommonProgramFiles% %SystemDrive% %SystemRoot% The best equivalent of a root would be %SystemDrive%, even though the concept of a root is out of context in Windows. A: As others have mentioned Windows is unlike UNIX where the file systems have a single logical "path" space for all the devices (each device mounts into this space, such as at /dev/floppy). In Windows, each device (be it a Hard Disk partition, a CD/DVD Rom or a flash drive) has its own logical path space, rooted at the "\" directory of its logical drive letter. While Windows Explorer does a half-decent job of organizing all the drives under "My Computer", this is pure UI sugar, and there's no way to get from one drive letter to another via relative paths. Each individual drive filesystem does however behave similarly to UNIX, and does have a root called "\". A: Windows doesn't share the UNIX concept of a root folder. Instead, each partition or device with file storage has its own root folder. Given that the C: partition/drive is (almost) invariably the home of the operating system, however, you may consider its root folder to be the same for Windows.
A: In Windows the root folder would be the desktop. Desktop->Computer->C:\folder1\folder2 with the IShellFolder Interface. A: In Windows it's relative to what drive your current working directory is at the time. If your current directory is in the C drive then C:\ would be the root. If the current directory is the D drive then D:\ would be the root. There is no absolute root. A: At the filesystem level the Win32 API has no root folder, but as others have pointed out the Shell API does, i.e. the Desktop. The Shell namespace is browsed with the (graphical) shell, which happens to be Explorer.exe. At a much lower level, the Windows kernel also has a root folder, and the registry and filesystem are subfolders of it. This is relevant if you are writing a device driver. The Object Manager namespace can be browsed with a tool called WinObj. A: Unix uses the file system to represent almost all parts of the system, from top to bottom, which means the root file system folder logically also represents the "system root". But in Windows, the file system is not tied to the system so intimately, so within the file system there is no concept of a "system root". Hugh explains it in more detail. A: Yes, "\" is the root folder of the current drive. E.g. DOS command "cd \" changes current directory to the root folder, or "cd \folder1\folder2" goes to "c:\folder1\folder2" A: As a matter of fact, Windows has a root folder. The folder, though not visible, is known as 'i386'
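The per-drive roots described above can be seen programmatically. Python's ntpath module implements Windows path rules on any platform, and splitting a path shows the drive and the drive-relative root separately (an illustration of the path model only, not a way to locate the system partition):

```python
import ntpath

# Each drive letter carries its own "\" root; there is no common parent path.
drive, tail = ntpath.splitdrive("C:\\folder1\\folder2")
other_drive, _ = ntpath.splitdrive("D:\\data")
# drive is "C:", tail is "\folder1\folder2", other_drive is "D:"
```

Because the drive component is separate from the rooted tail, a path like "\folder1" is only absolute relative to whichever drive is current, which is exactly the behavior the answers above describe.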
{ "language": "en", "url": "https://stackoverflow.com/questions/151860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can we set the visibility of a button placed inside an Infragistics Windows Grid control? I have a form where I have used the Infragistics Windows grid control to display the data. In it, I have placed a button in one of the cells. I want to set its visibility to either True or False based on a row condition. I have handled the InitializeRow event of the UltraWinGrid control and am able to disable the button, but I am unable to set the button's Visible property to False. A: UltraGridRow row = ... row.Cells[buttonCellIndex].Hidden = true; (I'm using the UltraGrid in Infragistics NetAdvantage for Windows Forms 2008 Vol. 2 CLR 2.0.) A: First you must get the row and cell, then use the FindControl method and cast the result to a button; now the button is in your hand and you can set its visibility :)
{ "language": "en", "url": "https://stackoverflow.com/questions/151872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is a good extendable blogging application for ASP.NET? I am looking for a relatively good and well supported, and preferably open source blog application that runs on ASP.NET and SQL Server. It doesn’t need to be packed full of features, it just needs the basics, such as tagging, comments, etc. Extra features are a bonus. I would also like it to be either open source or have an extendable framework for customization of not only the look, but the functionality; preferably written in C# if it's open source, as that is my language of choice. Good performance, etc., the usual stuff when looking for applications. Even if it's a CMS with a blog in it, that would be beneficial to point out as well. Please give the name, a link, and some of the things you find good about it. Even if someone has posted what you were going to post, but you have other things you like about it, please add those things anyway. I have garnered from the responses to check out BlogEngine.Net, subtext, dasBlog, and to stay away from the blog in DotNetNuke. I will start with BlogEngine. A: I used BlogEngine.NET for one of my clients, can definitely recommend it. A: The two big ones are BlogEngine.NET and Subtext. dasBlog is good, but just be aware that it runs on XML; there's no database backend, which is both a pro and a con. I would strongly recommend you stay well away from the DotNetNuke blogging engine. It's low on features, difficult to configure and not very intuitive to use. A: Subtext is a good option. I haven't seen it provide tag clouds, but other than that it works very well! To quote from their site: Subtext is an open source project licensed under the BSD license. It is a fork of the popular .TEXT blogging platform. It runs as an ASP.NET/C# site and connects to SQL Server. A: There are two I know of: BlogEngine.NET and dasBlog. I'm using BlogEngine.NET. It is very easy to extend as it uses the providers for membership and such.
It can be configured to run on SQL Server, but by default it will run using XML. DasBlog should also be a great blog engine, although I have never really played with it. Both are open source and are in C#.
{ "language": "en", "url": "https://stackoverflow.com/questions/151873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Not able to retrieve dates I am working with Web Dynpro Java. I have created a stateless session bean wherein I have created business methods for inserting and retrieving records from my dictionary table. My table has two fields of java.sql.Date type. The web service that I have created is working fine for insertRecords(), but for showRecords() I am not able to fetch the dates. The following is the code I have applied: public WrapperClass[] showRecords() { ArrayList arr = new ArrayList(); WrapperClass model; WrapperClass[] modelArr = null; try { InitialContext ctx = new InitialContext(); DataSource ds = (DataSource)ctx.lookup("jdbc/SAPSR3DB"); Connection conn = ds.getConnection(); PreparedStatement stmt = conn.prepareStatement("select * from TMP_DIC"); ResultSet rs = stmt.executeQuery(); while(rs.next()) { model = new WrapperClass(); model.setTitle(rs.getString("TITLE")); model.setStatus(rs.getString("STATUS")); model.setSt_date(rs.getDate("START_DATE")); model.setEnd_date(rs.getDate("END_DATE")); arr.add(model); //arr.add(rs.getString(2)); //arr.add(rs.getString(3)); } modelArr = new WrapperClass[arr.size()]; for(int j=0;j<arr.size();j++) { model = (WrapperClass)arr.get(j); modelArr[j] = model; } stmt.close(); conn.close(); } catch (NamingException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); } arr.toArray(modelArr); return modelArr; } Can anybody please help? Thanks, Ankita A: Did you try getTimestamp() instead of getDate()? What is the error you get when you attempt to get it as a date? A: I use another approach. In addition to the bean, I also create services, where I create functions that contain the queries to manipulate the DB tables.
Now, in the Java Web Dynpro I put something like this in the wdDoInit method: try { ctx = new InitialContext(); Object o = ctx.lookup("sc.fiat.com/um~pers_app/LOCAL/UserServices/com.fiat.sc.um.pers.services.UserServicesLocal"); userServices = (UserServicesLocal) o; } catch (Exception e) { logger.traceThrowableT(Severity.ERROR, e.getMessage(), e); msgMgr.reportException(e); } I also declare the object like this: private UserServicesLocal userServices; Now I will be able to manipulate my DB tables by calling methods of the services classes...
{ "language": "en", "url": "https://stackoverflow.com/questions/151874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: saslpasswd2: generic failure on Windows I get a generic failure when I try to run: saslpasswd2 username This was installed by CollabNet's Subversion 1.5.2. A: The problem that I had was that the SASL executables try to access the sasldb file at: C:\CMU\sasldb2 Make sure that you create the directory C:\CMU
{ "language": "en", "url": "https://stackoverflow.com/questions/151900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Best way to make double insert What's the best way of inserting information in table A and using the index from table A to relate to table B? The "solution" I tried is inserting the info in table A (which has an automatically generated ID), then selecting the last index and inserting it in table B. This may not be very useful, as the last index may change between the inserts because another user could generate a new index in table A. I have had this problem with various DBMSs: PostgreSQL, Informix, MySQL, and MSSQL (thanks to lomaxx for the answer) A: if you're using MSSQL you could use SCOPE_IDENTITY to return the last id inserted in your current session. You can then use that to insert into table B. This article from MSDN gives a decent example on how to do it. A: This is the sequence solution (for postgres), you'd have to do it in a stored procedure or in your application code, of course. postgres=# create table foo(id serial primary key, text varchar); NOTICE: CREATE TABLE will create implicit sequence "foo_id_seq" for serial column "foo.id" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for table "foo" CREATE TABLE postgres=# create table bar(id int references foo, text varchar); CREATE TABLE postgres=# select nextval('foo_id_seq'); nextval --------- 1 (1 row) postgres=# insert into foo values (1,'a'); insert into bar values(1,'b'); INSERT 0 1 INSERT 0 1 For MySQL, the transaction is important not to trip on your own feet in case you're using the same connection for more than one insert. For LAST_INSERT_ID(), the most recently generated ID is maintained in the server on a per-connection basis. It is not changed by another client. It is not even changed if you update another AUTO_INCREMENT column with a non-magic value (that is, a value that is not NULL and not 0). Using LAST_INSERT_ID() and AUTO_INCREMENT columns simultaneously from multiple clients is perfectly valid.
Each client will receive the last inserted ID for the last statement that client executed. mysql> create table foo(id int primary key auto_increment, text varchar(10)) Engine=InnoDB; Query OK, 0 rows affected (0.06 sec) mysql> create table bar(id int references foo, text varchar(10)) Engine=InnoDB; Query OK, 0 rows affected (0.01 sec) mysql> begin; Query OK, 0 rows affected (0.00 sec) mysql> insert into foo(text) values ('x'); Query OK, 1 row affected (0.00 sec) mysql> insert into bar values (last_insert_id(),'y'); Query OK, 1 row affected (0.00 sec) mysql> commit; Query OK, 0 rows affected (0.04 sec) A: The other option is to create a sequence and, before inserting into the table, get the sequence value in a variable and use it to insert into both tables. A: In ORACLE, use sequences to keep PK values, and use the RETURNING clause INSERT INTO table1 ( pk_table1, value1 ) VALUES ( table1_seq.NEXTVAL, p_value1 ) RETURNING pk_table1 INTO l_table1_id; INSERT INTO table2 ( pk_table2, pk_table1, value2 ) VALUES ( table2_seq.NEXTVAL, l_table1_id, p_value2 ); It's best practice to use PACKAGES in Oracle to store all the SQL / Data manipulation layer of the application.
A: If your tables are UUID-keyed, generate a UUID and use it in both inserts. A: The Access 2000+ (Jet 4.0) answer is described in the Microsoft Knowledge Base. Basically, you can use SELECT @@Identity to retrieve the value of the auto-increment field that is generated on your connection. A: Another Access 2000+ (Jet 4.0) answer is to create a Jet 4.0 VIEW (in Access terms: a SELECT query saved as a Query object) with an INNER JOIN on the IDENTITY (Autonumber) column; the joining columns must be exposed in the SELECT clause, along with the referenced table. Then INSERT INTO the VIEW supplying values for all the NOT NULL columns that have no DEFAULT. The IDENTITY column value may either be omitted, in which case the engine will auto-generate the value as usual, or an explicit value supplied and honoured; if the value of the joining column in the other table (the one without the IDENTITY column) is additionally supplied then it must be the same as the IDENTITY value otherwise an error will occur; if the IDENTITY value is omitted then any value supplied for the joining column will be ignored. Note that a FOREIGN KEY would normally be expected between such tables but is not a prerequisite for this process to work. Quick example (ANSI-92 Query Mode Jet 4.0 syntax): CREATE TABLE Table1 ( key_col INTEGER IDENTITY NOT NULL PRIMARY KEY, data_col_1 INTEGER NOT NULL ) ; CREATE TABLE Table2 ( key_col INTEGER NOT NULL, data_col_2 INTEGER NOT NULL, PRIMARY KEY (key_col, data_col_2) ) ; CREATE VIEW View1 AS SELECT T1.key_col AS key_col_1, T2.key_col AS key_col_2, T1.data_col_1, T2.data_col_2 FROM Table2 AS T2 INNER JOIN Table1 AS T1 ON T1.key_col = T2.key_col ; INSERT INTO View1 (data_col_1, data_col_2) VALUES (1, 2) ; A: if you're using SQL Server 2005+ you can also use the OUTPUT clause which outputs the data that has been updated, inserted or deleted. It's pretty cool and exactly for the type of stuff you need it for.
http://msdn.microsoft.com/en-us/library/ms177564.aspx A: In SQL Server you use the @@IDENTITY field, and also wrap the INSERTs in a transaction. DEFINE ... etc etc BEGIN TRANSACTION INSERT INTO table1 ( value1 ) VALUES ( @p_value1 ) SET @pk_table1 = @@IDENTITY INSERT INTO table2 ( pk_table1, value2 ) VALUES ( @pk_table1, @p_value2 ) COMMIT It's best practice in TSQL to store the @@IDENTITY value in a variable immediately after the INSERT to avoid the value being corrupted by future maintenance code. It's also best practice to use stored procedures. A: If it's in Informix and JSP, there is a function that returns the Serial field of a table after the insert. import com.informix.jdbc.*; cmd = "insert into serialTable(i) values (100)"; stmt.executeUpdate(cmd); System.out.println(cmd+"...okay"); int serialValue = ((IfmxStatement)stmt).getSerial(); System.out.println("serial value: " + serialValue); Here's the Link (For some reason on my work computer it describes everything in Spanish, maybe because I'm in Mexico) A: Use a transaction to avoid this problem: "This may not be very useful, as the last index may change between the inserts because another user could generate a new index in table A." And, in PostgreSQL, you can use 'nextval' and 'currval' to accomplish what you want to do: BEGIN; INSERT INTO products (prod_id, prod_name, description) VALUES ( nextval('products_prod_id_seq') , 'a product' , 'a product description' ); INSERT INTO prices (price_id, prod_id, price) VALUES ( nextval('prices_price_id_seq') , currval('products_prod_id_seq') , 0.99 ); COMMIT; Let me know if you need a DDL snippet as well.
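The per-connection scoping the answers above rely on (LAST_INSERT_ID, SCOPE_IDENTITY, currval) can be sketched with Python's stdlib sqlite3 module, whose cursor.lastrowid plays the same role. The table and column names below echo the PostgreSQL example but are illustrative assumptions, not part of the question:

```python
import sqlite3

def double_insert(conn, name, price):
    """Insert into table A, then use its generated id in table B.

    cursor.lastrowid is scoped to this cursor/connection, so concurrent
    inserts by other users cannot change the value we read -- the same
    guarantee LAST_INSERT_ID() and SCOPE_IDENTITY() provide.
    """
    cur = conn.cursor()
    # One transaction around both inserts, as the answers recommend.
    cur.execute("INSERT INTO products (prod_name) VALUES (?)", (name,))
    prod_id = cur.lastrowid  # id generated by the first insert
    cur.execute("INSERT INTO prices (prod_id, price) VALUES (?, ?)",
                (prod_id, price))
    conn.commit()
    return prod_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (prod_id INTEGER PRIMARY KEY, prod_name TEXT)")
conn.execute("CREATE TABLE prices (price_id INTEGER PRIMARY KEY, "
             "prod_id INTEGER REFERENCES products, price REAL)")
pid = double_insert(conn, "a product", 0.99)
```

The same shape carries over to server databases; only the "give me the id I just generated" call changes per DBMS.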
{ "language": "en", "url": "https://stackoverflow.com/questions/151905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: AutoLock in Java - how to? What is the best way to free resources (in this case unlock the ReadWriteLock) when leaving the scope? How to cover all possible ways (return, break, exceptions etc)? A: Like mike said, a finally block should be your choice. See the finally block tutorial, where it is stated: The finally block always executes when the try block exits. This ensures that the finally block is executed even if an unexpected exception occurs. A: A try/finally block is the closest thing that you can get to this behaviour: Lock l = new Lock(); l.lock(); // Call the lock before calling try. try { // Do some processing. // All code must go in here including break, return etc. return something; } finally { l.unlock(); } A: A nicer way to do it is to use the try-with-resources statement, which lets you mimic C++'s RAII mechanism: public class MutexTests { static class Autolock implements AutoCloseable { Autolock(ReentrantLock lock) { this.mLock = lock; mLock.lock(); } @Override public void close() { mLock.unlock(); } private final ReentrantLock mLock; } public static void main(String[] args) throws InterruptedException { final ReentrantLock lock = new ReentrantLock(); try (Autolock alock = new Autolock(lock)) { // Whatever you need to do while you own the lock } // Here, you have already released the lock, regardless of exceptions } }
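For comparison, the same acquire-on-entry/release-on-exit pattern exists in Python as a context manager. This is a hypothetical sketch mirroring the Autolock class above; note that Python's own threading.Lock already supports the with statement directly, so the wrapper is only for illustration:

```python
import threading

class AutoLock:
    """Context-manager analogue of the Java Autolock above:
    acquire on entry, release on exit -- even on return/break/exception."""

    def __init__(self, lock):
        self._lock = lock

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, exc_type, exc, tb):
        self._lock.release()
        return False  # do not swallow exceptions

lock = threading.Lock()
try:
    with AutoLock(lock):
        assert lock.locked()  # we own the lock inside the block
        raise RuntimeError("boom")
except RuntimeError:
    pass
released = not lock.locked()  # released despite the exception
```

In real code you would simply write `with lock:`; the point is that scope-based cleanup is the language-level equivalent of Java's try-with-resources.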
{ "language": "en", "url": "https://stackoverflow.com/questions/151917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: When should I use __forceinline instead of inline? Visual Studio includes support for __forceinline. The Microsoft Visual Studio 2005 documentation states: The __forceinline keyword overrides the cost/benefit analysis and relies on the judgment of the programmer instead. This raises the question: When is the compiler's cost/benefit analysis wrong? And, how am I supposed to know that it's wrong? In what scenario is it assumed that I know better than my compiler on this issue? A: The only way to be sure is to measure performance with and without. Unless you are writing highly performance critical code, this will usually be unnecessary. A: You know better than the compiler only when your profiling data tells you so. A: For SIMD code. SIMD code often uses constants/magic numbers. In a regular function, every const __m128 c = _mm_setr_ps(1,2,3,4); becomes a memory reference. With __forceinline, the compiler can load it once and reuse the value, unless your code exhausts registers (usually 16). CPU caches are great but registers are still faster. P.S. Just got 12% performance improvement by __forceinline alone. A: The one place I am using it is licence verification. One important factor to protect against easy* cracking is to verify being licenced in multiple places rather than only one, and you don't want these places to be the same function call. *) Please don't turn this into a discussion that everything can be cracked - I know. Also, this alone does not help much. A: The inline directive will be totally of no use when used for functions which are: recursive, long, or composed of loops. If you want to override the compiler's decision in these cases, use __forceinline. A: Actually, even with the __forceinline keyword. Visual C++ sometimes chooses not to inline the code. (Source: Resulting assembly source code.) Always look at the resulting assembly code where speed is of importance (such as tight inner loops needed to be run on each frame). Sometimes using #define instead of inline will do the trick.
(of course you lose a lot of checking by using #define, so use it only when and where it really matters). A: The compiler is making its decisions based on static code analysis, whereas if you profile as don says, you are carrying out a dynamic analysis that can be much farther reaching. The number of calls to a specific piece of code is often largely determined by the context in which it is used, e.g. the data. Profiling a typical set of use cases will do this. Personally, I gather this information by enabling profiling on my automated regression tests. In addition to forcing inlines, I have unrolled loops and carried out other manual optimizations on the basis of such data, to good effect. It is also imperative to profile again afterwards, as sometimes your best efforts can actually lead to decreased performance. Again, automation makes this a lot less painful. More often than not though, in my experience, tweaking algorithms gives much better results than straight code optimization. A: Actually, boost is loaded with it.
For example BOOST_CONTAINER_FORCEINLINE flat_tree& operator=(BOOST_RV_REF(flat_tree) x) BOOST_NOEXCEPT_IF( (allocator_traits_type::propagate_on_container_move_assignment::value || allocator_traits_type::is_always_equal::value) && boost::container::container_detail::is_nothrow_move_assignable<Compare>::value) { m_data = boost::move(x.m_data); return *this; } BOOST_CONTAINER_FORCEINLINE const value_compare &priv_value_comp() const { return static_cast<const value_compare &>(this->m_data); } BOOST_CONTAINER_FORCEINLINE value_compare &priv_value_comp() { return static_cast<value_compare &>(this->m_data); } BOOST_CONTAINER_FORCEINLINE const key_compare &priv_key_comp() const { return this->priv_value_comp().get_comp(); } BOOST_CONTAINER_FORCEINLINE key_compare &priv_key_comp() { return this->priv_value_comp().get_comp(); } public: // accessors: BOOST_CONTAINER_FORCEINLINE Compare key_comp() const { return this->m_data.get_comp(); } BOOST_CONTAINER_FORCEINLINE value_compare value_comp() const { return this->m_data; } BOOST_CONTAINER_FORCEINLINE allocator_type get_allocator() const { return this->m_data.m_vect.get_allocator(); } BOOST_CONTAINER_FORCEINLINE const stored_allocator_type &get_stored_allocator() const { return this->m_data.m_vect.get_stored_allocator(); } BOOST_CONTAINER_FORCEINLINE stored_allocator_type &get_stored_allocator() { return this->m_data.m_vect.get_stored_allocator(); } BOOST_CONTAINER_FORCEINLINE iterator begin() { return this->m_data.m_vect.begin(); } BOOST_CONTAINER_FORCEINLINE const_iterator begin() const { return this->cbegin(); } BOOST_CONTAINER_FORCEINLINE const_iterator cbegin() const { return this->m_data.m_vect.begin(); } A: I've developed software for limited resource devices for 9 years or so and the only time I've ever seen the need to use __forceinline was in a tight loop where a camera driver needed to copy pixel data from a capture buffer to the device screen. 
There we could clearly see that the cost of a specific function call really hogged the overlay drawing performance. A: There are several situations where the compiler is not able to determine categorically whether it is appropriate or beneficial to inline a function. Inlining may involve trade-offs that the compiler is unwilling to make, but you are (e.g., code bloat). In general, modern compilers are actually pretty good at making this decision. A: When you know that the function is going to be called in one place several times for a complicated calculation, then it is a good idea to use __forceinline. For instance, a matrix multiplication for animation may need to be called so many times that the calls to the function will start to be noticed by your profiler. As said by the others, the compiler can't really know about that, especially in a dynamic situation where the execution of the code is unknown at compile time.
This is what makes force inlining a rather invasive change to the code generation that can have chaotic results on your profiling sessions. I've even had cases where force inlining a function reused in several places completely reshuffled all top ten hotspots with the highest self-samples all over the place in very confusing ways. Sometimes it got to the point where I felt like I'm fighting with the optimizer making one thing faster here only to exchange a slowdown elsewhere in an equally common use case, especially in tricky cases for optimizers like bytecode interpretation. I've found noinline approaches so much easier to use successfully to eradicate a hotspot without exchanging one for another elsewhere. It would be possible to inline functions much less invasively if we could inline at the call site instead of determining whether or not every single call to a function should be inlined. Unfortunately, I've not found many compilers supporting such a feature besides ICC. It makes much more sense to me if we are reacting to a hotspot to respond by inlining at the call site instead of making every single call of a particular function forcefully inlined. Lacking this wide support among most compilers, I've gotten far more successful results with noinline. Optimizing With noinline So the idea of optimizing with noinline is still with the same goal in mind: to help the optimizer inline our most critical functions. The difference is that instead of trying to tell the compiler what they are by forcefully inlining them, we are doing the opposite and telling the compiler what functions definitely aren't part of the critical execution path by forcefully preventing them from being inlined. We are focusing on identifying the rare-case non-critical paths while leaving the compiler still free to determine what to inline in the critical paths. 
Say you have a loop that executes for a million iterations, and there is a function called baz which is only very rarely called in that loop once every few thousand iterations on average in response to very unusual user inputs even though it only has 5 lines of code and no complex expressions. You've already profiled this code and the profiler shows in the disassembly that calling a function foo which then calls baz has the largest number of samples with lots of samples distributed around calling instructions. The natural temptation might be to force inline foo. I would suggest instead to try marking baz as noinline and time the results. I've managed to make certain critical loops execute 3 times faster this way. Analyzing the resulting assembly, the speedup came from the foo function now being inlined as a result of no longer inlining baz calls into its body. I've often found in cases like these that marking the analogical baz with noinline produces even bigger improvements than force inlining foo. I'm not a computer architecture wizard to understand precisely why but glancing at the disassembly and the distribution of samples in the profiler in such cases, the result of force inlining foo was that the compiler was still inlining the rarely-executed baz on top of foo, making foo more bloated than necessary by still inlining rare-case function calls. By simply marking baz with noinline, we allow foo to be inlined when it wasn't before without actually also inlining baz. Why the extra code resulting from inlining baz as well slowed down the overall function is still not something I understand precisely; in my experience, jump instructions to more distant paths of code always seemed to take more time than closer jumps, but I'm at a loss as to why (maybe something to do with the jump instructions taking more time with larger operands or something to do with the instruction cache). 
What I can definitely say for sure is that favoring noinline in such cases offered superior performance to force inlining and also didn't have such disruptive results on the subsequent profiling sessions. So anyway, I'd suggest giving noinline a try instead and reaching for it first before force inlining. Human vs. Optimizer In what scenario is it assumed that I know better than my compiler on this issue? I'd refrain from being so bold as to assume. At least I'm not good enough to do that. If anything, I've learned over the years the humbling fact that my assumptions are often wrong once I check and measure things I try with the profiler. I have gotten past the stage (over a couple of decades of making my profiler my best friend) to avoid completely blind stabs in the dark only to face humbling defeat and revert my changes, but at my best, I'm still making, at most, educated guesses. Still, I've always known better than my compiler, and hopefully, most of us programmers have always known this better than our compilers, how our product is supposed to be designed and how it is most likely going to be used by our customers. That at least gives us some edge in the understanding of common-case and rare-case branches of code that compilers don't possess (at least without PGO and I've never gotten the best results with PGO). Compilers don't possess this type of runtime information and foresight of common-case user inputs. It is when I combine this user-end knowledge, and with a profiler in hand, that I've found the biggest improvements nudging the optimizer here and there in teaching it things like what to inline or, more commonly in my case, what to never inline.
{ "language": "en", "url": "https://stackoverflow.com/questions/151919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Errors with Python's mechanize module I'm using the mechanize module to execute some web queries from Python. I want my program to be error-resilient and handle all kinds of errors (wrong URLs, 403/404 responses) gracefully. However, I can't find in mechanize's documentation the errors / exceptions it throws for various errors. I just call it with: self.browser = mechanize.Browser() self.browser.addheaders = [('User-agent', browser_header)] self.browser.open(query_url) self.result_page = self.browser.response().read() How can I know what errors / exceptions can be thrown here and handle them? A: $ perl -0777 -ne'print qq($1) if /__all__ = \[(.*?)\]/s' __init__.py | grep Error 'BrowserStateError', 'ContentTooShortError', 'FormNotFoundError', 'GopherError', 'HTTPDefaultErrorHandler', 'HTTPError', 'HTTPErrorProcessor', 'LinkNotFoundError', 'LoadError', 'ParseError', 'RobotExclusionError', 'URLError', Or: >>> import mechanize >>> filter(lambda s: "Error" in s, dir(mechanize)) ['BrowserStateError', 'ContentTooShortError', 'FormNotFoundError', 'GopherError', 'HTTPDefaultErrorHandler', 'HTTPError', 'HTTPErrorProcessor', 'LinkNotFoundError', 'LoadError', 'ParseError', 'RobotExclusionError', 'URLError'] A: While this has been posted a long time ago, I think there is still a need to answer the question correctly since it comes up in Google's search results for this very question. As I write this, mechanize (version = (0, 1, 11, None, None)) in Python 2.6.5 raises urllib2.HTTPError and so the http status is available through catching this exception, eg: import urllib2 try: ... br.open("http://www.example.org/invalid-page") ... except urllib2.HTTPError, e: ... print e.code ... 404 A: I found this in their docs: One final thing to note is that there are some catch-all bare except: statements in the module, which are there to handle unexpected bad input without crashing your program. If this happens, it's a bug in mechanize, so please mail me the warning text.
So I guess they don't raise any exceptions. You can also search the source code for Exception subclasses and see how they are used.
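The filter(...) one-liner above is Python 2 syntax; a Python 3 equivalent might look like the sketch below. Since mechanize may not be installed, the stdlib builtins module stands in for it here — substitute `import mechanize` in practice:

```python
import builtins

def error_names(module):
    """Python 3 version of the filter(...) introspection one-liner above:
    list the exception-like names a module exports."""
    return sorted(name for name in dir(module) if "Error" in name)

# builtins is only a stand-in for mechanize so the snippet runs anywhere.
names = error_names(builtins)
print(names)
```

The same dir()-based probing works for any module whose exception hierarchy is undocumented.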
{ "language": "en", "url": "https://stackoverflow.com/questions/151929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Does an empty array in .NET use any space? I have some code where I'm returning an array of objects. Here's a simplified example: string[] GetTheStuff() { List<string> s = null; if( somePredicate() ) { s = new List<string>(); // imagine we load some data or something } return (s == null) ? new string[0] : s.ToArray(); } The question is, how expensive is the new string[0]? Should I just return null and make the caller accept null as a valid way of indicating "nothing was found"? NB: This is being called in a loop which gets run hundreds and hundreds of times, so it's one of the few cases where I think this kind of optimization is not actually 'premature'. PS: And even if it was premature, I'd still like to know how it works :-) Update: Initially when I asked if it used any space, I was thinking of things from the 'C/C++' point of view, kind of like how in C, writing char a[5]; will allocate 5 bytes of space on the stack, and char b[0]; will allocate 0 bytes. I realise this is not a good fit for the .NET world, but I was curious if this was something that the compiler or CLR would detect and optimize out, as a non-resizeable array of size zero really shouldn't (as far as I can see?) require any storage space. A: The upcoming version 4.6 of .NET (later in 2015) contains a static method returning a length-zero string[]: Array.Empty<string>() I suppose it returns the same instance if called many times. A: Even if it's being called "hundreds and hundreds" of times, I'd say it's a premature optimization. If the result is clearer as an empty array, use that. Now for the actual answer: yes, an empty array takes some memory. It has the normal object overhead (8 bytes on x86, I believe) and 4 bytes for the count. I don't know whether there's anything beyond that, but it's not entirely free. (It is incredibly cheap though...) Fortunately, there's an optimization you can make without compromising the API itself: have a "constant" of an empty array.
I've made another small change to make the code clearer, if you'll permit... private static readonly string[] EmptyStringArray = new string[0]; string[] GetTheStuff() { if( somePredicate() ) { List<string> s = new List<string>(); // imagine we load some data or something return s.ToArray(); } else { return EmptyStringArray; } } If you find yourself needing this frequently, you could even create a generic class with a static member to return an empty array of the right type. The way .NET generics work makes this trivial: public static class Arrays<T> { public static readonly T[] Empty = new T[0]; } (You could wrap it in a property, of course.) Then just use: Arrays<string>.Empty; EDIT: I've just remembered Eric Lippert's post on arrays. Are you sure that an array is the most appropriate type to return? A: Declared arrays will always have to contain the following information: * *Rank (number of dimensions) *Type to be contained *Length of each dimension This would most likely be trivial, but for higher numbers of dimensions and higher lengths it will have a performance impact on loops. As for return types, I agree that an empty array should be returned instead of null. More information here: Array Types in .NET A: Yes, as others have said, the empty array takes up a few bytes for the object header and the length field. But if you're worried about performance you're focusing on the wrong branch of execution in this method. I'd be much more concerned about the ToArray call on the populated list which will result in a memory allocation equal to its internal size and a memory copy of the contents of the list into it. If you really want to improve performance then (if possible) return the list directly by making the return type one of: List<T>, IList<T>, ICollection<T>, IEnumerable<T> depending on what facilities you need from it (note that less specific is better in the general case).
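A rough Python analogue of the generic Arrays<T>.Empty idea — one shared empty instance handed out per element type — might look like this. The Empty class and its of method are hypothetical names for illustration, not an existing API:

```python
class Empty:
    """Hand out one shared, immutable empty sequence per element type,
    so callers never allocate a fresh empty container -- the same trick
    as the static Arrays<T>.Empty field above."""

    _cache = {}

    @classmethod
    def of(cls, elem_type):
        # setdefault stores a single empty tuple per type on first use
        # and returns that same object on every later call.
        return cls._cache.setdefault(elem_type, ())

a = Empty.of(int)
b = Empty.of(int)
```

In CPython this is largely moot — the empty tuple `()` is already a shared singleton — but the sketch shows why the pattern is cheap: the identity check `a is b` holds, so repeated "empty" results cost no allocation at all.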
A: I would guess that an empty array uses only the space needed to allocate the object pointer itself. From memory the API guidelines say that you should always return an empty array from a method that returns an array rather than returning null, so I'd leave your code the way it is regardless. That way the caller knows he's guaranteed to get an array (even an empty one) and need not check for null with each call. Edit: A link about returning empty arrays: http://wesnerm.blogs.com/net_undocumented/2004/02/empty_arrays.html A: Others have answered your question nicely. So just a simple point to make... I'd avoid returning an array (unless you can't). Stick with IEnumerable and then you can use Enumerable.Empty<T>() from the LINQ APIs. Obviously Microsoft have optimised this scenario for you. IEnumerable<string> GetTheStuff() { List<string> s = null; if (somePredicate()) { var stuff = new List<string>(); // load data return stuff; } return Enumerable.Empty<string>(); } A: This is not a direct answer to your question. Read why arrays are considered somewhat harmful. I would suggest you return an IList<string> in this case and restructure the code a little bit: IList<string> GetTheStuff() { List<string> s = new List<string>(); if( somePredicate() ) { // imagine we load some data or something } return s; } In this way the caller doesn't have to care about empty return values. EDIT: If the returned list should not be editable you can wrap the List inside a ReadOnlyCollection. Simply change the last line to: return new ReadOnlyCollection<string>(s); I would also consider this best practice. A: I know this is an old question, but it's a basic question and I needed a detailed answer. So I explored this and got results: In .Net when you create an array (for this example I use int[]) you take 6 bytes before any memory is allocated for your data.
Consider this code [In a 32 bit application!]: int[] myArray = new int[0]; int[] myArray2 = new int[1]; char[] myArray3 = new char[0]; And look at the memory: myArray: a8 1a 8f 70 00 00 00 00 00 00 00 00 myArray2: a8 1a 8f 70 01 00 00 00 00 00 00 00 00 00 00 00 myArray3: 50 06 8f 70 00 00 00 00 00 00 00 00 Let's explain that memory: * *Looks like the first 2 bytes are some kind of metadata, as you can see it changes between int[] and char[] (a8 1a 8f 70 vs 50 06 8f 70) *Then it holds the size of the array in an integer variable (little endian), so it's 00 00 00 00 for myArray and 01 00 00 00 for myArray2 *Now it's our precious Data [I tested with Immediate Window] *After that we see a constant (00 00 00 00). I don't know what it means. Now I feel a lot better about zero-length arrays, I know how it works =] A: If I understand correctly, a small amount of memory will be allocated for the string arrays. Your code essentially requires a generic list to be created anyway, so why not just return that? [EDIT]Removed the version of the code that returned a null value. The other answers advising against null return values in this circumstance appear to be the better advice[/EDIT] List<string> GetTheStuff() { List<string> s = new List<string>(); if (somePredicate()) { // more code } return s; }
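A similar "does empty cost anything" experiment can be run in Python with sys.getsizeof. The exact byte counts are implementation-specific, but the non-zero overhead of an empty container parallels the object-header-plus-length layout inspected above:

```python
import sys

# Even zero-length containers carry per-object overhead (type pointer,
# refcount, length field), just like the .NET arrays examined above.
# Exact numbers vary by Python version and platform, so we only rely
# on them being non-zero and monotone with contents.
empty_list_bytes = sys.getsizeof([])
empty_tuple_bytes = sys.getsizeof(())
one_item_bytes = sys.getsizeof([0])

print(empty_list_bytes, empty_tuple_bytes, one_item_bytes)
```

On a typical 64-bit CPython the empty list is a few dozen bytes — cheap, but not free, which is the same conclusion the answers reach for new string[0].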
{ "language": "en", "url": "https://stackoverflow.com/questions/151936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How do I control how Emacs makes backup files? Emacs puts backup files named foo~ everywhere and I don't like having to remember to delete them. Also, if I edit a file that has a hard link somewhere else in the file system, the hard link points to the backup when I'm done editing, and that's confusing and awful. How can I either eliminate these backup files, or have them go somewhere other than the same directory? A: Emacs backup/auto-save files can be very helpful. But these features are confusing. Backup files Backup files have tildes (~ or ~9~) at the end and shall be written to the user home directory. When make-backup-files is non-nil Emacs automatically creates a backup of the original file the first time the file is saved from a buffer. If you're editing a new file Emacs will create a backup the second time you save the file. No matter how many times you save the file the backup remains unchanged. If you kill the buffer and then visit the file again, or the next time you start a new Emacs session, a new backup file will be made. The new backup reflects the file's content after reopening, or at the start of editing sessions. But an existing backup is never touched again. Therefore I find it useful to create numbered backups (see the configuration below). To create backups explicitly use save-buffer (C-x C-s) with prefix arguments. diff-backup and dired-diff-backup compare a file with its backup or vice versa. But there is no function to restore backup files. For example, under Windows, to restore a backup file C:\Users\USERNAME\.emacs.d\backups\!drive_c!Users!USERNAME!.emacs.el.~7~ it has to be manually copied as C:\Users\USERNAME\.emacs.el Auto-save files Auto-save files use hashmarks (#) and shall be written locally within the project directory (along with the actual files). The reason is that auto-save files are just temporary files that Emacs creates until a file is saved again (a kind of anticipatory obedience).
* *Before the user presses C-x C-s (save-buffer) to save a file Emacs auto-saves files - based on counting keystrokes (auto-save-interval) or when you stop typing (auto-save-timeout). *Emacs also auto-saves whenever it crashes, including killing the Emacs job with a shell command. When the user saves the file, the auto-saved version is deleted. But when the user exits the file without saving it, or Emacs or the X session crashes, the auto-saved files still exist. Use revert-buffer or recover-file to restore auto-save files. Note that Emacs records interrupted sessions for later recovery in files named ~/.emacs.d/auto-save-list. The recover-session function will use this information. The preferred method to recover from an auto-saved file is M-x revert-buffer RET. Emacs will ask either "Buffer has been auto-saved recently. Revert from auto-save file?" or "Revert buffer from file FILENAME?". In case of the latter there is no auto-save file. For example, because you have saved before typing another auto-save-interval keystrokes, in which case Emacs had deleted the auto-save file. Auto-save is nowadays disabled by default because it can slow down editing when connected to a slow machine, and because many files contain sensitive data. Configuration Here is a configuration that IMHO works best: (defvar --backup-directory (concat user-emacs-directory "backups")) (if (not (file-exists-p --backup-directory)) (make-directory --backup-directory t)) (setq backup-directory-alist `(("." . ,--backup-directory))) (setq make-backup-files t ; backup of a file the first time it is saved.
backup-by-copying t ; don't clobber symlinks version-control t ; version numbers for backup files delete-old-versions t ; delete excess backup files silently delete-by-moving-to-trash t kept-old-versions 6 ; oldest versions to keep when a new numbered backup is made (default: 2) kept-new-versions 9 ; newest versions to keep when a new numbered backup is made (default: 2) auto-save-default t ; auto-save every buffer that visits a file auto-save-timeout 20 ; number of seconds idle time before auto-save (default: 30) auto-save-interval 200 ; number of keystrokes between auto-saves (default: 300) ) Sensitive data Another problem is that you don't want to have Emacs spread copies of files with sensitive data. Use this mode on a per-file basis. As this is a minor mode, for my purposes I renamed it sensitive-minor-mode. To enable it for all .vcf and .gpg files, in your .emacs use something like: (setq auto-mode-alist (append (list '("\\.\\(vcf\\|gpg\\)$" . sensitive-minor-mode) ) auto-mode-alist)) Alternatively, to protect only some files, like some .txt files, use a line like // -*-mode:asciidoc; mode:sensitive-minor; fill-column:132-*- in the file. A: The accepted answer is good, but it can be greatly improved by additionally backing up on every save and backing up versioned files. First, basic settings as described in the accepted answer: (setq version-control t ;; Use version numbers for backups. kept-new-versions 10 ;; Number of newest versions to keep. kept-old-versions 0 ;; Number of oldest versions to keep. delete-old-versions t ;; Don't ask to delete excess backup versions. backup-by-copying t) ;; Copy all files, don't rename them. Next, also backup versioned files, which Emacs does not do by default (you don't commit on every save, right?): (setq vc-make-backup-files t) Finally, make a backup on each save, not just the first. We make two kinds of backups: * *per-session backups: once on the first save of the buffer in each Emacs session. 
These simulate Emacs's default backup behavior. *per-save backups: once on every save. Emacs does not do this by default, but it's very useful if you leave Emacs running for a long time.

The backups go in different places and Emacs creates the backup dirs automatically if they don't exist:

;; Default and per-save backups go here:
(setq backup-directory-alist
      '(("" . "~/.emacs.d/backup/per-save")))

(defun force-backup-of-buffer ()
  ;; Make a special "per session" backup at the first save of each
  ;; emacs session.
  (when (not buffer-backed-up)
    ;; Override the default parameters for per-session backups.
    (let ((backup-directory-alist '(("" . "~/.emacs.d/backup/per-session")))
          (kept-new-versions 3))
      (backup-buffer)))
  ;; Make a "per save" backup on each save.  The first save results in
  ;; both a per-session and a per-save backup, to keep the numbering
  ;; of per-save backups consistent.
  (let ((buffer-backed-up nil))
    (backup-buffer)))

(add-hook 'before-save-hook 'force-backup-of-buffer)

I became very interested in this topic after I wrote $< instead of $@ in my Makefile, about three hours after my previous commit :P

The above is based on an Emacs Wiki page I heavily edited.

A: If you've ever been saved by an Emacs backup file, you probably want more of them, not less of them. It is annoying that they go in the same directory as the file you're editing, but that is easy to change. You can make all backup files go into a directory by putting something like the following in your .emacs.

(setq backup-directory-alist `(("." . "~/.saves")))

There are a number of arcane details associated with how Emacs might create your backup files. Should it rename the original and write out the edited buffer? What if the original is linked? In general, the safest but slowest bet is to always make backups by copying.

(setq backup-by-copying t)

If that's too slow for some reason you might also have a look at backup-by-copying-when-linked.
Since your backups are all in their own place now, you might want more of them, rather than less of them. Have a look at the Emacs documentation for these variables (with C-h v).

(setq delete-old-versions t
      kept-new-versions 6
      kept-old-versions 2
      version-control t)

Finally, if you absolutely must have no backup files:

(setq make-backup-files nil)

It makes me sick to think of it though.

A: Another way of configuring backup options is via the Customize interface. Enter:

M-x customize-group

And then at the Customize group: prompt enter backup. If you scroll to the bottom of the buffer you'll see Backup Directory Alist. Click Show Value and set the first entry of the list as follows:

Regexp matching filename: .*
Backup directory name: /path/to/your/backup/dir

Alternatively, you can turn backups off by setting Make Backup Files to off. If you don't want Emacs to automatically edit your .emacs file you'll want to set up a customisations file.

A: You can disable them altogether by

(setq make-backup-files nil)

A: (setq delete-auto-save-files t) deletes a buffer's auto-save file when it is saved or when it is killed with no changes in it. Thus you still get some of the safety of auto-save files, since they are only left around when a buffer is killed with unsaved changes or if Emacs exits unexpectedly. If you want to do this, but really need history on some of your files, consider putting them under source control.
{ "language": "en", "url": "https://stackoverflow.com/questions/151945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "361" }
Q: What is Native Code? The Project's Web section (under project properties in VS2008) has a list of debuggers: ASP.NET, Native Code, SQL Server. What is Native Code?

A: Native code doesn't run on the Common Language Runtime (CLR). An example would be a non-managed C++ application.

A: Native code is machine code executed directly by the CPU. This is in contrast to .NET bytecode, which is interpreted by the .NET virtual machine. A nice MSDN hit: Debugging Native Code

A: Native code is essentially data in memory that the central processing chip in the computer can read and execute directly. Think of the CPU sucking in data, and that data flipping switches as it goes through, turning things off and on:

[ CPU ] ==================================== [ RAM ]
 ^^^^^
   |
   |   LOAD _memoryAddress12, D1    ; tells the CPU to get data from slot 12
                                    ; in RAM, and put it in register D1 inside the CPU
 ^^^^^
   |
   |   ADD D1, 24                   ; tells the CPU to do an internal calculation
 ^^^^^
   |
   |   STORE D1, _memoryAddress23   ; tells the CPU to put the answer into slot 23 in RAM

You can think of the instructions like punch cards, or those musical piano roll things, that flip switches in the CPU as they go in. The important part is that this is in HARDWARE: it's literally happening on wires / circuitry, almost at the speed of light. But there are a lot of switches to flip.

So, each of those "native instructions" going into the machine gets executed at the "clock speed" of the machine (around 2.5 billion times per second on a modern PC). In reality, it's a bit more complex, with some instructions taking a bit longer, some being done at the same time, etc.

Now, in contrast, virtual machines run non-native code (often called bytecode), literally on a virtual, fake machine. When it comes to languages, a virtual machine is a program that runs ANOTHER program, instead of the program just running directly in hardware.
So, where the above program loads data, adds to it, and stores a result in just three native instructions, a virtual program might do something more like this (Disclaimer: this is rusty, pseudo-assembly code):

load _firstInstruction, D1
if_equal D1, 12 jump _itsAnAddInstructionHandleIt
if_equal D1, 13 jump _itsASubstractInstructionHandleIt
if_equal D1, 14 jump _itsAMultiplyInstructionHandleIt
if_equal D1, 15 jump _itsADivideInstructionHandleIt
if_equal D1, 16 jump _itsALoadInstructionHandleIt
...

_itsALoadInstructionHandleIt:
    load D1, D2
    add 4, D2
    load D2, D3
    return

And so on. This is just an example of handling ONE of the above native instructions in a non-native way: about 10 instructions (depending on implementation) instead of the first, single, native instruction, and I left out important details, like unboxing data! The point is that, probably an average of about 20-30 instructions later, you'll have accomplished the same thing as ONE line from the native code above.

NOW. All that said, GOOD virtual machines have a JIT, which can convert SOME of this into native code as it's executed, or just before executing it. But there are a lot of things, like boxed types, which can't be directly converted, because the whole point of a virtual machine is that it doesn't work in the low-level, circuitry-friendly way that native code does. Virtual machines are easier to program, but much slower.

Another huge disadvantage of virtual machines is that they often have big memory overheads, which makes them pretty useless if you want to code up millions of items of data all in memory at the same time. At times like that, a VM, though intended to make code higher-level and more readable, can force you to do LOWER-level, nastier things than native code, because the benefits start to become drawbacks.

A: For starters, native code is just an intermediate language tailored to run on a particular assembly. It resembles the object code as in other HLLs.
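The dispatch loop sketched in the pseudo-assembly above is easier to see in a high-level language. Below is a toy bytecode interpreter in Python - a deliberately simplified illustration of how a VM decodes and dispatches instructions, not the actual dispatch loop of the CLR or any real VM; the opcode numbers and the single-register model are invented for the example:

```python
# Toy bytecode interpreter: each "virtual instruction" costs many native
# instructions of decoding/dispatch work, which is the overhead discussed above.
LOAD, ADD, STORE = 12, 13, 14  # arbitrary opcode numbers (assumption for the sketch)

def run(program, memory):
    """Execute a list of (opcode, operand) pairs against a dict-based 'RAM'."""
    acc = 0  # one virtual register, playing the role of D1 above
    for opcode, operand in program:
        if opcode == LOAD:       # fetch a value from virtual RAM
            acc = memory[operand]
        elif opcode == ADD:      # internal calculation
            acc += operand
        elif opcode == STORE:    # write the result back to virtual RAM
            memory[operand] = acc
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return memory

# The three-native-instruction program from the diagram, as bytecode:
mem = run([(LOAD, 12), (ADD, 24), (STORE, 23)], {12: 10})
print(mem[23])  # 34
```

Each pass through the loop does a comparison chain before any real work happens, which is exactly the "20-30 instructions for one native instruction" effect the answer describes (a JIT removes most of this by compiling the bytecode to native code).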
{ "language": "en", "url": "https://stackoverflow.com/questions/151952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Good database design for recall and comparison of 2D data arrays? I am looking to store 2D arrays of 900x100 elements in a database. Efficient recall and comparison of the arrays is important. I could use a table with a schema like [A, x, y, A(x,y)] such that a single array would comprise 90,000 records. This seems like an ~ok~ table design to store the array, and would provide for efficient recall of single elements, but inefficient recall of a whole array and would make for very inefficient array comparisons. Should I leave the table design this way and build and compare my arrays in code? Or is there a better way to structure the table such that I can get efficient array comparisons using database-only operations? thanks

A: If the type of data allows, store it in a concatenated format and compare in memory after it has been de-concatenated. The database operation will be much faster and the in-memory operations will be faster than database retrievals as well. Who knows, you may even be able to compare it without de-concatenating.

A: 900 x 100 elements is actually very small (even if the elements are massive 1K things that'd only be 90 MB). Can't you just compare in memory when needed and store on disk in some serialized format? It doesn't make sense to store 2D arrays in the database, especially if it is immutable data.

A: When I used to work in the seismic industry we used to just dump our arrays (typically 1d of a few thousand elements) to a binary file. The database would only be used for what was essentially meta data (location, indexing, etc). This would be considerably quicker, but it also allowed the data to be decoupled if necessary: in production this was usual; a few thousand elements doesn't sound much, but a typical dataset could easily be hundreds of GB - this is the 1990s, so we had to decouple to tape.
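The "store it in a concatenated format and compare in memory" suggestion from the first answer can be sketched as follows. This is a Python illustration using SQLite and a BLOB column; the table name, schema, and the 4-byte integer element type are assumptions made for the example, not something from the question:

```python
# Pack a 900x100 array into one binary blob per row instead of 90,000 records.
import sqlite3
import struct

W, H = 900, 100  # array dimensions from the question

def pack(array2d):
    """Flatten a 900x100 list-of-lists of ints into one binary blob."""
    flat = [v for row in array2d for v in row]
    return struct.pack(f"<{W * H}i", *flat)

def unpack(blob):
    """Rebuild the list-of-lists from the blob."""
    flat = struct.unpack(f"<{W * H}i", blob)
    return [list(flat[r * W:(r + 1) * W]) for r in range(H)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE arrays (id INTEGER PRIMARY KEY, data BLOB)")

a = [[(r * W + c) % 7 for c in range(W)] for r in range(H)]
db.execute("INSERT INTO arrays (id, data) VALUES (1, ?)", (pack(a),))

blob = db.execute("SELECT data FROM arrays WHERE id = 1").fetchone()[0]
# Whole-array comparison without de-concatenating: just compare the blobs.
print(blob == pack(a))                 # True
# Single-element recall still works after unpacking:
print(unpack(blob)[3][5] == a[3][5])   # True
```

Whole-array equality becomes a single byte comparison (the "compare it without de-concatenating" case), while element access only costs one unpack in application code.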
{ "language": "en", "url": "https://stackoverflow.com/questions/151957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Date Conversion with SQL Server/Reporting Services I have 2 fields in the database, month (numeric) and year (numeric), and I want to combine them in a report and format the result as MMM-YYYY. e.g. 7-2008 becomes Jul-2008. How do I do that?

A: DateSerial is the correct answer: http://msdn.microsoft.com/en-us/library/bbx05d0c(VS.80).aspx SSRS uses VB.Net for expressions. Use the expression editor to browse the available functions, one of which is DateSerial. To format the date, set the Format property on the textbox. You should be able to use "MMM-yyyy" as the format. Update: As Peter points out, you would specify the parameters as needed. If you just care about year and month, just supply a value of 1 for the day. Since you are formatting the value without the day component, it really doesn't matter what value you use (as long as it creates a valid date).

A: =DateSerial(year, month, day)

A: Brannon's answer is correct except that he omits the fact that you merely specify a literal for the day. Any value between 1 and 28 will do.
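For comparison, the same DateSerial idea - build a real date from the two numeric fields with an arbitrary day, then format it - looks like this outside of SSRS. This is a Python sketch for illustration, not SSRS expression syntax (and the abbreviated month name depends on the runtime locale):

```python
from datetime import date

def month_year_label(month, year):
    # Any day value from 1-28 will do, since the day is never displayed
    # (same reasoning as the DateSerial answers above).
    return date(year, month, 1).strftime("%b-%Y")

print(month_year_label(7, 2008))  # Jul-2008
```

The key point in both environments is identical: formatting works on a real date value, so you first construct one from the numeric parts and then let the formatter throw the day away.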
{ "language": "en", "url": "https://stackoverflow.com/questions/151959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Asp.Net MVC: How do I get virtual url for the current controller/view? Is it possible to get the route/virtual url associated with a controller action or on a view? I saw that Preview 4 added LinkBuilder.BuildUrlFromExpression helper, but it's not very useful if you want to use it on the master, since the controller type can be different. Any thoughts are appreciated.

A: You can use <%= Url.Action(action, controller, values) %> to build the URL from within the master page. Are you doing this to maybe highlight a tab for the current page or something? If so you can use ViewContext from the view and get the values you need.

A: I always try to implement the simplest solution that meets the project requirements. As Einstein said, "Make things as simple as possible, but not simpler." Try this.

<%: Request.Path %>

A: This worked for me:

<%= this.Url.RouteUrl(this.ViewContext.RouteData.Values) %>

It returns the current Url as such: /Home/About. Maybe there is a simpler way to return the actual route string?

A: You can get that data from ViewContext.RouteData. Below are some examples for how to access (and use) that information:

/// These are added to my viewmasterpage, viewpage, and viewusercontrol base classes:

public bool IsController(string controller)
{
    if (ViewContext.RouteData.Values["controller"] != null)
    {
        return ViewContext.RouteData.Values["controller"].ToString().Equals(controller, StringComparison.OrdinalIgnoreCase);
    }
    return false;
}

public bool IsAction(string action)
{
    if (ViewContext.RouteData.Values["action"] != null)
    {
        return ViewContext.RouteData.Values["action"].ToString().Equals(action, StringComparison.OrdinalIgnoreCase);
    }
    return false;
}

public bool IsAction(string action, string controller)
{
    return IsController(controller) && IsAction(action);
}

/// Some extension methods that I added to the UrlHelper class.
public static class UrlHelperExtensions
{
    /// <summary>
    /// Determines if the current view equals the specified action
    /// </summary>
    /// <typeparam name="TController">The type of the controller.</typeparam>
    /// <param name="helper">Url Helper</param>
    /// <param name="action">The action to check.</param>
    /// <returns>
    /// <c>true</c> if the specified action is the current view; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsAction<TController>(this UrlHelper helper, LambdaExpression action)
        where TController : Controller
    {
        MethodCallExpression call = action.Body as MethodCallExpression;
        if (call == null)
        {
            throw new ArgumentException("Expression must be a method call", "action");
        }
        return (call.Method.Name.Equals(helper.ViewContext.ViewName, StringComparison.OrdinalIgnoreCase)
            && typeof(TController) == helper.ViewContext.Controller.GetType());
    }

    /// <summary>
    /// Determines if the current view equals the specified action
    /// </summary>
    /// <param name="helper">Url Helper</param>
    /// <param name="actionName">Name of the action.</param>
    /// <returns>
    /// <c>true</c> if the specified action is the current view; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsAction(this UrlHelper helper, string actionName)
    {
        if (String.IsNullOrEmpty(actionName))
        {
            throw new ArgumentException("Please specify the name of the action", "actionName");
        }
        string controllerName = helper.ViewContext.RouteData.GetRequiredString("controller");
        return IsAction(helper, actionName, controllerName);
    }

    /// <summary>
    /// Determines if the current view equals the specified action
    /// </summary>
    /// <param name="helper">Url Helper</param>
    /// <param name="actionName">Name of the action.</param>
    /// <param name="controllerName">Name of the controller.</param>
    /// <returns>
    /// <c>true</c> if the specified action is the current view; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsAction(this UrlHelper helper, string actionName, string controllerName)
    {
        if (String.IsNullOrEmpty(actionName))
        {
            throw new ArgumentException("Please specify the name of the action", "actionName");
        }
        if (String.IsNullOrEmpty(controllerName))
        {
            throw new ArgumentException("Please specify the name of the controller", "controllerName");
        }
        if (!controllerName.EndsWith("Controller", StringComparison.OrdinalIgnoreCase))
        {
            controllerName = controllerName + "Controller";
        }
        bool isOnView = helper.ViewContext.ViewName.SafeEquals(actionName, StringComparison.OrdinalIgnoreCase);
        return isOnView && helper.ViewContext.Controller.GetType().Name.Equals(controllerName, StringComparison.OrdinalIgnoreCase);
    }

    /// <summary>
    /// Determines if the current request is on the specified controller
    /// </summary>
    /// <param name="helper">The helper.</param>
    /// <param name="controllerName">Name of the controller.</param>
    /// <returns>
    /// <c>true</c> if the current view is on the specified controller; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsController(this UrlHelper helper, string controllerName)
    {
        if (String.IsNullOrEmpty(controllerName))
        {
            throw new ArgumentException("Please specify the name of the controller", "controllerName");
        }
        if (!controllerName.EndsWith("Controller", StringComparison.OrdinalIgnoreCase))
        {
            controllerName = controllerName + "Controller";
        }
        return helper.ViewContext.Controller.GetType().Name.Equals(controllerName, StringComparison.OrdinalIgnoreCase);
    }

    /// <summary>
    /// Determines if the current request is on the specified controller
    /// </summary>
    /// <typeparam name="TController">The type of the controller.</typeparam>
    /// <param name="helper">The helper.</param>
    /// <returns>
    /// <c>true</c> if the current view is on the specified controller; otherwise, <c>false</c>.
    /// </returns>
    public static bool IsController<TController>(this UrlHelper helper)
        where TController : Controller
    {
        return (typeof(TController) == helper.ViewContext.Controller.GetType());
    }
}

A: I wrote a helper class that allows me to access the route parameters. With this helper, you can get the controller, action, and all parameters passed to the action.
Q: Why are no Symbols loaded when remote debugging? I want to use remote debugging. The program that I want to debug runs on machine b. Visual Studio runs on machine a. On machine b I have a folder with the following files: * *msvcr72.dll *msvsmon.exe *NatDbgDE.dll *NatDbgDEUI.dll *NatDbgEE.dll *NatDbgEEUI.dll If you think some files are missing, could you also describe where they are usually located? In the next step I started the msvsmon.exe and my program on machine b. On machine a, I started Visual Studio 2008 and my solution in which the program was written. Then I choose "Debug - Attach to Process". I chose "Remote Transport (Native Only with no authentication)". I used the correct IP as a qualifier and took the right process (program.exe). After a while the following message occurred in a popup-window: Unhandled exception at 0x7c812a7b in program.exe: 0xE0434F4D: 0xe0434f4d I can continue or break; When continuing, the exception occurs again and again and again. So I pressed break and the following message occurred: No symbols are loaded for any call stack frame. The source code cannot be displayed. A: Remote debugging in .NET will not work if you don't place the .PDB files into the same directory where the debugged code exists. If VS still can't find source for debugging, the debugged code and the VS project source are not the same version. The solution is rebuilding and redeploying the project. A: 0xE0434F4D is an exception from the CLR (i.e., managed code). You need to do remote debugging with authentication and choose to debug managed code. Alternatively, it is possible to extract out the managed exception info using some debugger extensions but it is a bit more hard work. References: If broken it is... A: 1800 INFORMATION is right, you have to do remote debugging with Windows authentication in order to debug managed code, otherwise you won't be able to load the symbols for managed assemblies. 
Getting this to work with authentication is pretty tricky, as it requires local accounts on both machines with identical passwords, among other things. This question and everyone's answers are quite useful for getting that working. Remote Debugging in Visual Studio (VS2008), Windows Forms Application A: While the above answers are correct, I ran into instances where the PDBs that were built with assembly being debugged were in place at the remote location, and were not being picked up. If you are using TFS or another build mechanism that supports publishing your debug symbols, I would recommend doing it. Then in Visual Studio Options>Debugging>Symbols you can add that location to the Symbol Servers option to load those symbols anytime they are found to match. This has allowed me debug darned near anything that is running that I have written even if it is a dynamically called assembly (something that I could not get to work for the life of me when only publishing symbols with the assembly). Make use of this very handy feature! A: Make sure you copy the .PDB file that is generated with your assembly into the same folder on the remote machine. This will allow the debugger to pickup the debug symbols. A: I had the same issues. Found the answer on the msdn forums I'll just copy/paste the correct answer here: Make sure that you're using the correct version of msvsmon.exe!!! That's all it was! i had the same problem while remote debugging a C# application. I was using x64 msvsmon.exe because the server runs Windows Server 2008 64-bit, but the application was written for x86, so I had to run the x86 version of msvsmon.exe in order to get rid of this annoying error. Nothing else was needed. Just run the version of msvsmon.exe that corresponds to the target architecture of your application ^_^ A: * *On the Tools menu in Visual studio 2010, choose Options. *In the Options dialog box, open the Debugging node and then click Generals. 
*Check Show all settings if needed and locate Enable Just My Code (Managed only) *Uncheck it and click OK After you can attach the remote process A: I also ran into this when using a custom build configuration. (DEV instead of Debug) To correct this, I modified the Project Properties-->Build-->Output-->Advanced setting and ensured the Output-->Debug Info setting was full or pdb-only. The default Release configuration is typically set to none. A: * *Add a shared folder on your dev machine that points to the location of the .pdb files *Set up an environment variable called _NT_SYMBOL_PATH on the remote machine that points to the shared folder on your dev machine The remote debugger will now search your dev machine for symbols. No need to copy them over for every build. See MS Video here. Start watching 8-9 minutes in. He demonstrates how to setup the remote debugger to load symbols from a drive share on your development machine. Good luck! A: I was able to get this working by going to Project Properties, Compile tab and setting the Build output path to my remote machine eg \myserver\myshare\myappdir In the debug tab I have Use remote machine checked and set to myserver A: Go to Tools->Options->Debugging->Symbols and add the path to the .pdb files for the executable. The path on my local machine worked fine. A: According to documentation, for managed (I tried attaching to a managed windows service (built against .net 4.5) on remote machine with visual studio 2012) the symbols should be on the remote machine. So, I just kept symbols (make sure they match the modules/assemblies of the application on remote machine) on remote machine, share it and referred to it via symbol settings from local system (where vs is running). Note: the service and symbols need not be on same directory as its working for me with 2k12 + .net 4.5 windows service. 
for details: http://msdn.microsoft.com/en-us/library/bt727f1t(v=vs.100).aspx Excerpt from the link: Locating Symbol (.pdb) Files Symbol files contain the debugging information for compiled executables. The symbol files of the application to be debugged must be the files that were created when the application executables were compiled. The symbol files must also be located where the debugger can find them. •The symbol files for native applications must be located on the Visual Studio host computer. •The symbol files for managed applications must be located on the remote computer. •The symbol files for mixed (managed and native) applications must be located on both the Visual Studio host computer and the remote computer. Regards! A: I ran into this issue and the above solutions did not fix it for me. In my case, my VS2010 solution had many projects in it. The project I was trying to remotely debug was not set in my VS2010 solution as the StartUp Project, because my make scripts were not quite right. I right-clicked on the project within my solution I was trying to debug and selected Set as StartUp Project and then my symbols loaded properly and my breakpoint was hit. A: I had the same problem while remote debugging, it got resolved with the following steps on VS 2008: * *you copy the local pdb file along with your binaries *Run the same version of msvmon as your application was built for, If your application is built for x86 architecture, you need to run the x86 version of msvmon, even If you are running it on x64 machine. It will give warning when you try to run, but it should run.
{ "language": "en", "url": "https://stackoverflow.com/questions/151966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: When should I use 'self' over '$this'? In PHP 5, what is the difference between using self and $this? When is each appropriate?

A: The keyword self does NOT refer merely to the 'current class', at least not in a way that restricts you to static members. Within the context of a non-static member, self also provides a way of bypassing the vtable (see wiki on vtable) for the current object. Just as you can use parent::methodName() to call the parent's version of a function, so you can call self::methodName() to call the current class's implementation of a method.

class Person {
    private $name;

    public function __construct($name) {
        $this->name = $name;
    }

    public function getName() {
        return $this->name;
    }

    public function getTitle() {
        return $this->getName()." the person";
    }

    public function sayHello() {
        echo "Hello, I'm ".$this->getTitle()."<br/>";
    }

    public function sayGoodbye() {
        echo "Goodbye from ".self::getTitle()."<br/>";
    }
}

class Geek extends Person {
    public function __construct($name) {
        parent::__construct($name);
    }

    public function getTitle() {
        return $this->getName()." the geek";
    }
}

$geekObj = new Geek("Ludwig");
$geekObj->sayHello();
$geekObj->sayGoodbye();

This will output:

Hello, I'm Ludwig the geek
Goodbye from Ludwig the person

sayHello() uses the $this pointer, so the vtable is invoked to call Geek::getTitle(). sayGoodbye() uses self::getTitle(), so the vtable is not used, and Person::getTitle() is called. In both cases, we are dealing with the method of an instantiated object, and have access to the $this pointer within the called functions.

A: $this refers to the current class object, and self refers to the current class (not the object). The class is the blueprint of the object. So you define a class, but you construct objects. So in other words, use self for static and $this for non-static members or methods. Also in a child/parent scenario, self / parent is mostly used to identify child and parent class members and methods.
A: Additionally, since $this:: has not been discussed yet: for informational purposes only, as of PHP 5.3, when dealing with instantiated objects to get the current scope value, as opposed to using static::, one can alternatively use $this:: like so. http://ideone.com/7etRHy

class Foo
{
    const NAME = 'Foo';

    //Always Foo::NAME (Foo) due to self
    protected static $staticName = self::NAME;

    public function __construct()
    {
        echo $this::NAME;
    }

    public function getStaticName()
    {
        echo $this::$staticName;
    }
}

class Bar extends Foo
{
    const NAME = 'FooBar';

    /**
     * override getStaticName to output Bar::NAME
     */
    public function getStaticName()
    {
        $this::$staticName = $this::NAME;
        parent::getStaticName();
    }
}

$foo = new Foo;          //outputs Foo
$bar = new Bar;          //outputs FooBar
$foo->getStaticName();   //outputs Foo
$bar->getStaticName();   //outputs FooBar
$foo->getStaticName();   //outputs FooBar

Using the code above is not common or recommended practice, but is simply to illustrate its usage, and is to act as more of a "Did you know?" in reference to the original poster's question. It also represents the usage of $object::CONSTANT, for example echo $foo::NAME; as opposed to $this::NAME;

A: Use self if you want to call a method of a class without creating an object/instance of that class, thus saving RAM (sometimes use self for that purpose). In other words, it is actually calling a method statically. Use $this for the object perspective.

A: self:: is a keyword used for the current class; basically it is used to access static members, methods, and constants. With $this, you cannot call static members, methods, and functions. You can use the self:: keyword in another class and access its static members, methods, and constants when it is extended from the parent class, and the same goes for the $this keyword: you can access the non-static members, methods, and functions in another class when it is extended from the parent class.
The code given below is an example of the self:: and $this keywords. Just copy and paste the code in your code file and see the output.

class cars{
    var $doors = 4;
    static $car_wheel = 4;

    public function car_features(){
        echo $this->doors . " Doors <br>";
        echo self::$car_wheel . " Wheels <br>";
    }
}

class spec extends cars{
    function car_spec(){
        print(self::$car_wheel . " Wheels <br>");
        print($this->doors . " Doors <br>");
    }
}

/********Parent class output*********/

$car = new cars;
print_r($car->car_features());

echo "------------------------<br>";

/********Extend class from another class output**********/

$car_spec_show = new spec;
print($car_spec_show->car_spec());

A: Do not use self::. Use static::*

There is another aspect of self:: that is worth mentioning. Annoyingly, self:: refers to the scope at the point of definition, not at the point of execution. Consider this simple class with two methods:

class Person
{
    public static function status()
    {
        self::getStatus();
    }

    protected static function getStatus()
    {
        echo "Person is alive";
    }
}

If we call Person::status() we will see "Person is alive". Now consider what happens when we make a class that inherits from this:

class Deceased extends Person
{
    protected static function getStatus()
    {
        echo "Person is deceased";
    }
}

Calling Deceased::status() we would expect to see "Person is deceased". However, we see "Person is alive" as the scope contains the original method definition when the call to self::getStatus() was defined.

PHP 5.3 has a solution. The static:: resolution operator implements "late static binding" which is a fancy way of saying that it's bound to the scope of the class called. Change the line in status() to static::getStatus() and the results are what you would expect. In older versions of PHP you will have to find a kludge to do this. See PHP Documentation

So to answer the question not as asked... $this-> refers to the current object (an instance of a class), whereas static:: refers to a class.
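The early-versus-late binding pitfall in the Person/Deceased example above is not unique to PHP. As a cross-language illustration only (an analogy - Python has no self:: or static:: operators), the same distinction shows up in Python when a classmethod names its defining class explicitly versus using the late-bound cls reference:

```python
# Analogy: naming Person explicitly behaves like PHP's self::,
# while the late-bound cls behaves like PHP 5.3's static::.
class Person:
    @classmethod
    def status_self_style(cls):
        return Person.get_status()   # like self::getStatus(): bound to Person

    @classmethod
    def status_static_style(cls):
        return cls.get_status()      # like static::getStatus(): late-bound

    @classmethod
    def get_status(cls):
        return "Person is alive"

class Deceased(Person):
    @classmethod
    def get_status(cls):
        return "Person is deceased"

print(Deceased.status_self_style())    # Person is alive
print(Deceased.status_static_style())  # Person is deceased
```

In both languages the lesson is the same: if you want subclasses to be able to override the method being called, use the late-bound form.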
A: From this blog post:

* *self refers to the current class *self can be used to call static functions and reference static member variables *self can be used inside static functions *self can also turn off polymorphic behavior by bypassing the vtable *$this refers to the current object *$this can be used to call static functions *$this should not be used to call static member variables. Use self instead. *$this can not be used inside static functions

A: In PHP, you use the self keyword to access static properties and methods. The problem is that you can replace $this->method() with self::method() anywhere, regardless of whether method() is declared static or not. So which one should you use? Consider this code:

class ParentClass {
    function test() {
        self::who();    // will output 'parent'
        $this->who();   // will output 'child'
    }

    function who() {
        echo 'parent';
    }
}

class ChildClass extends ParentClass {
    function who() {
        echo 'child';
    }
}

$obj = new ChildClass();
$obj->test();

In this example, self::who() will always output 'parent', while $this->who() will depend on what class the object has. Now we can see that self refers to the class in which it is called, while $this refers to the class of the current object. So, you should use self only when $this is not available, or when you don't want to allow descendant classes to overwrite the current method.

A: To really understand what we're talking about when we talk about self versus $this, we need to actually dig into what's going on at a conceptual and a practical level. I don't really feel any of the answers do this appropriately, so here's my attempt. Let's start off by talking about what a class and an object is.

Classes And Objects, Conceptually

So, what is a class? A lot of people define it as a blueprint or a template for an object. In fact, you can read more About Classes In PHP Here. And to some extent that's what it really is.
Let's look at a class: class Person { public $name = 'my name'; public function sayHello() { echo "Hello"; } } As you can tell, there is a property on that class called $name and a method (function) called sayHello(). It's very important to note that the class is a static structure. Which means that the class Person, once defined, is always the same everywhere you look at it. An object on the other hand is what's called an instance of a Class. What that means is that we take the "blueprint" of the class, and use it to make a dynamic copy. This copy is now specifically tied to the variable it's stored in. Therefore, any changes to an instance is local to that instance. $bob = new Person; $adam = new Person; $bob->name = 'Bob'; echo $adam->name; // "my name" We create new instances of a class using the new operator. Therefore, we say that a Class is a global structure, and an Object is a local structure. Don't worry about that funny -> syntax, we're going to go into that in a little bit. One other thing we should talk about, is that we can check if an instance is an instanceof a particular class: $bob instanceof Person which returns a boolean if the $bob instance was made using the Person class, or a child of Person. Defining State So let's dig a bit into what a class actually contains. There are 5 types of "things" that a class contains: * *Properties - Think of these as variables that each instance will contain. class Foo { public $bar = 1; } *Static Properties - Think of these as variables that are shared at the class level. Meaning that they are never copied by each instance. class Foo { public static $bar = 1; } *Methods - These are functions which each instance will contain (and operate on instances). class Foo { public function bar() {} } *Static Methods - These are functions which are shared across the entire class. They do not operate on instances, but instead on the static properties only. 
class Foo { public static function bar() {} } *Constants - Class resolved constants. Not going any deeper here, but adding for completeness: class Foo { const BAR = 1; } So basically, we're storing information on the class and object container using "hints" about static which identify whether the information is shared (and hence static) or not (and hence dynamic). State and Methods Inside of a method, an object's instance is represented by the $this variable. The current state of that object is there, and mutating (changing) any property will result in a change to that instance (but not others). If a method is called statically, the $this variable is not defined. This is because there's no instance associated with a static call. The interesting thing here is how static calls are made. So let's talk about how we access the state: Accessing State So now that we have stored that state, we need to access it. This can get a bit tricky (or way more than a bit), so let's split this into two viewpoints: from outside of an instance/class (say from a normal function call, or from the global scope), and inside of an instance/class (from within a method on the object). From Outside Of An Instance/Class From the outside of an instance/class, our rules are quite simple and predictable. We have two operators, and each tells us immediately if we're dealing with an instance or a class static: * *-> - object-operator - This is always used when we're accessing an instance. $bob = new Person; echo $bob->name; It's important to note that calling Person->foo does not make sense (since Person is a class, not an instance). Therefore, that is a parse error. *:: - scope-resolution-operator - This is always used to access a Class static property or method. echo Foo::bar() Additionally, we can call a static method on an object in the same way: echo $foo::bar() It's extremely important to note that when we do this from outside, the object's instance is hidden from the bar() method. 
Meaning that it's the exact same as running: $class = get_class($foo); $class::bar(); Therefore, $this is not defined in the static call. From Inside Of An Instance/Class Things change a bit here. The same operators are used, but their meaning becomes significantly blurred. The object-operator -> is still used to make calls to the object's instance state. class Foo { public $a = 1; public function bar() { return $this->a; } } Calling the bar() method on $foo (an instance of Foo) using the object-operator: $foo->bar() will result in the instance's version of $a. So that's how we expect. The meaning of the :: operator though changes. It depends on the context of the call to the current function: * *Within a static context Within a static context, any calls made using :: will also be static. Let's look at an example: class Foo { public function bar() { return Foo::baz(); } public function baz() { return isset($this); } } Calling Foo::bar() will call the baz() method statically, and hence $this will not be populated. It's worth noting that in recent versions of PHP (5.3+) this will trigger an E_STRICT error, because we're calling non-static methods statically. *Within an instance context Within an instance context on the other hand, calls made using :: depend on the receiver of the call (the method we're calling). If the method is defined as static, then it will use a static call. If it's not, it will forward the instance information. So, looking at the above code, calling $foo->bar() will return true, since the "static" call happens inside of an instance context. Make sense? Didn't think so. It's confusing. Short-Cut Keywords Because tying everything together using class names is rather dirty, PHP provides 3 basic "shortcut" keywords to make scope resolving easier. * *self - This refers to the current class name. So self::baz() is the same as Foo::baz() within the Foo class (any method on it). *parent - This refers to the parent of the current class. 
*static - This refers to the called class. Thanks to inheritance, child classes can override methods and static properties. So calling them using static instead of a class name allows us to resolve where the call came from, rather than the current level. Examples The easiest way to understand this is to start looking at some examples. Let's pick a class: class Person { public static $number = 0; public $id = 0; public function __construct() { self::$number++; $this->id = self::$number; } public $name = ""; public function getName() { return $this->name; } public function getId() { return $this->id; } } class Child extends Person { public $age = 0; public function __construct($age) { $this->age = $age; parent::__construct(); } public function getName() { return 'child: ' . parent::getName(); } } Now, we're also looking at inheritance here. Ignore for a moment that this is a bad object model, but let's look at what happens when we play with this: $bob = new Person; $bob->name = "Bob"; $adam = new Person; $adam->name = "Adam"; $billy = new Child; $billy->name = "Billy"; var_dump($bob->getId()); // 1 var_dump($adam->getId()); // 2 var_dump($billy->getId()); // 3 So the ID counter is shared across both instances and the children (because we're using self to access it. If we used static, we could override it in a child class). var_dump($bob->getName()); // Bob var_dump($adam->getName()); // Adam var_dump($billy->getName()); // child: Billy Note that we're executing the Person::getName() instance method every time. But we're using the parent::getName() to do it in one of the cases (the child case). This is what makes this approach powerful. Word Of Caution #1 Note that the calling context is what determines if an instance is used. Therefore: class Foo { public function isFoo() { return $this instanceof Foo; } } Is not always true. 
class Bar { public function doSomething() { return Foo::isFoo(); } } $b = new Bar; var_dump($b->doSomething()); // bool(false) Now it is really weird here. We're calling a different class, but the $this that gets passed to the Foo::isFoo() method is the Bar instance stored in $b. This can cause all sorts of bugs and conceptual WTF-ery. So I'd highly suggest avoiding the :: operator from within instance methods on anything except those three virtual "short-cut" keywords (static, self, and parent). Word Of Caution #2 Note that static methods and properties are shared by everyone. That makes them basically global variables. With all the same problems that come with globals. So I would be really hesitant to store information in static methods/properties unless you're comfortable with it being truly global. Word Of Caution #3 In general you'll want to use what's known as Late-Static-Binding by using static instead of self. But note that they are not the same thing, so saying "always use static instead of self" is really short-sighted. Instead, stop and think about the call you want to make and think about whether you want child classes to be able to override that statically resolved call. TL/DR Too bad, go back and read it. It may be too long, but it's that long because this is a complex topic. TL/DR #2 Ok, fine. In short, self is used to reference the current class name within a class, whereas $this refers to the current object instance. Note that self is a copy/paste short-cut. You can safely replace it with your class name, and it'll work fine. But $this is a dynamic variable that can't be determined ahead of time (and may not even be your class). TL/DR #3 If the object-operator is used (->), then you always know you're dealing with an instance. If the scope-resolution-operator is used (::), you need more information about the context (are we in an object-context already? Are we outside of an object? etc.).
A: Inside a class definition, $this refers to the current object, while self refers to the current class. It is necessary to refer to a class element using self, and to refer to an object element using $this. self::STAT // refer to a constant value self::$stat // static variable $this->stat // refer to an object variable A: self refers to the current class (in which it is called), $this refers to the current object. You can use static instead of self. See the example: class ParentClass { function test() { self::which(); // Outputs 'parent' $this->which(); // Outputs 'child' } function which() { echo 'parent'; } } class ChildClass extends ParentClass { function which() { echo 'child'; } } $obj = new ChildClass(); $obj->test(); Output: parent child A: Here is an example of correct usage of $this and self for non-static and static member variables: <?php class X { private $non_static_member = 1; private static $static_member = 2; function __construct() { echo $this->non_static_member . ' ' . self::$static_member; } } new X(); ?> A: According to Static Keyword, there isn't any $self. There is only $this, for referring to the current instance of the class (the object), and self, which can be used to refer to static members of a class. The difference between an object instance and a class comes into play here. A: * *The object pointer $this refers to the current object. *The class value static refers to the class the call was made on (late static binding). *The class value self refers to the exact class it was defined in. *The class value parent refers to the parent of the exact class it was defined in. See the following example which shows overloading.
<?php class A { public static function newStaticClass() { return new static; } public static function newSelfClass() { return new self; } public function newThisClass() { return new $this; } } class B extends A { public function newParentClass() { return new parent; } } $b = new B; var_dump($b::newStaticClass()); // B var_dump($b::newSelfClass()); // A because self belongs to "A" var_dump($b->newThisClass()); // B var_dump($b->newParentClass()); // A class C extends B { public static function newSelfClass() { return new self; } } $c = new C; var_dump($c::newStaticClass()); // C var_dump($c::newSelfClass()); // C because self now points to "C" class var_dump($c->newThisClass()); // C var_dump($b->newParentClass()); // A because parent was defined *way back* in class "B" Most of the time you want to refer to the current class, which is why you use static or $this. However, there are times when you need self because you want the original class regardless of what extends it. (Very, very seldom) A: Case 1: self can be used for class constants class classA { const POUNDS_TO_KILOGRAMS = 0.453592; public function convert($pounds) { return $pounds * self::POUNDS_TO_KILOGRAMS; } } If you want to access the constant outside of the class, use classA::POUNDS_TO_KILOGRAMS Case 2: For static properties class classC { private static $_counter = 0; public $num; public function __construct() { self::$_counter++; $this->num = self::$_counter; } } A: Short Answer Use $this to refer to the current object. Use self to refer to the current class. In other words, use $this->member for non-static members, use self::$member for static members. Full Answer Here is an example of correct usage of $this and self for non-static and static member variables: <?php class X { private $non_static_member = 1; private static $static_member = 2; function __construct() { echo $this->non_static_member . ' ' .
self::$static_member; } } new X(); ?> Here is an example of incorrect usage of $this and self for non-static and static member variables: <?php class X { private $non_static_member = 1; private static $static_member = 2; function __construct() { echo self::$non_static_member . ' ' . $this->static_member; } } new X(); ?> Here is an example of polymorphism with $this for member functions: <?php class X { function foo() { echo 'X::foo()'; } function bar() { $this->foo(); } } class Y extends X { function foo() { echo 'Y::foo()'; } } $x = new Y(); $x->bar(); ?> Here is an example of suppressing polymorphic behaviour by using self for member functions: <?php class X { function foo() { echo 'X::foo()'; } function bar() { self::foo(); } } class Y extends X { function foo() { echo 'Y::foo()'; } } $x = new Y(); $x->bar(); ?> The idea is that $this->foo() calls the foo() member function of whatever is the exact type of the current object. If the object is of type X, it thus calls X::foo(). If the object is of type Y, it calls Y::foo(). But with self::foo(), X::foo() is always called. From http://www.phpbuilder.com/board/showthread.php?t=10354489: By http://board.phpbuilder.com/member.php?145249-laserlight A: I believe the question was not whether you can call the static member of the class by calling ClassName::staticMember. The question was what's the difference between using self::classmember and $this->classmember. 
For example, both of the following examples work without any errors, whether you use self:: or $this-> class Person{ private $name; private $address; public function __construct($new_name,$new_address){ $this->name = $new_name; $this->address = $new_address; } } class Person{ private $name; private $address; public function __construct($new_name,$new_address){ self::$name = $new_name; self::$address = $new_address; } } A: Here is a small benchmark (7.2.24 on repl.it): Speed (in seconds) Percentage $this-> 0.91760206222534 100 self:: 1.0047659873962 109.49909865716 static:: 0.98066782951355 106.87288857386 Results for 4 000 000 runs. Conclusion: it doesn't matter. And here is the code I used: <?php class Foo { public function calling_this() { $this->called(); } public function calling_self() { self::called(); } public function calling_static() { static::called(); } public static function called() {} } $foo = new Foo(); $n = 4000000; $times = []; // warmup for ($i = 0; $i < $n; $i++) { $foo->calling_this(); } for ($i = 0; $i < $n; $i++) { $foo->calling_self(); } for ($i = 0; $i < $n; $i++) { $foo->calling_static(); } $start = microtime(true); for ($i = 0; $i < $n; $i++) { $foo->calling_this(); } $times["this"] = microtime(true)-$start; $start = microtime(true); for ($i = 0; $i < $n; $i++) { $foo->calling_self(); } $times["self"] = microtime(true)-$start; $start = microtime(true); for ($i = 0; $i < $n; $i++) { $foo->calling_static(); } $times["static"] = microtime(true)-$start; $min = min($times); echo $times["this"] . "\t" . ($times["this"] / $min)*100 . "\n"; echo $times["self"] . "\t" . ($times["self"] / $min)*100 . "\n"; echo $times["static"] . "\t" . ($times["static"] / $min)*100 . "\n"; A: When self is used with the :: operator it refers to the current class, which can be done both in static and non-static contexts. $this refers to the object itself. In addition, it is perfectly legal to use $this to call static methods (but not to refer to fields). 
A: self (not $self) refers to the type of class, whereas $this refers to the current instance of the class. self is for use in static member functions to allow you to access static member variables. $this is used in non-static member functions, and is a reference to the instance of the class on which the member function was called. Because this is an object, you use it like: $this->member Because self is not an object, it's basically a type that automatically refers to the current class. You use it like: self::member A: $this-> is used to refer to a specific instance of a class's variables (member variables) or methods. Example: $derek = new Person(); $derek is now a specific instance of Person. Every Person has a first_name and a last_name, but $derek has a specific first_name and last_name (Derek Martin). Inside the $derek instance, we can refer to those as $this->first_name and $this->last_name. ClassName:: is used to refer to that type of class, and its static variables and static methods. If it helps, you can mentally replace the word "static" with "shared". Because they are shared, they cannot refer to $this, which refers to a specific instance (not shared). Static Variables (i.e. static $db_connection) can be shared among all instances of a type of object. For example, all database objects share a single connection (static $connection). Static Variables Example: Pretend we have a database class with a single member variable: static $num_connections; Now, put this in the constructor: function __construct() { if (self::$num_connections === null) { self::$num_connections = 0; } self::$num_connections++; } Just as objects have constructors, they also have destructors, which are executed when the object dies or is unset: function __destruct() { self::$num_connections--; } Every time we create a new instance, it will increase our connection counter by one. Every time we destroy or stop using an instance, it will decrease the connection counter by one. In this way, we can monitor the number of instances of the database object we have in use with: echo DB::$num_connections; Because $num_connections is static (shared), it will reflect the total number of active database objects. You may have seen this technique used to share database connections among all instances of a database class. This is done because creating the database connection takes a long time, so it's best to create just one, and share it (this is called a Singleton Pattern). Static Methods (i.e. public static View::format_phone_number($digits)) can be used WITHOUT first instantiating one of those objects (i.e. they do not internally refer to $this). Static Method Example: public static function prettyName($first_name, $last_name) { echo ucfirst($first_name).' '.ucfirst($last_name); } Person::prettyName($derek->first_name, $derek->last_name); As you can see, public static function prettyName knows nothing about the object. It's just working with the parameters you pass in, like a normal function that's not part of an object. Why bother, then, if we could just have it not as part of the object? *First, attaching functions to objects helps you keep things organized, so you know where to find them. *Second, it prevents naming conflicts. In a big project, you're likely to have two developers create getName() functions. If one creates a ClassName1::getName(), and the other creates ClassName2::getName(), it's no problem at all. No conflict. Yay static methods!
SELF:: If you are coding outside the object that has the static method you want to refer to, you must call it using the object's name View::format_phone_number($phone_number); If you are coding inside the object that has the static method you want to refer to, you can either use the object's name View::format_phone_number($pn), OR you can use the self::format_phone_number($pn) shortcut The same goes for static variables: Example: View::templates_path versus self::templates_path Inside the DB class, if we were referring to a static method of some other object, we would use the object's name: Example: Session::getUsersOnline(); But if the DB class wanted to refer to its own static variable, it would just say self: Example: self::connection; A: I ran into the same question and the simple answer was: * *$this requires an instance of the class *self:: doesn't Whenever you are using static methods or static attributes and want to call them without having an object of the class instantiated, you need to use self: to call them, because $this always requires an object to be created. A: According to php.net there are three special keywords in this context: self, parent and static. They are used to access properties or methods from inside the class definition. $this, on the other hand, is used to call an instance and methods of any class as long as that class is accessible.
{ "language": "en", "url": "https://stackoverflow.com/questions/151969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2198" }
Q: C++ blogs that you regularly follow? What are all the C++ blogs that you follow? Please add one url for one posting. A: Sutter's Mill A: Google Testing Blog covers all kinds of great testing tips and techniques. A: Andrew Koenig's Blog A: Reddit C++ A: https://stackoverflow.com/questions/tagged/c++ ;-) A: mr-edd.co.uk A: Rambling Comments (Len Holgate's blog) A: I used to follow the GotW. A: Visual C++ Team Blog Stephan T. Lavavej's (STL's!) posts in particular. A: Attractive Chaos A: C/C++ About.com A: Garry's Bit Patterns (though it's mainly about Python these days). A: Just Software Solutions A: LightSleeper A: The Fastware Project by Scott Meyers A: ACCU (Association of C and C++ Users) journal: http://accu.org/index.php/journals/c78/ A: C++ Soup! C++ Tips, Tricks, Reviews, and Commentary By Dean Michael Berris. A: Learning c++ by Abhishek Padmanabh A: C++ Truths A: DrDobb's C++ Blog A: The Old New Thing A: Learn C++ A: Intel Software Blogs (not purely C++, but a lot of postings are) A: Just Software Solutions / C++ (C++ by Anthony Williams) A: STLab Papers and Presentations A: The C++ Standards Committee WG papers A: Danny Kalev's blog and articles and the updates on his c++ reference guide both on informIT A: C++Next A: Power of 2 Games Update: The articles are now hosted at Games From Within. A: The C++ Source A: The View from Aristeia formerly Scott Meyers Mailing List A: C++ FAQ Lite A: Thinking Asynchronously in C++ A: Bartosz Milewski’s Programming Cafe: Concurrency, Multicore, Language Design, D, C++ A: Not a blog but new C++ contents are added frequently: More C++ Idioms A: Poco A: DevX: C++ Zone A: KenanLee A: C++ Next A: Bannalia: trivial notes on themes diverse from Joaquín M López Muñoz, the author of Boost.Multiindex, includes some interesting C++ articles. 
A: Visual C++ Weekly news A: Andrzej's C++ blog A: http://www.codemaestro.com/ A: http://embracingcpp.blogspot.com A: http://cplusplus.co.il A: Modern C++ A: Google C++ Style Guide A: C / C++ Programming Recipes For The Impatient Programmer A: MasterCplusplus (For Beginners) A: CodeProject. A: Mad Software
{ "language": "en", "url": "https://stackoverflow.com/questions/151974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "183" }
Q: Does the GroupBox Header in WPF swallow mouse-clicks? Have a look at this very simple example WPF program: <Window x:Class="WpfApplication1.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <GroupBox> <GroupBox.Header> <CheckBox Content="Click Here"/> </GroupBox.Header> </GroupBox> </Window> So I have a GroupBox whose header is a CheckBox. We've all done something like this - typically you bind the content of the GroupBox in such a way that it's disabled when the CheckBox is unchecked. However, when I run this application and click on the CheckBox, I've found that sometimes my mouse clicks are swallowed and the CheckBox's status doesn't change. If I'm right, it's when I click on the exact row of pixels that the GroupBox's top border sits on. Can someone duplicate this? Why would this occur, and is there a way around it? Edit: Setting the GroupBox's BorderThickness to 0 solves the problem, but obviously it removes the border, so it doesn't look like a GroupBox anymore. A: If you change the GroupBox's BorderBrush, it works! <GroupBox BorderBrush="{x:Null}"> I know this defeats the objective but it does prove where the problem lies! A: Ian Oakes answer stuffs up the tab order such that the header comes after the content. It's possible to modify the control template such that the border can't receive focus. 
To do this, modify the template so that the 2nd and 3rd borders (both in Grid Row 1) have IsHitTestVisible=false Complete template below <BorderGapMaskConverter x:Key="GroupBoxBorderGapMaskConverter" /> <Style x:Key="{x:Type GroupBox}" TargetType="{x:Type GroupBox}"> <Setter Property="Control.BorderBrush" Value="#FFD5DFE5" /> <Setter Property="Control.BorderThickness" Value="1" /> <Setter Property="Control.Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GroupBox}"> <Grid SnapsToDevicePixels="True"> <Grid.ColumnDefinitions> <ColumnDefinition Width="6" /> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="6" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> <RowDefinition Height="6" /> </Grid.RowDefinitions> <Border Name="Header" Padding="3,1,3,0" Grid.Row="0" Grid.RowSpan="2" Grid.Column="1"> <ContentPresenter ContentSource="Header" RecognizesAccessKey="True" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}" /> </Border> <Border CornerRadius="4" Grid.Row="1" Grid.RowSpan="3" Grid.Column="0" Grid.ColumnSpan="4" BorderThickness="{TemplateBinding Control.BorderThickness}" BorderBrush="#00FFFFFF" Background="{TemplateBinding Control.Background}" IsHitTestVisible="False" /> <ContentPresenter Grid.Row="2" Grid.Column="1" Grid.ColumnSpan="2" Margin="{TemplateBinding Control.Padding}" SnapsToDevicePixels="{TemplateBinding UIElement.SnapsToDevicePixels}"/> <Border CornerRadius="4" Grid.Row="1" Grid.RowSpan="3" Grid.ColumnSpan="4" BorderThickness="{TemplateBinding Control.BorderThickness}" BorderBrush="#FFFFFFFF" IsHitTestVisible="False"> <UIElement.OpacityMask> <MultiBinding Converter="{StaticResource GroupBoxBorderGapMaskConverter}" ConverterParameter="7"> <Binding ElementName="Header" Path="ActualWidth" /> <Binding Path="ActualWidth" RelativeSource="{RelativeSource Self}" /> <Binding Path="ActualHeight" 
RelativeSource="{RelativeSource Self}" /> </MultiBinding> </UIElement.OpacityMask> <Border BorderThickness="{TemplateBinding Control.BorderThickness}" BorderBrush="{TemplateBinding Control.BorderBrush}" CornerRadius="3"> <Border BorderThickness="{TemplateBinding Control.BorderThickness}" BorderBrush="#FFFFFFFF" CornerRadius="2" /> </Border> </Border> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> A: It appears to be a subtle bug in the control template for the GroupBox. I found that by editing the default template for the GroupBox and moving the Border named 'Header' to be the last item in the control template's Grid element, the issue resolves itself. The reason is that one of the other Border elements with a TemplateBinding of BorderBrush was further down in the visual tree and was capturing the mouse click; that's why setting the BorderBrush to null allowed the CheckBox to correctly receive the mouse click. Below is the resulting style for the GroupBox. It is nearly identical to the default template for the control, except for the Border element named 'Header', which is now the last child of the Grid, rather than the second.
<BorderGapMaskConverter x:Key="BorderGapMaskConverter"/> <Style x:Key="GroupBoxStyle1" TargetType="{x:Type GroupBox}"> <Setter Property="BorderBrush" Value="#D5DFE5"/> <Setter Property="BorderThickness" Value="1"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GroupBox}"> <Grid SnapsToDevicePixels="true"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> <RowDefinition Height="6"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="6"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="6"/> </Grid.ColumnDefinitions> <Border Grid.Column="0" Grid.ColumnSpan="4" Grid.Row="1" Grid.RowSpan="3" Background="{TemplateBinding Background}" BorderBrush="Transparent" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="4"/> <ContentPresenter Margin="{TemplateBinding Padding}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" Grid.Column="1" Grid.ColumnSpan="2" Grid.Row="2"/> <Border Grid.ColumnSpan="4" Grid.Row="1" Grid.RowSpan="3" BorderBrush="White" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="4"> <Border.OpacityMask> <MultiBinding Converter="{StaticResource BorderGapMaskConverter}" ConverterParameter="7"> <Binding Path="ActualWidth" ElementName="Header"/> <Binding Path="ActualWidth" RelativeSource="{RelativeSource Self}"/> <Binding Path="ActualHeight" RelativeSource="{RelativeSource Self}"/> </MultiBinding> </Border.OpacityMask> <Border BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="3"> <Border BorderBrush="White" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="2"/> </Border> </Border> <Border x:Name="Header" Grid.Column="1" Grid.Row="0" Grid.RowSpan="2" Padding="3,1,3,0"> <ContentPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" ContentSource="Header" 
RecognizesAccessKey="True"/> </Border> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> A: An alternative solution I made is implementing OnApplyTemplate in a derived GroupBox: public override void OnApplyTemplate() { base.OnApplyTemplate(); if (Children.Count == 0) return; var grid = GetVisualChild(0) as Grid; if (grid != null && grid.Children.Count > 3) { var bd = grid.Children[3] as Border; if (bd != null) { bd.IsHitTestVisible = false; } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/151979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Server side virus scanning I need to scan uploaded files for viruses on a Linux server, but I'm not sure how to go about it. What are my options, if any? I'm also interested in how the scanners perform when multiple users send multiple files at the same time. A: If you're concerned about performance, consider using clamd/clamdscan as your implementation. clamd runs as a daemon so all the initialization costs are only done once. When you then scan a file with clamdscan, it just feeds the file to a forked clamd to do the actual scanning. If you have a ton of traffic it's much more efficient. If you have performance concerns beyond that, you should consider using a commercial product. Most of the big players have Linux/Unix versions these days. A: You should look into OPSWAT's MetaScan. This tool manages the updating and multiple-engine scanning of files. It bundles AVG, CA eTrust™, ClamWin, ESET NOD32 Antivirus Engine, MicroWorld eScan Engine, Norman Virus Control, and VirusBuster EDK. Additionally it will invoke the Nortons and such. The advantage is that you get multiple engines running against the file.
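The multi-engine idea above can be sketched with a small harness. To be clear, this is not MetaScan's API; the scanner command lines below are placeholder assumptions. The point is the aggregation rule: a single "infected" verdict condemns the file, and a scanner error is surfaced rather than silently treated as clean.

```python
# Sketch of fanning one uploaded file across several scanners and
# combining their verdicts. Command lines are placeholders; substitute
# whatever engines you actually have installed.
import subprocess

SCANNERS = [
    ["clamdscan", "--no-summary"],      # ClamAV: exits 1 on infection
    # ["other-engine", "--scan"],       # hypothetical second engine
]

def aggregate(verdicts):
    """One 'infected' verdict condemns the file; an 'error' without any
    'infected' means the result is inconclusive; otherwise it is clean."""
    if "infected" in verdicts:
        return "infected"
    if "error" in verdicts:
        return "error"
    return "clean"

def scan(path):
    verdicts = []
    for cmd in SCANNERS:
        rc = subprocess.call(cmd + [path])
        # ClamAV convention: 0 = clean, 1 = infected, anything else = error.
        verdicts.append({0: "clean", 1: "infected"}.get(rc, "error"))
    return aggregate(verdicts)
```

Running more engines only changes the contents of SCANNERS; the decision logic stays the same.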
A: Here are my results for ClamAV when tested against known viruses (the problem is, none of these should have passed):

+-----------+------------------------------+
| Results   | File                         |
+-----------+------------------------------+
| infected  | AdvancedXPFixerInstaller.exe |
| pass      | auto.exe                     |
| pass      | cartao.exe                   |
| infected  | cartoes_natal.exe            |
| pass      | codec.exe                    |
| pass      | e421.exe                     |
| pass      | fixtool.exe                  |
| infected  | flash_install.exe            |
| infected  | issj.exe                     |
| infected  | iwmdo.exe                    |
| infected  | jobxxc.exe                   |
| infected  | kbmt.exe                     |
| pass      | killer_cdj.exe               |
| pass      | killer_javqhc.exe            |
| infected  | killer_rodog.exe             |
| infected  | kl.exe                       |
| infected  | MacromediaFlash.exe          |
| infected  | MacromediaFlashPlayer.exe    |
| infected  | paraense.exe                 |
| infected  | pibzero.exe                  |
| pass      | scan.exe                     |
| pass      | uaqxtg.exe                   |
| pass      | vejkcfu.exe                  |
| infected  | VIDeoSS.exe                  |
| infected  | wujowpq.exe                  |
| pass      | X-IrCBOT.exe                 |
+-----------+------------------------------+

A: I would have a look at Clam AntiVirus. It provides a clamscan program that can scan a given file and return a pass/fail indication. It's free and automatically updates its database regularly. As for integrating such a product into your file upload process, that would be specific to whatever file upload process you actually use. A: You should try to find an anti-virus vendor that has a public API for its scanner. That way you can programmatically scan a file. It will make it much easier in the long run than trying to mess with other processes via your upload script. A: Have you run them through commercial scanners? I used to be an admin for a product that ran files through 4 commercial scanners in parallel. I had a test virus corpus of several hundred and none of the commercial scanners could find them all... A: Clamscan will scan files once they are stored, but it will not prevent an infected file from being uploaded or downloaded. I have a squid (https+cache) <-> HAVP (with ClamAV) <-> Tomcat reverse proxy setup. 
HAVP (http://www.server-side.de/) is a way to scan HTTP traffic through ClamAV or any other commercial antivirus software. It will prevent users from downloading infected files. Nevertheless, it does not scan uploads, so it will not prevent infected files from being stored on the server, but it will prevent them from being downloaded and thus propagated. So use it together with regular file scanning (e.g. clamscan).
{ "language": "en", "url": "https://stackoverflow.com/questions/152003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can currying be done in C++? What is currying? How can currying be done in C++? Please also explain the binders in the STL. A: Some great answers here. I thought I would add my own because it was fun to play around with the concept. Partial function application: The process of "binding" a function with only some of its parameters, deferring the rest to be filled in later. The result is another function with fewer parameters. Currying: A special form of partial function application where you can only "bind" a single argument at a time. The result is another function with exactly 1 fewer parameter. The code I'm about to present is partial function application from which currying is possible, but not the only possibility. It offers a few benefits over the above currying implementations (mainly because it's partial function application and not currying, heh). * *Applying over an empty function: auto sum0 = [](){return 0;}; std::cout << partial_apply(sum0)() << std::endl; *Applying multiple arguments at a time: auto sum10 = [](int a, int b, int c, int d, int e, int f, int g, int h, int i, int j){return a+b+c+d+e+f+g+h+i+j;}; std::cout << partial_apply(sum10)(1)(1,1)(1,1,1)(1,1,1,1) << std::endl; // 10 *constexpr support that allows for compile-time static_assert: static_assert(partial_apply(sum0)() == 0); *A useful error message if you accidentally go too far in providing arguments: auto sum1 = [](int x){ return x;}; partial_apply(sum1)(1)(1); error: static_assert failed "Attempting to apply too many arguments!" Other answers above return lambdas that bind an argument and then return further lambdas. This approach wraps that essential functionality into a callable object. Definitions for operator() allow the internal lambda to be called. Variadic templates allow us to check for someone going too far, and an implicit conversion function to the result type of the function call allows us to print the result or compare the object to a primitive. 
Code: namespace detail{ template<class F> using is_zero_callable = decltype(std::declval<F>()()); template<class F> constexpr bool is_zero_callable_v = std::experimental::is_detected_v<is_zero_callable, F>; } template<class F> struct partial_apply_t { template<class... Args> constexpr auto operator()(Args... args) { static_assert(sizeof...(args) == 0 || !is_zero_callable, "Attempting to apply too many arguments!"); auto bind_some = [=](auto... rest) -> decltype(myFun(args..., rest...)) { return myFun(args..., rest...); }; using bind_t = decltype(bind_some); return partial_apply_t<bind_t>{bind_some}; } explicit constexpr partial_apply_t(F fun) : myFun(fun){} constexpr operator auto() { if constexpr (is_zero_callable) return myFun(); else return *this; // a callable } static constexpr bool is_zero_callable = detail::is_zero_callable_v<F>; F myFun; }; Live Demo A few more notes: * *I chose to use is_detected mainly for enjoyment and practice; it serves the same as a normal type trait would here. 
*There could definitely be more work done to support perfect forwarding for performance reasons *The code is C++17 because it requires C++17's constexpr lambda support * *And it seems that GCC 7.0.1 is not quite there yet, either, so I used Clang 5.0.0 Some tests: auto sum0 = [](){return 0;}; auto sum1 = [](int x){ return x;}; auto sum2 = [](int x, int y){ return x + y;}; auto sum3 = [](int x, int y, int z){ return x + y + z; }; auto sum10 = [](int a, int b, int c, int d, int e, int f, int g, int h, int i, int j){return a+b+c+d+e+f+g+h+i+j;}; std::cout << partial_apply(sum0)() << std::endl; //0 static_assert(partial_apply(sum0)() == 0, "sum0 should return 0"); std::cout << partial_apply(sum1)(1) << std::endl; // 1 std::cout << partial_apply(sum2)(1)(1) << std::endl; // 2 std::cout << partial_apply(sum3)(1)(1)(1) << std::endl; // 3 static_assert(partial_apply(sum3)(1)(1)(1) == 3, "sum3 should return 3"); std::cout << partial_apply(sum10)(1)(1,1)(1,1,1)(1,1,1,1) << std::endl; // 10 //partial_apply(sum1)(1)(1); // fails static assert auto partiallyApplied = partial_apply(sum3)(1)(1); std::function<int(int)> finish_applying = partiallyApplied; std::cout << std::boolalpha << (finish_applying(1) == 3) << std::endl; // true auto plus2 = partial_apply(sum3)(1)(1); std::cout << std::boolalpha << (plus2(1) == 3) << std::endl; // true std::cout << std::boolalpha << (plus2(3) == 5) << std::endl; // true
Additionally, f(1,"some string",19.7f) can also be written using the curried version as curried_f(1)("some string")(19.7f). The return value of curried_f(1) on the other hand is just another function, that handles the next argument of f. In the end, we end up with a function or callable curried_f that fulfills the following equality: curried_f(first_arg)(second_arg)...(last_arg) == f(first_arg,second_arg,...,last_arg). 2. How can currying be achieved in C++? The following is a little bit more complicated, but works very well for me (using c++11)... It also allows currying of arbitrary degree like so: auto curried=curry(f)(arg1)(arg2)(arg3) and later auto result=curried(arg4)(arg5). Here it goes: #include <functional> namespace _dtl { template <typename FUNCTION> struct _curry; // specialization for functions with a single argument template <typename R,typename T> struct _curry<std::function<R(T)>> { using type = std::function<R(T)>; const type result; _curry(type fun) : result(fun) {} }; // recursive specialization for functions with more arguments template <typename R,typename T,typename...Ts> struct _curry<std::function<R(T,Ts...)>> { using remaining_type = typename _curry<std::function<R(Ts...)> >::type; using type = std::function<remaining_type(T)>; const type result; _curry(std::function<R(T,Ts...)> fun) : result ( [=](const T& t) { return _curry<std::function<R(Ts...)>>( [=](const Ts&...ts){ return fun(t, ts...); } ).result; } ) {} }; } template <typename R,typename...Ts> auto curry(const std::function<R(Ts...)> fun) -> typename _dtl::_curry<std::function<R(Ts...)>>::type { return _dtl::_curry<std::function<R(Ts...)>>(fun).result; } template <typename R,typename...Ts> auto curry(R(* const fun)(Ts...)) -> typename _dtl::_curry<std::function<R(Ts...)>>::type { return _dtl::_curry<std::function<R(Ts...)>>(fun).result; } #include <iostream> void f(std::string a,std::string b,std::string c) { std::cout << a << b << c; } int main() { curry(f)("Hello ")("functional 
")("world!"); return 0; } View output OK, as Samer commented, I should add some explanations as to how this works. The actual implementation is done in _dtl::_curry, while the template functions curry are only convenience wrappers. The implementation is recursive over the arguments of the std::function template argument FUNCTION. For a function with only a single argument, the result is identical to the original function. _curry(std::function<R(T,Ts...)> fun) : result ( [=](const T& t) { return _curry<std::function<R(Ts...)>>( [=](const Ts&...ts){ return fun(t, ts...); } ).result; } ) {} Here is the tricky thing: for a function with more arguments, we return a lambda whose argument is bound to the first argument of the call to fun. Finally, currying of the remaining N-1 arguments is delegated to the implementation of _curry<Ts...> with one less template argument. Update for C++14 / 17: A new idea to approach the problem of currying just came to me... With the introduction of if constexpr into C++17 (and with the help of void_t to determine if a function is fully curried), things seem to get a lot easier: template< class, class = std::void_t<> > struct needs_unapply : std::true_type { }; template< class T > struct needs_unapply<T, std::void_t<decltype(std::declval<T>()())>> : std::false_type { }; template <typename F> auto curry(F&& f) { /// Check if f() is a valid function call. If not we need /// to curry at least one argument: if constexpr (needs_unapply<decltype(f)>::value) { return [=](auto&& x) { return curry( [=](auto&&...xs) -> decltype(f(x,xs...)) { return f(x,xs...); } ); }; } else { /// If 'f()' is a valid call, just call it, we are done. return f(); } } int main() { auto f = [](auto a, auto b, auto c, auto d) { return a * b * c * d; }; return curry(f)(1)(2)(3)(4); } See the code in action here. With a similar approach, here is how to curry functions with an arbitrary number of arguments. 
The same idea seems to work out also in C++14, if we exchange the constexpr if with a template selection depending on the test needs_unapply<decltype(f)>::value: template <typename F> auto curry(F&& f); template <bool> struct curry_on; template <> struct curry_on<false> { template <typename F> static auto apply(F&& f) { return f(); } }; template <> struct curry_on<true> { template <typename F> static auto apply(F&& f) { return [=](auto&& x) { return curry( [=](auto&&...xs) -> decltype(f(x,xs...)) { return f(x,xs...); } ); }; } }; template <typename F> auto curry(F&& f) { return curry_on<needs_unapply<decltype(f)>::value>::template apply(f); } A: Currying is a way of reducing a function that takes multiple arguments into a sequence of nested functions with one argument each: full = (lambda a, b, c: (a + b + c)) print full (1, 2, 3) # print 6 # Curried style curried = (lambda a: (lambda b: (lambda c: (a + b + c)))) print curried (1)(2)(3) # print 6 Currying is nice because you can define functions that are simply wrappers around other functions with pre-defined values, and then pass around the simplified functions. C++ STL binders provide an implementation of this in C++. A: In short, currying takes a function f(x, y) and given a fixed Y, gives a new function g(x) where g(x) == f(x, Y) This new function may be called in situations where only one argument is supplied, and passes the call on to the original f function with the fixed Y argument. The binders in the STL allow you to do this for C++ functions. 
For example: #include <functional> #include <iostream> #include <vector> using namespace std; // declare a binary function object class adder: public binary_function<int, int, int> { public: int operator()(int x, int y) const { return x + y; } }; int main() { // initialise some sample data vector<int> a, b; a.push_back(1); a.push_back(2); a.push_back(3); // here we declare a function object f and try it out adder f; cout << "f(2, 3) = " << f(2, 3) << endl; // transform() expects a function with one argument, so we use // bind2nd to make a new function based on f, that takes one // argument and adds 5 to it transform(a.begin(), a.end(), back_inserter(b), bind2nd(f, 5)); // output b to see what we got cout << "b = [" << endl; for (vector<int>::iterator i = b.begin(); i != b.end(); ++i) { cout << " " << *i << endl; } cout << "]" << endl; return 0; } A: I implemented currying with variadic templates as well (see Julian's answer). However, I did not make use of recursion or std::function. Note: It uses a number of C++14 features. The provided example (main function) actually runs at compile time, proving that the currying method does not trump essential optimizations by the compiler. The code can be found here: https://gist.github.com/Garciat/c7e4bef299ee5c607948 with this helper file: https://gist.github.com/Garciat/cafe27d04cfdff0e891e The code still needs (a lot of) work, which I may or may not complete soon. Either way, I'm posting this here for future reference. Posting code in case links die (though they shouldn't): #include <type_traits> #include <tuple> #include <functional> #include <iostream> // --- template <typename FType> struct function_traits; template <typename RType, typename... 
ArgTypes> struct function_traits<RType(ArgTypes...)> { using arity = std::integral_constant<size_t, sizeof...(ArgTypes)>; using result_type = RType; template <size_t Index> using arg_type = typename std::tuple_element<Index, std::tuple<ArgTypes...>>::type; }; // --- namespace details { template <typename T> struct function_type_impl : function_type_impl<decltype(&T::operator())> { }; template <typename RType, typename... ArgTypes> struct function_type_impl<RType(ArgTypes...)> { using type = RType(ArgTypes...); }; template <typename RType, typename... ArgTypes> struct function_type_impl<RType(*)(ArgTypes...)> { using type = RType(ArgTypes...); }; template <typename RType, typename... ArgTypes> struct function_type_impl<std::function<RType(ArgTypes...)>> { using type = RType(ArgTypes...); }; template <typename T, typename RType, typename... ArgTypes> struct function_type_impl<RType(T::*)(ArgTypes...)> { using type = RType(ArgTypes...); }; template <typename T, typename RType, typename... ArgTypes> struct function_type_impl<RType(T::*)(ArgTypes...) const> { using type = RType(ArgTypes...); }; } template <typename T> struct function_type : details::function_type_impl<typename std::remove_cv<typename std::remove_reference<T>::type>::type> { }; // --- template <typename Args, typename Params> struct apply_args; template <typename HeadArgs, typename... Args, typename HeadParams, typename... Params> struct apply_args<std::tuple<HeadArgs, Args...>, std::tuple<HeadParams, Params...>> : std::enable_if< std::is_constructible<HeadParams, HeadArgs>::value, apply_args<std::tuple<Args...>, std::tuple<Params...>> >::type { }; template <typename... 
Params> struct apply_args<std::tuple<>, std::tuple<Params...>> { using type = std::tuple<Params...>; }; // --- template <typename TupleType> struct is_empty_tuple : std::false_type { }; template <> struct is_empty_tuple<std::tuple<>> : std::true_type { }; // ---- template <typename FType, typename GivenArgs, typename RestArgs> struct currying; template <typename FType, typename... GivenArgs, typename... RestArgs> struct currying<FType, std::tuple<GivenArgs...>, std::tuple<RestArgs...>> { std::tuple<GivenArgs...> given_args; FType func; template <typename Func, typename... GivenArgsReal> constexpr currying(Func&& func, GivenArgsReal&&... args) : given_args(std::forward<GivenArgsReal>(args)...), func(std::move(func)) { } template <typename... Args> constexpr auto operator() (Args&&... args) const& { using ParamsTuple = std::tuple<RestArgs...>; using ArgsTuple = std::tuple<Args...>; using RestArgsPrime = typename apply_args<ArgsTuple, ParamsTuple>::type; using CanExecute = is_empty_tuple<RestArgsPrime>; return apply(CanExecute{}, std::make_index_sequence<sizeof...(GivenArgs)>{}, std::forward<Args>(args)...); } template <typename... Args> constexpr auto operator() (Args&&... args) && { using ParamsTuple = std::tuple<RestArgs...>; using ArgsTuple = std::tuple<Args...>; using RestArgsPrime = typename apply_args<ArgsTuple, ParamsTuple>::type; using CanExecute = is_empty_tuple<RestArgsPrime>; return std::move(*this).apply(CanExecute{}, std::make_index_sequence<sizeof...(GivenArgs)>{}, std::forward<Args>(args)...); } private: template <typename... Args, size_t... Indices> constexpr auto apply(std::false_type, std::index_sequence<Indices...>, Args&&... 
args) const& { using ParamsTuple = std::tuple<RestArgs...>; using ArgsTuple = std::tuple<Args...>; using RestArgsPrime = typename apply_args<ArgsTuple, ParamsTuple>::type; using CurryType = currying<FType, std::tuple<GivenArgs..., Args...>, RestArgsPrime>; return CurryType{ func, std::get<Indices>(given_args)..., std::forward<Args>(args)... }; } template <typename... Args, size_t... Indices> constexpr auto apply(std::false_type, std::index_sequence<Indices...>, Args&&... args) && { using ParamsTuple = std::tuple<RestArgs...>; using ArgsTuple = std::tuple<Args...>; using RestArgsPrime = typename apply_args<ArgsTuple, ParamsTuple>::type; using CurryType = currying<FType, std::tuple<GivenArgs..., Args...>, RestArgsPrime>; return CurryType{ std::move(func), std::get<Indices>(std::move(given_args))..., std::forward<Args>(args)... }; } template <typename... Args, size_t... Indices> constexpr auto apply(std::true_type, std::index_sequence<Indices...>, Args&&... args) const& { return func(std::get<Indices>(given_args)..., std::forward<Args>(args)...); } template <typename... Args, size_t... Indices> constexpr auto apply(std::true_type, std::index_sequence<Indices...>, Args&&... args) && { return func(std::get<Indices>(std::move(given_args))..., std::forward<Args>(args)...); } }; // --- template <typename FType, size_t... 
Indices> constexpr auto curry(FType&& func, std::index_sequence<Indices...>) { using RealFType = typename function_type<FType>::type; using FTypeTraits = function_traits<RealFType>; using CurryType = currying<FType, std::tuple<>, std::tuple<typename FTypeTraits::template arg_type<Indices>...>>; return CurryType{ std::move(func) }; } template <typename FType> constexpr auto curry(FType&& func) { using RealFType = typename function_type<FType>::type; using FTypeArity = typename function_traits<RealFType>::arity; return curry(std::move(func), std::make_index_sequence<FTypeArity::value>{}); } // --- int main() { auto add = curry([](int a, int b) { return a + b; }); std::cout << add(5)(10) << std::endl; } A: These Links are relevant: The Lambda Calculus page on Wikipedia has a clear example of currying http://en.wikipedia.org/wiki/Lambda_calculus#Motivation This paper treats currying in C/C++ http://asg.unige.ch/site/papers/Dami91a.pdf A: Simplifying Gregg's example, using tr1: #include <functional> using namespace std; using namespace std::tr1; using namespace std::tr1::placeholders; int f(int, int); .. int main(){ function<int(int)> g = bind(f, _1, 5); // g(x) == f(x, 5) function<int(int)> h = bind(f, 2, _1); // h(x) == f(2, x) function<int(int,int)> j = bind(g, _2); // j(x,y) == g(y) } Tr1 functional components allow you to write rich functional-style code in C++. As well, C++0x will allow for in-line lambda functions to do this as well: int f(int, int); .. int main(){ auto g = [](int x){ return f(x,5); }; // g(x) == f(x, 5) auto h = [](int x){ return f(2,x); }; // h(x) == f(2, x) auto j = [](int x, int y){ return g(y); }; // j(x,y) == g(y) } And while C++ doesn't provide the rich side-effect analysis that some functional-oriented programming languages perform, const analysis and C++0x lambda syntax can help: struct foo{ int x; int operator()(int y) const { x = 42; // error! const function can't modify members } }; .. 
int main(){ int x; auto f = [](int y){ x = 42; }; // error! lambdas don't capture by default. } Hope that helps. A: Have a look at Boost.Bind which makes the process shown by Greg more versatile: transform(a.begin(), a.end(), back_inserter(b), bind(f, _1, 5)); This binds 5 to f's second argument. It’s worth noting that this is not currying (instead, it’s partial application). However, using currying in a general way is hard in C++ (in fact, it only recently became possible at all) and partial application is often used instead. A: Other answers nicely explain binders, so I won't repeat that part here. I will only demonstrate how currying and partial application can be done with lambdas in C++0x. Code example: (Explanation in comments) #include <iostream> #include <functional> using namespace std; const function<int(int, int)> & simple_add = [](int a, int b) -> int { return a + b; }; const function<function<int(int)>(int)> & curried_add = [](int a) -> function<int(int)> { return [a](int b) -> int { return a + b; }; }; int main() { // Demonstrating simple_add cout << simple_add(4, 5) << endl; // prints 9 // Demonstrating curried_add cout << curried_add(4)(5) << endl; // prints 9 // Create a partially applied function from curried_add const auto & add_4 = curried_add(4); cout << add_4(5) << endl; // prints 9 } A: If you're using C++14 it's very easy: template<typename Function, typename... Arguments> auto curry(Function function, Arguments... args) { return [=](auto... rest) { return function(args..., rest...); }; // don't forget the semicolon } You can then use it like this: auto add = [](auto x, auto y) { return x + y; }; // curry 4 into add auto add4 = curry(add, 4); add4(6); // 10 A: C++20 provides bind_front for doing currying. 
For older C++ version it can be implemented (for single argument) as follows: template <typename TFunc, typename TArg> class CurryT { private: TFunc func; TArg arg ; public: template <typename TFunc_, typename TArg_> CurryT(TFunc_ &&func, TArg_ &&arg) : func(std::forward<TFunc_>(func)) , arg (std::forward<TArg_ >(arg )) {} template <typename... TArgs> auto operator()(TArgs &&...args) const -> decltype( func(arg, std::forward<TArgs>(args)...) ) { return func(arg, std::forward<TArgs>(args)...); } }; template <typename TFunc, typename TArg> CurryT<std::decay_t<TFunc>, std::remove_cv_t<TArg>> Curry(TFunc &&func, TArg &&arg) { return {std::forward<TFunc>(func), std::forward<TArg>(arg)}; } https://coliru.stacked-crooked.com/a/82856e39da5fa50d void Abc(std::string a, int b, int c) { std::cerr << a << b << c << std::endl; } int main() { std::string str = "Hey"; auto c1 = Curry(Abc, str); std::cerr << "str: " << str << std::endl; c1(1, 2); auto c2 = Curry(std::move(c1), 3); c2(4); auto c3 = Curry(c2, 5); c3(); } Output: str: Hey12 Hey34 Hey35 If you use long chains of currying then std::shared_ptr optimization can be used to avoid copying all previous curried parameters to each new carried function. template <typename TFunc> class SharedFunc { public: struct Tag{}; // For avoiding shadowing copy/move constructors with the // templated constructor below which accepts any parameters. template <typename... TArgs> SharedFunc(Tag, TArgs &&...args) : p_func( std::make_shared<TFunc>(std::forward<TArgs>(args)...) ) {} template <typename... TArgs> auto operator()(TArgs &&...args) const -> decltype( (*p_func)(std::forward<TArgs>(args)...) 
) { return (*p_func)(std::forward<TArgs>(args)...); } private: std::shared_ptr<TFunc> p_func; }; template <typename TFunc, typename TArg> SharedFunc< CurryT<std::decay_t<TFunc>, std::remove_cv_t<TArg>> > CurryShared(TFunc &&func, TArg &&arg) { return { {}, std::forward<TFunc>(func), std::forward<TArg>(arg) }; } https://coliru.stacked-crooked.com/a/6e71f41e1cc5fd5c
{ "language": "en", "url": "https://stackoverflow.com/questions/152005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: change mime type of output in php I've got a php script. Most of the time the script returns html, which is working fine, but on one occasion (parameter ?Format=XML) the script returns XML instead of HTML. Is there any way to change the returned mime type of the php output on the fly from text/html to text/xml or application/xml? A: You should send a Content-Type header before you send any output. header('Content-Type: text/xml'); A: header('Content-Type: application/xml; charset=utf-8'); You can add encoding as well in the same line. I added utf-8, which is most common. A: I will answer the update, since the previous answers are good. I have read that Internet Explorer is well known for ignoring MIME type headers (most of the time?) and relying on the content of the file instead (which can cause problems in some cases). Mmm, I did a simple test: <?php header('Content-Type: text/xml'); echo '<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <root><foo a="b">Tada</foo></root>'; ?> Internet Explorer 6 displays it correctly as XML. Even if I remove the xml declaration. You should indicate which version is problematic. Actually, as I wrote above, with IE (6 at least), you don't even need a content-type, it recognizes XML data and displays it as a tree. Is your XML correct? [Update] Tried with IE7 as well, adding ?format=xml too, still displaying XML correctly. If I send malformed XML, IE displays an error. Tested on WinXP Pro SP2+ A: Set the Content-Type header: header('Content-Type: text/xml'); Though you should probably use "application/xml" instead. A: header('Content-type: application/xml'); More information available at the PHP documentation for header() A: I just used the following: NOTE: I am using the "i" (mysqli, MySQL Improved) extension. 
Start XML file, echo parent node header("Content-type: text/xml"); echo "<?xml version='1.0' encoding='UTF-8'?>"; echo "<markers>"; Iterate through the rows, printing XML nodes for each while ($row = @mysqli_fetch_assoc($results)){ // Add to XML document node echo '<marker '; echo 'id="' . $ind . '" '; echo 'name="' . parseToXML($row['name']) . '" '; echo 'address="' . parseToXML($row['address']) . '" '; echo 'lat="' . $row['lat'] . '" '; echo 'lng="' . $row['lng'] . '" '; echo 'type="' . $row['type'] . '" '; echo '/>'; } // End XML file echo "</markers>";
{ "language": "en", "url": "https://stackoverflow.com/questions/152006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: .NET Micro Framework on an ARM Cortex-M3 Core I have an RDK-IDM from Luminary Micro. This board has a 32-bit ARM® Cortex™-M3 core. Has anybody tried to run a .NET Micro Framework application on such a device? A: I don't have any hands-on experience, but based on http://www.microsoft.com/netmf/about/gettingstarted.mspx the smallest footprint supported is 64KB RAM and 256KB Flash, and an MMU is not required. Therefore your application's needs would be the determining factor. FYI: the .NET Micro Framework was released as Open Source under the Apache 2.0 License on November 16, 2009. A: 
{ "language": "en", "url": "https://stackoverflow.com/questions/152015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Detecting CPU architecture compile-time What is the most reliable way to find out CPU architecture when compiling C or C++ code? As far as I can tell, different compilers have their own set of non-standard preprocessor definitions (_M_X86 in MSVS, __i386__, __arm__ in GCC, etc). Is there a standard way to detect the architecture I'm building for? If not, is there a source for a comprehensive list of such definitions for various compilers, such as a header with all the boilerplate #ifdefs? A: If you want a cross-compiler solution then just use Boost.Predef which contains * *BOOST_ARCH_ for system/CPU architecture one is compiling for. *BOOST_COMP_ for the compiler one is using. *BOOST_LANG_ for language standards one is compiling against. *BOOST_LIB_C_ and BOOST_LIB_STD_ for the C and C++ standard library in use. *BOOST_OS_ for the operating system we are compiling to. *BOOST_PLAT_ for platforms on top of operating system or compilers. *BOOST_ENDIAN_ for endianness of the os and architecture combination. *BOOST_HW_ for hardware specific features. *BOOST_HW_SIMD for SIMD (Single Instruction Multiple Data) detection. 
Note that although Boost is usually thought of as a C++ library, Boost.Predef is pure header-only and works for C. For example #include <boost/predef.h> // or just include the necessary headers // #include <boost/predef/architecture.h> // #include <boost/predef/other.h> #if BOOST_ARCH_X86 #if BOOST_ARCH_X86_64 std::cout << "x86-64\n"; #elif BOOST_ARCH_X86_32 std::cout << "x86-32\n"; #else std::cout << "x86-" << BOOST_ARCH_WORD_BITS << '\n'; // Probably x86-16 #endif #elif BOOST_ARCH_ARM #if BOOST_ARCH_ARM >= BOOST_VERSION_NUMBER(8, 0, 0) #if BOOST_ARCH_WORD_BITS == 64 std::cout << "ARMv8+ Aarch64\n"; #elif BOOST_ARCH_WORD_BITS == 32 std::cout << "ARMv8+ Aarch32\n"; #else std::cout << "Unexpected ARMv8+ " << BOOST_ARCH_WORD_BITS << "bit\n"; #endif #elif BOOST_ARCH_ARM >= BOOST_VERSION_NUMBER(7, 0, 0) std::cout << "ARMv7 (ARM32)\n"; #elif BOOST_ARCH_ARM >= BOOST_VERSION_NUMBER(6, 0, 0) std::cout << "ARMv6 (ARM32)\n"; #else std::cout << "ARMv5 or older\n"; #endif #elif BOOST_ARCH_MIPS #if BOOST_ARCH_WORD_BITS == 64 std::cout << "MIPS64\n"; #else std::cout << "MIPS32\n"; #endif #elif BOOST_ARCH_PPC_64 std::cout << "PPC64\n"; #elif BOOST_ARCH_PPC std::cout << "PPC32\n"; #else std::cout << "Unknown " << BOOST_ARCH_WORD_BITS << "-bit arch\n"; #endif You can find out more on how to use it here. Demo on Godbolt A: Enjoy, I was the original author of this. extern "C" { const char *getBuild() { //Get current architecture, detects nearly every architecture. 
Coded by Freak #if defined(__x86_64__) || defined(_M_X64) return "x86_64"; #elif defined(i386) || defined(__i386__) || defined(__i386) || defined(_M_IX86) return "x86_32"; #elif defined(__ARM_ARCH_2__) return "ARM2"; #elif defined(__ARM_ARCH_3__) || defined(__ARM_ARCH_3M__) return "ARM3"; #elif defined(__ARM_ARCH_4T__) || defined(__TARGET_ARM_4T) return "ARM4T"; #elif defined(__ARM_ARCH_5__) || defined(__ARM_ARCH_5E__) return "ARM5"; #elif defined(__ARM_ARCH_6T2__) return "ARM6T2"; #elif defined(__ARM_ARCH_6__) || defined(__ARM_ARCH_6J__) || defined(__ARM_ARCH_6K__) || defined(__ARM_ARCH_6Z__) || defined(__ARM_ARCH_6ZK__) return "ARM6"; #elif defined(__ARM_ARCH_7__) return "ARM7"; #elif defined(__ARM_ARCH_7A__) return "ARM7A"; #elif defined(__ARM_ARCH_7R__) return "ARM7R"; #elif defined(__ARM_ARCH_7M__) return "ARM7M"; #elif defined(__ARM_ARCH_7S__) return "ARM7S"; #elif defined(__aarch64__) || defined(_M_ARM64) return "ARM64"; #elif defined(mips) || defined(__mips__) || defined(__mips) return "MIPS"; #elif defined(__sh__) return "SUPERH"; #elif defined(__PPC64__) || defined(__ppc64__) || defined(__powerpc64__) || defined(_ARCH_PPC64) return "POWERPC64"; #elif defined(__powerpc) || defined(__powerpc__) || defined(__POWERPC__) || defined(__ppc__) || defined(__PPC__) || defined(_ARCH_PPC) return "POWERPC"; #elif defined(__sparc__) || defined(__sparc) return "SPARC"; #elif defined(__m68k__) return "M68K"; #else return "UNKNOWN"; #endif } } A: There's no inter-compiler standard, but each compiler tends to be quite consistent.
You can build a header for yourself that's something like this: #ifdef _MSC_VER #ifdef _M_IX86 #define ARCH_X86 #endif #endif #ifdef __GNUC__ #ifdef __i386__ #define ARCH_X86 #endif #endif There's not much point to a comprehensive list, because there are thousands of compilers but only 3-4 in widespread use (Microsoft C++, GCC, Intel CC, maybe TenDRA?). Just decide which compilers your application will support, list their #defines, and update your header as needed. A: There's a list of the #defines here. There was a previous highly voted answer that included this link but it was deleted by a mod presumably due to SO's "answers must have code" rule. So here's a random sample. Follow the link for the full list. AMD64: __amd64__, __amd64, __x86_64__, __x86_64 (defined by GNU C and Sun Studio); _M_X64, _M_AMD64 (defined by Visual Studio). A: If you would like to dump all available features on a particular platform, you could run GCC like: gcc -march=native -dM -E - </dev/null It would dump macros like #define __SSE3__ 1, #define __AES__ 1, etc. A: If you need fine-grained detection of CPU features, the best approach is to also ship a CPUID program which outputs to stdout or some "cpu_config.h" file the set of features supported by the CPU. Then you integrate that program with your build process.
{ "language": "en", "url": "https://stackoverflow.com/questions/152016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Will a future version of .NET support tuples in C#? .NET 3.5 doesn't support tuples. Too bad, but I'm not sure whether a future version of .NET will support tuples or not. A: I've just read this article from the MSDN Magazine: Building Tuple Here are excerpts: The upcoming 4.0 release of Microsoft .NET Framework introduces a new type called System.Tuple. System.Tuple is a fixed-size collection of heterogeneously typed data. Like an array, a tuple has a fixed size that can't be changed once it has been created. Unlike an array, each element in a tuple may be a different type, and a tuple is able to guarantee strong typing for each element. There is already one example of a tuple floating around the Microsoft .NET Framework, in the System.Collections.Generic namespace: KeyValuePair. While KeyValuePair can be thought of as the same as Tuple, since they are both types that hold two things, KeyValuePair feels different from Tuple because it evokes a relationship between the two values it stores (and with good reason, as it supports the Dictionary class). Furthermore, tuples can be arbitrarily sized, whereas KeyValuePair holds only two things: a key and a value. While some languages like F# have special syntax for tuples, you can use the new common tuple type from any language. Revisiting the first example, we can see that while useful, tuples can be overly verbose in languages without syntax for a tuple: class Program { static void Main(string[] args) { Tuple<string, int> t = new Tuple<string, int>("Hello", 4); PrintStringAndInt(t.Item1, t.Item2); } static void PrintStringAndInt(string s, int i) { Console.WriteLine("{0} {1}", s, i); } } Using the var keyword from C# 3.0, we can remove the type signature on the tuple variable, which allows for somewhat more readable code.
var t = new Tuple<string, int>("Hello", 4); We've also added some factory methods to a static Tuple class which makes it easier to build tuples in a language that supports type inference, like C#. var t = Tuple.Create("Hello", 4); A: #region tuples public class Tuple<T> { public Tuple(T first) { First = first; } public T First { get; set; } } public class Tuple<T, T2> : Tuple<T> { public Tuple(T first, T2 second) : base(first) { Second = second; } public T2 Second { get; set; } } public class Tuple<T, T2, T3> : Tuple<T, T2> { public Tuple(T first, T2 second, T3 third) : base(first, second) { Third = third; } public T3 Third { get; set; } } public class Tuple<T, T2, T3, T4> : Tuple<T, T2, T3> { public Tuple(T first, T2 second, T3 third, T4 fourth) : base(first, second, third) { Fourth = fourth; } public T4 Fourth { get; set; } } #endregion And to make declarations prettier: public static class Tuple { //Allows Tuple.New(1, "2") instead of new Tuple<int, string>(1, "2") public static Tuple<T1, T2> New<T1, T2>(T1 t1, T2 t2) { return new Tuple<T1, T2>(t1, t2); } //etc... } A: C# 7 supports tuples natively: var unnamedTuple = ("Peter", 29); var namedTuple = (Name: "Peter", Age: 29); (string Name, double Age) typedTuple = ("Peter", 29); A: C# supports simple tuples via generics quite easily (as per an earlier answer), and with "mumble typing" (one of many possible C# language enhancements) to improve type inference they could be very, very powerful. For what it is worth, F# supports tuples natively, and having played with it, I'm not sure that (anonymous) tuples add much... what you gain in brevity you lose very quickly in code clarity. For code within a single method, there are anonymous types; for code going outside of a method, I think I'll stick to simple named types. Of course, if a future C# makes it easier to make these immutable (while still easy to work with) I'll be happy. 
A: My open source .NET Sasa library has had tuples for years (along with plenty of other functionality, like full MIME parsing). I've been using it in production code for a good few years now. A: Here's my set of tuples; they're autogenerated by a Python script, so I've perhaps gone a bit overboard: Link to Subversion repository You'll need a username/password, they're both guest. They are based on inheritance, but Tuple<Int32,String> will not compare equal to Tuple<Int32,String,Boolean> even if they happen to have the same values for the first two members. They also implement GetHashCode and ToString and so forth, and lots of smallish helper methods. Example of usage: Tuple<Int32, String> t1 = new Tuple<Int32, String>(10, "a"); Tuple<Int32, String, Boolean> t2 = new Tuple<Int32, String, Boolean>(10, "a", true); if (t1.Equals(t2)) Console.Out.WriteLine(t1 + " == " + t2); else Console.Out.WriteLine(t1 + " != " + t2); Will output: 10, a != 10, a, True A: There is a proper (not quick) C# Tuple implementation in Lokad Shared Libraries (Open-source, of course) that includes the following required features: * *2-5 immutable tuple implementations *Proper DebuggerDisplayAttribute *Proper hashing and equality checks *Helpers for generating tuples from the provided parameters (generics are inferred by compiler) and extensions for collection-based operations. *production-tested. A: Implementing Tuple classes or reusing F# classes within C# is only half the story - these give you the ability to create tuples with relative ease, but not really the syntactic sugar which makes them so nice to use in languages like F#.
For example in F# you can use pattern matching to extract both parts of a tuple within a let statement, e.g. let (a, b) = someTupleFunc Unfortunately to do the same using the F# classes from C# would be much less elegant: Tuple<int,int> x = someTupleFunc(); int a = x.get_Item1(); int b = x.get_Item2(); Tuples represent a powerful method for returning multiple values from a function call without the need to litter your code with throwaway classes, or resorting to ugly ref or out parameters. However, in my opinion, without some syntactic sugar to make their creation and access more elegant, they are of limited use. A: In my opinion, the anonymous types feature is not a tuple, but a very similar construct. The output of some LINQ queries is a collection of anonymous types, which behave like tuples. Here is a statement which creates a typed tuple :-) on the fly: var p1 = new {a = "A", b = 3}; see: http://www.developer.com/net/csharp/article.php/3589916 A: If I remember my Computer Science classes correctly, tuples are just data. If you want grouped data - create classes that contain properties. If you need something like the KeyValuePair then there it is. A: I'd be surprised - C# is a strongly-typed language, whereas tuples are suited for more dynamically typed languages. C# has been drifting more dynamic as time goes on, but that's syntactic sugar, not a real shift in the underlying data types. If you want two values in one instance, a KeyValuePair<> is a decent substitute, albeit clumsy. You can also make a struct or a class that'll do the same thing, and is expandable. A: To make these useful in a hashtable or dictionary, you will likely want to provide overloads for GetHashCode and Equals.
{ "language": "en", "url": "https://stackoverflow.com/questions/152019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Deploying Websphere Portals onto a 6.1 Server Any time I try to publish my Portal project on a Websphere Portal 6.1 Server, I get the following error message: Portal project publishing is not supported on WebSphere Portal v6.1 Server Is that really true or have I done something wrong? I'm trying to deploy a portal project, with the underlying goal of publishing a new theme. Unfortunately, any time I try to deploy, I get the error message listed above from the IDE and no errors in the console. The RAD version is 7.0.0.7. A: Not sure this will help, but: Limitation: Although the WebSphere Portal installer contains an advanced option to install an empty portal, Portal Designer relies on administration portlets for setting access control; therefore, publishing a portal project to an empty portal is not supported. Are you trying to deploy a portlet or an entire portal project? What version of RAD are you using? Any other information in the error log? (Both within RAD & on WP server) Is the error message posted verbatim? A: It is possible to deploy such a project to a WebSphere Portal. I use v6.2 and deploy portlets, which are parts of a big Portal project, every day. I'm only a beginner with this stuff, but I can say that WebSphere is really buggy. There is a big difference between running a deployed app "locally, in a workspace" and "on a server". I suggest you experiment with the server options in the administrative console.
{ "language": "en", "url": "https://stackoverflow.com/questions/152022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: WSDL validator? Is there any online service available to validate a Web Service WSDL file? A: If you're using Eclipse, just have your WSDL in a .wsdl file, and Eclipse will validate it automatically. From the Doc The WSDL validator handles validation according to the 4 step process defined above. Steps 1 and 2 are both delegated to Apache Xerces (an XML parser). Step 3 is handled by the WSDL validator and any extension namespace validators (more on extensions below). Step 4 is handled by any declared custom validators (more on this below as well). Each step must pass in order for the next step to run. A: You can try using one of their tools: http://www.ws-i.org/deliverables/workinggroup.aspx?wg=testingtools These will check both WSDL validity and Basic Profile 1.1 compliance. A: If you would like to validate WSDL programmatically, then you can use the WSDL Validator outside of Eclipse. http://wiki.eclipse.org/Using_the_WSDL_Validator_Outside_of_Eclipse should help, or try this tool: Graphical WSDL 1.1/2.0 editor. A: You might want to look at the online version of xsv A: You can try out the WSDL validator at http://docs.wso2.org/wiki/display/ESB451/WSDL+Validator
{ "language": "en", "url": "https://stackoverflow.com/questions/152023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: How to select all users who made more than 10 submissions I have a submission table that is very simple: userId, submissionGuid I want to select the username (simple inner join to get it) of all the users who have more than 10 submissions in the table. I would do this with embedded queries and a group by to count submissions... but is there a better way of doing it (without embedded queries)? Thanks! A: This is the simplest way, I believe: select userId from submission group by userId having count(submissionGuid) > 10 A: select userId, count(*) from submissions group by userId having count(*) > 10 A: SELECT username FROM usertable JOIN submissions ON usertable.userid = submissions.userid GROUP BY usertable.username HAVING Count(*) > 10 *Assuming that your "Users" table is called usertable and that it has a column called "UserName" A: I think the correct query is this (SQL Server): SELECT s.userId, u.userName FROM submission s INNER JOIN users u on u.userId = s.userId GROUP BY s.userId, u.username HAVING COUNT(submissionGuid) > 10 If you don't have the HAVING clause: SELECT u.userId, u.userName FROM users u INNER JOIN ( SELECT userId, COUNT(submissionGuid) AS cnt FROM submission GROUP BY userId ) sc ON sc.userId = u.userId WHERE sc.cnt > 10 A: select userid, count(submissionGUID) as submitCount from Submissions group by userid having count(submissionGUID) > 10
{ "language": "en", "url": "https://stackoverflow.com/questions/152024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Are there any OK image recognition libraries for .NET? I want to be able to compare an image taken from a webcam to an image stored on my computer. The library doesn't need to be one hundred percent accurate as it won't be used in anything mission critical (e.g. police investigation), I just want something OK I can work with. I have tried a demonstration project for Image Recognition from CodeProject, and it only works with small images / doesn't work at all when I compare the exact same image at 120x90 pixels (this is not classified as OK :P ). Has there been any success with image recognition before? If so, would you be able to provide a link to a library I could use in either C# or VB.NET? A: You could try this: http://code.google.com/p/aforge/ It includes a comparison analysis that will give you a score. There are many other great imaging features of all types included as well. // The class also can be used to get the similarity level between two images of the same size, which can be useful to get information about how different/similar the images are: // Create template matching algorithm's instance // Use zero similarity to make sure algorithm will provide anything ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0); // Compare two images TemplateMatch[] matchings = tm.ProcessImage( image1, image2 ); // Check similarity level if (matchings[0].Similarity > 0.95) { // Do something with quite similar images } A: I did it simply. Just download the EyeOpen library here. Then use it in your C# class and write something like this: using EyeOpen.Imaging.Processing; ComparableImage cc; ComparableImage pc; double sim; void compare(object sender, EventArgs e) { pc = new ComparableImage(new FileInfo(files)); cc = new ComparableImage(new FileInfo(file)); sim = pc.CalculateSimilarity(cc); int sim2 = (int)(sim * 100); MessageBox.Show(sim2 + "% similar"); } A: You can also use EmguCV for .NET.
{ "language": "en", "url": "https://stackoverflow.com/questions/152028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Structuring projects & dependencies of large winforms applications in C# UPDATE: This is one of my most-visited questions, and yet I still haven't really found a satisfactory solution for my project. One idea I read in an answer to another question is to create a tool which can build solutions 'on the fly' for projects that you pick from a list. I have yet to try that though. How do you structure a very large application? * *Multiple smallish projects/assemblies in one big solution? *A few big projects? *One solution per project? And how do you manage dependencies in the case where you don't have one solution. Note: I'm looking for advice based on experience, not answers you found on Google (I can do that myself). I'm currently working on an application which has upward of 80 dlls, each in its own solution. Managing the dependencies is almost a full time job. There is a custom in-house 'source control' with added functionality for copying dependency dlls all over the place. Seems like a sub-optimum solution to me, but is there a better way? Working on a solution with 80 projects would be pretty rough in practice, I fear. (Context: winforms, not web) EDIT: (If you think this is a different question, leave me a comment) It seems to me that there are interdependencies between: * *Project/Solution structure for an application *Folder/File structure *Branch structure for source control (if you use branching) But I have great difficulty separating these out to consider them individually, if that is even possible. I have asked another related question here. A: Source Control We have 20 or 30 projects being built into 4 or 5 discrete solutions. We are using Subversion for SCM. 1) We have one tree in SVN containing all the projects organised logically by namespace and project name. There is a .sln at the root that will build them all, but that is not a requirement. 
2) For each actual solution we have a new trunks folder in SVN with SVN:External references to all the required projects so that they get updated from their locations under the main tree. 3) In each solution is the .sln file plus a few other required files, plus any code that is unique to that solution and not shared across solutions. Having many smaller projects is a bit of a pain at times (for example the TortoiseSVN update messages get messy with all those external links) but does have the huge advantage that dependencies are not allowed to be circular, so our UI projects depend on the BO projects but the BO projects cannot reference the UI (and nor should they!). Architecture We have completely switched over to using MS SCSF and CAB enterprise pattern to manage the way our various projects combine and interact in a Win Forms interface. I am unsure if you have the same problems (multiple modules need to share space in a common forms environment) but if you do then this may well bring some sanity and convention to how you architect and assemble your solutions. I mention that because SCSF tends to merge BO and UI type functions into the same module, whereas previously we maintained a strict 3 level policy: FW - Framework code. Code whose function relates to software concerns. BO - Business Objects. Code whose function relates to problem domain concerns. UI - Code which relates to the UI. In that scenario dependencies are strictly UI -> BO -> FW We have found that we can maintain that structure even while using SCSF generated modules so all is good in the world :-) A: To manage dependencies, whatever the number of assemblies/namespaces/projects you have, you can have a glance at the tool NDepend. Personally, I favor a few large projects, within one or several solutions if needed.
I wrote about my motivations to do so here: Benefit from the C# and VB.NET compilers perf A: I think it's quite important that you have a solution that contains all your 80 projects, even if most developers use other solutions most of the time. In my experience, I tend to work with one large solution, but to avoid the pain of rebuilding all the projects each time I hit F5, I go to Solution Explorer, right-click on the projects I'm not interested in right now, and do "Unload Project". That way, the project stays in the solution but it doesn't cost me anything. Having said that, 80 is a large number. Depending on how well those 80 break down into discrete subsystems, I might also create other solution files that each contain a meaningful subset. That would save me the effort of lots of right-click/Unload operations. Nevertheless, the fact that you'd have one big solution means there's always a definitive view of their inter-dependencies. In all the source control systems that I've worked with, their VS integration chooses to put the .sln file in source control, and many don't work properly unless that .sln file is in source control. I find that intriguing, since the .sln file used to be considered a personal thing, rather than a project-wide thing. I think the only kind of .sln file that definitely merits source control is the "one-big-solution" that contains all projects. You can use it for automated builds, for example. As I said, individuals might create their own solutions for convenience, and I'm not against those going into source control, but they're more meaningful to individuals than to the project. A: I think the best solution is to break it into smaller solutions. At the company I currently work for, we have the same problem; 80+ projects in one solution. What we have done is to split into several smaller solutions with projects belonging together.
Dependent dll's from other projects are built and linked into the project and checked into the source control system together with the project. It uses more disk space, but disk is cheap. Doing it this way, we can stay with version 1 of a project until upgrading to version 1.5 is absolutely necessary. You still have the job of adding dll's when deciding to upgrade to another version of the dll though. There is a project on google code called TreeFrog that shows how to structure the solution and development tree. It doesn't contain much documentation yet, but I guess you can get an idea of how to do it by looking at the structure. A: A method that I've seen work well is having one big solution which contains all the projects, allowing a project-wide build to be tested (no one really used this to build on though, as it was too big), and then having smaller projects for developers to use which had various related projects grouped together. These did have dependencies on other projects but, unless the interfaces changed, or they needed to update the version of the dll they were using, they could continue to use the smaller projects without worrying about everything else. Thus they could check in projects while they were working on them, and then pin them (after changing the version number) when other users should start using them. Finally, once or twice a week or even more frequently, the entire solution was rebuilt using pinned code only, thus checking if the integration was working correctly and giving testers a good build to test against. We often found that huge sections of code didn't change frequently, so it was pointless loading it all the time. (When you're working on the smaller projects.) Another advantage of using this approach is that in certain cases we had pieces of functionality which took months to complete; using the above approach meant this could continue without interrupting other streams of work.
I guess one key criterion for this is not having lots of cross dependencies all over your solutions. If you do, this approach might not be appropriate; if, however, the dependencies are more limited, then this might be the way to go. A: For a couple of systems I've worked on we had different solutions for different components. Each solution had a common Output folder (with Debug and Release sub-folders). We used project references within a solution and file references between them. Each project used Reference Paths to locate the assemblies from other solutions. We had to manually edit the .csproj.user files to add a $(Configuration) msbuild variable to the reference paths as VS insists on validating the path. For builds outside of VS I've written msbuild scripts that recursively identify project dependencies, fetch them from subversion and build them. A: I gave up on project references (although your macros sound wonderful) for the following reasons: * *It wasn't easy to switch between different solutions where sometimes dependency projects existed and sometimes didn't. *Needed to be able to open the project by itself and build it, and deploy it independently from other projects. If built with project references, this sometimes caused issues with deployment, because a project reference caused it to look for a specific version or higher, or something like that. It limited the mix and match ability to swap in and out different versions of dependencies. *Also, I had projects pointing to different .NET Framework versions, and so a true project reference wasn't always happening anyways. (FYI, everything I have done is for VB.NET, so not sure if any subtle difference in behavior for C#) So, I: * *I build against any project that is open in the solution, and those that aren't, from a global folder, like C:\GlobalAssemblies *My continuous integration server keeps this up to date on a network share, and I have a batch file to sync anything new to my local folder.
*I have another local folder like C:\GlobalAssembliesDebug where each project has a post build step that copies its bin folder's contents to this debug folder, only when in DEBUG mode. *Each project has these two global folders added to their reference paths. (First the C:\GlobalAssembliesDebug, and then C:\GlobalAssemblies). I have to manually add these reference paths to the .vbproj files, because Visual Studio's UI adds them to the .vbproj.user file instead. *I have a pre-build step that, if in RELEASE mode, deletes the contents from C:\GlobalAssembliesDebug. *In any project that is the host project, if there are non-dll files that I need to copy (text files outputted to other projects' bin folders that I need), then I put a prebuild step on that project to copy them into the host project. *I have to manually specify the project dependencies in the solution properties, to get them to build in the correct order. So, what this does is: * *Allows me to use projects in any solution without messing around with project references. *Visual Studio still lets me step into dependency projects that are open in the solution. *In DEBUG mode, it builds against open loaded projects. So, first it looks to C:\GlobalAssembliesDebug, then if not there, to C:\GlobalAssemblies. *In RELEASE mode, since it deletes everything from C:\GlobalAssembliesDebug, it only looks to C:\GlobalAssemblies. The reason I want this is so that released builds aren't built against anything that was temporarily changed in my solution. *It is easy to load and unload projects without much effort. Of course, it isn't perfect. The debugging experience is not as nice as a project reference (can't do things like "go to definition" and have it work right), and there are some other little quirky things. Anyways, that's where I am on my attempt to make things work for the best for us. A: We have one gigantic solution on the source control, on the main branch.
But every developer/team working on the smaller part of the project has its own branch which contains one solution with only the few projects which are needed. In that way, that solution is small enough to be easily maintained, and does not influence the other projects/dlls in the larger solution. However, there is one condition for this: there shouldn't be too many interconnected projects within a solution. A: OK, having digested this information, and also answers to this question about project references, I'm currently working with this configuration, which seems to 'work for me': * *One big solution, containing the application project and all the dependency assembly projects *I've kept all project references, with some extra tweaking of manual dependencies (right click on project) for some dynamically instantiated assemblies. *I've got three Solution folders (_Working, Synchronised and Xternal) - given that my source control isn't integrated with VS (sob), this allows me to quickly drag and drop projects between _Working and Synchronised so I don't lose track of changes. The XTernal folder is for assemblies that 'belong' to colleagues. *I've created myself a 'WorkingSetOnly' configuration (last option in Debug/Release drop-down), which allows me to limit the projects which are rebuilt on F5/F6. *As far as disk is concerned, I have all my projects folders in just one of a few folders (so just one level of categorisation above projects) *All projects build (dll, pdb & xml) to the same output folder, and have the same folder as a reference path. (And all references are set to Don't copy) - this leaves me the choice of dropping a project from my solution and easily switching to file reference (I've got a macro for that). *At the same level as my 'Projects' folder, I have a 'Solutions' folder, where I maintain individual solutions for some assemblies - together with Test code (for example) and documentation/design etc specific to the assembly.
This configuration seems to be working ok for me at the moment, but the big test will be trying to sell it to my colleagues, and seeing if it will fly as a team setup. Currently unresolved drawbacks: * *I still have a problem with the individual assembly solutions, as I don't always want to include all the dependent projects. This creates a conflict with the 'master' solution. I've worked around this with (again) a macro which converts broken project references to file references, and restores file references to project references if the project is added back. *There's unfortunately no way (that I've found so far) of linking Build Configuration to Solution Folders - it would be useful to be able to say 'build everything in this folder' - as it stands, I have to update this by hand (painful, and easy to forget). (You can right click on a Solution Folder to build, but that doesn't handle the F5 scenario) *There is a (minor) bug in the Solution folder implementation which means that when you re-open a solution, the projects are shown in the order they were added, and not in alphabetical order. (I've opened a bug with MS, apparently now corrected, but I guess for VS2010) *I had to uninstall the CodeRushXPress add-in, because it was choking on all that code, but this was before having modified the build config, so I'm going to give it another try. Summary - things I didn't know before asking this question which have proved useful: * *Use of solution folders to organise solutions without messing with disk *Creation of build configurations to exclude some projects *Being able to manually define dependencies between projects, even if they are using file references This is my most popular question, so I hope this answer helps readers. I'm still very interested in further feedback from other users.
{ "language": "en", "url": "https://stackoverflow.com/questions/152053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How to measure performance in a C++ (MFC) application? What good profilers do you know? What is a good way to measure and tweak the performance of a C++ MFC application? Is analysis of algorithms really necessary? http://en.wikipedia.org/wiki/Algorithm_analysis A: I strongly recommend AQTime if you are staying on the Windows platform. It comes with a load of profilers, including static code analysis, and works with most important Windows compilers and systems, including Visual C++, .NET, Delphi, Borland C++, Intel C++ and even gcc. And it integrates into Visual Studio, but can also be used standalone. I love it. A: If you're (still) using Visual C++ 6.0, I suggest using the built-in profiler. For more recent versions you could try Compuware DevPartner Performance Analysis Community Edition. A: For Windows, check out Xperf which ships free with the Windows SDK. It uses sampled profiling, has some useful UI, and does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like: Who is using the most CPU? Drill down to the function name using call stacks. Who is allocating the most memory? Who is doing the most registry queries? Disk writes? etc. You will be quite surprised when you find the bottlenecks, as they are probably not where you expected! A: It's been a while since I profiled unmanaged code, but when I did I had good results with Intel's VTune. I'm sure somebody else will tell us if that's been overtaken. Algorithmic analysis has the potential to improve your performance more profoundly than anything you'll find with a profiler, but only for certain classes of application. If you operate over reasonably large sets of data, algorithmic analysis might find ways to be more efficient in CPU/memory/both, but if your app is mainly form-fill with a relational database for storage, it might not offer you much.
A: Intel Thread Checker via VTune Performance Analyzer - check this picture for the view I use the most; it tells me which function eats up the most of my time. I can further drill down inside and decompose which functions inside them eat up more time, etc. There are different views based on what you are watching (total time = time within fn + children), self time (time spent only in code running inside the function), etc. This tool does a lot more than profiling but I haven't explored it all. I would definitely recommend it. The tool is also available for download as a fully functional trial version that can run for 30 days. If you have cost constraints, I would say this window is all that you require to pinpoint your problem. Trial download here - https://registrationcenter.intel.com/RegCenter/AutoGen.aspx?ProductID=907&AccountID=&ProgramID=&RequestDt=&rm=EVAL&lang= PS: I have also played with Rational, but for some reason I did not take much to it. I suspect Rational might be more expensive than Intel too. A: Tools (like TrueTime from DevPartner) that let you see hit counts for source lines let you quickly find algorithms that have bad 'Big O' complexity. You still have to analyse the algorithm to determine how to reduce the complexity. A: I second AQTime, having both AQTime and Compuware's DevPartner, for most cases. The reason being that AQTime will profile any executable that has a valid PDB file, whereas TrueTime requires you to make an instrumented build. This greatly speeds up and simplifies ad hoc profiling. DevPartner is also quite a bit more expensive if this is an issue. Where DevPartner comes into its own is with BoundsChecker, which I still rate as a better tool for catching leaks and overwrites than AQTime's execution profiler. TrueTime can be slightly more accurate than AQTime, but I have never found this to be an issue. Is profiling worthwhile? IMO yes, if you need performance gains on a local application.
I think you also learn a lot about how your program and algorithms really work, and the cost implications of using certain types of object classes for storing and iterating through your data. A: Glowcode is a very nice profiler (when it works). It can attach to a running program and requires only symbol files - you don't need to rebuild. A: Some versions of Visual Studio 2005 (and maybe 2008) actually come with a pretty good performance profiler. If you have it, it should be available under the Tools menu, or you can search for a way to open the "performance explorer" window to start a new performance session. A link to MSDN A: FYI, some versions of Visual Studio only come with a non-optimizing compiler. For one of my MFC apps, if I compile it with MinGW/MSYS (the gcc compiler) with -O3 then it runs about 5-10x as fast as my release compile with Visual Studio. For example, I have an OpenStreetMap XML compiler and it takes about 3 minutes (the gcc-compiled version) to process a 2.7GB XML file. My Visual Studio compile of the same code takes about 18 minutes to run.
{ "language": "en", "url": "https://stackoverflow.com/questions/152064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it OK to inspect properties beginning with underscore? I've been working on a very simple crud generator for pylons. I came up with something that inspects SomeClass._sa_class_manager.mapper.c. Is it OK to inspect this (or to call methods beginning with underscore)? I always kind of assumed this is legal though frowned upon, as it relies heavily on the internal structure of a class/object. But hey, since Python does not really have interfaces in the Java sense, maybe it is OK. A: It is intentional (in Python) that there are no "private" scopes. It is a convention that anything that starts with an underscore ideally should not be used, and hence you cannot complain if its behavior or definition changes in a future version. A: In general, this usually indicates that the method is effectively internal, rather than part of the documented interface, and should not be relied on. Future versions of the library are free to rename or remove such methods, so if you care about future compatibility without having to rewrite, avoid doing it. A: If it works, why not? You could have problems though when _sa_class_manager gets restructured, binding yourself to this specific version of SQLAlchemy, or creating more work to track the changes. As SQLAlchemy is a fast moving target, you may be there in a year already. The preferable way would be to integrate your desired API into SQLAlchemy itself. A: It's generally not a good idea, for reasons already mentioned. However, Python deliberately allows this behaviour in case there is no other way of doing something. For example, if you have a closed-source compiled Python library where the author didn't think you'd need direct access to a certain object's internal state - but you really do - you can still get at the information you need. You have the same problems mentioned before of keeping up with different versions (if you're lucky enough that it's still maintained), but at least you can actually do what you wanted to do.
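To make the convention concrete with a made-up class (not SQLAlchemy itself - the class and attribute names below are purely illustrative), nothing in the language stops you from reading an underscore-prefixed attribute; it just isn't a stable contract:

```python
class Mapper:
    """Hypothetical library class: the author marked _cache as internal."""
    def __init__(self):
        self.columns = ["id", "name"]  # public, documented API
        self._cache = {}               # underscore: internal by convention only

m = Mapper()

# Perfectly legal to inspect, exactly like reaching into _sa_class_manager...
print(m._cache)    # {}

# ...but only the public attribute is a contract the author promises to keep;
# _cache may be renamed or removed in any release without warning.
print(m.columns)   # ['id', 'name']
```

So code that reaches into `_`-prefixed internals runs fine, but, as the answers above point out, it pins you to one specific version of the library.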
{ "language": "en", "url": "https://stackoverflow.com/questions/152068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: If you change RightToLeft, ShowInTaskbar properties, Form.ShowDialog() unexpectedly ends Dialog closes with Cancel result, no exceptions, as if you have pressed its close button. The only safe place to set RightToLeft property is in the form constructor. It occured to me that this information might save somebody else's time. If you are able to elaborate on the issue: if there is an official bug confirmation, what else might cause ShowDialog to end unexpectedly, please, do. Re: close to tray - MSDN Forums change Form RightToLeft property at runtime Quote from the second link: I have found a second bug in less than two days . This new bug is very critical . I have Normal Form with RightToLeft property set to its default value ( RightToLeft=False) . Let us show this form with Show Function ( Form1.Show(me) ) At this Form there is a Button which change Form RightToLeft to Yes instead of No: Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Me.RightToLeft = Windows.Forms.RightToLeft.Yes End Sub The Form will change its Title successfully to Right Side. Up To This there is no problem. Problem Occure as following If we Display this Form to the user using ShowDialog(Me) Function instead of display it using Show(Me) . Then Click Button which will change Form RightToLeft to Yes instead of No , Form will Close Suddenly with no reasons , and even not throw any exceptions . This is the new problem & it's exist also in .NET 3.0 ( Orcase ) Too . A: Ok... I have a quick fix for you. It is nasty, it is a hack but it kinda works. 
From my answer to the original question:

private bool _rightToLeft;

private void SetRTL(bool setRTL)
{
    _rightToLeft = true;
    ApplyRTL(setRTL, this);
}

private void ApplyRTL(bool yes, Control startControl)
{
    if ((startControl is Panel) || (startControl is GroupBox))
    {
        foreach (Control control in startControl.Controls)
        {
            control.Location = CalculateRTL(control.Location, startControl.Size, control.Size);
        }
    }
    foreach (Control control in startControl.Controls)
        ApplyRTL(yes, control);
}

private Point CalculateRTL(Point currentPoint, Size parentSize, Size currentSize)
{
    return new Point(parentSize.Width - currentSize.Width - currentPoint.X, currentPoint.Y);
}

private void Form2_FormClosing(object sender, FormClosingEventArgs e)
{
    if (_rightToLeft)
    {
        _rightToLeft = false;
        e.Cancel = true;
    }
}

The sneaky part is to attach to the form's FormClosing event and then tell it not to close if you have just conducted a right-to-left swap (_rightToLeft). Having told it not to close, you remove the right-to-left flag and let life continue on. *bug: there is a bug that occurs when closing a form opened with .Show(this), but I am sure you can fix that!
{ "language": "en", "url": "https://stackoverflow.com/questions/152069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Display Boolean Field in Visual Studio Report Designer I'm trying to display a boolean field in Report Designer in Visual Studio 2008. When I tried to run it, an error occurred: "An error has occurred during report processing. String was not recognized as a valid Boolean." I tried to convert it using CBool() but it didn't work. A: =iif(Fields!YourBool.Value, "True", "False") Am I missing anything? A: I may be mistaken here, but CBool is to convert to boolean. What you probably want is to convert to string so that it can be displayed. However, I'm not sure what the default behaviour would be (i.e. 0/1, true/false, -1/0, Yes/No, etc.) so you could add a function to the code section in the report to display a boolean the exact way you want. A: I'm using SQL Server 2005. The data type is bit.
{ "language": "en", "url": "https://stackoverflow.com/questions/152071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL schema to hold history of employee actions when employees come/go/get promoted, etc I'm writing an app that contains the following tables: (1) employee_type, (2) employee and (3) employee_action. Employee_action is foreign-keyed to employee, and contains a description of what happened and the date of the event, just as you might expect. However, employees can change their type over time (promotion, demotion, relocation, etc). If my schema was just as simple as this, then you might generate a historical report that says that John was the CEO of the company when he was out delivering pizzas 10 years ago. What would be the best way for me to save the fact that employees had a certain set of characteristics at the time that they performed an action, which are not necessarily their characteristics at the present time? I'm stating my problem simply here. I have a lot more tables than 3, and the employee's position is not the only characteristic that I'm worried about. It's not an option for me to just denormalize everything and make a history table with every possible employee field in it. Thanks, I hope this is clear. A: Representing time data in SQL is tricky. There is a very good book on the subject, and it's even available for free online from the author: http://www.cs.arizona.edu/people/rts/tdbbook.pdf. The Amazon page is at http://www.amazon.com/Developing-Time-Oriented-Database-Applications-Management/dp/1558604367, but it's out of print. If you are serious about modeling changes over time in SQL, this book is a must-read. I learned a lot from it and I only understand maybe 25% of it, having read it only once :) A: Have you considered introducing a transition (many-to-many) table linking the employee_type and the employee, and then linking the employee action to this transition table? The transition table could have an additional column for timestamping, that way allowing you to keep track of things chronologically.
A: And to answer your question, I think your only option (if you can't do any redesign of the schema) is to add a table where you store the timespans in which a certain condition held true, e.g. a table employment_history with the employee, a position ('CEO', 'delivery boy', or the IDs of those positions if you have them normalized to a table), and a begin date and an end date for the period in which the employee held that position. That way you can join on the employment_history table to get the position people currently have. Of course, if you need to store more 'properties' than just the position, that's going to be an exponentially growing PITA. Read the book above for more discussion :)
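The employment_history idea above can be sketched quickly with SQLite (all table and column names here are illustrative, not taken from the poster's real schema; ISO-formatted date strings compare correctly in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employment_history (
        employee_id INTEGER REFERENCES employee(id),
        position    TEXT,
        begin_date  TEXT,
        end_date    TEXT          -- NULL means 'still current'
    );
""")
conn.execute("INSERT INTO employee VALUES (1, 'John')")
conn.executemany(
    "INSERT INTO employment_history VALUES (1, ?, ?, ?)",
    [("Delivery Boy", "1998-01-01", "2000-06-30"),
     ("CEO",          "2000-07-01", None)],
)

# What was John's position when an action happened on 1999-03-15?
row = conn.execute("""
    SELECT position FROM employment_history
    WHERE employee_id = 1
      AND begin_date <= :d
      AND (end_date IS NULL OR end_date >= :d)
""", {"d": "1999-03-15"}).fetchone()
print(row[0])  # Delivery Boy
```

Joining employee_action to this table with the action date in place of the literal above gives each action the position that was in force at the time, avoiding the "CEO delivering pizzas" report.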
{ "language": "en", "url": "https://stackoverflow.com/questions/152074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Programming against interfaces: Do you write interfaces for all your domain classes? I agree that programming against interfaces is a good practice. In most cases in Java "interface" in this sense means the language construct interface, so that you write an interface and an implementation class and that you use the interface instead of the implementation class most of the time. I wonder if this is a good practice for writing domain models as well. So, for example, if you've got a domain class Customer and each customer may have a list of Orders, would you generally also write interfaces ICustomer and IOrder? And would Customer also have a list of IOrders instead of Orders? Or would you use interfaces in the domain model only if it is really driven by the domain, e.g. you've got at least two different types of Orders? In other words, would you use interfaces because of purely technical needs in the domain model, or only when it is really appropriate with respect to the actual domain? A: Interfaces are an excellent way of isolating components for unit testing purposes and general dependency management. That said, I often prefer abstract classes so at least some of the common behaviour is offloaded there rather than forcing some of the duplication that interfaces bring. Modern IDEs make it quick and easy to generate interfaces so they aren't that much work :-) A: No, I only use interfaces on Domain Objects to keep them loosely coupled. For me the main hook of developing my own code with interfaces is that I can easily create mocks when doing Unit Testing. I don't see the point in mocking Domain Objects as they don't have the same dependencies that a service layer or DAO layer class would have. This certainly doesn't mean you should stray away from using Interfaces in Domain Objects. Use them where appropriate. For example, lately I've been working on a webapp where different types of Domain Objects correspond to permalink pages in which users can leave comments.
So, each of these Domain Objects now implements the "Commentable" interface. All of the comment-based code is then programmed to the Commentable interface instead of the Domain Objects. A: I'd recommend staying lean and agile - don't do anything until you need to do it, then let your IDE do the refactoring for you when you need it. It's pretty trivial in IDEA/Eclipse to turn a concrete class into an interface when you decide you need one. Then use dependency injection with Spring or Guice if you have many implementations of the interface to inject into the places in your code where you're using 'new'. A: Actually, this question is an example of the common misunderstanding about "programming against interfaces". You see, this is a very good principle, but it does not mean what many people think it means! "Program to an interface, not an implementation" (from the GoF book) means just that, not that you should go out of your way to create separate interfaces for everything. If you have an ArrayList object, then declare the variable/field/parameter/return type as being of type List. Client code will then only deal with the interface type. In the "Effective Java" book from Joshua Bloch, the principle is expressed more clearly, in "Item 52: Refer to objects by their interfaces". It even says, in bold letters: It is entirely appropriate to refer to an object by a class rather than an interface if no appropriate interface exists. For unit testing, the situation is entirely dependent on the capabilities of the mocking tool used. With my own tool, JMockit, I can write unit tests just as easily for code that uses interfaces and Dependency Injection as for code that uses final classes instantiated from inside the code under test. So, for me, the answer is: always use interfaces that already exist, but avoid creating new ones if there is no good reason to do so (and testability, by itself, should not be one).
A: Writing interfaces "just because" strikes me as a waste of time and energy, not to mention a violation of the KISS principle. I write them when they are actually useful in representing common behavior of related classes, not just as a fancy header file. A: Writing an interface before it is needed (either for testing or architecture) is overkill. Additionally, writing an interface manually is a waste of time. You can use Resharper's "Pull Members" refactoring to let it create a new interface out of the specific class in a matter of seconds. Other refactoring tools that integrate with the IDE should have similar functionality as well. A: Don't over-design your system. If you find out that you have several types of Orders and think it's appropriate to declare an interface for Orders, then refactor it when the need arises. For domain models, the probability is high that the specific interface will change a lot over the lifetime of development, so it is rarely useful to write an interface early. A: I usually only use interfaces where it makes sense on smaller projects. However, my latest job has a large project where nearly every domain object has an interface. It's probably overkill and it's definitely annoying, but the way we do testing and use Spring for dependency injection sort of requires it. A: We mostly write web applications with Wicket, Spring and Hibernate and we use interfaces for Spring Beans, e.g. Services and DAOs. For these classes, interfaces make total sense. But we also use interfaces for every single domain class and I think this is just overkill. A: Even if you are pretty sure there is only going to be one concrete type of a model object, using interfaces makes mocking and testing somewhat easier (but these days there are frameworks that can help you generate mock classes automatically, even for concrete Java classes - Mockito, JTestR, Spring, Groovy...)
But I even more often use interfaces for services, since it is even more important to mock them away during testing, and programming against interfaces helps you think about stuff like encapsulation. A: Writing interfaces for domain classes can make sense if you're using them for unit testing. We use mock objects in unit testing. So, if you have an interface for a domain object and your domain object itself is NOT ready, your client can still test its use of the interface with the help of mock objects. Interfaces also let you test multiple implementations for your domain model. So, I don't think it is always overkill. A: I think that the main reason for programming against interfaces is testability. Hence, for the domain objects - just stick to POJOs or POC#Os :) etc., i.e., just keep your classes from adding any specific framework so as to prevent them from having different build and runtime dependencies, and that's all. Creating interfaces for DAOs is a good idea though. A: We extract interfaces from everything just because it assists in testing (mocking) and for stuff like AOP. Eclipse can do this automatically: Refactor->Extract Interface. And if you need to modify the class later, you can use Refactor->Pull Up... to pull up the methods you need onto the interface. A: I do it because I need it for creating proxies of my domain objects. A: That's another thing to keep in mind that I've run into, especially with generated domain and DAO objects. A lot of the interfaces are just too specific. Say a lot of domain objects have an ID and a status field - why don't they share a common interface? This has caused me frustration and an unnecessarily flat (inheritance-wise) domain model.
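For what it's worth, the "Commentable" idea discussed above is language-neutral; a hypothetical Python sketch (the thread is Java-centric, so this is illustration only) shows the shape: comment-handling code depends on the interface, never on a concrete domain class.

```python
from abc import ABC, abstractmethod

class Commentable(ABC):
    """The interface: anything users can leave comments on."""
    @abstractmethod
    def add_comment(self, text): ...

class BlogPost(Commentable):
    def __init__(self):
        self.comments = []
    def add_comment(self, text):
        self.comments.append(text)

class Photo(Commentable):
    def __init__(self):
        self.comments = []
    def add_comment(self, text):
        self.comments.append(text)

def post_comment(target: Commentable, text: str):
    # Written against the interface only: it neither knows nor cares
    # which concrete domain class it receives.
    target.add_comment(text)

items = [BlogPost(), Photo()]
for item in items:
    post_comment(item, "nice!")
```

The interface exists here because two genuinely different domain types share behavior - the domain-driven case the question asks about - not "just because".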
{ "language": "en", "url": "https://stackoverflow.com/questions/152077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Fixed point combinators in C++ I'm interested in actual examples of using fixed point combinators (such as the y-combinator in C++. Have you ever used a fixed point combinator with egg or bind in real live code? I found this example in egg a little dense: void egg_example() { using bll::_1; using bll::_2; int r = fix2( bll::ret<int>( // \(f,a) -> a == 0 ? 1 : a * f(a-1) bll::if_then_else_return( _2 == 0, 1, _2 * lazy(_1)(_2 - 1) ) ) ) (5); BOOST_CHECK(r == 5*4*3*2*1); } Can you explain how this all works? Is there a nice simple example perhaps using bind with perhaps fewer dependancies than this one? A: #include <functional> #include <iostream> template <typename Lamba, typename Type> auto y (std::function<Type(Lamba, Type)> f) -> std::function<Type(Type)> { return std::bind(f, std::bind(&y<Lamba, Type>, f), std::placeholders::_1); } int main(int argc,char** argv) { std::cout << y < std::function<int(int)>, int> ([](std::function<int(int)> f, int x) { return x == 0 ? 1 : x * f(x - 1); }) (5) << std::endl; return 0; } A: Here is the same code converted into boost::bind notice the y-combinator and its application site in the main function. I hope this helps. #include <boost/function.hpp> #include <boost/bind.hpp> #include <iostream> // Y-combinator compatible factorial int fact(boost::function<int(int)> f,int v) { if(v == 0) return 1; else return v * f(v -1); } // Y-combinator for the int type boost::function<int(int)> y(boost::function<int(boost::function<int(int)>,int)> f) { return boost::bind(f,boost::bind(&y,f),_1); } int main(int argc,char** argv) { boost::function<int(int)> factorial = y(fact); std::cout << factorial(5) << std::endl; return 0; } A: Can you explain how this all works? fix2 is a y-combinator (specifically, it is a combinator for functions with two arguments; the first argument is the function (for the purpose of recursion), the second argument is a "proper" function argument). It creates recursive functions. bll::ret(...) 
appears to create some form of a function object, the body of which is if(second arg == 0) { return 1; } else { return second arg * first arg(second arg - 1); } The "lazy" is presumably there to stop an infinite expansion of the first (function) argument (read up on the difference between lazy and strict y combinators to see why). The code is quite horrible. Anonymous functions are nice to have, but the hackery to work around C++'s lack of syntactic support make them not worth the effort.
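For a genuinely simple fixed-point example with no library dependencies at all, the same trick can be written in a few lines of Python (illustration only - the question is about C++/egg, but the mechanics are identical: the factory receives its own recursive self, playing the role of placeholder _1 above):

```python
def fix(f):
    """Fixed-point combinator: gives a non-recursive 'factory' function
    access to its own recursive result."""
    def recurse(*args):
        return f(recurse)(*args)
    return recurse

# The factory takes `self` (the recursion hook) and returns the real function.
fact = fix(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(5))  # 120, i.e. 5*4*3*2*1 as in the BOOST_CHECK above
```

Wrapping the recursive call inside an inner function is what keeps the expansion lazy, which is the same job `lazy(...)` does in the egg example.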
{ "language": "en", "url": "https://stackoverflow.com/questions/152084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: ASP.NET Single Login - Is distributing session the answer We have 5 balanced web servers with various websites. What I am trying to achieve is to ensure a single login. i.e. the same user account cannot login to the same website more than once at any given time. The method i'm considering for solving this, is to share session amongst the servers so I can control which session is assigned to which account. I can then have control over my logins. If a user logs in and there is already a session assigned to their user account, I can just expire the first session or reject the login. I don't want to lose the benefit of the balanced servers, so using a single Sql Server as my session state server, or a single server to handle login is not an option. Is distributed session (something like Scaleout Sofware) the correct approach to achieve this? Or is there another mechanism to handle single login that i'm blissfully unaware of? A: You have two set of problems here: 1) Allowing just one connected user in a web farm scenario 2) Detecting user logoff To solve the first the only solution is a central storage for some kind of user state, using a central server to store the ASP.Net session or some other kind of centralized user state. This central storage can be SQL Server using the native management of session state (btw also Oracle, from Oracle 11, can support session storage), the AspState service or an external solution, like ScaleOut (as you said) or its open source alternative memcached (see https://sourceforge.net/projects/memcacheddotnet/). Or you can design a simple centralized web service that check active logins against a SQL Server database, this way you can also quickly create reporting tools about logged on users and so on. 
Real problem, in my opinion, lies in the second part, that you need to maintain the different "wrong logoff" scenarios that are available in a web world (like closing the browser due to a crash or shutting down applications without logging off), giving you application some way to gracefully work with user that has an old session enabled (as you said simply expiring the first session can work). Keep also in mind that using a state server like SQL server will not make you loose the balanced servers, if's the way of working to have a web farm environmet and sharing session, only problem lies in performance (if session state become large) and the cost involved in using SQL Server if you do not already have the proper license.
{ "language": "en", "url": "https://stackoverflow.com/questions/152093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Logging/monitoring all function calls from an application we have a problem with an application we're developing. Very seldom, like once in a hundred, the application crashes at start up. When the crash happens it brings down the whole system, the computer starts to beep and freezes up completely, the only way to recover is to turn off the power (we're using Windows XP). The rarity of the crash combined with the fact that we can't break into the debugger or even generate a stackdump when it occurs makes it extremely hard to debug. I'm looking for something that logs all function calls to a file. Does such a tool exist? It shouldn't be impossible to implement, profilers like VTune does something very similar. We're using visual studio 2008 (C++). Thanks A.B. A: Logging function entries/exits is a low-level approach to your problem. I would suggest using automatic debugger instrumentation (using Debugger key under Image File Execution Options with regedit or using gflags from the package I provide a link to below) and trying to repro the problem until it crashes. Additionally, you can have the debugger log function call history of suspected module(s) using a script or have collect any other information. But not knowing the details of your application it is very hard to suggest a solution. Is it a user app, service or a driver? What does "crashes at startup" mean - at windows startup or app's startup? Use this debugger package to troubleshoot. A: For Visual C++ _penter() and _pexit() can be used to instrument your code. See also Method Call Interception in C++. A: The only problem with the logging idea is that when the system crashes, the latest log entries might still be in the cache and have no chance to be written to disk... If it was me I would try running the program on a different PC - it might be flaky hardware or drivers causing the problem. An application program "shouldn't" be able to bring down the system. 
A: A few Ideas- There is a good chance that just prior to your crash there is some sort of exception in the application. if you set you handler for all unhandled exceptions using SetUnhandledExceptionFilter() and write a stack trace to your log file, you might have a chance to catch the crash in action. Just remember to flush the file after every write. Another option is to use a tool such as strace which logs all of the system calls into the kernel (there are multiple flavors and implementations for that so pick your favorite). if you look at the log just before the crash you might find the culprit A: Have you considered using a second machine as a remote debugger (via the network)? When the application (and system) crashes, the second machine should still show some useful information, if not the actual point of the problem. I believe VC++ has that ability, at least in some versions. A: GCC (including the version MingGW for Windows development) has a code generation switch called -finstrument-functions that tells the compiler to emit special calls to functions called __cyg_profile_func_enter and __cyg_profile_func_exit around every function call. For Visual C++, there are similar options called /GH and /Gh. These cause the compiler to emit calls to __penter and __pexit around function calls. These instrumentation modes can be used to implement a logging system, with you implementing the calls that the compiler generates to output to your local filesystem or to another computer on your network. If possible, I'd also try running your system using valgrind or a similar checking tool. This might catch your problem before it gets out-of-hand.
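To make the instrumentation idea concrete in a language-neutral way (Python is used purely for illustration here - for the C++/MFC app itself you'd use the /GH and /Gh or -finstrument-functions hooks described above), this is the same "log every call and return" scheme using Python's built-in tracer:

```python
import sys

log = []  # in the crash scenario, flush this to a file after every write,
          # since buffered entries are lost when the machine hard-freezes

def tracer(frame, event, arg):
    if event in ("call", "return"):
        log.append((event, frame.f_code.co_name))
    return tracer  # keep tracing inside nested calls

def inner():
    return 42

def outer():
    return inner()

sys.settrace(tracer)   # analogous to the compiler emitting enter/exit hooks
outer()
sys.settrace(None)

for entry in log:
    print(entry)
```

The last entries in such a log, read back after the freeze, point at the function that was executing when the system went down.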
{ "language": "en", "url": "https://stackoverflow.com/questions/152097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: I want to prevent ASP.NET GridView from reacting to the enter button I have an ASP.NET page with a gridview control on it with a CommandButton column with delete and select commands active. Pressing the enter key causes the first command button in the gridview to fire, which deletes a row. I don't want this to happen. Can I change the gridview control in a way that it does not react anymore to pressing the enter key? There is a textbox and button on the screen as well. They don't need to be responsive to hitting enter, but you must be able to fill in the textbox. Currently we popup a confirmation dialog to prevent accidental deletes, but we need something better than this. This is the markup for the gridview, as you can see it's inside an asp.net updatepanel (i forgot to mention that, sorry): (I left out most columns and the formatting) <asp:UpdatePanel ID="upContent" runat="server" UpdateMode="Conditional"> <Triggers> <asp:AsyncPostBackTrigger ControlID="btnFilter" /> <asp:AsyncPostBackTrigger ControlID="btnEdit" EventName="Click" /> </Triggers> <ContentTemplate> <div id="CodeGrid" class="Grid"> <asp:GridView ID="dgCode" runat="server"> <Columns> <asp:CommandField SelectImageUrl="~/Images/Select.GIF" ShowSelectButton="True" ButtonType="Image" CancelText="" EditText="" InsertText="" NewText="" UpdateText="" DeleteImageUrl="~/Images/Delete.GIF" ShowDeleteButton="True" /> <asp:BoundField DataField="Id" HeaderText="ID" Visible="False" /> </Columns> </asp:GridView> </div> </ContentTemplate> </asp:UpdatePanel> A: Every once in a while I get goofy issues like this too... but usually I just implement a quick hack, and move on :) myGridView.Attributes.Add("onkeydown", "if(event.keyCode==13)return false;"); Something like that should work. 
A: This solution is blocking the enter key on the entire page Disable Enter Key A: In the PreRender event you can toggle Private Sub gvSerials_PreRender(ByVal sender As Object, ByVal e As System.EventArgs) Handles gvSerials.PreRender If gvSerials.EditIndex < 0 'READ ONLY MODE 'Enables the form submit during Read mode on my 'search' submit button Me.bnSearch.UseSubmitBehavior = True Else 'EDIT MODE 'disables the form submit during edit mode, this allows the APPLY/Update button to be activated after Enter Key is pressed (which really is creating a form submit) Me.bnSearch.UseSubmitBehavior = False End If A: In Page_Load, set the focus on the textbox. A: Here is a good jQuery way of doing it. This function will automagically add the keydown event handler to all TextBoxes on the page. By using different selectors, you can control it to more or less granular levels: //jQuery document ready function – fires when document structure loads $(document).ready(function() { //Find all input controls of type text and bind the given //function to them $(":text").keydown(function(e) { if (e.keyCode == 13) { return false; } }); }); This will make all textboxes ignore the Enter key and has the advantage of being automatically applied to any new HTML or ASP.Net controls that you might add to the page (or that may be generated by ASP.Net). A: For anyone else having the same problem where this code is not preventing the event from firing, this worked for me: if (window.event.keyCode == 13) { event.returnValue=false; event.cancel = true; }
{ "language": "en", "url": "https://stackoverflow.com/questions/152099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: MVC tutorial that doesn't rely on a framework? I want to learn MVC "architecture pattern" but I don't want to jump into a framework like Rails or Django just yet. I want to understand the concept first and write some simple code in my currently familiar environment, which happens to be PHP/HTML/CSS/MySQL. I don't necessarily need a tutorial that is based on PHP, as I do understand a lot of different languages. And I don't want to have to install any frameworks or APIs or libraries. I just want to learn how to think in MVC and apply it to my projects. Any suggestions? A: In addition to Sander's reply, I'd say that most frameworks confuse front controller and MVC. They are really two completely separate concepts, but they are often both present in frameworks. So watch out for that. A: Almost every framework does MVC differently, so you might end up getting even more confused. The general principles of MVC are very simple: "Model is state; view reacts to model; controller reacts to view; controller changes model". The model, view and controller are concepts - they are whatever you feel them to be. Classes, bunches of classes, instances of classes with XML configuration files, you name it. I actually think that about covers the basic principles. Without a framework, you'd not get much further. What matters is how a particular framework defines model, view and controller and their interactions. A: MVC is basically just splitting up your code into a Model, which deals with the data, a View which displays the data, and a Controller which passes data from the Model to the View. It's nothing you need an API or framework for, it's just a way of splitting up your code. The reason many frameworks use it is because it's a very simple concept, it works well for many things (it fits webpages perfectly), and is fairly flexible (for example, with Rails, you could do everything in your view, or model/controller, if you so-desired..) 
A quick example in Python of an MVC-structured script. Not necessarily "best practices", but it works, and is fairly simple: class Model: def get_post(self, id): # Would query database, perhaps return {"title": "A test", "body": "An example.."} class Controller: def __init__(self): self.model = Model() self.view = View() def main(self): post = self.model.get_post(1) self.view.display(post) class View: def display(self, item): print "<h1>%(title)s</h1>\n%(body)s" % item c = Controller() c.main() A: The main advantage of MVC is separation of concerns. When you write code, and if you're not careful, it can become a big mess. So knowing how to put Models, Views, and Controllers in different "silos" saves you time in the long term. Any strategy is good. So here is mine: * *models are files found under /lib in the project tree *views are files ending in .html in the project tree *controllers are urls in <form> action attributes A: I know it is late, but I am sure people will come along later with the same question. I think the very good code example above is better put like this, but YMMV: #!/usr/bin/python class Model: def get_post(self): return {"title":"A test","body":"An example.."} class View: def display(self,items): print 'Title:',items['title'],'\n'+'Body:',items['body'] class Controller: def __init__(self): self.model=Model() self.view=View() def main(self): post=self.model.get_post() self.view.display(post) mvc=Controller() mvc.main() Here is another example using inheritance, which can be very useful in Python/PHP.....
#!/usr/bin/python3 class Control: def find(self,user): return self._look(user) def _look(self,user): if user in self.users: return self.users[user] else: return 'The data class ({}) has no {}'.format(self.userName(),user) def userName(self): return self.__class__.__name__.lower() class Model(Control): users=dict(one='Bob',two='Michael',three='Dave') class View(): def user(self,users): print(users.find('two')) def main(): users=Model() find=View() print('--> The user two\'s "real name" is:\n') find.user(users) if __name__=="__main__": main() If this makes sense, go to Django now: you're ready. Just read the free book; if this makes sense you will go through it rapidly. You're right, though: you must know the OOP and MVC paradigms before using Django, as it is built and used via these paradigms. As you see it is not complex; it is just one of the many ways to keep your code in order. This explains MVC in django
{ "language": "en", "url": "https://stackoverflow.com/questions/152101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Problems using jeditable and autogrow I work on a web project using jQuery and CakePHP. I use jeditable as an in-place edit plugin. For textareas I extend it using the autogrow plugin. Well, I have two problems with this: * *First, autogrow only works on Firefox, not on IE, Safari, Opera and Chrome. *Second, I need a callback event for jeditable, when it's finished showing the edit component, to recalculate the scrollbar. I'm not so familiar with JavaScript, so I can't extend/correct these two libraries on my own. Has anyone used another JS library for in-place edit with auto-growing textareas (no complete editors like TinyMCE, I need a solution for plain text)? I also found Growfield, it would work for other browsers, but there's no jeditable integration... (sorry for my english) A: I didn't see any problems using Autogrow with jeditable in any browsers, but here is an implementation of Growfield with jeditable. It works much in the same way that the Autogrow plugin for jeditable does. You create a special input type for jeditable and just apply .growfield() to it. The necessary JavaScript is below, a demo can be found here. <script type="text/javascript"> /* This is the growfield integration into jeditable You can use almost any field plugin you'd like if you create an input type for it. It just needs the "element" function (to create the editable field) and the "plugin" function which applies the effect to the field. This is very close to the code in the jquery.jeditable.autogrow.js input type that comes with jeditable.
*/ $.editable.addInputType('growfield', { element : function(settings, original) { var textarea = $('<textarea>'); if (settings.rows) { textarea.attr('rows', settings.rows); } else { textarea.height(settings.height); } if (settings.cols) { textarea.attr('cols', settings.cols); } else { textarea.width(settings.width); } // will execute when textarea is rendered textarea.ready(function() { // implement your scroll pane code here }); $(this).append(textarea); return(textarea); }, plugin : function(settings, original) { // applies the growfield effect to the in-place edit field $('textarea', this).growfield(settings.growfield); } }); /* jeditable initialization */ $(function() { $('.editable_textarea').editable('postto.html', { type: "growfield", // tells jeditable to use your growfield input type from above submit: 'OK', // this and below are optional tooltip: "Click to edit...", onblur: "ignore", growfield: { } // use this to pass options to the growfield that gets created }); }) A: knight_killer: What do you mean Autogrow works only with FireFox? I just tested with FF3, FF2, Safari, IE7 and Chrome. Works fine with all of them. I did not have Opera available. Alex: Is there a download link for your Growfield Jeditable custom input? I would like to link it from my blog. It is really great! A: Mika Tuupola: If you are Interested in my modified jeditable (added two callback events), you can get it here. It would be great if you would provide these events in your official version of jeditable! Here is my (simplified) integration code. I use the events for more then just for the hover effect. It's just one usecase. 
$('.edit_memo').editable('/cakephp/efforts/updateValue', { id : 'data[Effort][id]', name : 'data[Effort][value]', type : 'growfield', cancel : 'Abort', submit : 'Save', tooltip : 'click to edit', indicator : "<span class='save'>saving...</span>", onblur : 'ignore', placeholder : '<span class="hint">&lt;click to edit&gt;</span>', loadurl : '/cakephp/efforts/getValue', loadtype : 'POST', loadtext : 'loading...', width : 447, onreadytoedit : function(value){ $(this).removeClass('edit_memo_hover'); //remove css hover effect }, onfinishededit : function(value){ $(this).addClass('edit_memo_hover'); //add css hover effect } }); A: Thank you Alex! Your growfield plugin works. In the meantime I managed to solve the other problem. I took another scroll library and hacked a callback event into the jeditable plugin. It was not as hard as I thought...
{ "language": "en", "url": "https://stackoverflow.com/questions/152104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Gaussian distributions with PHP on a 24h time period How can I set points on a 24h period, spread according to a Gaussian distribution? For example to have the peak at 10 o'clock? A: The following code generates a Gaussian-distributed random time (in hours, plus fractions of an hour) centered at a given time, and with a given standard deviation. The random times may 'wrap around' the clock, especially if the standard deviation is several hours: this is handled correctly. A different 'wrapping' algorithm may be more efficient if your standard deviations are very large (many days), but the distribution will be almost uniform in this case, anyway. $peak=10; // Peak at 10-o-clock $stdev=2; // Standard deviation of two hours $hoursOnClock=24; // 24-hour clock do // Generate gaussian variable using Box-Muller { $u=2.0*mt_rand()/mt_getrandmax()-1.0; $v=2.0*mt_rand()/mt_getrandmax()-1.0; $s = $u*$u+$v*$v; } while ($s > 1); $gauss=$u*sqrt(-2.0*log($s)/$s); $gauss = $gauss*$stdev + $peak; // Transform to correct peak and standard deviation while ($gauss < 0) $gauss+=$hoursOnClock; // Wrap around hours to keep the random time $result = fmod($gauss,$hoursOnClock); // on the clock echo $result; A: If you have trouble generating Gaussian-distributed random points, look up http://en.wikipedia.org/wiki/Box-Muller_transform Otherwise, please clarify your question.
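For readers working outside PHP: the same Box-Muller-plus-wraparound approach ports directly. Here is a rough Python equivalent, using the same illustrative peak and deviation values as the answer above (a sketch, not part of the original answer):

```python
import math
import random

def random_clock_time(peak=10.0, stdev=2.0, hours_on_clock=24.0):
    """Gaussian-distributed time of day, wrapped around the clock."""
    # Box-Muller (polar form), as in the PHP answer; also reject s == 0
    # to avoid log(0).
    while True:
        u = 2.0 * random.random() - 1.0
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        if 0 < s <= 1:
            break
    gauss = u * math.sqrt(-2.0 * math.log(s) / s)
    gauss = gauss * stdev + peak        # scale and shift to the peak
    return gauss % hours_on_clock       # wrap around the 24-hour clock
```

One small difference: Python's % operator already yields a non-negative result for negative floats, so the explicit "while ($gauss < 0)" wrap from the PHP version is not needed.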
{ "language": "en", "url": "https://stackoverflow.com/questions/152115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Are there any open source projects using DDD (Domain Driven Design)? I'm trying to understand the concepts behind DDD, but I find it hard to understand just by reading books as they tend to discuss the topic in a rather abstract way. I would like to see some good implementations of DDD in code, preferably in C#. Are there any good examples of projects practicing DDD in the open source world? A: I'm surprised no one mentioned Macto, Ayende's DDD sample. The most important thing about Macto is that the whole thinking process before the actual coding is presented in a series of posts. DDD is not about implementing a model, it is about modeling a business domain. Decisions like "some concept is an Aggregate Root/Entity/Value Object" are far more important than how an AR will be persisted. Also I would recommend Udi Dahan's videos about SOA and CQRS, which might provide a better context in which to apply DDD. A: This is not an open source project, but still it is an example in code: http://www.codeplex.com/dddpds The example is used in the book .NET Domain-Driven Design with C#: Problem-Design-Solution A made-up example that seems promising but might have died: http://www.codeplex.com/domaindrivendesign A: I'm afraid that http://www.codeplex.com/domaindrivendesign has indeed died, but if anyone is interested in contributing feel free to contact me. Overall I would recommend against relying too much on examples of DDD; at best, examples can show the results of the domain modelling and/or one approach for implementing the patterns. I would thus recommend reading the book and then asking questions at the forum. A: http://kigg.codeplex.com/ is a good example for me. A: Eric Evans and a Swedish consulting company have released a sample application based on the shipping example that Eric uses throughout the book. It's in Java, but the concepts are well documented on the project page.
http://dddsample.sourceforge.net/ However, be warned that DDD is more about the journey than the destination. Understand that the sample code you are looking at took many forms before it became what you see now. You did not see the awkward models that were used initially, and you're missing the steps taken to refactor the model based on insight gained along the way. While the building blocks are important in DDD, Eric believes they are over-emphasized, so take all samples with a grain of salt. A: I'm not sure how complete it is, but I found the NDDD Sample on Google Code. A: A good read is Jimmy Nilsson's book (and blog, for that matter) Applying Domain-Driven Design. It's a mixture of Evans's and Fowler's books: Domain-Driven Design (Evans) and Patterns of Enterprise Application Architecture (Fowler). A: I know it is not C#, but this is a Java meta-framework that follows a domain-driven approach. I don't know much about it, but I'm willing to study it in the near future: Roma Framework A: http://sellandbuy.codeplex.com/ another DDD project A: I haven't used any myself, but there are some tools mentioned on the DDD Wikipedia page. Most of them seem to be implemented in Java though. http://en.wikipedia.org/wiki/Domain-driven_design#Software_tools_to_support_domain-driven_design A: Ok, I found this, but it's Java, not C#: http://timeandmoney.domainlanguage.com/ A: Code Camp Server, Jeffrey Palermo's sample code for the book ASP.NET MVC in Action, is open source and uses DDD. (Same as my answer in Good Domain Driven Design samples)
{ "language": "en", "url": "https://stackoverflow.com/questions/152120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Animation in C++ What are ways to draw animations in C++? GDI+? OpenGL? Would you recommend a class pattern in particular to get the drawing and redrawing done? Do you know of any open source project where animations are made so I can take a peek at the code? Where would you start if you wanted to code geometrical animations? Do you know of any good libraries? Post links to tutorials and any other interesting information... A: QT QGraphicsScene It was specifically designed to make writing 2D games easy and effortless, with great support for animation. QT is a very mature cross platform toolkit which also has an open source flavor. A: I'm a developer of openframeworks (openframeworks.cc / openframeworks.cc/download) and also, I teach a course about animation in c++ in ny, there are some code examples up now (and more over the next few months): http://makingthingsmove.org/blog there is some example code in OF here too: http://wiki.openframeworks.cc/index.php?title=OfAmsterdam and some processing animation code here: http://thesystemis.com/makingthingsmove that might be a helpful starting point. have fun ! zach A: Your question is a bit too open. There are tons of graphics libraries, a lot of them supporting animation. You don't even give the scope of your question. Since you mention GDI+, I suppose you want it for Windows, but there are good portable solutions, like SDL, Allegro, Cairo, etc. Lots of game frameworks can do that too. A: There are good lists of different libraries: http://www.twilightsembrace.com/personal/gamelibs.php http://www.thefreecountry.com/sourcecode/graphics.shtml A: As others have stated this is a very broad question. I would advise checking some computer graphics and game development books. They usually have the "easy to understand" material in that area. If you want to peek at code there are several open source game engines like Ogre3d, Nebuladevice and Irrlicht.
But peeking at that code without knowing what you are looking for is not recommended, at least by me. Graphics engines are usually huge and complex code bases. Try looking for game development tutorials on Google; you will find a lot of very simple examples. They usually do not reflect the exact same techniques used in full-fledged engines, but understanding those first will make it possible to understand the latter. A: I know of this: http://www.openframeworks.cc/ and this: http://www.contextfreeart.org/
{ "language": "en", "url": "https://stackoverflow.com/questions/152123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using Lucene to count results in categories I am trying to use Lucene Java 2.3.2 to implement search on a catalog of products. Apart from the regular fields for a product, there is a field called 'Category'. A product can fall in multiple categories. Currently, I use FilteredQuery to search for the same search term with every Category to get the number of results per category. This results in 20-30 internal search calls per query to display the results. This is slowing down the search considerably. Is there a faster way of achieving the same result using Lucene? A: Here's what I did, though it's a bit heavy on memory: What you need is to create in advance a bunch of BitSets, one for each category, containing the doc ids of all the documents in a category. Now, at search time you use a HitCollector and check the doc ids against the BitSets. Here's the code to create the bit sets: public BitSet[] getBitSets(IndexSearcher indexSearcher, Category[] categories) { BitSet[] bitSets = new BitSet[categories.length]; for(int i=0; i<categories.length; i++) { Query query = categories[i].getQuery(); final BitSet bitSet = new BitSet(); indexSearcher.search(query, new HitCollector() { public void collect(int doc, float score) { bitSet.set(doc); } }); bitSets[i] = bitSet; } return bitSets; } This is just one way to do this. You could probably use TermDocs instead of running a full search if your categories are simple enough, but this should only run once when you load the index anyway. Now, when it's time to count categories of search results you do this: public int[] getCategoryCount(IndexSearcher indexSearcher, Query query, final BitSet[] bitSets) { final int[] count = new int[bitSets.length]; indexSearcher.search(query, new HitCollector() { public void collect(int doc, float score) { for(int i=0; i<bitSets.length; i++) { if(bitSets[i].get(doc)) count[i]++; } } }); return count; } What you end up with is an array containing the count of every category within the search results.
If you also need the search results, you should add a TopDocCollector to your hit collector (yo dawg...). Or, you could just run the search again. 2 searches are better than 30. A: I don't have enough reputation to comment (!) but in Matt Quail's answer I'm pretty sure you could replace this: int numDocs = 0; td.seek(terms); while (td.next()) { numDocs++; } with this: int numDocs = terms.docFreq() and then get rid of the td variable altogether. This should make it even faster. A: You may want to consider looking through all the documents that match categories using a TermDocs iterator. This example code goes through each "Category" term, and then counts the number of documents that match that term. public static void countDocumentsInCategories(IndexReader reader) throws IOException { TermEnum terms = null; TermDocs td = null; try { terms = reader.terms(new Term("Category", "")); td = reader.termDocs(); do { Term currentTerm = terms.term(); if (!currentTerm.field().equals("Category")) { break; } int numDocs = 0; td.seek(terms); while (td.next()) { numDocs++; } System.out.println(currentTerm.field() + " : " + currentTerm.text() + " --> " + numDocs); } while (terms.next()); } finally { if (td != null) td.close(); if (terms != null) terms.close(); } } This code should run reasonably fast even for large indexes. Here is some code that tests that method: public static void main(String[] args) throws Exception { RAMDirectory store = new RAMDirectory(); IndexWriter w = new IndexWriter(store, new StandardAnalyzer()); addDocument(w, 1, "Apple", "fruit", "computer"); addDocument(w, 2, "Orange", "fruit", "colour"); addDocument(w, 3, "Dell", "computer"); addDocument(w, 4, "Cumquat", "fruit"); w.close(); IndexReader r = IndexReader.open(store); countDocumentsInCategories(r); r.close(); } private static void addDocument(IndexWriter w, int id, String name, String... 
categories) throws IOException { Document d = new Document(); d.add(new Field("ID", String.valueOf(id), Field.Store.YES, Field.Index.UN_TOKENIZED)); d.add(new Field("Name", name, Field.Store.NO, Field.Index.UN_TOKENIZED)); for (String category : categories) { d.add(new Field("Category", category, Field.Store.NO, Field.Index.UN_TOKENIZED)); } w.addDocument(d); } A: Sachin, I believe you want faceted search. It does not come out of the box with Lucene. I suggest you try using SOLR, that has faceting as a major and convenient feature. A: So let me see if I understand the question correctly: Given a query from the user, you want to show how many matches there are for the query in each category. Correct? Think of it like this: your query is actually originalQuery AND (category1 OR category2 or ...) except as well an overall score you want to get a number for each of the categories. Unfortunately the interface for collecting hits in Lucene is very narrow, only giving you an overall score for a query. But you could implement a custom Scorer/Collector. Have a look at the source for org.apache.lucene.search.DisjunctionSumScorer. You could copy some of that to write a custom scorer that iterates through category matches while your main search is going on. And you could keep a Map<String,Long> to keep track of matches in each category.
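Stripped of the Lucene API, the trick in the accepted answer is just: precompute one doc-id set per category, then intersect each with the query's result set in a single pass. A toy Python sketch of that idea, with documents and categories made up for illustration (mirroring the fruit/computer example in the other answer):

```python
def count_by_category(result_doc_ids, category_doc_ids):
    """Count query hits per category.

    category_doc_ids maps category name -> set of doc ids in that
    category (built once up front, like the BitSet[] above);
    result_doc_ids is the set of doc ids the current query matched.
    """
    hits = set(result_doc_ids)
    return {name: len(hits & ids) for name, ids in category_doc_ids.items()}

# Toy data: Apple=1, Orange=2, Dell=3, Cumquat=4
categories = {
    "fruit": {1, 2, 4},
    "computer": {1, 3},
    "colour": {2},
}
print(count_by_category({1, 2, 3}, categories))
# -> {'fruit': 2, 'computer': 2, 'colour': 1}
```

The set intersections are the moral equivalent of testing each doc id against the per-category BitSets inside the HitCollector.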
{ "language": "en", "url": "https://stackoverflow.com/questions/152127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Animated Gifs in Flex 3 Believe it or not I need a way of displaying animated GIFs in Flex 3. This guy has a component for sale but it's Flex 2 only: http://dougmccune.com/blog/2007/01/19/how-to-load-animated-gifs-using-adobe-flex-20/. And I’ve implemented this example: http://www.bytearray.org/?p=95, but for larger GIFs it’s very CPU intensive and causes the UI to hang. Does anyone know of any alternative solutions? Failing that, I may have to look at re-factoring the second example to use pseudo-threading. Eugh. A: I bought and used the McCune product but have now changed to Flex 3. I found your question by searching for the same answer as you and have just come across this http://flexology.wordpress.com/2008/09/30/loadinganimated_gif_in_flex/ A: Thanks for the response. The Google Code project in that link (http://code.google.com/p/as3gif/) is the same as the one on bytearray (http://www.bytearray.org/?p=95) which I implemented. I knocked up a quick pseudo-threading example using this code and it's far too slow. Looks like I will need to SWF the GIFs on the fly... A: Does this help? Edit: I've no idea how well that suggestion works on a larger GIF, but if you're still having issues, it might be worth importing the GIF into Flash and turning it into its own SWF. Flex should be more than able to play that without any issues. A: Check out swfmill for creating SWFs. I believe it supports animated GIFs http://swfmill.org/
{ "language": "en", "url": "https://stackoverflow.com/questions/152129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ActionUrl in ASP.NET MVC Preview 5 I don't need a link but rather only the href= part of the ActionLink. But if I call Html.ActionLink(...) I get an <a> tag back. Is there a way to just return the URL of the action without getting the <a> tag? A: Edit: in response to comment, now including parameters: <% =Html.BuildUrlFromExpression<YourController>(c => c.YourAction(parameter)) %> A: MVC also provides a UrlHelper class which can do the same thing: <%=Url.Action(actionName)%> <%=Url.Action(actionName, htmlValues)%> <%=Url.Action(actionName, controllerName, htmlValues)%>
{ "language": "en", "url": "https://stackoverflow.com/questions/152137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Java 1.4 synchronization: only allow one instance of method to run (non blocking)? I have a class providing translation utilities. The translations themselves should be reloaded every 30 minutes. I use Spring Timer support for that. Basically, my class looks like : public interface Translator { public void loadTranslations(); public String getTranslation(String key); } loadTranslations() can take pretty long to run, so while it is running the old translations are still available. This is done by loading the translations in a local Map and just changing the reference when all translations are loaded. My problem is : how do I make sure that when a thread is already loading translations and a second one also tries to run, the second one detects that and returns immediately, without starting a second update. A synchronized method will only queue the loads ... I'm still on Java 1.4, so no java.util.concurrent. Thanks for your help ! A: Use some form of locking mechanism to only perform the task if it is not already in progress. Acquiring the locking token must be a one-step process.
See: /** * @author McDowell */ public abstract class NonconcurrentTask implements Runnable { private boolean token = true; private synchronized boolean acquire() { boolean ret = token; token = false; return ret; } private synchronized void release() { token = true; } public final void run() { if (acquire()) { try { doTask(); } finally { release(); } } } protected abstract void doTask(); } Test code that will throw an exception if the task runs concurrently: public class Test { public static void main(String[] args) { final NonconcurrentTask shared = new NonconcurrentTask() { private boolean working = false; protected void doTask() { System.out.println("Working: " + Thread.currentThread().getName()); if (working) { throw new IllegalStateException(); } working = true; try { Thread.sleep(1000); } catch (InterruptedException e) { throw new RuntimeException(e); } if (!working) { throw new IllegalStateException(); } working = false; } }; Runnable taskWrapper = new Runnable() { public void run() { while (true) { try { Thread.sleep(100); } catch (InterruptedException e) { throw new RuntimeException(e); } shared.run(); } } }; for (int i = 0; i < 100; i++) { new Thread(taskWrapper).start(); } } } A: I am from a .NET background (no Java experience at all), but you could try a simple static flag of some sort that checks at the beginning of the method if it's already running. Then all you need to do is make sure any read/write of that flag is synchronized. So at the beginning check the flag: if it's not set, set it; if it is set, return. If it's not set, run the rest of the method, and after it's complete, unset it. Just make sure to put the code in a try/finally and the flag unsetting in the finally so it always gets unset in case of error. Very simplified, but may be all you need. Edit: This actually probably works better than synchronizing the method. Because do you really need a new translation immediately after the one before it finishes?
And you may not want to lock up a thread for too long if it has to wait a while. A: Keep a handle on the load thread to see if it's running? Or can't you just use a synchronized flag to indicate if a load is in progress? A: This is actually identical to the code that is required to manage the construction of a Singleton (gasp!) when done the classical way: if (instance == null) { synchronized { if (instance == null) { instance = new SomeClass(); } } } The inner test is identical to the outer test. The outer test is so that we don't routinely enter a synchronised block; the inner test is to confirm that the situation has not changed since we last made the test (the thread could have been preempted before entering synchronized). In your case: if (translationsNeedLoading()) { synchronized { if (translationsNeedLoading()) { loadTranslations(); } } } UPDATE: This way of constructing a singleton will not work reliably under your JDK 1.4. For explanation see here. However I think you will be OK in this scenario.
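The "run only if nobody else is already running" shape that these answers hand-roll exists in most languages. As a language-neutral illustration (obviously not usable on the asker's JDK 1.4), here is the same non-blocking pattern in Python, where a failed Lock.acquire(blocking=False) is the one-step token grab; the function and variable names are invented for the sketch:

```python
import threading

_reload_lock = threading.Lock()

def load_translations_once(do_load):
    """Run do_load() unless another thread is already running it.

    Returns True if this call did the work, False if it was skipped
    because a load was already in progress -- the non-blocking
    equivalent of the acquire()/release() token in the Java answer.
    """
    if not _reload_lock.acquire(blocking=False):
        return False  # someone else is loading; return immediately
    try:
        do_load()
        return True
    finally:
        _reload_lock.release()  # always release, even on error
```

As in the Java version, the release sits in a finally block so a failed load cannot leave the token permanently taken.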
{ "language": "en", "url": "https://stackoverflow.com/questions/152138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Setup wxWidget in Netbeans 6.1 C++ On MS Windows? I'm running NetBeans 6.1 with the C++ plugin and Cygwin (GCC compiler). How do I set up wxWidgets to work with it? A: http://www.daltonfilho.com/2008/02/23/wxwidgets-on-windows-using-netbeans-60-with-mingw-msys/ seems to work.
{ "language": "en", "url": "https://stackoverflow.com/questions/152140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using noweb on a large Java project Has anyone used the noweb literate programming tool on a large Java project, where several source code files must be generated in different subdirectories? How did you manage this with noweb? Are there any resources and/or best practices out there? A: Noweb will dump out files relative to the current working directory, or at the absolute path you specify. Just don't use * at the end of your filename (to avoid inserting the # preprocessor directives). I would recommend using %def with @ to show where you define and use names. <</path/to/file.java>>= reallyImportantVariable += 1; @ %def reallyImportantVariable noweb lets you reorder and (the real win) reuse snippets of code, which I don't think javac would understand. I'd agree that since most people expect that you'll use javadoc, you're probably swimming against the stream to use noweb. A: Literate Programming works best if the generated intermediate code can point back to the original source file to allow debugging and analyzing compiler errors. This usually means preprocessor support, which Java doesn't have. Additionally, Literate Programming is really not necessary for Java, as the original need for a strict sequential order - which was what prompted Knuth to write a tool to put snippets together in the appropriate sequence - is not present. The final benefit of literate programming, namely being able to write prose about the code, is also available as Javadoc, which allows you to put everything in as comments. To me, there is no benefit in literate programming for Java, and only trouble (just imagine getting IDE support). Any particular reason you are considering it?
{ "language": "en", "url": "https://stackoverflow.com/questions/152160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What authentication to pick for the cross-platform WCF service? What type of authentication would you suggest for the service that is: * *implemented as WCF and exposed via various endpoints (including XML-RPC) *has to be consumed easily by various cross-platform clients Why? Options that I'm aware of are: * *Forms-based authentication for IIS-hosted WCF (easy to implement, but has horrible cross-platform support, plus it is not REST) *Sending a plain-text username/password with every call (easy to use on any platform, but totally insecure) *Using ticket-based authentication, where the username & password are used to create a ticket that is valid for some time and is passed with every request (can be consumed by any client easily, but the API model is bound to this type of security) Thanks for your time! A: Since you mention REST, I assume over HTTP, you could look at HTTP Digest Authentication. However, keep in mind that XML-RPC is not RESTful. If you are going the way of WS/RPC, you might want to look at WS-Security. A: In the end I've picked the simplest approach: web services are implemented as simple stateless SOAP services, where username and password get passed with every request. Product page
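The ticket-based option above can be sketched in a few lines; this is a minimal, hedged illustration (the HMAC scheme, secret key, and "user|expiry|signature" ticket format are invented for the example and are not part of WCF or any particular framework; a real deployment would also need to protect the transport and handle usernames containing the delimiter):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class TicketAuthSketch {
    // Hypothetical server-side secret; never ship a hard-coded key.
    private static final String SECRET = "server-side-secret";

    // Issue a ticket of the form "username|expiryMillis|signature".
    public static String issueTicket(String username, long validForMillis) {
        long expiry = System.currentTimeMillis() + validForMillis;
        String payload = username + "|" + expiry;
        return payload + "|" + sign(payload);
    }

    // A ticket presented with a request is valid if the signature matches
    // (i.e. it was not tampered with) and the expiry is still in the future.
    public static boolean isValid(String ticket) {
        String[] parts = ticket.split("\\|");
        if (parts.length != 3) return false;
        String payload = parts[0] + "|" + parts[1];
        if (!sign(payload).equals(parts[2])) return false; // tampered
        try {
            return Long.parseLong(parts[1]) > System.currentTimeMillis();
        } catch (NumberFormatException e) {
            return false; // garbled expiry field
        }
    }

    private static String sign(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            return Base64.getEncoder().encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String ticket = issueTicket("alice", 60_000);
        System.out.println(isValid(ticket));                 // true
        System.out.println(isValid(ticket + "x"));           // false: bad signature
        System.out.println(isValid(issueTicket("bob", -1))); // false: already expired
    }
}
```

Because the ticket is self-describing and stateless, any client on any platform can hold it as an opaque string and replay it with each request, which is what makes this option attractive for cross-platform consumption.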
{ "language": "en", "url": "https://stackoverflow.com/questions/152187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can I create a desktop icon for a ClickOnce application? I have read in some of the ClickOnce posts that ClickOnce does not allow you to create a desktop icon for your application. Is there any way around this? A: In Visual Studio 2017 and 2019 you can do the following: Go to Project Properties -> Publish -> Manifests and select the option Create desktop shortcut A: It seems like there is a way to place an icon on the desktop in ClickOnce. * *Upgrade to Visual Studio 2008 SP1, and there will be a "place an icon on the desktop" check box in the Options page of the Publish section of the project properties window. *The second option is to add code to your application that copies the shortcut to the desktop on the first run of the application. See the blog post How to add Desktop Shortcut to ClickOnce Deployment Application. A: In Visual Studio 2005, ClickOnce does not have the ability to create a desktop icon, but it is now available in Visual Studio 2008 SP1. In Visual Studio 2005, you can use the following code to create a desktop icon for you when the application starts. I have used this code over several projects for a couple of months now without any problem. I must say that all my applications have been deployed over an intranet in a controlled environment. Also, the icon is not removed when the application is uninstalled. This code creates a shortcut to the shortcut on the Start menu that ClickOnce creates.
private void CreateDesktopIcon()
{
    ApplicationDeployment ad = ApplicationDeployment.CurrentDeployment;
    if (ad.IsFirstRun)
    {
        Assembly assembly = Assembly.GetEntryAssembly();
        string company = string.Empty;
        string description = string.Empty;

        if (Attribute.IsDefined(assembly, typeof(AssemblyCompanyAttribute)))
        {
            AssemblyCompanyAttribute ascompany =
                (AssemblyCompanyAttribute)Attribute.GetCustomAttribute(
                    assembly, typeof(AssemblyCompanyAttribute));
            company = ascompany.Company;
        }

        if (Attribute.IsDefined(assembly, typeof(AssemblyDescriptionAttribute)))
        {
            AssemblyDescriptionAttribute asdescription =
                (AssemblyDescriptionAttribute)Attribute.GetCustomAttribute(
                    assembly, typeof(AssemblyDescriptionAttribute));
            description = asdescription.Description;
        }

        if (!string.IsNullOrEmpty(company))
        {
            string desktopPath = string.Concat(
                Environment.GetFolderPath(Environment.SpecialFolder.Desktop),
                "\\", description, ".appref-ms");

            string shortcutName = string.Concat(
                Environment.GetFolderPath(Environment.SpecialFolder.Programs),
                "\\", company, "\\", description, ".appref-ms");

            System.IO.File.Copy(shortcutName, desktopPath, true);
        }
    }
}

A: The desktop icon can be a shortcut to the .application file. Install this as one of the first things your application does. A: If you would like to use PowerShell, you can create a shortcut to a .bat file:

@ECHO OFF
PowerShell -ExecutionPolicy Unrestricted .\script.ps1 >> "%TEMP%\StartupLog.txt" 2>&1
EXIT /B %errorlevel%

which silently runs script.ps1:

$app = "http://your.site/YourApp/YourApp.application";
[Diagnostics.Process]::Start("rundll32.exe", "dfshim.dll,ShOpenVerbApplication " + $app);

which opens your ClickOnce app.
{ "language": "en", "url": "https://stackoverflow.com/questions/152188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Java: StringBuffer & Concatenation I'm using StringBuffer in Java to concat strings together, like so:

StringBuffer str = new StringBuffer();
str.append("string value");

I would like to know if there's a method (although I didn't find anything from a quick glance at the documentation) or some other way to add "padding". Let me explain; every time I append something to the string, I want to add a space at the end, like so:

String foo = "string value";
str.append(foo + " ");

and I have several calls to append, and every time, I want to add a space. Is there a way to set the object up so that it will add a space automatically after each append?
EDIT -- String input

StringBuffer query = new StringBuffer();
Scanner scanner = new Scanner(System.in);
scanner.useDelimiter("\n");
String input;
do {
    System.out.println("sql> ");
    input = scanner.next();
    if (!empty(input))
        query.append(input);
    if (query.toString().trim().endsWith(";")) {
        // run query
    }
} while (!input.equalsIgnoreCase("exit"));

I'll use StringBuilder though as grom suggested, but that's how the code looks right now
A: You should be using StringBuilder. Where possible, it is recommended that this class be used in preference to StringBuffer as it will be faster under most implementations.
A: StringBuffer is final. You cannot derive from it. The best solution really is to add the padding yourself. Write a method for it and use a PADDING constant so that you can easily change it, or better, pass it as a parameter.
A: Can you not create a new class which wraps around StringBuffer and add an appendWithTrailingSpace() method?

CustomStringBuffer str = new CustomStringBuffer();
str.appendWithTrailingSpace("string value");

(Although you may want to call your method something a little shorter.)
A: Just add the space yourself, it's easy enough, as per your own example.
A: I think this is handled more easily either with a helper method (untested code):

public String myMethod() {
    StringBuilder sb = new StringBuilder();
    addToBuffer(sb, "Hello");
    addToBuffer(sb, "there,");
    addToBuffer(sb, "it");
    addToBuffer(sb, "works");
    return sb.toString();
}

private StringBuilder addToBuffer(StringBuilder sb, String what) {
    return sb.append(what).append(' '); // char is even faster here! ;)
}

Or even using a Builder pattern with a fluent interface (also untested code):

public String myMethod() {
    SBBuilder builder = new SBBuilder()
        .add("Hello").add("there")
        .add("it", "works", "just", "fine!");
    for (int i = 0; i < 10; i++) {
        builder.add("adding").add(String.valueOf(i));
    }
    return builder.build();
}

public static class SBBuilder {
    private StringBuilder sb = new StringBuilder();

    public SBBuilder add(String... parts) {
        for (String p : parts) {
            sb.append(p).append(' '); // char is even faster here! ;)
        }
        return this;
    }

    public String build() {
        return sb.toString();
    }
}

Here's an article on the subject. Hope it helps! :)
A: Another possibility is that StringBuilder objects return themselves when you call append, meaning you can do:

str.append("string value").append(" ");

Not quite as slick, but it is probably an easier solution than the + " " method. Another possibility is to build a wrapper class, like PaddedStringBuilder, that provides the same methods but applies the padding you want, since you can't inherit.
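For completeness, here is a minimal sketch of the PaddedStringBuilder wrapper idea mentioned above. The class name and API are hypothetical (nothing like it ships with the JDK); it simply delegates to a StringBuilder and appends the padding after every value:

```java
// Hypothetical wrapper: adds a configurable pad after every append.
public class PaddedStringBuilder {
    private final StringBuilder sb = new StringBuilder();
    private final String pad;

    public PaddedStringBuilder(String pad) {
        this.pad = pad;
    }

    // Appends the value followed by the padding; returns this for chaining.
    public PaddedStringBuilder append(String value) {
        sb.append(value).append(pad);
        return this;
    }

    @Override
    public String toString() {
        // Drop the trailing pad (fine as long as the pad is whitespace).
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        PaddedStringBuilder psb = new PaddedStringBuilder(" ");
        psb.append("SELECT *").append("FROM t").append("WHERE x = 1;");
        System.out.println(psb); // SELECT * FROM t WHERE x = 1;
    }
}
```

This keeps each call site down to a single append, which is exactly what the question asks for; the trade-off is that you lose the rest of the StringBuilder API unless you delegate those methods too.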
{ "language": "en", "url": "https://stackoverflow.com/questions/152190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Composite Oriented Programming (COP), .NET 4.0, MEF, and the Oslo Repository There seems to have been some interest over the past year around COP within the .NET community (ala Qi4j). A few folks have rolled their own COP frameworks (see links below), and it would appear .NET 4.0's dynamic dispatch and MEF might have a potential role in any .NET COP framework. On one hand a lot of this would appear to hark back to ideas from System/38 days (yes, I'm an old guy), though on the other it would also seem to be a pretty good fit with Oslo (Modeling and Repository). Can anyone comment on whether Microsoft is doing any work on COP? Some recent .NET COP framework efforts: Hendry Luk - Roll Your Own COP Yves GoEleven.com - Cop - Proof of concept Anders Norås - Trick or Trait? Composite Oriented Programming with C# Magnus Mårtensson - Composite Oriented Programming spike on Unity Application Block A: Aku - There is a considerable difference between the CAB / Composite WPF guidance and COP, which is a fundamentally different approach to the expression of object behavior via the assembly of 'fragments' based on [Domain] context. The appearance of Mixins, Concerns, Constraints, and SideEffects in .NET 4.0 variously might point in that direction, but I guess I'm more specifically curious if Microsoft is, by chance or in any way, formally "doing COP", and in particular on top of the Oslo repository. A: Can anyone comment on whether Microsoft is doing any work on COP? Microsoft released the Composite Application Block and Composite WPF, they have a DI framework (Unity), and now they are working on MEF. What should we comment here? A: Check MEF http://mef.codeplex.com, currently shipped inside .NET 4; more in PDC session http://microsoftpdc.com/Sessions/FT24
{ "language": "en", "url": "https://stackoverflow.com/questions/152196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }