Q: Scrollable regions in ActionScript 3 Visualization What is the best way to create several scrollable regions in an ActionScript 3 visualization that extends flash.display.Sprite and makes use of a hierarchy of low-level DisplayObjects (Sprites, Shapes, TextFields)? I have tried to use three mx.containers.Canvas objects added as children of the main Sprite and have also tried converting the main Sprite to a Canvas, but am unable to get anything to show up using either method. I have also tried adding my DisplayObjects using both Canvas.addChild and Canvas.rawChildren.addChild. Is it necessary/possible to rewrite the whole thing to use mx.* components, or is there a trick to displaying more primitive objects inside of a Canvas object? Here is some sample code for the way that works using Sprites. We would like to make _colSprite, _rowSprite and _mapSprite scroll with linked scroll bars. When I convert them to Canvas objects the code hangs silently before any display objects are drawn (at the addChild lines, if I recall correctly). Below is an excerpt of the code. This is all from a single ActionScript class that extends Sprite. 
Setting up three regions I wish to scroll: this._log("Creating Sprites"); this._colSprite = new Sprite(); this._colSprite.y=0; this._colSprite.x=this._rowLabelWidth + this._rowLabelRightPadding + this._horizontalPadding; this._rowSprite = new Sprite(); this._rowSprite.y=this._columnLabelHeight+this._columnLabelBottomPadding + this._verticalPadding; this._rowSprite.x=this._horizontalPadding; this._mapSprite = new Sprite(); this._mapSprite.y=this._columnLabelHeight+this._columnLabelBottomPadding+ this._verticalPadding; this._mapSprite.x=this._rowLabelWidth + this._rowLabelRightPadding+this._horizontalPadding; this._log("adding kids"); addChild(this._mapSprite); addChild(this._rowSprite); addChild(this._colSprite); Sample drawing function: private function _drawColumLabels(colStartIndex: int): void { for (var col : int = colStartIndex; col < myData.g.length; col++) { var colName : String = this.myData.g[col].label; var bottomLeftPoint : Object = this._getCellXYTopLeft(0, col); bottomLeftPoint.y = this._columnLabelHeight + this._verticalPadding; var centerX : int = Math.round(this._cellWidth / 2 + (this._fontHeight / 2) - 1); var colLabel : TextField = new TextField(); colLabel.defaultTextFormat = this._labelTextFormat; colLabel.width = this._columnLabelHeight+this._columnLabelBottomPadding; colLabel.text = colName; colLabel.embedFonts = true; var colSprite : Sprite = new Sprite(); colSprite.addChild(colLabel); colSprite.x = bottomLeftPoint.x; colSprite.y = bottomLeftPoint.y; colSprite.rotation = -45; this._colSprite.addChild(colSprite); } } A: After adding the children to each Canvas you may need to call Canvas.invalidateSize() (on each one) to get them to recalculate their sizing. Needing to do this depends on which stage in the Component Lifecycle you're adding the children - i.e. when you're calling '_drawColumLabels'. 
I presume you're wanting a scrollbar to appear on _colSprite (and _rowSprite) if there are more labels in it than can be displayed in its visible area? If this is the case you'll need to use something other than Sprite, like Canvas, as Sprite doesn't support scrolling. You may also want to debug the x/y/width/height values of each of your components to make sure they're what you expect - something I find helpful when doing layout is to draw the layout on paper and start writing sizes and coordinates so that I can see that my calculations are right.
{ "language": "en", "url": "https://stackoverflow.com/questions/163778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Fast Text Search Over Logs Here's the problem I'm having: I've got a set of logs that can grow fairly quickly. They're split into individual files every day, and the files can easily grow up to a gig in size. To help keep the size down, entries older than 30 days or so are cleared out. The problem is when I want to search these files for a certain string. Right now, a Boyer-Moore search is unfeasibly slow. I know that applications like dtSearch can provide a really fast search using indexing, but I'm not really sure how to implement that without taking up twice the space a log already takes up. Are there any resources I can check out that can help? I'm really looking for a standard algorithm that'll explain what I should do to build an index and use it to search. Edit: Grep won't work as this search needs to be integrated into a cross-platform application. There's no way I'll be able to swing including any external program into it. The way it works is that there's a web front end that has a log browser. This talks to a custom C++ web server backend. This server needs to search the logs in a reasonable amount of time. Currently searching through several gigs of logs takes ages. Edit 2: Some of these suggestions are great, but I have to reiterate that I can't integrate another application; it's part of the contract. But to answer some questions, the data in the logs is either received messages in a health-care-specific format or messages relating to these. I'm looking to rely on an index because while it may take up to a minute to rebuild the index, searching currently takes a very long time (I've seen it take up to 2.5 minutes). Also, a lot of the data IS discarded before even recording it. Unless some debug logging options are turned on, more than half of the log messages are ignored. 
The search basically goes like this: A user on the web form is presented with a list of the most recent messages (streamed from disk as they scroll, yay for ajax), usually, they'll want to search for messages with some information in it, maybe a patient id, or some string they've sent, and so they can enter the string into the search. The search gets sent asynchronously and the custom web server linearly searches through the logs 1MB at a time for some results. This process can take a very long time when the logs get big. And it's what I'm trying to optimize. A: grep usually works pretty well for me with big logs (sometimes 12G+). You can find a version for windows here as well. A: You'll most likely want to integrate some type of indexing search engine into your application. There are dozens out there, Lucene seems to be very popular. Check these two questions for some more suggestions: Best text search engine for integrating with custom web app? How do I implement Search Functionality in a website? A: Check out the algorithms that Lucene uses to do its thing. They aren't likely to be very simple, though. I had to study some of these algorithms once upon a time, and some of them are very sophisticated. If you can identify the "words" in the text you want to index, just build a large hash table of the words which maps a hash of the word to its occurrences in each file. If users repeat the same search frequently, cache the search results. When a search is done, you can then check each location to confirm the search term falls there, rather than just a word with a matching hash. Also, who really cares if the index is larger than the files themselves? If your system is really this big, with so much activity, is a few dozen gigs for an index the end of the world? A: More details on the kind of search you're performing could definitely help. Why, in particular, do you want to rely on an index, since you'll have to rebuild it every day when the logs roll over? 
What kind of information is in these logs? Can some of it be discarded before it is ever even recorded? How long are these searches taking now? A: You may want to check out the source for BSD grep. You may not be able to rely on grep being there for you, but nothing says you can't recreate similar functionality, right? A: Splunk is great for searching through lots of logs. May be overkill for your purpose. You pay according to the amount of data (size of the logs) you want to process. I'm pretty sure they have an API so you don't have to use their front-end if you don't want to.
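The word-index idea in one of the answers above (map words to their occurrences, then confirm each hit against the raw text) can be sketched in a few lines of Python. This is a minimal in-memory illustration, not the poster's C++ backend; the `logs` dict, the tokenizer, and the function names are all illustrative:

```python
import re
from collections import defaultdict

def build_index(files):
    """Map each lowercased word to the (filename, offset) pairs where it occurs."""
    index = defaultdict(set)
    for name, text in files.items():
        for match in re.finditer(r"\w+", text.lower()):
            index[match.group()].add((name, match.start()))
    return index

def search(index, files, term):
    """Look the term up in the index, then confirm each hit against the raw text."""
    hits = []
    for name, offset in index.get(term.lower(), ()):
        if files[name][offset:offset + len(term)].lower() == term.lower():
            hits.append((name, offset))
    return sorted(hits)

logs = {"app.log": "patient 42 admitted", "web.log": "patient 42 discharged"}
idx = build_index(logs)
print(search(idx, logs, "patient"))  # hits in both files at offset 0
```

On gig-sized logs the index would live on disk and be rebuilt as files roll over, but the lookup-then-confirm structure stays the same, and the index only needs updating for the file that changed.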
{ "language": "en", "url": "https://stackoverflow.com/questions/163783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Program to analyze a lot of XMLs I have a lot of XML files and I'd like to generate a report from them. The report should provide information such as: root 100% a*1 90% b*1 80% c*5 40% meaning that all documents have a root element, 90% have one a element in the root, 80% have one b element in the root, 40% have 5 c elements in b. If for example some documents have 4 c elements, some 5 and some 6, it should say something like: c*4.3 4 6 40% meaning that 40% have between 4 and 6 c elements there, and the average is 4.3. I am looking for free software; if it doesn't exist I'll write it. I was about to do it, but I thought I'd check first. I may not be the first one to have to analyze and get a structural overview of thousands of XML files. A: Check out Gadget (source: mit.edu) A: Here's an XSLT 2.0 method. Assuming that $docs contains a sequence of document nodes that you want to scan, you want to create one line for each element that appears in the documents. You can use <xsl:for-each-group> to do that: <xsl:for-each-group select="$docs//*" group-by="name()"> <xsl:sort select="current-grouping-key()" /> <xsl:variable name="name" as="xs:string" select="current-grouping-key()" /> <xsl:value-of select="$name" /> ... </xsl:for-each-group> Then you want to find out the stats for that element amongst the documents. First, find the documents that have an element of that name in them: <xsl:variable name="docs-with" as="document-node()+" select="$docs[//*[name() = $name]]" /> Second, you need a sequence of the number of elements of that name in each of the documents: <xsl:variable name="elem-counts" as="xs:integer+" select="$docs-with/count(//*[name() = $name])" /> And now you can do the calculations. Average, minimum and maximum can be calculated with the avg(), min() and max() functions. The percentage is simply the number of documents that contain the element divided by the total number of documents, formatted. 
Putting that together: <xsl:for-each-group select="$docs//*" group-by="name()"> <xsl:sort select="current-grouping-key()" /> <xsl:variable name="name" as="xs:string" select="current-grouping-key()" /> <xsl:variable name="docs-with" as="document-node()+" select="$docs[//*[name() = $name]]" /> <xsl:variable name="elem-counts" as="xs:integer+" select="$docs-with/count(//*[name() = $name])" /> <xsl:value-of select="$name" /> <xsl:text>* </xsl:text> <xsl:value-of select="format-number(avg($elem-counts), '#,##0.0')" /> <xsl:text> </xsl:text> <xsl:value-of select="format-number(min($elem-counts), '#,##0')" /> <xsl:text> </xsl:text> <xsl:value-of select="format-number(max($elem-counts), '#,##0')" /> <xsl:text> </xsl:text> <xsl:value-of select="format-number((count($docs-with) div count($docs)) * 100, '#0')" /> <xsl:text>%</xsl:text> <xsl:text>&#xA;</xsl:text> </xsl:for-each-group> What I haven't done here is indent the lines according to the depth of the element. I've just ordered the elements alphabetically to give you statistics. Two reasons for that: first, it's significantly harder (like too involved to write here) to display the element statistics in some kind of structure that reflects how they appear in the documents, not least because different documents may have different structures. Second, in many markup languages, the precise structure of the documents can't be known (because, for example, sections can nest within sections to any depth). I hope it's useful nonetheless. UPDATE: Need the XSLT wrapper and some instructions for running XSLT? OK. First, get your hands on Saxon 9B. You'll need to put all the files you want to analyse in a directory. Saxon allows you to access all the files in that directory (or its subdirectories) as a collection, using a special URI syntax. It's worth having a look at that syntax if you want to search recursively or filter the files that you're looking at by their filename. 
Now the full XSLT: <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs"> <xsl:param name="dir" as="xs:string" select="'file:///path/to/default/directory?select=*.xml'" /> <xsl:output method="text" /> <xsl:variable name="docs" as="document-node()*" select="collection($dir)" /> <xsl:template name="main"> <xsl:for-each-group select="$docs//*" group-by="name()"> <xsl:sort select="current-grouping-key()" /> <xsl:variable name="name" as="xs:string" select="current-grouping-key()" /> <xsl:variable name="docs-with" as="document-node()+" select="$docs[//*[name() = $name]]" /> <xsl:variable name="elem-counts" as="xs:integer+" select="$docs-with/count(//*[name() = $name])" /> <xsl:value-of select="$name" /> <xsl:text>* </xsl:text> <xsl:value-of select="format-number(avg($elem-counts), '#,##0.0')" /> <xsl:text> </xsl:text> <xsl:value-of select="format-number(min($elem-counts), '#,##0')" /> <xsl:text> </xsl:text> <xsl:value-of select="format-number(max($elem-counts), '#,##0')" /> <xsl:text> </xsl:text> <xsl:value-of select="format-number((count($docs-with) div count($docs)) * 100, '#0')" /> <xsl:text>%</xsl:text> <xsl:text>&#xA;</xsl:text> </xsl:for-each-group> </xsl:template> </xsl:stylesheet> And to run it you would do something like: > java -jar path/to/saxon.jar -it:main -o:report.txt dir=file:///path/to/your/directory?select=*.xml This tells Saxon to start the process with the template named main, to set the dir parameter to file:///path/to/your/directory?select=*.xml and send the output to report.txt. A: Beautiful Soup makes parsing XML trivial in Python. 
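In the same spirit as the XSLT and Beautiful Soup suggestions above, a minimal Python sketch of the element census can be written with the standard library alone. The sample documents and the flat `tag*avg pct` output are simplified stand-ins for the report the question asks for (no nesting or min/max shown):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def element_stats(xml_strings):
    """For each tag, return (average count per containing document,
    fraction of documents that contain the tag)."""
    docs_with = Counter()     # tag -> number of documents containing it
    total_counts = Counter()  # tag -> total occurrences across all documents
    for xml in xml_strings:
        counts = Counter(el.tag for el in ET.fromstring(xml).iter())
        docs_with.update(counts.keys())   # +1 per tag present in this document
        total_counts.update(counts)
    n = len(xml_strings)
    return {tag: (total_counts[tag] / docs_with[tag], docs_with[tag] / n)
            for tag in docs_with}

docs = ["<root><a/><b/><c/><c/></root>", "<root><a/></root>"]
for tag, (avg, pct) in sorted(element_stats(docs).items()):
    print(f"{tag}*{avg:.1f} {pct:.0%}")
# a*1.0 100%
# b*1.0 50%
# c*2.0 50%
# root*1.0 100%
```

For thousands of real files, the loop would read documents with ET.parse one at a time rather than hold them all in memory.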
A: [community post, here: no karma involved;) ] I propose a code-challenge here: parse all the XML found at xmlfiles.com/examples and try to come up with the following output: Analyzing plant_catalog.xml: Analyzing note.xml: Analyzing portfolio.xml: Analyzing note_ex_dtd.xml: Analyzing home.xml: Analyzing simple.xml: Analyzing cd_catalog.xml: Analyzing portfolio_xsl.xml: Analyzing note_in_dtd.xml: Statistical Elements Analysis of 9 xml documents with 34 elements CATALOG*2 22% CD*26 50% ARTIST*26 100% COMPANY*26 100% COUNTRY*26 100% PRICE*26 100% TITLE*26 100% YEAR*26 100% PLANT*36 50% AVAILABILITY*36 100% BOTANICAL*36 100% COMMON*36 100% LIGHT*36 100% PRICE*36 100% ZONE*36 100% breakfast-menu*1 11% food*5 100% calories*5 100% description*5 100% name*5 100% price*5 100% note*3 33% body*1 100% from*1 100% heading*1 100% to*1 100% page*1 11% para*1 100% title*1 100% portfolio*2 22% stock*2 100% name*2 100% price*2 100% symbol*2 100% A: Here is a possible solution in Ruby to this code-challenge... Since it is my very first Ruby program, I am sure it is quite terribly coded, but at least it may answer J. Pablo Fernandez's question. Copy-paste it into a .rb file and call ruby on it. If you have an Internet connection, it will work ;) require "rexml/document" require "net/http" require "iconv" include REXML class NodeAnalyzer @@fullPathToFilesToSubNodesNamesToCardinalities = Hash.new() @@fullPathsToFiles = Hash.new() #list of files in which a fullPath node is detected @@fullPaths = Array.new # all fullpaths sorted alphabetically attr_reader :name, :father, :subNodesAnalyzers, :indent, :file, :subNodesNamesToCardinalities def initialize(aName="", aFather=nil, aFile="") @name = aName; @father = aFather; @subNodesAnalyzers = []; @file = aFile @subNodesNamesToCardinalities = Hash.new(0) if aFather && !aFather.name.empty? 
then @indent = " " else @indent = "" end if aFather @indent = @father.indent + self.indent @father.subNodesAnalyzers << self @father.updateSubNodesNamesToCardinalities(@name) end end @@nodesRootAnalyzer = NodeAnalyzer.new def NodeAnalyzer.nodesRootAnalyzer return @@nodesRootAnalyzer end def updateSubNodesNamesToCardinalities(aSubNodeName) aSubNodeCardinality = @subNodesNamesToCardinalities[aSubNodeName] @subNodesNamesToCardinalities[aSubNodeName] = aSubNodeCardinality + 1 end def NodeAnalyzer.recordNode(aNodeAnalyzer) if aNodeAnalyzer.fullNodePath.empty? == false if @@fullPaths.include?(aNodeAnalyzer.fullNodePath) == false then @@fullPaths << aNodeAnalyzer.fullNodePath end # record a full path in regard to its xml file (records it only one for a given xlm file) someFiles = @@fullPathsToFiles[aNodeAnalyzer.fullNodePath] if someFiles == nil someFiles = Array.new(); @@fullPathsToFiles[aNodeAnalyzer.fullNodePath] = someFiles; end if !someFiles.include?(aNodeAnalyzer.file) then someFiles << aNodeAnalyzer.file end end #record cardinalties of sub nodes for a given xml file someFilesToSubNodesNamesToCardinalities = @@fullPathToFilesToSubNodesNamesToCardinalities[aNodeAnalyzer.fullNodePath] if someFilesToSubNodesNamesToCardinalities == nil someFilesToSubNodesNamesToCardinalities = Hash.new(); @@fullPathToFilesToSubNodesNamesToCardinalities[aNodeAnalyzer.fullNodePath] = someFilesToSubNodesNamesToCardinalities ; end someSubNodesNamesToCardinalities = someFilesToSubNodesNamesToCardinalities[aNodeAnalyzer.file] if someSubNodesNamesToCardinalities == nil someSubNodesNamesToCardinalities = Hash.new(0); someFilesToSubNodesNamesToCardinalities[aNodeAnalyzer.file] = someSubNodesNamesToCardinalities someSubNodesNamesToCardinalities.update(aNodeAnalyzer.subNodesNamesToCardinalities) else aNodeAnalyzer.subNodesNamesToCardinalities.each() do |aSubNodeName, aCardinality| someSubNodesNamesToCardinalities[aSubNodeName] = someSubNodesNamesToCardinalities[aSubNodeName] + aCardinality end end 
#puts "someSubNodesNamesToCardinalities for #{aNodeAnalyzer.fullNodePath}: #{someSubNodesNamesToCardinalities}" end def file #if @file.empty? then @father.file else return @file end if @file.empty? then if @father != nil then return @father.file else return '' end else return @file end end def fullNodePath if @father == nil then return '' elsif @father.name.empty? then return @name else return @father.fullNodePath+"/"+@name end end def to_s s = "" if @name.empty? == false s = "#{@indent}#{self.fullNodePath} [#{self.file}]\n" end @subNodesAnalyzers.each() do |aSubNodeAnalyzer| s = s + aSubNodeAnalyzer.to_s end return s end def NodeAnalyzer.displayStats(aFullPath="") s = ""; if aFullPath.empty? then s = "Statistical Elements Analysis of #{@@nodesRootAnalyzer.subNodesAnalyzers.length} xml documents with #{@@fullPaths.length} elements\n" end someFullPaths = @@fullPaths.sort someFullPaths.each do |aFullPath| s = s + getIndentedNameFromFullPath(aFullPath) + "*" nbFilesWithThatFullPath = getNbFilesWithThatFullPath(aFullPath); aParentFullPath = getParentFullPath(aFullPath) nbFilesWithParentFullPath = getNbFilesWithThatFullPath(aParentFullPath); aNameFromFullPath = getNameFromFullPath(aFullPath) someFilesToSubNodesNamesToCardinalities = @@fullPathToFilesToSubNodesNamesToCardinalities[aParentFullPath] someCardinalities = Array.new() someFilesToSubNodesNamesToCardinalities.each() do |aFile, someSubNodesNamesToCardinalities| aCardinality = someSubNodesNamesToCardinalities[aNameFromFullPath] if aCardinality > 0 && someCardinalities.include?(aCardinality) == false then someCardinalities << aCardinality end end if someCardinalities.length == 1 s = s + someCardinalities.to_s + " " else anAvg = someCardinalities.inject(0) {|sum,value| Float(sum) + Float(value) } / Float(someCardinalities.length) s = s + sprintf('%.1f', anAvg) + " " + someCardinalities.min.to_s + "..." 
+ someCardinalities.max.to_s + " " end s = s + sprintf('%d', Float(nbFilesWithThatFullPath) / Float(nbFilesWithParentFullPath) * 100) + '%' s = s + "\n" end return s end def NodeAnalyzer.getNameFromFullPath(aFullPath) if aFullPath.include?("/") == false then return aFullPath end aNameFromFullPath = aFullPath.dup aNameFromFullPath[/^(?:[^\/]+\/)+/] = "" return aNameFromFullPath end def NodeAnalyzer.getIndentedNameFromFullPath(aFullPath) if aFullPath.include?("/") == false then return aFullPath end anIndentedNameFromFullPath = aFullPath.dup anIndentedNameFromFullPath = anIndentedNameFromFullPath.gsub(/[^\/]+\//, " ") return anIndentedNameFromFullPath end def NodeAnalyzer.getParentFullPath(aFullPath) if aFullPath.include?("/") == false then return "" end aParentFullPath = aFullPath.dup aParentFullPath[/\/[^\/]+$/] = "" return aParentFullPath end def NodeAnalyzer.getNbFilesWithThatFullPath(aFullPath) if aFullPath.empty? return @@nodesRootAnalyzer.subNodesAnalyzers.length else return @@fullPathsToFiles[aFullPath].length; end end end class REXML::Document def analyze(node, aFatherNodeAnalyzer, aFile="") anNodeAnalyzer = NodeAnalyzer.new(node.name, aFatherNodeAnalyzer, aFile) node.elements.each() do |aSubNode| analyze(aSubNode, anNodeAnalyzer) end NodeAnalyzer.recordNode(anNodeAnalyzer) end end begin anXmlFilesDirectory = "xmlfiles.com/examples/" anXmlFilesRegExp = Regexp.new("http:\/\/" + anXmlFilesDirectory + "([^\"]*)") a = Net::HTTP.get(URI("http://www.google.fr/search?q=site:"+anXmlFilesDirectory+"+filetype:xml&num=100&as_qdr=all&filter=0")) someXmlFiles = a.scan(anXmlFilesRegExp) someXmlFiles.each() do |anXmlFile| anXmlFileContent = Net::HTTP.get(URI("http://" + anXmlFilesDirectory + anXmlFile.to_s)) anUTF8XmlFileContent = Iconv.conv("ISO-8859-1//ignore", 'UTF-8', anXmlFileContent).gsub(/\s+encoding\s*=\s*\"[^\"]+\"\s*\?/,"?") anXmlDocument = Document.new(anUTF8XmlFileContent) puts "Analyzing #{anXmlFile}: #{NodeAnalyzer.nodesRootAnalyzer.name}" 
anXmlDocument.analyze(anXmlDocument.root,NodeAnalyzer.nodesRootAnalyzer, anXmlFile.to_s) end NodeAnalyzer.recordNode(NodeAnalyzer.nodesRootAnalyzer) puts NodeAnalyzer.displayStats end A: Go with JeniT's answer - she's one of the first XSLT gurus I started learning from back in '02. To really appreciate the power of XML you should work with XPath and XSLT and learn to manipulate the nodes.
{ "language": "en", "url": "https://stackoverflow.com/questions/163796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Tutorial for creating rails models and scaffolds with foreign key relationships Where can I find a good Rails newbie-friendly reference about generating rails models with foreign key relationships? I've found some pages indicating that you should add has_many and belongs_to to the relevant models to specify these relationships, but haven't seen any instructions for getting the scaffolds to generate the correct controller and view code that would reflect these relationships. A: http://ruby.railstutorial.org/ruby-on-rails-tutorial-book - check Chapters 11 and 12 for Rails 3 and Rails 3.2. I hope you like those chapters; they cover foreign key relationships very nicely. A: Look at this question: Rails 3.1: Any tutorials for deeply nested models? Also take a look at the nested_form gem and its documentation: http://rubydoc.info/gems/nested_form/0.1.1/frames. Usage is pretty simple. A: It's not a tutorial as such, but I find this page to be very useful when trying to figure out what my Rails relationships should be. It's also an "official" guide so it's likely to be maintained. http://guides.rubyonrails.org/association_basics.html A: There are a bunch of StackOverflow questions asking for newbie reference materials. I recommend that you start with the two Peepcode screencasts: Rails From Scratch Part I and Rails From Scratch Part II. They do a great job of visually introducing you to Rails 2 development. Then, I'd recommend you pick up the Rails 2.1 PDF by Ryan Daigle, to get the hang of the 2.1 features not covered in the screencasts. I'm not sure what you're driving at with your question. What are you expecting the scaffolding to do? Create multi-object relationship links automatically? That's something you have to start layering in yourself, and as you do so, the scaffolding starts to be replaced with a real application. The scaffolding is just a starting point: it's not meant to guess what your inter-object relationships are going to look like in the application.
{ "language": "en", "url": "https://stackoverflow.com/questions/163797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I select a .Net application configuration file from a command line parameter? I would like to override the use of the standard app.config by passing a command line parameter. How do I change the default application configuration file so that when I access ConfigurationManager.AppSettings I am accessing the config file specified on the command line? Edit: It turns out that the correct way to load a config file that is different than the name of the EXE plus .config is to use OpenMappedExeConfiguration. E.g. ExeConfigurationFileMap configFile = new ExeConfigurationFileMap(); configFile.ExeConfigFilename = Path.Combine(Environment.CurrentDirectory, "Shell2.exe.config"); currentConfiguration = ConfigurationManager.OpenMappedExeConfiguration(configFile,ConfigurationUserLevel.None); This partially works. I can see all of the keys in the appSettings section but all the values are null. A: A batch file that copies your desired configuration file to appname.exe.config and then runs the appname.exe. A: So here is the code that actually allows me to actually access the appSettings section in a config file other than the default one. ExeConfigurationFileMap configFile = new ExeConfigurationFileMap(); configFile.ExeConfigFilename = Path.Combine(Environment.CurrentDirectory, "Alternate.config"); Configuration config = ConfigurationManager.OpenMappedExeConfiguration(configFile,ConfigurationUserLevel.None); AppSettingsSection section = (AppSettingsSection)config.GetSection("appSettings"); string MySetting = section.Settings["MySetting"].Value; A: This is not exactly what you are wanting... to redirect the actual ConfigurationManager static object to point at a different path. But I think it is the right solution to your problem. Check out the OpenExeConfiguration method on the ConfigurationManager class. 
If the above method is not what you are looking for I think it would also be worth taking a look at using the Configuration capabilities of the Enterprise Library framework (developed and maintained by the Microsoft Patterns & Practices team). Specifically take a look at the FileConfigurationSource class. Here is some code that highlights the use of the FileConfigurationSource from Enterprise Library; I believe this fully meets your goals. The only assembly you need from Ent Lib for this is Microsoft.Practices.EnterpriseLibrary.Common.dll. static void Main(string[] args) { //read from current app.config as default AppSettingsSection ass = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None).AppSettings; //if args[0] is a valid file path assume it's a config for this example and attempt to load if (args.Length > 0 && File.Exists(args[0])) { //using FileConfigurationSource from Enterprise Library FileConfigurationSource fcs = new FileConfigurationSource(args[0]); ass = (AppSettingsSection) fcs.GetSection("appSettings"); } //print value from configuration Console.WriteLine(ass.Settings["test"].Value); Console.ReadLine(); //pause } A: I needed to do this for an app of mine as well, and dealing with the standard config objects turned into such a freakin' hassle for such a simple concept that I went this route: (1) keep multiple config files in XML format similar to app.config; (2) load the specified config file into a DataSet (via .ReadXML), and use the DataTable with the config info in it as my Configuration object; (3) all my code then just deals with the Configuration DataTable to retrieve values and not that craptastically obfuscated app config object. Then I can pass in whatever config filename I need on the command line, and if one isn't there - just load app.config into the DataSet. Jeezus it was sooo much simpler after that. 
:-) Ron A: This is the relevant part of the source for an app that uses the default config and accepts an override via the command line: Get current or user config into the Config object Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); string defCfgName = Environment.GetCommandLineArgs()[0] + ".config"; if (args.Length != 0) { string ConfigFileName = args[0]; if (!File.Exists(ConfigFileName)) Fatal("File doesn't exist: " + ConfigFileName, -1); config = ConfigurationManager.OpenMappedExeConfiguration(new ExeConfigurationFileMap { ExeConfigFilename = ConfigFileName }, ConfigurationUserLevel.None); } else if (!File.Exists(defCfgName)) Fatal("Default configuration file doesn't exist and no override is set.", -1); Use the config object AppSettingsSection s = (AppSettingsSection)config.GetSection("appSettings"); KeyValueConfigurationCollection a = s.Settings; ConnectionString = a["ConnectionString"].Value;
{ "language": "en", "url": "https://stackoverflow.com/questions/163803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Smart pagination algorithm I'm looking for an example algorithm of smart pagination. By smart, what I mean is that I only want to show, for example, 2 adjacent pages to the current page, so instead of ending up with a ridiculously long page list, I truncate it. Here's a quick example to make it clearer... this is what I have now: Pages: 1 2 3 4 [5] 6 7 8 9 10 11 This is what I want to end up with: Pages: ... 3 4 [5] 6 7 ... (In this example, I'm only showing 2 adjacent pages to the current page) I'm implementing it in PHP/Mysql, and the "basic" pagination (no truncating) is already coded; I'm just looking for an example to optimize it... It can be an example in any language, as long as it gives me an idea as to how to implement it... A: Here is some code based on original code from this very old link. It uses markup compatible with Bootstrap's pagination component, and outputs page links like this: [1] 2 3 4 5 6 ... 100 1 [2] 3 4 5 6 ... 100 ... 1 2 ... 14 15 [16] 17 18 ... 100 ... 1 2 ... 97 [98] 99 100 <?php // How many adjacent pages should be shown on each side? $adjacents = 3; //how many items to show per page $limit = 5; // if no page var is given, default to 1. $page = (int)($_GET["page"] ?? 1); //first item to display on this page $start = ($page - 1) * $limit; /* Get data. */ $data = $db ->query("SELECT * FROM mytable LIMIT $start, $limit") ->fetchAll(); /* Total row count (not just this page's rows) drives the page count. */ $total_rows = (int)$db->query("SELECT COUNT(*) FROM mytable")->fetchColumn(); /* Setup page vars for display. */ $prev = $page - 1; $next = $page + 1; $lastpage = ceil($total_rows / $limit); //last page minus 1 $lpm1 = $lastpage - 1; $first_pages = "<li class='page-item'><a class='page-link' href='?page=1'>1</a></li>" . "<li class='page-item'><a class='page-link' href='?page=2'>2</a></li>"; $ellipsis = "<li class='page-item disabled'><span class='page-link'>...</span></li>"; $last_pages = "<li class='page-item'><a class='page-link' href='?page=$lpm1'>$lpm1</a></li>" . 
"<li class='page-item'><a class='page-link' href='?page=$lastpage'>$lastpage</a></li>"; $pagination = "<nav aria-label='page navigation'>"; $pagination .= "<ul class='pagination'>"; //previous button $disabled = ($page === 1) ? "disabled" : ""; $pagination .= "<li class='page-item $disabled'><a class='page-link' href='?page=$prev'>« previous</a></li>"; //pages //not enough pages to bother breaking it up if ($lastpage < 7 + ($adjacents * 2)) { for ($i = 1; $i <= $lastpage; $i++) { $active = $i === $page ? "active" : ""; $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>"; } } elseif($lastpage > 5 + ($adjacents * 2)) { //enough pages to hide some //close to beginning; only hide later pages if($page < 1 + ($adjacents * 2)) { for ($i = 1; $i < 4 + ($adjacents * 2); $i++) { $active = $i === $page ? "active" : ""; $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>"; } $pagination .= $ellipsis; $pagination .= $last_pages; } elseif($lastpage - ($adjacents * 2) > $page && $page > ($adjacents * 2)) { //in middle; hide some front and some back $pagination .= $first_pages; $pagination .= $ellipsis; for ($i = $page - $adjacents; $i <= $page + $adjacents; $i++) { $active = $i === $page ? "active" : ""; $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>"; } $pagination .= $ellipsis; $pagination .= $last_pages; } else { //close to end; only hide early pages $pagination .= $first_pages; $pagination .= $ellipsis; for ($i = $lastpage - (2 + ($adjacents * 2)); $i <= $lastpage; $i++) { $active = $i === $page ? "active" : ""; $pagination .= "<li class='page-item $active'><a class='page-link' href='?page=$i'>$i</a></li>"; } } } //next button $disabled = ($page === $lastpage) ? 
"disabled" : ""; $pagination .= "<li class='page-item $disabled'><a class='page-link' href='?page=$next'>next »</a></li>"; $pagination .= "</ul></nav>"; if($lastpage <= 1) { $pagination = ""; } echo $pagination; foreach ($data as $row) { // display your data } echo $pagination; A: List<int> pages = new List<int>(); int pn = 2; //example of actual page number int total = 8; for(int i = pn - 9; i <= pn + 9; i++) { if(i < 1) continue; if(i > total) break; pages.Add(i); } return pages; A: I made a pagination class and put it on Google Code a while ago. Check it out; it's pretty simple: http://code.google.com/p/spaceshipcollaborative/wiki/PHPagination $paging = new Pagination(); $paging->set('urlscheme','class.pagination.php?page=%page%'); $paging->set('perpage',10); $paging->set('page',15); $paging->set('total',3000); $paging->set('nexttext','Next Page'); $paging->set('prevtext','Previous Page'); $paging->set('focusedclass','selected'); $paging->set('delimiter',''); $paging->set('numlinks',9); $paging->display(); A: Kinda late =), but here is my go at it: function Pagination($data, $limit = null, $current = null, $adjacents = null) { $result = array(); if (isset($data, $limit) === true) { $result = range(1, ceil($data / $limit)); if (isset($current, $adjacents) === true) { if (($adjacents = floor($adjacents / 2) * 2 + 1) >= 1) { $result = array_slice($result, max(0, min(count($result) - $adjacents, intval($current) - ceil($adjacents / 2))), $adjacents); } } } return $result; } Example: $total = 1024; $per_page = 10; $current_page = 2; $adjacent_links = 4; print_r(Pagination($total, $per_page, $current_page, $adjacent_links)); Output (@ Codepad): Array ( [0] => 1 [1] => 2 [2] => 3 [3] => 4 [4] => 5 ) Another example: $total = 1024; $per_page = 10; $current_page = 42; $adjacent_links = 4; print_r(Pagination($total, $per_page, $current_page, $adjacent_links)); Output (@ Codepad): Array ( [0] => 40 [1] => 41 [2] => 42 [3] => 43 [4] => 44 ) A: I started from lazaro's post 
and tried to make a robust and light algorithm with JavaScript/jQuery... No additional and/or bulky pagination libraries needed... See the fiddle for a live example: http://jsfiddle.net/97JtZ/1/ var totalPages = 50, buttons = 5; var currentPage = lowerLimit = upperLimit = Math.min(9, totalPages); //Search boundaries for (var b = 1; b < buttons && b < totalPages;) { if (lowerLimit > 1 ) { lowerLimit--; b++; } if (b < buttons && upperLimit < totalPages) { upperLimit++; b++; } } //Do output to a html element for (var i = lowerLimit; i <= upperLimit; i++) { if (i == currentPage) $('#pager').append('<li>' + i + '</li> '); else $('#pager').append('<a href="#"><li><em>' + i + '</em></li></a> '); } A: I would use something simple on the page you are showing the paginator, like: if ( $page_number == 1 || $page_number == $last_page || $page_number == $actual_page || $page_number == $actual_page+1 || $page_number == $actual_page+2 || $page_number == $actual_page-1 || $page_number == $actual_page-2 ) echo $page_number; You can adapt it to show each 10 or so pages with the % operator... I think using switch() case would be better in this case, I just don't remember the syntax right now. Keep it simple :) A: If it's possible to generate the pagination on the client, I would suggest my new Pagination plugin: http://www.xarg.org/2011/09/jquery-pagination-revised/ The solution to your question would be: $("#pagination").paging(1000, { // Your number of elements format: '. - nncnn - ', // Format to get Pages: ... 3 4 [5] 6 7 ... onSelect: function (page) { // add code which gets executed when user selects a page }, onFormat: function (type) { switch (type) { case 'block': // n and c return '<a>' + this.value + '</a>'; case 'fill': // - return '...'; case 'leap': // . return 'Pages:'; } } }); A: The code of the CodeIgniter pagination class can be found on GitHub. What you call "smart pagination" can be achieved by configuration: 
$config['num_links'] = 2; The number of "digit" links you would like before and after the selected page number. For example, the number 2 will place two digits on either side, as in the example links at the very top of this page.
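Language aside, every answer above computes the same window of pages around the current one, plus optional leading/trailing ellipses. A minimal Python sketch of that core logic (the function names and signatures are mine, not taken from any answer):

```python
def page_window(current, total, adjacent=2):
    """Return (pages, leading_ellipsis, trailing_ellipsis) for the window
    of `adjacent` pages on either side of `current`, clamped to [1, total]."""
    if total < 1:
        return [], False, False
    current = max(1, min(current, total))          # clamp a bad page number
    start = max(1, current - adjacent)
    end = min(total, current + adjacent)
    return list(range(start, end + 1)), start > 1, end < total


def render(current, total, adjacent=2):
    """Format the window like the question's example: ... 3 4 [5] 6 7 ..."""
    pages, lead, trail = page_window(current, total, adjacent)
    parts = ["[%d]" % p if p == current else str(p) for p in pages]
    if lead:
        parts.insert(0, "...")
    if trail:
        parts.append("...")
    return " ".join(parts)
```

With the question's numbers, `render(5, 11)` produces exactly the desired `... 3 4 [5] 6 7 ...`; near either end the corresponding ellipsis simply disappears.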
{ "language": "en", "url": "https://stackoverflow.com/questions/163809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields? I have a Person model that has a foreign key relationship to Book, which has a number of fields, but I'm most concerned about author (a standard CharField). With that being said, in my PersonAdmin model, I'd like to display book.author using list_display: class PersonAdmin(admin.ModelAdmin): list_display = ['book.author',] I've tried all of the obvious methods for doing so, but nothing seems to work. Any suggestions? A: Like the rest, I went with callables too. But they have one downside: by default, you can't order on them. Fortunately, there is a solution for that: Django >= 1.8 def author(self, obj): return obj.book.author author.admin_order_field = 'book__author' Django < 1.8 def author(self): return self.book.author author.admin_order_field = 'book__author' A: Please note that adding the get_author function would slow the list_display in the admin, because showing each person would make a SQL query. To avoid this, you need to modify get_queryset method in PersonAdmin, for example: def get_queryset(self, request): return super(PersonAdmin,self).get_queryset(request).select_related('book') Before: 73 queries in 36.02ms (67 duplicated queries in admin) After: 6 queries in 10.81ms A: This one's already accepted, but if there are any other dummies out there (like me) that didn't immediately get it from the presently accepted answer, here's a bit more detail. The model class referenced by the ForeignKey needs to have a __unicode__ method within it, like here: class Category(models.Model): name = models.CharField(max_length=50) def __unicode__(self): return self.name That made the difference for me, and should apply to the above scenario. This works on Django 1.0.2. 
A: As another option, you can do lookups like: #admin.py class UserAdmin(admin.ModelAdmin): list_display = (..., 'get_author') def get_author(self, obj): return obj.book.author get_author.short_description = 'Author' get_author.admin_order_field = 'book__author' Since Django 3.2 you can use the display() decorator: #admin.py class UserAdmin(admin.ModelAdmin): list_display = (..., 'get_author') @admin.display(ordering='book__author', description='Author') def get_author(self, obj): return obj.book.author A: If you have a lot of relation attribute fields to use in list_display and do not want to create a function (and its attributes) for each one, a dirty but simple solution would be to override the ModelAdmin instance's __getattr__ method, creating the callables on the fly: from functools import reduce # needed on Python 3, where reduce is no longer a builtin class DynamicLookupMixin(object): ''' a mixin to add dynamic callable attributes like 'book__author' which return a function that returns the instance.book.author value ''' def __getattr__(self, attr): if ('__' in attr and not attr.startswith('_') and not attr.endswith('_boolean') and not attr.endswith('_short_description')): def dyn_lookup(instance): # traverse all __ lookups return reduce(lambda parent, child: getattr(parent, child), attr.split('__'), instance) # get admin_order_field, boolean and short_description dyn_lookup.admin_order_field = attr dyn_lookup.boolean = getattr(self, '{}_boolean'.format(attr), False) dyn_lookup.short_description = getattr( self, '{}_short_description'.format(attr), attr.replace('_', ' ').capitalize()) return dyn_lookup # not a dynamic lookup, default behaviour return self.__getattribute__(attr) # use examples @admin.register(models.Person) class PersonAdmin(admin.ModelAdmin, DynamicLookupMixin): list_display = ['book__author', 'book__publisher__name', 'book__publisher__country'] # custom short description book__publisher__country_short_description = 'Publisher Country' @admin.register(models.Product) class ProductAdmin(admin.ModelAdmin, DynamicLookupMixin): list_display = 
('name', 'category__is_new') # to show as boolean field category__is_new_boolean = True As gist here Special callable attributes like boolean and short_description must be defined as ModelAdmin attributes, e.g. book__author_short_description = 'Author name' and category__is_new_boolean = True. The callable admin_order_field attribute is defined automatically. Don't forget to use the list_select_related attribute in your ModelAdmin to make Django avoid additional queries. A: If you try it in an Inline, you won't succeed unless: in your inline: class AddInline(admin.TabularInline): readonly_fields = ['localname',] model = MyModel fields = ('localname',) in your model (MyModel): class MyModel(models.Model): localization = models.ForeignKey(Localizations) def localname(self): return self.localization.name A: I may be late, but this is another way to do it. You can simply define a method in your model and access it via the list_display as below: models.py class Person(models.Model): book = models.ForeignKey(Book, on_delete=models.CASCADE) def get_book_author(self): return self.book.author admin.py class PersonAdmin(admin.ModelAdmin): list_display = ('get_book_author',) But this and the other approaches mentioned above add two extra queries per row in your listview page. 
To optimize this, we can override the get_queryset to annotate the required field, then use the annotated field in our ModelAdmin method admin.py from django.db.models.expressions import F @admin.register(models.Person) class PersonAdmin(admin.ModelAdmin): list_display = ('get_author',) def get_queryset(self, request): queryset = super().get_queryset(request) queryset = queryset.annotate( _author = F('book__author') ) return queryset @admin.display(ordering='_author', description='Author') def get_author(self, obj): return obj._author A: For Django >= 3.2 The proper way to do it with Django 3.2 or higher is by using the display decorator class BookAdmin(admin.ModelAdmin): model = Book list_display = ['title', 'get_author_name'] @admin.display(description='Author Name', ordering='author__name') def get_author_name(self, obj): return obj.author.name A: According to the documentation, you can only display the __unicode__ representation of a ForeignKey: http://docs.djangoproject.com/en/dev/ref/contrib/admin/#list-display Seems odd that it doesn't support the 'book__author' style format which is used everywhere else in the DB API. Turns out there's a ticket for this feature, which is marked as Won't Fix. A: Despite all the great answers above and due to me being new to Django, I was still stuck. Here's my explanation from a very newbie perspective. 
models.py class Author(models.Model): name = models.CharField(max_length=255) class Book(models.Model): author = models.ForeignKey(Author) title = models.CharField(max_length=255) admin.py (Incorrect Way) - you think it would work by using 'model__field' to reference, but it doesn't class BookAdmin(admin.ModelAdmin): model = Book list_display = ['title', 'author__name', ] admin.site.register(Book, BookAdmin) admin.py (Correct Way) - this is how you reference a foreign key name the Django way class BookAdmin(admin.ModelAdmin): model = Book list_display = ['title', 'get_name', ] def get_name(self, obj): return obj.author.name get_name.admin_order_field = 'author' #Allows column order sorting get_name.short_description = 'Author Name' #Renames column head #Filtering on side - for some reason, this works #list_filter = ['title', 'author__name'] admin.site.register(Book, BookAdmin) For additional reference, see the Django model link here A: I just posted a snippet that makes admin.ModelAdmin support '__' syntax: http://djangosnippets.org/snippets/2887/ So you can do: class PersonAdmin(RelatedFieldAdmin): list_display = ['book__author',] This is basically just doing the same thing described in the other answers, but it automatically takes care of (1) setting admin_order_field (2) setting short_description and (3) modifying the queryset to avoid a database hit for each row. A: There is a very easy to use package available in PyPI that handles exactly that: django-related-admin. You can also see the code in GitHub. Using this, what you want to achieve is as simple as: class PersonAdmin(RelatedFieldAdmin): list_display = ['book__author',] Both links contain full details of installation and usage so I won't paste them here in case they change. Just as a side note, if you're already using something other than model.Admin (e.g. I was using SimpleHistoryAdmin instead), you can do this: class MyAdmin(SimpleHistoryAdmin, RelatedFieldAdmin). 
A: You can show whatever you want in list display by using a callable. It would look like this: def book_author(object): return object.book.author class PersonAdmin(admin.ModelAdmin): list_display = [book_author,] A: I prefer this: class CoolAdmin(admin.ModelAdmin): list_display = ('pk', 'submodel__field') @staticmethod def submodel__field(obj): return obj.submodel.field A: AlexRobbins' answer worked for me, except that the first two lines need to be in the model (perhaps this was assumed?), and should reference self: def book_author(self): return self.book.author Then the admin part works nicely.
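Stripped of the Django admin machinery, the DynamicLookupMixin answer above rests on one idiom: folding a `'book__author'`-style path over `getattr`. A standalone Python sketch of just that traversal (the toy classes are invented for illustration; they are not the question's models):

```python
from functools import reduce  # on Python 3, reduce lives in functools


def resolve_lookup(instance, lookup):
    """Follow a Django-style '__'-separated attribute path, e.g. 'book__author'.

    reduce feeds each path segment to getattr in turn:
    getattr(getattr(instance, 'book'), 'author').
    """
    return reduce(getattr, lookup.split("__"), instance)


# Toy stand-ins for the question's models, for demonstration only.
class Book:
    def __init__(self, author):
        self.author = author


class Person:
    def __init__(self, book):
        self.book = book
```

This is exactly the shape of the mixin's `reduce(lambda parent, child: getattr(parent, child), attr.split('__'), instance)`, just without the admin attribute plumbing.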
{ "language": "en", "url": "https://stackoverflow.com/questions/163823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "404" }
Q: PHP templates - with PHP What's the most elegant templating (preferably in pure PHP!) solution you've seen? Specifically I'm interested in handling: * *Detecting in a repeating block whether it's the first or last element *Easy handling of odd/even cases, like a zebra-striped table, or similar *Other modulo logic, where you'd do something every n'th time. I'm looking for something that makes this less of a pain: <?php $persons = array('John', 'Jack', 'Jill', 'Jason'); ?> <?php $i = 0; ?> <?php if (isset($persons)): ?> <ul> <?php foreach ($persons as $name): ?> <li class="<?= ($i++ % 2 === 0) ? 'odd' : 'even' ?>"><?= $name ?></li> <?php endforeach ?> </ul> <?php endif ?> Does it really take the mess above to create something like this below? <ul> <li class="odd">John</li> <li class="even">Jack</li> <li class="odd">Jill</li> <li class="even">Jason</li> </ul> Is it only me that finds the above near hideous? All that starting and closing of PHP tags makes me cringe. A: Tiny But Strong www.tinybutstrong.com It doesn't make the Smarty mistake of embedding another macro language in the page, but does allow you to handle every practical web display issue I've ever thrown at it. In particular the above odd/even constructs are a doddle. For something like your code, selecting from a database table: In the PHP file $TBS->MergeBlock('blk1',$sqlconnect, "SELECT name from people "); And in the HTML file <ul> <li class="odd">[blk.name;block=ul]</li> <li class="even">[blk.name;block=ul]</li> </ul> And that's it. Notice that the HTML is completely Dreamweaver compatible. Furthermore if I wanted to make that alternate over three line styles all I'd need to do is add the extra line, maybe with different classes, so <ul> <li class="linestyle1">[blk.name;block=ul]</li> <li class="linestyle2">[blk.name;block=ul]</li> <li class="linestyle3">[blk.name;block=ul]</li> </ul> A: A small help on the looping: <? 
$b=false; foreach($MyList as $name) { ?> <li class="row<?= $b=!$b ?>"><?= htmlspecialchars($name); ?></li> <? } ?> By saying $b=!$b, it automatically alternates between true and false. Since false prints as "", and true prints as "1", then by defining the CSS classes row and row1, you can get your alternating rows without any trouble. Consider using :first-child CSS to style the first one differently. A: It ain't pure PHP (the templating syntax, that is), but it works really nicely; Smarty. For loops you can do: <ul> {foreach from=$var name=loop item=test} {if $smarty.foreach.loop.first}<li>This is the first item</li>{/if} <li class="{cycle values="odd,even"}">{$var.name}</li> {if $smarty.foreach.loop.last}<li>This was the last item</li>{/if} {/foreach} </ul> A: Have you considered PHPTAL? One main benefit of it (or something similar) is that you get templates which can pass validation. Most PHP template engines seem to ignore this. A: I use PHPTAL for templating because it is written in 100% actual HTML with placeholder data, so it even works in a WYSIWYG editor for a web designer. That and it's just way easy to understand. Here's what it would look like for me. Please forgive the markup, I'm new here and the four-spaces block wasn't working right for me (the list was a list, not the markup). PHP Code: $tal = new PHPTAL; $tal->setTemplate('people.htm') ->set('people', array('John', 'Jack', 'Jill', 'Jason')); echo $tal->execute(); Template: <ul> <li tal:repeat="person people" tal:content="person">John Doe</li> </ul> Output: *John *Jack *Jill *Jason Now obviously I wouldn't make a template for something this little, but I could use a macro for it or build a whole page and include that variable. But you get the idea. Using PHPTAL has just about tripled my speed at templating and programming, just by the simplicity (no new syntax to learn like Smarty). A: How about XSLT? The only template system that has a standards body behind it. Works the same across programming languages. 
Learn it once, use it everywhere! A: Symfony Components: Templating (source: symfony-project.org) Symfony intends on moving to a new templating system based on the lightweight PHP templating system Twig. The lead developer, Fabien Potencier, explains the decision here: http://fabien.potencier.org/article/35/templating-engines-in-php-follow-up Symfony can usually be relied upon to make very informed decisions on such matters, so this framework should be something to look into. The component is here: http://components.symfony-project.org/templating/ A: You don't need to open the tags more than once. You can also make a function out of it if you do the same thing multiple times: <?php function makeul($items, $classes) { $c = count($classes); $i = 0; // without this, $classes[$i++ % $c] would read an undefined variable $out = ""; if (isset($items) && count($items) > 0) { $out = "<ul>\n"; foreach ($items as $item) { $out .= "\t<li class=\"" . $classes[$i++ % $c] . "\">$item</li>\n"; } $out .= "</ul>\n"; } return $out; } ?> other page content <?php $persons = array('John', 'Jack', 'Jill', 'Jason'); $classes = array('odd', 'even'); print makeul($persons, $classes); ?> Also, if you don't mind using JavaScript, jQuery makes mod-2 things pretty easy (e.g., for zebra-striping a table): $("tr:odd").addClass("odd"); $("tr:even").addClass("even"); A: I've used the Smarty Template Engine in the past. It's pretty solid. And as you can probably tell from the website, it has quite the large user base and is updated regularly. It's in pure PHP as well. A: Savant is a lightweight, pure PHP templating engine. Version 2 has a cycle plugin similar to the Smarty one mentioned earlier. I haven't been able to find a reference to the same plugin in version 3, but I'm sure you could write it fairly easily. A: If it's just to apply a CSS style, why don't you use the :nth-of-type(odd) selector. 
For example: li:nth-of-type(odd) { background: #f2f6f8; background: linear-gradient(top, #f2f6f8 0%, #e0eff9 100%); } http://jsfiddle.net/melonangie/nU7qK/ A: I use modulo like you did in your example all the time. A: If what makes you cringe is the opening and closing tags, write a function that creates the HTML string and then have it return it. At least it will save you some tags. A: I have been a fan of HAML for quite a while, and it looks like PHP folk have HAML now: see http://phphaml.sourceforge.net/ A: <?= ($i++ % 2 === 0) ? 'odd' : 'even' ?> You're doing it the other way around. Your first item is now called even instead of odd. Use ++$i. I'm having the same problem. But I think your original solution is the neatest. So I'll go with that. A: I created a simple templating system in PHP to solve this problem a while ago: http://code.google.com/p/templatephp/ It takes a multidimensional array, and requires the addition of some extra tags to the HTML to create the combined template. It's not as complicated (albeit powerful) as Smarty and some other solutions, but wins out in simplicity a lot of the time. A demo of the menu creation: <p>Main Menu</p> <ul> {block:menu_items} <li><a href="{var:link}">{var:name}</a></li> {/block:menu_items} </ul> Merged with... array ( 'menu_items' => array ( array ( 'link' => 'home.htm', 'name' => 'Home' ), array ( 'link' => 'about.htm', 'name' => 'About Us' ), array ( 'link' => 'portfolio.htm', 'name' => 'Portfolio' ), array ( 'link' => 'contact.htm', 'name' => 'Contact Us' ) ) ); Will create the menu... <p>Main Menu</p> <ul> <li><a href="home.htm">Home</a></li> <li><a href="about.htm">About Us</a></li> <li><a href="portfolio.htm">Portfolio</a></li> <li><a href="contact.htm">Contact Us</a></li> </ul> A: <?php define ('CRLF', "\r\n"); $persons = array('John', 'Jack', 'Jill', 'Jason'); $color = 'white'; // Init $color for striped list $ho = '<ul>' . 
CRLF; // Start HTML Output variable foreach ($persons as $name) { $ho .= ' <li class="' . $color . '">' . $name . '</li>' . CRLF; $color = ($color == 'white') ? 'grey' : 'white'; // if white, make it grey else white } $ho .= '</ul>' . CRLF; echo $ho; ?>
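All of the odd/even answers above come down to the same modulo (or boolean-toggle) cycling. A minimal sketch of the question's target markup, in Python for brevity (the helper name is mine):

```python
def zebra_list(items, classes=("odd", "even")):
    """Render an <ul> whose <li> classes cycle, the way the question's
    `$i++ % 2` expression and the `$b = !$b` toggle answer both do."""
    lines = ["<ul>"]
    for i, item in enumerate(items):
        lines.append('  <li class="%s">%s</li>' % (classes[i % len(classes)], item))
    lines.append("</ul>")
    return "\n".join(lines)
```

`zebra_list(["John", "Jack", "Jill", "Jason"])` yields the question's desired output, and a three-class tuple gives the "alternate over three line styles" variant from the Tiny But Strong answer for free.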
{ "language": "en", "url": "https://stackoverflow.com/questions/163834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Refactoring Nicely with Version Control A co worker of mine asked me to review some of my code and he sent me a diff file. I'm not new to diffs or version control in general but the diff file was very difficult to read because of the changes he made. Specifically, he used the "extract method" feature and reordered some methods. Conceptually, very easy to understand but looking at the diff, it was very hard to tell what he had done. It was much easier for me to checkout the previous revision and use Eclipse's "compare" feature, but it was still quite clunky. Is there any version control system that stores metadata related to refactoring. Of course, it would be IDE and Programming Language specific, but we all use Eclipse and Java! Perhaps there might be some standard on which IDEs and version control implementations can play nicely? A: Eclipse can export refactoring history (see 3.2 release notes as well). You could then view the refactoring changes via preview in Eclipse. A: I don't know of compare tools that do a good job when the file has been rearranged. In general, this is a bad idea because of this type of problem. All too often people do it to simply meet their own style, which is a bad, bad reason to change code. It can effectively destroy the history, just like reformatting the entire file, and should never be done unless necessary (i.e. it is already a mess and unreadable). The other problem is that working code will likely get broken because of someones style preferences. If it ain't broken, don't fix it! A: I asked a similar question a while ago and never did get a satisfactory answer. I'll be watching your question to see what people come up with. For your particular situation, it might be best to review the latest version of the file, using the diff as a guide. That's what I have been doing in my situation too. A: The Refactoring History feature is new to me, but I like the way it sounds. For a less tool-specific method, I like sending patch files. 
The person reviewing just applies the patch and reviews the results, and then they can revert to the version in version control when they're done.
{ "language": "en", "url": "https://stackoverflow.com/questions/163835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Limiting impact of credit card processing scripts/bots I'm involved in building a donation form for non-profits. We recently got hit by a fast round of low dollar submissions. Many were invalid cards, but a few went through. Obviously someone wrote a script to check a bunch of card numbers for validity, possibly so they can sell them later. Any ideas on how to prevent or limit the impact of this in the future? We have control over all aspects of the system (code, webserver, etc). Yes, the form runs over HTTPS. A: When a flood of invalid transactions from a single IP address or small range of addresses is detected, block that address / network. If a botnet is in use, this will not help. You can still detect floods of low dollar amount submissions and so deduce when you are under attack; during these times, stall low dollar amount submissions to make them take longer; introduce CAPTCHAs for low dollar amount donations; consult your bank's fraud prevention department in case they can make use of your server logs to catch the perpetrators. Force donors to create accounts in order to make donations; protect account creation with a CAPTCHA, and rate limit donations from any one account. Raise the minimum permissible donation to a point where it no longer makes financial sense for the scammers to use you in this way. A: Instead of CAPTCHAs, which will annoy users, you might want to take advantage of the fact that most people have JavaScript enabled while bots don't. Simply create a small piece of JavaScript that, when run, inserts a particular value in a hidden field. For those that have JavaScript disabled you can show the CAPTCHA (use the <noscript> tag), and you can then accept a submission only if either of these measures checks out. 
For maximum annoyance to evildoers, you could make the difference between the success message and the failure message computationally hard to distinguish (say everything is the same, except for one picture that displays the message) but easy for humans to understand. A: Limit submissions from the same IP address to one per minute, or whatever reasonable period of time it would take for a real person to fill out the form. A: Raising the minimum donation to a point where it no longer makes financial sense for the scammers to use you in this way will help in general. This. How many legitimate donations do you get for under 5 bucks, anyway? 
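The per-IP throttling suggested above can be sketched as a sliding-window counter. This is illustrative pseudologic rather than a production fraud defense; the class name, the injectable clock, and the one-per-60-seconds default policy are all my own assumptions:

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Allow at most `limit` submissions per `window` seconds per key (e.g. an IP)."""

    def __init__(self, limit=1, window=60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock                 # injectable so tests can fake time
        self.hits = defaultdict(deque)     # key -> timestamps of recent hits

    def allow(self, key):
        now = self.clock()
        q = self.hits[key]
        while q and now - q[0] >= self.window:  # drop hits outside the window
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False
```

In a real deployment this state would live in something shared (e.g. a cache or database keyed by IP), and — as the botnet caveat above notes — per-IP limits alone are not sufficient.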
{ "language": "en", "url": "https://stackoverflow.com/questions/163837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: WPF Alternative for python Is there any alternative for WPF (Windows Presentation Foundation) in Python? http://msdn.microsoft.com/en-us/library/aa970268.aspx#Programming_with_WPF A: Here is a list of Python GUI Toolkits. Also, you can use IronPython to work with WPF directly. A: You might want to look at PyGTK and Glade. Here is a tutorial. There is a long list of alternatives on the Python Wiki. A: Try PyQt, which binds Python to the Qt graphics library. There are some other links at the end of that article: * *Anygui *PyGTK *FXPy *wxPython *win32ui A: If you are on Windows and you want to use WPF (as opposed to an alternative), you can use it with IronPython - a .NET version of Python. Here's a quick example: http://stevegilham.blogspot.com/2007/07/hello-wpf-in-ironpython.html
{ "language": "en", "url": "https://stackoverflow.com/questions/163881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: SQL query: Simulating an "AND" over several rows instead of sub-querying Suppose I have a "tags" table with two columns: tagid and contentid. Each row represents a tag assigned to a piece of content. I want a query that will give me the contentid of every piece of content which is tagged with tagids 334, 338, and 342. The "easy" way to do this would be (pseudocode): select contentid from tags where tagid = 334 and contentid in ( select contentid from tags where tagid = 338 and contentid in ( select contentid from tags where tagid = 342 ) ) However, my gut tells me that there's a better, faster, more extensible way to do this. For example, what if I needed to find the intersection of 12 tags? This could quickly get horrendous. Any ideas? EDIT: Turns out that this is also covered in this excellent blog post. A: SELECT contentID FROM tags WHERE tagID in (334, 338, 342) GROUP BY contentID HAVING COUNT(DISTINCT tagID) = 3 --In general SELECT contentID FROM tags WHERE tagID in (...) --taglist GROUP BY contentID HAVING COUNT(DISTINCT tagID) = ... --tagcount A: Here's a solution that has worked much faster for me on a very large database of objects and tags. This is an example of a three-tag intersection. It just chains many joins on the object-tag table (objtags) to indicate the same object and stipulates the tag IDs in the WHERE clause: SELECT t0.objid FROM objtags t0 INNER JOIN objtags t1 ON t1.objid=t0.objid INNER JOIN objtags t2 ON t2.objid=t1.objid WHERE t0.tagid=512 AND t1.tagid=256 AND t2.tagid=128 I have no idea why this runs faster. It was inspired by the search code in the MusicBrainz server. Doing this in Postgres, I usually get a ~8-10x speedup over the HAVING COUNT(...) solution. 
A: The only alternative way i can think of is: select a.contentid from tags a inner join tags b on a.contentid = b.contentid and b.tagid=334 inner join tags c on a.contentid = c.contentid and c.tagid=342 where a.tagid=338 A: I don't know if this is better but it might be more maintainable select contentid from tags where tagid = 334 intersect select contentid from tags where tagid = 338 intersect select contentid from tags where tagid = 342 You'd have to build it dynamically which wouldn't be as bad as your original solution. A: What type of SQL? MS SQL Server, Oracle, MySQL? In SQL Server doesn't this equate to: select contentid from tags where tagid IN (334,338,342)
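The accepted GROUP BY/HAVING pattern is easy to verify against a throwaway database. A sketch using Python's built-in sqlite3 — the table shape and tag IDs are from the question, but the helper function and sample rows are my own:

```python
import sqlite3


def content_with_all_tags(conn, tag_ids):
    """Return contentids tagged with EVERY tag in tag_ids (the HAVING COUNT trick)."""
    placeholders = ",".join("?" * len(tag_ids))
    sql = (
        "SELECT contentid FROM tags "
        "WHERE tagid IN (%s) "
        "GROUP BY contentid "
        "HAVING COUNT(DISTINCT tagid) = ?" % placeholders
    )
    return [row[0] for row in conn.execute(sql, (*tag_ids, len(tag_ids)))]


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (tagid INTEGER, contentid INTEGER)")
conn.executemany(
    "INSERT INTO tags VALUES (?, ?)",
    [(334, 1), (338, 1), (342, 1),   # content 1 carries all three tags
     (334, 2), (338, 2),             # content 2 is missing tag 342
     (342, 3)],                      # content 3 carries only one tag
)
```

Because the tag count is parameterized, the same query scales to the question's 12-tag case without the nested-subquery pyramid.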
{ "language": "en", "url": "https://stackoverflow.com/questions/163887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Multiple H.264 video streams in one RTP session I would like to dynamically switch the video source in a streaming video application. However, the different video sources have unique image dimensions. I can generate individual SDP files for each video source, but I would like to combine them into a single SDP file so that the viewing client could automatically resize the display window as the video source changed. Here are two example SDP files: 640x480.sdp: v=0 o=VideoServer 305419896 9876543210 IN IP4 192.168.0.2 s=VideoStream640x480 t=0 0 c=IN IP4 192.168.0.2 m=video 8000/2 RTP/AVP 96 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=0; profile-level-id=4D4033; sprop-parameter-sets=Z01AM5ZkBQHtCAAAAwAIAAADAYR4wZU=,aO48gJ== a=control:trackID=1 960x480.sdp: v=0 o=VideoServer 305419896 9876543210 IN IP4 192.168.0.2 s=VideoStream960x480 t=0 0 c=IN IP4 192.168.0.2 m=video 8000/2 RTP/AVP 96 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=0; profile-level-id=4D4033; sprop-parameter-sets=J01AM5WwPA9sBAIA,KO4G8gA= a=control:trackID=1 How can these individual files be combined into a single SDP file? A: The parameters in your two sdp examples are very close - the stream name and the sprop-parameter-sets differ. I assume you don't care about the stream name. 
If you need separate sprop-parameter-sets and the clients support the standard well you can use separate dynamic payload types for each resolution and have a single SDP as follows: v=0 o=VideoServer 305419896 9876543210 IN IP4 192.168.0.2 s=VideoStream640x480 t=0 0 c=IN IP4 192.168.0.2 m=video 8000/2 RTP/AVP 96 97 a=rtpmap:96 H264/90000 a=fmtp:96 packetization-mode=0; profile-level-id=4D4033; sprop-parameter-sets=Z01AM5ZkBQHtCAAAAwAIAAADAYR4wZU=,aO48gJ== a=rtpmap:97 H264/90000 a=fmtp:97 packetization-mode=0; profile-level-id=4D4033; sprop-parameter-sets=J01AM5WwPA9sBAIA,KO4G8gA= a=control:trackID=1 Similar to other answers if you don't actually need the different stream names or the different sprop-parameter-sets you should be able to use your first SDP and switch format mid stream. I don't know the actual payload of H.264 or your particular decoder well enough to ensure that this will work in your applications but it is very common in videoconferencing applications to allow dynamically switching between resolutions without signaling a change or requiring a separate dynamic payload type. Although you can concatenate two SDP documents as mentioned in another answer I don't think it will help in this case. H.264 decoders can only work with a single sprop-parameter-sets parameter at a time I believe. Since both SDPs would have the same payload type, source port, etc. the receiver would not know when to use which sprop-parameter-sets parameter. UPDATE: Note some implementations get their sprops inband and not from the SDP (or only initially from the SDP). 
If that applies in your environment, the SDP sprop-parameter-sets can be updated inband. References: * *RFC 3984 RTP Payload Format for H.264 Video *New proposed H.264 RTP Payload Format RFC 6184 *RFC 4566 SDP: Session Description Protocol [Sorry for not giving the full cite - feel free to correct] A: I've gone over the RFC (RFC 2327 - SDP: Session Description Protocol) and it appears you can just concatenate the two SDP documents. The document states explicitly: When SDP is conveyed by SAP, only one session description is allowed per packet. When SDP is conveyed by other means, many SDP session descriptions may be concatenated together (the `v=' line indicating the start of a session description terminates the previous description). A: I think it depends on your decoder. If it supports parameter changes inside the stream, and you can tell the encoder to insert the corresponding header when changing resolution, then your decoder should switch automatically. What is your question exactly? Is it: how can I change resolution without stopping/restarting the stream? I don't think you can tell a decoder in advance, with some SDP magic, which resolutions it might see. Either your decoder is able to understand an H.264 parameter change, and then you are fine, or you have to stop and restart the whole thing, and then a dynamic SDP is sufficient. I know that, for example, VLC is able to detect an MP4 encoding change (for example moving from variable bit rate to constant bit rate), but will crash if you change resolution. The only thing you can do with SDP is to combine different media descriptions, for example with different dynamic payload types and different control-id attributes. A: You can either do the dynamic payload change or the in-stream parameter set change, or SIP re-INVITE. 
Payload changes have the problem that if you don't control the encoder and decoder, you need to make sure the other end accepts both payloads, and that it will switch payloads correctly (and fast enough for you - there's no requirement on that). In-stream changes have a problem if the parameter-set packets are lost. You can use a different set of parameter sets (switch from parameter-set 1 to 2 when you change) to avoid mis-decoding - if the sets are lost, you should just get a frozen or blank picture. I'd advise retransmitting them a few times (not in too-quick succession). SIP re-INVITE is out-of-band and handshaked, and thus safe, but it adds delay to any switch, may glitch depending on the receiver, and could be rejected. (Note: I'm an author of RFC 3984bis, the update to RFC 3984)
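To illustrate how a receiver could tell the two parameter sets apart in a combined SDP like the one above, here is a rough sketch; it is a toy line parser for this exact layout (an assumption on my part), not a general RFC 4566 implementation:

```python
# Toy parser: map each dynamic payload type in an SDP to its fmtp
# parameter string. Handles only the simple one-attribute-per-line
# layout shown in this question; it is not a full RFC 4566 parser.

def fmtp_by_payload_type(sdp_text):
    """Return {payload_type: fmtp_parameters} for each a=fmtp line."""
    result = {}
    for line in sdp_text.splitlines():
        line = line.strip()
        if line.startswith("a=fmtp:"):
            rest = line[len("a=fmtp:"):]
            pt, _, params = rest.partition(" ")
            result[int(pt)] = params
    return result

combined_sdp = """v=0
m=video 8000/2 RTP/AVP 96 97
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=0; sprop-parameter-sets=Z01AM5ZkBQHtCAAAAwAIAAADAYR4wZU=,aO48gJ==
a=rtpmap:97 H264/90000
a=fmtp:97 packetization-mode=0; sprop-parameter-sets=J01AM5WwPA9sBAIA,KO4G8gA=
"""

fmtps = fmtp_by_payload_type(combined_sdp)
# An arriving RTP packet's payload type field (96 or 97 here) then
# selects which sprop-parameter-sets the decoder should use.
```

The point is that the payload type in each RTP packet header, not the order of descriptions in the SDP, is what disambiguates the two resolutions at the receiver.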
{ "language": "en", "url": "https://stackoverflow.com/questions/163898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: HTML/Javascript app that runs on the filesystem, security issue I'm putting together a little tool that some business people can run on their local filesystems, since we don't want to set up a host for it. Basically, it's just HTML + Javascript (using jQuery) to pull some reports using REST from a 3rd party. The problem is, FF3 and IE don't allow the ajax call; I get: "Access to restricted URI denied" code: "1012". Obviously it's a cross-domain security issue...how do I work around it? The data returned is in XML format. I was trying to do it this way: $.get(productUrl, function (data){ alert (data); }); EDIT: To be clear...I'm not setting up an internal host for this (way too much red tape), and we CANNOT host this externally due to the data being retrieved. EDIT #2: A little testing shows that I can use an IFRAME to make the request. Does anyone know if there are any downsides to using a hidden IFRAME? A: In a similar situation, my solution was to use Mark Of The Web, which is a special HTML comment that IE recognizes. It places the page in a different security zone. Reference: MSDN A: If you have Python installed, a webserver to serve files can be as simple as python -c "import SimpleHTTPServer;SimpleHTTPServer.test()" Edit: Original poster can't use this approach, but in general I think this is the way to solve this particular problem for future users with this issue. A: Do you control the server providing the data? If so, you can set up a callback. The basic idea is you have a function in the script that handles incoming data (in your case an XML string). Then the server responds to the request with a JavaScript snippet that calls your callback function with the string as the argument. And instead of using AJAX, you add a new script tag to the page. This is the basis for JSONP. It looks something like this. Local page: <script> function callback(str) { alert(str); } function makeRequest(params) { var s = document.createElement('script'); s.src = 'http://serveranywhere/script.bla?' 
+ params; document.getElementsByTagName('head')[0].appendChild(s); } </script> The remote server returns callback('<xml><that><does><something></something></does></that></xml>'); Now, when the script is added to the page, the function callback will be executed with the string you provide. And jQuery can do all of this for you using JSONP in the $.ajax call. Hope this helps.
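The server half of that JSONP pattern is just string wrapping; here is a hedged sketch in Python (the function name and the minimal escaping are illustrative, not a hardened implementation):

```python
# Minimal sketch of the server side of the JSONP pattern described
# above: wrap the payload (here, an XML string) in a call to the
# client's callback function. Function name and escaping are
# illustrative only; real servers must escape much more carefully.

def jsonp_response(callback_name, payload):
    """Return a JavaScript snippet that invokes callback_name(payload)."""
    escaped = payload.replace("\\", "\\\\").replace("'", "\\'")
    return "%s('%s');" % (callback_name, escaped)

body = jsonp_response("callback", "<xml><data/></xml>")
# The server sends `body` back as the script body; when the browser
# appends the script tag, callback('<xml><data/></xml>') runs on the page.
```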
{ "language": "en", "url": "https://stackoverflow.com/questions/163900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you decide if a project should be web-based or desktop-based? I'm having trouble deciding if I want a project of mine to be web-based (as in a web-app), desktop-based (a desktop application), or a desktop application that can sync or connect to the cloud. I don't know if anyone else would have an interest in this application, and it's only going to be for me, so I'm leaning toward a desktop application. If, for some reason, I finish it, release it, and people actually like it, I might see about making it sync to the cloud as well (think v2). But I'm not sure how hard it is to make such a radical change, and I don't want to end up with something good that is useless because I made a poor choice before I even started the project. Is there any sort of guidance for this? Any rules of thumb or best practices? Any personal experiences? If the language matters, I'm thinking about Java simply because I'm most comfortable with it, and it would easily allow me to share it with my friends for testing and if I get stuck and need help from someone else in person. A: If you release as a web-app, you won't have to port it over. You'll also have access to it wherever you go. A: I base my choice mostly on the GUI. If the GUI is going to be complex, and needs to be fast or has aspects that will take a lot of time to process, then I will go with the desktop. If it is simple, and will always have small data sets to work with at once, then I will go with the web. I have worked on an app that was made as a web app when it was clearly better suited for the desktop. It was a massive failure. I don't know HOW customers put up with it, because I certainly wouldn't have used it. The desktop version (which took over 6 months to re-write) blew the web version out of the water. That being said, I have seen some nice web apps. A: All I can suggest are several factors that would be relevant. 
How you determine the answer and weight for each factor is up to you and your circumstances: * *Who is your audience? Do you have any control over them? *How complex are the interactions you expect to implement? *Do you require near real-time data updates? *How often do you expect to update the application after the first release? *Do you expect a well-defined set of client platforms, or can you not predict that? Note that your choices can also include a Java WebStart application, which mitigates some of the disadvantages of a typical desktop application. A: I'd say that most applications should be desktop-based. The advantages are faster and more fluid apps. You should only create a web application if there are obvious benefits from it, like access from everywhere (if that's necessary for your app). A downside of web applications can also be that they are dependent on the developer: if you quit supporting one, all your users (if you'll have any) can't use it anymore. Furthermore, there is a chance that users are not willing to store their data online. Ultimately it depends on what kind of an application you want to write. Even if you create it as a desktop app, you can later on rewrite it for the web. Often a 2.0 version of software needs almost complete rewriting anyway. A: I generally ask a few questions: * *Can it even be done on the web? Something I did not too long ago involved an image editing component, and had to be a web app. It involved much pain to get this to work, and a desktop app would have been a far better way to go. *Will I need to access it from anywhere? Yeah, you could load it up on a thumb drive, but the web is far more feasible in this case. *Will there be multiple users? This could go either way, but "long tail" stuff usually means web. *What tech do you want to use? The latest and greatest WPF-based UI? Desktop (yeah yeah, Silverlight, let's not go there, ok?). The brain-dead stupid easy user management of Django or others? Web. 
*If it were a web app, will you need to worry about common attack vectors like SQL injection, XSS, etc.? A desktop app has its own issues here too, but tends to have less exposure. *How resource intensive is it? Will 10 users kill the performance of a web server? *Versioning on the desktop can be a pain, whereas with a webapp everyone is on the same version. This can bite you though; see the New Facebook user pushback. EDIT: * *Cost can be a factor too. A web app with a database backend typically means a web server. If you want to stick with, say, the Microsoft stack, you'll need licenses for SQL Server, which can get pricey. Open source is cheaper, but may not be an option in all cases. "Serving" a desktop app is generally cheaper. A: Sometimes the web can be good and sometimes not. We are in a new wave that is moving to the web, but do not forget a few things: * *GUIs on the web are more complicated because of multiple browsers *People who need to work on your system might not like working the whole day in a browser *The web can be slower for some applications (image editing, hard jobs that require a lot of CPU) *Rapid GUI tools like Visual Studio for WinForms are faster than for the web But the web has many advantages in deployment and portability. If your system is well structured, you could build both, or change from one to the other later, with something built with MVC. Just change your view layer and you will be fine. A: If this were an application to be used by multiple users, with shared data, you're probably going to want a server anyway. In that case I'd lean towards a web application. Otherwise you've got the complexity of syncing data between the desktop and a server. A: Two important questions not on the list so far: * *Will the first version have any features that need lowish-level access to hardware? *Will future versions have any features that need lowish-level access to hardware? It's pretty easy to answer the first one, but giving the second one some thought can save you some headache down the road. 
A: My default choice is to go with a web solution, as it's easier to deploy and generally multi-platform. The only time I go with WinForms apps is when there are pressing security, performance, or functionality issues that require it. A: Previously you'd have written a desktop application, as the tools were better for that and you'd have written it faster. People used to want web apps, but always ended up with desktop. Nowadays things are different: you can write a web service just as quickly and easily, so there's no reason not to go web-based. The advantages of web-based are flexibility, scalability and ease of deployment. It won't be as responsive as a desktop app could be, but that's not so much of an issue if you think about your design.
{ "language": "en", "url": "https://stackoverflow.com/questions/163913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What is the purpose for using OPTION(MAXDOP 1) in SQL Server? I have never clearly understood the usage of MAXDOP. I do know that it makes the query faster and that it is the last item that I can use for query optimization. However, my question is, when and where is it best suited for use in a query? A: As something of an aside, MAXDOP can apparently be used as a workaround to a potentially nasty bug: Returned identity values not always correct A: As Kaboing mentioned, MAXDOP(n) actually controls the number of CPU cores that are being used in the query processor. On a completely idle system, SQL Server will attempt to pull the tables into memory as quickly as possible and join between them in memory. It could be that, in your case, it's best to do this with a single CPU. This might have the same effect as using OPTION (FORCE ORDER), which forces the query optimizer to use the order of joins that you have specified. In some cases, I have seen OPTION (FORCE PLAN) reduce a query from 26 seconds to 1 second of execution time. Books Online goes on to say that possible values for MAXDOP are: 0 - Uses the actual number of available CPUs depending on the current system workload. This is the default value and recommended setting. 1 - Suppresses parallel plan generation. The operation will be executed serially. 2-64 - Limits the number of processors to the specified value. Fewer processors may be used depending on the current workload. If a value larger than the number of available CPUs is specified, the actual number of available CPUs is used. I'm not sure what the best usage of MAXDOP is; however, I would take a guess and say that if you have a table with 8 partitions on it, you would want to specify MAXDOP(8) due to I/O limitations, but I could be wrong. Here are a few quick links I found about MAXDOP: Books Online: Degree of Parallelism General guidelines to use to configure the MAXDOP option A: There are a couple of parallelization bugs in SQL Server with abnormal input. 
OPTION(MAXDOP 1) will sidestep them. EDIT: Old. My testing was done largely on SQL 2005. Most of these seem to not exist anymore, but every once in a while we question the assumption when SQL 2014 does something dumb and we go back to the old way and it works. We never managed to demonstrate that it wasn't just bad plan generation in the more recent cases, though, since SQL Server can be relied on to get the old way right in newer versions. Since all cases were IO-bound queries, MAXDOP 1 doesn't hurt. A: This is a general rambling on parallelism in SQL Server; it might not answer your question directly. From Books Online, on MAXDOP: Sets the maximum number of processors the query processor can use to execute a single index statement. Fewer processors may be used depending on the current system workload. See Rickie Lee's blog on parallelism and the CXPACKET wait type. It's quite interesting. Generally, in an OLTP database, my opinion is that if a query is so costly it needs to be executed on several processors, the query needs to be re-written into something more efficient. Why do you get better results adding MAXDOP(1)? Hard to tell without the actual execution plans, but it might be as simple as the execution plan being totally different than without the OPTION, for instance using a different index or (more likely) JOINing differently, using MERGE or HASH joins. A: Adding my two cents, based on a performance issue I observed. If simple queries are getting parallelized unnecessarily, it can bring more problems than it solves. However, before adding MAXDOP into the query as a "knee-jerk" fix, there are some server settings to check. In Jeremiah Peschka - Five SQL Server Settings to Change, MAXDOP and "COST THRESHOLD FOR PARALLELISM" (CTFP) are mentioned as important settings to check. Note: Paul White mentioned max server memory also as a setting to check, in a response to Performance problem after migration from SQL Server 2005 to 2012. 
A good KB article to read is Using large amounts of memory can result in an inefficient plan in SQL Server. Jonathan Kehayias - Tuning ‘cost threshold for parallelism’ from the Plan Cache helps to find a good value for CTFP. Why is cost threshold for parallelism ignored? Aaron Bertrand - Six reasons you should be nervous about parallelism has a discussion about some scenarios where MAXDOP is the solution. Parallelism-inhibiting components are mentioned in Paul White - Forcing a Parallel Query Execution Plan
{ "language": "en", "url": "https://stackoverflow.com/questions/163917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Extracting Autocomplete Emails from Outlook 2007 I need to extract all the emails that show up as autocomplete entries in Outlook 2007. I mostly need to create a list of all the email addresses which I have sent emails to in the past and dump them into Excel. Should I be connecting to Outlook through COM somehow? Thanks. A: All of that information is in a file in the local settings with the extension .NK2. c:\Documents and Settings\{USERNAME}\Application Data\Microsoft\Outlook\{USERNAME}.NK2 This utility can read/edit the contents. The file format itself is explained here: Google Code debunk2 explanation of NK2 file format
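If you'd rather not rely on a third-party utility, a crude scrape can work as a first pass. This sketch assumes (my assumption, since NK2 is an undocumented binary format) that addresses appear as UTF-16LE text inside the file; the path in the comment is illustrative:

```python
# Very rough sketch of scraping SMTP addresses out of an .NK2 file.
# Assumption: strings inside the undocumented binary format are
# UTF-16LE, so decoding with errors ignored and regex-matching
# anything that looks like an email address is a crude first pass.
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def emails_from_nk2_bytes(raw):
    text = raw.decode("utf-16-le", errors="ignore")
    return sorted(set(EMAIL_RE.findall(text)))

# Usage (path is illustrative):
# with open(r"C:\...\Outlook\user.NK2", "rb") as f:
#     for addr in emails_from_nk2_bytes(f.read()):
#         print(addr)   # paste the list into Excel
```

This will miss display names and any non-SMTP entries (Exchange addresses, for example), so treat it as a quick dump, not a faithful parse of the format.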
{ "language": "en", "url": "https://stackoverflow.com/questions/163919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Methods for Geotagging or Geolabelling Text Content What are some good algorithms for automatically labeling text with the city / region of origin? That is, if a blog is about New York, how can I tell programmatically? Are there packages / papers that claim to do this with any degree of certainty? I have looked at some tfidf-based approaches, proper noun intersections, but so far, no spectacular successes, and I'd appreciate ideas! The more general question is about assigning texts to topics, given some list of topics. Simple / naive approaches preferred to full-on Bayesian approaches, but I'm open. A: Latent Semantic Mapping seems like potentially a good fit. That's just about as naive of an algorithm as you're likely to find. A: You're looking for a named entity recognition system, or NER for short. There are several good toolkits available to help you out. LingPipe in particular has a very decent tutorial. CAGEclass seems to be oriented around NER on geographical place names, but I haven't used it yet. Here's a nice blog entry about the difficulties of NER with geographical place names. If you're going with Java, I'd recommend using the LingPipe NER classes. OpenNLP also has some, but the former has better documentation. If you're looking for some theoretical background, Chavez et al. (2005) have constructed an interesting system and documented it.
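Since the question prefers simple/naive approaches, a gazetteer lookup (count mentions of known place names and aliases, pick the winner) makes a reasonable baseline before reaching for a full NER toolkit. The tiny gazetteer below is illustrative only:

```python
# Naive baseline for geo-labeling text: count occurrences of known
# place names/aliases from a (tiny, illustrative) gazetteer and label
# the text with the best-scoring region. Real NER systems handle
# ambiguity ("Paris, Texas") and context far better than this.
import re
from collections import Counter

GAZETTEER = {
    "New York": ["new york", "nyc", "manhattan", "brooklyn"],
    "London": ["london", "westminster", "soho"],
}

def guess_region(text):
    lowered = text.lower()
    scores = Counter()
    for region, aliases in GAZETTEER.items():
        for alias in aliases:
            scores[region] += len(re.findall(r"\b%s\b" % re.escape(alias), lowered))
    best, count = scores.most_common(1)[0]
    return best if count > 0 else None

guess_region("Took the subway from Brooklyn into Manhattan, classic NYC day.")
```

The same pattern generalizes to the broader "assign text to a topic from a list" question: swap the gazetteer for per-topic keyword lists, optionally weighting each keyword by tf-idf instead of a raw count.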
{ "language": "en", "url": "https://stackoverflow.com/questions/163923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What are the lengths of common datatypes? How many bytes would an int contain and how many would a long contain? Context: * *C++ *32-bit computer *Any difference on a 64-bit computer? A: See the wikipedia article about it. A: It is platform- and compiler-specific. Do sizeof(int) and sizeof(long) in C or C++. A: (I assume you're talking about C/C++) It's implementation-dependent, but this rule should always be valid: sizeof(short) <= sizeof(int) <= sizeof(long) A: As others have said endlessly, it depends on the compiler you're using (and even the compiler options that you select). However, in practice, with compilers for many 32-bit machines, you will find: * *char: 8 bit *short: 16 bit *int: 32 bit *long: 32 bit *long long: 64 bit (if supported) The C standard basically says that a long can't be shorter than an int, which can't be shorter than a short, etc... For 64-bit CPUs, those often don't change, but you MUST beware that pointers and ints are frequently not the same size: sizeof(int) != sizeof(void*) A: It depends on the compiler. On a 32-bit system, both int and long contain 32 bits. On a 16-bit system, int is 16 bits and long is 32. There are other combinations! A: I think it depends on the hardware you're using. On 32-bit platforms it is typically 4 bytes for both int and long. In C you can use the sizeof() operator to find out. int intBytes; long longBytes; intBytes = sizeof(int); longBytes = sizeof(long); I'm not sure if long becomes 8 bytes on 64-bit architectures or if it stays as 4. A: It depends on your compiler. And your language, for that matter. Try asking a more specific question. A: That depends greatly on the language you are using. In C, "int" will always be the word length of the processor. So 32 bits or 4 bytes on a 32-bit architecture.
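For what it's worth, you can check your own platform's answers without writing a C program: Python's ctypes module exposes the sizes the platform C library was built with, and the ordering guarantee is the only part that is portable:

```python
# Inspect the platform's actual C type sizes via ctypes. The exact
# numbers vary by platform/compiler; only the ordering
# short <= int <= long is guaranteed by the C standard.
import ctypes

sizes = {
    "char": ctypes.sizeof(ctypes.c_char),
    "short": ctypes.sizeof(ctypes.c_short),
    "int": ctypes.sizeof(ctypes.c_int),
    "long": ctypes.sizeof(ctypes.c_long),
    "long long": ctypes.sizeof(ctypes.c_longlong),
    "void*": ctypes.sizeof(ctypes.c_void_p),
}
print(sizes)  # on 64-bit Linux, for example, int stays 4 while long and void* are 8
```

This also makes the sizeof(int) != sizeof(void*) warning above easy to demonstrate on any 64-bit LP64 system.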
{ "language": "en", "url": "https://stackoverflow.com/questions/163938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it possible to determine when a stored procedure was last modified in SQL Server 2000? I know that you can do this in SQL Server 2005, but I'm at a loss for 2000. A: Not to my knowledge. To get around this, I manage my stored procedures in a Visual Studio database project. Every stored procedure is in its own file and has a drop command at the top of the file. When I update the stored procedure through Visual Studio, the procedure's created date is updated in the database because of the drop/create statement. I am able to use the created date in SQL Server 2000 as the last modified date in this manner. A: From all the research I've done on this in the past, I unfortunately have to say no. SQL Server 2000 simply does not store this information, and I've never seen any solution for retrieving it. There are a few alternative methods, but they all involve user intervention. Besides keeping stored procedure scripts in a source control system, I think the next best approach is to use comments inside the stored procedure. Not ideal, but it's better than nothing if you want to track what gets updated. A: SELECT crdate FROM sysobjects WHERE name = 'proc name here' AND type = 'P' A: It looks like you could use: SELECT * FROM INFORMATION_SCHEMA.ROUTINES Found here: Date object last modified
{ "language": "en", "url": "https://stackoverflow.com/questions/163957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Switching from std::string to std::wstring for embedded applications? Up until now I have been using std::string in my C++ applications for embedded systems (routers, switches, telco gear, etc.). For the next project, I am considering switching from std::string to std::wstring for Unicode support. This would, for example, allow end-users to use Chinese characters in the command line interface (CLI). What complications / headaches / surprises should I expect? What, for example, if I use a third-party library which still uses std::string? Since support for international strings isn't that strong of a requirement for the type of embedded systems that I work on, I would only do it if it isn't going to cause major headaches. A: Note that many communications protocols require 8-bit characters (or 7-bit characters, or other varieties), so you will often need to translate between your internal wchar_t/wstring data and external encodings. UTF-8 encoding is useful when you need to have an 8-bit representation of Unicode characters. (See How Do You Write Code That Is Safe for UTF-8? for some more info.) But note that you may need to support other encodings. More and more third-party libraries are supporting Unicode, but there are still plenty that don't. I can't really tell you whether it is worth the headaches. It depends on what your requirements are. If you are starting from scratch, then it will be easier to start with std::wstring than converting from std::string to std::wstring later. A: std::wstring is a good choice for holding Unicode strings on Windows, but not on most other platforms, and certainly not for portable code. It's better to stick with std::string and UTF-8. A: You might get some headaches from the fact that the C++ standard dictates that wide streams are required to convert double-byte characters to single-byte when writing to a file, and how this conversion is done is implementation-dependent.
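To make the trade-off concrete, here is a small illustration (in Python purely for convenience) of how the same text sizes up in UTF-8 versus a 16-bit wide encoding; the command strings are invented examples:

```python
# Illustration of the UTF-8 trade-off discussed above: ASCII CLI text
# stays one byte per character, while CJK text costs more bytes in
# UTF-8 but the stream remains 8-bit clean for wire protocols.
# The command strings are invented examples.
ascii_cmd = "show interfaces"
chinese_cmd = "\u663e\u793a\u63a5\u53e3"  # four CJK characters

utf8_ascii = ascii_cmd.encode("utf-8")     # 1 byte per char, unchanged
utf8_cjk = chinese_cmd.encode("utf-8")     # 3 bytes per char here
utf16_cjk = chinese_cmd.encode("utf-16-le")  # 2 bytes per char here

# Neither encoding is "free" for CJK; the argument for UTF-8 in an
# embedded std::string codebase is interoperability with 8-bit
# protocols and libraries, not raw size.
```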
{ "language": "en", "url": "https://stackoverflow.com/questions/163962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .Net: Convert Generic List of Objects to DataSet Does anyone have code to do this? A: Keith Elder has an example of this.
{ "language": "en", "url": "https://stackoverflow.com/questions/163973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best Source Control Solution for Oracle/ASP.NET Environment? I am trying to plan a way for 5 developers to use Visual Studio 2005/2008 to collaboratively develop an ASP.NET web app on a development web server against an Oracle 8i (soon to be 10g) database. The developers are either on the local network or coming in over a vpn (not a very fast connection). I evaluated the latest Visual SourceSafe, but ran into the following gotchas: 1) We can't use decentralized development because we can't replicate a development Oracle database to all developers' computers. Also, the vpn is too slow to let their local app instances connect to the database server. 2) Since VSS source code is not on the file system, the only way to debug it is to build the app and run the debugger, which only one developer can do at a time on a centralized development server. This is unacceptable. We tried using shadow folders so that every time a file is checked in it gets published to the app instance on the development server, but this failed for remote developers on the vpn. 3) Since the developers do a lot of web code, it is important for productivity reasons that when they SAVE a file, they should be able to immediately see the change working on the development server. 4) There is no easy way to implement a controlled process for pushing files to the production server. Any suggestions on a source control solution that would work under these constraints? Update: I guess since development is forced to be on the server, we need to go with a "Lock and Check In" model. So which source control solution would work best for "Lock and Check In" scenarios? Update: Does Visual SVN support developing centrally against a development server? As in, the dev can immediately see his update on the development server after saving in VS? A: I have used Subversion and TortoiseSVN and was very pleased. A: Is point 1 due to an issue with your database schema (or data)? 
* *We can't use decentralized development because we can't replicate a development oracle database to all developers computers. If not, I strongly suggest that every developer have his or her own environment (Visual Studio, Oracle...) and use your development server for integration purposes. Maybe you could just give them a subset of the data, or maybe just the schema scripts. * *Oracle Express Edition is a perfect fit for this scenario. Besides, sharing the same database violates rule #1 for database work, which in my experience should be enforced anywhere possible. *As Guy suggested, have an automated build allowing any developer to recreate their database schema at any time. *More very useful guidelines can be found here (including rule #1 above). *Define your development process so that parallel development is possible, and only use locks as a last resort. I'm sorry if you already envisioned these solutions and found them unfit for your situation, but I really felt the urge to express them just in case... A: Visual SourceSafe is the spawn of Satan. Look at Subversion, and VisualSVN (with TortoiseSVN). Sure, VisualSVN costs a bit - $49 per seat - but it is a great tool. We have a development team of 6 programmers, and it has been a great boon to us. A: If you can spend the money, then Team Foundation Server is the one that works best in a Visual Studio dev environment. And based on personal experience, it works beautifully over VPN connections. And you can of course have automated builds going on it. A: I would say SVN on price (free), Perforce on ease of integration. You will undoubtedly hear about Git and CVS as well, and there are good reasons to look at them. A: Interesting -- it sounds like you are working on a web site project on the server, and everyone is working on the same physical files. I agree that SVN is far superior to VSS and really good to work with, but in my experience it's really geared toward developers working on a copy of the code locally. 
VSS is a "lock and check in" type of source control, while SVN and TFS and most others are "edit and merge" -- devs all get copies of the source, edit the files as needed, and later merge their changes into source control; if someone else has edited the file in the meantime, they merge the changes together. From a database standpoint, I assume you are checking in your database scripts, then have some automated build packaging and running them (or maybe just a dev or DBA running them manually every so often). In this case, having the developers keep a local copy of the scripts that they can edit and merge using SVN or TFS makes sense. For a team working on a shared copy of the source code on a development server, though, you may get into problems using edit and merge -- a "lock and check in" model of source control may work better for you. Just not VSS, from a corruption and stability standpoint.
{ "language": "en", "url": "https://stackoverflow.com/questions/163980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Converting a UNION query in MySQL I have a very large table (8 GB) with information about files, and I need to run a report against it that would look something like this: (select * from fs_walk_scan where file_path like '\\\\server1\\groot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server1\\hroot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server1\\iroot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server2\\froot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server2\\groot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server3\\hroot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server4\\iroot$\\%' order by file_size desc limit 0,30) UNION ALL (select * from fs_walk_scan where file_path like '\\\\server5\\iroot$\\%' order by file_size desc limit 0,30) [...] order by substring_index(file_path,'\\',4), file_size desc This method accomplishes what I need to do: get a list of the 30 biggest files for each volume. However, this is deathly slow, and the 'like' searches are hardcoded even though they are sitting in another table and can be gotten that way. What I'm looking for is a way to do this without going through the huge table several times. Anyone have any ideas? Thanks. P.S. I can't change the structure of the huge source table in any way. Update: There are indexes on file_path and file_size, but each one of those sub(?)queries still takes about 10 mins, and I have to do 22 minimum. A: What kind of indexes do you have on that table? This index: CREATE INDEX fs_search_idx ON fs_walk_scan(file_path, file_size desc) would speed this query up significantly... if you don't already have one like it. 
Update: You said there are already indexes on file_path and file_size... are they individual indexes? Or is there one single index with both columns indexed together? The difference would be huge for this query. Even with 22 subqueries, if indexed right, this should be blazing fast. A: You could use a regexp: select * from fs_walk_scan where file_path regexp '^\\\\server(1\\[ghi]|2\\[fg]|3\\h|[45]\\i)root$\\' Otherwise if you can modify your table structure, add two columns to hold the server name and base path (and index them), so that you can create a simpler query: select * from fs_walk_scan where server = 'server1' and base_path in ('groot$', 'hroot$', 'iroot$') or server = 'server2' and base_path in ('froot$', 'groot$') You can either set up a trigger to initialise the fields when you insert the record, or else do a bulk update afterwards to fill in the two extra columns. A: You could do something like this... assuming fs_list has a list of your "LIKE" searches: DELIMITER $$ DROP PROCEDURE IF EXISTS `test`.`proc_fs_search` $$ CREATE PROCEDURE `test`.`proc_fs_search` () BEGIN DECLARE cur_path VARCHAR(255); DECLARE done INT DEFAULT 0; DECLARE list_cursor CURSOR FOR select file_path from fs_list; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; SET @sql_query = ''; OPEN list_cursor; REPEAT FETCH list_cursor INTO cur_path; IF NOT done THEN IF @sql_query <> '' THEN SET @sql_query = CONCAT(@sql_query, ' UNION ALL '); END IF; SET @sql_query = CONCAT(@sql_query, ' (select * from fs_walk_scan where file_path like ''', cur_path , ''' order by file_size desc limit 0,30)'); END IF; UNTIL done END REPEAT; SET @sql_query = CONCAT(@sql_query, ' order by file_path, file_size desc'); PREPARE stmt FROM @sql_query; EXECUTE stmt; DEALLOCATE PREPARE stmt; END $$ DELIMITER ; A: Try this. You want to get every record where there are fewer than 30 records with greater file size and the same file path. 
SELECT * FROM fs_walk_scan a WHERE ( SELECT COUNT(*) FROM fs_walk_scan b WHERE b.file_size > a.file_size AND b.file_path = a.file_path ) < 30 Edit: Apparently this performs like a dog. So... How about this looping syntax? SELECT DISTINCT file_path INTO tmp1 FROM fs_walk_scan a DECLARE path VARCHAR(255); SELECT MIN(file_path) INTO path FROM tmp1 WHILE path IS NOT NULL DO SELECT * FROM fs_walk_scan WHERE file_path = path ORDER BY file_size DESC LIMIT 0,30 SELECT MIN(file_path) INTO path FROM tmp1 WHERE file_path > path END WHILE The idea here is to 1. get a list of the file paths 2. loop, doing a query for each path which will get the 30 largest file sizes. (I did look up the syntax, but I'm not very hot on MySQL, so apologies if it's not quite there. Feel free to edit/comment) A: How about something like this (haven't tested it, but looks close): select * from fs_walk_scan where file_path like '\\\\server%' and file_path like '%root$\\%' order by file_size desc This way you're doing a pair of comparisons on the individual field which will generically match what you've described. It may be possible to use a regex, too, but I've not done it. A: You can use grouping and a self join to achieve this. SELECT substring_index(file_path, '\\', 4), file_path from fs_walk_scan as ws1 WHERE 30 > ( select count(*) from fs_walk_scan as ws2 where substring_index(ws2.file_path, '\\', 4) = substring_index(ws1.file_path, '\\', 4) and ws2.file_size > ws1.file_size and ws2.file_path <> ws1.file_path) group by substring_index(file_path, '\\', 4) It still is an O(n) query (n being the number of groups) but is more flexible and shorter. Edit: Another approach is using variables. Feasibility for your purpose will depend on how you are going to run this query.
set @idx=0; set @cur_vol=0; SELECT file_volume, file_path, file_size FROM ( SELECT file_volume, file_path, file_size, IF(@cur_vol != a.file_volume, @idx:=1, @idx:=@idx+1) AS row_index, IF(@cur_vol != a.file_volume, @cur_vol:=a.file_volume, 0) AS discard FROM (SELECT substring_index(file_path, '\\', 4) as file_volume, file_path, file_size FROM fs_walk_scan ORDER BY substring_index(file_path,'\\',4), file_size DESC) AS a HAVING row_index <= 30) AS b; I haven't tried this code yet, but the concept of variables can be used like this for your purpose.
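If you want to experiment with the top-N-per-group idea before pointing anything at the 8 GB table, here is a miniature, self-contained sketch using SQLite from Python. The data is hypothetical, it keeps the top 2 per volume instead of the top 30, and since SQLite has no substring_index(), a tiny user-defined function stands in for it:

```python
import sqlite3

# Hypothetical miniature version of fs_walk_scan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fs_walk_scan (file_path TEXT, file_size INTEGER)")
rows = [
    ("\\\\server1\\groot$\\a", 10),
    ("\\\\server1\\groot$\\b", 30),
    ("\\\\server1\\groot$\\c", 20),
    ("\\\\server2\\froot$\\x", 5),
    ("\\\\server2\\froot$\\y", 50),
    ("\\\\server2\\froot$\\z", 40),
]
conn.executemany("INSERT INTO fs_walk_scan VALUES (?, ?)", rows)

# Stand-in for MySQL's substring_index(file_path, '\\', 4): the path up to
# the fourth backslash-delimited field, i.e. the volume.
def volume(path: str) -> str:
    return "\\".join(path.split("\\")[:4])

conn.create_function("volume", 1, volume)

# Keep a row only if fewer than N rows on the same volume are larger --
# the same correlated-subquery pattern suggested in one of the answers.
query = """
SELECT file_path, file_size FROM fs_walk_scan a
WHERE (SELECT COUNT(*) FROM fs_walk_scan b
       WHERE volume(b.file_path) = volume(a.file_path)
         AND b.file_size > a.file_size) < 2
ORDER BY volume(file_path), file_size DESC
"""
result = conn.execute(query).fetchall()
print(result)
```

On the real table this pattern still needs the composite (path, size) index discussed above to perform acceptably; the sketch is only meant to show the shape of the query.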
{ "language": "en", "url": "https://stackoverflow.com/questions/163994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Access denied when creating a virtual directory via Web Deployment Project I’m trying to use a (VS 2008) Web Deployment project in a TFS solution to deploy the web site to the (TFS 2008) build server to run web based unit tests. For some reason, that I can't yet figure out, it is failing to create the virtual directory: Using "CreateVirtualDirectory" task from assembly "C:\Program Files\MSBuild\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.Tasks.dll". Task "CreateVirtualDirectory" Initializing IIS Web Server... C:\Program Files\MSBuild\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets(667,5): error : Access is denied. C:\Program Files\MSBuild\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets(667,5): error : Failed to create virtual directory 'abc'. Done executing task "CreateVirtualDirectory" -- FAILED. The TFSService user certainly is in the Administrators group on the TFS Build machine (which is running Windows Server 2008). I don’t know what else could be wrong. I’ve checked the event log and there are no clues there. I am able to manually create the virtual directory on that machine through the IIS console with no problem. Any ideas what could be the problem or suggestions for how to diagnose this further? A: It has got to be permissions... Did you try putting the TFSService account in the same groups you are in? A: Is the TFS account running under the same privileges as the account that you use to connect to IIS? Do as Craig suggested and move the TFS account into the groups that you participate in. A: You're sure that the build is running under the TFSService id and not under another id set up just for builds, which may not be in the Administrators group? I haven't done more than just play with automated builds since I do mostly solo development, but I recall setting up a separate build id when I was looking at this. A: I have seen this occur when the IIS server wasn't running on the default port.
I'd recommend checking IIS to see if it's running on port 80 as a step to diagnose your issue further. A: I eventually managed to get deployment working by calling the _CopyWebApplication build target of the web application from my TFS build script (after manually creating the IIS virtual directory). I had to add an additional target though to get linked files in the project to be copied also as the built in _CopyWebApplication target doesn't include those.
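For reference, the kind of TFSBuild.proj fragment that workaround describes might look roughly like this. The project name and output paths are illustrative, and _CopyWebApplication is an internal (undocumented) target of the web application build targets, so treat this as a sketch rather than a supported API:

```xml
<Target Name="AfterCompile">
  <!-- Hypothetical web project; adjust the paths to your team build layout. -->
  <MSBuild Projects="$(SolutionRoot)\MyWebApp\MyWebApp.csproj"
           Targets="ResolveReferences;_CopyWebApplication"
           Properties="OutDir=$(OutDir);WebProjectOutputDir=$(OutDir)_PublishedWebsites\MyWebApp" />
</Target>
```

As the answer notes, linked files are not handled by this target, so an additional copy target is needed for those.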
{ "language": "en", "url": "https://stackoverflow.com/questions/163997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Classical set operations for java.util.Collection Is there any built-in functionality for classical set operations on the java.util.Collection class? My specific implementation would be for ArrayList, but this sounds like something that should apply for all subclasses of Collection. I'm looking for something like: ArrayList<Integer> setA ... ArrayList<Integer> setB ... ArrayList<Integer> setAintersectionB = setA.intersection(setB); ArrayList<Integer> setAminusB = setA.subtract(setB); After some searching, I was only able to find home-grown solutions. Also, I realize I may be confusing the idea of a "Set" with the idea of a "Collection", not allowing and allowing duplicates respectively. Perhaps this is really just functionality for the Set interface? In the event that nobody knows of any built-in functionality, perhaps we could use this as a repository for standard practice Java set operation code? I imagine this wheel has been reinvented numerous times. A: For mutable operations, see the accepted answer. For an immutable variant you can do this with Java 8 subtraction set1.stream().filter(item -> !set2.contains(item)).collect(Collectors.toSet()) intersection set1.stream().filter(item -> set2.contains(item)).collect(Collectors.toSet()) A: Are you looking for the java.util.Set interface (and its implementations HashSet and TreeSet (sorted))? The interface defines removeAll(Collection c) which looks like subtract(), and retainAll(Collection c) which looks like intersection. A: I would recommend Google Guava. The Sets class seems to have exactly what you are looking for. It has an intersection method and a difference method. This presentation is probably something you want to watch if you're interested. It refers to Google Collections, which was Guava's original name. A: Intersection is done with Collection.retainAll; subtraction with Collection.removeAll; union with Collection.addAll. In each case, a Set will act like a set and a List will act like a list.
As mutable objects, they operate in place. You'll need to explicitly copy if you want to retain the original mutable object unmutated.
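To make that concrete, here is a small sketch. The helper names intersection and subtract are made up here, mirroring the methods the question wished Collection had; each one copies first so the inputs stay unmutated:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class SetOps {
    // A ∩ B: retainAll keeps only the elements also present in b.
    static <T> List<T> intersection(Collection<T> a, Collection<T> b) {
        List<T> result = new ArrayList<>(a); // copy so 'a' is not mutated
        result.retainAll(b);
        return result;
    }

    // A - B: removeAll drops every element present in b.
    static <T> List<T> subtract(Collection<T> a, Collection<T> b) {
        List<T> result = new ArrayList<>(a);
        result.removeAll(b);
        return result;
    }

    public static void main(String[] args) {
        List<Integer> setA = Arrays.asList(1, 2, 3, 4);
        List<Integer> setB = Arrays.asList(3, 4, 5, 6);
        System.out.println(intersection(setA, setB)); // [3, 4]
        System.out.println(subtract(setA, setB));     // [1, 2]
    }
}
```

Union works the same way with addAll on a copy; on a List it keeps duplicates, on a Set it does not.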
{ "language": "en", "url": "https://stackoverflow.com/questions/163998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Why is fread reaching the EOF early? I am writing a C library that reads a file into memory. It skips the first 54 bytes of the file (header) and then reads the remainder as data. I use fseek to determine the length of the file, and then use fread to read in the file. The loop runs once and then ends because the EOF is reached (no errors). At the end, bytesRead = 10624, ftell(stream) = 28726, and the buffer contains 28726 values. I expect fread to read 30,000 bytes and the file position to be 30054 when EOF is reached. C is not my native language so I suspect I've got a dumb beginner mistake somewhere. Code is as follows: const size_t headerLen = 54; FILE * stream; errno_t ferrno = fopen_s( &stream, filename.c_str(), "r" ); if(ferrno!=0) { return -1; } fseek( stream, 0L, SEEK_END ); size_t bytesTotal = (size_t)(ftell( stream )) - headerLen; //number of data bytes to read size_t bytesRead = 0; BYTE* localBuffer = new BYTE[bytesTotal]; fseek(stream,headerLen,SEEK_SET); while(!feof(stream) && !ferror(stream)) { size_t result = fread(localBuffer+bytesRead,sizeof(BYTE),bytesTotal-bytesRead,stream); bytesRead+=result; } Depending on the reference you use, it's quite apparent that adding a "b" to the mode flag is the answer. Seeking nominations for the bonehead-badge. :-) This reference talks about it in the second paragraph, second sentence (though not in their table). MSDN doesn't discuss the binary flag until halfway down the page. OpenGroup mentions the existence of the "b" tag, but states that it "shall have no effect". A: Perhaps it's a binary mode issue. Try opening the file with "r+b" as the mode. EDIT: As noted in a comment, "rb" is likely a better match to your original intent since "r+b" will open it for read/write and "rb" is read-only. A: Also worth noting that simply including binmode.obj into your link command will do this for you for all file opens.
A: A solution, based on the previous answers (note the "rb" mode, which opens the file in binary): errno_t ferrno = fopen_s( &stream, filename.c_str(), "rb" ); size_t bytesRead = 0; BYTE* localBuffer = new BYTE[bytesTotal]; fseek(stream,headerLen,SEEK_SET); while(!feof(stream) && !ferror(stream)) { size_t result = fread(localBuffer+bytesRead,sizeof(BYTE),bytesTotal-bytesRead,stream); bytesRead+=result; }
{ "language": "en", "url": "https://stackoverflow.com/questions/164002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Sql Server 2005 Connection Limit Is there a connection limit on SQL Server 2005 Developer Edition? We have many threads grabbing connections, and I know ADO.NET does connection pooling, but I get OutOfMemory exceptions. When we take out the db connections, it works fine. A: This is the response to that question on Euan Garden's (a Program Manager for Visual Studio Team Edition) blog: There are no limits in terms of memory, db size or procs for DE, it is essentially Enterprise Edition. There is however a licensing restriction that prevents it from being used in production. Therefore, you probably just need to make sure you are closing your connection objects properly. The using block will be perfect for such a job... A: You may not be closing or disposing of your connection objects correctly. Make sure your code looks something like this: using (SqlConnection conn = new SqlConnection("connectionstring")) { conn.Open(); // database access code goes here } The using block will automatically close and dispose of your connection object. A: 32767 on Enterprise Edition <ServerProductVersion>9.00.3235.00</ServerProductVersion> <ServerProductLevel>SP2</ServerProductLevel> <ServerEdition>Enterprise Edition</ServerEdition> <ServerEngineEdition>3</ServerEngineEdition> How I check...
CREATE FUNCTION [dbo].svfV1GetSessionAndServerEnvironmentMetaData() RETURNS xml AS BEGIN -- Declare the return variable here DECLARE @ResultVar xml -- Add the T-SQL statements to compute the return value here SET @ResultVar = ( SELECT @@SPID as SPID, @@ProcID as ProcId, @@DBTS as DBTS, getdate() as DateTimeStamp, System_User as SystemUser, Current_User as CurrentUser, Session_User as SessionUser, User_Name() as UserName, Permissions() as UserSessionPermissionsBitmap, Host_Id() as HostId, Host_Name() as HostName, App_Name() as AppName, ServerProperty('ProcessId') as ServerProcessId, ServerProperty('MachineName') as ServerMachineName, ServerProperty('ServerName') as ServerServerName, ServerProperty('ComputerNamePhysicalNetBIOS') as ServerComputerNamePhysicalNetBIOS, ServerProperty('InstanceName') as ServerInstanceName, ServerProperty('ProductVersion') as ServerProductVersion, ServerProperty('ProductLevel') as ServerProductLevel, @@CONNECTIONS as CumulativeSqlConnectionsSinceStartup, @@TOTAL_ERRORS as CumulativeDiskWriteErrorsSinceStartup, @@PACKET_ERRORS as CumulativeNetworkPacketErrorsSinceStartup, --Note: --If the time returned in @@CPU_BUSY, or @@IO_BUSY exceeds approximately 49 days of cumulative CPU time, --you receive an arithmetic overflow warning. In that case, --the value of @@CPU_BUSY, @@IO_BUSY and @@IDLE variables are not accurate.
-- @@CPU_BUSY * @@TIMETICKS as CumulativeMicroSecondsServerCpuBusyTimeSinceStartup, -- @@IO_BUSY * @@TIMETICKS as CumulativeMicroSecondsServerIoBusyTimeSinceStartup, -- @@IDLE * @@TIMETICKS as CumulativeMicroSecondsServerIdleTimeSinceStartup, ServerProperty('BuildClrVersion') as ServerBuildClrVersion, ServerProperty('Collation') as ServerCollation, ServerProperty('CollationID') as ServerCollationId, ServerProperty('ComparisonStyle') as ServerComparisonStyle, ServerProperty('Edition') as ServerEdition, ServerProperty('EditionID') as ServerEditionID, ServerProperty('EngineEdition') as ServerEngineEdition, ServerProperty('IsClustered') as ServerIsClustered, ServerProperty('IsFullTextInstalled') as ServerIsFullTextInstalled, ServerProperty('IsIntegratedSecurityOnly') as ServerIsIntegratedSecurityOnly, ServerProperty('IsSingleUser') as ServerIsSingleUser, ServerProperty('LCID') as ServerLCID, ServerProperty('LicenseType') as ServerLicenseType, ServerProperty('NumLicenses') as ServerNumLicenses, ServerProperty('ResourceLastUpdateDateTime') as ServerResourceLastUpdateDateTime, ServerProperty('ResourceVersion') as ServerResourceVersion, ServerProperty('SqlCharSet') as ServerSqlCharSet, ServerProperty('SqlCharSetName') as ServerSqlCharSetName, ServerProperty('SqlSortOrder') as ServerSqlSortOrder, ServerProperty('SqlSortOrderName') as ServerSqlSortOrderName, @@MAX_CONNECTIONS as MaxAllowedConcurrentSqlConnections, SessionProperty('ANSI_NULLS') as SessionANSI_NULLS, SessionProperty('ANSI_PADDING') as SessionANSI_PADDING, SessionProperty('ANSI_WARNINGS') as SessionANSI_WARNINGS, SessionProperty('ARITHABORT') as SessionARITHABORT, SessionProperty('CONCAT_NULL_YIELDS_NULL') as SessionCONCAT_NULL_YIELDS_NULL, SessionProperty('NUMERIC_ROUNDABORT') as SessionNUMERIC_ROUNDABORT, SessionProperty('QUOTED_IDENTIFIER') as SessionQUOTED_IDENTIFIER FOR XML PATH('SequenceIdEnvironment') ) -- Return the result of the function RETURN @ResultVar END on my SQL Server database engine instance 
returns <SequenceIdEnvironment> <SPID>56</SPID> <ProcId>1666821000</ProcId> <DBTS>AAAAAAAAB9A=</DBTS> <DateTimeStamp>2008-10-02T15:09:26.560</DateTimeStamp> ... <CurrentUser>dbo</CurrentUser> <SessionUser>dbo</SessionUser> <UserName>dbo</UserName> <UserSessionPermissionsBitmap>67044350</UserSessionPermissionsBitmap> <HostId>3852</HostId> ... <AppName>Microsoft SQL Server Management Studio - Query</AppName> <ServerProcessId>508</ServerProcessId> ... <ServerProductVersion>9.00.3235.00</ServerProductVersion> <ServerProductLevel>SP2</ServerProductLevel> <CumulativeSqlConnectionsSinceStartup>169394</CumulativeSqlConnectionsSinceStartup> <CumulativeDiskWriteErrorsSinceStartup>0</CumulativeDiskWriteErrorsSinceStartup> <CumulativeNetworkPacketErrorsSinceStartup>0</CumulativeNetworkPacketErrorsSinceStartup> <ServerBuildClrVersion>v2.0.50727</ServerBuildClrVersion> <ServerCollation>SQL_Latin1_General_CP1_CI_AS</ServerCollation> <ServerCollationId>872468488</ServerCollationId> <ServerComparisonStyle>196609</ServerComparisonStyle> <ServerEdition>Enterprise Edition</ServerEdition> ... <ServerEngineEdition>3</ServerEngineEdition> <ServerIsClustered>0</ServerIsClustered> <ServerIsFullTextInstalled>1</ServerIsFullTextInstalled> <ServerIsIntegratedSecurityOnly>0</ServerIsIntegratedSecurityOnly> <ServerIsSingleUser>0</ServerIsSingleUser> ...
<ServerResourceLastUpdateDateTime>2008-03-12T18:59:08.633</ServerResourceLastUpdateDateTime> <ServerResourceVersion>9.00.3235</ServerResourceVersion> <ServerSqlCharSet>1</ServerSqlCharSet> <ServerSqlCharSetName>iso_1</ServerSqlCharSetName> <ServerSqlSortOrder>52</ServerSqlSortOrder> <ServerSqlSortOrderName>nocase_iso</ServerSqlSortOrderName> <MaxAllowedConcurrentSqlConnections>32767</MaxAllowedConcurrentSqlConnections> <SessionANSI_NULLS>1</SessionANSI_NULLS> <SessionANSI_PADDING>1</SessionANSI_PADDING> <SessionANSI_WARNINGS>1</SessionANSI_WARNINGS> <SessionARITHABORT>1</SessionARITHABORT> <SessionCONCAT_NULL_YIELDS_NULL>1</SessionCONCAT_NULL_YIELDS_NULL> <SessionNUMERIC_ROUNDABORT>0</SessionNUMERIC_ROUNDABORT> <SessionQUOTED_IDENTIFIER>1</SessionQUOTED_IDENTIFIER> </SequenceIdEnvironment> A: Are the out-of-memory exceptions coming from .NET? If the error was on the server you would probably see a "connection refused" message instead.
{ "language": "en", "url": "https://stackoverflow.com/questions/164008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Reading some integers then a line of text in C++ I'm reading input in a C++ program. First some integers, then a string. When I try reading the string with getline(cin,stringname);, it doesn't read the line that the user types: instead, I get an empty line, from when the user pressed Enter after typing the integers. cin>>track.day; //Int cin>>track.seriesday; //Int getline(cin,track.comment); //String How can I clear the cin (cin.clear() doesn't work) so that the string won't fill itself with the "enter" key? It's normal input handling, nothing special at the top of the code. I had a problem like this before but forgot the solution: I need to clear cin somehow so the string won't get filled with the Enter key. A: I think that reading the ints with cin does not consume the newline before the sentence. cin skips leading whitespace and stops reading a number when it encounters a non-digit, including whitespace. So: std::cin >> num1; std::cin >> num2; std::cin.ignore(INT_MAX, '\n'); // ignore the new line which follows num2 std::getline(std::cin, sentence); might work for you
{ "language": "en", "url": "https://stackoverflow.com/questions/164022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What guidelines are appropriate for determining when to implement a class member as a property versus a method? The .NET coding standards PDF from SubMain that has started showing up in the "Sponsored By" area seems to indicate that properties are only appropriate for logical data members (see pages 34-35 of the document). Methods are deemed appropriate in the following cases: * *The operation is a conversion, such as Object.ToString(). *The operation is expensive enough that you want to communicate to the user that they should consider caching the result. *Obtaining a property value using the get accessor would have an observable side effect. *Calling the member twice in succession produces different results. *The order of execution is important. *The member is static but returns a value that can be changed. *The member returns an array. Do most developers agree on the properties vs. methods argument above? If so, why? If not, why not? A: They seem sound, and basically in line with the MSDN member design guidelines: http://msdn.microsoft.com/en-us/library/ms229059.aspx One point that people sometimes seem to forget (*) is that callers should be able to set properties in any order. Particularly important for classes that support designers, as you can't be sure of the order generated code will set properties. (*) I remember early versions of the Ajax Control Toolkit on Codeplex had numerous bugs due to developers forgetting this one. As for "Calling the member twice in succession produces different results", every rule has an exception, as the property DateTime.Now illustrates. A: Those are interesting guidelines, and I agree with them. It's interesting in that they are setting the rules based on "everything is a property except the following". That said, they are good guidelines for avoiding problems by defining something as a property that can cause issues later.
At the end of the day a property is just a structured method, so the rule of thumb I use is based on Object Orientation -- if the member represents data owned by the entity, it should be defined as a property; if it represents behavior of the entity it should be implemented as a method. A: Fully agreed. According to the coding guidelines properties are "nouns" and methods are "verbs". Keep in mind that a user may call a property very often while thinking it is a "cheap" operation. On the other hand, it's usually expected that a method may "take more time", so a user will consider caching method results. A: What's so interesting about those guidelines is that they are clearly an argument for having extension properties as well as extension methods. Shame. A: I never personally came to the conclusion or had the gut feeling that properties are fast, but the guidelines say they should be, so I just accept it. I always struggle with what to name my slow "get" methods while avoiding FxCop warnings. GetPeopleList() sounds good to me, but then FxCop tells me it might be better as a property.
{ "language": "en", "url": "https://stackoverflow.com/questions/164023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: On Win32 how do you move a thread to another CPU core? I'd like to make sure that a thread is moved to a specific CPU core and can never be moved from it by the scheduler. There's a SetThreadAffinityMask() call but there's no GetThreadAffinityMask(). The reason I need this is because high resolution timers will get messed up if the scheduler moves that thread to another CPU. A: If you could call a function that returns a number indicating what CPU the thread is running on, without using affinity, the answer would often be wrong as soon as the function returned. So checking the mask returned by SetThreadAffinityMask() is as close as you're going to get, outside of kernel code running at elevated IRQL, and even that's changing. It sounds like you're trying to work around RDTSC clock skew issues. If you are using the RDTSC instruction directly, consider calling QueryPerformanceCounter() instead: * *QueryPerformanceCounter() on Windows Vista uses the HPET if it is supported by the chipset and is in the system's ACPI tables. *AMD-based systems using the AMD Processor Driver will mostly compensate for multi-core clock skew if you call QueryPerformanceCounter(), but this does nothing for applications that use RDTSC directly. The AMD Dual-Core Optimizer is a hack for applications that use RDTSC directly, but if the amount of clock skew is changing due to C1 clock ramping (where the clock speed is reduced in the C1 power state), you will still have clock skew. And these utilities probably aren't very widespread, so using affinity with QueryPerformanceCounter() is still a good idea. A: What Ken said. But if you don't trust it's working, you can call SetThreadAffinityMask again, and confirm that the return value matches what you expect the mask to be. (But then of course, if you don't trust the function then you can't trust the second call...) Don't be confused by the existence of GetProcessAffinityMask. 
That function is not there to verify that SetProcessAffinityMask worked, but e.g. so you can construct a thread affinity that is a subset of the process affinity. Just look at the return value and verify that it isn't 0 and you should be fine. A: You should probably just use SetThreadAffinityMask and trust that it is working. MSDN
{ "language": "en", "url": "https://stackoverflow.com/questions/164026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Docking a CControlBar derived window How can I dock a CControlBar derived window to the middle of a splitter window (CSplitterWnd)? I would like the bar to be repositioned whenever the splitter is moved. To make it a little clearer as to what I'm after, imagine the vertical ruler in the Dialog Editor in Visual Studio (MFC only). It gets repositioned whenever the tree view is resized. A: Alf, In the case of VS, there's no splitter used: The resource view is a resizable ControlBar (It looks and feels like a splitter but it isn't a CSplitterWnd). The rest is a child frame (either tabbed or MDI. Go to Tools/Options/Environment/General and choose Multiple Documents to convince yourself). The ruler is part (controlbar?) of the child frame. In your case, I think you don't want a 3-pane splitter. You need a 2-pane splitter and the control bar should be part of your view (it wouldn't be a CControlBar per se). Unless you use MDI, in which case you can make it a true ControlBar in your child frame. HTH A: Serge, I apologize, I wasn't very clear. The splitter would be between the resource view and the ruler bar. It would look like this: Resource View | Vertical ruler | View In any case, I found the (now obvious) answer: split the main frame into three windows: m_wndSplitter.CreateStatic(this, 1, 3); m_wndLeftPane.Create(&m_wndSplitter,WS_CHILD|WS_VISIBLE,m_wndSplitter.IdFromRowCol(0, 0)); m_ruler.Create(&m_wndSplitter,WS_CHILD|WS_VISIBLE,m_wndSplitter.IdFromRowCol(0, 1)); m_wndSplitter.CreateView(0, 2, pContext->m_pNewViewClass, CSize(300, 0), pContext); SetActiveView((CScrollView*)m_wndSplitter.GetDlgItem(m_wndSplitter.IdFromRowCol(0, 2)));
{ "language": "en", "url": "https://stackoverflow.com/questions/164039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Basic programming/algorithmic concepts I'm about to start (with fellow programmers) a programming & algorithms club in my high school. The language of choice is C++ - sorry about that, I can't change this. We can assume students have little to no experience in the aforementioned topics. What do you think are the most basic concepts I should focus on? I know that teaching something that's already obvious to me isn't an easy task. I realize that the very first meeting should be given extreme attention - to not scare students away - hence I ask you. Edit: I noticed that probably the main difference between programmers and beginners is the "programmer's way of thinking" - I mean, conceptualizing problems as, you know, algorithms. I know it's just a matter of practice, but do you know any kind of exercises/concepts/things that could stimulate development in this area? A: Breaking it Down To me, what's unique about programming is the need to break down tasks into small enough steps for the computer. This varies by language, but the fact that you may have to write a "for loop" just to count to 100 takes getting used to. The "top-down" approach may help with this concept. You start by creating a master function for your program, like filterItemsByCriteria(); You have no idea how that will work, so you break it down into further steps: (Note: I don't know C++, so this is just a generic example) filterItemsByCriteria() { makeCriteriaList(); lookAtItems(); removeNonMatchingItems(); } Then you break each of those down further. Pretty soon you can define all the small steps it takes to make your criteria list, etc. When all of the little functions work, the big one will work. It's kind of like the game kids play where they keep asking "why?" after everything you say, except you have to keep asking "how?" A: Linked lists - a classic interview question, and for good reason. A: I would try to work with a C subset, and not try to start with the OO stuff.
That can be introduced after they understand some of the basics. A: Greetings! I think you are getting WAY ahead of yourself in forcing a specific language and working on specific topics and a curriculum. It sounds like you (and some of the responders) are confusing "advising a programming club" with "leading a programming class". They are very different things. I would get the group together, and the group should decide what exactly they want to get out of the club. In essence, make a "charter" for the club. Then (and only then) can you make determinations such as preferred language/platform, how often to meet, what will happen at the meetings, etc. It may turn out that the best approach is a "survey", where different languages/platforms are explored. Or it may turn out that the best approach is a "topical" one, where the topic changes (like a book club) on a regular basis (this month is pointers, next month is sorting, the following is recursion, etc.) and then examples and discussions occur in various languages. As an aside, I would consider a "language-agnostic" orientation for the club. Encourage the kids to explore different languages and platforms. Good luck, and great work! A: Make programming fun! Possible things to talk about would be programming competitions that your club could either hold itself or enter locally. I compete in programming competitions at the University (ACM) level and I know for a fact that they have them at lower levels as well. Those kinds of events can really draw out some competitive spirit and bring the club members closer. Things don't always have to be about programming either. Perhaps suggesting a LAN party where you play games, discuss programming, etc., could be a good idea as well.
In terms of actual topics to go over that are programming/algorithm related, I would suggest as a group attempting some of these programming problems in this programming competition primer "Programming Challenges": Amazon Link They start out with fairly basic programming problems and slowly progress into problems that require various Data Structures like: * *Stacks *Queues *Dictionaries *Trees *Etc Most of the problems are given in C++. Eventually they progress into more advanced problems involving Graph Traversal and popular Graph algorithms (Dijkstra's, etc) , Combinatrics problems, etc. Each problem is fun and given in small "story" like format. Be warned though, some of these are very hard! Edit: Pizza and Soda never hurts either when it comes to getting people to show up for your club meetings. Our ACM club has pizza every meeting (once a month). Even though most of us would still show up it is a nice ice breaker. Especially for new clubs or members. A: Well, it's a programming club, so it should be FUN! So I would say dive into some hand on experience right away. Start with explaining what a main() method is,then have students write a hello world program. Gradually improve the hello world program so it has functions and prints out user inputs. I would say don't go into algorithm too fast for beginners, let them play with C++ first. A: Someone mentioned above, "make programming fun". It is interesting today that people don't learn for the sake of learning. Most people want instant gratification. Teach a bit of logic using Programming. This helps with(and is) problem solving. The classing one I have in my head are guessing games. * *Have them make a program that guesses at a number between 0 and 100. *Have them make a black jack clone ... I have done this in basic :-( Make paper instructions. A: * *Explain the "Fried eggs" story. Ask the auditory what they would do to make themselves fried eggs. Make them note the step they think about. 
Probably you will receive less than 5 steps algorithm. Then explain them how many steps should be written down if we want to teach a computer to fry eggs. Something like: 1) Go to the Fridge 2) Open the fridge door 3) Search for eggs 4) If there are no eggs - go to the shop to buy eggs ( this is another function ;) ) 5) If there are eggs - calculate how many do you need to fry 6) Close the fridge door 7) e.t.c. :) *Start with basics of C - syntax semantics e.t.c, and in parallel with that explain the very basic algorithms like bubble sort. *After the auditory is familiar with structured programming (this could take several weeks or months, depending how often you make the lessons), you can advance to C++ and OOP. A: Pseudocode should be a very first. Edit: If they are total programming beginners then I would make the first half just about programming. Once you get to a level where talking about algorithms would make sense then pseudocode is really important to get under the nails. A: The content in Deitel&Deitel's C++ programming is a decent introduction, and the exercises proposed at the end of each chapter are nice toy problems. Basically, you're talking about: - control structures - functions - arrays - pointers and strings You might want to follow up with an introduction to the STL ("ok, now that we've done it the hard way... here's a simpler option") A: Start out by making them understand a problem like for instance sorting. This is very basic and they should be able to relate quite fast. Once they see the problem then present them with the tools/solution to solve it. I remember how it felt when I first was show an example of merge-sort. I could follow all the steps but what the hell was I for? Make then crave a solution to a problem and they will understand the tool and solution much better. A: start out with a simple "hello world" program. This introduces fundamentals such as variables, writing to a stream and program flow. 
Then add complexity from there (linked lists, file I/O, getting user input, etc.). The reason I say start with hello world is because the kid will get to see a running program really quickly. It's nearly immediate feedback, as they will have written a running program right from the start. A: IMO, Big-O is one of the more important concepts for beginning programmers to learn. A: Have a debugging contest. Provide code samples that include a bug. Have a contest to see who can find the most bugs, or find them the fastest. There is an excellent book, How Not to Program in C++, that you could use to start with. You always learn best from mistakes, and I prefer to learn from someone else's. It will also let those with little experience learn by seeing code, even if the code only almost works. A: In addition to the answers to this question, there are certain important topics to cover. Here's an example of how you could structure the lessons. First Lesson: Terminology and Syntax. Terminology to cover: variable, operator, loop (iteration), method, reserved word, data type, class. Syntax to cover: assignment, operation, if/then/else, for loop, while loop, select, input/output. Second Lesson: Basic Algorithm Construction. Cover a few simple algorithms, involving some input, maybe a for or a while loop. Third Lesson: More Advanced Algorithm Topics. This is for things like recursion, matrix manipulation, and higher-level mathematics. You don't have to get into topics that are too complex, but introduce enough complexity to be useful in a real project. Final Lesson: Group Project. Make a project that groups can get involved in. These don't have to be single-day lessons. You can spread the topics across multiple days. A: Thanks for your replies! And how would you teach them actual problem solving?
I know a bunch of students that know C++ syntax and a few basic algorithms, but they can't apply what they know when they solve real problems - they don't know the approach, the way to transcribe their thoughts into a set of strict steps. I'm not talking about 'high-level' approaches like dynamic programming, greedy, etc., but about the basic algorithmic mindset. I assume it's just because of the poor learning process they went through. In other sciences - math, for example - they are really brilliant. A: Just because you are familiar with algorithms does not mean you can implement them, and just because you can program does not mean you can implement an algorithm. Start simple with each topic (keep programming separate from designing algorithms). Once they have a handle on each, slowly start to bring the two concepts together. A: Wow. C++ is one of the worst possible languages to start with, in terms of the amount of unrelated crap you need to get anything working (Java would be slightly worse, I guess). When teaching beginners in a boilerplate-heavy environment, it's usual to start with "here's a simple C program. We'll discuss what all this crap at the top of the file is for later, but for now, concentrate on the lines between 'int main(void)' and the 'return' statement, which is where all the useful work is accomplished". Once you're past that point, basic concepts to cover include the basic data structures (arrays, linked lists, trees, and dictionaries) and the basic algorithms (sorting, searching, etc.). A: Have your club learn how to actually program in any language by teaching the concepts of building software. Instead of running out and buying a dozen licenses for Visual Studio, have students use compilers, make systems, source files, object files, and libraries in order to turn their C code into programs.
I feel this is truly the beginning and actually empowers these kids to understand how to make software on any platform, without the crutches that many educational institutions like to rely on. A: As for the language of choice - congratulations - you'll find C++ is very rich in making you think of mathematical shortcuts and millions of ways to make your code perform even better (or to implement fancy patterns). To the question: when I was beginning to program I would always try to break down one real-life problem into several steps, and then, as I saw similarities between tasks or the data they transform, I would always try to find a lazier, easier, meaner way to implement it. Elegance came later, when learning patterns and real algorithms. A: Hank: Big O??? You mean tell beginning programmers that their code is O(n^2) and yours is O(n log n)?? A: I could see a few different ways to take this: 1) Basic programming building blocks. What are conditional statements, e.g. switch and if/else? What are repetition statements, e.g. for and while loops? How do we combine these to get a program to be the sequence of steps we want? You could take something as easy as adding up a grocery bill or converting temperatures or distances from metric to imperial or vice versa. What are basic variable types like a string, integer, or double? Also in here you could bring in Boolean algebra as an advanced idea, or possibly teach how to do arithmetic in base 2 or 16, which some people may find easy and others find hard. 2) Algorithmically, what are the similar building blocks? Sorting is a pretty simple topic that can be widely discussed and analysed: start from Bubblesort, the most brain-dead way to do it (just swap elements that seem out of order), and try to figure out how to make it faster. 3) Compiling and run-time elements. What is a call stack? What is a heap? How is memory handled to run a program, e.g. the code pieces and data pieces? How do we open and manipulate files?
What is compiling and linking? What are makefiles? Some of this is simple, but it can also be eye-opening just to see how things work, which may be what the club covers most of the time. These next two are somewhat more challenging but could be fun: 4) Discuss various ideas behind algorithms, such as: 1) Divide and conquer, 2) Dynamic programming, 3) Brute force, 4) Creation of a data structure, 5) Reducing a problem to a similar one already solved (for example, Fibonacci numbers are a classic recursive problem to give beginning programmers), and 6) The idea of being "greedy," as in a making-change example if you were in a country where the coin denominations were a, b, and c. You could also get into some graph theory examples, like a minimum-weight spanning tree if you want something somewhat exotic, or the travelling salesman for something that can be easy to describe but a pain to solve. 5) Mathematical functions. How would you program a factorial, which is the product of all numbers from 1 to n? How would you compute the sums of various arithmetic or geometric series? Or compute the number of combinations or permutations of r elements from a set of n? Given a set of points, approximate the polynomial that meets this requirement, e.g. in a 2-dimensional plane called x and y you could give 2 points and have people figure out what the slope and y-intercept are, if you have covered solving pairs of linear equations already. 6) Lists, which can be implemented using linked lists and arrays. Which is better for various cases? How do you implement basic functions such as insert, delete, find, and sort? 7) Abstract data structures. What are stacks and queues? How do you build and test classes? 8) Pointers. This just leads to a huge number of topics, like how to allocate/de-allocate memory and what a memory leak is. Those are my suggestions for various starting points.
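The Bubblesort mentioned in point 2 above is small enough to write out in full, and it makes a good first target for analysis (a sketch for discussion; each pass bubbles the largest remaining element to the end, which is where the O(n^2) behaviour comes from):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Bubble sort: repeatedly swap adjacent elements that are out of order.
void bubbleSort(std::vector<int>& v) {
    if (v.empty()) return;
    for (std::size_t pass = 0; pass + 1 < v.size(); ++pass) {
        bool swapped = false;
        for (std::size_t i = 0; i + 1 < v.size() - pass; ++i) {
            if (v[i] > v[i + 1]) {
                std::swap(v[i], v[i + 1]);
                swapped = true;
            }
        }
        if (!swapped) break; // nothing moved: already sorted, stop early
    }
}
```

The early-exit flag is a nice first optimisation for a club to discover on its own before moving to faster algorithms.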
I think starting a discussion may lead to some interesting places if you can get a few people together who don't mind talking about the same subject week after week; in some cases, sorting alone may be a huge topic to cover well if you want to get into the finer points of things. A: You guys could build the TinyPIM project from "C++ Standard Library from Scratch" and then, when it's working, start designing your own extensions.
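The factorial and Fibonacci suggestions above are small enough to show the recursive and the iterative mindset side by side; a sketch:

```cpp
#include <cstdint>

// Recursive factorial: mirrors the definition n! = n * (n-1)!
std::uint64_t factorialRec(unsigned n) {
    return n <= 1 ? 1 : n * factorialRec(n - 1);
}

// Iterative Fibonacci: same values as the naive recursion, but in
// linear time - a nice club discussion about why the recursive
// version crawls for large n.
std::uint64_t fib(unsigned n) {
    std::uint64_t a = 0, b = 1;
    for (unsigned i = 0; i < n; ++i) {
        std::uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}
```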
{ "language": "en", "url": "https://stackoverflow.com/questions/164048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Should log file streams be opened/closed on each write or kept open during a desktop application's lifetime? Should log classes open/close a log file stream on each write to the log file, or should they keep the log file stream open throughout the application's lifetime until all logging is complete? I'm asking in the context of a desktop application. I have seen people do it both ways and was wondering which approach yields the best all-around results for a logger. A: For performance, keep open. For safety, flush often. This will mean that the run-time library will not try to buffer writes until it has lots of data -- you may crash before that's written! A: I would tend to leave them open -- but open them with the file share permissions set to allow other readers, and make sure you flush log output with every message. I hate programs which don't even let you look at the logfile while they are running, or where the log file isn't flushed and lags behind what is happening. A: It's generally better to keep them open. If you're concerned about being able to read them from another process, you need to make sure that the share mode you use to open/create them allows others to read them (but not write to them, obviously). If you're worried about losing data in the event of a crash, you should periodically flush/commit their buffers. A: It's a tradeoff. Opening and closing the file each time makes it more likely that the file will be updated on disk if the program crashes. On the other hand, there's some overhead involved in opening the file, seeking to the end, and appending data to it. On Windows, you won't be able to move/rename/delete the file while it's open, so open/write/close might be helpful for a long-running process where you might occasionally want to archive the old log contents without interrupting the writer.
In most of the cases where I've done this sort of logging, I've kept the file open, and used fflush() to make it more likely the file was up-to-date if the program crashed. A: If you have frequent reads/writes, it is more efficient to keep the file open for the lifetime, with a single open/close. You might want to flush periodically, or after each write, though. If your application crashes, you might not have all the data written to your file. Use fflush on Unix-based systems and FlushFileBuffers on Windows. If you are running on Windows you can also use the CreateFile API with FILE_FLAG_NO_BUFFERING to go directly to the file on each write. It is also better to keep the file open for the lifetime, because each time you open/close you might have a failure if the file is in use. For example, you might have a backup application that runs and opens/closes your file as it's backing it up. And this might cause your program to be unable to access its own file. Ideally, you would want to keep your file open always and specify sharing flags on Windows (FILE_SHARE_READ). On Unix-based systems, sharing is the default. A: In general, as everyone else said, keep the file open for performance (open is a relatively slow operation). However, you need to think about what's going to happen if you keep the file open and people either remove the log file or truncate it. And that depends on the flags used at open time. (I'm addressing Unix - similar considerations probably apply to Windows, but I'll accept correction by those more knowledgeable than me). If someone sees the log file grow to, say, 1 MiB and then removes it, the application will be none the wiser, and Unix will keep the log data safe until the log is closed by the application. What's more, the users will be confused because they probably created a new log file with the same name as the old one and are puzzled about why the application 'stopped logging'.
Of course, it didn't; it is just logging to the old file that no-one else can get at. If someone notices that the log file has grown to, say, 1 MiB and then truncates it, the application will also be none the wiser. Depending on how the log file was opened, though, you might get weird results. If the file was not opened with O_APPEND (POSIX-speak), then the program will continue to write at its current offset in the log file, and the first 1 MiB of the file will appear as a stream of zero bytes — which is apt to confuse programs looking at the file. How to avoid these problems? * *Open the log file with O_APPEND. *Periodically use fstat() on the file descriptor and check whether st_nlink is zero. If the link count goes to zero, somebody removed your log file. Time to close it, and reopen a new one. By comparison with stat() or open(), fstat() should be quick; it is basically copying information directly out of stuff that is already in memory, no name lookup needed. So, you should probably do that every time you are about to write. Suggestions: * *Make sure there is a mechanism to tell the program to switch logs. *Make sure you log the complete date and time in the messages. I suffer from an application that puts out the time and not the date. Earlier today, I had a message file that had some entries from 17th August (one of the messages accidentally included the date in the message after the time), and then some entries from today, but I can only tell that because I created them. If I looked at the log file in a week's time, I could not tell which day they were created (though I would know the time when they were created). That sort of thing is annoying. You might also look at what systems such as Apache do — they have mechanisms for handling log files and there are tools for dealing with log rotation.
Note: if the application does keep a single file open, does not use append mode, and does not plan for log rotation or size limits, then there's not much you can do about log files growing or having hunks of zeroes at the start — other than restarting the application periodically. You should make sure that all writes to the log complete as soon as possible. If you use file descriptors, there is only kernel buffering in the way; this may well be acceptable, but consider the O_SYNC or O_DSYNC options to open(). If you use file stream I/O, ensure that each write is followed by fflush(). If you have a multi-threaded application, ensure that each write() contains a complete message; do not try to write parts of a message separately. With file stream I/O, you may need to use flockfile() and relatives to group operations together. With file descriptor I/O, you may be able to use dprintf() to do formatted I/O to a file descriptor (though it is not absolutely clear that dprintf() makes a single call to write()), or perhaps writev() to write separate segments of data in a single operation. Incidentally, the disk blocks that 'contain' the zeroes are not actually allocated on disk. You can really screw up people's backup strategies by creating files which are a few GiB each, but where all except the very last disk block contain just zeroes. Basically (error checking and file name generation omitted for conciseness): int fd = open("/some/file", O_WRONLY|O_CREAT|O_TRUNC, 0444); lseek(fd, 1024L * 1024L * 1024L, SEEK_SET); write(fd, "hi", 2); close(fd); This occupies one disk block on the disk - but 1 GiB (and change) on (uncompressed) backup and 1 GiB (and change) when restored. Anti-social, but possible. A: Open and close. Can save you from a corrupt file in case of a system crash. A: I don't see any reason to close it. On the other hand, closing and reopening takes a little extra time.
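A minimal sketch of the keep-it-open strategy combined with the fstat()/st_nlink check described above (POSIX; the function names log_open and log_msg are illustrative, not from any of the answers):

```cpp
#include <cstring>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

static int log_fd = -1;

// Open (or reopen) the log with O_APPEND, as recommended above, so
// truncation and concurrent writers behave sanely.
int log_open(const char* path) {
    log_fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    return log_fd;
}

// One complete message per write() call; reopen first if the file
// was removed out from under us (link count dropped to zero).
void log_msg(const char* path, const char* msg) {
    struct stat sb;
    if (log_fd < 0 || fstat(log_fd, &sb) != 0 || sb.st_nlink == 0) {
        if (log_fd >= 0) close(log_fd);
        log_open(path);  // somebody removed the log: start a new one
    }
    (void)write(log_fd, msg, strlen(msg));
}
```

Because the check is an fstat() on an already-open descriptor, doing it before every write stays cheap.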
A: I can think of a couple of reasons you don't want to hold the file open: * *If a log file is shared between several different apps, users, or app instances, you could have locking issues. *If you're not clearing the stream buffer correctly, you could lose the last few entries when the app crashes, and you need them most. On the other hand, opening files can be slow, even in append mode. In the end, it comes down to what your app is doing. A: The advantage of closing the file every time is that the OS will guarantee that the new message is written to disk. If you leave the file open and your program crashes, it is possible the entire thing wouldn't be written. You could also accomplish the same thing by doing an fflush() or whatever the equivalent is in the language you are using. A: As a user of your application, I'd prefer it not to hold files open unless it's a real requirement of the app. Just one more thing that can go wrong in the event of a system crash, etc. A: I would open and close on each write (or batch of writes). If doing this causes a performance problem in a desktop application, it's possible you're writing to the log file too often (although I'm sure there can be legitimate reasons for lots of writes). A: For large intensive applications, what I usually do is keep the log file open for the duration of the application and have a separate thread that periodically flushes log content in memory to the HDD. File open and close operations require system calls, which is a lot of work if you look at the lower levels.
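The deferred-flush idea in the last answer can be sketched like this (a minimal version with a manual flush() to keep it short; a real implementation would call flush() from a background thread on a timer, and the class name here is my own):

```cpp
#include <fstream>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// Buffered logger: log() only appends to an in-memory backlog, so it
// is cheap; flush() batches the pending lines into a single
// open/write/close cycle against the log file.
class BufferedLogger {
public:
    explicit BufferedLogger(std::string path) : path_(std::move(path)) {}

    void log(const std::string& line) {
        std::lock_guard<std::mutex> g(m_);
        pending_.push_back(line);
    }

    void flush() {
        std::vector<std::string> batch;
        {
            std::lock_guard<std::mutex> g(m_);
            batch.swap(pending_); // grab the backlog, release the lock fast
        }
        std::ofstream out(path_, std::ios::app);
        for (const auto& l : batch) out << l << '\n';
    }

private:
    std::string path_;
    std::mutex m_;
    std::vector<std::string> pending_;
};
```

The tradeoff is exactly the one discussed above: writes are fast, but anything still in the backlog is lost if the process dies before the next flush.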
{ "language": "en", "url": "https://stackoverflow.com/questions/164053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Versioning a MySQL database when code base doesn't have an ORM I've been thinking about this problem for a while and have yet to come up with any stable/elegant ideas. I know with MyISAM tables you can get the table def update time, but that's not so true with InnoDB, and I've found it's not even reliable to look at the .frm file for an idea of when the definition might have been modified... never mind whether the dataset has been changed. I had an idea of mysqldumping the contents of a schema every 30 minutes, breaking that apart with an AWK script, then diffing that against the last version... but that seems a little excessive and could be a problem if the dataset involved is large. A: If you run mysqldump -d it only dumps the schema: [gary.richardson@server ~]$ mysqldump -d -u root mysql user -- MySQL dump 10.11 -- -- Host: localhost Database: mysql -- ------------------------------------------------------ -- Server version 5.0.45 /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */; /*!40103 SET TIME_ZONE='+00:00' */; /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */; /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */; /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */; /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */; -- -- Table structure for table `user` -- DROP TABLE IF EXISTS `user`; CREATE TABLE `user` ( `Host` char(60) collate utf8_bin NOT NULL default '', `User` char(16) collate utf8_bin NOT NULL default '', `Password` char(41) character set latin1 collate latin1_bin NOT NULL default '', `Select_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Insert_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Update_priv` enum('N','Y') character set utf8
NOT NULL default 'N', `Delete_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Create_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Drop_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Reload_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Shutdown_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Process_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `File_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Grant_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `References_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Index_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Alter_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Show_db_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Super_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Create_tmp_table_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Lock_tables_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Execute_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Repl_slave_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Repl_client_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Create_view_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Show_view_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Create_routine_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Alter_routine_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `Create_user_priv` enum('N','Y') character set utf8 NOT NULL default 'N', `ssl_type` enum('','ANY','X509','SPECIFIED') character set utf8 NOT NULL default '', `ssl_cipher` blob NOT NULL, `x509_issuer` blob NOT NULL, `x509_subject` blob NOT NULL, `max_questions` int(11) unsigned NOT NULL default '0', `max_updates` int(11) unsigned NOT NULL default '0', `max_connections` int(11) unsigned NOT NULL default '0', 
`max_user_connections` int(11) unsigned NOT NULL default '0', PRIMARY KEY (`Host`,`User`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='Users and global privileges'; /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */; /*!40101 SET SQL_MODE=@OLD_SQL_MODE */; /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */; /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */; /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; -- Dump completed on 2008-10-02 20:06:38 Then you could do your parsing. There's another solution to your problem, but it takes discipline. You can add a COMMENT to columns and tables: CREATE TABLE example ( name varchar(32) COMMENT 'Name of a person' ) COMMENT='example table'; I like to put a version number in there. You can tie that into your RCS: CREATE TABLE example ( name varchar(32) COMMENT 'Name of a person' ) COMMENT='VERSION=1.2.3 example table'; A: This reminds me of this question: How do you manage database revisions on a medium sized project with branches? but maybe I'm being too general... http://odetocode.com/Blogs/scott/archive/2008/01/30/11702.aspx The codebase I'm currently working on does not have an ORM, yet we still use the solution based on the blog above. It works. A: Yeah, it's tough. That's why I use InnoDB. It's easier to do dump/import; we even put the schemas under VC.
{ "language": "en", "url": "https://stackoverflow.com/questions/164073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Javascript callback when IFRAME is finished loading? I need to execute a callback when an IFRAME has finished loading. I have no control over the content in the IFRAME, so I can't fire the callback from there. This IFRAME is programmatically created, and I need to pass its data as a variable in the callback, as well as destroy the iframe. Any ideas? EDIT: Here is what I have now: function xssRequest(url, callback) { var iFrameObj = document.createElement('IFRAME'); iFrameObj.src = url; document.body.appendChild(iFrameObj); $(iFrameObj).load(function() { document.body.removeChild(iFrameObj); callback(iFrameObj.innerHTML); }); } This calls back before the iFrame has loaded, so the callback has no data returned. A: I have had to do this in cases where documents such as Word docs and PDFs were being streamed to the iframe and found a solution that works pretty well. The key is handling the onreadystatechange event on the iframe. Let's say the name of your frame is "myIframe". First, somewhere in your code startup (I do it inline anywhere after the iframe) add something like this to register the event handler: document.getElementById('myIframe').onreadystatechange = MyIframeReadyStateChanged; I was not able to use an onreadystatechange attribute on the iframe; I can't remember why, but the app had to work in IE 7 and Safari 3, so that may have been a factor. Here is an example of how to get the complete state: function MyIframeReadyStateChanged() { if(document.getElementById('myIframe').readyState == 'complete') { // Do your complete stuff here. } } A: The innerHTML of your iframe is blank because your iframe tag doesn't surround any content in the parent document. In order to get the content from the page referred to by the iframe's src attribute, you need to access the iframe's contentDocument property. An exception will be thrown if the src is from a different domain though.
This is a security feature that prevents you from executing arbitrary JavaScript on someone else's page, which would create a cross-site scripting vulnerability. Here is some example code that illustrates what I'm talking about: <script src="http://prototypejs.org/assets/2009/8/31/prototype.js" type="text/javascript"></script> <h1>Parent</h1> <script type="text/javascript"> function on_load(iframe) { try { // Displays the first 50 chars in the innerHTML of the // body of the page that the iframe is showing. // EDIT 2012-04-17: for wider support, fallback to contentWindow.document var doc = iframe.contentDocument || iframe.contentWindow.document; alert(doc.body.innerHTML.substring(0, 50)); } catch (e) { // This can happen if the src of the iframe is // on another domain alert('exception: ' + e); } } </script> <iframe id="child" src="iframe_content.html" onload="on_load(this)"></iframe> To further the example, try using this as the content of the iframe: <h1>Child</h1> <a href="http://www.google.com/">Google</a> <p>Use the preceding link to change the src of the iframe to see what happens when the src domain is different from that of the parent page</p> A: I wanted to hide the waiting spinner div when the iframe content was fully loaded in IE. I tried literally every solution mentioned on StackOverflow.com, but nothing worked as I wanted. Then I had an idea: when the iframe content is fully loaded, the $(window) load event might be fired. And that's exactly what happened. So, I wrote this small script, and it worked like magic: $(window).load(function () { //alert("Done window ready "); var lblWait = document.getElementById("lblWait"); if (lblWait != null) { lblWait.style.visibility = "hidden"; document.getElementById("divWait").style.display = "none"; } }); Hope this helps. A: This function will run your callback function immediately if the iFrame is already loaded or wait until the iFrame is completely loaded.
This also addresses the following issues: * *Chrome initializes every iFrame with an about:blank page, which will have readyState == "complete". Later, it will replace about:blank with the actual iframe src value. So, the initial value of readyState will not represent the readyState of your actual iFrame. Therefore, besides checking the readyState value, this function also addresses the about:blank issue. *The DOMContentLoaded event doesn't work with iFrames, so it uses the load event for running the callback function if the iFrame isn't already loaded. The load event is equivalent to readyState == "complete", which has been used to check whether the iFrame is already loaded. So, in any scenario, the callback function will run after the iFrame is fully loaded. *The iFrame src can have redirects and therefore load a page different from the original src URL. This function will also work in that scenario. Pass the callback function that you want to run when the iFrame finishes loading, and the <iframe> element, to this function: function iframeReady(callback, iframeElement) { const iframeWindow = iframeElement.contentWindow; if ((iframeElement.src == "about:blank" || (iframeElement.src != "about:blank" && iframeWindow.location.href != "about:blank")) && iframeWindow.document.readyState == "complete") { callback(); } else { iframeWindow.addEventListener("load", callback); } } A: First up, going by the function name xssRequest, it sounds like you're trying a cross-site request - and if that's right, you're not going to be able to read the contents of the iframe.
On the other hand, if the iframe's URL is on your domain you can access the body, and I've found that if I use a timeout to remove the iframe, the callback works fine: // possibly excessive use of jQuery - but I've got a live working example in production $('#myUniqueID').load(function () { if (typeof callback == 'function') { callback($('body', this.contentWindow.document).html()); } setTimeout(function () {$('#frameId').remove();}, 50); }); A: I am using jQuery and, surprisingly, this seems to work: I just tested with a heavy page and I didn't get the alert until a few seconds later, when I saw the iframe load: $('#the_iframe').load(function(){ alert('loaded!'); }); So if you don't want to use jQuery, take a look at their source code and see if this function behaves differently with iframe DOM elements. I will look at it myself later, as I am interested, and post here. Also, I only tested in the latest Chrome. A: I had a similar problem to yours. What I did is use something called jQuery. What you then do in the javascript code is this: $(function(){ //this is regular jQuery code. It waits for the DOM to load fully the first time you open the page. $("#myIframeId").load(function(){ callback($("#myIframeId").html()); $("#myIframeId").remove(); }); }); It seems you delete your iFrame before you grab the HTML from it. Now, I do see a problem with that :p Hope this helps :). A: I have similar code in my projects that works fine. Adapting my code to your function, a solution could be the following: function xssRequest(url, callback) { var iFrameObj = document.createElement('IFRAME'); iFrameObj.id = 'myUniqueID'; document.body.appendChild(iFrameObj); iFrameObj.src = url; $(iFrameObj).load(function() { callback(window['myUniqueID'].document.body.innerHTML); document.body.removeChild(iFrameObj); }); } Maybe you have an empty innerHTML because (one or both causes): 1. you should use it against the body element 2.
you have removed the iframe from your page's DOM A: I think the load event is right. What is not right is the way you retrieve the content from the iframe's content DOM. What you need is the HTML of the page loaded in the iframe, not the HTML of the iframe object. What you have to do is access the content document with iFrameObj.contentDocument. This returns the DOM of the page loaded inside the iframe, if it is on the same domain as the current page. I would retrieve the content before removing the iframe. I've tested in Firefox and Opera. Then I think you can retrieve your data with $(childDom).html() or $(childDom).find('some selector') ... A: I've had exactly the same problem in the past and the only way I found to fix it was to add the callback into the iframe page. Of course, that only works when you have control over the iframe content. A: Using the onload attribute will solve your problem. Here is an example. function a() { alert("Your iframe has been loaded"); } <iframe src="https://stackoverflow.com" onload="a()"></iframe> Is this what you want? Click here for more information.
{ "language": "en", "url": "https://stackoverflow.com/questions/164085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "156" }
Q: Make your collections thread-safe? When designing a collection class, is there any reason not to implement locking privately to make it thread safe? Or should I leave that responsibility up to the consumer of the collection? A: Thread safe collections can be deceiving. Jared Par posted a couple of interesting articles about thread safe collections: The problem is there are several levels of thread safe collections. I find that when most people say thread safe collection what they really mean is “a collection that will not be corrupted when modified and accessed from multiple threads” ... But if building a data thread safe list is so easy, why doesn’t Microsoft add these standard collections in the framework? Answer: ThreadSafeList is a virtually unusable class because the design leads you down the path to bad code. The flaws in this design are not apparent until you examine how lists are commonly used. For example, take the following code which attempts to grab the first element out of the list if there is one. static int GetFirstOrDefault(ThreadSafeList<int> list) { if (list.Count > 0) { return list[0]; } return 0; } This code is a classic race condition. Consider the case where there is only one element in the list. If another thread removes that element in between the if statement and the return statement, the return statement will throw an exception because it’s trying to access an invalid index in the list. Even though ThreadSafeList is data thread safe, there is nothing guaranteeing the validity of a return value of one call across the next call to the same object. http://blogs.msdn.com/b/jaredpar/archive/2009/02/11/why-are-thread-safe-collections-so-hard.aspx http://blogs.msdn.com/b/jaredpar/archive/2009/02/16/a-more-usable-thread-safe-collection.aspx A: Collection classes need to be as fast as possible. Hence leave the locks out. The calling code will know where the locks best lie; the collection class doesn't.
In the worst case scenario the app will have to add an additional lock, meaning that two locks occur, making it double the perf hit. A: I personally would leave it up to the consumers. It will make your collection class more generic. A: Just be clear in your documentation that you are not making it thread safe and leave it out, or if, for your application, you want it thread safe, make it thread safe and note that in your documentation for it. The only rule is to document it. Other than that, make your class for you and if other people want to use it, they can. A: If I'm looking for a collection class and I need thread safe capabilities and your class doesn't have them, I'm immediately going to skip to the next offering out there to see what they provide. Your collection won't get any more of my attention. Note the "If" at the beginning. Some customers will want it, some will not, and some won't care. If you're going to build a tool-kit for consumers, then why not offer both varieties? That way I can choose which one to use, but if I want thread-safe you still have my attention and I don't have to write it myself. A: Making the collection threadsafe is what killed Java's Vector and Hashtable classes. It is far easier for a client to wrap it in a threadsafe wrapper, as previously suggested, or to synchronize data access on the subset of methods, than to take a synchronization hit every time the class is accessed. Hardly anyone uses Vector or Hashtable, and if they do, they get laughed at, because their replacements (ArrayList and HashMap) are worlds faster. Which is unfortunate, as I (coming from a C++ background) much prefer the "Vector" name (STL), but ArrayList is here to stay. A: is there any reason not to implement locking privately to make it thread safe? It depends. Is your goal to write a collection class which is accessed by multiple threads? If so, make it thread safe. If not, don't waste your time.
This kind of thing is what people refer to when they talk about 'premature optimization'. Solve the problems that you have. Don't try to solve future problems that you think you may have some years in the future, because you can't see the future, and you'll invariably be wrong. Note: You still need to write your code in a maintainable way, such that if you did need to come along and add locking to the collection, it wouldn't be terribly hard. My point is "don't implement features that you don't need and won't use" A: For Java, you should leave collections unsynchronized for speed. The consumer of the collection can wrap it in a synchronization wrapper if desired. A: The primary reason not to make it thread safe is performance. Thread safe code can be 100s of times slower than non-safe code, so if your client doesn't want the feature, that's a pretty big waste. A: Note that if you attempt to make any class thread-safe you need to decide on common usage scenarios. For instance, in the case of a collection, just making all the properties and methods individually thread-safe might not be good enough for a consumer, as reading first the count, and then looping, or similar, would not do much good if the count changed after reading it. A: Basically, design your collection as thread-safe, with locking implemented in two methods of your class: lock() and unlock(). Call them anywhere needed, but leave them empty. Then subclass your collection implementing the lock() and unlock() methods. Two classes for the price of one. A: A really good reason to NOT make your collection thread-safe is for improved single-thread performance. Example: ArrayList over Vector. Deferring thread-safety to the caller allows the unsynchronized use case to optimize by avoiding locking. A really good reason to make your collection thread-safe is for improved multi-threaded performance. Example: ConcurrentHashMap over HashMap.
Because CHM internalizes the multi-threaded concerns, it can stripe locking for greater concurrent access more effectively than external synchronization. A: This would make it impossible to simultaneously access a collection from several threads even if you know that the element you touch is not used by anyone else. An example would be a collection with an integer based index accessor. Each thread might know from its id which index values it can access without worrying about dirty reads/writes. Another case where you would get an unnecessary performance hit would be when data is only read from the collection and not written to. A: I agree that leaving it up to the consumer is the right approach. It provides the consumer much more flexibility as to whether the Collection instance itself or a different object is synchronized on. For example, if you had two lists that both needed to be updated it might make sense to have them in a single synchronized block using a single lock. A: If you make a collection class, do not make it thread safe. It's quite hard to do right (i.e. correct and fast), and the problems for your consumer when you do it wrong (heisenbugs) are difficult to debug. Instead, implement one of the Collection APIs and use Collections.synchronizedCollection(yourCollectionInstance) to obtain a thread-safe implementation if they need it. Just refer to the appropriate Collections.synchronizedXXX method in your class javadoc; it will make clear that you have considered thread-safety in your design and ensured the consumer has a thread-safe option at his disposal. A: Here's a good start. thread-safe-dictionary But you will notice you lose one of the great features of collections - enumeration. You cannot thread-safe an enumerator, it's just not really feasible, unless you implement your own enumerator which holds an instance lock back to the collection itself. I would suspect this would cause major bottlenecks and potential deadlocks.
A: As of JDK 5 if you need a thread-safe collection I'd first see if one of the already implemented collections in java.util.concurrent would work. As the authors of Java Concurrency In Practice point out (including the guy who wrote most of the classes) implementing these correctly is very difficult, especially if performance is important. Quoting http://download.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html Concurrent Collections Besides Queues, this package supplies Collection implementations designed for use in multithreaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet. When many threads are expected to access a given collection, a ConcurrentHashMap is normally preferable to a synchronized HashMap, and a ConcurrentSkipListMap is normally preferable to a synchronized TreeMap. A CopyOnWriteArrayList is preferable to a synchronized ArrayList when the expected number of reads and traversals greatly outnumber the number of updates to a list.
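To make the race in GetFirstOrDefault concrete, here is a minimal Python sketch (my own illustration, not taken from any of the answers above) of a collection whose individual operations are locked, plus a compound first-or-default operation that holds the same lock across both the check and the read:

```python
import threading

class GuardedList:
    """A list whose *individual* operations are locked -- the level of
    "thread safety" the answers warn is not enough by itself."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def __len__(self):
        with self._lock:
            return len(self._items)

    def __getitem__(self, index):
        with self._lock:
            return self._items[index]

    def first_or_default(self, default=0):
        # The safe version of GetFirstOrDefault: the emptiness check and
        # the read happen under one lock, so no other thread can remove
        # the element in between them.
        with self._lock:
            return self._items[0] if self._items else default

lst = GuardedList()
print(lst.first_or_default())  # 0: empty list, no IndexError
lst.append(42)
print(lst.first_or_default())  # 42
```

The point matches JaredPar's: locking each method is not enough, because the whole check-then-act sequence has to happen under one lock, which is why such compound operations belong inside the collection (or behind an external lock the caller controls).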
{ "language": "en", "url": "https://stackoverflow.com/questions/164088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Serialization of a long array (in C) in a C program I have a long* that I want to serialize (thus converting to chars). A long doesn't fit in a single char, and the size varies depending on the processor (can be 4 bytes or 8 bytes). Is there a good way to do the serialization and de-serialization? A: This is portable, but nowhere near as inefficient as using printf/scanf void longtochar(char *buffer, unsigned long number) { int i; for (i=0; i<sizeof(long); i++) { buffer[i] = number & 0xFF; // place bottom 8 bits in char number = number >> 8; // shift down remaining bits } return; // the long is now stored in the first few (2, 4, or 8) bytes of buffer } And to unpack it again (assuming long is the same size) long chartolong(char *buffer) { long number = 0; int i; for (i=sizeof(long)-1; i>=0; i--) { number = number << 8; // left shift bits in long already number += (unsigned char)buffer[i]; // add in bottom 8 bits (the cast avoids sign extension from a signed char) } return number; } Do note the BIG assumption that long is the same length on both systems. Safe thing to do is #include <stdint.h> and use the types it provides (uint32_t or uint16_t). Also, my code has it as an unsigned long. I don't have access to a C compiler right now, so I can't confirm if it would or would not work with signed integers. If memory serves me, the behavior of it might be undefined (though it might not matter, how I handle it). A: You are likely solving the wrong problem. You should serialize to a fixed size int, using int32_t for instance. You probably want to use this fixed size type throughout your program, or you'll have problems when a 64-bit program can't save to the smaller size (or use int64_t). If you know you'll never have to load 64-bit saves on a 32-bit platform, then don't bother. Just write out sizeof(long) bytes to the file, and read back sizeof(long) bytes. But put a flag early in your data that indicates the source platform to avoid mistakes. A: You don't have to serialize as chars - you can fwrite as longs (to a file).
To serialise to a char array, invest a byte at the beginning to indicate the size of the integer type and the byte order - you will need this later. i.e. char *p = &long_array[0]; To access the long array as char simply cast it - and multiply the length of the array by sizeof(long) to get the size in chars. A simple example illustrates this: #include <stdio.h> int main() { int aaa[10]; int i; char *p; for(i=0;i<sizeof(aaa)/sizeof(aaa[0]);i++) { aaa[i] = i; printf ("setting aaa[%d] = %8x\n",i,aaa[i]); } aaa[9] = 0xaabbccdd; printf ("sizeof aaa (bytes) :%d\n",(int)sizeof(aaa)); printf ("each element of aaa bytes :%d\n",(int)sizeof(aaa[0])); p = (char*) aaa; for(i=0;i<sizeof(aaa);i++) printf ("%d: %8x\n",i,(unsigned char)p[i]); } A: long * longs; // ... int numChars = numLongs * sizeof(long); char* longsAsChars = (char*) longs; char* chars = malloc(numChars); memcpy(chars, longsAsChars, numChars); A: In C you can get the size of a long with sizeof(long) But if your stored long has to be transferable between multiple platforms you should serialize it always as 4 bytes. Larger numbers couldn't be read by the 4-byte processor anyway. A: If you create a char pointer that points to the beginning of the long array, when you increment through the char "array", you'll get 8 bits at a time. Be aware, though, that the long won't be null-terminated (necessarily, it might be), so you need to keep track of where the end of it is. For example: long list[MAX]; char *serial = (char *) list; int chunk = sizeof(long); int k; for(k=0; k<(MAX*chunk); k++){ // do something with the "char" }
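For comparison, the byte-packing logic of longtochar()/chartolong() above can be sketched in Python; the function names and the fixed 4-byte default here are my own choices for illustration, not part of the answers:

```python
# Pack an unsigned value into `size` little-endian bytes and back,
# mirroring longtochar()/chartolong(): peel off the bottom 8 bits at a
# time on the way out, and shift them back in on the way back.

def pack_le(number, size=4):
    buf = []
    for _ in range(size):
        buf.append(number & 0xFF)   # bottom 8 bits
        number >>= 8                # shift down remaining bits
    return bytes(buf)

def unpack_le(buf):
    number = 0
    for b in reversed(buf):
        number = (number << 8) + b  # add bytes back, most significant first
    return number

packed = pack_le(0xAABBCCDD)
print(packed.hex())            # 'ddccbbaa' -- least significant byte first
print(hex(unpack_le(packed)))  # '0xaabbccdd'
```

As with the C version, both sides must agree on the width (the `size` argument), which is the same "BIG assumption" the answer warns about.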
{ "language": "en", "url": "https://stackoverflow.com/questions/164093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Need help improving a Ruby DSL for controlling an Arduino controlled drink dispenser (bar monkey) I'm writing a DSL in Ruby to control an Arduino project I'm working on; Bardino. It's a bar monkey that will be software controlled to serve drinks. The Arduino takes commands via the serial port to tell it what pumps to turn on and for how long. It currently reads a recipe (see below) and prints it back out. The code for serial communications still needs to be worked in, as well as some other ideas that I have mentioned below. This is my first DSL and I'm working off of a previous example, so it's very rough around the edges. Any critiques, code improvements (are there any good references for Ruby DSL best practices or idioms?) or general comments are welcome. I currently have a rough draft of the DSL, so a drink recipe looks like the following (Github link): desc "Simple glass of water" recipe "water" do ingredients( "Water" => 2.ounces ) end This in turn is interpreted and currently results in the following (Github link): [mwilliams@Danzig barduino-tender]$ ruby barduino-tender.rb examples/water.rb Preparing: Simple glass of water Ingredients: Water: 2 ounces This is a good start for the DSL; however, I do think it could be implemented a little bit better. Some ideas I had are below: * *Defining what "ingredients" are available using the name of the ingredient and the pump number that it's connected to. Maybe using a hash? ingredients = {"water" => 1, "vodka" => 2}. This way, when an ingredient is interpreted it can either a) send the pump number over the serial port followed by the number of ounces for the Arduino to dispense, b) tell the user that ingredient does not exist and abort so nothing is dispensed, or c) easily have the capability to change or add new ingredients if they're changed. *Making the recipe look less code-like, which is the main purpose of a DSL; maybe build a recipe builder?
Using the available ingredients to prompt the user for a drink name, the ingredients involved and how much? The Github project is here, feel free to fork and make pull requests, or post your code suggestions and examples here for other users to see. And if you're at all curious, the Arduino code, using the Ruby Arduino Development framework, is here. Update I modified and cleaned things up a bit to reflect Orion Edwards' suggestion for a recipe. It now looks like the following. description 'Screwdriver' do serve_in 'Highball Glass' ingredients do 2.ounces :vodka 5.ounces :orange_juice end end I also added a hash (the key being the ingredient and the value the pump number it's hooked up to). I think this is much progress. I'll leave the question open for any further suggestions for now, but will ultimately select Orion's answer. The updated DSL code is here. A: Without looking into implementation details (or your github links), I'd try to write a DSL like this: (stealing from here: http://supercocktails.com/1310/Long-Island-Iced-Tea-) describe "Long Island Iced Tea" do serve_in 'Highball Glass' ingredients do half.ounce.of :vodka half.ounce.of :tequila half.ounce.of :light_rum half.ounce.of :gin 1.dash.of :coca_cola #ignoring lemon peel as how can a robot peel a lemon? end steps do add :vodka, :tequila, :light_rum, :gin stir :gently add :coca_cola end end Hope that helps! A: If you want the recipe to look more natural, why not (from the same recipe Orion Edwards used, thanks!): Recipe for Long Island Iced Tea #1 Ingredients: 1/2 oz Vodka 1/2 oz Tequila 1/2 oz Light Rum 1/2 oz Gin 1 Dash Coca-Cola # ignored Twist of Lemon Peel (or Lime) Then add Treetop to the mix. You could have rules such as: grammar Cocktail rule cocktail title ingredients end rule title 'Recipe for' S text:(.*) EOF end rule ingredients ingredient+ end rule ingredient qty S liquid end # ... end Which the Treetop compiler will transform into a nice Ruby module.
Then: parser = CocktailParser.new r = parser.parse(recipe) A: Orion's DSL looks very nice. The only change I'd possibly suggest from your "updated" code is * *Replace description with recipe. It is a more descriptive term *Since the set of ingredients and actions is fixed, bind the ingredients to variables rather than symbols, i.e. you have vodka = :vodka defined someplace. It is easier to say mix vodka, gin and triple_sec # instead of using :vodka, :gin and :triple_sec. Anyways, that's a minor nit.
{ "language": "en", "url": "https://stackoverflow.com/questions/164095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: In c++, why does the compiler choose the non-const function when the const would work also? For example, suppose I have a class: class Foo { public: std::string& Name() { m_maybe_modified = true; return m_name; } const std::string& Name() const { return m_name; } protected: std::string m_name; bool m_maybe_modified; }; And somewhere else in the code, I have something like this: Foo *a; // Do stuff... std::string name = a->Name(); // <-- chooses the non-const version Does anyone know why the compiler would choose the non-const version in this case? This is a somewhat contrived example, but the actual problem we are trying to solve is periodically auto-saving an object if it has changed, and the pointer must be non-const because it might be changed at some point. A: The compiler does not take into account how you are using the return value in its determination; that's not part of the rules. It doesn't know if you're doing std::string name = b->Name(); or b->Name() = "me"; It has to choose the version that works in both cases. A: You can add a "cName" function that is equivalent to "Name() const". This way you can call the const version of the function without casting to a const object first. This is mostly useful with the new keyword auto in C++0x, which is why they are updating the library to include cbegin(), cend(), crbegin(), crend() to return const_iterator's even if the object is non-const. What you are doing is probably better done by having a setName() function that allows you to change the name rather than returning a reference to the underlying container and then "maybe" it is modified. A: Two answers spring to mind: * *The non-const version is a closer match. *If it called the const overload for the non-const case, then under what circumstances would it ever call the non-const overload? You can get it to use the other overload by casting a to a const Foo *. 
Edit: From C++ Annotations Earlier, in section 2.5.11, the concept of function overloading was introduced. There it was noted that member functions may be overloaded merely by their const attribute. In those cases, the compiler will use the member function matching most closely the const-qualification of the object: A: Because a is not a const pointer. Therefore, a non-const function is a closer match. Here is how you can call the const function: const Foo* b = a; std::string name = b->Name(); If you have both a const and a non-const overload, and want to call the const one on a non-const object, this might be an indication of bad design.
{ "language": "en", "url": "https://stackoverflow.com/questions/164102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Testing onbeforeunload events from Selenium I'm trying to write a Selenium test for a web page that uses an onbeforeunload event to prompt the user before leaving. Selenium doesn't seem to recognize the confirmation dialog that comes up, or to provide a way to hit OK or Cancel. Is there any way to do this? I'm using the Java Selenium driver, if that's relevant. A: You could write a user extension (or just some JavaScript in a storeEval etc.) that tests that window.onbeforeunload is set, and then replaces it with null before continuing on from the page. Ugly, but ought to get you off the page. A: I've just had to do this for an application of mine where the onbeforeunload handler brings up a prompt if a user leaves a page while a document is in an unsaved state. Python code: driver.switch_to.alert.accept() The Java equivalent would be: driver.switchTo().alert().accept(); If the alert does not exist, the code above will fail with a NoAlertPresentException, so there is no need for a separate test to check the existence before accepting the prompt. I'm running Selenium 2.43.0 but I think this has been doable for a while now. In cases where I don't want the prompt to come up at all because that's not what I'm testing, I run custom JavaScript in the browser to set window.onbeforeunload to null before leaving the page. I put this in the test teardown code. A: I faced the same problem with a "beforeunload" event listener. Luminus, a Chrome addon, helped me simply block the event listener in the plugin; that's all. A: When I was confronted with the limited control I had over the browser using Selenium, I turned to the MozLab plugin, which solved my problem, if only for one browser platform.
{ "language": "en", "url": "https://stackoverflow.com/questions/164105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: RSS Item updates I'm working on an RSS feed for a custom tasking system we use, and I'm still wrapping my head around how things should work. What I want to have is a feed for each user that shows tasks assigned to them, and additionally a feed for each task that shows updates for the task. What I want to know right now concerns the user feed. When a case assigned to a user is updated, I currently have code to change the pubDate entry for that item and the lastBuildDate for the channel. I was hoping this would make the item appear as unread in readers so that the user would know to look at the item again, but this seems not to be the case. Should I be changing the guid, even though it's really the same item? What would the side-effects of that be? Is there anything I'm missing? How can I solve this? A: Changing the <pubDate> does indicate that the entry changed, but there is no requirement that a given RSS reader do anything about it. (Strictly speaking, there is no requirement that an RSS reader do anything, but let's remain reasonable.) Some readers do mark updated entries as changed. For example, Bloglines.com can optionally detect changes in the <description> and mark entries as new again in that case. Depending on your reader, changing the <title>, <description>, or <pubDate> might give you the behavior you want. But as GateKiller mentions above, your safest option is to make it an entirely new entry with a new <guid>. While you're at it, you might want to use it as an opportunity to add a direct link or details about the update. Of course, if you're writing both the producer and consumer of the RSS, and your goal is that the feed always contains the full set of assigned tasks, just updating the <pubDate> will work just fine. A: The solution is to also change the GUID, which means including the updated time in it. The GUID provides the uniqueness for each item in the feed, and the item will be marked as unread if you put the date updated in it.
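A minimal Python sketch of the accepted suggestion, folding the update time into the GUID so the same task shows up as a new item after every update. The task fields and the guid format here are hypothetical, invented for the example:

```python
from datetime import datetime, timezone

def item_guid(task_id, updated):
    # Same task, new update time => new guid => readers treat it as unread.
    return "task-{}-{}".format(task_id, updated.strftime("%Y%m%dT%H%M%SZ"))

first = item_guid(17, datetime(2008, 10, 2, 9, 30, tzinfo=timezone.utc))
second = item_guid(17, datetime(2008, 10, 2, 14, 5, tzinfo=timezone.utc))
print(first)            # task-17-20081002T093000Z
print(second)           # task-17-20081002T140500Z
print(first != second)  # True: the update surfaces as a new item
```

Changing <pubDate> alongside the guid is still worth doing, but as the answer notes, it is the new <guid> that reliably makes readers show the item again.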
{ "language": "en", "url": "https://stackoverflow.com/questions/164124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I deploy a Python desktop application? I have started on a personal Python application that runs on the desktop. I am using wxPython as a GUI toolkit. Should there be a demand for this type of application, I would possibly like to commercialize it. I have no knowledge of deploying "real-life" Python applications, though I have used py2exe in the past with varied success. How would I obfuscate the code? Can I somehow deploy only the bytecode? An ideal solution would not jeopardize my intellectual property (source code), would not require a direct installation of Python (though I'm sure it will need to have some embedded interpreter), and would be cross-platform (Windows, Mac, and Linux). Does anyone know of any tools or resources in this area? Thanks. A: Wow, there are a lot of questions in there: * *It is possible to run the bytecode (.pyc) file directly from the Python interpreter, but I haven't seen any bytecode obfuscation tools available. *I'm not aware of any "all in one" deployment solution, but: * *For Windows you could use NSIS (http://nsis.sourceforge.net/Main_Page). The problem here is that while OSX/*nix comes with Python, Windows doesn't. If you're not willing to build a binary with py2exe, I'm not sure what the licensing issues would be surrounding distribution of the Python runtime environment (not to mention the technical ones). *You could package up the OS X distribution using the "bundle" format, and *NIX has its own conventions for installing software-- typically a "make install" script. Hope that was helpful. A: Maybe IronPython can provide something for you? I bet those .exe/.dll files can be pretty locked down. Not sure how such features work on Mono, thus no idea how this works on Linux/OS X... A: You can distribute the compiled Python bytecode (.pyc files) instead of the source. You can't prevent decompilation in Python (or any other language, really).
You could use an obfuscator like pyobfuscate to make it more annoying for competitors to decipher your decompiled source. As Alex Martelli says in this thread, if you want to keep your code a secret, you shouldn't run it on other people's machines. IIRC, the last time I used cx_Freeze it created a DLL for Windows that removed the necessity for a native Python installation. This is at least worth checking out. A: I have been using py2exe with good success on Windows. The code needs to be modified a bit so that the code analysis picks up all modules needed, but apart from that, it works. As for Linux, there are several important distribution formats: * *DEB (Debian, Ubuntu and other derivatives) *RPM (RedHat, Fedora, openSUSE) DEBs aren't particularly difficult to make, especially when you're already using distutils/setuptools. Some hints are given in the policy document; examples for packaging Python applications can be found in the repository. I don't have any experience with RPM, but I'm sure there are enough examples to be found. A: Try the scraZ obfuscator (http://scraZ.me). This is an obfuscator for bytecode, not for source code. The free version has good, but not perfect, obfuscation methods. The PRO version has very strong protection for bytecode (after bytecode obfuscation, decompilation is impossible).
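As a small demonstration of the "deploy only the bytecode" option mentioned above, the following sketch compiles a module to a .pyc, deletes the source, and imports from the bytecode alone. The module name and paths are made up for the example, and as the answers note, a .pyc is still straightforward to decompile, so this hides nothing from a determined reader:

```python
import importlib.util
import os
import py_compile
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "secretlogic.py")
with open(src, "w") as f:
    f.write("def answer():\n    return 42\n")

# Compile to bytecode, then ship only the .pyc, not the source.
pyc = py_compile.compile(src, cfile=os.path.join(workdir, "secretlogic.pyc"))
os.remove(src)

# importlib picks a sourceless loader for the .pyc extension.
spec = importlib.util.spec_from_file_location("secretlogic", pyc)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
print(mod.answer())  # 42
```

The bytecode is also version-specific: a .pyc produced by one Python release will generally not load on another, which is one more reason bundlers like py2exe and cx_Freeze embed a matching interpreter.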
{ "language": "en", "url": "https://stackoverflow.com/questions/164137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: registers vs stacks What exactly are the advantages and disadvantages to using a register-based virtual machine versus using a stack-based virtual machine? To me, it would seem as though a register based machine would be more straight-forward to program and more efficient. So why is it that the JVM, the CLR, and the Python VM are all stack-based? A: How many registers do you need? I'll probably need at least one more than that. A: Implemented in hardware, a register-based machine is going to be more efficient simply because there are fewer accesses to the slower RAM. In software, however, even a register based architecture will most likely have the "registers" in RAM. A stack based machine is going to be just as efficient in that case. In addition a stack-based VM is going to make it a lot easier to write compilers. You don't have to deal with register allocation strategies. You have, essentially, an unlimited number of registers to work with. Update: I wrote this answer assuming an interpreted VM. It may not hold true for a JIT compiled VM. I ran across this paper which seems to indicate that a JIT compiled VM may be more efficient using a register architecture. A: This has already been answered, to a certain level, in the Parrot VM's FAQ and associated documents: A Parrot Overview The relevant text from that doc is this: the Parrot VM will have a register architecture, rather than a stack architecture. It will also have extremely low-level operations, more similar to Java's than the medium-level ops of Perl and Python and the like. The reasoning for this decision is primarily that by resembling the underlying hardware to some extent, it's possible to compile down Parrot bytecode to efficient native machine language. Moreover, many programs in high-level languages consist of nested function and method calls, sometimes with lexical variables to hold intermediate results. 
Under non-JIT settings, a stack-based VM will be popping and then pushing the same operands many times, while a register-based VM will simply allocate the right amount of registers and operate on them, which can significantly reduce the amount of operations and CPU time. You may also want to read this: Registers vs stacks for interpreter design Quoting it a bit: There is no real doubt, it's easier to generate code for a stack machine. Most freshman compiler students can do that. Generating code for a register machine is a bit tougher, unless you're treating it as a stack machine with an accumulator. (Which is doable, albeit somewhat less than ideal from a performance standpoint) Simplicity of targeting isn't that big a deal, at least not for me, in part because so few people are actually going to directly target it--I mean, come on, how many people do you know who actually try to write a compiler for something anyone would ever care about? The numbers are small. The other issue there is that many of the folks with compiler knowledge already are comfortable targeting register machines, as that's what all hardware CPUs in common use are. A: Stack based VM's are simpler and the code is much more compact. As a real world example, a friend built (about 30 years ago) a data logging system with a homebrew Forth VM on a Cosmac. The Forth VM was 30 bytes of code on a machine with 2k of ROM and 256 bytes of RAM. A: It is not obvious to me that a "register-based" virtual machine would be "more straight-forward to program" or "more efficient". Perhaps you are thinking that the virtual registers would provide a short-cut during the JIT compilation phase? This would certainly not be the case, since the real processor may have more or fewer registers than the VM, and those registers may be used in different ways. (Example: values that are going to be decremented are best placed in the ECX register on x86 processors.) 
If the real machine has more registers than the VM, then you're wasting resources; if it has fewer, you've gained nothing using "register-based" programming. A: Traditionally, virtual machine implementors have favored stack-based architectures over register-based ones for the simplicity of VM implementation, the ease of writing a compiler back-end (most VMs are originally designed to host a single language), and code density (executables for a stack architecture are invariably smaller than executables for register architectures). The simplicity and code density come at a cost in performance. Studies have shown that a register-based architecture requires an average of 47% fewer executed VM instructions than a stack-based architecture, and the register code is 25% larger than the corresponding stack code, but this increased cost of fetching more VM instructions due to larger code size involves only 1.07% extra real machine loads per VM instruction, which is negligible. The overall performance of the register-based VM is that it takes, on average, 32.3% less time to execute standard benchmarks. A: Stack based VMs are easier to generate code for. Register based VMs are easier to create fast implementations for, and easier to generate highly optimized code for. For your first attempt, I recommend starting with a stack based VM. A: One reason for building stack-based VMs is that the actual VM opcodes can be smaller and simpler (no need to encode/decode operands). This makes the generated code smaller, and also makes the VM code simpler.
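The instruction-count trade-off the answers describe can be illustrated with a toy interpreter pair in Python (entirely my own sketch, not taken from the cited papers): computing (a + b) * c, the stack VM dispatches more, simpler opcodes, while the register VM dispatches fewer, wider ones:

```python
def run_stack(code, env):
    # Stack VM: operands are implicit (top of stack), opcodes are tiny.
    stack, count = [], 0
    for op, *args in code:
        count += 1
        if op == "push":
            stack.append(env.get(args[0], args[0]))
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop(), count

def run_reg(code, regs):
    # Register VM: each opcode names its destination and both sources.
    count = 0
    for op, dst, a, b in code:
        count += 1
        if op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
    return regs["r0"], count

env = {"a": 2, "b": 3, "c": 4}
stack_code = [("push", "a"), ("push", "b"), ("add",), ("push", "c"), ("mul",)]
reg_code = [("add", "r0", "a", "b"), ("mul", "r0", "r0", "c")]
print(run_stack(stack_code, env))          # (20, 5): five opcodes dispatched
print(run_reg(reg_code, dict(env, r0=0)))  # (20, 2): two, but each is wider
```

Fewer dispatches means less interpreter overhead per expression, which is the effect behind the "47% fewer executed VM instructions" figure quoted above; the wider encoding is the corresponding code-size cost.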
{ "language": "en", "url": "https://stackoverflow.com/questions/164143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: Compare two DataTables to determine rows in one but not the other I have two DataTables, A and B, produced from CSV files. I need to be able to check which rows exist in B that do not exist in A. Is there a way to do some sort of query to show the different rows or would I have to iterate through each row on each DataTable to check if they are the same? The latter option seems to be very intensive if the tables become large. A: You can use the Merge and GetChanges methods on the DataTable to do this: A.Merge(B); // this will add to A any records that are in B but not A return A.GetChanges(); // returns records originally only in B A: The answers so far assume that you're simply looking for duplicate primary keys. That's a pretty easy problem - you can use the Merge() method, for instance. But I understand your question to mean that you're looking for duplicate DataRows. (From your description of the problem, with both tables being imported from CSV files, I'd even assume that the original rows didn't have primary key values, and that any primary keys are being assigned via AutoNumber during the import.) The naive implementation (for each row in A, compare its ItemArray with that of each row in B) is indeed going to be computationally expensive. A much less expensive way to do this is with a hashing algorithm. For each DataRow, concatenate the string values of its columns into a single string, and then call GetHashCode() on that string to get an int value. Create a Dictionary<int, DataRow> that contains an entry, keyed on the hash code, for each DataRow in DataTable B. Then, for each DataRow in DataTable A, calculate the hash code, and see if it's contained in the dictionary. If it's not, you know that the DataRow doesn't exist in DataTable B. This approach has two weaknesses that both emerge from the fact that two strings can be unequal but produce the same hash code. 
If you find a row in A whose hash is in the dictionary, you then need to check the DataRow in the dictionary to verify that the two rows are really equal. The second weakness is more serious: it's unlikely, but possible, that two different DataRows in B could hash to the same key value. For this reason, the dictionary should really be a Dictionary<int, List<DataRow>>, and you should perform the check described in the previous paragraph against each DataRow in the list. It takes a fair amount of work to get this working, but it's an O(m+n) algorithm, which I think is going to be as good as it gets. A: Assuming you have an ID column which is of an appropriate type (i.e. gives a hashcode and implements equality) - string in this example, which is slightly pseudocode because I'm not that familiar with DataTables and don't have time to look it all up just now :) IEnumerable<string> idsInA = tableA.AsEnumerable().Select(row => (string)row["ID"]); IEnumerable<string> idsInB = tableB.AsEnumerable().Select(row => (string)row["ID"]); IEnumerable<string> bNotA = idsInB.Except(idsInA); A: would I have to iterate through each row on each DataTable to check if they are the same. Seeing as you've loaded the data from a CSV file, you're not going to have any indexes or anything, so at some point, something is going to have to iterate through every row, whether it be your code, or a library, or whatever. Anyway, this is an algorithms question, which is not my specialty, but my naive approach would be as follows: 1: Can you exploit any properties of the data? Are all the rows in each table unique, and can you sort them both by the same criteria? If so, you can do this: * *Sort both tables by their ID (using some useful thing like a quicksort). If they're already sorted then you win big. *Step through both tables at once, skipping over any gaps in ID's in either table. Matched ID's mean duplicated records. 
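Those two bullet steps (sort both tables, then step through them in lockstep) can be sketched in a few runnable lines. An illustrative Python version, assuming each table reduces to a list of comparable IDs already sorted ascending (the function name is made up for the example):

```python
def only_in_b(a, b):
    """Return the items of sorted list b that do not appear in sorted list a.

    A single merge-style pass: advance a pointer into a while it lags
    behind the current b item, then compare. O(m+n) after sorting.
    """
    result = []
    i = 0
    for item in b:
        while i < len(a) and a[i] < item:
            i += 1  # skip a-items smaller than the current b item
        if i >= len(a) or a[i] != item:
            result.append(item)  # not matched in a, so it exists only in b
    return result
```

Matched IDs are simply skipped; anything in b that the pointer never matches is reported as new.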
This allows you to do it in (sort time * 2) + one pass, so if my big-O-notation is correct, it'd be (whatever-sort-time) + O(m+n) which is pretty good. (Revision: this is the approach that ΤΖΩΤΖΙΟΥ describes ) 2: An alternative approach, which may be more or less efficient depending on how big your data is: * *Run through table 1, and for each row, stick its ID (or computed hashcode, or some other unique ID for that row) into a dictionary (or hashtable if you prefer to call it that). *Run through table 2, and for each row, see if the ID (or hashcode etc) is present in the dictionary. You're exploiting the fact that dictionaries have really fast - O(1), I think - lookup. This step will be really fast, but you'll have paid the price doing all those dictionary inserts. I'd be really interested to see what people with better knowledge of algorithms than myself come up with for this one :-) A: Just FYI: Generally speaking about algorithms, comparing two sets of sortable items (as ids typically are) is not an O(M*N/2) operation, but O(M+N) if the two sets are ordered. So you scan one table with a pointer to the start of the other, and: other_item= A.first() only_in_B= empty_list() for item in B: while other_item < item: other_item= A.next() if A.eof(): only_in_B.add( all the remaining B items ) return only_in_B if item < other_item: only_in_B.append(item) return only_in_B The code above is obviously pseudocode (note that the scan advances A while it is still behind the current B item, and the final return belongs after the loop), but should give you the general gist if you decide to code it yourself. A: Thanks for all the feedback. I do not have any indexes, unfortunately. I will give a little more information about my situation. We have a reporting program (it replaced Crystal Reports) that is installed on 7 servers across the EU. These servers have many reports on them (not all the same for each country). They are invoked by a command-line application that uses XML files for its configuration. So one XML file can call multiple reports.
The command-line application is scheduled and controlled by our overnight process, so the XML file could be called from multiple places. The goal of the CSV is to produce a list of all the reports that are being used and where they are being called from. I am going through the XML files for all references, querying the scheduling program and producing a list of all the reports (this is not too bad). The problem I have is that I have to keep a list of all the reports that might have been removed from production. So I need to compare the old CSV with the new data. For this I thought it best to put it into DataTables and compare the information (this could be the wrong approach; I suppose I could create an object that holds it, compute the difference and then iterate through them). The data I have about each report is as follows: String - Task Name String - Action Name Int - ActionID (the Action ID can be in multiple records as a single action can call many reports, i.e. an XML file). String - XML File called String - Report Name I will try the Merge idea given by MusiGenesis (thanks). (Rereading some of the posts, I am not sure if the Merge will work, but it is worth trying as I have not heard about it before, so something new to learn.) The HashCode idea sounds interesting as well. Thanks for all the advice.
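For records like these, the hash-bucket idea suggested in the earlier answer is a good fit. A hedged Python sketch (rows as tuples of the fields above; the function name and separator character are illustrative, and each bucket is a list so that two different rows which happen to share a hash code are still told apart by a real equality check):

```python
from collections import defaultdict

def rows_only_in_a(table_a, table_b):
    """Return the rows of table_a that do not appear anywhere in table_b.

    Each table is a list of rows; each row is a tuple of column values.
    table_b's rows are bucketed by the hash of their joined string form,
    then each row of table_a probes the buckets. A hash hit is confirmed
    with an equality check, since unequal rows can share a hash code.
    """
    def row_key(row):
        # \x1f (unit separator) keeps ("a", "bc") distinct from ("ab", "c")
        return hash("\x1f".join(str(v) for v in row))

    buckets = defaultdict(list)
    for row in table_b:
        buckets[row_key(row)].append(row)

    missing = []
    for row in table_a:
        if not any(cand == row for cand in buckets.get(row_key(row), [])):
            missing.append(row)
    return missing
```

Passing the old CSV as table_a and the new data as table_b would then list the reports that have disappeared, in O(m+n) overall.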
A: public DataTable compareDataTables(DataTable First, DataTable Second) { First.TableName = "FirstTable"; Second.TableName = "SecondTable"; //Create Empty Table DataTable table = new DataTable("Difference"); DataTable table1 = new DataTable(); try { //Must use a Dataset to make use of a DataRelation object using (DataSet ds4 = new DataSet()) { //Add tables ds4.Tables.AddRange(new DataTable[] { First.Copy(), Second.Copy() }); //Get Columns for DataRelation DataColumn[] firstcolumns = new DataColumn[ds4.Tables[0].Columns.Count]; for (int i = 0; i < firstcolumns.Length; i++) { firstcolumns[i] = ds4.Tables[0].Columns[i]; } DataColumn[] secondcolumns = new DataColumn[ds4.Tables[1].Columns.Count]; for (int i = 0; i < secondcolumns.Length; i++) { secondcolumns[i] = ds4.Tables[1].Columns[i]; } //Create DataRelation DataRelation r = new DataRelation(string.Empty, firstcolumns, secondcolumns, false); ds4.Relations.Add(r); //Create columns for return table for (int i = 0; i < First.Columns.Count; i++) { table.Columns.Add(First.Columns[i].ColumnName, First.Columns[i].DataType); } //If First Row not in Second, Add to return table. table.BeginLoadData(); foreach (DataRow parentrow in ds4.Tables[0].Rows) { DataRow[] childrows = parentrow.GetChildRows(r); if (childrows == null || childrows.Length == 0) table.LoadDataRow(parentrow.ItemArray, true); table1.LoadDataRow(childrows, false); } table.EndLoadData(); } } catch (Exception ex) { Console.WriteLine(ex.Message); } return table; } A: I found an easy way to solve this. Unlike previous "except method" answers, I use the except method twice. This not only tells you what rows were deleted but what rows were added. If you only use one except method - it will only tell you one difference and not both. This code is tested and works. See below //Pass in your two datatables into your method //build the queries based on id. 
var qry1 = datatable1.AsEnumerable().Select(a => new { ID = a["ID"].ToString() }); var qry2 = datatable2.AsEnumerable().Select(b => new { ID = b["ID"].ToString() }); //detect row deletes - a row is in datatable1 except missing from datatable2 var exceptAB = qry1.Except(qry2); //detect row inserts - a row is in datatable2 except missing from datatable1 var exceptAB2 = qry2.Except(qry1); then execute your code against the results if (exceptAB.Any()) { foreach (var id in exceptAB) { //execute code here } } if (exceptAB2.Any()) { foreach (var id in exceptAB2) { //execute code here } } A: Could you not simply compare the CSV files before loading them into DataTables? string[] a = System.IO.File.ReadAllLines(@"cvs_a.txt"); string[] b = System.IO.File.ReadAllLines(@"csv_b.txt"); // get the lines from b that are not in a IEnumerable<string> diff = b.Except(a); //... parse b into DataTable ... A: try { if (ds.Tables[0].Columns.Count == ds1.Tables[0].Columns.Count) { for (int i = 0; i < ds.Tables[0].Rows.Count; i++) { for (int j = 0; j < ds.Tables[0].Columns.Count; j++) { if (ds.Tables[0].Rows[i][j].ToString() == ds1.Tables[0].Rows[i][j].ToString()) { } else { MessageBox.Show(i.ToString() + "," + j.ToString()); } } } } else { MessageBox.Show("Table has different columns "); } } catch (Exception) { MessageBox.Show("Please select The Table"); } A: I'm continuing tzot's idea ... If you have two sortable sets, then you can just use: List<string> diffList = new List<string>(sortedListA.Except(sortedListB)); If you need more complicated objects, you can define a comparator yourself and still use it. A: The usual usage scenario considers a user that has a DataTable in hand and changes it by Adding, Deleting or Modifying some of the DataRows. After the changes are performed, the DataTable is aware of the proper DataRowState for each row, and also keeps track of the Original DataRowVersion for any rows that were changed. 
In this usual scenario, one can Merge the changes back into a source table (in which all rows are Unchanged). After merging, one can get a nice summary of only the changed rows with a call to GetChanges(). In a more unusual scenario, a user has two DataTables with the same schema (or perhaps only the same columns and lacking primary keys). These two DataTables consist of only Unchanged rows. The user may want to find out what changes does he need to apply to one of the two tables in order to get to the other one. That is, which rows need to be Added, Deleted, or Modified. We define here a function called GetDelta() which does the job: using System; using System.Data; using System.Xml; using System.Linq; using System.Collections.Generic; using System.Data.DataSetExtensions; public class Program { private static DataTable GetDelta(DataTable table1, DataTable table2) { // Modified2 : row1 keys match rowOther keys AND row1 does not match row2: IEnumerable<DataRow> modified2 = ( from row1 in table1.AsEnumerable() from row2 in table2.AsEnumerable() where table1.PrimaryKey.Aggregate(true, (boolAggregate, keycol) => boolAggregate & row1[keycol].Equals(row2[keycol.Ordinal])) && !row1.ItemArray.SequenceEqual(row2.ItemArray) select row2); // Modified1 : IEnumerable<DataRow> modified1 = ( from row1 in table1.AsEnumerable() from row2 in table2.AsEnumerable() where table1.PrimaryKey.Aggregate(true, (boolAggregate, keycol) => boolAggregate & row1[keycol].Equals(row2[keycol.Ordinal])) && !row1.ItemArray.SequenceEqual(row2.ItemArray) select row1); // Added : row2 not in table1 AND row2 not in modified2 IEnumerable<DataRow> added = table2.AsEnumerable().Except(modified2, DataRowComparer.Default).Except(table1.AsEnumerable(), DataRowComparer.Default); // Deleted : row1 not in row2 AND row1 not in modified1 IEnumerable<DataRow> deleted = table1.AsEnumerable().Except(modified1, DataRowComparer.Default).Except(table2.AsEnumerable(), DataRowComparer.Default); Console.WriteLine(); 
Console.WriteLine("modified count =" + modified1.Count()); Console.WriteLine("added count =" + added.Count()); Console.WriteLine("deleted count =" + deleted.Count()); DataTable deltas = table1.Clone(); foreach (DataRow row in modified2) { // Match the unmodified version of the row via the PrimaryKey DataRow matchIn1 = modified1.Where(row1 => table1.PrimaryKey.Aggregate(true, (boolAggregate, keycol) => boolAggregate & row1[keycol].Equals(row[keycol.Ordinal]))).First(); DataRow newRow = deltas.NewRow(); // Set the row with the original values foreach(DataColumn dc in deltas.Columns) newRow[dc.ColumnName] = matchIn1[dc.ColumnName]; deltas.Rows.Add(newRow); newRow.AcceptChanges(); // Set the modified values foreach (DataColumn dc in deltas.Columns) newRow[dc.ColumnName] = row[dc.ColumnName]; // At this point newRow.DataRowState should be : Modified } foreach (DataRow row in added) { DataRow newRow = deltas.NewRow(); foreach (DataColumn dc in deltas.Columns) newRow[dc.ColumnName] = row[dc.ColumnName]; deltas.Rows.Add(newRow); // At this point newRow.DataRowState should be : Added } foreach (DataRow row in deleted) { DataRow newRow = deltas.NewRow(); foreach (DataColumn dc in deltas.Columns) newRow[dc.ColumnName] = row[dc.ColumnName]; deltas.Rows.Add(newRow); newRow.AcceptChanges(); newRow.Delete(); // At this point newRow.DataRowState should be : Deleted } return deltas; } private static void DemonstrateGetDelta() { DataTable table1 = new DataTable("Items"); // Add columns DataColumn column1 = new DataColumn("id1", typeof(System.Int32)); DataColumn column2 = new DataColumn("id2", typeof(System.Int32)); DataColumn column3 = new DataColumn("item", typeof(System.Int32)); table1.Columns.Add(column1); table1.Columns.Add(column2); table1.Columns.Add(column3); // Set the primary key column. table1.PrimaryKey = new DataColumn[] { column1, column2 }; // Add some rows. 
DataRow row; for (int i = 0; i <= 4; i++) { row = table1.NewRow(); row["id1"] = i; row["id2"] = i*i; row["item"] = i; table1.Rows.Add(row); } // Accept changes. table1.AcceptChanges(); PrintValues(table1, "table1:"); // Create a second DataTable identical to the first. DataTable table2 = table1.Clone(); // Add a row that exists in table1: row = table2.NewRow(); row["id1"] = 0; row["id2"] = 0; row["item"] = 0; table2.Rows.Add(row); // Modify the values of a row that exists in table1: row = table2.NewRow(); row["id1"] = 1; row["id2"] = 1; row["item"] = 455; table2.Rows.Add(row); // Modify the values of a row that exists in table1: row = table2.NewRow(); row["id1"] = 2; row["id2"] = 4; row["item"] = 555; table2.Rows.Add(row); // Add a row that does not exist in table1: row = table2.NewRow(); row["id1"] = 13; row["id2"] = 169; row["item"] = 655; table2.Rows.Add(row); table2.AcceptChanges(); Console.WriteLine(); PrintValues(table2, "table2:"); DataTable delta = GetDelta(table1,table2); Console.WriteLine(); PrintValues(delta,"delta:"); // Verify that the deltas DataTable contains the adequate Original DataRowVersions: DataTable originals = table1.Clone(); foreach (DataRow drow in delta.Rows) { if (drow.RowState != DataRowState.Added) { DataRow originalRow = originals.NewRow(); foreach (DataColumn dc in originals.Columns) originalRow[dc.ColumnName] = drow[dc.ColumnName, DataRowVersion.Original]; originals.Rows.Add(originalRow); } } originals.AcceptChanges(); Console.WriteLine(); PrintValues(originals,"delta original values:"); } private static void Row_Changed(object sender, DataRowChangeEventArgs e) { Console.WriteLine("Row changed {0}\t{1}", e.Action, e.Row.ItemArray[0]); } private static void PrintValues(DataTable table, string label) { // Display the values in the supplied DataTable: Console.WriteLine(label); foreach (DataRow row in table.Rows) { foreach (DataColumn col in table.Columns) { Console.Write("\t " + row[col, row.RowState == DataRowState.Deleted ? 
DataRowVersion.Original : DataRowVersion.Current].ToString()); } Console.Write("\t DataRowState =" + row.RowState); Console.WriteLine(); } } public static void Main() { DemonstrateGetDelta(); } } The code above can be tested in https://dotnetfiddle.net/. The resulting output is shown below: table1: 0 0 0 DataRowState =Unchanged 1 1 1 DataRowState =Unchanged 2 4 2 DataRowState =Unchanged 3 9 3 DataRowState =Unchanged 4 16 4 DataRowState =Unchanged table2: 0 0 0 DataRowState =Unchanged 1 1 455 DataRowState =Unchanged 2 4 555 DataRowState =Unchanged 13 169 655 DataRowState =Unchanged modified count =2 added count =1 deleted count =2 delta: 1 1 455 DataRowState =Modified 2 4 555 DataRowState =Modified 13 169 655 DataRowState =Added 3 9 3 DataRowState =Deleted 4 16 4 DataRowState =Deleted delta original values: 1 1 1 DataRowState =Unchanged 2 4 2 DataRowState =Unchanged 3 9 3 DataRowState =Unchanged 4 16 4 DataRowState =Unchanged Note that if your tables don't have a PrimaryKey, the where clause in the LINQ queries gets simplified a little bit. I'll let you figure that out on your own. A: Achieve it simply using linq. private DataTable CompareDT(DataTable TableA, DataTable TableB) { DataTable TableC = new DataTable(); try { var idsNotInB = TableA.AsEnumerable().Select(r => r.Field<string>(Keyfield)) .Except(TableB.AsEnumerable().Select(r => r.Field<string>(Keyfield))); TableC = (from row in TableA.AsEnumerable() join id in idsNotInB on row.Field<string>(ddlColumn.SelectedItem.ToString()) equals id select row).CopyToDataTable(); } catch (Exception ex) { lblresult.Text = ex.Message; ex = null; } return TableC; }
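Stripped of the DataSet machinery, the heart of the GetDelta() approach above is a three-way classification keyed on the primary key. A minimal Python sketch under that assumption (tables as lists of dicts; the column names mirror the demonstration data, and the function name is illustrative) reproduces the modified/added/deleted counts shown in the sample output:

```python
def get_delta(table1, table2, key_cols):
    """Classify the differences between two tables of row dicts.

    Rows are matched on the primary-key columns in key_cols:
    - added:    key appears only in table2
    - deleted:  key appears only in table1
    - modified: key appears in both, but the row contents differ
    """
    def pk(row):
        return tuple(row[c] for c in key_cols)

    index1 = {pk(r): r for r in table1}
    index2 = {pk(r): r for r in table2}

    added = [r for k, r in index2.items() if k not in index1]
    deleted = [r for k, r in index1.items() if k not in index2]
    modified = [r for k, r in index2.items() if k in index1 and index1[k] != r]
    return added, deleted, modified
```

With the demonstration rows (table1 holding (i, i*i, i) for i = 0..4, table2 holding the four rows shown), this yields one added, two deleted and two modified rows, matching the printed counts.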
{ "language": "en", "url": "https://stackoverflow.com/questions/164144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Character offset in an Internet Explorer TextRange As far as I can tell there's no simple way of retrieving a character offset from a TextRange object in Internet Explorer. The W3C Range object has a node, and the offset into the text within that node. IE seems to just have pixel offsets. There are methods to create, extend and compare ranges, so it would be possible to write an algorithm to calculate the character offset, but I feel I must be missing something. So, what's the easiest way to calculate the character offset of the start of an Internet Explorer TextRange? A: I use a method based on this caret position trick: // Assume r is a range: var offsetFromBody = Math.abs( r.moveEnd('character', -1000000) ); Since moveEnd returns the number of characters actually moved, offset should now be the offset from the start of the document. This works fine for testing primitive caret movement, but for expanded selections and for getting the exact node that holds the range anchor you'll need something more complex: // where paramter r is a range: function getRangeOffsetIE( r ) { var end = Math.abs( r.duplicate().moveEnd('character', -1000000) ); // find the anchor element's offset var range = r.duplicate(); r.collapse( false ); var parentElm = range.parentElement(); var children = parentElm.getElementsByTagName('*'); for (var i = children.length - 1; i >= 0; i--) { range.moveToElementText( children[i] ); if ( range.inRange(r) ) { parentElm = children[i]; break; } } range.moveToElementText( parentElm ); return end - Math.abs( range.moveStart('character', -1000000) ); } This should return the correct caret text offset. Of course, if you know the target node already, or are able to provide a context, then you can skip the whole looping search mess. A: I'd suggest IERange, or just the TextRange-to-DOM Range algorithm from it. Update, 9 August 2011 I'd now suggest using my own Rangy library, which is similar in idea to IERange but much more fully realized and supported. 
A: I used a slightly simpler solution using the offset values of a textRange: function getIECharOffset() { var offset = 0; // get the user's selection - this handles empty selections var userSelection = document.selection.createRange(); // get a selection from the contents of the parent element var parentSelection = userSelection.parentElement().createTextRange(); // loop - moving the parent selection on a character at a time until the offsets match while (!offsetEqual(parentSelection, userSelection)) { parentSelection.move('character'); offset++; } // return the number of characters you have moved through return offset; } function offsetEqual(arg1, arg2) { if (arg1.offsetLeft == arg2.offsetLeft && arg1.offsetTop == arg2.offsetTop) { return true; } return false; } A: You can iterate through the body element's TextRange.text property using String.substring() to compare against the TextRange for which you want the character offset. function charOffset(textRange, parentTextRange) { var parentTxt = parentTextRange.text; var txt = textRange.text; var parentLen = parentTxt.length; for (var i = 0; i < parentLen; ++i) { if (parentTxt.substring(i, txt.length + i) == txt) { var originalPosition = textRange.getBookmark(); // moves back one and searches backwards for same text textRange.moveStart("character", -1); var foundOther = textRange.findText(textRange.text, -parentLen, 1); // if no others were found return offset if (!foundOther) return i; // returns to original position to try next offset else textRange.moveToBookmark(originalPosition); } } return -1; } [Reference for findText()]
{ "language": "en", "url": "https://stackoverflow.com/questions/164147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a SQL server performance counter for average execution time? I want to tune a production SQL server. After making adjustments (such as changing the degree of parallelism) I want to know if it helped or hurt query execution times. This seems like an obvious performance counter, but for the last half hour I've been searching Google and the counter list in perfmon, and I have not been able to find a performance counter for SQL server to give me the average execution time for all queries hitting a server. The SQL Server equivalent of the ASP.NET Request Execution Time. Does one exist that I'm missing? Is there another effective way of monitoring the average query times for a server? A: I don't believe there is a PerfMon but there is a report within SQL Server Management Studio: Right click on the database, select Reports > Standard Reports > Object Execution Statistics. This will give you several very good statistics about what's running within the database, how long it's taking, how much memory/io processing it takes, etc. You can also run this on the server level across all databases. A: You can use Query Analyzer (which is one of the tools with SQL Server) and see how they are executed internally so you can optimize indexing etc. That wouldn't tell you about the average, or round-trip back to the client. To do that you'd have to log it on the client and analyze the data yourself. A: I managed to do it by saving the Trace to SQL. When the trace is open File > Save As > Trace Table Select the SQL, and once its imported run select avg(duration) from dbo.[YourTableImportName] You can very easily perform other stats, max, min, counts etc... 
A much better way of interrogating the trace result. A: Another solution is to run the query multiple times and take the average execution time: DO $proc$ DECLARE StartTime timestamptz; EndTime timestamptz; Delta double precision; BEGIN StartTime := clock_timestamp(); FOR i IN 1..100 LOOP PERFORM * FROM table_name; END LOOP; EndTime := clock_timestamp(); Delta := 1000 * (extract(epoch FROM EndTime) - extract(epoch FROM StartTime)) / 100; RAISE NOTICE 'Average duration in ms = %', Delta; END; $proc$; Here it runs the query 100 times: PERFORM * FROM table_name; Just replace SELECT with PERFORM. A: Average over what time and for which queries? You need to further define what you mean by "average" or it has no meaning, which is probably why it's not a simple performance counter. You could capture this information by running a trace, capturing that to a table, and then you could slice and dice the execution times in one of many ways. A: It doesn't give exactly what you need, but I'd highly recommend trying the SQL Server 2005 Performance Dashboard Reports, which can be downloaded here. It includes a report of the top 20 queries and their average execution time and a lot of other useful ones as well (top queries by IO, wait stats etc). If you do install it be sure to take note of where it installs and follow the instructions in the Additional Info section. A: The profiler will give you statistics on query execution times and activities on the server. Overall query times may or may not mean very much without tying them to specific jobs and query plans. Other indicators of performance bottlenecks are resource contention counters (general statistics, latches, locks). You can see these through performance counters. Also, looking for a large number of table-scan or other operations that do not make use of indexes can give you an indication that indexing may be necessary.
On a loaded server increasing parallelism is unlikely to materially affect performance as there are already many queries active at any given time. Where parallelism gets you a win is on large infrequently run batch jobs such as ETL processes. If you need to reduce the run-time of such a process then parallelism might be a good place to look. On a busy server doing a transactional workload with many users the system resources will be busy from the workload so parallelism is unlikely to be a big win. A: You can use Activity Monitor. It's built into SSMS. It will give you real-time tracking of all current expensive queries on the server. To open Activity Monitor: * *In Sql Server Management Studio (SSMS), Right click on the server and select Activity Monitor. *Open Recent Expensive Queries to see CPU Usage, Average Query Time, etc. Hope that helps. A: There are counters in 'SQL Server:Batch Resp Statistics' group, which are able to track SQL Batch Response times. Counters are divided based on response time intervals, for example, from 0 ms to 1 ms, ..., from 10 ms to 20 ms, ..., from 1000 ms to 2000 ms and so on, So proper counters can be selected for the desired time interval. Hope it helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/164154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Flex graphic assets: SWF or SWC? Which is a better format to store graphic assets for a Flex application, SWF or SWC? Are there any real differences, and if so what are they? A: SWC is what you use when you're looking for a library to compile into your app. You have access to the classes and can import individual parts. SWF is more likely what you're looking for when embedding graphics. Here's the docs you might be interested in: http://livedocs.adobe.com/flex/3/html/help.html?content=layoutperformance_06.html#223998 I've been having good success with SVG for images, but there's some caveats since Flex only implements a subset of the features. A: I have no real reason for doing this so it may be incorrect but I usually create SWF's for things that need to be loaded during runtime and SWC's for things that need to be available for design time. A: Assets in a seperate SWF are loaded and included at runtime. Assets in a SWC are loaded and included / compiled at compile time. You can also directly embed assets within the main app SWF at compile time (check out the Embed meta data). Of course, you can also load individual assets (such as a PNG) directly at runtime. As far as which is better, it really depends on what you are trying to do, and how the assets are used. mike A: A SWC is simply a SWF and some metadata wrapped into a zip file. Other than the fact that runtime loading of SWC isn't supported, I don't think there are any major differences between using the two formats.
{ "language": "en", "url": "https://stackoverflow.com/questions/164162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Quicksort: Choosing the pivot When implementing Quicksort, one of the things you have to do is to choose a pivot. But when I look at pseudocode like the one below, it is not clear how I should choose the pivot. First element of list? Something else? function quicksort(array) var list less, greater if length(array) ≤ 1 return array select and remove a pivot value pivot from array for each x in array if x ≤ pivot then append x to less else append x to greater return concatenate(quicksort(less), pivot, quicksort(greater)) Can someone help me grasp the concept of choosing a pivot and whether or not different scenarios call for different strategies. A: It depends on your requirements. Choosing a pivot at random makes it harder to create a data set that generates O(N^2) performance. 'Median-of-three' (first, last, middle) is also a way of avoiding problems. Beware of relative performance of comparisons, though; if your comparisons are costly, then Mo3 does more comparisons than choosing (a single pivot value) at random. Database records can be costly to compare. Update: Pulling comments into answer. mdkess asserted: 'Median of 3' is NOT first last middle. Choose three random indexes, and take the middle value of this. The whole point is to make sure that your choice of pivots is not deterministic - if it is, worst case data can be quite easily generated. To which I responded: * *Analysis Of Hoare's Find Algorithm With Median-Of-Three Partition (1997) by P Kirschenhofer, H Prodinger, C Martínez supports your contention (that 'median-of-three' is three random items). *There's an article described at portal.acm.org that is about 'The Worst Case Permutation for Median-of-Three Quicksort' by Hannu Erkiö, published in The Computer Journal, Vol 27, No 3, 1984. [Update 2012-02-26: Got the text for the article. 
Section 2 'The Algorithm' begins: 'By using the median of the first, middle and last elements of A[L:R], efficient partitions into parts of fairly equal sizes can be achieved in most practical situations.' Thus, it is discussing the first-middle-last Mo3 approach.] *Another short article that is interesting is by M. D. McIlroy, "A Killer Adversary for Quicksort", published in Software-Practice and Experience, Vol. 29(0), 1–4 (0 1999). It explains how to make almost any Quicksort behave quadratically. *AT&T Bell Labs Tech Journal, Oct 1984 "Theory and Practice in the Construction of a Working Sort Routine" states "Hoare suggested partitioning around the median of several randomly selected lines. Sedgewick [...] recommended choosing the median of the first [...] last [...] and middle". This indicates that both techniques for 'median-of-three' are known in the literature. (Update 2014-11-23: The article appears to be available at IEEE Xplore or from Wiley — if you have membership or are prepared to pay a fee.) *'Engineering a Sort Function' by J L Bentley and M D McIlroy, published in Software Practice and Experience, Vol 23(11), November 1993, goes into an extensive discussion of the issues, and they chose an adaptive partitioning algorithm based in part on the size of the data set. There is a lot of discussion of trade-offs for various approaches. *A Google search for 'median-of-three' works pretty well for further tracking. Thanks for the information; I had only encountered the deterministic 'median-of-three' before. A: Don't try and get too clever and combine pivoting strategies. 
If you combine median of 3 with a random pivot by picking the median of the first, last and a random index in the middle, then you'll still be vulnerable to many of the distributions which send median of 3 quadratic (so it's actually worse than a plain random pivot). E.g. with a pipe organ distribution (1,2,3...N/2..3,2,1), first and last will both be 1 and the random index will be some number greater than 1; taking the median gives 1 (either first or last) and you get an extremely unbalanced partitioning. A: It is easier to break the quicksort into three sections doing this * *Exchange or swap data element function *The partition function *Processing the partitions It is only slightly more inefficient than one long function but is a lot easier to understand. Code follows: /* This selects what the data type in the array to be sorted is */ #define DATATYPE long /* This is the swap function .. your job is to swap data in x & y .. how depends on data type .. the example works for normal numerical data types ..
like long I chose above */ void swap (DATATYPE *x, DATATYPE *y){ DATATYPE Temp; Temp = *x; // Hold current x value *x = *y; // Transfer y to x *y = Temp; // Set y to the held old x value }; /* This is the partition code */ int partition (DATATYPE list[], int l, int h){ int i; int p; // pivot element index int firsthigh; // divider position for pivot element // Random pivot example shown; for median, p = (l+h)/2 would be used p = l + (short)(rand() % (int)(h - l + 1)); // Random partition point swap(&list[p], &list[h]); // Swap the values firsthigh = l; // Hold first high value for (i = l; i < h; i++) if(list[i] < list[h]) { // Value at i is less than h swap(&list[i], &list[firsthigh]); // So swap the value firsthigh++; // Increment first high } swap(&list[h], &list[firsthigh]); // Swap h and first high values return(firsthigh); // Return first high }; /* Finally the body sort */ void quicksort(DATATYPE list[], int l, int h){ int p; // index of partition if ((h - l) > 0) { p = partition(list, l, h); // Partition list quicksort(list, l, p - 1); // Sort lower partition quicksort(list, p + 1, h); // Sort upper partition }; }; A: Heh, I just taught this class. There are several options. Simple: Pick the first or last element of the range. (bad on partially sorted input) Better: Pick the item in the middle of the range. (better on partially sorted input) However, picking any arbitrary element runs the risk of poorly partitioning the array of size n into two arrays of size 1 and n-1. If you do that often enough, your quicksort runs the risk of becoming O(n^2). One improvement I've seen is to pick median(first, last, mid); In the worst case, it can still go to O(n^2), but probabilistically, this is a rare case. For most data, picking the first or last is sufficient. But, if you find that you're running into worst-case scenarios often (partially sorted input), the first option would be to pick the central value (which is a statistically good pivot for partially sorted data).
If you're still running into problems, then go the median route. A: If you are sorting a random-accessible collection (like an array), it's generally best to pick the physical middle item. With this, if the array is already sorted (or nearly sorted), the two partitions will be close to even, and you'll get the best speed. If you are sorting something with only linear access (like a linked-list), then it's best to choose the first item, because it's the fastest item to access. Here, however, if the list is already sorted, you're screwed -- one partition will always be null, and the other will have everything, producing the worst time. However, for a linked-list, picking anything besides the first will just make matters worse. To pick the middle item in a linked-list, you'd have to step through it on each partition step -- adding an O(N/2) operation which is done log N times, making the total time O(1.5 N log N) -- and that's if we know how long the list is before we start. Usually we don't, so we'd have to step all the way through to count them, then step half-way through to find the middle, then step through a third time to do the actual partition: O(2.5 N log N). A: It is entirely dependent on how your data is sorted to begin with. If you think it will be pseudo-random then your best bet is to either pick a random selection or choose the middle. A: Never ever choose a fixed pivot - this can be attacked to exploit your algorithm's worst-case O(n^2) runtime, which is just asking for trouble. Quicksort's worst-case runtime occurs when partitioning results in one array of 1 element and one array of n-1 elements. Suppose you choose the first element as your partition. If someone feeds an array to your algorithm that is in decreasing order, your first pivot will be the biggest, so everything else in the array will move to the left of it. Then when you recurse, the first element will be the biggest again, so once more you put everything to the left of it, and so on.
A better technique is the median-of-3 method, where you pick three elements at random and choose the middle. You know that the element that you choose won't be the first or the last, but also, by the central limit theorem, the distribution of the middle element will be normal, which means that you will tend towards the middle (and hence, n log(n) time). If you absolutely want to guarantee O(n log(n)) runtime for the algorithm, the columns-of-5 method for finding the median of an array runs in O(n) time, which means that the recurrence equation for quicksort in the worst case will be:

T(n) = O(n) (find the median) + O(n) (partition) + 2T(n/2) (recurse left and right)

By the Master Theorem, this is O(n log(n)). However, the constant factor will be huge, and if worst-case performance is your primary concern, use a merge sort instead, which is only a little bit slower than quicksort on average, and guarantees O(n log(n)) time (and will be much faster than this lame median quicksort). Explanation of the Median of Medians Algorithm A: Choosing a random pivot minimizes the chance that you will encounter worst-case O(n^2) performance (always choosing the first or last would cause worst-case performance for nearly-sorted or nearly-reverse-sorted data). Choosing the middle element would also be acceptable in the majority of cases. Also, if you are implementing this yourself, there are versions of the algorithm that work in-place (i.e. without creating two new lists and then concatenating them). A: Ideally the pivot should be the middle value in the entire array. This will reduce the chances of getting worst-case performance. A: In a truly optimized implementation, the method for choosing the pivot should depend on the array size - for a large array, it pays off to spend more time choosing a good pivot.
Without doing a full analysis, I would guess "middle of O(log(n)) elements" is a good start, and this has the added bonus of not requiring any extra memory: using tail-call on the larger partition and in-place partitioning, we use the same O(log(n)) extra memory at almost every stage of the algorithm. A: Quicksort's complexity varies greatly with the selection of the pivot value. For example, if you always choose the first element as the pivot, the algorithm's complexity becomes as bad as O(n^2). Here is a smart method to choose the pivot element:

1. Choose the first, middle, and last elements of the array.
2. Compare these three numbers and find the one which is greater than one and smaller than the other, i.e. the median.
3. Make this element the pivot element.

Choosing the pivot by this method splits the array into two nearly equal halves, and hence the complexity reduces to O(n log(n)). A: On average, median of 3 is good for small n. Median of 5 is a bit better for larger n. The ninther, which is the "median of three medians of three", is even better for very large n. The higher you go with sampling, the better you get as n increases, but the improvement dramatically slows down as you increase the samples. And you incur the overhead of sampling and sorting the samples. A: I recommend using the middle index, as it can be calculated easily. You can calculate it by rounding (array.length / 2). A: If you choose the first or the last element in the array, then there is a high chance that the pivot is the smallest or the largest element of the array, and that is bad. Why? Because in that case the number of elements smaller / larger than the pivot element is 0, and this will repeat as follows: consider the size of the array n. Then,

(n) + (n - 1) + (n - 2) + ... + 1 = O(n^2)

Hence, the time complexity increases to O(n^2) from O(n log n). So, I highly recommend using the median or a random element of the array as the pivot.
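Several of the answers above describe the median-of-three choice in words. As a hedged illustration in C++ (my own sketch, not taken from any single answer -- the name medianOfThree and the overall structure are invented here), it might look like this:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Return whichever of lo, mid, hi indexes the median of the three values.
// This is one common way to realize the "median-of-3" pivot choice.
std::size_t medianOfThree(const std::vector<int>& a,
                          std::size_t lo, std::size_t hi) {
    std::size_t mid = lo + (hi - lo) / 2;
    if ((a[lo] <= a[mid] && a[mid] <= a[hi]) ||
        (a[hi] <= a[mid] && a[mid] <= a[lo])) return mid;
    if ((a[mid] <= a[lo] && a[lo] <= a[hi]) ||
        (a[hi] <= a[lo] && a[lo] <= a[mid])) return lo;
    return hi;
}

// Lomuto-style quicksort on a[lo..hi], using the median-of-3 pivot.
void quicksort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    if (lo >= hi) return;
    std::size_t p = medianOfThree(a, lo, hi);
    std::swap(a[p], a[hi]);                 // move pivot to the end
    std::size_t store = lo;
    for (std::size_t i = lo; i < hi; ++i)
        if (a[i] < a[hi]) std::swap(a[i], a[store++]);
    std::swap(a[store], a[hi]);             // pivot into its final place
    if (store > lo) quicksort(a, lo, store - 1);  // guard size_t underflow
    quicksort(a, store + 1, hi);
}
```

Picking the median of first, middle, and last avoids the classic worst case on already-sorted input, though as the first post above notes, a determined adversary can still construct quadratic inputs for it.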
{ "language": "en", "url": "https://stackoverflow.com/questions/164163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "130" }
Q: Very poor (non-completing) performance of UNION in SQL Server 2005 Warning: this is the actual code generated from my system:

;WITH RESULTS AS (
SELECT 1174 AS BatchRunID, 'STATINV' AS Program, m.APPL_CD, m.ALBASE, 'CountFocusRecords' AS Measure, COUNT(*) AS Value
FROM [MISWork].[SX_FOCUS_NATIVE_200806] AS m WITH(NOLOCK)
INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)
ON cat.APPL_CD = m.APPL_CD
AND cat.ALBASE = m.ALBASE
AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'
GROUP BY m.APPL_CD, m.ALBASE
UNION
SELECT 1174 AS BatchRunID, 'STATINV' AS Program, c.APPL_CD, c.ALBASE, 'CountBiminiRecords' AS Measure, COUNT(*) AS Value
FROM [MISWork].[SX_STATINV] AS c WITH(NOLOCK)
INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)
ON cat.APPL_CD = c.APPL_CD
AND cat.ALBASE = c.ALBASE
AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'
GROUP BY c.APPL_CD, c.ALBASE
UNION
SELECT 1174 AS BatchRunID, 'STATINV' AS Program, m.APPL_CD, m.ALBASE, 'RecordsInFocusMissingInBimini' AS Measure, COUNT(*) AS Value
FROM [MISWork].[SX_FOCUS_NATIVE_200806] AS m WITH(NOLOCK)
LEFT JOIN [MISWork].[SX_STATINV] AS c WITH(NOLOCK)
ON m.[YEAR] = c.[YEAR]
AND m.[MONTH] = c.[MONTH]
AND m.[BANK_NO] = c.[BANK_NO]
AND m.[COST_CENTER] = c.[COST_CENTER]
AND m.[GLACCOUNT_NO] = c.[GLACCOUNT_NO]
AND m.[CUSTACCOUNT] = c.[CUSTACCOUNT]
AND m.[APPL_CD] = c.[APPL_CD]
AND m.[ALBASE] = c.[ALBASE]
INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)
ON cat.APPL_CD = m.APPL_CD
AND cat.ALBASE = m.ALBASE
AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'
WHERE c.[YEAR] IS NULL
GROUP BY m.APPL_CD, m.ALBASE
UNION
SELECT 1174 AS BatchRunID, 'STATINV' AS Program, c.APPL_CD, c.ALBASE, 'RecordsInBiminiMissingInFocus' AS Measure, COUNT(*) AS Value
FROM [MISWork].[SX_FOCUS_NATIVE_200806] AS m WITH(NOLOCK)
RIGHT JOIN [MISWork].[SX_STATINV] AS c WITH(NOLOCK)
ON m.[YEAR] = c.[YEAR]
AND m.[MONTH] = c.[MONTH]
AND m.[BANK_NO] = c.[BANK_NO]
AND m.[COST_CENTER] = c.[COST_CENTER]
AND m.[GLACCOUNT_NO] = c.[GLACCOUNT_NO]
AND m.[CUSTACCOUNT] = c.[CUSTACCOUNT]
AND m.[APPL_CD] = c.[APPL_CD]
AND m.[ALBASE] = c.[ALBASE]
INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)
ON cat.APPL_CD = c.APPL_CD
AND cat.ALBASE = c.ALBASE
AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'
WHERE m.[YEAR] IS NULL
GROUP BY c.APPL_CD, c.ALBASE
)
SELECT * FROM RESULTS ORDER BY Program, APPL_CD, ALBASE, Measure

The code just sits there, no locking or blocking. The individual components of the UNION return in a few seconds each. The code works in general for checking the output results of all the other programs in the STAT group, but just halts for this one. Remove the CTE, no effect, sits there for 30 minutes/an hour, however long you care to wait before cancelling. Remove the UNION, and the 4 result sets return in 11 seconds, total of 19 records across all 4 result sets. Run just the first two together - works fine; run just the last 2 together, also fine. First 3 together, fine too. I've already modified the code to output these to a #temp table, for other requirements, so I'm just going to change it to output each to the #temp table in sequence, but I have never seen SQL just stop like that with no evidence of blocking or anything. A: Change to UNION ALL, since you'll never have dupes (the Measure column is hard-coded to be different). UNION must first sort the rows, and then find dupes and eliminate them. My real guess is it's a parallelization issue. Try adding OPTION (MAXDOP 1) at the end. A: If you can post the query execution plan in XML format, that'll help us determine what parts of the query are causing problems. In SSMS, click Query, Display Estimated Execution Plan, and when it comes up, right-click on it and save as XML. A: I've moved on to regression testing 200808, but the fundamental query is the same, with a different batchrunid and different known good table.
<?xml version="1.0"?> <ShowPlanXML xmlns="http://schemas.microsoft.com/sqlserver/2004/07/showplan" Version="1.0" Build="9.00.3239.00"> <BatchSequence> <Batch> <Statements> <StmtSimple StatementText="&#13;&#10;;WITH RESULTS AS (&#13;&#10;SELECT 1251 AS BatchRunID, 'STATINV' AS Program, m.APPL_CD, m.ALBASE, 'CountFocusRecords' AS Measure, COUNT(*) AS Value&#13;&#10;FROM [MISWork].[SX_FOCUS_NATIVE_200808] AS m WITH(NOLOCK)&#13;&#10;INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)&#13;&#10;ON cat.APPL_CD = m.APPL_CD&#13;&#10;AND cat.ALBASE = m.ALBASE&#13;&#10;AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'&#13;&#10;GROUP BY m.APPL_CD, m.ALBASE&#13;&#10;UNION&#13;&#10;SELECT 1251 AS BatchRunID, 'STATINV' AS Program, c.APPL_CD, c.ALBASE, 'CountBiminiRecords' AS Measure, COUNT(*) AS Value&#13;&#10;FROM [MISWork].[SX_STATINV] AS c WITH(NOLOCK)&#13;&#10;INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)&#13;&#10;ON cat.APPL_CD = c.APPL_CD&#13;&#10;AND cat.ALBASE = c.ALBASE&#13;&#10;AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'&#13;&#10;GROUP BY c.APPL_CD, c.ALBASE&#13;&#10;UNION&#13;&#10;SELECT 1251 AS BatchRunID, 'STATINV' AS Program, m.APPL_CD, m.ALBASE, 'RecordsInFocusMissingInBimini' AS Measure, COUNT(*) AS Value&#13;&#10;FROM [MISWork].[SX_FOCUS_NATIVE_200808] AS m WITH(NOLOCK)&#13;&#10;LEFT JOIN [MISWork].[SX_STATINV] AS c WITH(NOLOCK)&#13;&#10;ON m.[YEAR] = c.[YEAR]&#13;&#10; AND m.[MONTH] = c.[MONTH]&#13;&#10; AND m.[BANK_NO] = c.[BANK_NO]&#13;&#10; AND m.[COST_CENTER] = c.[COST_CENTER]&#13;&#10; AND m.[GLACCOUNT_NO] = c.[GLACCOUNT_NO]&#13;&#10; AND m.[CUSTACCOUNT] = c.[CUSTACCOUNT]&#13;&#10; AND m.[APPL_CD] = c.[APPL_CD]&#13;&#10; AND m.[ALBASE] = c.[ALBASE]&#13;&#10;INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)&#13;&#10;ON cat.APPL_CD = m.APPL_CD&#13;&#10;AND cat.ALBASE = m.ALBASE&#13;&#10;AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'&#13;&#10;WHERE c.[YEAR] IS NULL&#13;&#10;GROUP BY m.APPL_CD, 
m.ALBASE&#13;&#10;UNION&#13;&#10;SELECT 1251 AS BatchRunID, 'STATINV' AS Program, c.APPL_CD, c.ALBASE, 'RecordsInBiminiMissingInFocus' AS Measure, COUNT(*) AS Value&#13;&#10;FROM [MISWork].[SX_FOCUS_NATIVE_200808] AS m WITH(NOLOCK)&#13;&#10;RIGHT JOIN [MISWork].[SX_STATINV] AS c WITH(NOLOCK)&#13;&#10;ON m.[YEAR] = c.[YEAR]&#13;&#10; AND m.[MONTH] = c.[MONTH]&#13;&#10; AND m.[BANK_NO] = c.[BANK_NO]&#13;&#10; AND m.[COST_CENTER] = c.[COST_CENTER]&#13;&#10; AND m.[GLACCOUNT_NO] = c.[GLACCOUNT_NO]&#13;&#10; AND m.[CUSTACCOUNT] = c.[CUSTACCOUNT]&#13;&#10; AND m.[APPL_CD] = c.[APPL_CD]&#13;&#10; AND m.[ALBASE] = c.[ALBASE]&#13;&#10;INNER JOIN MISProcess.SXProcessCatalog AS cat WITH(NOLOCK)&#13;&#10;ON cat.APPL_CD = c.APPL_CD&#13;&#10;AND cat.ALBASE = c.ALBASE&#13;&#10;AND COALESCE(cat.ProcessName, 'STATINV') = 'STATINV'&#13;&#10;WHERE m.[YEAR] IS NULL&#13;&#10;GROUP BY c.APPL_CD, c.ALBASE&#13;&#10;) SELECT * FROM RESULTS ORDER BY Program, APPL_CD, ALBASE, Measure&#13;&#10;&#13;&#10;" StatementId="1" StatementCompId="1" StatementType="SELECT" StatementSubTreeCost="1209.5" StatementEstRows="13965.1" StatementOptmLevel="FULL"> <StatementSetOptions QUOTED_IDENTIFIER="false" ARITHABORT="true" CONCAT_NULL_YIELDS_NULL="false" ANSI_NULLS="false" ANSI_PADDING="false" ANSI_WARNINGS="false" NUMERIC_ROUNDABORT="false"/> <QueryPlan CachedPlanSize="504" CompileTime="1244" CompileCPU="1099" CompileMemory="5016"> <MissingIndexes> <MissingIndexGroup Impact="29.2539"> <MissingIndex Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]"> <ColumnGroup Usage="EQUALITY"> <Column Name="[APPL_CD]" ColumnId="7"/> <Column Name="[ALBASE]" ColumnId="8"/> </ColumnGroup> <ColumnGroup Usage="INCLUDE"> <Column Name="[YEAR]" ColumnId="1"/> <Column Name="[MONTH]" ColumnId="2"/> <Column Name="[BANK_NO]" ColumnId="3"/> <Column Name="[COST_CENTER]" ColumnId="4"/> <Column Name="[GLACCOUNT_NO]" ColumnId="5"/> <Column Name="[CUSTACCOUNT]" ColumnId="6"/> </ColumnGroup> </MissingIndex> 
</MissingIndexGroup> <MissingIndexGroup Impact="29.6796"> <MissingIndex Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]"> <ColumnGroup Usage="EQUALITY"> <Column Name="[APPL_CD]" ColumnId="7"/> <Column Name="[ALBASE]" ColumnId="8"/> </ColumnGroup> </MissingIndex> </MissingIndexGroup> </MissingIndexes> <RelOp NodeId="0" PhysicalOp="Parallelism" LogicalOp="Gather Streams" EstimateRows="13965.1" EstimateIO="0" EstimateCPU="0.121489" AvgRowSize="45" EstimatedTotalSubtreeCost="1209.5" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Column="Union1039"/> <ColumnReference Column="Union1040"/> <ColumnReference Column="Union1041"/> <ColumnReference Column="Union1042"/> <ColumnReference Column="Union1043"/> <ColumnReference Column="Union1044"/> </OutputList> <Parallelism> <OrderBy> <OrderByColumn Ascending="1"> <ColumnReference Column="Union1041"/> </OrderByColumn> <OrderByColumn Ascending="1"> <ColumnReference Column="Union1042"/> </OrderByColumn> <OrderByColumn Ascending="1"> <ColumnReference Column="Union1043"/> </OrderByColumn> </OrderBy> <RelOp NodeId="1" PhysicalOp="Sort" LogicalOp="Sort" EstimateRows="13965.1" EstimateIO="0.00281532" EstimateCPU="0.220682" AvgRowSize="45" EstimatedTotalSubtreeCost="1209.37" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Column="Union1039"/> <ColumnReference Column="Union1040"/> <ColumnReference Column="Union1041"/> <ColumnReference Column="Union1042"/> <ColumnReference Column="Union1043"/> <ColumnReference Column="Union1044"/> </OutputList> <MemoryFractions Input="0.0191727" Output="1"/> <Sort Distinct="0"> <OrderBy> <OrderByColumn Ascending="1"> <ColumnReference Column="Union1041"/> </OrderByColumn> <OrderByColumn Ascending="1"> <ColumnReference Column="Union1042"/> </OrderByColumn> <OrderByColumn Ascending="1"> <ColumnReference Column="Union1043"/> </OrderByColumn> </OrderBy> <RelOp NodeId="2" PhysicalOp="Concatenation" 
LogicalOp="Concatenation" EstimateRows="13965.1" EstimateIO="0" EstimateCPU="0.000349132" AvgRowSize="45" EstimatedTotalSubtreeCost="1209.15" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Column="Union1039"/> <ColumnReference Column="Union1040"/> <ColumnReference Column="Union1041"/> <ColumnReference Column="Union1042"/> <ColumnReference Column="Union1043"/> <ColumnReference Column="Union1044"/> </OutputList> <Concat> <DefinedValues> <DefinedValue> <ColumnReference Column="Union1039"/> <ColumnReference Column="Expr1006"/> <ColumnReference Column="Expr1014"/> <ColumnReference Column="Expr1025"/> <ColumnReference Column="Expr1036"/> </DefinedValue> <DefinedValue> <ColumnReference Column="Union1040"/> <ColumnReference Column="Expr1007"/> <ColumnReference Column="Expr1015"/> <ColumnReference Column="Expr1026"/> <ColumnReference Column="Expr1037"/> </DefinedValue> <DefinedValue> <ColumnReference Column="Union1041"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_STATINV]" Alias="[c]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_STATINV]" Alias="[c]" Column="APPL_CD"/> </DefinedValue> <DefinedValue> <ColumnReference Column="Union1042"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_STATINV]" Alias="[c]" Column="ALBASE"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_STATINV]" Alias="[c]" Column="ALBASE"/> </DefinedValue> <DefinedValue> <ColumnReference 
Column="Union1043"/> <ColumnReference Column="Expr1008"/> <ColumnReference Column="Expr1016"/> <ColumnReference Column="Expr1027"/> <ColumnReference Column="Expr1038"/> </DefinedValue> <DefinedValue> <ColumnReference Column="Union1044"/> <ColumnReference Column="Expr1005"/> <ColumnReference Column="Expr1013"/> <ColumnReference Column="Expr1024"/> <ColumnReference Column="Expr1035"/> </DefinedValue> </DefinedValues> <RelOp NodeId="4" PhysicalOp="Compute Scalar" LogicalOp="Compute Scalar" EstimateRows="7140" EstimateIO="0" EstimateCPU="0.0001785" AvgRowSize="42" EstimatedTotalSubtreeCost="362.728" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Column="Expr1005"/> <ColumnReference Column="Expr1006"/> <ColumnReference Column="Expr1007"/> <ColumnReference Column="Expr1008"/> </OutputList> <ComputeScalar> <DefinedValues> <DefinedValue> <ColumnReference Column="Expr1006"/> <ScalarOperator ScalarString="(1251)"> <Const ConstValue="(1251)"/> </ScalarOperator> </DefinedValue> <DefinedValue> <ColumnReference Column="Expr1007"/> <ScalarOperator ScalarString="'STATINV'"> <Const ConstValue="'STATINV'"/> </ScalarOperator> </DefinedValue> <DefinedValue> <ColumnReference Column="Expr1008"/> <ScalarOperator ScalarString="'CountFocusRecords'"> <Const ConstValue="'CountFocusRecords'"/> </ScalarOperator> </DefinedValue> </DefinedValues> <RelOp NodeId="6" PhysicalOp="Compute Scalar" LogicalOp="Compute Scalar" EstimateRows="7140" EstimateIO="0" EstimateCPU="0.0001785" AvgRowSize="23" EstimatedTotalSubtreeCost="362.728" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> 
<ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Column="Expr1005"/> </OutputList> <ComputeScalar> <DefinedValues> <DefinedValue> <ColumnReference Column="Expr1005"/> <ScalarOperator ScalarString="CONVERT_IMPLICIT(int,[globalagg1083],0)"> <Convert DataType="int" Style="0" Implicit="1"> <ScalarOperator> <Identifier> <ColumnReference Column="globalagg1083"/> </Identifier> </ScalarOperator> </Convert> </ScalarOperator> </DefinedValue> </DefinedValues> <RelOp NodeId="7" PhysicalOp="Hash Match" LogicalOp="Aggregate" EstimateRows="7140" EstimateIO="0" EstimateCPU="0.114864" AvgRowSize="27" EstimatedTotalSubtreeCost="362.728" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Column="globalagg1083"/> </OutputList> <MemoryFractions Input="0.5" Output="0.980827"/> <Hash> <DefinedValues> <DefinedValue> <ColumnReference Column="globalagg1083"/> <ScalarOperator ScalarString="SUM([partialagg1082])"> <Aggregate Distinct="0" AggType="SUM"> <ScalarOperator> <Identifier> <ColumnReference Column="partialagg1082"/> </Identifier> </ScalarOperator> </Aggregate> </ScalarOperator> </DefinedValue> </DefinedValues> <HashKeysBuild> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </HashKeysBuild> <BuildResidual> <ScalarOperator ScalarString="[DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[APPL_CD] as [m].[APPL_CD] = [DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[APPL_CD] as [m].[APPL_CD] AND [DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[ALBASE] as 
[m].[ALBASE] = [DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[ALBASE] as [m].[ALBASE]"> <Logical Operation="AND"> <ScalarOperator> <Compare CompareOp="IS"> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> </Identifier> </ScalarOperator> </Compare> </ScalarOperator> <ScalarOperator> <Compare CompareOp="IS"> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </Identifier> </ScalarOperator> </Compare> </ScalarOperator> </Logical> </ScalarOperator> </BuildResidual> <RelOp NodeId="8" PhysicalOp="Parallelism" LogicalOp="Repartition Streams" EstimateRows="28560" EstimateIO="0" EstimateCPU="0.0614707" AvgRowSize="27" EstimatedTotalSubtreeCost="362.613" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Column="partialagg1082"/> </OutputList> <Parallelism PartitioningType="Hash"> <PartitionColumns> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </PartitionColumns> <RelOp NodeId="9" PhysicalOp="Hash Match" LogicalOp="Partial Aggregate" EstimateRows="28560" 
EstimateIO="0" EstimateCPU="1.7277" AvgRowSize="27" EstimatedTotalSubtreeCost="362.551" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> <ColumnReference Column="partialagg1082"/> </OutputList> <MemoryFractions Input="0" Output="0"/> <Hash> <DefinedValues> <DefinedValue> <ColumnReference Column="partialagg1082"/> <ScalarOperator ScalarString="COUNT(*)"> <Aggregate Distinct="0" AggType="COUNT*"/> </ScalarOperator> </DefinedValue> </DefinedValues> <HashKeysBuild> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </HashKeysBuild> <BuildResidual> <ScalarOperator ScalarString="[DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[APPL_CD] as [m].[APPL_CD] = [DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[APPL_CD] as [m].[APPL_CD] AND [DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[ALBASE] as [m].[ALBASE] = [DUASFIN].[MISWork].[SX_FOCUS_NATIVE_200808].[ALBASE] as [m].[ALBASE]"> <Logical Operation="AND"> <ScalarOperator> <Compare CompareOp="IS"> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> </Identifier> </ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> </Identifier> </ScalarOperator> </Compare> </ScalarOperator> <ScalarOperator> <Compare CompareOp="IS"> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </Identifier> 
</ScalarOperator> <ScalarOperator> <Identifier> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </Identifier> </ScalarOperator> </Compare> </ScalarOperator> </Logical> </ScalarOperator> </BuildResidual> <RelOp NodeId="10" PhysicalOp="Hash Match" LogicalOp="Inner Join" EstimateRows="879583" EstimateIO="0" EstimateCPU="62.0602" AvgRowSize="19" EstimatedTotalSubtreeCost="360.824" Parallel="1" EstimateRebinds="0" EstimateRewinds="0"> <OutputList> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </OutputList> <MemoryFractions Input="1" Output="0.5"/> <Hash> <DefinedValues/> <HashKeysBuild> <ColumnReference Database="[DUASFIN]" Schema="[MISProcess]" Table="[SXProcessCatalog]" Alias="[cat]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISProcess]" Table="[SXProcessCatalog]" Alias="[cat]" Column="ALBASE"/> </HashKeysBuild> <HashKeysProbe> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="APPL_CD"/> <ColumnReference Database="[DUASFIN]" Schema="[MISWork]" Table="[SX_FOCUS_NATIVE_200808]" Alias="[m]" Column="ALBASE"/> </HashKeysProbe> <ProbeResidual>
{ "language": "en", "url": "https://stackoverflow.com/questions/164167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you construct a std::string with an embedded null? If I want to construct a std::string with a line like:

std::string my_string("a\0b");

where I want to have three characters in the resulting string (a, null, b), I only get one. What is the proper syntax? A: The following will work...

std::string s;
s.push_back('a');
s.push_back('\0');
s.push_back('b');

A: You'll have to be careful with this. If you replace 'b' with any numeric character, you will silently create the wrong string using most methods. See: Rules for C++ string literals escape character. For example, I dropped this innocent-looking snippet in the middle of a program:

// Create '\0' followed by '0' 40 times ;)
std::string str("\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00", 80);
std::cerr << "Entering loop.\n";
for (char & c : str) {
    std::cerr << c;
    // 'Q' is way cooler than '\0' or '0'
    c = 'Q';
}
std::cerr << "\n";
for (char & c : str) {
    std::cerr << c;
}
std::cerr << "\n";

Here is what this program output for me:

Entering loop.
Entering loop.
vector::_M_emplace_ba
QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ

That was my first print statement twice, several non-printing characters, followed by a newline, followed by something in internal memory, which I just overwrote (and then printed, showing that it has been overwritten). Worst of all, even compiling this with thorough and verbose gcc warnings gave me no indication of something being wrong, and running the program through valgrind didn't complain about any improper memory access patterns. In other words, it's completely undetectable by modern tools. You can get this same problem with the much simpler std::string("0", 100);, but the example above is a little trickier, and thus harder to see what's wrong. Fortunately, C++11 gives us a good solution to the problem using initializer-list syntax.
This saves you from having to specify the number of characters (which, as I showed above, you can do incorrectly), and avoids combining escaped numbers:

std::string str({'a', '\0', 'b'});

is safe for any string content, unlike versions that take an array of char and a size. A: In C++14 you may now use literals:

using namespace std::literals::string_literals;
std::string s = "a\0b"s;
std::cout << s.size(); // 3

A: If you are doing manipulation like you would with a C-style string (array of chars), consider using std::vector<char>. You have more freedom to treat it like an array in the same manner you would treat a C-string. You can use copy() to copy into a string:

std::vector<char> vec(100);
strncpy(&vec[0], "blah blah blah", 100);
std::string vecAsStr(vec.begin(), vec.end());

and you can use it in many of the same places you can use C-strings:

printf("%s", &vec[0]);
vec[10] = '\0';
vec[11] = 'b';

Naturally, however, you suffer from the same problems as C-strings. You may forget your null terminator or write past the allocated space. A: Since C++14 we have been able to create a literal std::string:

#include <iostream>
#include <string>

int main()
{
    using namespace std::string_literals;
    std::string s = "pl-\0-op"s;  // <- Notice the "s" at the end.
                                  //    This is a std::string literal, not
                                  //    a C-string literal.
    std::cout << s << "\n";
}

Before C++14: The problem is that the std::string constructor that takes a const char* assumes the input is a C-string. C-strings are \0-terminated and thus parsing stops when it reaches the \0 character. To compensate for this, you need to use the constructor that builds the string from a char array (not a C-string). This takes two parameters - a pointer to the array and a length:

std::string x("pq\0rs");    // Two characters because the input is assumed to be a C-string
std::string x("pq\0rs", 5); // 5 characters as the input is now a char array with 5 characters

Note: C++ std::string is NOT \0-terminated (as suggested in other posts).
However, you can extract a pointer to an internal buffer that contains a C-string with the method c_str(). Also check out Doug T's answer below about using a vector<char>. Also check out RiaD for a C++14 solution. A: I have no idea why you'd want to do such a thing, but try this:

std::string my_string("a\0b", 3);

A: What new capabilities do user-defined literals add to C++? presents an elegant answer: define

std::string operator "" _s(const char* str, size_t n)
{
    return std::string(str, n);
}

then you can create your string this way:

std::string my_string("a\0b"_s);

or even so:

auto my_string = "a\0b"_s;

There's an "old style" way:

#define S(s) s, sizeof s - 1 // trailing NUL does not belong to the string

then you can define

std::string my_string(S("a\0b"));

A: Better to use std::vector<char> if this question isn't just for educational purposes. A: anonym's answer is excellent, but there's a non-macro solution in C++98 as well:

template <size_t N>
std::string RawString(const char (&ch)[N])
{
    return std::string(ch, N-1); // Again, exclude trailing `null`
}

With this function, RawString(/* literal */) will produce the same string as S(/* literal */):

std::string my_string_t(RawString("a\0b"));
std::string my_string_m(S("a\0b"));

std::cout << "Using template: " << my_string_t << std::endl;
std::cout << "Using macro: " << my_string_m << std::endl;

Additionally, there's an issue with the macro: the expression is not actually a std::string as written, and therefore can't be used e.g. for simple assignment-initialization:

std::string s = S("a\0b"); // ERROR!

...so it might be preferable to use:

#define S(s) std::string(s, sizeof s - 1)

Obviously you should only use one or the other solution in your project and call it whatever you think is appropriate. A: Almost all implementations of std::string are null-terminated, so you probably shouldn't do this. Note that "a\0b" is actually four characters long because of the automatic null terminator (a, null, b, null).
If you really want to do this and break std::string's contract, you can do: std::string s("aab"); s.at(1) = '\0'; but if you do, all your friends will laugh at you and you will never find true happiness. A: I know it has been a long time since this question was asked, but anyone who is having a similar problem might be interested in the following code: CComBSTR(20,"mystring1\0mystring2\0")
{ "language": "en", "url": "https://stackoverflow.com/questions/164168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "107" }
Q: Informix SQL Syntax - Nest Count, Sum, Round Let me apologize in advance for the simplicity of this question (I heard Jeff's podcast and his concern that the quality of the questions will be "dumbed down"), but I'm stuck. I'm using AquaData to hit my Informix DB. There are quirky little nuances between MS SQL and Informix SQL. Anyway, I'm trying to do a simple nested expression and it hates me. select score, count(*) students, count(finished) finished, count(finished) / count(*)students -- round((count(finished) / count(*)students),2) from now_calc group by score order by score The line with the simple division expression returns the percentage of people who finished, which exactly what I want...I just need the result rounded to 2 places. The commented line (--) does not work. I've tried every variation I can possibly think of. *I'm NOT trying to use lines 5 & 6 at the same time I'm sorry, I should have mentioned that now_calc is a temp table and that the field names actually are "students " and "finished". I had them named like that because I'm going to spit these results straight into Excel and I wanted the field names to double as column headings. So, I understand what you are saying, and based on that, I made it work by removing the (*) like this: select score, count(students) students, count(finished) finished, round((count(finished) / count(students) * 100),2) perc from now_calc group by score order by score I'm including the entire query - it might make more sense to anyone else looking at this. From a learning perspective, it's important to note the only reason count works on the 'finished' field is because of the Case statement that made the values 1 or null depending on the evaluation of the Case statement. If that case statement did not exist, counting 'finished' would produce the exact same results as counting 'students'. 
--count of cohort members and total count of all students (for reference) select cohort_yr, count (*) id, (select count (*) id from prog_enr_rec where cohort_yr is not null and prog = 'UNDG' and cohort_yr >=1998) grand from prog_enr_rec where cohort_yr is not null and prog = 'UNDG' and cohort_yr >=1998 group by cohort_yr order by cohort_yr; --cohort members from all years for population select id, cohort_yr, cl, enr_date, prog from prog_enr_rec where cohort_yr is not null and prog = 'UNDG' and cohort_yr >=1998 order by cohort_yr into temp pop with no log; --which in population are still attending (726) select pop.id, 'Y' fin from pop, stu_acad_rec where pop.id = stu_acad_rec.id and pop.prog = stu_acad_rec.prog and sess = 'FA' and yr = 2008 and reg_hrs > 0 and stu_acad_rec.cl[1,1] <> 'P' into temp att with no log; --which in population graduated with either A or B deg (702) select pop.id, 'Y' fin from pop, ed_rec where pop.id = ed_rec.id and pop.prog = ed_rec.prog and ed_rec.sch_id = 10 and (ed_rec.deg_earn[1,1] = 'B' or (ed_rec.deg_earn[1,1] = 'A' and pop.id not in (select pop.id from pop, ed_rec where pop.id = ed_rec.id and pop.prog = ed_rec.prog and ed_rec.deg_earn[1,1] = 'B' and ed_rec.sch_id = 10))) into temp grad with no log; --combine all those that either graduated or are still attending select * from att union select * from grad into temp all_fin with no log; --ACT scores for all students in population who have a score (inner join to eliminate null values) --score > 50 eliminates people who have data entry errors - SAT scores in ACT field --2270 select pop.id, max (exam_rec.score5) score from pop, exam_rec where pop.id = exam_rec.id and ctgry = 'ACT' and score5 > 0 and score5 < 50 group by pop.id into temp pop_score with no log; select pop.id students, Case when all_fin.fin = 'Y' then 1 else null end finished, pop_score.score from pop, pop_score, outer all_fin where pop.id = all_fin.id and pop.id = pop_score.id into temp now_calc with no log; select score, 
count(students) students, count(finished) finished, round((count(finished) / count(students) * 100),2) perc from now_calc group by score order by score Thanks! A: SELECT score, count(*) students, count(finished) finished, count(finished) / count(*) AS something_other_than_students, round((count(finished) / count(*)),2) AS rounded_value FROM now_calc GROUP BY score ORDER BY score; Note that the output column name 'students' was being repeated and was also confusing you. The AS I used is optional. I've now formally validated the syntax against IDS, and it is usable: Black JL: sqlcmd -Ffixsep -d stores -xf xx.sql | sed 's/ //g' + create temp table now_calc(finished CHAR(1), score INTEGER, name CHAR(10) PRIMARY KEY); + insert into now_calc values(null, 23, 'a'); + insert into now_calc values('y', 23, 'b'); + insert into now_calc values('y', 23, 'h'); + insert into now_calc values('y', 23, 'i'); + insert into now_calc values('y', 23, 'j'); + insert into now_calc values('y', 43, 'c'); + insert into now_calc values(null, 23, 'd'); + insert into now_calc values('y', 43, 'e'); + insert into now_calc values(null, 23, 'f'); + insert into now_calc values(null, 43, 'g'); + SELECT score, count(*) students, count(finished) finished, count(finished) / count(*) AS something_other_than_students, round((count(finished) / count(*)),2) AS rounded_value FROM now_calc GROUP BY score ORDER BY score; 23| 7| 4| 5.71428571428571E-01| 0.57 43| 3| 2| 6.66666666666667E-01| 0.67 Black JL: I let 'finished' take nulls because the only reason for 'count(finished) / count(*)' not to return 1 is if 'finished' accepts nulls -- not very good table design, though. And I put 7 rows with score 23 to get a large number of decimal places (and then changed one row with score 43 to generate a second number with a large number of decimal places).
{ "language": "en", "url": "https://stackoverflow.com/questions/164173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is considered a good response time for a dynamic, personalized web application? For a complex web application that includes dynamic content and personalization, what is a good response time from the server (so excluding network latency and browser rendering time)? I'm thinking about sites like Facebook, Amazon, MyYahoo, etc. A related question is what is a good response time for a backend service? A: I have been striving for < 3 seconds for my applications, but I'm a bit picky when it comes to performance. If you ask around, they say that people start to lose interest in the >= 7 second range; by 10-15 seconds you have typically lost them, unless you REALLY have something they want or need. A: It depends on what keeps your users happy. For example, Gmail takes quite a while to open at first, but users wait because it is worth waiting for. A: Of course, it lies in the nature of your question that answers are highly subjective. The first response of a website is also only a small part of the time until a page is readable/usable. I am annoyed by any response that takes longer than 10 seconds; I think a website should be rendered after 5-7 seconds. Btw: stackoverflow.com has an excellent response time! A: Our company has a 5 second response time standard limit, and we aim for 2-3 seconds in general. This accounts for 98% of page loads. A few particular tasks are allowed to go up to 15 seconds, but we then mitigate that time by putting up a page and refreshing every 5 seconds telling the user that we are still trying to process the request. That way the user sees that something is happening and doesn't just leave. Although, considering that I work on a website whose users are forced to use it for business reasons, they aren't going to leave, but they are capable of complaining quite loudly. In general, if the processing is going to take more than 5 seconds, put up a temporary page so that the user doesn't lose interest.
A: I think you will find that if your web app is performing a complex operation then provided feedback is given to the user, they won't mind (too much). For example: Loading Google Mail. A: Not only does it depend on what keeps your users happy, but how much development time do you have? What kind of resources can you throw at the problem (software, hardware, and people)? I don't mind a couple-few second delay for hosted applications if they're doing something "complex". If it's really simple, delays bother me. A: There's a great deal of research on this. Here's a quick summary. Response Times: The 3 Important Limits by Jakob Nielsen on January 1, 1993 Summary: There are 3 main time limits (which are determined by human perceptual abilities) to keep in mind when optimizing web and application performance. Excerpt from Chapter 5 in my book Usability Engineering, from 1993: The basic advice regarding response times has been about the same for thirty years [Miller 1968; Card et al. 1991]: * *0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result. *1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data. *10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect. 
A: We strive for response times of 20 milliseconds, while some complex pages take up to 100 milliseconds. For the most complex pages, we break the page down into smaller pieces, and use the progressive display pattern to load each section. This way, some portions load quickly, even if the page takes 1 to 2 seconds to load, keeping the user engaged while the rest of the page is loading. A: 2 to 3 seconds
{ "language": "en", "url": "https://stackoverflow.com/questions/164175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "167" }
Q: How to fetch a remote image to display in a canvas? How can I fetch images from a server? I've got this bit of code which allows me to draw some images on a canvas. <html> <head> <script type="text/javascript"> function draw(){ var canvas = document.getElementById('canv'); var ctx = canvas.getContext('2d'); for (var i=0;i<document.images.length;i++){ ctx.drawImage(document.images[i],i*150,i*130); } } </script> </head> <body onload="draw();"> <canvas id="canv" width="1024" height="1024"></canvas> <img src="http://www.google.com/intl/en_ALL/images/logo.gif"> <img src="http://l.yimg.com/a/i/ww/beta/y3.gif"> <img src="http://static.ak.fbcdn.net/images/welcome/welcome_page_map.png"> </body> </html> Instead of looping over document.images, I would like to continually fetch images from a server. for (;;) { /* how to fetch myimage??? */ myimage = fetch???('http://myserver/nextimage.cgi'); ctx.drawImage(myimage, x, y); } A: If you want to draw an image to a canvas you also need to wait for the image to actually load, so the correct thing to do is: myimage = new Image(); myimage.onload = function() { context.drawImage(myimage, ...); } myimage.src = 'http://myserver/nextimage.cgi'; A: Use the built-in JavaScript Image object. Here is a very simple example of using the Image object: myimage = new Image(); myimage.src = 'http://myserver/nextimage.cgi'; Here is a more appropriate mechanism for your scenario from the comments on this answer. Thanks olliej! It's worth noting that you can't synchronously request a resource, so you should actually do something along the lines of: myimage = new Image(); myimage.onload = function() { ctx.drawImage(myimage, x, y); } myimage.src = 'http://myserver/nextimage.cgi'; A: To add an image in JavaScript you can do the following: myimage = new Image() myimage.src='http://....' If an image on your page has an ID "image1", you can assign the src of image1 to myimage.src. A: I have found that using prototypes is very helpful here.
If you aren't familiar with them, prototypes are part of objects that allow you to attach your own variables and/or methods to them. Doing something like: Image.prototype.position = { x: 0, y: 0 } Image.prototype.onload = function(){ context.drawImage(this, this.position.x, this.position.y); } allows you to set position and draw to the canvas without too much work. The "position" variable allows you to move it around on the canvas. So it's possible to do: var myImg = new Image(); myImg.position.x = 20; myImg.position.y = 200; myImg.src = "http://www.google.com/intl/en_ALL/images/logo.gif"; and the image will automatically draw to the canvas at (20,200). Prototype works for all HTML and native Javascript objects. So Array.prototype.sum = function(){ var _sum = 0.0; for (var i=0; i<this.length; i++){ _sum += parseFloat(this[i]); } return _sum; } gives a new function to all Arrays. However, var Bob; Bob.Prototype.sayHi = function(){ alert("Hello there."); } will not work (for multiple reasons, but I'll just talk about prototypes). Prototype is a "property" of sorts, which contains all the properties/methods that you input, and is already in each of the HTML and native Javascript objects (not the ones you make). Prototypes also allow for easy calling (you can do "myImg.position.x" instead of "myImg.prototype.position.x"). Besides, if you are defining your own variable, you should do it more like this.
var Bob = function(){ this.sayHi = function(){ alert("Hello there."); } } A: Using Promises: class App { imageUrl = 'https://img-prod-cms-rt-microsoft-com.akamaized.net/cms/api/am/imageFileData/RE4HZBo' constructor(dom) { this.start(dom) } async start(dom) { const appEl = dom.createElement('div') dom.body.append(appEl) const imageEl = await this.loadImage(this.imageUrl) const canvas = dom.createElement('canvas') canvas.width = imageEl.width canvas.height = imageEl.height const ctx = canvas.getContext('2d') ctx.drawImage(imageEl, 0, 0) appEl.append(canvas) } loadImage = async (url) => new Promise((resolve) => { const imageEl = new Image() imageEl.src = url imageEl.onload = () => resolve(imageEl) }) } new App(document) A: If you are using jQuery you can do: $('<img src="http://myserver/nextimage.cgi" />').appendTo('#canv'); You can also add widths and anything else in the img tag.
{ "language": "en", "url": "https://stackoverflow.com/questions/164181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to get Single XElement object using Linq to Xml? I would like to use Linq to Xml to get a single XElement from a .xml file by attribute name, similar to how you retrieve single objects in Linq to Sql by Id below: var singleDog = context.Dogs.Single(p => p.Id == int.Parse(Id)); Is this possible? A: Absolutely. Just use something like: xdoc.Descendants() .Where(x => (string)x.Attribute("id") == id) .Single(); (the (string) cast yields null when the attribute is missing, so elements without an id are skipped safely). There may be a more efficient way of doing it, admittedly...
{ "language": "en", "url": "https://stackoverflow.com/questions/164192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why do I get a segmentation fault when writing to a "char *s" initialized with a string literal, but not "char s[]"? The following code receives a seg fault on line 2: char *str = "string"; str[0] = 'z'; // could also be written as *str = 'z' printf("%s\n", str); While this works perfectly well: char str[] = "string"; str[0] = 'z'; printf("%s\n", str); Tested with MSVC and GCC. A: char *str = "string"; The above sets str to point to the literal value "string", which is hard-coded in the program's binary image and probably flagged as read-only in memory. So str[0]= is attempting to write to the read-only code of the application. I would guess this is probably compiler dependent though. A: To understand this error you should first know the difference between a pointer and an array, so let me explain both. String array char strarray[] = "hello"; In memory, an array is stored in contiguous memory cells, as [h][e][l][l][o][\0], where each [] is a one-byte cell, and this block of cells is accessed through the name strarray. The string array strarray itself contains all the characters of the string it was initialized with, in this case "hello", so we can easily change its contents by accessing each character via its index: `strarray[0]='m'` accesses the character at index 0 (which is 'h') and changes it to 'm', so strarray becomes "mello". One point to note: we can change the contents of a string array character by character, but we cannot assign another string to it directly; strarray="new string" is invalid. Pointer As we all know, a pointer points to a memory location (an uninitialized pointer points to a random location; after initialization it points to a particular one). char *ptr = "hello"; Here the pointer ptr is initialized with the string "hello", which is a constant string stored in read-only memory (ROM), so "hello" cannot be changed, as it is stored in
ROM, while ptr itself is stored in the stack section and points to that constant string, so ptr[0]='m' is invalid, since you cannot write to read-only memory. But ptr can be assigned another string value directly, since it is just a pointer and can point to any memory address of a variable of its data type: ptr="new string"; is valid. A: char *str = "string"; allocates a pointer to a string literal, which the compiler is putting in a non-modifiable part of your executable; char str[] = "string"; allocates and initializes a local array which is modifiable A: The C FAQ that @matli linked to mentions it, but no one else here has yet, so for clarification: if a string literal (double-quoted string in your source) is used anywhere other than to initialize a character array (ie: @Mark's second example, which works correctly), that string is stored by the compiler in a special static string table, which is akin to creating a global static variable (read-only, of course) that is essentially anonymous (has no variable "name"). The read-only part is the important part, and is why @Mark's first code example segfaults. A: Most of these answers are correct, but just to add a little more clarity... The "read only memory" that people are referring to is the text segment in ASM terms. It's the same place in memory where the instructions are loaded. This is read-only for obvious reasons like security. When you create a char* initialized to a string, the string data is compiled into the text segment and the program initializes the pointer to point into the text segment. So if you try to change it, kaboom. Segfault. When written as an array, the compiler places the initialized string data in the data segment instead, which is the same place that your global variables and such live. This memory is mutable, since there are no instructions in the data segment.
This time when the compiler initializes the character array (which is still just a char*) it's pointing into the data segment rather than the text segment, which you can safely alter at run-time. A: The char *str = "string"; line defines a pointer and points it to a literal string. The literal string is not writable so when you do: str[0] = 'z'; you get a seg fault. On some platforms, the literal might be in writable memory so you won't see a segfault, but it's invalid code (resulting in undefined behavior) regardless. The line: char str[] = "string"; allocates an array of characters and copies the literal string into that array, which is fully writable, so the subsequent update is no problem. A: Why do I get a segmentation fault when writing to a string? C99 N1256 draft There are two different uses of character string literals: * *Initialize char[]: char c[] = "abc"; This is "more magic", and described at 6.7.8/14 "Initialization": An array of character type may be initialized by a character string literal, optionally enclosed in braces. Successive characters of the character string literal (including the terminating null character if there is room or if the array is of unknown size) initialize the elements of the array. So this is just a shortcut for: char c[] = {'a', 'b', 'c', '\0'}; Like any other regular array, c can be modified. *Everywhere else: it generates an: * *unnamed *array of char What is the type of string literals in C and C++? *with static storage *that gives UB if modified So when you write: char *c = "abc"; This is similar to: /* __unnamed is magic because modifying it gives UB. */ static char __unnamed[] = "abc"; char *c = __unnamed; Note the implicit cast from char[] to char *, which is always legal. Then if you modify c[0], you also modify __unnamed, which is UB. 
This is documented at 6.4.5 "String literals": 5 In translation phase 7, a byte or code of value zero is appended to each multibyte character sequence that results from a string literal or literals. The multibyte character sequence is then used to initialize an array of static storage duration and length just sufficient to contain the sequence. For character string literals, the array elements have type char, and are initialized with the individual bytes of the multibyte character sequence [...] 6 It is unspecified whether these arrays are distinct provided their elements have the appropriate values. If the program attempts to modify such an array, the behavior is undefined. 6.7.8/32 "Initialization" gives a direct example: EXAMPLE 8: The declaration char s[] = "abc", t[3] = "abc"; defines "plain" char array objects s and t whose elements are initialized with character string literals. This declaration is identical to char s[] = { 'a', 'b', 'c', '\0' }, t[] = { 'a', 'b', 'c' }; The contents of the arrays are modifiable. On the other hand, the declaration char *p = "abc"; defines p with type "pointer to char" and initializes it to point to an object with type "array of char" with length 4 whose elements are initialized with a character string literal. If an attempt is made to use p to modify the contents of the array, the behavior is undefined. GCC 4.8 x86-64 ELF implementation Program: #include <stdio.h> int main(void) { char *s = "abc"; printf("%s\n", s); return 0; } Compile and decompile: gcc -ggdb -std=c99 -c main.c objdump -Sr main.o Output contains: char *s = "abc"; 8: 48 c7 45 f8 00 00 00 movq $0x0,-0x8(%rbp) f: 00 c: R_X86_64_32S .rodata Conclusion: GCC stores the char* string data in the .rodata section, not in .text. If we do the same for char[]: char s[] = "abc"; we obtain: 17: c7 45 f0 61 62 63 00 movl $0x636261,-0x10(%rbp) so it gets stored in the stack (relative to %rbp).
Note however that the default linker script puts .rodata and .text in the same segment, which has execute but no write permission. This can be observed with: readelf -l a.out which contains: Section to Segment mapping: Segment Sections... 02 .text .rodata A: String literals like "string" are probably allocated in your executable's address space as read-only data (give or take your compiler). When you go to touch it, it freaks out that you're in its bathing suit area and lets you know with a seg fault. In your first example, you're getting a pointer to that const data. In your second example, you're initializing an array of 7 characters with a copy of the const data. A: See the C FAQ, Question 1.32 Q: What is the difference between these initializations? char a[] = "string literal"; char *p = "string literal"; My program crashes if I try to assign a new value to p[i]. A: A string literal (the formal term for a double-quoted string in C source) can be used in two slightly different ways: * *As the initializer for an array of char, as in the declaration of char a[] , it specifies the initial values of the characters in that array (and, if necessary, its size). *Anywhere else, it turns into an unnamed, static array of characters, and this unnamed array may be stored in read-only memory, and which therefore cannot necessarily be modified. In an expression context, the array is converted at once to a pointer, as usual (see section 6), so the second declaration initializes p to point to the unnamed array's first element. Some compilers have a switch controlling whether string literals are writable or not (for compiling old code), and some may have options to cause string literals to be formally treated as arrays of const char (for better error catching). 
A: // create a string constant like this - will be read only char *str_p; str_p = "String constant"; // create an array of characters like this char *arr_p; char arr[] = "String in an array"; arr_p = &arr[0]; // now we try to change a character in the array first, this will work *arr_p = 'E'; // lets try to change the first character of the string contant *str_p = 'G'; // this will result in a segmentation fault. Comment it out to work. /*----------------------------------------------------------------------------- * String constants can't be modified. A segmentation fault is the result, * because most operating systems will not allow a write * operation on read only memory. *-----------------------------------------------------------------------------*/ //print both strings to see if they have changed printf("%s\n", str_p); //print the string without a variable printf("%s\n", arr_p); //print the string, which is in an array. A: In the first code, "string" is a string constant, and string constants should never be modified because they are often placed into read only memory. "str" is a pointer being used to modify the constant. In the second code, "string" is an array initializer, sort of short hand for char str[7] = { 's', 't', 'r', 'i', 'n', 'g', '\0' }; "str" is an array allocated on the stack and can be modified freely. A: Because the type of "whatever" in the context of the 1st example is const char * (even if you assign it to a non-const char*), which means you shouldn't try and write to it. The compiler has enforced this by putting the string in a read-only part of memory, hence writing to it generates a segfault. A: Normally, string literals are stored in read-only memory when the program is run. This is to prevent you from accidentally changing a string constant. In your first example, "string" is stored in read-only memory and *str points to the first character. The segfault happens when you try to change the first character to 'z'. 
In the second example, the string "string" is copied by the compiler from its read-only home to the str[] array. Then changing the first character is permitted. You can check this by printing the address of each: printf("%p", str); Also, printing the size of str in the second example will show you that the compiler has allocated 7 bytes for it: printf("%d", sizeof(str)); A: In the first place, str is a pointer that points at "string". The compiler is allowed to put string literals in places in memory that you cannot write to, but can only read. (This really should have triggered a warning, since you're assigning a const char * to a char *. Did you have warnings disabled, or did you just ignore them?) In the second place, you're creating an array, which is memory that you've got full access to, and initializing it with "string". You're creating a char[7] (six for the letters, one for the terminating '\0'), and you do whatever you like with it. A: Assume the strings are, char a[] = "string literal copied to stack"; char *p = "string literal referenced by p"; In the first case, the literal is to be copied when 'a' comes into scope. Here 'a' is an array defined on stack. It means the string will be created on the stack and its data is copied from code (text) memory, which is typically read-only (this is implementation specific, a compiler can place this read-only program data in read-writable memory also). In the second case, p is a pointer defined on stack (local scope) and referring a string literal (program data or text) stored else where. Usually modifying such memory is not good practice nor encouraged. A: Section 5.5 Character Pointers and Functions of K&R also discusses about this topic: There is an important difference between these definitions: char amessage[] = "now is the time"; /* an array */ char *pmessage = "now is the time"; /* a pointer */ amessage is an array, just big enough to hold the sequence of characters and '\0' that initializes it. 
Individual characters within the array may be changed but amessage will always refer to the same storage. On the other hand, pmessage is a pointer, initialized to point to a string constant; the pointer may subsequently be modified to point elsewhere, but the result is undefined if you try to modify the string contents. A: Constant memory Since string literals are read-only by design, they are stored in the Constant part of memory. Data stored there is immutable, i.e., cannot be changed. Thus, all string literals defined in C code get a read-only memory address here. Stack memory The Stack part of memory is where the addresses of local variables live, e.g., variables defined in functions. As @matli's answer suggests, there are two ways of working with these constant strings. 1. Pointer to string literal When we define a pointer to a string literal, we are creating a pointer variable living in Stack memory. It points to the read-only address where the underlying string literal resides. #include <stdio.h> int main(void) { char *s = "hello"; printf("%p\n", (void *)s); // Prints the literal's read-only address, e.g. 0x7ffc8e224620 return 0; } If we try to modify s by inserting s[0] = 'H'; we get a Segmentation fault (core dumped). We are trying to access memory that we shouldn't access. We are attempting to modify the value of a read-only address, 0x7ffc8e224620. 2. Array of chars For the sake of the example, suppose the string literal "hello" stored in constant memory has a read-only memory address identical to the one above, 0x7ffc8e224620. #include <stdio.h> int main(void) { // We create an array from a string literal with address 0x7ffc8e224620. // C initializes an array variable in the stack, let's give it address // 0x7ffc7a9a9db2. // C then copies the read-only value from 0x7ffc8e224620 into // 0x7ffc7a9a9db2 to give us a local copy we can mutate. char a[] = "hello"; // We can now mutate the local copy a[0] = 'H'; printf("%p\n", &a); // Prints the Stack address, e.g.
0x7ffc7a9a9db2 printf("%s\n", a); // Prints "Hello" return 0; } Note: When using pointers to string literals as in 1., best practice is to use the const keyword, like const char *s = "hello". This is more readable and the compiler will provide better help when it's violated. It will then throw an error like error: assignment of read-only location ‘*s’ instead of the seg fault. Linters in editors will also likely pick up the error before you manually compile the code. A: The first is a constant string, which can't be modified. The second is an array with an initialized value, so it can be modified. A: A segmentation fault is caused when you try to access memory which is inaccessible. char *str is a pointer to a string that is non-modifiable (the reason for the segfault), whereas char str[] is an array and can be modified.
{ "language": "en", "url": "https://stackoverflow.com/questions/164194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "334" }
Q: Printing barcode labels from a web page I am working on an ASP.Net web application that must print dynamically created labels on standard Avery-style label sheets (one particular size, so only one overall layout). The labels have a variable number of lines (3-6) and may contain either lines of text or a graphic barcode image. Our first cut, that I inherited, used monospaced fonts to reduce the formatting issues, but that did not allow enough text to the fit on the labels and the customer was dissatisfied. Basically it was formatted text. My next version used TABLEs, DIVs, CSS, and a bit of JavaScript calculations to format the labels using proportional fonts. It still required a bit of tweaking (the user had to set their print margins correctly and turn off the print headers and footers), but it seemed to work. However, it seems that there are some variations on how different printers render the text (WYS ain't WYG), so even though we tested on different browsers using at least two different printers (an inkjet and a laser printer), some user's labels don't line up. Slight margin variations can be adjusted by adjusting the margins on the page setup dialog, but the harder problem is that the inter-label spacing can be off by a tiny fraction of an inch, so that if the first label is pretty well centered, by the end of the page the label text and images have crawled off the top or bottom of the labels. We are about to the point of switching to generating Word, Excel, or PDF output which is going to take quite a bit of development time and possible add extra steps in the printing process. So, does anyone have any suggestions on how to do an HTML/CSS layout that will precisely render on different types of printers? I don't really care if the line/word breaks are a bit different, but I need to be able to predictably position the upper left corners of each label area. 
Right now the labels flow down the page in a table and we have been tweaking the box model of the cells and internal DIVs to make them a uniform height. I suspect that using absolute positioning of each element may be the best answer, but that is going to be tricky as well due to the ASP.Net generation of the label elements. If I knew for sure that would work, I would rather try it than throw away everything we have to go to a different generation method. Slight update: Right now I'm doing some tests with absolute positioning - setting only the top and left coordinates of a containing block element. So far there are minor variations in the offset onto the page (margins, paper alignment, etc.), but all browsers and printers tested put the elements in exactly the right spots relative to each other. I appreciate the PDF tips, but does anyone know of additional "gotchas" with using absolute positioning this way? Update: For the record, I rewrote the label printing portion using iTextSharp and it works perfectly - definitely the way to do this in the future... A: The web is not a format that is guaranteed to get consistent print results. Given the standard support for label printing with MS Word, and the relative ease of automation and generation, I would strongly recommend going that route. I'm not aware of ANY method to get precise printing across all types of browsers, operating systems, and printers when using web content. A: "Precisely" and "printing" aren't two words that really work together that well. I did an OCR/OMR application a year or so ago, and even when building a PDF I saw significant differences between different print drivers and such. Because of that, my gut is to tell you that you might not have 100% success. If CSS and layout issues don't work that well for you, you might need to resort to building the labels as images using GDI+ -- at least that way you can use GetFontMetrics() and such. Good luck! 
A: I had a similar issue, and the answer is you can't do it. Instead, I generated a PDF file in real time using iTextSharp and passed that to the response. A: Forget HTML and make a PDF. HTML printing is extremely variable - not just across browsers but across different versions of the same browser. PDF is a lot easier. Even if you get it exactly right with one browser / font setup / printer / phase of the moon, it will be the most fragile thing you've ever had to maintain. No matter how long you think it will take to make a PDF (and it's not really that hard as there are some free libraries out there), HTML will ultimately take a lot more of your time. PDF readers are widely deployed and print more consistently than even Word files. A: Using SQL Server Reporting Services, I generate a PDF to send to the printer, but it can be seen as HTML on the screen using the control you can include in your web pages. There are RDLC files available on the internet for printing to various Avery formats. A: This week I rewrote the SharpPDFLabel code that was mentioned back in 2011, as I needed it to be a lot more flexible (and to work with the current iTextSharp library). You can get it here: https://github.com/finalcut/SharpPDFLabel I added the ability to specify the contents of each individual label if you want (or to continue creating a sheet of identical labels too). By extending the LabelDefinition class you can specify the layout of your labels pretty easily. A: I also struggled with the HTML/CSS approach due to the inconsistent printing behaviour across browsers. I created a C# library to produce Avery labels from ASP.NET which I hope you might find useful: https://github.com/wheelibin/SharpPDFLabel#readme You can add images and text to the labels, and it's easy to define more label types. (I use it for barcode labels; the barcode is generated as an image and then added to the label using this library.) 
Cheers A: Add a few options to your app that let users adjust spacing for their particular configuration. You could include this right on the label if you want, and style it away via media selectors, but you'll probably want to persist the settings somewhere, too. A: Flash is also a good method for pushing a printable like a label, albeit a little more complex to implement and maintain. In most cases it displays much quicker than a PDF, and you can embed it into the design of the page and simply add a "Print" button within the Flash. I did this several years ago when we were using HTML and PDF to generate confirmation receipts. HTML is "ok" but is at the mercy of the end user's web browser, so we quickly dumped that method. PDFs are good as long as they have a PDF reader, which to our surprise a lot of our customers did not. So that was dumped as well after we switched to a Flash version using a simple Flash movie that included a few dynamic text areas and a "print" button. I communicated the data between the page and the Flash using a few flash vars. You can also use a web service. When I need something more than just simple text I use the free community edition of the PDF Generator component from DynamicPDF.com. It works great and is very quick. A: I just went through the same thing. Ended up switching and making a short little JSF app (running on Glassfish) that uses JasperReports to print directly to the label printer. Push a button and you get an instant label at the printer; you don't even have to view it on-screen if you don't want to, since Jasper can output directly to the printer (as well as PDF in the browser).
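A common thread in the answers above is that per-label drift comes from accumulated row spacing, while absolute positioning computes each label's corner independently of the labels before it. As an illustration (not the poster's actual code, and with made-up sheet dimensions rather than a real Avery specification), the position arithmetic can be sketched like this:

```python
# Sketch: computing absolute top-left corners for a label grid so that
# positioning error never accumulates down the page. The sheet dimensions
# below are hypothetical, not taken from a real Avery spec.

def label_origins(cols, rows, left_margin, top_margin, pitch_x, pitch_y):
    """Return the (x, y) top-left corner of each label, in points.

    Each corner is computed directly from the sheet origin
    (margin + index * pitch), never by stacking label heights, so a
    rounding error in one row cannot push later rows off their labels.
    """
    return [
        (left_margin + c * pitch_x, top_margin + r * pitch_y)
        for r in range(rows)
        for c in range(cols)
    ]

# Hypothetical 3-across, 10-down sheet measured in points (1/72 inch).
origins = label_origins(cols=3, rows=10,
                        left_margin=13.5, top_margin=36.0,
                        pitch_x=198.0, pitch_y=72.0)
print(origins[0])    # first label's corner
print(origins[-1])   # last label: top_margin + 9 * pitch_y, no drift
```

Whether the corners are then emitted as absolutely positioned DIVs or as PDF coordinates, the point is the same: the last label's position depends only on the sheet origin and its own row/column index.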
{ "language": "en", "url": "https://stackoverflow.com/questions/164197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I add a EULA to a VS2008 setup project? This is a very simple question with a simple answer, but it is not quite so simple to find the answer on the internet. I have a simple Setup (deployment) project in Visual Studio 2008, and I have the EULA text. What do I need to do in the project to get the EULA into the install wizard? A: This is how you performed these actions in VS2003 and VS2005; I don't believe they've made changes, but I'm not running VS2008 yet so I can't be certain. Right-click the installation project and select View->User Interface. In the "Start" section, right-click and select Add Dialog. Choose the license dialog. Point the license dialog to an RTF file.
{ "language": "en", "url": "https://stackoverflow.com/questions/164247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Protecting Section in App.config file Console Application I am trying to encrypt the appSettings and connectionStrings sections in the App.config file of a console application. For some reason section.SectionInformation.IsProtected is always returning true. static void Main(string[] args) { EncryptSection("connectionStrings", "DataProtectionConfigurationProvider"); } private static void EncryptSection(string sectionName, string providerName) { string assemblyPath = Assembly.GetExecutingAssembly().Location; Configuration config = ConfigurationManager.OpenExeConfiguration(assemblyPath); ConfigurationSection section = config.GetSection(sectionName); if (section != null && !section.SectionInformation.IsProtected) { section.SectionInformation.ProtectSection(providerName); config.Save(); } } Not sure why it is always returning true. A: Your code opens the current application's configuration. You can try this: static void Main(string[] args) { if (args.Length == 0) { Console.Error.WriteLine("Usage : Program.exe <configFileName>"); // App.Config return; } EncryptSection(args[0], "connectionStrings", "DataProtectionConfigurationProvider"); } private static void EncryptSection(string configurationFile, string sectionName, string providerName) { Configuration config = ConfigurationManager.OpenExeConfiguration(configurationFile); ConfigurationSection section = config.GetSection(sectionName); if (section != null && !section.SectionInformation.IsProtected) { section.SectionInformation.ProtectSection(providerName); config.Save(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/164268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there any way to log Firebug 'profile' results to an external file? Specifically, we've got some external JavaScript tracking code on our sites that throws itself into an infinite loop each time an anchor is clicked on. We don't maintain the tracking code, so we don't know exactly how it works. Since the code causes the browser to lock up almost immediately, I was wondering if there's any way to log the results of Firebug's 'profile' functionality to an external file for review? A: Perhaps by modifying Firebug itself, or creating a Firebug plugin, you could log the data to preferences or SQLite. But Firefox doesn't grant write access to plain old JavaScript. A: You should be able to narrow it down by setting breakpoints in the offending JavaScript. It might be messy (especially if they "minify" their JavaScript), but I think it's your best bet.
{ "language": "en", "url": "https://stackoverflow.com/questions/164282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I transfer a file using wininet that is readable by a php script? I would like to transfer a text file to a webserver using wininet as if the file was being transferred using a web form that posts the file to the server. Based on answers I've received I've tried the following code: static TCHAR hdrs[] = "Content-Type: multipart/form-data\nContent-Length: 25"; static TCHAR frmdata[] = "file=filename.txt\ncontent"; HINTERNET hSession = InternetOpen("MyAgent", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0); HINTERNET hConnect = InternetConnect(hSession, "example.com", INTERNET_DEFAULT_HTTP_PORT, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 1); HINTERNET hRequest = HttpOpenRequest(hConnect, "POST", "test.php", NULL, NULL, NULL, 0, 1); HttpSendRequest(hRequest, hdrs, strlen(hdrs), frmdata, strlen(frmdata));"); The test.php script is being run, but it doesn't appear to be getting the correct data. Could anyone give me any additional help or somewhere to look? Thanks. A: Let's take this one step at a time. First the HTTP headers Involved: * *Content-Type: multipart/form-data *Content-Length: <this depends on the sum of the bytes of the contents> Then you have to build a string with the contents of a POST Form. Lets assume you have the input named file: file=filename.txt <You now add the content of the file after that carriage return> You calculate the length of this string and put on the Content-Length above. Ok a complete HTTP Request would look like this: POST /file_upload.php HTTP/1.0 Content-type: multipart/form-data Content-length: <calculated string's length: integer> file=filename.txt ...File Content... Now some code from the PHP manual: <?php // In PHP versions earlier than 4.1.0, $HTTP_POST_FILES should be used instead // of $_FILES. $uploaddir = '/var/www/uploads/'; $uploadfile = $uploaddir . 
basename($_FILES['file']['name']); echo '<pre>'; if (move_uploaded_file($_FILES['file']['tmp_name'], $uploadfile)) { echo "File is valid, and was successfully uploaded.\n"; } else { echo "Possible file upload attack!\n"; } echo 'Here is some more debugging info:'; print_r($_FILES); print "</pre>"; ?> Knowing me I've probably messed the format for the content but this is the general idea. A: Changing the form data and headers that I had above to the following solved the problem: static TCHAR frmdata[] = "-----------------------------7d82751e2bc0858\nContent-Disposition: form-data; name=\"uploadedfile\"; filename=\"file.txt\"\nContent-Type: text/plain\n\nfile contents here\n-----------------------------7d82751e2bc0858--"; static TCHAR hdrs[] = "Content-Type: multipart/form-data; boundary=---------------------------7d82751e2bc0858"; A: Here's a general description of the things involved in that. Basically, you have to create an HTTP request to a web address, attach information to the request and then send it. The request must be a POST request in your case.
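For comparison, here is a sketch of the same body-building exercise in Python; the field name, filename, and boundary are taken from the accepted answer above. Note that RFC 2046 calls for CRLF line endings and a two-dash prefix on each boundary line in the body, which the hand-rolled C strings above approximate with bare \n:

```python
# Sketch: building a multipart/form-data body by hand, mirroring the
# accepted answer but with the CRLF line endings RFC 2046 requires.
# The field name, filename, and contents are placeholders.

def build_multipart(field, filename, content, boundary):
    # In the body, each boundary line is "--" plus the boundary declared
    # in the Content-Type header; the final one gets a trailing "--".
    lines = [
        "--" + boundary,
        'Content-Disposition: form-data; name="%s"; filename="%s"'
            % (field, filename),
        "Content-Type: text/plain",
        "",
        content,
        "--" + boundary + "--",
        "",
    ]
    body = "\r\n".join(lines)
    headers = {
        "Content-Type": "multipart/form-data; boundary=" + boundary,
        "Content-Length": str(len(body)),
    }
    return headers, body

headers, body = build_multipart("uploadedfile", "file.txt",
                                "file contents here",
                                "---------------------------7d82751e2bc0858")
print(headers["Content-Type"])
```

The same headers and body string could then be handed to HttpSendRequest on the C++ side; the PHP script reads the part via $_FILES['uploadedfile'].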
{ "language": "en", "url": "https://stackoverflow.com/questions/164284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Writing a Scheduled Windows Service in .NET I want to write a Windows service which the user can schedule, i.e., the user can choose to run the service from 9:00 AM to 6 PM daily, or he could run it every night, starting at 12 midnight and finishing at 6 the next morning, etc. Is there any out-of-the-box .NET API that will help me do this? I know I can do this using Scheduled Tasks, but is there any way to do this programmatically? A: I've had good results using Quartz.NET to perform scheduled tasks inside a Windows service. You can do everything from simple interval scheduling to cron-style schedules. A: My first response is to question why a service? But more importantly, the question would be why not use the powerful scheduler that is provided by the operating system? That said, a Windows service is pretty much just a thread that your application runs in. You could ship it in two parts: the first is the service itself, which executes on a timer. The startup of the service could check a registry value to determine how often it's supposed to execute. The second part of the service would be a little Windows app that allows the user to set the schedule and, of course, write it to the previously mentioned registry value. There isn't any sort of special API that you'd need. A: If you don't want the user to have to deal with the Task Scheduler, then you should write a program that will let them pick the day and time to run the program, and then you programmatically set up the scheduled task for them. That way they never have to know specifically about what process you are running, and they also don't have to know how to use the Task Scheduler. They just do it all from your app. A: If you're going to schedule it, just build a console program and add some code to the installer that helps the user set up a scheduled task in Windows. A: I implemented some unattended services (Windows services written in C#), using the crontab algorithm to manage the scheduling. 
The pattern is powerful, and flexible. We can create schedules to any time we want, only using the cron expression. Maybe I am wrong, but the only schedule that I think cron doesn't cover is if we want the last day of the month, but this was never a requirement for all services. I copied the cron algorithm from an article in the internet (open source by Atif Aziz), and implemented in my utility class, working beautifully for years. See more details in my blog: CronTab schedule parser algorithm Cheers! Roberto
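The crontab approach described above can be illustrated with a deliberately simplified matcher. This is a sketch, not Roberto's implementation: it supports only "*", single values, and comma lists, while real crontab syntax adds ranges (1-5) and steps (*/15), and numbers day-of-week with 0 = Sunday, unlike Python's weekday():

```python
# Sketch: a minimal cron-field matcher in the spirit of the crontab
# scheduling described above. Handles "*", single values, and comma
# lists only; a production parser would add ranges, steps, and the
# cron day-of-week numbering (0 = Sunday).
from datetime import datetime

def field_matches(field, value):
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr, when):
    # Field order: minute, hour, day-of-month, month, day-of-week.
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            and field_matches(dow, when.weekday()))

# "Every night at midnight" and "on the hour at 9 AM, noon, and 6 PM".
print(cron_matches("0 0 * * *", datetime(2008, 10, 2, 0, 0)))
print(cron_matches("0 9,12,18 * * *", datetime(2008, 10, 2, 13, 0)))
```

The service's timer callback would evaluate the expression against the current time each minute and fire the job on a match.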
{ "language": "en", "url": "https://stackoverflow.com/questions/164286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I avoid page breaks inside tables and groups in BIRT? When creating reports using BIRT 2.3.1, I don't want page breaks inside tables or groups; if the table doesn't fit in the space available at the page, I want to put the entire element in the next page. Using previous versions of BIRT it was possible to set pageBreakInside to "avoid", but it didn't work. In BIRT 2.3.1 this (useless) option was removed, since it wasn't implemented correctly. A: This is one of the 'hard' problems in reporting. Tried to get it in 2.3 and we missed, rather than have people thinking that they were doing something wrong (when it didn't work), we backed it out in 2.3.1. This is high on the priority list for 2.5 (June 2009). Sorry to disappoint, we just ran out of time.
{ "language": "en", "url": "https://stackoverflow.com/questions/164292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you specify a message label when using WCF and NetMsmqBinding? I would like to set the MSMQ message label using the NetMsmqBinding. I understand it’s easy when using the MsmqIntegrationBinding, but I would like to continue to use the NetMsmqBinding (even call private methods, if possible) A: I thought this was an interesting question. Unfortunately, from everything I've seen, it looks like you can't access the Label property on an outgoing MSMQ message using NetMsmqBinding. Here are some of the links I came across: * *http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/3389679b-a130-4e83-bb4c-1b522c216227/ *http://blogs.msdn.com/skaufman/archive/2007/12/17/msmq-label-property-and-wcf.aspx I couldn't find anything that explained exactly why, but the reasoning makes sense - the NetMsmqBinding does not expose anything specific to System.Messaging, so that the binding itself can be easily swapped out for another binding without any code changes. Like you said, the MsmqIntegrationBinding is tightly coupled to System.Messaging concepts, so you get access to all the System.Messaging stuff at the expense of interchangability with other bindings. If setting the Label is important, the easiest route will probably be to just use msmqIntegrationBinding. A: George: No answer, but I'm curious to know how you plan to use the MSMQ label together with NetMsmqBinding. The reason I ask is that NetMsmqBinding was really created to support the scenario in which both the sending and receiving endpoints are both WCF applications, so at that point you might as well just stick any out-of-band data you need in the message headers... A: Use OperationContext.Current.IncomingMessageProperties.Values
{ "language": "en", "url": "https://stackoverflow.com/questions/164295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to upgrade PowerBuilder code? I have code from PowerBuilder 5 that can't be built. The compiler just stops before it is done without any error codes. I would like to upgrade the code to the recent version of PowerBuilder but there are some intermediate versions of PowerBuilder that have binary dependencies to an old Microsoft java dll that Microsoft no longer can distribute due to some court case. So, is there a way to get my code running in a newer environment? /johan/ A: Firstly, you don't need to use "intermediate versions of PowerBuilder" to migrate up to a current version, so even if this java DLL dependency sounds questionable to me (at least it doesn't ring a bell), it's irrelevant unless it affects the target version of PowerBuilder. For migrating, you might want to check out this migration guide, as well as a list of changes to PB that may affect you. A: Very unusual sounding problem. You could give a try to migrating the code to a more recent version of PowerBuilder and see if it will compile or at least fail but give you some useful error messages. I would also recommend posting this in the PowerBuilder section of the Sybase newsgroups. They are very active and full of some brilliant PB minds with lots of experience. You can find them here: http://forums.sybase.com A: From here:http://forums.sybase.com/cgi-bin/webnews.cgi?cmd=item-4558&group=sybase.public.powersite I just learned that the combination of "severe" message, and message that psdwc70.dll was unable to self-register is probably because msjava.dll is not present and/or registered on your machine. The psdwc70.dll file relies on msjava.dll in order to install properly. /johan/ A: Have you tried exporting the code in PB5 and importing in new version?
{ "language": "en", "url": "https://stackoverflow.com/questions/164297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Windows hangs during headless build We are trying to automate a build of one of our products which includes a step where it packages some things with WISE. At one point WISE pops up a window with a progress bar on it to show how it is doing. If one is connected to the machine with remote desktop the build works fine but if one is not connected the build stalls until you reconnect at which point the window opens and the build progresses. Does anybody know of a work around for this? Some way of tricking windows into believing that there is a desktop session connected? A: Sorry for yet another guess - but I had a problem with a wise installer locking up. It was because WISE had installed a "font" and so broadcast a "system config changed" message. My DELL had a Dell utility running on it that had a message queue it wasn't reading from so the broadcast locked up the installer. WISE made a new version for me that did an async broadcast instead to fix the problem. It's possible that there's an app on your system that doesn't bother reading its msg queue when there is no desktop. Finally the answer: check you have the latest patches for your WISE installer. In particular, look for patches that fix lock-ups related to the windowing system. A: What version are you using? Looking at the feature set, it looks like their "std" version might be limited. Perhaps unattended installs require the Pro version? That's just a guess.... Regardless, I wonder whether you could simply code up an auto-run task for the box that calls CreateDesktop to pretend there's an interactive login? I found a CreateDesktop example that's about desktop switching, and an example about unattended installs -- you might be able to use one of them as a starting point to "fake out" WISE :) It might be worth a try...
{ "language": "en", "url": "https://stackoverflow.com/questions/164304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C++ types using CodeSynthesis XSD Tree Mapping I'm using CodeSynthesis XSD C++/Tree Mapping utility to convert an existing xsd into c++ code we can populate the values in. This was we always make sure we follow the schema. After doing the conversion, I'm trying to get it to work so I can test it. Problem is, I'm not used to doing this in c++ and it's my first time with this tool. I start with a class called ABSTRACTNETWORKMODEL with types versno_type and fromtime_type typedef'd inside. Here is the constructor I am trying to use as well as the typedefs ABSTRACTNETWORKMODEL(const versno_type&, const fromtime_type&); typedef ::xml_schema::double_ versno_type; typedef ::xml_schema::time fromtime_type; all these are in the ABSTRACTNETWORKMODEL class and the definitions for double_ and time are: typedef ::xsd::cxx::tree::time<char, simple_type> time; typedef double double_; where the definition for time is a class with multiple constructors: template<typename C, typename B> class time: public B, public time_zone { public: time(unsigned short hours, unsigned short minutes, double seconds); ... } I know I'm not correctly creating a new ABSTRACTNETWORKMODEL but I don't know what I need to do this. Here is all I'm trying to do at this point: ::xml_schema::time t(); ABSTRACTNETWORKMODEL anm(1234, t); This, of course, throws an error about converting the second parameter, but can somebody tell me what it is that is incorrect? Or at least point me down the right path, as one of the things I'm trying to do right now is learn more c++. A: I've been bitten by this before. If the line: ::xml_schema::time t(); is exactly as it appears in your code (that is, with the parens) then the problem is that you didn't actually instantiate an object like you think. To instantiate an object you would use ::xml_schema::time t; The first line, instead, declares a function t() that takes no arguments and returns an object of type ::xml_schema::time. 
Since there is no body, the compiler thinks you will define the function later. It is perfectly legal C++, and it's something that people do a lot (say, in header files) so the compiler accepts it, does not issue a warning because it has no way of knowing that's not what you meant, and does something you weren't expecting. And when you pass that function to the ABSTRACTNETWORKMODEL constructor you get an error because you can't pass a function as an argument (you can pass a pointer to the function, and you can call the function, passing the resulting temporary): ::xml_schema::time t(); ABSTRACTNETWORKMODEL anm(1234, t()); // calls t(), gets a temporary of type ::xml_schema::time, and passes the temporary to the constructor So the reason "the instantiation of time didn't cause an error" is that a time object was never instantiated. The time class doesn't have a default constructor either, and attempting to instantiate t with the correct syntax would have thrown the error you were expecting. For the record, the parenthesis are required in some cases. For instance, when instantiating a temporary object and manipulating that temporary in the same line: int hours = time().get_hours(); // assuming that there is now a default constructor Because dropping the first set of parenthesis will result in an error: int hours = time.set_time("12:00:00am"); // error: there is a time class, but no object named "time" Believe me, I really like C++, but the syntax can get really difficult to keep straight some times. A: Asked around the office, and it appears my problem wasn't creating the ABSTRACTNETWORKMODEL, but it was actually the ::xml_schema::time. I find it odd that the instantiation of time didn't cause an error, given that it doesn't have any default constructors or why it wasn't accepted even though the template and types were correct.
{ "language": "en", "url": "https://stackoverflow.com/questions/164305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Installing RMagick on Mac OS X with MacPorts With the MacPorts version of ImageMagick 6.4.4 installed, I'm getting an error installing the RMagick gem. /opt/local/bin/ruby extconf.rb update rmagick checking for Ruby version >= 1.8.2... yes checking for /usr/bin/gcc-4.0... yes checking for Magick-config... no Can't install RMagick 2.7.0. Can't find Magick-config in /System/Library/Frameworks/JavaVM.framework/Versions/1.5/Commands: /Users/jason/.bin:/opt/local/bin:/usr/local/bin:/usr/local/mysql/bin: /usr/local/ec2-api-tools/bin:/opt/local/bin:/usr/bin: /usr/local/bin:/bin:/usr/sbin:/sbin:/usr/X11/bin I've installed older versions of rmagick successfully. I've seen references to a dev package of ImageMagick, but it doesn't seem to be available from MacPorts. How can I install RMagick 2.7 on Mac OS X with ImageMagick 6.4.4 from MacPorts? A: Try this from the command line before installing the rmagick gem: sudo port install tiff -macosx imagemagick +q8 +gs +wmf Also have you read the installation documentation here ? A: The install script can't find Magick-config in your path. Did you use a non-standard install location when you installed ImageMagick through MacPorts? Usually it goes into /opt/local/bin/ You can see where MacPorts put your Magick-config by running: port contents ImageMagick If you find it listed there, then make sure that the directory is included in your PATH and rerun the rmagick install. A: I suggest using Homebrew instead of Macports. After installing Homebrew, run: brew install imagemagick gem install rmagick A: I've run the install command, but I keep getting this error: /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- RMagick2.so (LoadError) Turns out it correctly builds the shared object file, but the name is "wrong". The file I get is named /Library/Ruby/Gems/1.8/gems/rmagick-2.11.1/lib/RMagick2.bundle; renaming it to RMagick2.so fixes this issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/164307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Can an Adobe AIR app that's running in the system tray pop up a window? For example, if I were to write a calendar app on top of AIR, say with Flex, could this app pop up reminder windows for approaching appointments, just like Microsoft Outlook can? Clarification: Can those windows be actual dialogs where I can enter and save information? A: See Creating toast-style windows A: Twhirl pops up "toast" notifications (similiar to most instant messengers), while it is running in the system tray. So yes. A: There are notification balloons tips but this method is going out of favor with systray icons in Windows 7 and it's not cross platform. Unfortunately, you can only call this in the Win32 API using Shell_NotifyIcon and there is no way to get to it from Air. Stuck making your own toaster popups. A: YES Your Flex AIR application can pop up windows, dialogs, toasters, whatever. A: Have a look at this article on the Adobe Website, for creating your own Toast Style Popups. (Credit for this goes to Duncan Smart, answering this question.)
{ "language": "en", "url": "https://stackoverflow.com/questions/164311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there any difference between GROUP BY and DISTINCT I learned something simple about SQL the other day: SELECT c FROM myTbl GROUP BY C Has the same result as: SELECT DISTINCT C FROM myTbl What I am curious of, is there anything different in the way an SQL engine processes the command, or are they truly the same thing? I personally prefer the distinct syntax, but I am sure it's more out of habit than anything else. EDIT: This is not a question about aggregates. The use of GROUP BY with aggregate functions is understood. A: What's the difference from a mere duplicate removal functionality point of view Apart from the fact that unlike DISTINCT, GROUP BY allows for aggregating data per group (which has been mentioned by many other answers), the most important difference in my opinion is the fact that the two operations "happen" at two very different steps in the logical order of operations that are executed in a SELECT statement. Here are the most important operations: * *FROM (including JOIN, APPLY, etc.) *WHERE *GROUP BY (can remove duplicates) *Aggregations *HAVING *Window functions *SELECT *DISTINCT (can remove duplicates) *UNION, INTERSECT, EXCEPT (can remove duplicates) *ORDER BY *OFFSET *LIMIT As you can see, the logical order of each operation influences what can be done with it and how it influences subsequent operations. In particular, the fact that the GROUP BY operation "happens before" the SELECT operation (the projection) means that: * *It doesn't depend on the projection (which can be an advantage) *It cannot use any values from the projection (which can be a disadvantage) 1. 
It doesn't depend on the projection

An example where not depending on the projection is useful is if you want to calculate window functions on distinct values: SELECT rating, row_number() OVER (ORDER BY rating) AS rn FROM film GROUP BY rating When run against the Sakila database, this yields:

rating  rn
-----------
G       1
NC-17   2
PG      3
PG-13   4
R       5

The same couldn't be achieved with DISTINCT easily: SELECT DISTINCT rating, row_number() OVER (ORDER BY rating) AS rn FROM film That query is "wrong" and yields something like:

rating  rn
------------
G       1
G       2
G       3
...
G       178
NC-17   179
NC-17   180
...

This is not what we wanted. The DISTINCT operation "happens after" the projection, so we can no longer remove DISTINCT ratings because the window function was already calculated and projected. In order to use DISTINCT, we'd have to nest that part of the query: SELECT rating, row_number() OVER (ORDER BY rating) AS rn FROM ( SELECT DISTINCT rating FROM film ) f Side-note: In this particular case, we could also use DENSE_RANK() SELECT DISTINCT rating, dense_rank() OVER (ORDER BY rating) AS rn FROM film 2. It cannot use any values from the projection One of SQL's drawbacks is its verbosity at times. For the same reason as what we've seen before (namely the logical order of operations), we cannot "easily" group by something we're projecting. This is invalid SQL: SELECT first_name || ' ' || last_name AS name FROM customer GROUP BY name This is valid (repeating the expression) SELECT first_name || ' ' || last_name AS name FROM customer GROUP BY first_name || ' ' || last_name This is valid, too (nesting the expression) SELECT name FROM ( SELECT first_name || ' ' || last_name AS name FROM customer ) c GROUP BY name I've written about this topic more in depth in a blog post A: GROUP BY has a very specific meaning that is distinct (heh) from the DISTINCT keyword.
GROUP BY causes the query results to be grouped using the chosen expression; aggregate functions can then be applied, and these will act on each group, rather than the entire resultset. Here's an example that might help. Given a table that looks like this:

name
------
barry
dave
bill
dave
dave
barry
john

This query: SELECT name, count(*) AS count FROM table GROUP BY name; Will produce output like this:

name   count
-------------
barry  2
dave   3
bill   1
john   1

Which is obviously very different from using DISTINCT. If you want to group your results, use GROUP BY; if you just want a unique list of a specific column, use DISTINCT. This will give your database a chance to optimise the query for your needs. A: If you are using a GROUP BY without any aggregate function then internally it will be treated as DISTINCT, so in this case there is no difference between GROUP BY and DISTINCT. But when the DISTINCT clause is available, it is better to use it for finding your unique records, because the objective of GROUP BY is to achieve aggregation. A: There is no difference (in SQL Server, at least). Both queries use the same execution plan. http://sqlmag.com/database-performance-tuning/distinct-vs-group Maybe there is a difference, if there are sub-queries involved: http://blog.sqlauthority.com/2007/03/29/sql-server-difference-between-distinct-and-group-by-distinct-vs-group-by/ There is no difference (Oracle-style): http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:32961403234212 A: They have different semantics, even if they happen to have equivalent results on your particular data. A: Please don't use GROUP BY when you mean DISTINCT, even if they happen to work the same. I'm assuming you're trying to shave off milliseconds from queries, and I have to point out that developer time is orders of magnitude more expensive than computer time. A: From a Teradata perspective: From a result set point of view, it does not matter if you use DISTINCT or GROUP BY in Teradata.
The answer set will be the same. From a performance point of view, it is not the same. To understand what impacts performance, you need to know what happens on Teradata when executing a statement with DISTINCT or GROUP BY. In the case of DISTINCT, the rows are redistributed immediately without any preaggregation taking place, while in the case of GROUP BY, in a first step a preaggregation is done and only then are the unique values redistributed across the AMPs. Don't assume, though, that GROUP BY is always better from a performance point of view. When you have many different values, the preaggregation step of GROUP BY is not very efficient. Teradata has to sort the data to remove duplicates. In this case, it may be better to do the redistribution first, i.e. use the DISTINCT statement. Only if there are many duplicate values is the GROUP BY statement probably the better choice, as the deduplication step takes place only once, before redistribution. In short, DISTINCT vs. GROUP BY in Teradata means: GROUP BY -> for many duplicates; DISTINCT -> no or only a few duplicates. At times, when using DISTINCT, you run out of spool space on an AMP. The reason is that redistribution takes place immediately, and skewing could cause AMPs to run out of space. If this happens, you probably have a better chance with GROUP BY, as duplicates are already removed in a first step, and less data is moved across the AMPs. A: GROUP BY is used in aggregate operations -- like when you want to get a count of Bs broken down by column C: select C, count(B) from myTbl group by C DISTINCT is what it sounds like -- you get unique rows. In SQL Server 2005, it looks like the query optimizer is able to optimize away the difference in the simplistic examples I ran. Dunno if you can count on that in all situations, though. A: Use DISTINCT if you just want to remove duplicates. Use GROUP BY if you want to apply aggregate operators (MAX, SUM, GROUP_CONCAT, ..., or a HAVING clause).
A: MusiGenesis' response is functionally the correct one with regard to your question as stated; the SQL Server is smart enough to realize that if you are using "Group By" and not using any aggregate functions, then what you actually mean is "Distinct" - and therefore it generates an execution plan as if you'd simply used "Distinct." However, I think it's important to note Hank's response as well - cavalier treatment of "Group By" and "Distinct" could lead to some pernicious gotchas down the line if you're not careful. It's not entirely correct to say that this is "not a question about aggregates" because you're asking about the functional difference between two SQL query keywords, one of which is meant to be used with aggregates and one of which is not. A hammer can work to drive in a screw sometimes, but if you've got a screwdriver handy, why bother? (for the purposes of this analogy, Hammer : Screwdriver :: GroupBy : Distinct and screw => get list of unique values in a table column) A: In that particular query there is no difference. But, of course, if you add any aggregate columns then you'll have to use group by. A: I expect there is the possibility for subtle differences in their execution. 
I checked the execution plans for two functionally equivalent queries along these lines in Oracle 10g:

core> select sta from zip group by sta;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    58 |   174 |    44  (19)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |    58 |   174 |    44  (19)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| ZIP  | 42303 |   123K|    38   (6)| 00:00:01 |
---------------------------------------------------------------------------

core> select distinct sta from zip;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    58 |   174 |    44  (19)| 00:00:01 |
|   1 |  HASH UNIQUE       |      |    58 |   174 |    44  (19)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| ZIP  | 42303 |   123K|    38   (6)| 00:00:01 |
---------------------------------------------------------------------------

The middle operation is slightly different: "HASH GROUP BY" vs. "HASH UNIQUE", but the estimated costs etc. are identical. I then executed these with tracing on and the actual operation counts were the same for both (except that the second one didn't have to do any physical reads due to caching). But I think that because the operation names are different, the execution would follow somewhat different code paths and that opens the possibility of more significant differences. I think you should prefer the DISTINCT syntax for this purpose. It's not just habit, it more clearly indicates the purpose of the query. A: You're only noticing that because you are selecting a single column. Try selecting two fields and see what happens.
Group By is intended to be used like this: SELECT name, SUM(transaction) FROM myTbl GROUP BY name Which would show the sum of all transactions for each person. A: From a 'SQL the language' perspective the two constructs are equivalent, and which one you choose is one of those 'lifestyle' choices we all have to make. I think there is a good case for DISTINCT being more explicit (and therefore more considerate to the person who will inherit your code etc.) but that doesn't mean the GROUP BY construct is an invalid choice. I think this 'GROUP BY is for aggregates' is the wrong emphasis. Folk should be aware that the set function (MAX, MIN, COUNT, etc.) can be omitted so that they can understand the coder's intent when it is. The ideal optimizer will recognize equivalent SQL constructs and will always pick the ideal plan accordingly. For your real-life SQL engine of choice, you must test :) PS note the position of the DISTINCT keyword in the select clause may produce different results e.g. contrast: SELECT COUNT(DISTINCT C) FROM myTbl; SELECT DISTINCT COUNT(C) FROM myTbl; A: I know it's an old post, but it happens that I had a query that used GROUP BY just to return distinct values. When using that query in Toad and Oracle Reports everything worked fine, I mean a good response time. When we migrated from Oracle 9i to 11g, the response time in Toad was excellent, but in the report it took about 35 minutes to finish, whereas with the previous version it took about 5 minutes. The solution was to change the GROUP BY and use DISTINCT, and now the report runs in about 30 secs. I hope this is useful for someone with the same situation. A: Sometimes they may give you the same results, but they are meant to be used in different senses/cases. The main difference is in syntax. Look closely at the example below. DISTINCT is used to filter out duplicate sets of values. (6, cs, 9.1) and (1, cs, 5.5) are two different sets.
So DISTINCT is going to display both of the rows while GROUP BY Branch is going to display only one set.

SELECT * FROM student;

+------+--------+------+
| Id   | Branch | CGPA |
+------+--------+------+
| 3    | civil  | 7.2  |
| 2    | mech   | 6.3  |
| 6    | cs     | 9.1  |
| 4    | eee    | 8.2  |
| 1    | cs     | 5.5  |
+------+--------+------+
5 rows in set (0.001 sec)

SELECT DISTINCT * FROM student;

+------+--------+------+
| Id   | Branch | CGPA |
+------+--------+------+
| 3    | civil  | 7.2  |
| 2    | mech   | 6.3  |
| 6    | cs     | 9.1  |
| 4    | eee    | 8.2  |
| 1    | cs     | 5.5  |
+------+--------+------+
5 rows in set (0.001 sec)

SELECT * FROM student GROUP BY Branch;

+------+--------+------+
| Id   | Branch | CGPA |
+------+--------+------+
| 3    | civil  | 7.2  |
| 6    | cs     | 9.1  |
| 4    | eee    | 8.2  |
| 2    | mech   | 6.3  |
+------+--------+------+
4 rows in set (0.001 sec)

Sometimes the results that can be achieved with the GROUP BY clause are not possible to achieve with DISTINCT without using some extra clause or conditions, e.g., in the above case. To get the same result as DISTINCT you have to pass all the column names in the GROUP BY clause like below; so see the syntactical difference. You must have knowledge of all the column names to use the GROUP BY clause in that case.

SELECT * FROM student GROUP BY Id, Branch, CGPA;

+------+--------+------+
| Id   | Branch | CGPA |
+------+--------+------+
| 1    | cs     | 5.5  |
| 2    | mech   | 6.3  |
| 3    | civil  | 7.2  |
| 4    | eee    | 8.2  |
| 6    | cs     | 9.1  |
+------+--------+------+

Also I have noticed that GROUP BY displays the results in ascending order by default, which DISTINCT does not. But I am not sure about this; it may differ vendor-wise. Source: https://dbjpanda.me/dbms/languages/sql/sql-syntax-with-examples#group-by A: In terms of usage, GROUP BY is used for grouping those rows you want to calculate over. DISTINCT will not do any calculation. It will show no duplicate rows. I always use DISTINCT if I want to present data without duplicates.
If I want to do calculations like summing up the total quantity of mangoes, I will use GROUP BY. A: GROUP BY lets you use aggregate functions, like AVG, MAX, MIN, SUM, and COUNT. On the other hand DISTINCT just removes duplicates. For example, if you have a bunch of purchase records, and you want to know how much was spent by each department, you might do something like: SELECT department, SUM(amount) FROM purchases GROUP BY department This will give you one row per department, containing the department name and the sum of all of the amount values in all rows for that department. A: For the query you posted, they are identical. But for other queries that may not be true. For example, it's not the same as: SELECT C FROM myTbl GROUP BY C, D A: I read all the above comments but didn't see anyone point to the main difference between GROUP BY and DISTINCT apart from the aggregation bit. DISTINCT returns all the rows and then de-duplicates them, whereas GROUP BY de-duplicates the rows as they're read by the algorithm one by one. This means they can produce different results! For example, the queries below generate different results: SELECT distinct ROW_NUMBER() OVER (ORDER BY Name), Name FROM NamesTable SELECT ROW_NUMBER() OVER (ORDER BY Name), Name FROM NamesTable GROUP BY Name If there are 10 names in the table, one of which is a duplicate of another, then the first query returns 10 rows whereas the second query returns 9 rows. The reason is what I said above, so they can behave differently! A: If you use DISTINCT with multiple columns, the result set won't be grouped as it will with GROUP BY, and you can't use aggregate functions with DISTINCT. A: In Hive (HQL), GROUP BY can be way faster than DISTINCT, because the former does not require comparing all fields in the table. See: https://sqlperformance.com/2017/01/t-sql-queries/surprises-assumptions-group-by-distinct.
A: The way I always understood it is that using DISTINCT is the same as grouping by every field you selected, in the order you selected them. i.e.: select distinct a, b, c from table; is the same as: select a, b, c from table group by a, b, c A: Functional efficiency is totally different. If you would like to select only the return values without duplicates, DISTINCT is better than GROUP BY, because "group by" involves (sorting + removing) while "distinct" involves only (removing). A: Generally we can use DISTINCT to eliminate the duplicates in a specific column of the table. In the case of GROUP BY we can apply aggregation functions like AVG, MAX, MIN, SUM, and COUNT on a specific column and fetch the column name and its aggregation function result for the same column. Example: select specialColumn, sum(specialColumn) from yourTableName group by specialColumn; A: There is no significant difference between the GROUP BY and DISTINCT clauses except the usage of aggregate functions. Both can be used to distinguish the values, but from a performance point of view GROUP BY is better. When the DISTINCT keyword is used, internally it uses a sort operation, which can be viewed in the execution plan. Try a simple example: Declare @tmpresult table ( Id tinyint ) Insert into @tmpresult Select 5 Union all Select 2 Union all Select 3 Union all Select 4 Select distinct Id From @tmpresult
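To make the thread's two core points concrete in runnable form, here is a small sketch using Python's built-in sqlite3 module (the table name and sample data are invented for illustration): without aggregate functions, DISTINCT and GROUP BY return the same result set; with an aggregate, only GROUP BY gives per-group results.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE myTbl (C TEXT);
    INSERT INTO myTbl (C) VALUES ('a'), ('b'), ('a'), ('c'), ('b');
""")

# Without aggregate functions, DISTINCT and GROUP BY return the same rows.
distinct_rows = conn.execute("SELECT DISTINCT C FROM myTbl ORDER BY C").fetchall()
grouped_rows = conn.execute("SELECT C FROM myTbl GROUP BY C ORDER BY C").fetchall()
assert distinct_rows == grouped_rows == [("a",), ("b",), ("c",)]

# GROUP BY additionally lets aggregate functions act on each group.
counts = conn.execute(
    "SELECT C, COUNT(*) FROM myTbl GROUP BY C ORDER BY C").fetchall()
assert counts == [("a", 2), ("b", 2), ("c", 1)]
```

The same two queries can be pointed at any other engine; as several answers note, whether the engine also picks the same execution plan for the first pair is vendor-specific.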
{ "language": "en", "url": "https://stackoverflow.com/questions/164319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "443" }
Q: How to get file size from within TSQL (path specified in column) in SSRS 2005 I need to get the Folder size and display the info on a report (SSRS). I need to do this for a number of Databases (loop!). These DB's are websites' backends. Are any samples available for this? Are xp_filesize and the like the right solution? A: Looking at the question and Tomalak's response, I'm assuming the reporting server will be able to reach the folders held in the DB: Firstly set up the query to get you back the result-set of paths - I assume you'll have no trouble with this part. Next you'll need to add a custom code function to your report: http://msdn.microsoft.com/en-us/library/ms155798.aspx - This function will take the folder path as a parameter, and pass back the size of the folder. You'll have to write in VB.Net if you want to embed the code in the report, or you could code up a DLL and bring that in. An example VB.Net code block (remember you may need to prefix objects with System.IO.): http://www.freevbcode.com/ShowCode.asp?ID=4287 Public Shared Function GetFolderSize(ByVal DirPath As String, _ Optional IncludeSubFolders as Boolean = True) As Long Dim lngDirSize As Long Dim objFileInfo As FileInfo Dim objDir As DirectoryInfo = New DirectoryInfo(DirPath) Dim objSubFolder As DirectoryInfo Try 'add length of each file For Each objFileInfo In objDir.GetFiles() lngDirSize += objFileInfo.Length Next 'call recursively to get sub folders 'if you don't want this set optional 'parameter to false If IncludeSubFolders then For Each objSubFolder In objDir.GetDirectories() lngDirSize += GetFolderSize(objSubFolder.FullName) Next End if Catch Ex As Exception End Try Return lngDirSize End Function Now, in your report, in your table, the cell that shows the folder size would have an expression something like: =Code.GetFolderSize(Fields!FolderPath.Value) I doubt this approach will be performant for a manually-viewed report, but you might get away with it for small result sets,
or a scheduled report delivered by email? Oh, and this piece suggests you 'may' run into permissions issues using System.IO from within RS: http://blogs.sqlxml.org/bryantlikes/pages/824.aspx A: Could you clarify who should do what in your scenario? Do you want SQL Server to get the info, or do you want Reporting Server to do that? What exactly do you mean by "folder size"? Is "one folder, sum up each file" enough or does it need to be recursive? Either way, I'd go for a little custom .NET function that uses System.IO.Directory and its relatives. A: I'd consider splitting this into two pieces, maybe a Windows Service to scan the directories and aggregate the data into a database, then use SSRS to report on the database as usual. The reason I suggest this is that to use master..xp_filesize and its kin, the account the SQL Server service is starting with needs access to the paths to be scanned. Once this turns into accessing paths on other machines I'd be less comfortable with the security implications of that. Hope this helps A: In SSRS you can do this with the help of a custom data extension. You need to give the path for the datasource as your folder name, and it will retrieve your files and their related information and display them. For further reference and a custom DLL, use this: http://www.devx.com/dbzone/Article/31336/0/page/4 I have done this earlier. Note: you have to make related changes to the Report Designer and Report Manager configuration files.
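If the folder-size computation ends up outside the report (for example, in the Windows Service suggested above), the recursive idea in the VB.NET GetFolderSize function ports directly to other languages; here is a minimal sketch in Python (the function name and parameters mirror the VB.NET version and are not part of SSRS):

```python
import os

def get_folder_size(dir_path, include_subfolders=True):
    """Sum the sizes (in bytes) of all files directly in dir_path,
    recursing into subdirectories when include_subfolders is True."""
    total = 0
    with os.scandir(dir_path) as entries:
        for entry in entries:
            if entry.is_file(follow_symlinks=False):
                total += entry.stat(follow_symlinks=False).st_size
            elif include_subfolders and entry.is_dir(follow_symlinks=False):
                total += get_folder_size(entry.path)
    return total
```

Like the VB.NET original, this counts file lengths only; it does not account for allocation-unit overhead on disk.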
{ "language": "en", "url": "https://stackoverflow.com/questions/164324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get the position() of an XElement? Any XPath like /NodeName/position() would give you the position of the node w.r.t. its parent node. There is no method on the XElement (LINQ to XML) object that can get the position of the element. Is there? A: You could use the NodesBeforeSelf method to do this: XElement root = new XElement("root", new XElement("one", new XElement("oneA"), new XElement("oneB") ), new XElement("two"), new XElement("three") ); foreach (XElement x in root.Elements()) { Console.WriteLine(x.Name); Console.WriteLine(x.NodesBeforeSelf().Count()); } Update: If you really just want a Position method, just add an extension method. public static class ExMethods { public static int Position(this XNode node) { return node.NodesBeforeSelf().Count(); } } Now you can just call x.Position(). :) A: Actually NodesBeforeSelf().Count doesn't work, because it gets everything, even nodes of type XText. The question was about an XElement object, so I figured it's int position = obj.ElementsBeforeSelf().Count(); that should be used. Thanks to Bryant for the direction. A: Actually, in the Load method of XDocument you can set a load option of SetLineInfo; you can then typecast XElements to IXmlLineInfo to get the line number. You could do something like var list = from xe in xmldoc.Descendants("SomeElem") let info = (IXmlLineInfo)xe select new { LineNum = info.LineNumber, Element = xe } A: static int Position(this XNode node) { var position = 0; foreach(var n in node.Parent.Nodes()) { if(n == node) { return position; } position++; } return -1; }
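For comparison outside .NET, the same sibling-position idea is a one-liner wherever the parent exposes its children as a sequence. A small sketch using Python's ElementTree (names are illustrative; unlike NodesBeforeSelf(), this counts element children only, which matches the ElementsBeforeSelf() suggestion above):

```python
import xml.etree.ElementTree as ET

def position(parent, node):
    """Return the 0-based position of node among parent's child
    elements, analogous to ElementsBeforeSelf().Count() in LINQ to XML."""
    return list(parent).index(node)

root = ET.fromstring("<root><one><oneA/><oneB/></one><two/><three/></root>")
assert position(root, root[0]) == 0        # <one>
assert position(root, root[2]) == 2        # <three>
assert position(root[0], root[0][1]) == 1  # <oneB> within <one>
```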
{ "language": "en", "url": "https://stackoverflow.com/questions/164335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Should repositories implement IQueryable? I'm considering one of two IRepository interfaces, one that is a descendant of IQueryable and one that contains IQueryable. Like this: public interface IRepository<T> : IQueryable<T> { T Save(T entity); void Delete(T entity); } Or this: public interface IRepository<T> { T Save(T entity); void Delete(T entity); IQueryable<T> Query(); } LINQ usage would be: from dos in ServiceLocator.Current.GetInstance<IRepository<DomainObject>>() where dos.Id == id select dos Or... from dos in ServiceLocator.Current.GetInstance<IRepository<DomainObject>>().Query where dos.Id == id select dos I kinda like the first one, but it's problematic to mock. How have other people implemented LINQable, mockable repositories? A: Personally, I use the Repository Pattern to return all items from the Repository as an IQueryable. By doing this, my repository layer is now very, very light and small .. and the service layer (which consumes the Repository layer) is now open to all types of query manipulation. Basically, all my logic now sits in the service layer (which has no idea what type of repository it will be using .. and doesn't want to know <-- separation of concerns) .. while my repository layer is just dealing with getting data and saving data to the repo (a sql server, a file, a satellite in space.. etc <-- more separation of concerns). eg. More or less pseudo code as I'm remembering what we've done in our code and simplifying it for this answer... public interface IRepository<T> { IQueryable<T> Find(); void Save(T entity); void Delete(T entity); } and to have a user repository... public class UserRepository : IRepository<User> { public IQueryable<User> Find() { // Context is some Entity Framework context or // Linq-to-Sql or NHib or an Xml file, etc... // I didn't bother adding this, to this example code. return context.Users().AsQueryable(); } // ...
etc } and now for the best bit :) public class UserServices : IUserServices { private readonly IRepository<User> _userRepository; public UserServices(IRepository<User> userRepository) { _userRepository = userRepository; } public User FindById(int userId) { return _userRepository.Find() .WithUserId(userId) .SingleOrDefault(); // <-- This will be null, if the // user doesn't exist // in the repository. } // Note: some people might not want the FindBySingle method because this // uber method can do that, also. But I wanted to show u the power // of having the Repository return an IQueryable. public User FindSingle(Expression<Func<User, bool>> predicate) { return _userRepository .Find() .SingleOrDefault(predicate); } } Bonus Points: WTF is WithUserId(userId) in the FindById method? That's a Pipe and Filter. Use them :) love them :) hug them :) They make your code SOOO much more readable :) Now, if u're wanting to know what that does.. this is the extension method (note that it returns an IQueryable<User>, so it can keep being composed before the final SingleOrDefault() call): public static IQueryable<User> WithUserId(this IQueryable<User> source, int userId) { return source.Where(u => u.UserId == userId); } HTH's even though this question is .. well ... nearly two years old :) A: Depends on if you want a Has-A or an Is-A relationship. The first one is an Is-A relationship. The IRepository interface is an IQueryable interface. The second is a has-a. The IRepository has an IQueryable interface. In the process of writing this, I actually like the second better than the first, simply because when I use your second IRepository, I can give the Query() method ANYTHING that returns IQueryable. To me, that is more flexible than the first implementation. A: You could always quickly write stuff against List; it's not mocking using a mock framework, but it sure works great.
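The design point in the long answer above, a thin repository returning a lazy query that the service layer composes with pipe-and-filter helpers, is not tied to C# or IQueryable. A rough sketch of the same shape in Python, with generators standing in for IQueryable (all names are illustrative):

```python
class UserRepository:
    """Thin repository: only fetches; no query logic lives here."""
    def __init__(self, users):
        self._users = users  # stands in for a real data source

    def find(self):
        # Return a lazy iterable, analogous to returning IQueryable<T>.
        return iter(self._users)

def with_user_id(users, user_id):
    """Pipe-and-filter helper, like the WithUserId extension method."""
    return (u for u in users if u["id"] == user_id)

class UserServices:
    """Service layer: composes filters over whatever the repository returns."""
    def __init__(self, repository):
        self._repository = repository

    def find_by_id(self, user_id):
        # next(..., None) mirrors SingleOrDefault(): None when absent.
        return next(with_user_id(self._repository.find(), user_id), None)

repo = UserRepository([{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}])
services = UserServices(repo)
assert services.find_by_id(2)["name"] == "bob"
assert services.find_by_id(99) is None
```

The trade-off discussed in the answers carries over: the repository stays trivial to fake in tests, while all query logic lives in one composable layer.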
{ "language": "en", "url": "https://stackoverflow.com/questions/164342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Create a BCEL JavaClass object from arbitrary .class file I'm playing around with BCEL. I'm not using it to generate bytecode, but instead I'm trying to inspect the structure of existing compiled classes. I need to be able to point to an arbitrary .class file anywhere on my hard drive and load a JavaClass object based on that. Ideally I'd like to avoid having to add the given class to my classpath. A: The existing .class can be class-loaded into a java.lang.Class object. Then it can be converted into BCEL's intermediate JavaClass structure. The following code may help: Class<?> javaClass1 = null; javaClass1 = ucl.loadClass("com.sample.Customer"); org.apache.bcel.classfile.JavaClass javaClazz1 = org.apache.bcel.Repository.lookupClass(javaClass1); A: new ClassParser(classfilebytearrayhere).parse() A: The straightforward way is to create a ClassParser with the file name and call parse(). Alternatively you can use SyntheticRepository and supply a classpath (that is not your classpath, IYSWIM).
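Independent of which BCEL entry point you pick, what ClassParser reads is the JVM class-file format, which starts with a fixed big-endian header. A minimal sketch of that first check (written in Python purely to illustrate the file layout; BCEL does the full parse for you):

```python
import struct

def read_class_header(data):
    """Parse the first 8 bytes of a .class file: u4 magic (0xCAFEBABE),
    u2 minor_version, u2 major_version, all big-endian per the JVM
    class-file format."""
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return minor, major

# A fabricated header for class-file version 52.0 (Java 8), for illustration.
header = struct.pack(">IHH", 0xCAFEBABE, 0, 52)
assert read_class_header(header) == (0, 52)
```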
{ "language": "en", "url": "https://stackoverflow.com/questions/164343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I read a text file from the second line using fstream? How can I make my std::fstream object start reading a text file from the second line? A: The more efficient way is to skip lines with std::istream::ignore: for (int currLineNumber = 0; currLineNumber < startLineNumber; ++currLineNumber){ if (addressesFile.ignore(numeric_limits<streamsize>::max(), addressesFile.widen('\n'))){ //just skipping the line } else return HandleReadingLineError(addressesFile, currLineNumber); } HandleReadingLineError is not standard but hand-made, of course. The first parameter is the maximum number of characters to extract. If this is exactly numeric_limits<streamsize>::max(), there is no limit: Link at cplusplus.com: std::istream::ignore If you are going to skip a lot of lines you definitely should use it instead of getline: when I needed to skip 100000 lines in my file it took about a second, as opposed to 22 seconds with getline. A: Use getline() to read the first line, then begin reading the rest of the stream. ifstream stream("filename.txt"); string dummyLine; getline(stream, dummyLine); // Begin reading your stream here while (stream) ... (Changed to std::getline (thanks dalle.myopenid.com)) A: You could use the ignore feature of the stream: ifstream stream("filename.txt"); // Get and drop a line stream.ignore ( std::numeric_limits<std::streamsize>::max(), '\n' ); // Get and store a line for processing. // std::getline() has a third parameter that defaults to '\n' as the line // delimiter. std::string line; std::getline(stream,line); std::string word; stream >> word; // Reads one space separated word from the stream. A common mistake for reading a file: while( someStream.good() ) // !someStream.eof() { getline( someStream, line ); cout << line << endl; } This fails because: When reading the last line it does not read the EOF marker. So the stream is still good, but there is no more data left in the stream to read. So the loop is re-entered.
std::getline() then attempts to read another line from someStream and fails, but still writes a line to std::cout. Simple solution: while( someStream ) // Same as someStream.good() { getline( someStream, line ); if (someStream) // streams when used in a boolean context are converted to a type that is usable in that context. If the stream is in a good state the object returned can be used as true { // Only write to cout if the getline did not fail. cout << line << endl; } } Correct Solution: while(getline( someStream, line )) { // Loop only entered if reading a line from somestream is OK. // Note: getline() returns a stream reference. This is automatically cast // to boolean for the test. streams have a cast to bool operator that checks // good() cout << line << endl; } A: Call getline() once to throw away the first line. There are other methods, but the problem is this: you don't know how long the first line will be, do you? So you can't skip it till you know where that first '\n' is. If, however, you did know how long the first line was going to be, you could simply seek past it and then begin reading; this would be faster. So to do it the first way would look something like: #include <fstream> #include <iostream> using namespace std; int main () { // Open your file ifstream someStream( "textFile.txt" ); // Set up a place to store our data read from the file string line; // Read and throw away the first line simply by doing // nothing with it and reading again getline( someStream, line ); // Now begin your useful code while( !someStream.eof() ) { // This will just over write the first line read getline( someStream, line ); cout << line << endl; } return 0; } A: #include <fstream> #include <iostream> using namespace std; int main () { char buffer[256]; ifstream myfile ("test.txt"); // first line myfile.getline (buffer,100); // the rest while (!
myfile.eof() ) { myfile.getline (buffer,100); cout << buffer << endl; } return 0; } A: You can use the ignore function as follows: fstream dataFile("file.txt"); dataFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // skip the first line (everything up to and including the first '\n') A: #include <iostream> #include <fstream> #include <string> using namespace std; int main() { string textString; string anotherString; ifstream textFile; textFile.open("TextFile.txt"); if (textFile.is_open()) { while (getline(textFile, textString)){ anotherString = anotherString + textString; } } std::cout << anotherString; textFile.close(); return 0; } A: This code can read a file from your specified line, but you have to make the file in File Explorer beforehand. My file name is "temp". The code is given below: https://i.stack.imgur.com/OTrsj.png Hope this can help
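The getline-and-discard idiom above is language-independent; for comparison, a minimal sketch of the same "read from the second line" behaviour in Python (the filename argument is a placeholder):

```python
def read_from_second_line(path):
    """Open path, discard the first line, and return the remaining lines
    without trailing newlines; mirrors the getline-and-discard idiom from
    the C++ answers above."""
    with open(path) as f:
        next(f, None)  # read and throw away line 1 (no error on empty files)
        return [line.rstrip("\n") for line in f]
```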
{ "language": "en", "url": "https://stackoverflow.com/questions/164344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Ping Failure Without IPv6 Our user interface is communicating with another application on a different machine, often connecting using domain names. On our network, when IPv6 is installed, DNS name resolution works great, all machines can be pinged and contacted fine. When IPv6 is uninstalled, pinging the same DNS names returns an IP address on some distant subnet (24.28.193.9; local subnet is 192.168.1.1); our application is then unable to communicate. When IPv6 is reinstalled, the DNS resolution corrects itself. Even without IPv6 when ping is not working, I can still browse other machines using Windows Explorer by entering \\\\MACHINE_NAME\\. I'm not sure why the name resolution seems to work here. We are working in the Windows XP SP2 environment. The IPs of the machines can be pinged successfully. It is only the DNS names that do not resolve properly. Any thoughts as to why our software can't communicate without IPv6 installed? UPDATE: OK, I've done a little more research now. I looked for the address of our DNS server. All of our computers are pointing at the network gateway, which is a wireless router. The router has the same DNS server address listed when IPv6 is installed as it does when it isn't installed. The strangest thing is that I just discovered that it does not matter what DNS name I ping. All pings to DNS names return the same address: "24.28.193.9".
I tried flushing the DNS Resolver Cache and registering DNS on the target machine and the source machine. All to no avail. The only DNS name that I can ping is the name of the current machine. Any other suggestions? Thanks so much for your help. A: You've got multiple things going on here:
* DNS name resolution
* Windows name resolution
* IP-to-IP ICMP communication
You've written your question as if there's a problem with #3, but everything you describe points to the problem actually being with #1. If you take resolution out of the question, can you ping the correct IPs with or without IPv6 installed? It sounds like maybe you have an IPv6 name server installed that has correct information while the IPv4 name server is incorrect. Are you receiving name servers via DHCP or hard-coding them? What are the IPs of the name servers you are using when IPv6 is installed and when it isn't? A: I know this is a late answer, but in case someone else has the same problem, the key is the IP address, "24.28.193.9". A quick Google search reveals it seems to be related to your ISP completely breaking the DNS protocol by returning a fixed IP address for all non-existent domain names (the correct answer would be NXDOMAIN). Your network gateway is most probably just forwarding your queries to your ISP's name servers. Your systems are relying on the correct operation of the DNS protocol: they expect an NXDOMAIN answer before querying the name via other methods (most probably NetBIOS name resolution). Since the DNS server is completely broken and returning an incorrect answer, the correct address is never looked up. The reason installing or uninstalling IPv6 changes the situation is most probably that something related to it changes the name resolution order (to look up via other methods before trying DNS). So, a workaround would be to change the name resolution order yourself. 
The real fix would be to either change to a better ISP (one which does not break established protocols) or run your own DNS server (which is what I started doing on all systems I administer ever since VeriSign pulled a similar stunt; theirs was even worse in that changing ISPs made no difference at all). References:
* Warning: Road Runner DNS says nonexistent domains exist
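The broken-resolver behaviour described in this answer can also be detected programmatically. A minimal sketch (Python; the function and probe names are illustrative, not from the thread): resolve a random name that cannot exist, and treat any answer at all as a sign the resolver is rewriting NXDOMAIN.

```python
import socket
import uuid

def probe_resolver(resolve=socket.gethostbyname):
    """Probe a resolver with a random name that cannot exist.

    A standards-compliant resolver raises socket.gaierror (NXDOMAIN).
    A resolver that rewrites NXDOMAIN -- as described above, where every
    bad name came back as 24.28.193.9 -- returns an address instead.
    """
    bogus = uuid.uuid4().hex + ".example.invalid"  # .invalid is reserved
    try:
        addr = resolve(bogus)
    except socket.gaierror:
        return None    # healthy: the nonexistent name stayed nonexistent
    return addr        # suspicious: a wildcard answer for a nonsense name
```

Calling probe_resolver() with no argument uses the system resolver; a non-None result suggests the ISP wildcard behaviour described above, and the returned address is the one hijacking your lookups.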
{ "language": "en", "url": "https://stackoverflow.com/questions/164350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Research on TDD I know some research on TDD has been done at North Carolina State University. They have published a paper called 'An Initial Investigation of Test Driven Development in Industry'. Other publications by NCSU can be found here. Can anybody point me to other good publications on this topic? A: On the Effectiveness of the Test-First Approach to Programming, by Hakan Erdogmus, Maurizio Morisio, and Marco Torchiano. Despite the name it covers TDD: Abstract: Test-Driven Development (TDD) is based on formalizing a piece of functionality as a test, implementing the functionality such that the test passes, and iterating the process. This paper describes a controlled experiment for evaluating an important aspect of TDD: In TDD, programmers write functional tests before the corresponding implementation code. The experiment was conducted with undergraduate students. While the experiment group applied a test-first strategy, the control group applied a more conventional development technique, writing tests after the implementation. Both groups followed an incremental process, adding new features one at a time and regression testing them. We found that test-first students on average wrote more tests and, in turn, students who wrote more tests tended to be more productive. We also observed that the minimum quality increased linearly with the number of programmer tests, independent of the development strategy employed. A: The ACM Digital Library has quite a few papers on TDD. Simply search for Test Driven Development. The top results from a Google search for test-driven development academic research: Test-Driven Development: Concepts, Taxonomy, and Future Direction in the IEEE Computer Society. Software Architecture Improvement through TDD at the ACM A: As a TDD practitioner myself, I have launched a new site WeDoTDD.com that lists just that: companies practicing it, and the stories behind how they practice Test Driven Development!
{ "language": "en", "url": "https://stackoverflow.com/questions/164354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Can I use the STL if I cannot afford the slow performance when exceptions are thrown? For example, I'm writing a multi-threaded time-critical application that processes and streams audio in real-time. Interruptions in the audio are totally unacceptable. Does this mean I cannot use the STL because of the potential slowdown when an exception is thrown? A: If an STL container throws, you probably have a much bigger problem than the slowdown :) A: It's not clearly written in the previous answers, so: Exceptions happen in C++. Using the STL or not won't remove the RAII code that frees the resources of the objects you allocated. For example: void doSomething() { MyString str ; doSomethingElse() ; } In the code above, the compiler will generate the code to free the MyString resources (i.e. will call the MyString destructor) no matter what happens in the meantime, including if an exception is thrown by doSomethingElse or if you do a "return" before the end of the function scope. If you have a problem with that, then either you should revise your mindset, or try C. Exceptions are supposed to be exceptional. Usually, an exception costs you performance only when it actually occurs. But then, an exception should only be thrown:
* when you have an exceptional event to handle (i.e. some kind of error), or
* in very exceptional cases (i.e. a "massive return" from multiple function calls in the stack, like when doing a complicated search, or unwinding the stack prior to a graceful thread interruption).
The keyword here is "exceptional", which is good because we are discussing "exception" (see the pattern?). In your case, if an exception is thrown, chances are good that something so bad happened your program would have crashed anyway without exceptions. In this case, your problem is not dealing with the performance hit. 
It is to deal with graceful handling of the error or, at worst, graceful termination of your program (including a "Sorry" message box, saving unsaved data into a temporary file for later recovery, etc.). This means (except in very exceptional cases), don't use exceptions as "return data". Throw exceptions when something very bad happens. Catch an exception only if you know what to do with it. Avoid try/catching (unless you know how to handle the exception). What about the STL? Now that we know that:
* you still want to use C++, and
* your aim is not to throw thousands of exceptions every second just for the fun of it,
we should discuss the STL: the STL will usually (where possible) verify whether you're doing something wrong with it, and if you are, it will throw an exception. Still, in C++, you usually won't pay for something you won't use. An example of that is access to a vector's data. If you know you won't go out of bounds, then you should use operator []. If you won't verify the bounds yourself, then you should use the method at(). Example A: typedef std::vector<std::string> Vector ; void outputAllData(const Vector & aString) { for(Vector::size_type i = 0, iMax = aString.size() ; i != iMax ; ++i) { std::cout << i << " : " << aString[i] << std::endl ; } } Example B: typedef std::vector<std::string> Vector ; void outputSomeData(const Vector & aString, Vector::size_type iIndex) { std::cout << iIndex << " : " << aString.at(iIndex) << std::endl ; } Example A "trusts" the programmer, and no time will be lost in verification (and thus, if there is an error, no exception is thrown at that point; the crash will usually happen later, which won't help debugging and will let more data be corrupted). Example B asks the vector to verify that the index is correct, and to throw an exception if it is not. The choice is yours. A: Do not be afraid of exceptions with regard to performance. 
In the old days of C++, a build with exceptions enabled could be a lot slower on some compilers. These days it really does not matter whether you build with or without exception handling. In general the STL does not throw exceptions unless you run out of memory, so that should not be a problem for your type of application either. (Now, do not use a language with GC...) A: It's worth noting a couple of points:
* Your application is multi-threaded. If one thread (maybe a GUI one) is slowed down by an exception, it should not affect the performance of the real-time threads.
* Exceptions are for exceptional circumstances. If an exception is thrown in your real-time thread, the chances are it will mean that you couldn't continue playing audio anyway. If you find for whatever reason that you are continually processing exceptions in those threads, redesign to avoid the exceptions in the first place.
I'd recommend you accept the STL with its exceptions (unless the STL itself proves too slow - but remember: measure first, optimise second), and also adopt exception handling for your own 'exceptional situations' (audio hardware failure, whatever) in your application. A: Generally, the only exception that STL containers will throw by themselves is std::bad_alloc if new fails. The only other times are when user code (for example constructors, assignments, copy constructors) throws. If your user code never throws, then you only have to guard against new throwing, which you would most likely have had to do anyway. One other thing that can throw: the at() functions can throw std::out_of_range if you access out of bounds, which is a serious program error anyway. Secondly, exceptions aren't always slow. If an exception occurs in your audio processing, it's probably because of a serious error that you will need to handle anyway. 
The error handling code is probably going to be significantly more expensive than the exception handling code that transports the exception to the catch site. A: I'm struggling to think which portions of the STL specify that they can raise an exception. In my experience most error handling is done by return codes or as a prerequisite of the STL's use. An object passed to the STL could definitely raise an exception, e.g. in its copy constructor, but that would be an issue regardless of the use of the STL. Others have mentioned functions such as std::vector::at(), but you can usually perform a check or use an alternate method to ensure no exception can be thrown. Certainly a particular implementation of the STL can perform "checks" on your use of the STL, generally in debug builds; I think these will raise an assertion only, but perhaps some will throw an exception. If there is no try/catch present, I believe no or minimal performance hit will be incurred unless an exception is raised by your own classes. On Visual Studio you can disable the use of C++ exceptions entirely: see Project Properties -> C/C++ -> Code Generation -> Enable C++ Exceptions. I presume something similar is available on most C++ platforms. A: You talk as if exceptions are inevitable. Simply don't do anything that could cause an exception -- fix your bugs, verify your inputs.
{ "language": "en", "url": "https://stackoverflow.com/questions/164356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I create an email-sending service? I've been kicking around this idea for a while and would like to read your thoughts. I'd like to create a .NET service to send and track email messages. My rough ideas:
* Within various applications, serialize instances of .NET email (System.Net.Mail.MailMessage) objects and put them into a database or file system queue
* The mail service/process polls the queue and sends the emails
* Enforce subscribe/unsubscribe lists/rules
* Track opens, bounces, out-of-office auto-replies, etc.
* Report statuses back to the original applications
Does anyone have advice for how I should get started or what issues I may have? Is there off-the-shelf software/service I should look at? A: You will need to learn everything you can about the SMTP protocol, even if you are using higher level tools that do most of the work for you. In my own experience with processing outbound and inbound emails with .NET, I didn't really "get it" until I learned to telnet to port 25 of an SMTP server and send an email by issuing the commands myself. If you are sending lots of emails out and you need to monitor NDRs (non-delivery reports), you will have to set the SMTP envelope sender address to your own server and parse all of those emails when they come in to figure out what happened. The System.Net email classes don't allow you to set the MAIL FROM in the conversation with the MTA without also setting the From address in the email header to the same thing, so you will need to use a 3rd party library like aspNetEmail if you need those addresses to be different. ListNanny is another tool that is helpful for parsing NDRs, among other functions. I'm not sure about serializing the MailMessage objects. I think it would be simpler to just store the separate data elements themselves and then instantiate MailMessage objects when you need them. A: You may find some of the answers to Handling undelivered emails in webapp useful. 
A: I suggest you seriously consider outsourcing your email services. I've looked into building a service like this for my webapps as well, but there are many things you will have to keep on top of that will take your time away from your primary webapp (unless of course, you are making this service your primary offering). Third party services take the headache and ongoing maintenance out of the picture. If you go this route, you will need to regularly monitor your server reputation, maintain whitelist relationships with the big ISPs, monitor and handle bounces, set up correct message throttling per ISP, implement features such as DKIM, VERP, SPF, etc. Assuming this is for opt-in mailings, your primary objective will be to make sure that every message sent will land in a user's mailbox, not their spam box. Be prepared for it to take more time than you realize. For transactional email (account signup confirmations, account status reports, billing, etc.), take a look at this SO post for providers with webapp friendly APIs. For marketing email and mass opt-in mailings, take a look at this SO post. I just added MailChimp to that list.
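Since the first answer notes that System.Net.Mail ties the SMTP envelope sender (MAIL FROM) to the From: header, here is a language-neutral sketch of keeping the two separate for VERP-style bounce tracking. This uses the Python standard library purely for illustration; all addresses and the host name are hypothetical.

```python
from email.message import EmailMessage

def build_verp_message(header_from, envelope_from, recipient, subject, body):
    """Sketch of VERP-style bounce tracking.

    The visible From: header and the SMTP envelope sender (MAIL FROM) are
    kept separate: bounces go back to a per-recipient envelope address that
    can be parsed later to figure out which recipient failed -- exactly the
    distinction the answer says System.Net.Mail hides from you.
    """
    msg = EmailMessage()
    msg["From"] = header_from      # what the recipient sees
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg, envelope_from      # what the MTA sees as MAIL FROM

# Actually sending would pass the envelope address explicitly, e.g.:
#   import smtplib
#   with smtplib.SMTP("mail.example.com") as smtp:   # hypothetical host
#       smtp.send_message(msg, from_addr=envelope_from)
```

When a bounce later arrives addressed to something like bounce+42@example.com, the "+42" part identifies the original recipient without parsing the NDR body at all.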
{ "language": "en", "url": "https://stackoverflow.com/questions/164363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Moq'ing an interface While I'm googling/reading for this answer, I thought I would also ask here. I have a class that is a wrapper for an SDK. The class accepts an ILoader object and uses the ILoader object to create an ISBAObject, which is cast into an ISmallBusinessInstance object. I am simply trying to mock this behavior using Moq. [TestMethod] public void Test_Customer_GetByID() { var mock = new Mock<ILoader>(); var sbainst = new Mock<ISbaObjects>(); mock.Expect(x => x.GetSbaObjects("")).Returns(sbainst); } The compiler error reads: Error 1 The best overloaded method match for 'Moq.Language.IReturns.Returns(Microsoft.BusinessSolutions.SmallBusinessAccounting.Loader.ISbaObjects)' has some invalid arguments What is going on here? I expected the mock of ISbaObjects to be able to be returned without a problem. A: You need to use sbainst.Object, as sbainst isn't an instance of ISbaObjects - it's just the mock part. A: Updated, correct code: [TestMethod] public void Test_Customer_GetByID() { var mock = new Mock<ILoader>(); var sbainst = new Mock<ISbaObjects>(); mock.Expect(x => x.GetSbaObjects("")).Returns(sbainst.Object); }
{ "language": "en", "url": "https://stackoverflow.com/questions/164369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Structured exception handling with a multi-threaded server This article gives a good overview on why structured exception handling is bad. Is there a way to get the robustness of stopping your server from crashing, while getting past the problems mentioned in the article? I have server software that runs about 400 connected users concurrently. But if there is a crash, all 400 users are affected. We added structured exception handling and enjoyed the results for a while, but eventually had to remove it because some crashes caused the whole server to hang (which is worse than just having it crash and restart itself). So we have this:
* With SEH: only 1 user of the 400 gets a problem for most crashes.
* Without SEH: if any user gets a crash, all 400 are affected.
* But sometimes with SEH: the server hangs; all 400 are affected, as are future users that try to connect.
A: Using SEH because your program crashes randomly is a bad idea. It's not magic pixie dust that you can sprinkle on your program to make it stop crashing. Tracking down and fixing the bugs that cause the crashes is the right solution. Using SEH when you really need to handle a structured exception is fine. Larry Osterman made a followup post explaining what situations require SEH: memory mapped files, RPC, and security boundary transitions. A: Break your program up into worker processes and a single server process. The server process will handle initial requests and then hand them off to the worker processes. If a worker process crashes, only the users on that worker are affected. Don't use SEH for general exception handling - as you have found out, it can and will leave you wide open to deadlocks, and you can still crash anyway. A: Fix the bugs in your program? ;) Personally I'd keep the SEH handlers in, have them dump out a call stack of where the access violation or whatever happened, and fix the problems. 
The 'sometimes the server hangs' problem is probably due to deadlocks caused by the thread that hit the SEH exception still holding a lock, and so it is unlikely to be related to the fact that you're using SEH itself.
{ "language": "en", "url": "https://stackoverflow.com/questions/164372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Programmatically inspect .class files I'm working on a project where we're doing a lot of remote object transfer between a Java service and clients written in other various languages. Given our current constraints I've decided to see what it would take to generate code based on an existing Java class. Basically I need to take a .class file (or a collection of them) parse the bytecode to determine all of the data members and perhaps getters/setters and then write something that can output code in a different language to create a class with the same structure. I'm not looking for standard decompilers such as JAD. I need to be able to take a .class file and create an object model of its data members and methods. Is this possible at all? A: I've used BCEL and find it really quite awkward. ASM is much better. It very extensively uses visitors (which can be a little confusing) and does not create an object model. Not creating an object model turns out to be a bonus, as any model you do want to create is unlikely to look like a literal interpretation of all the data. A: I have used BCEL in the past and it was pretty easy to use. It was a few years ago so there may be something better now. Apache Jakarta BCEL A: From your description, it sounds like simple reflection would suffice. You can discover all of the static structure of the class, as well as accessing the fields of a particular instance. I would only move on to BCEL if you are trying to translate method instructions. (And if that's what you're trying to automate, good luck!) A: JAD is a java decompiler that doesn't allow programmatic access. It isn't readily available anymore, and probably won't work for newer projects with Java7 bytecodes. A: I'm shocked that no one has mentioned ASM yet. It's the best bytecode library your money can buy. Well, ok it's free. A: I think javassist might help you too. 
http://www.jboss.org/javassist/ I have never had the need to use it, but if you give it a try, would you let us know your comments about it? Although I think it is more for bytecode manipulation than .class inspection.
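As the reflection answer above suggests, when the .class files can be loaded, core reflection already yields the object model the question asks for, with no BCEL/ASM needed. A minimal sketch (the Person class is a hypothetical stand-in for a class loaded from the SDK's jar):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Collects data members and getter/setter names from a loadable class --
// the structural model a cross-language code generator would need.
public class ClassInspector {

    static class Person {               // hypothetical class to inspect
        private String name;
        private int age;
        public String getName() { return name; }
        public void setName(String n) { name = n; }
        public int getAge() { return age; }
    }

    public static List<String> describe(Class<?> cls) {
        List<String> members = new ArrayList<>();
        for (Field f : cls.getDeclaredFields()) {
            members.add("field " + f.getType().getSimpleName() + " " + f.getName());
        }
        for (Method m : cls.getDeclaredMethods()) {
            String name = m.getName();
            // Keep only getter/setter-style accessors, per the question.
            if (name.startsWith("get") || name.startsWith("set") || name.startsWith("is")) {
                members.add("accessor " + name);
            }
        }
        return members;
    }

    public static void main(String[] args) {
        for (String member : describe(Person.class)) {
            System.out.println(member);
        }
    }
}
```

From this model a generator can emit an equivalent class in another language; ASM or BCEL only become necessary if you must analyze bytecode without loading the classes, or translate method bodies.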
{ "language": "en", "url": "https://stackoverflow.com/questions/164378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Flash Banners Conflicting with Pop-Up Blockers? We've been creating banners using the getURL linking method (in a blank window). For many people, it works just fine. You click the banner and are taken to our site. For others (me included), clicking the flash object triggers a pop-up warning in Firefox (both 2 and 3, default settings). The weird thing is that it doesn't happen for everyone. It happens on my main machine (Vista 64, FF3) but not on my secondary machine (XP 64, FF3). I have other people running Vista/FF3 just like me, and it's working fine for them...but not me. An example is the 300x250 banner on the left side of this page: http://www.jguitar.com/ We're pretty stumped and have no idea why this is happening. Any feedback would be greatly appreciated. A: In my experience you need to put your link inside an onRelease handler (or MouseEvent.CLICK in AS3) for it to not get blocked. If you set it to onPress or anything else, it will be blocked. This isn't foolproof; on some setups it will get blocked anyway, but often that's due to a tougher setting on the blocker or something like that. A: Use this code, with allowscriptaccess='always' and wmode='transparent' or 'opaque' in the HTML code on the Flash element. private function click(event : MouseEvent) : void { getURL(LoaderInfo(root.loaderInfo).parameters.clickTag); } private function getURL(url : String, window : String = "_blank") : void { var browser : String = ExternalInterface.call("function getBrowser(){return navigator.userAgent}") as String; if (browser.indexOf("Firefox") != -1 || browser.indexOf("MSIE 7.0") != -1) { ExternalInterface.call('window.open("' + url + '","' + window + '")'); } else { navigateToURL(new URLRequest(url), window); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/164382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Searching For String Literals In the quest for localization I need to find all the string literals littered amongst our source code. I was looking for a way to script this into a post-modification source repository check (i.e. after someone checks something in, have a box set up to run this check). I'll probably use NAnt and CruiseControl or something to handle the management of the CVS (well, StarTeam in my case :( ). But do you know of any scriptable (or command line) utility to accurately cycle through source code looking for string literals? I realize I could do a simple string lookup based on regular expressions but want a little more bang for my buck (maybe analyze the string or put it into categories), because a lot of times the string may not necessarily require translation. Any ideas? A: Visual Studio 2010 and earlier:
* Find In Files (CTRL+SHIFT+F)
* Use: Regular Expressions
* Find: :q (quoted string)
* Find All
The Find Results window will now contain a report of all files, with line numbers and the line itself with the quoted string. For Visual Studio 2012 and later, search for ((\".+?\")|('.+?')) (reference, hat-tip to @CincauHangus) A: It uses the compiled binary instead of source, but Sysinternals' Strings app might be useful. A: To find all Text="textonly" instances use the following regular expression when searching: (Text=)(")([a-z]) This helps find Text="*" while excluding text that's already been converted to use resource files: Text="<%$ Resources:LocalizedText, KeyNameFromResourceFile%>" Also (>)([a-z]) can be used to find literals between tags like so: <h1>HeaderText</h1> A:
* Find In Files (CTRL+SHIFT+F)
* Find options -> check Use Regular Expressions
For specific text within the literal:
* Find what: "+.*(MYSPECIFICTEXT)+.*"+
For all literals:
* Find what: "+.*"+
Then:
* Find All
A: There's a C# parser on CodePlex that you can probably use. A: Hi, this is a regex for searching literals that I use to find text for translation. 
It also handles empty spaces and different quote characters. Regex: ([",`,'])([\w,\s]*)([",`,']) Search string: var test='This is a test';
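For the scripted repository check the question describes, a first pass can also be done outside the IDE. A rough sketch (Python chosen arbitrarily here; the pattern mirrors the non-greedy ((\".+?\")|('.+?')) search above, with escape handling added):

```python
import re

# Double- or single-quoted literals, allowing backslash escapes inside,
# so "she said \"hi\"" is matched as one literal rather than two.
LITERAL_RE = re.compile(r'"(?:\\.|[^"\\])*"|\'(?:\\.|[^\'\\])*\'')

def find_string_literals(source):
    """Return every quoted literal in a source string.

    Note this also matches quotes inside comments; filtering those out
    needs a real parser, as the C# parser answer above implies.
    """
    return LITERAL_RE.findall(source)
```

In a post-check-in hook, each match could then be categorized (e.g. SQL fragments, log messages, user-facing text) to decide which literals actually need translation.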
{ "language": "en", "url": "https://stackoverflow.com/questions/164393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Is it possible to add data members dynamically in PHP? I'm wondering if it's possible to add new class data members at run-time in PHP? A: It is. You can add public members at run time with no additional code, and can affect protected/private members using the magic overloading methods __get() / __set(). See here for more details. A: Yes. $prop = 'newname'; $obj->$prop = 42; will do the same thing as: $obj->newname = 42; Either one will add "newname" as a property in $obj if it does not yet exist.
{ "language": "en", "url": "https://stackoverflow.com/questions/164395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: JavaScript: How do I print a message to the error console? How can I print a message to the error console, preferably including a variable? For example, something like: print('x=%d', x); A: Exceptions are logged into the JavaScript console. You can use that if you want to keep Firebug disabled. function log(msg) { setTimeout(function() { throw new Error(msg); }, 0); } Usage: log('Hello World'); log('another message'); A: If you are using Firebug and need to support IE, Safari or Opera as well, Firebug Lite adds console.log() support to these browsers. A: The WebKit Web Inspector also supports Firebug's console API (just a minor addition to Dan's answer). A: One good way to do this that works cross-browser is outlined in Debugging JavaScript: Throw Away Your Alerts!. A: A note about 'throw()' mentioned above: it seems that it stops execution of the page completely (I checked in IE8), so it's not very useful for logging "ongoing processes" (like tracking a certain variable...). My suggestion is perhaps to add a textarea element somewhere in your document and to change (or append to) its value (which would change its text) for logging information whenever needed... As always, Internet Explorer is the big elephant in rollerskates that stops you just simply using console.log(). jQuery's log can be adapted quite easily, but it is a pain having to add it everywhere. One solution if you're using jQuery is to put it into your jQuery file at the end, minified first:

function log() {
    if (arguments.length > 0) {
        // Join for graceful degradation
        var args = (arguments.length > 1) ? Array.prototype.join.call(arguments, " ") : arguments[0];
        // This is the standard; Firebug and newer WebKit browsers support this.
        try {
            console.log(args);
            return true;
        } catch(e) {
            // Newer Opera browsers support posting errors to their consoles.
            try {
                opera.postError(args);
                return true;
            } catch(e) {
            }
        }
        // Catch all; a good old alert box.
        alert(args);
        return false;
    }
}

A: Install Firebug and then you can use console.log(...) and console.debug(...), etc. (see the documentation for more). A: Visit https://developer.chrome.com/devtools/docs/console-api for a complete console API reference. console.error(object [, object, ...]): in this case, object would be your error string. A:

console.error(message); // Outputs an error message to the Web Console
console.log(message); // Outputs a message to the Web Console
console.warn(message); // Outputs a warning message to the Web Console
console.info(message); // Outputs an informational message to the Web Console. In some browsers it shows a small "i" in front of the message.

You also can add CSS: console.log('%c My message here', "background: blue; color: white; padding-left:10px;"); More info can be found here: https://developer.mozilla.org/en-US/docs/Web/API/console A: function foo() { function bar() { console.trace("Tracing is Done here"); } bar(); } foo();

console.log(console); // to print the console object
console.clear(); // to clear the console
console.log('console.log'); // to print a log message
console.info('console.info'); // to print an informational message
console.debug('console.debug'); // to print a debug message
console.warn('console.warn'); // to print a warning
console.error('console.error'); // to print an error
console.table(["car", "fruits", "color"]); // to print data in a table structure
console.assert(false, 'console.assert'); // to print an error when the assertion fails
console.dir({"name":"test"}); // to print an object
console.dirxml({"name":"test"}); // to print an object in XML format

To print an error: console.error('x=%d', x);

console.log("This is the outer level"); console.group(); console.log("Level 2"); console.group(); console.log("Level 3"); console.warn("More of level 3"); console.groupEnd(); console.log("Back to level 2"); console.groupEnd(); console.log("Back to the outer level");

A: Here is a solution to the literal question of how to print a message to the browser's error console, not the debugger
console. (There might be good reasons to bypass the debugger.) As I noted in comments about the suggestion to throw an error to get a message in the error console, one problem is that this will interrupt the thread of execution. If you don't want to interrupt the thread, you can throw the error in a separate thread, one created using setTimeout. Hence my solution (which turns out to be an elaboration of the one by Ivo Danihelka): var startTime = (new Date()).getTime(); function logError(msg) { var milliseconds = (new Date()).getTime() - startTime; window.setTimeout(function () { throw( new Error(milliseconds + ': ' + msg, "") ); }); } logError('testing'); I include the time in milliseconds since the start time because the timeout could skew the order in which you might expect to see the messages. The second argument to the Error method is for the filename, which is an empty string here to prevent output of the useless filename and line number. It is possible to get the caller function but not in a simple browser independent way. It would be nice if we could display the message with a warning or message icon instead of the error icon, but I can't find a way to do that. Another problem with using throw is that it could be caught and thrown away by an enclosing try-catch, and putting the throw in a separate thread avoids that obstacle as well. However, there is yet another way the error could be caught, which is if the window.onerror handler is replaced with one that does something different. Can't help you there. A: If you use Safari, you can write console.log("your message here"); and it appears right on the console of the browser. A: To actually answer the question: console.error('An error occurred!'); console.error('An error occurred! ', 'My variable = ', myVar); console.error('An error occurred! ' + 'My variable = ' + myVar); Instead of error, you can also use info, log or warn. A: console.log("your message here"); working for me.. i'm searching for this.. 
i used Firefox. here is my Script. $('document').ready(function() { console.log('all images are loaded'); }); works in Firefox and Chrome. A: The simplest way to do this is: console.warn("Text to print on console"); A: To answer your question you can use ES6 features, var var=10; console.log(`var=${var}`); A: This does not print to the Console, but will open you an alert Popup with your message which might be useful for some debugging: just do: alert("message"); A: With es6 syntax you can use: console.log(`x = ${x}`);
{ "language": "en", "url": "https://stackoverflow.com/questions/164397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "458" }
Q: Sql 2005 Express edition slow connections I'm running SqlServer 2005 express edition on my laptop for development purposes. It seems that when I open a connection to the database, the setup time is REALLY slow. It can take up to 10 seconds to get a connection. I usually have multiple connections open at the same time (Profiler, Development environment, Query Analyser, etc.) I have a hunch that the slow times are related to the fact that I have multiple connections open. Is there a governor in Express edition that throttles connection times when multiple connections are made to an instance? Update: My workstation is not on active directory, and SQL is running mixed mode security. I will try the login with sql authentication. I am not using user instances. Update2: I set up a trace to try and figure out what is going on. When the connection to the database is opened, the following command is executed: master.dbo.sp_MShasdbaccess This command takes 6 seconds to execute. A: I figured it out. The problem was I had multiple databases with AutoClose set to true. I shut it off in all my databases and the problem went away. See this article for more info. A: Are you sure the connection is the bottleneck? Is it your conn.Open() line that is taking 10 seconds? A: AFAIK there's no governor anymore in SQL Express. Now, are you on a Windows Active Directory Domain? If so, there might be an issue with your DNS or something that means the connection to the domain controller to validate your logon to the server instance is taking the time. I suggest you experiment with switching the server over to use SQL Security, give the SA account a password, and try logging in as SA and see if that makes a difference.
{ "language": "en", "url": "https://stackoverflow.com/questions/164400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What JavaScript frameworks conflict with each other? There are times when I want to use mootools for certain things and Prototype & script.aculo.us for others but within the same site. I've even considered adding others, but was concerned about conflicts. Anyone have experience, or am I just trying to make things too complicated for myself? A: If you really, really want to do this, then you will be able to without too many problems - the main libraries are designed to behave well inside their own namespaces, with a couple of notable exceptions - from Using JQuery with Other Frameworks: The jQuery library, and virtually all of its plugins are constrained within the jQuery namespace. As a general rule, "global" objects are stored inside the jQuery namespace as well, so you shouldn't get a clash between jQuery and any other library (like Prototype, MooTools, or YUI). That said, there is one caveat: By default, jQuery uses "$" as a shortcut for "jQuery", which you can over-ride. So, yes, you can do it, but you'd likely be creating maintenance headaches further down the line for yourself - subtle differences between framework functions may be obvious to you today, but come back in 6 months and it can be a whole other story! So I would recommend keeping it as simple as you can, and having as few different frameworks (preferably 1!) as you can in your codebase. A: AFAIK, all the popular frameworks are designed to be combined with other frameworks. I don't think combining them is that much of a problem. I would however discourage combining them, purely because of bandwidth concerns. A slow site experience is less forgivable than a more complicated development experience. A: A recent question: jQuery & Prototype Conflict. The Prototype.js library used to be very intrusive and conflicted with many other libraries / code. However, to my knowledge, they recently gave up some really hard-core stuff, such as replacing the Element object.
A: You are better off sticking with a single framework per application. Otherwise your client will spend too much time/bandwidth downloading the javascripts. That being said, Prototype and JQuery can work together. Information is on the JQuery web site. A: My suggestion is to learn one (or more!) framework(s) very well so that you will be able to replicate the features you need without adding the overhead of multiple frameworks. Remember: the more code you push to the client, the slower everything becomes. A: From my experience I can say that some JavaScript libraries conflict with the browser rather than with each other. What I mean is the following: sometimes code written against some library will not co-exist well with code written against the browser DOM. A: My Framework Scanner tool is useful for finding JS/CSS conflicts between libraries: http://mankz.com/code/GlobalCheck.htm
{ "language": "en", "url": "https://stackoverflow.com/questions/164403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I "inverse match" with regex? I'm processing a file, line-by-line, and I'd like to do an inverse match. For instance, I want to match lines where there is a string of six letters, but only if these six letters are not 'Andrea'. How should I do that? I'm using RegexBuddy, but still having trouble. A: (?!Andrea).{6} Assuming your regexp engine supports negative lookaheads... ...or maybe you'd prefer to use [A-Za-z]{6} in place of .{6} Note that lookaheads and lookbehinds are generally not the right way to "inverse" a regular expression match. Regexps aren't really set up for doing negative matching; they leave that to whatever language you are using them with. A: Negative lookahead assertion (?!Andrea) This is not exactly an inverted match, but it's the best you can directly do with regex. Not all platforms support them though. A: If you want to do this in RegexBuddy, there are two ways to get a list of all lines not matching a regex. On the toolbar on the Test panel, set the test scope to "Line by line". When you do that, an item List All Lines without Matches will appear under the List All button on the same toolbar. (If you don't see the List All button, click the Match button in the main toolbar.) On the GREP panel, you can turn on the "line-based" and the "invert results" checkboxes to get a list of non-matching lines in the files you're grepping through. A: For Python/Java, ^(.(?!(some text)))*$ http://www.lisnichenko.com/articles/javapython-inverse-regex.html A: (?! is useful in practice. Although strictly speaking, looking ahead is not a regular expression as defined mathematically. You can write an inverted regular expression manually. Here is a program to calculate the result automatically. Its output is machine-generated, and usually much more complex than a hand-written one. But the result works.
A: I just came up with this method, which may be resource-intensive but works: you can replace all characters which match the regex with an empty string. This is a one-liner: notMatched = re.sub(regex, "", string) I used this because I was forced to use a very complex regex and couldn't figure out how to invert every part of it within a reasonable amount of time. This will only return you the string result, not any match objects! A: In PCRE and similar variants, you can actually create a regex that matches any line not containing a value: ^(?:(?!Andrea).)*$ This is called a tempered greedy token. The downside is that it doesn't perform well. A: If you have the possibility to do two regex matches for the inverse and join them together, you can use two capturing groups to first capture everything before your regex ^((?!yourRegex).)* and then capture everything behind your regex (?<=yourRegex).* This works for most regexes. One problem I discovered was when I had a quantifier like {2,4} at the end. Then you gotta get creative. A: The capabilities and syntax of the regex implementation matter. You could use look-ahead. Using Python as an example, import re not_andrea = re.compile('(?!Andrea)\w{6}', re.IGNORECASE) To break that down: (?!Andrea) means 'match if the next 6 characters are not "Andrea"'; if so then \w means a "word character" - alphanumeric characters. This is equivalent to the class [a-zA-Z0-9_] \w{6} means exactly six word characters. re.IGNORECASE means that you will exclude "Andrea", "andrea", "ANDREA" ... Another way is to use your program logic - use all lines not matching Andrea and put them through a second regex to check for six characters. Or first check for at least six word characters, and then check that it does not match Andrea. A: In Perl you can do: process($line) if ($line !~ /Andrea/);
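A quick way to sanity-check the two main techniques above is to run them against sample strings. This is a minimal Python sketch (the patterns come from the answers; the sample strings are made up for illustration):

```python
import re

# Six word characters that do not start with the literal string "Andrea"
# (the negative-lookahead pattern from the answers above).
not_andrea = re.compile(r"(?!Andrea)\w{6}")

assert not_andrea.search("Andrea") is None      # rejected: it is exactly Andrea
assert not_andrea.search("Sandra") is not None  # six other letters match

# Tempered greedy token: matches only lines that contain "Andrea" nowhere.
line_without = re.compile(r"^(?:(?!Andrea).)*$")

assert line_without.match("totally fine line") is not None
assert line_without.match("reviewed by Andrea") is None
```

Note the caveat from the first answer still applies: the lookahead version happily matches six-letter runs inside longer words, so anchor it with \b word boundaries if you need whole words only.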
{ "language": "en", "url": "https://stackoverflow.com/questions/164414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "152" }
Q: Determining if enum value is in list (C#) I am building a fun little app to determine if I should bike to work. I would like to test to see if it is either Raining or Thunderstorm(ing). public enum WeatherType : byte { Sunny = 0, Cloudy = 1, Thunderstorm = 2, Raining = 4, Snowing = 8, MostlyCloudy = 16 } I was thinking I could do something like: WeatherType _badWeatherTypes = WeatherType.Thunderstorm | WeatherType.Raining; if(currentWeather.Type == _badWeatherTypes) { return false;//don't bike } but this doesn't work because _badWeatherTypes is a combination of both types. I would like to keep them separated out because this is supposed to be a learning experience and having it separate may be useful in other situations (IE, Invoice not paid reasons etc...). I would also rather not do: (this would remove the ability to be configured for multiple people) if(WeatherType.Thunderstorm) { return false; //don't bike } etc... A: I'm not sure that it should be a flag - I think that you should have a range input for: *Temperature *How much it's raining *Wind strength *any other input you fancy (e.g. thunderstorm) you can then use an algorithm to determine if the conditions are sufficiently good. I think you should also have an input for how likely the weather is to remain the same for cycling home. The criteria may be different - you can shower and change more easily when you get home. If you really want to make it interesting, collect the input data from a weather service api, and evaluate the decision each day - Yes, I should have cycled, or no, it was a mistake. Then perhaps you can have the app learn to make better decisions. Next step is to "socialize" your decision, and see whether other people near you are making the same decisions. A: Your current code will say whether it's exactly "raining and thundery".
To find out whether it's "raining and thundery and possibly something else" you need: if ((currentWeather.Type & _badWeatherTypes) == _badWeatherTypes) To find out whether it's "raining or thundery, and possibly something else" you need: if ((currentWeather.Type & _badWeatherTypes) != 0) EDIT (for completeness): It would be good to use the FlagsAttribute, i.e. decorate the type with [Flags]. This is not necessary for the sake of this bitwise logic, but affects how ToString() behaves. The C# compiler ignores this attribute (at least at the moment; the C# 3.0 spec doesn't mention it) but it's generally a good idea for enums which are effectively flags, and it documents the intended use of the type. At the same time, the convention is that when you use flags, you pluralise the enum name - so you'd change it to WeatherTypes (because any actual value is effectively 0 or more weather types). It would also be worth thinking about what "Sunny" really means. It's currently got a value of 0, which means it's the absence of everything else; you couldn't have it sunny and raining at the same time (which is physically possible, of course). Please don't write code to prohibit rainbows! ;) On the other hand, if in your real use case you genuinely want a value which means "the absence of all other values" then you're fine. A: use the FlagsAttribute. That will allow you to use the enum as a bit mask. A: You need to use the [Flags] attribute (check here) on your enum; then you can use bitwise AND to check for individual matches. A: You should be using the Flags attribute on your enum. Beyond that, you also need to test to see if a particular flag is set by: ((currentWeather.Type & WeatherType.Thunderstorm) == WeatherType.Thunderstorm) Note the parentheses around the & expression: in C#, == binds more tightly than &, so without them the test doesn't compile as intended. This will test if currentWeather.Type has the WeatherType.Thunderstorm flag set. A: I wouldn't limit yourself to the bit world. Enums and bitwise operators are, as you found out, not the same thing.
If you want to solve this using bitwise operators, I'd stick to just them, i.e. don't bother with enums. However, I'd do something like the following: WeatherType[] badWeatherTypes = new WeatherType[] { WeatherType.Thunderstorm, WeatherType.Raining }; if (Array.IndexOf(badWeatherTypes, currentWeather.Type) >= 0) { return false; }
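For readers following along outside C#, the same mask-and-test idea can be sketched in Python with enum.Flag; this is an illustration of the technique, not the asker's code (the names mirror the question's enum):

```python
from enum import Flag

class WeatherType(Flag):
    SUNNY = 0          # absence of every other condition, as in the C# enum
    CLOUDY = 1
    THUNDERSTORM = 2
    RAINING = 4
    SNOWING = 8
    MOSTLY_CLOUDY = 16

# The combined "bad weather" mask, same as _badWeatherTypes in the question.
BAD_WEATHER = WeatherType.THUNDERSTORM | WeatherType.RAINING

def should_bike(current: WeatherType) -> bool:
    # "Raining or thundery, and possibly something else":
    # any overlap with the mask means no biking.
    return not (current & BAD_WEATHER)

assert should_bike(WeatherType.SUNNY)
assert should_bike(WeatherType.CLOUDY | WeatherType.SNOWING)
assert not should_bike(WeatherType.RAINING)
assert not should_bike(WeatherType.CLOUDY | WeatherType.THUNDERSTORM)
```

The `current & BAD_WEATHER` expression is the direct analogue of `(currentWeather.Type & _badWeatherTypes) != 0` in the accepted answer.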
{ "language": "en", "url": "https://stackoverflow.com/questions/164425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Change Django Templates Based on User-Agent I've made a Django site, but I've drunk the Kool-Aid and I want to make an iPhone version. After putting much thought into it, I've come up with two options: *Make a whole other site, like i.xxxx.com. Tie it into the same database using Django's sites framework. *Find some type of middleware that reads the user-agent, and changes the template directories dynamically. I'd really prefer option #2; however, I have some reservations, mainly because the Django documentation discourages changing settings on the fly. I found a snippet that would do what I'd like. My main issue is having it as seamless as possible, I'd like it to be automagic and transparent to the user. Has anyone else come across the same issue? Would anyone care to share how they've tackled making iPhone versions of Django sites? Update I went with a combination of middleware and tweaking the template call. For the middleware, I used minidetector. I like it because it detects a plethora of mobile user-agents. All I have to do is check request.mobile in my views. For the template call tweak: def check_mobile(request, template_name): if request.mobile: return 'mobile-%s'%template_name return template_name I use this for any view that I know I have both versions of. TODO: *Figure out how to access request.mobile in an extended version of render_to_response so I don't have to use check_mobile('template_name.html') *Using the previous, automagically fall back to the regular template if no mobile version exists. A: I'm developing djangobile, a django mobile extension: http://code.google.com/p/djangobile/ A: Rather than changing the template directories dynamically you could modify the request and add a value that lets your view know if the user is on an iphone or not.
Then wrap render_to_response (or whatever you are using for creating HttpResponse objects) to grab the iPhone version of the template instead of the standard HTML version if they are using an iPhone. A: You should take a look at the django-mobileadmin source code, which solved exactly this problem. A: Another way would be creating your own template loader that loads templates specific to the user agent. This is a pretty generic technique and can be used to dynamically determine what template has to be loaded depending on other factors too, like the requested language (a good companion to the existing Django i18n machinery). The Django Book has a section on this subject. A: There is a nice article which explains how to render the same data by different templates: http://www.postneo.com/2006/07/26/acknowledging-the-mobile-web-with-django You still need to automatically redirect the user to the mobile site, however, and this can be done using several methods (your check_mobile trick will work too) A: Detect the user agent in middleware, switch the url bindings, profit! How? Django request objects have a .urlconf attribute, which can be set by middleware. From the django docs: Django determines the root URLconf module to use. Ordinarily, this is the value of the ROOT_URLCONF setting, but if the incoming HttpRequest object has an attribute called urlconf (set by middleware request processing), its value will be used in place of the ROOT_URLCONF setting. *In yourproj/middleware.py, write a class that checks the http_user_agent string: import re MOBILE_AGENT_RE=re.compile(r".*(iphone|mobile|androidtouch)",re.IGNORECASE) class MobileMiddleware(object): def process_request(self,request): if MOBILE_AGENT_RE.match(request.META['HTTP_USER_AGENT']): request.urlconf="yourproj.mobile_urls" *Don't forget to add this to MIDDLEWARE_CLASSES in settings.py: MIDDLEWARE_CLASSES= [... 'yourproj.middleware.MobileMiddleware', ...]
*Create a mobile urlconf, yourproj/mobile_urls.py: urlpatterns = patterns('', (r'^/?$', 'mobile.index'), ...) A: How about redirecting the user to i.xxx.com after parsing his UA in some middleware? I highly doubt that mobile users care what the URL looks like; they can still access your site using the main URL. A: best possible scenario: use minidetector to add the extra info to the request, then use django's built in request context to pass it to your templates like so from django.shortcuts import render_to_response from django.template import RequestContext def my_view_on_mobile_and_desktop(request): ..... render_to_response('regular_template.html', {'my vars to template':vars}, context_instance=RequestContext(request)) then in your template you are able to introduce stuff like: <html> <head> {% block head %} <title>blah</title> {% if request.mobile %} <link rel="stylesheet" href="{{ MEDIA_URL }}/styles/base-mobile.css"> {% else %} <link rel="stylesheet" href="{{ MEDIA_URL }}/styles/base-desktop.css"> {% endif %} {% endblock %} </head> <body> <div id="navigation"> {% include "_navigation.html" %} </div> {% if not request.mobile %} <div id="sidebar"> <p> sidebar content not fit for mobile </p> </div> {% endif %} <div id="content"> <article> {% if not request.mobile %} <aside> <p> aside content </p> </aside> {% endif %} <p> article content </p> </article> </div> </body> </html> A: A simple solution is to create a wrapper around django.shortcuts.render. I put mine in a utils library in the root of my application. The wrapper works by automatically rendering templates in either a "mobile" or "desktop" folder. In utils.shortcuts: from django.shortcuts import render from user_agents import parse def my_render(request, *args, **kwargs): """ An extension of django.shortcuts.render.
Appends 'mobile/' or 'desktop/' to a given template location to render the appropriate template for mobile or desktop. Depends on the user_agents python library: https://github.com/selwin/python-user-agents """ template_location = args[0] args_list = list(args) ua_string = request.META['HTTP_USER_AGENT'] user_agent = parse(ua_string) if user_agent.is_mobile: args_list[0] = 'mobile/' + template_location args = tuple(args_list) return render(request, *args, **kwargs) else: args_list[0] = 'desktop/' + template_location args = tuple(args_list) return render(request, *args, **kwargs) In view: from utils.shortcuts import my_render def home(request): return my_render(request, 'home.html')
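The user-agent check that all of these middleware variants share is plain Python and easy to test on its own. A minimal sketch (the regex is the one from the middleware answer; real detectors such as minidetector or python-user-agents cover far more devices, and the sample user-agent strings here are illustrative):

```python
import re

# Detection regex from the middleware example above; deliberately simplistic.
MOBILE_AGENT_RE = re.compile(r".*(iphone|mobile|androidtouch)", re.IGNORECASE)

def is_mobile(user_agent: str) -> bool:
    return MOBILE_AGENT_RE.match(user_agent) is not None

def pick_template(user_agent: str, template_name: str) -> str:
    # Mirrors the asker's check_mobile() helper: prefix the mobile template,
    # otherwise fall back to the regular one.
    return f"mobile-{template_name}" if is_mobile(user_agent) else template_name

iphone_ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)"
desktop_ua = "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0"

assert pick_template(iphone_ua, "home.html") == "mobile-home.html"
assert pick_template(desktop_ua, "home.html") == "home.html"
```

In the real middleware this decision would set request.urlconf or request.mobile rather than return a template name directly.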
{ "language": "en", "url": "https://stackoverflow.com/questions/164427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Why is it that UTF-8 encoding is used when interacting with a UNIX/Linux environment? I know it is customary, but why? Are there real technical reasons why any other way would be a really bad idea or is it just based on the history of encoding and backwards compatibility? In addition, what are the dangers of not using UTF-8, but some other encoding (most notably, UTF-16)? Edit: By interacting, I mostly mean the shell and libc. A: I believe it's mainly the backwards compatibility that UTF8 gives with ASCII. For an answer to the 'dangers' question, you need to specify what you mean by 'interacting'. Do you mean interacting with the shell, with libc, or with the kernel proper? A: Modern Unixes use UTF-8, but this was not always true. On RHEL2 -- which is only a few years old -- the default is $ locale LANG=C LC_CTYPE="C" LC_NUMERIC="C" LC_TIME="C" LC_COLLATE="C" LC_MONETARY="C" LC_MESSAGES="C" LC_PAPER="C" LC_NAME="C" LC_ADDRESS="C" LC_TELEPHONE="C" LC_MEASUREMENT="C" LC_IDENTIFICATION="C" LC_ALL= The C/POSIX locale is expected to be a 7-bit ASCII-compatible encoding. However, as Jonathan Leffler stated, any encoding which allows for NUL bytes within a character sequence is unworkable on Unix, as system APIs are locale-ignorant; strings are all assumed to be byte sequences terminated by \0. A: Partly because the file systems expect NUL ('\0') bytes to terminate file names, so UTF-16 would not work well. You'd have to modify a lot of code to make that change. A: As jonathan-leffler mentions, the prime issue is the ASCII null character. C traditionally expects a string to be null terminated. So standard C string functions will choke on any UTF-16 character containing a byte equivalent to an ASCII null (0x00). While you can certainly program with wide character support, UTF-16 is not a suitable external encoding of Unicode in filenames, text files, environment variables. Furthermore, UTF-16 and UTF-32 have both big endian and little endian orientations.
To deal with this, you'll either need external metadata like a MIME type, or a Byte Order Mark. It notes: Where UTF-8 is used transparently in 8-bit environments, the use of a BOM will interfere with any protocol or file format that expects specific ASCII characters at the beginning, such as the use of "#!" at the beginning of Unix shell scripts. The predecessor to UTF-16, which was called UCS-2 and didn't support surrogate pairs, had the same issues. UCS-2 should be avoided. A: I believe that when Microsoft started using a two byte encoding, characters above 0xffff had not been assigned, so using a two byte encoding meant that no-one had to worry about characters being different lengths. Now that there are characters outside this range and you'll have to deal with characters of different lengths anyway, why would anyone use UTF-16? I suspect Microsoft would make a different decision if they were designing their unicode support today. A: Yes, it's for compatibility reasons. UTF-8 is backwards compatible with ASCII. Linux/Unix were ASCII based, so it just made/makes sense. A: I thought 7-bit ASCII was fine. Seriously, Unicode is relatively new in the scheme of things, and UTF-8 is backward compatible with ASCII and uses less space (half) for typical files since it uses 1 to 4 bytes per code point (character), while UTF-16 uses either 2 or 4 bytes per code point (character). UTF-16 is preferable for internal program usage because of the simpler widths. Its predecessor UCS-2 was exactly 2 bytes for every code point. A: I think it's because programs that expect ASCII input won't be able to handle encodings such as UTF-16. For most characters (in the 0-255 range), those programs will see the high byte as a NUL / 0 char, which is used in many languages and systems to mark the end of a string. This doesn't happen in UTF-8, which was designed to avoid embedded NULs and be byte-order agnostic.
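The embedded-NUL argument is easy to demonstrate concretely. A small Python sketch showing why NUL-terminated C strings survive UTF-8 but not UTF-16:

```python
# "hi" in two encodings: UTF-8 keeps ASCII bytes as-is and never embeds NUL
# bytes for non-NUL characters; UTF-16 pads ASCII characters with zero bytes
# that C-style string handling would treat as terminators.
text = "hi"

utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16-le")

assert utf8 == b"hi"              # identical to the ASCII bytes
assert 0 not in utf8              # no embedded NULs anywhere
assert utf16 == b"h\x00i\x00"     # every other byte is NUL
assert 0 in utf16                 # fatal for NUL-terminated APIs

# Non-ASCII characters in UTF-8 use only bytes >= 0x80, so ASCII bytes like
# '/' or '\0' can never appear inside a multi-byte sequence either.
assert all(b >= 0x80 for b in "é".encode("utf-8"))
```

This is exactly why filenames, environment variables, and other \0-terminated Unix interfaces work unmodified with UTF-8 but break with UTF-16/UTF-32.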
{ "language": "en", "url": "https://stackoverflow.com/questions/164430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Continuing in the Visual Studio debugger after an exception occurs When I debug a C# program and an exception is thrown (either by my code OR by the framework), the IDE stops and takes me to the corresponding line in my code. Everything is fine so far. I then press "F5" to continue. From this moment, it seems like I'm in an infinite loop. The IDE always takes me back to the exception line. I have to Shift + F5 (stop debugging/terminate the program) to get out of this. I talked with some co-workers here and they told me that this happens sometimes to them too. What's happening? A: You probably have the option "Unwind the callstack on unhandled exceptions" checked in Visual Studio. When this option is on, Visual Studio will unwind to right before the exception, so hitting F5 will keep ramming into the same exception. If you uncheck the option, Visual Studio will break at the exception, but hitting F5 will proceed past that line. This option is under menu Tools → Options → Debugging → General. Update: According to Microsoft, this option was removed from Visual Studio in VS2017, and maybe earlier. A: When the IDE breaks on the offending line of code, it stops just before executing the line that generated the exception. If you continue, it will just execute that line again, and get the exception again. If you want to move past the error to see what would have happened, had the error not occurred, you can drag the yellow-highlighted line (the line that will execute next) down to the next line of code after the offending one. Of course, depending on what the offending line failed to do, your program may now be in a state that causes other errors, in which case you haven't really helped yourself much, and probably should fix your code so that the exception either doesn't occur, or is handled properly. A: Once you get an exception Visual Studio (or whatever IDE you might be using) will not let you go any further unless the exception is handled in your code.
This behaviour is by design. A: Some said that this is by design, but it never used to work this way. You were able to press F5 again and the code would continue until the end or the next exception catch. It was a useful behavior that worked well, and as developers, I think we should not hit artificial barriers while we are debugging and investigating issues in an application. That said, I found a workaround to this (sort of): *press Ctrl+Shift+E (Exception settings) *uncheck the "Common Language Runtime Exceptions" box *press F10 to step past the exception you are stuck on *check "Common Language Runtime Exceptions" again, if you want the breaks to happen again That seems dirty, too much work for a thing that used to be simpler, but I guess that is what we have for today. A: This is because the exception is un-handled and Visual Studio can not move past that line without it being handled in some manner. Simply put, it is by design. One thing that you can do is drag and drop the execution point (yellow line/arrow) to a previous point in your code and modify the in-memory values (using the Visual Studio watch windows) so that they do not cause an exception. Then start stepping through the code again1. It is a better idea though to stop execution and fix the problem that is causing the exception, or properly handle the exception if the throw is not desired. 1 This can have unintended consequences since you are essentially re-executing some code (not rewinding the execution). A: An uncaught exception will bring down your app. Instead of this, VS will just keep you at the uncaught exception. You will have to terminate the app or rewind execution.
{ "language": "en", "url": "https://stackoverflow.com/questions/164433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Programmatically launching standalone Adobe flashplayer on Linux/X11 The standalone flashplayer takes no arguments other than a .swf file when you launch it from the command line. I need the player to go full screen, no window borders and such. This can be accomplished by hitting ctrl+f once the program has started. I want to do this programmatically as I need it to launch into full screen without any human interaction. My guess is that I need to somehow get a handle to the window and then send it an event that looks like the "ctrl+f" keystroke. If it makes any difference, it looks like flashplayer is a gtk application and I have python with pygtk installed. UPDATE (the solution I used... thanks to ypnos' answer): ./flashplayer http://example.com/example.swf & sleep 3 && ~/xsendkey -window "Adobe Flash Player 10" Control+F A: You can use a dedicated application which sends the keystroke to the window manager, which should then pass it to flash, if the window starts as being the active window on the screen. This is quite error prone, though, due to delays between starting flash and when the window will show up. For example, your script could do something like this: flashplayer *.swf sleep 3 && xsendkey Control+F The application xsendkey can be found here: http://people.csail.mit.edu/adonovan/hacks/xsendkey.html Without being given a specific window, it will send it to the root window, which is handled by your window manager. You could also try to figure out the Window id first, using xprop or something related to it. Another option is a Window manager, which is able to remember your settings and automatically apply them. Fluxbox for example provides this feature. You could set fluxbox to make the Window decor-less and stretch it over the whole screen, if flashplayer supports being resized. This is also not-so-nice, as it would probably affect all the flashplayer windows you ever open. A: I've actually done this a long time ago, but it wasn't pretty.
What we did is use the Sawfish window manager and wrote a hook to recognize the flashplayer window, then strip all the decorations and snap it full screen. This may be possible without using the window manager, by registering for X window creation events from an external application, but I'm not familiar enough with X11 to tell you how that would be done. Another option would be to write a pygtk application that embedded the standalone flash player inside a gtk.Socket and then resized itself. After a bit of thought, this might be your best bet. A: nspluginplayer --fullscreen src=path/to/flashfile.swf which is from the nspluginwrapper project: http://gwenole.beauchesne.info//en/projects/nspluginwrapper A: Another option would be to write a pygtk application that embedded the standalone flash player inside a gtk.Socket and then resized itself. After a bit of thought, this might be your best bet. This is exactly what I did. In addition to that, my player scales flash content via Xcomposite, Xfixes and Cairo. A .deb including python source can be found here: http://www.crutzi.info/crutziplayer A: I've done this using openbox, using a similar mechanism to the one that bmdhacks mentions. The thing that I did note from this was that the standalone flash player performed considerably worse fullscreen than the same player in a maximised undecorated window. (That, annoyingly, is not properly fullscreen because of the menubar.) I was wondering about running it with a custom gtk theme to make the menu invisible. That's just a performance issue though. If fullscreen currently works ok, then it's unnecessarily complicated. I was running on an OLPC XO, performance is more of an issue there. I didn't have much luck with nspluginplayer (too buggy I think). Ultimately I had the luxury of making the flash that was running so I could simply place code into the flash itself.
By a similar token, since you can embed flash within flash, it should be possible to make a little stub swf that goes fullscreen automatically and contains the target swf. A: You have to use the ActionScript 3 command: stage.displayState = StageDisplayState.FULL_SCREEN; See the Adobe ActionScript 3 programming documentation. But be careful: in full screen, you will lose display performance! I've got this problem ... even more under Linux!
{ "language": "en", "url": "https://stackoverflow.com/questions/164460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Does This ASP.NET Consultant Know What He's Doing? The IT department of a subsidiary of ours had a consulting company write them an ASP.NET application. Now it's having intermittent problems with mixing up who the current user is and has been known to show Joe some of Bob's data by mistake. The consultants were brought back to troubleshoot and we were invited to listen in on their explanation. Two things stuck out. First, the consultant lead provided this pseudo-code: void MyFunction() { Session["UserID"] = SomeProprietarySessionManagementLookup(); Response.Redirect("SomeOtherPage.aspx"); } He went on to say that the assignment of the session variable is asynchronous, which seemed untrue. Granted the call into the lookup function could do something asynchronously, but this seems unwise. Given that alleged asynchrony, his theory was that the session variable was not being assigned before the redirect's inevitable ThreadAbort exception was raised. This failure then prevented SomeOtherPage from displaying the correct user's data. Second, he gave an example of a coding best practice he recommends. Rather than writing: int MyFunction(int x, int y) { try { return x / y; } catch(Exception ex) { // log it throw; } } the technique he recommended was: int MyFunction(int x, int y, out bool isSuccessful) { isSuccessful = false; if (y == 0) return 0; isSuccessful = true; return x / y; } This will certainly work and could be better from a performance perspective in some situations. However, from these and other discussion points it just seemed to us that this team was not well-versed technically. Opinions? A: On the asynchronous part, the only way that could be true is if the assignment going on there is actually an indexer setter on Session that is hiding an asynchronous call with no callback indicating success/failure. This would seem to be a HORRIBLE design choice, and it looks like a core class in your framework, so I find it highly unlikely.
Usually asynchronous calls have a way to specify a callback so you can determine what the result is, or if the operation was successful. The documentation for Session should be pretty clear though on whether it is actually hiding an asynchronous call, but yeah... doesn't look like the consultant knows what he is talking about... The method call that is being assigned to the Session indexer cannot be async, because to get a value asynchronously, you HAVE to use a callback... no way around that, so if there is no explicit callback, it's definitely not async (well, internally there could be an asynchronous call, but the caller of the method would perceive it as synchronous, so it is irrelevant if the method internally for example invokes a web service asynchronously). For the second point, I think this would be much better, and keep the same functionality essentially: int MyFunction(int x, int y) { if (y == 0) { // log it throw new DivideByZeroException("Divide by zero attempted!"); } return x / y; } A: For the first point, that does indeed seem bizarre. On the second one, it's reasonable to try to avoid division by 0 - it's entirely avoidable and that avoidance is simple. However, using an out parameter to indicate success is only reasonable in certain cases, such as int.TryParse and DateTime.TryParseExact - where the caller can't easily determine whether or not their arguments are reasonable. Even then, the return value is usually the success/failure and the out parameter is the result of the method. A: Asp.net sessions, if you're using the built-in providers, won't accidentally give you someone else's session. SomeProprietarySessionManagementLookup() is the likely culprit and is returning bad values or just not working. Session["UserID"] = SomeProprietarySessionManagementLookup(); First of all, assigning the return value from an asynchronous SomeProprietarySessionManagementLookup() just won't work. 
The consultant's code probably looks like: public void SomeProprietarySessionManagementLookup() { // do some async lookup Action<object> d = delegate(object val) { LookupSession(); // long running thing that looks up the user. Session["UserID"] = 1234; // Setting session manually }; d.BeginInvoke(null,null,null); } The consultant isn't totally full of BS, but they have written some buggy code. Response.Redirect() does throw a ThreadAbort, and if the proprietary method is asynchronous, asp.net doesn't know to wait for the asynchronous method to write back to the session before asp.net itself saves the session. This is probably why it sometimes works and sometimes doesn't. Their code might work if the asp.net session is in-process, but not with a state server or db server. It's timing dependent. I tested the following. We use state server in development. This code works because the session is written to before the main thread finishes. Action<object> d = delegate(object val) { System.Threading.Thread.Sleep(1000); // waits a little Session["rubbish"] = DateTime.Now; }; d.BeginInvoke(null, null, null); System.Threading.Thread.Sleep(5000); // waits a lot object stuff = Session["rubbish"]; if( stuff == null ) stuff = "not there"; divStuff.InnerHtml = Convert.ToString(stuff); This next snippet of code doesn't work because the session was already saved back to state server by the time the asynchronous method gets around to setting a session value. Action<object> d = delegate(object val) { System.Threading.Thread.Sleep(5000); // waits a lot Session["rubbish"] = DateTime.Now; }; d.BeginInvoke(null, null, null); // wait removed - ends immediately. object stuff = Session["rubbish"]; if( stuff == null ) stuff = "not there"; divStuff.InnerHtml = Convert.ToString(stuff); The first step is for the consultant to make their code synchronous because their performance trick didn't work at all. 
If that fixes it, have the consultant properly implement it using the Asynchronous Programming Design Pattern. A: I agree with him in part -- it's definitely better to check y for zero rather than catching the (expensive) exception. The out bool isSuccessful seems really dated to me, but whatever. re: the asynchronous sessionid buffoonery -- may or may not be true, but it sounds like the consultant is blowing smoke for cover. A: Rule of thumb: If you need to ask if a consultant knows what he's doing, he probably doesn't ;) And I tend to agree here. Obviously you haven't provided much, but they don't seem terribly competent. A: I would agree. These guys seem quite incompetent. (BTW, I'd check to see if in "SomeProprietarySessionManagementLookup," they're using static data. Saw this -- with behavior exactly as you describe on a project I inherited several months ago. It was a total head-slap moment when we finally saw it ... And wished we could get face to face with the guys who wrote it ... ) A: If the consultant has written an application that's supposed to be able to keep track of users and only show the correct data to the correct users and it doesn't do that, then clearly something's wrong. A good consultant would find the problem and fix it. A bad consultant would tell you that it was asynchronicity. A: Cody's rule of thumb is dead right. If you have to ask, he probably doesn't. It seems that point two is patently incorrect. .NET's standards explain that if a method fails it should throw an exception, which seems closer to the original; not the consultant's suggestion. Assuming the exception is accurately & specifically describing the failure. A: The consultants created the code in the first place right? And it doesn't work. I think you have quite a bit of dirt on them already. The asynchronous answer sounds like BS, but there may be something in it. 
Presumably they have offered a suitable solution as well as pseudo-code describing the problem they themselves created. I would be more tempted to judge them on their solution rather than their expression of the problem. If their understanding is flawed, their new solution won't work either. Then you'll know they are idiots. (In fact, look around to see if you have similar proof in any other areas of their code already.) The other one is a code style issue. There are a lot of different ways to cope with that. I personally don't like that style, but there will be circumstances under which it is suitable. A: They're wrong on the async. The assignment happens and then the page redirects. The function can start something asynchronously and return (and could even conceivably alter the Session in its own way), but whatever it does return has to be assigned in the code you gave before the redirect. They're wrong on that defensive coding style in any low-level code and even in a higher-level function unless it's a specific business case that the 0 or NULL or empty string or whatever should be handled that way - in which case, it's always successful (that successful flag is a nasty code smell) and not an exception. Exceptions are for exceptions. You don't want to mask behaviors like this by coddling the callers of the functions. Catch things early and throw exceptions. I think Maguire covered this in Writing Solid Code or McConnell in Code Complete. Either way, it smells. A: This guy does not know what he is doing. The obvious culprit is right here: Session["UserID"] = SomeProprietarySessionManagementLookup(); A: Typical "consultant" bollocks: * *The problem is with whatever SomeProprietarySessionManagementLookup is doing *Exceptions are only expensive if they're thrown. Don't be afraid of try..catch, but throws should only occur in exceptional circumstances. If variable y shouldn't be zero then an ArgumentOutOfRangeException would be appropriate. 
A: I have to agree with John Rudy. My gut tells me the problem is in SomeProprietarySessionManagementLookup(). .. and your consultants do not sound too sure of themselves. A: Storing in Session is not async. So that isn't true unless that function is async. But even so, since it isn't calling a BeginCall with something to call on completion, the next line of code wouldn't execute until the Session line is complete. For the second statement, while that could be used, it isn't exactly a best practice and you have a few things to note with it. You save the cost of throwing an exception, but wouldn't you want to know that you are trying to divide by zero instead of just moving past it? I don't think that is a solid suggestion at all. A: Quite strange. On the second item it may or may not be faster. It certainly isn't the same functionality though. A: I'm guessing your consultant is suggesting that using a status variable instead of an exception for error handling is a better practice? I don't agree. How often do people forget, or are too lazy, to do error checking on return values? Also, a pass/fail variable is not informative. There are more things that can go wrong other than divide by zero, like integer x/y is too big or x is NaN. When things go wrong, a status variable cannot tell you what went wrong, but an exception can. Exceptions are for exceptional cases, and divide by zero or NaN are definitely exceptional cases. A: The session thing is possible. It's a bug, beyond doubt, but it could be that the write arrives at whatever custom session state provider you're using after the next read. The session state provider API accommodates locking to prevent this sort of thing, but if the implementor has just ignored all that, your consultant could be telling the truth. The second issue is also kinda valid. It's not quite idiomatic - it's a slightly reversed version of things like int.TryParse, which are there to avoid performance issues caused by throwing lots of exceptions. 
But unless you're calling that code an awful lot, it's unlikely it'll make a noticeable difference (compared to, say, one less database query per page, etc). It's certainly not something you should do by default. A: If SomeProprietarySessionManagementLookup() is doing an asynchronous assignment it would more likely look like this: SomeProprietarySessionManagementLookup(Session["UserID"]); The very fact that the code is assigning the result to Session["UserID"] would suggest that it is not supposed to be asynchronous and the result should be obtained before Response.Redirect is called. If SomeProprietarySessionManagementLookup is returning before its result is calculated they have a design flaw anyway. Whether to throw an exception or use an out parameter is a matter of opinion and circumstance and in actual practice won't amount to a hill of beans whichever way you do it. For the performance hit of exceptions to become an issue you would need to be calling the function a huge number of times, which would probably be a problem in itself. A: If the consultants deployed their ASP.NET application on your server(s), then they may have deployed it in uncompiled form, which means there would be a bunch of *.cs files floating around that you could look at. If all you can find is compiled .NET assemblies (DLLs and EXEs) of theirs, then you should still be able to decompile them into somewhat readable source code. I'll bet if you look through the code you'll find them using static variables in their proprietary lookup code. You'd then have something very concrete to show your bosses. A: This entire answer stream is full of typical programmer attitudes. It reminds me of Joel's 'Things you should never do' article (rewrite from scratch.) We don't really know anything about the system, other than there's a bug, and some guy posted some code online. There are so many unknowns that it is ridiculous to say "This guy does not know what he is doing." 
A: Rather than pile on the Consultant, you could just as easily pile on the person who procured their services. No consultant is perfect, nor is a hiring manager ... but at the end of the day the real direction you should be taking is very clear: instead of trying to find fault you should expend energy working collaboratively to find solutions. No matter how skilled someone is at their roles and responsibilities they will certainly have deficiencies. If you determine there is a pattern of incompetencies then you may choose to transition to another resource going forward, but assigning blame has never solved a single problem in history. A: On the second point, I would not use exceptions here. Exceptions are reserved for exceptional cases. However, division of anything by zero certainly does not equal zero (in math, at least), so this would be case-specific.
{ "language": "en", "url": "https://stackoverflow.com/questions/164468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What is the reporting tool that you would use? I need a tool that handles both on-screen and printed reports, via my C# application. I'm looking for simple, standard and powerful. I need to be able to give the user the ability to select which columns to display, formatting, etc... with my own GUI and dynamically build the report based upon their choices. Crystal does not fit the bill here because the columns cannot be added on the fly (and the column widths can not be adjusted on the fly). I'm thinking of using HTML with the WebBrowser control in my app, but I will have very little control over printing and print preview. Another option is to go to .NET 3.5 (I'm using 2.0) and use XAML with Flow Documents. What would you use? A: We use SQL Reporting Services. HTML reports have their place but you don't get very much control over formatting. SQL Reporting Services summary: Advantages: basic version is free; included with SQL Express; many export options (PDF, HTML, CSV, etc.); can use many different datasources; a web service which exposes various methods; SQL Standard edition includes a report builder component to allow users to create and share their own reports; lots of features for querying, formatting, etc.; scheduling options; extensibility (import .NET Framework DLLs for custom functionality); familiar Microsoft environment. Disadvantages: an extra thing to set up; seamless authentication between application and report server can be a pain depending on your setup; a little bit of a learning curve, although it's not too hard to pick up; the report model creator needs some work and doesn't automatically alphabetize fields. I have heard good things about DevExpress so it may be worth looking into. I used Crystal about 5 years ago and remember it being a pain to set up and costly licence-wise. A: Check out the Report Viewer stuff in Studio 2008 / .NET 3.5 This amazing site has the full scoop: GotReportViewer It's a nice built-in reporting system that will show a report and print. 
It's not full-blown like Crystal or SQL Reporting Services. If all you need is some lightweight reporting you can't beat the price. A: Crystal = big footprint, huge deployment, fast, good designer and support. MS ReportViewer = small footprint, slow, bad designer; support... well, it's not so easy to search for "ReportViewer", a name everyone uses... sigh. A: We use ActiveReports.net here. They're OK and tend to get the job done pretty well, but I'm not sure if they would fit your definition of "Dynamic". But you can pretty much make them do anything through code. A: I'm currently considering DevExpress XtraReports as a replacement for CR. So far I like what I see. A: SQL Reporting Services probably aren't flexible enough for what you want as you don't really get a deep level of code manipulation. ActiveReports lets you get into the binding events and pretty much do whatever you want; however there are a couple of small bugs with ActiveReports (like not being able to bind to a DefaultView of a DataTable) which make it a pain. Apart from that, it's highly flexible. XtraReports are awesome but they're a lot pricier than ActiveReports. Having said that, their support is fantastic and the reporting package is rock solid. I'd look at forking out the cash for them if possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/164492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I create a thread-safe singleton pattern in Windows? I've been reading about thread-safe singleton patterns here: http://en.wikipedia.org/wiki/Singleton_pattern#C.2B.2B_.28using_pthreads.29 And it says at the bottom that the only safe way is to use pthread_once - which isn't available on Windows. Is that the only way of guaranteeing thread-safe initialisation? I've read this thread on SO: Thread safe lazy construction of a singleton in C++ It seems to hint at an atomic OS-level compare-and-swap function, which I assume on Windows is: http://msdn.microsoft.com/en-us/library/ms683568.aspx Can this do what I want? Edit: I would like lazy initialisation and for there to only ever be one instance of the class. Someone on another site mentioned using a global inside a namespace (and he described a singleton as an anti-pattern) - how can it be an "anti-pattern"? Accepted Answer: I've accepted Josh's answer as I'm using Visual Studio 2008 - NB: For future readers, if you aren't using this compiler (or 2005) - Don't use the accepted answer!! Edit: The code works fine except the return statement - I get an error: error C2440: 'return' : cannot convert from 'volatile Singleton *' to 'Singleton *'. Should I modify the return value to be volatile Singleton *? Edit: Apparently const_cast<> will remove the volatile qualifier. Thanks again to Josh. A: While I like the accepted solution, I just found another promising lead and thought I should share it here: One-Time Initialization (Windows) A: A simple way to guarantee cross-platform thread safe initialization of a singleton is to perform it explicitly (via a call to a static member function on the singleton) in the main thread of your application before your application starts any other threads (or at least any other threads that will access the singleton). Ensuring thread safe access to the singleton is then achieved in the usual way with mutexes/critical sections. 
Lazy initialization can also be achieved using a similar mechanism. The usual problem encountered with this is that the mutex required to provide thread-safety is often initialized in the singleton itself, which just pushes the thread-safety issue to initialization of the mutex/critical section. One way to overcome this issue is to create and initialize a mutex/critical section in the main thread of your application then pass it to the singleton via a call to a static member function. The heavyweight initialization of the singleton can then occur in a thread-safe manner using this pre-initialized mutex/critical section. For example: // A critical section guard - create on the stack to provide // automatic locking/unlocking even in the face of uncaught exceptions class Guard { private: LPCRITICAL_SECTION CriticalSection; public: Guard(LPCRITICAL_SECTION CS) : CriticalSection(CS) { EnterCriticalSection(CriticalSection); } ~Guard() { LeaveCriticalSection(CriticalSection); } }; // A thread-safe singleton class Singleton { private: static Singleton* Instance; static CRITICAL_SECTION InitLock; CRITICAL_SECTION InstanceLock; Singleton() { // Time consuming initialization here ... 
InitializeCriticalSection(&InstanceLock); } ~Singleton() { DeleteCriticalSection(&InstanceLock); } public: // Not thread-safe - to be called from the main application thread static void Create() { InitializeCriticalSection(&InitLock); Instance = NULL; } // Not thread-safe - to be called from the main application thread static void Destroy() { delete Instance; DeleteCriticalSection(&InitLock); } // Thread-safe lazy initializer static Singleton* GetInstance() { Guard lock(&InitLock); // named object, so the lock is held for the whole scope if (Instance == NULL) { Instance = new Singleton; } return Instance; } // Thread-safe operation void doThreadSafeOperation() { Guard lock(&InstanceLock); // Perform thread-safe operation } }; However, there are good reasons to avoid the use of singletons altogether (and why they are sometimes referred to as an anti-pattern): * *They are essentially glorified global variables *They can lead to high coupling between disparate parts of an application *They can make unit testing more complicated or impossible (due to the difficulty in swapping real singletons with fake implementations) An alternative is to make use of a 'logical singleton' whereby you create and initialise a single instance of a class in the main thread and pass it to the objects which require it. This approach can become unwieldy where there are many objects which you want to create as singletons. In this case the disparate objects can be bundled into a single 'Context' object which is then passed around where necessary. A: If you are using Visual C++ 2005/2008 you can use the double-checked locking pattern, since "volatile variables behave as fences". This is the most efficient way to implement a lazy-initialized singleton. From MSDN Magazine: Singleton* GetSingleton() { volatile static Singleton* pSingleton = 0; if (pSingleton == NULL) { EnterCriticalSection(&cs); if (pSingleton == NULL) { try { pSingleton = new Singleton(); } catch (...) { // Something went wrong. 
} } LeaveCriticalSection(&cs); } return const_cast<Singleton*>(pSingleton); } Whenever you need access to the singleton, just call GetSingleton(). The first time it is called, the static pointer will be initialized. After it's initialized, the NULL check will prevent locking for just reading the pointer. DO NOT use this on just any compiler, as it's not portable. The standard makes no guarantees on how this will work. Visual C++ 2005 explicitly adds to the semantics of volatile to make this possible. You'll have to declare and initialize the CRITICAL_SECTION elsewhere in code. But that initialization is cheap, so lazy initialization is usually not important. A: You can use an OS primitive such as a mutex or critical section to ensure thread-safe initialization; however, this will incur an overhead each time your singleton pointer is accessed (due to acquiring a lock). It's also non-portable. A: There is one clarifying point you need to consider for this question. Do you require ... * *That one and only one instance of a class is ever actually created *Many instances of a class can be created but there should only be one true definitive instance of the class There are many samples on the web to implement these patterns in C++. Here's a Code Project Sample A: The following explains how to do it in C#, but the exact same concept applies to any programming language that would support the singleton pattern http://www.yoda.arachsys.com/csharp/singleton.html What you need to decide is whether you want lazy initialization or not. Lazy initialization means that the object contained inside the singleton is created on the first call to it, e.g.: MySingleton::getInstance()->doWork(); if that call isn't made until later on, there is a danger of a race condition between the threads as explained in the article. 
However, if you put MySingleton::getInstance()->initSingleton(); at the very beginning of your code where you assume it would be thread safe, then you are no longer lazy initializing; you will require "some" more processing power when your application starts. However, it will solve a lot of headaches about race conditions if you do so. A: If you are looking for a more portable, and easier solution, you could turn to boost. boost::call_once can be used for thread-safe initialization. It's pretty simple to use, and will be part of the next C++0x standard. A: The question does not require that the singleton be lazy-constructed. Since many answers assume that it must, let me discuss that case first: Given that the language itself is not thread-aware, and given compiler optimization, writing a portable, reliable C++ singleton is very hard (if not impossible), see "C++ and the Perils of Double-Checked Locking" by Scott Meyers and Andrei Alexandrescu. I've seen many of the answers resort to a sync object on the Windows platform by using a critical section, but a critical section can only be used by the threads of a single process. MSDN says: "The threads of a single process can use a critical section object for mutual-exclusion synchronization." And http://msdn.microsoft.com/en-us/library/windows/desktop/ms682530(v=vs.85).aspx clarifies it further: A critical section object provides synchronization similar to that provided by a mutex object, except that a critical section can be used only by the threads of a single process. Now, if "lazy-constructed" is not a requirement, the following solution is both cross-module safe and thread-safe, and even portable: struct X { }; X * get_X_instance() { static X x; return &x; } extern int X_singleton_helper = (get_X_instance(), 1); It's cross-module-safe because we use a locally-scoped static object instead of a file/namespace-scoped global object. 
It's thread-safe because X_singleton_helper must be assigned the correct value before entering main or DllMain (and it's not lazy-constructed, also because of this fact). In this expression the comma is an operator, not punctuation. Explicitly use "extern" here to prevent the compiler from optimizing it out (per the concerns in Scott Meyers' article, the big enemy is the optimizer), and also to keep static-analysis tools such as PC-lint quiet. "Before main/DllMain" is what Scott Meyers calls the "single-threaded startup part" in Effective C++, 3rd edition, Item 4. However, I'm not very sure whether the compiler is allowed to optimize the call to get_X_instance() out according to the language standard; please comment. A: There are many ways to do thread-safe Singleton* initialization on Windows. In fact some of them are even cross-platform. In the SO thread that you linked to, they were looking for a Singleton that is lazily constructed in C, which is a bit more specific, and can be a bit trickier to do right, given the intricacies of the memory model you are working under. * *which you should never use
{ "language": "en", "url": "https://stackoverflow.com/questions/164496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Using CURRENT_TIMESTAMP, arithmetic operator and parameter with Firebird Why doesn't this work (when the parameter is set to 1): SELECT * FROM TABLE WHERE TIMESTAMPFIELD > (CURRENT_TIMESTAMP - ?) But this works: SELECT * FROM TABLE WHERE TIMESTAMPFIELD > (CURRENT_TIMESTAMP - 1) I get the error message: "conversion error from string "39723.991882951" " I'm using Firebird 2.1 EDIT: I found the answer myself with a little help: SELECT * FROM TABLE WHERE TIMESTAMPFIELD > (CURRENT_TIMESTAMP - CAST(? AS DECIMAL(18,9))) Works if the parameter is given as a float value. A: What do you want to do exactly? Maybe I can be more helpful with more details. SELECT * FROM TABLE WHERE TIMESTAMPFIELD > (CURRENT_TIMESTAMP - ?) How do you set your parameter in your code? Which language do you use? If you use Delphi, then your parameter should be passed as Float, i.e.: MyQuery.ParamByName('delta').asFloat := 0.1; Try this and tell us if it's working HTH
{ "language": "en", "url": "https://stackoverflow.com/questions/164516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: To Nest or Not to Nest? Premise: Usually during preparation of a new Ruby on Rails App, I draw out models and relations regarding user navigations. Usually I hit a place where I need to ask myself whether or not I should go beyond the usual "rule of thumb" of nesting no more than 1 level deep. Sometimes I feel the need to nest, rather than creating another namespace route and duplicating work. Here's an example: Models: User, Company, Location User has and belongs to many Companies (many to many) User has and belongs to many Locations (many to many) Company has and belongs to many Locations (many to many) Routes: 1 level nesting users/:user_id/companies/ - list all companies related to a user users/:user_id/locations/ - list all locations related to a user more than 1 level nesting users/:user_id/companies/:company_id/locations/ - list all company-locations of a user So, my question is whether or not it is appropriate to nest more than 1 level deep in RoR? Yes or no? And why? A: I tend to follow Jamis Buck's advice and never nest more than one level deep. Edit: If you are going to nest more than 1 level I would check out the new shallow routes feature in Edge. A: users/:user_id/companies/:company_id/locations/ While technically this is fine, wouldn't the named route helper therefore be user_company_location_path( user_id, company_id, location_id ) having to cart round 3 parameters like that is annoying. Anything annoying is probably a red flag. A: Whilst it sounds good in theory, I've found nesting more than one level can start to get confusing - particularly if you have the same named controller at different levels (which can be quite common) E.g. user/x/blog/y/profile/z, and user/x/profile/a I'll often find I'm working in a different namespace to what I think I'm working in. If they do similar, but different things, it can get quite confusing =) In my current app, I went through last week and removed most of the nested routes. (Of course, YMMV)
{ "language": "en", "url": "https://stackoverflow.com/questions/164520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Exposing Member Objects As Properties or Methods in .NET In .NET, if a class contains a member that is a class object, should that member be exposed as a property or with a method? A: That is irrelevant to the matter. It should be a Property if the value is some detail about the state of the object. It should be a Method if it performs some action on the object. A: You should use properties for anything that conceptually represents the object's state, so long as its retrieval isn't an expensive enough operation that you should avoid using it repeatedly. From MSDN: Class library designers often must decide between implementing a class member as a property or a method. In general, methods represent actions and properties represent data. Use the following guidelines to help you choose between these options. * *Use a property when the member is a logical data member. In the following member declarations, Name is a property because it is a logical member of the class. public string Name { get { return name; } set { name = value; } } *Use a method when: * *The operation is a conversion, such as Object.ToString. *The operation is expensive enough that you want to communicate to the user that they should consider caching the result. *Obtaining a property value using the get accessor would have an observable side effect. *Calling the member twice in succession produces different results. *The order of execution is important. Note that a type's properties should be able to be set and retrieved in any order. *The member is static but returns a value that can be changed. *The member returns an array. Properties that return arrays can be very misleading. Usually it is necessary to return a copy of the internal array so that the user cannot change internal state. This, coupled with the fact that a user can easily assume it is an indexed property, leads to inefficient code. In the following code example, each call to the Methods property creates a copy of the array. 
As a result, 2n+1 copies of the array will be created in the following loop. Type type = // Get a type. for (int i = 0; i < type.Methods.Length; i++) { if (type.Methods[i].Name.Equals ("text")) { // Perform some operation. } } The following example illustrates the correct use of properties and methods. class Connection { // The following three members should be properties // because they can be set in any order. string DNSName {get{};set{};} string UserName {get{};set{};} string Password {get{};set{};} // The following member should be a method // because the order of execution is important. // This method cannot be executed until after the // properties have been set. bool Execute (); } A: If all you are doing is exposing an object instance that is relevant to the state of the current object you should use a property. A method should be used when you have some logic that is doing more than accessing an in memory object and returning that value or when you are performing an action that has a broad affect on the state of the current object. A: Property. A Property is basically just a 'cheap' method. Getting or setting a reference to an object is pretty cheap. Just to clarify, properties are usually supposed to represent the internal state of an object. However, the implementation of a member as a property or method tells the user how expensive the call is likely to be. A: Properties read and assign values to instances within a class. Methods do something with the data assigned to the class. A: Overview Generally, properties store data for an object, such as Name, and methods are actions an object can be asked to perform, such as Move or Show. Sometimes it is not obvious which class members should be properties and which should be methods - the Item method of a collection class (VB) stores and retrieves data and can be implemented as an indexed property. On the other hand, it would also be reasonable to implement Item as a method. 
Syntax How a class member is going to be used could also be a determining factor in whether it should be represented as a property or a method. The syntax for retrieving information from a parameterized property is almost identical to the syntax used for a method implemented as a function. However, the syntax for modifying such a value is slightly different. If you implement the member of a class as a property, you must modify its value this way: ThisObject.ThisProperty(Index) = NewValue if the class member is implemented as a method, the value being modified must be modified using an argument: ThisObject.ThisProperty(Index, NewValue) Errors An attempt to assign a value to a read-only property will return a different error message than a similar call to a method. Correctly implemented class members return error messages that are easier to interpret. A: I was confused about using properties and methods before. But now I use this rule, per the MSDN guidelines: methods represent actions and properties represent data. Properties are meant to be used like fields, meaning that properties should not be computationally complex or produce side effects. When it does not violate the following guidelines, consider using a property, rather than a method, because less experienced developers find properties easier to use.
{ "language": "en", "url": "https://stackoverflow.com/questions/164527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }