Q: Drag and drop onto Python script in Windows Explorer

I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target. Is there some kind of configuration that needs to be done somewhere for this to work?

A: Sure. From a mindless technology article called "Make Python Scripts Droppable in Windows", you can add a drop handler by adding a registry key. Here's a registry import file that you can use to do this. Copy the following into a .reg file and run it (make sure that your .py extension is mapped to Python.File).

    Windows Registry Editor Version 5.00

    [HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
    @="{60254CA5-953B-11CF-8C96-00AA00B8708C}"

This makes Python scripts use the WSH drop handler, which is compatible with long filenames. To use the short-filename handler, replace the GUID with 86C86720-42A0-1069-A2E8-08002B30309D.

A comment in that post indicates that one can enable dropping on "no console Python files (.pyw)" or "compiled Python files (.pyc)" by using the Python.NoConFile and Python.CompiledFile classes.

A: Try using py2exe. Use py2exe to convert your Python script into a Windows executable. You should then be able to drag and drop input files onto your script in Windows Explorer. You should also be able to create a shortcut on your desktop and drop input files onto it. And if your Python script can take a file list, you should be able to drag and drop multiple files onto your script (or shortcut).

A: Write a simple batch script (file.bat):

    "C:\python27\python.exe" yourprogram.py %*

where %* stands for all the arguments you pass to the script. Now drag and drop your target files onto the file.bat icon.

A: Create a shortcut of the file.
In case you don't have Python open .py files by default, go into the properties of the shortcut and edit the target of the shortcut to include the Python version you're using. For example:

    Target: C:\Python26\python.exe <shortcut target path>

I'm posting this because I didn't want to edit the registry and the .bat workaround didn't work for me.

A: Either:

1) Create a shortcut of the .py file.
2) Right click -> Properties.
3) Prefix "Target:" with "python", so the shortcut runs the .py as an argument to the python command.

Or:

1) Create a .bat file.
2) In it, put: python some.py %*

These shortcut versions are the simplest way for me to do what I'm doing; otherwise I'd convert it to a .exe, but I would rather just use Java or C/C++.

A: Late answer, but none of the answers on this page worked for me. I managed to enable/fix drag and drop onto .py scripts using:

* HKEY_CLASSES_ROOT\.py -> set the default value to Python.File
* HKEY_CLASSES_ROOT\Python.File\Shell\Open -> create a key called Command with default value "C:\Windows\py.exe" "%1" %*
* HKEY_CLASSES_ROOT\Applications\py.exe\open\command -> create the keys if they don't exist and set the default value to "C:\Windows\py.exe" "%1" %*
* HKEY_CLASSES_ROOT\Python.File\ShellEx -> create a key DropHandler with default value {86C86720-42A0-1069-A2E8-08002B30309D}

That's it. Test it by dragging a file onto the Python script:

    import sys
    args = sys.argv
    print(args)

A: With an installed Python - at least 2.6.1 - you can just drag and drop any file onto a Python script.

    import sys
    droppedFile = sys.argv[1]
    print droppedFile

sys.argv[0] is the script itself. sys.argv[n+1] are the files you have dropped.

A: For those who use argv in a .py script but still can't drag files onto it to execute: this can be solved by simply using the Python Launcher (the one with the rocket icon). The script's "Opens with" property was set to python.exe, which has no knowledge that the script needs the command-line arguments "%*". Refer to: https://bugs.python.org/issue40253
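Whichever registration mechanism you pick, the receiving script just reads the dropped paths from sys.argv. Here is a minimal sketch of a droppable processing script; the line-counting "processing" step is only a placeholder for whatever your script actually does:

```python
import os
import sys

def process(path):
    # Placeholder processing step: count the lines in the dropped file
    with open(path, "r", errors="replace") as f:
        return sum(1 for _ in f)

def main(argv):
    # argv[0] is the script itself; argv[1:] are the files dropped onto it
    if len(argv) < 2:
        print("Drop one or more data files onto this script.")
        return 1
    for path in argv[1:]:
        print("%s: %d lines" % (os.path.basename(path), process(path)))
    return 0

if __name__ == "__main__":
    main(sys.argv)
```

Since the console window closes as soon as the script finishes when launched via a drop, adding an input("Press Enter to exit") at the end can help while debugging.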
{ "language": "en", "url": "https://stackoverflow.com/questions/142844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: Splitting log4j Output with Quartz Worker Threads

I'm working on an application that consists of an overall Quartz-based scheduler and "CycledJob"s run using CronTriggers. The purpose of the application is to process inputs from different email inboxes based on the source country. Based on the country the input comes in from (i.e. US, UK, FR, etc.), the application triggers one job thread to run each country's processing cycle, so there would be a UK worker thread, one for the US, one for France, etc.

When formatting the output to log4j, I'm using the thread parameter, so it emits [ApplicationName_Worker-1], [ApplicationName_Worker-2], etc. Try as I might, I can't find a way to name the threads, since they're pulled out of Quartz's thread pools. Although I could possibly go so far as to extend Quartz, I'd like to find a different solution instead of messing with the standard library.

Here's the problem: when using log4j, I'd like to have all log items from the US thread output to a US-only file, and likewise for each of the country threads. I don't care if they stay in one unified ConsoleAppender; the FileAppender split is what I'm after here. I already know how to specify multiple file appenders and such; my issue is that I can't differentiate based on country. There are 20+ classes within the application that can be on the execution chain, very few of which I want to burden with the knowledge of passing an extra "context" parameter through EVERY method... I've considered a Strategy pattern extending a log4j wrapper class, but unless I can let every class in the chain know which thread it's on to parameterize the logger call, that seems impossible. Not being able to name the threads creates an additional challenge (or else this would be easy!).
So here's the question: what would be a suggested approach to let the many subordinate classes - each used by every country thread to process input - know that they are running in the context of a particular country thread when they log?

Good luck understanding, and please ask clarifying questions! I hope someone is able to help me figure out a decent way to tackle this. All suggestions welcome.

A: At the top of each country's processing thread, put the country code into log4j's mapped diagnostic context (MDC). This uses a ThreadLocal variable so that you don't have to pass the country up and down the call stack explicitly. Then create a custom filter that looks at the MDC, and filters out any events that don't contain the current appender's country code.

In your Job:

    ...
    public static final String MDC_COUNTRY = "com.y.foo.Country";

    public void execute(JobExecutionContext context) {
        /* Just guessing that you have the country in your JobContext. */
        MDC.put(MDC_COUNTRY, context.get(MDC_COUNTRY));
        try {
            /* Perform your job here. */
            ...
        } finally {
            MDC.remove(MDC_COUNTRY);
        }
    }
    ...

Write a custom Filter:

    package com.y.log4j;

    import org.apache.log4j.spi.LoggingEvent;

    /**
     * This is a general purpose filter. If its "value" property is null,
     * it requires only that the specified key be set in the MDC. If its
     * value is not null, it further requires that the value in the MDC
     * is equal.
     */
    public final class ContextFilter extends org.apache.log4j.spi.Filter {

        private String key;
        private String value;

        public int decide(LoggingEvent event) {
            Object ctx = event.getMDC(key);
            if (value == null)
                return (ctx != null) ? NEUTRAL : DENY;
            else
                return value.equals(ctx) ? NEUTRAL : DENY;
        }

        public void setContextKey(String key) { this.key = key; }
        public String getContextKey() { return key; }
        public void setValue(String value) { this.value = value; }
        public String getValue() { return value; }
    }

In your log4j.xml (note that the param name must match the setter, so it is "contextKey" here):

    <appender name="fr" class="org.apache.log4j.FileAppender">
        <param name="file" value="france.log"/>
        ...
        <filter class="com.y.log4j.ContextFilter">
            <param name="contextKey" value="com.y.foo.Country" />
            <param name="value" value="fr" />
        </filter>
    </appender>

A: I wish I could be a bit more helpful than this, but you may want to investigate using some filters? Perhaps your logging could output the country code and you could match your filter based on that? A StringMatchFilter should probably be able to match it for you.

I couldn't get the address below to work properly as a link, but it has some material on separate file logging using filters:

http://mail-archives.apache.org/mod_mbox/logging-log4j-user/200512.mbox/<1CC26C83B6E5AA49A9540FAC8D35158B01E2968E@pune.kaleconsultants.com > (just remove the space before the >)

http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/spi/Filter.html

A: I may be completely off base in my understanding of what you are attempting to accomplish, but I will take a stab at a solution. It sounds like you want a separate log file for each country for which you are processing email.
Based on that understanding, here is a possible solution:

1. Set up an appender in your log4j configuration for each country for which you wish to log separately (US example provided):

    log4j.appender.usfile=org.apache.log4j.FileAppender
    log4j.appender.usfile.File=us.log
    log4j.appender.usfile.layout=org.apache.log4j.PatternLayout
    log4j.appender.usfile.layout.ConversionPattern=%m%n

2. Create a logger for each country and direct each of them to the appropriate appender (US example provided):

    log4j.logger.my-us-logger=debug,usfile

3. In your code, create your Logger based on the country for which the email is being processed:

    Logger logger = Logger.getLogger("my-us-logger");

4. Determine how you will accomplish step 3 for the subsequent method calls. You could repeat step 3 in each class/method; or you could modify the method signatures to accept a Logger as input; or you could possibly use ThreadLocal to pass the Logger between methods.

Extra info: if you do not want the log statements going to parent loggers (e.g. the rootLogger), you can set their additivity flags to false (US example provided):

    log4j.additivity.my-us-logger=false

A: Why not just call Thread.setName() when your job starts to set the name of the Thread? If there is an access problem, configure Quartz to use your own thread pool.
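The MDC-plus-filter idea is not specific to log4j: the pattern is just "thread-local context, plus one filtered handler per destination". As a compact illustration of the same pattern, here is a Python sketch using the standard logging module (names like CountryFilter and run_job are invented for the example):

```python
import io
import logging
import threading

# Thread-local storage playing the role of log4j's MDC
_mdc = threading.local()

class CountryFilter(logging.Filter):
    """Pass only records logged while the current thread's context matches."""
    def __init__(self, country):
        super().__init__()
        self.country = country

    def filter(self, record):
        return getattr(_mdc, "country", None) == self.country

def add_country_handler(logger, country, stream):
    # One handler per country, each filtered on the thread-local country code
    handler = logging.StreamHandler(stream)
    handler.addFilter(CountryFilter(country))
    logger.addHandler(handler)

def run_job(logger, country):
    _mdc.country = country           # like MDC.put at the top of execute()
    try:
        logger.warning("processing %s inbox", country)
    finally:
        _mdc.country = None          # like MDC.remove in the finally block

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
us_log, fr_log = io.StringIO(), io.StringIO()
add_country_handler(logger, "us", us_log)
add_country_handler(logger, "fr", fr_log)

for cc in ("us", "fr"):
    t = threading.Thread(target=run_job, args=(logger, cc))
    t.start()
    t.join()
```

After this runs, the "us" record appears only in us_log and the "fr" record only in fr_log; in a real application the StringIO streams would be per-country files.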
{ "language": "en", "url": "https://stackoverflow.com/questions/142845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What programming languages support arbitrary precision arithmetic?

What programming languages support arbitrary precision arithmetic, and could you give a short example of how to print an arbitrary number of digits?

A: Python has this ability. There is an excellent example here. From the article:

    from math import log as _flog
    from decimal import getcontext, Decimal

    def log(x):
        if x < 0:
            return Decimal("NaN")
        if x == 0:
            return Decimal("-inf")
        getcontext().prec += 3
        eps = Decimal("10")**(-getcontext().prec+2)
        # A good initial estimate is needed
        r = Decimal(repr(_flog(float(x))))
        while 1:
            r2 = r - 1 + x/exp(r)
            if abs(r2-r) < eps:
                break
            else:
                r = r2
        getcontext().prec -= 3
        return +r

Also, the Python quick start tutorial discusses arbitrary precision: http://docs.python.org/lib/decimal-tutorial.html and describes getcontext():

    the getcontext() function accesses the current context and allows the settings to be changed.

Edit: Added clarification on getcontext.

A: In Common Lisp:

    (format t "~D~%" (expt 7 77))

"~D~%" in printf format would be "%d\n". Arbitrary precision arithmetic is built into Common Lisp.

A: Smalltalk has supported arbitrary precision Integers and Fractions from the beginning. Note that the GNU Smalltalk implementation does use GMP under the hood. I'm also developing ArbitraryPrecisionFloat for various dialects (Squeak/Pharo, VisualWorks and Dolphin); see http://www.squeaksource.com/ArbitraryPrecisionFl.html

A: Many people recommended Python's decimal module, but I would recommend using mpmath over decimal for any serious numeric uses.

A: COBOL:

    77 VALUE PIC S9(4)V9(4).

a signed variable with 4 decimals.

PL/I:

    DCL VALUE DEC FIXED (4,4);

:-) I can't remember the other old stuff... Jokes apart, as my example shows, I think you shouldn't choose a programming language depending on a single feature. Virtually all decent and recent languages support fixed precision in some dedicated classes.

A: Scheme (a variation of Lisp) has a capability called 'bignum'.
There are many good Scheme implementations available, both full language environments and embeddable scripting options. A few I can vouch for:

MIT Scheme (also referred to as GNU Scheme)
PLT Scheme
Chez Scheme
Guile (also a GNU project)
Scheme 48

A: Ruby whole numbers and floating point numbers (mathematically speaking: rational numbers) are by default not strictly tied to the classical CPU-related limits. In Ruby, integers and floats are automatically, transparently switched to some "bignum types" if the size exceeds the maximum of the classical sizes.

One probably wants to use some reasonably optimized and "complete", multifarious math library that uses the "bignums". This is where Mathematica-like software truly shines with its capabilities. As of 2011, Mathematica is extremely expensive and terribly restricted from a hacking and reshipping point of view, especially if one wants to ship the math software as a component of a small, low-priced web application or an open source project.

If one needs to do only raw number crunching, where visualizations are not required, then there exists a very viable alternative to Mathematica and Maple: the REDUCE Computer Algebra System, which is Lisp based, open source, mature (for decades) and under active development (in 2011). Like Mathematica, REDUCE uses symbolic calculation.

In recognition of Mathematica, I will say that as of 2011 it seems to me that Mathematica is the best at interactive visualizations, but I think that from a programming point of view there are more convenient alternatives, even if Mathematica were an open source project. To me it seems that Mathematica is also a bit slow and not suitable for working with huge data sets. It seems to me that the niche of Mathematica is theoretical math, not real-life number crunching.
On the other hand, the publisher of Mathematica, Wolfram Research, hosts and maintains one of the highest quality, if not THE highest quality, free-to-use math reference sites on planet Earth: http://mathworld.wolfram.com/ The online documentation system that comes bundled with Mathematica is also truly good.

When talking about speed, it's worth mentioning that REDUCE is said to run even on a Linux router. REDUCE itself is written in Lisp, but it comes with two of its very own, specific Lisp implementations. One of the Lisps is implemented in Java and the other is implemented in C. Both of them work decently, at least from a math point of view. REDUCE has two modes: the traditional "math mode" and a "programmer's mode" that allows full access to all of the internals through the language REDUCE itself is written in: Lisp.

So, my opinion is that if one looks at the amount of work it takes to write math routines, not to mention all of the symbolic calculations that are all MATURE in REDUCE, then one can save an enormous amount of time (decades, literally) by doing most of the math part in REDUCE, especially given that it has been tested and debugged by professional mathematicians over a long period of time, used for doing symbolic calculations on old-era supercomputers for real professional tasks, and works wonderfully, truly fast, on modern low end computers. Nor has it ever crashed on me, unlike at least one commercial package that I don't want to name here.

http://www.reduce-algebra.com/

To illustrate where symbolic calculation is essential in practice, consider the example of solving a system of linear equations by matrix inversion. To invert a matrix, one needs to find determinants. The rounding that takes place with the directly CPU-supported floating point types can render a matrix that theoretically has an inverse into a matrix that does not have an inverse.
This in turn introduces a situation where most of the time the software might work just fine, but if the data is a bit "unfortunate" the application crashes, despite the fact that algorithmically there's nothing wrong with the software other than the rounding of floating point numbers.

The absolute-precision rational numbers do have a serious limitation: the more computations are performed with them, the more memory they consume. As of 2011 I don't know any solution to that problem other than being careful, keeping track of the number of operations that have been performed on the numbers, and then rounding the numbers to save memory - but one has to do the rounding at a very precise stage of the calculations to avoid the aforementioned problems. If possible, the rounding should be done at the very end of the calculations, as the very last operation.

A: In PHP you have BCMath. You do not need to load any dll or compile any module. It supports numbers of any size and precision, represented as strings:

    <?php
    $a = '1.234';
    $b = '5';
    echo bcadd($a, $b);     // 6
    echo bcadd($a, $b, 4);  // 6.2340
    ?>

A: Apparently Tcl also has them, from version 8.5, courtesy of LibTomMath:

http://wiki.tcl.tk/5193
http://www.tcl.tk/cgi-bin/tct/tip/237.html
http://math.libtomcrypt.com/

A: There are several JavaScript libraries that handle arbitrary-precision arithmetic. For example, using my big.js library:

    Big.DP = 20;                   // Decimal Places
    var pi = Big(355).div(113)
    console.log( pi.toString() );  // '3.14159292035398230088'

A: In R you can use the Rmpfr package:

    library(Rmpfr)
    exp(mpfr(1, 120))
    ## 1 'mpfr' number of precision 120 bits
    ## [1] 2.7182818284590452353602874713526624979

You can find the vignette here: Arbitrarily Accurate Computation with R: The Rmpfr Package

A: Some languages have this support built in. For example, take a look at java.math.BigDecimal in Java, or decimal.Decimal in Python. Other languages frequently have a library available to provide this feature.
For example, in C you could use GMP or other options. The "Arbitrary-precision software" section of this article gives a good rundown of your options.

A: Mathematica.

    N[Pi, 100]
    3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068

Not only does Mathematica have arbitrary precision, but by default it has infinite precision. It keeps things like 1/3 as rationals, and it even maintains expressions involving things like Sqrt[2] symbolically until you ask for a numeric approximation, which you can have to any number of decimal places.

A: Java can natively do bignum operations with BigDecimal. GMP is the de facto standard bignum library for C/C++.

A: If you want to work in the .NET world, you can still use the java.math.BigDecimal class. Just add a reference to vjslib (in the framework) and then you can use the Java classes. The great thing is, they can be used from any .NET language. For example, in C#:

    using java.math;

    namespace MyNamespace
    {
        class Program
        {
            static void Main(string[] args)
            {
                BigDecimal bd = new BigDecimal("12345678901234567890.1234567890123456789");
                Console.WriteLine(bd.ToString());
            }
        }
    }

A: The (free) BASIC program X11-Basic ( http://x11-basic.sourceforge.net/ ) has arbitrary precision for integers (and some useful commands as well, e.g. nextprime( abcd...pqrs )).

A: IBM's interpreted scripting language Rexx provides custom precision setting with Numeric: https://www.ibm.com/docs/en/zos/2.1.0?topic=instructions-numeric. The language runs on mainframes and PC operating systems and has very powerful parsing and variable handling, as well as extension packages. Object Rexx is the most recent implementation. Links from https://en.wikipedia.org/wiki/Rexx

A: Haskell has excellent support for arbitrary-precision arithmetic built in, and using it is the default behavior.
At the REPL, with no imports or setup required:

    Prelude> 2 ^ 2 ^ 12
    1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336

(try this yourself at https://tryhaskell.org/)

If you're writing code stored in a file and you want to print a number, you have to convert it to a string first. The show function does that.

    module Test where

    main = do
      let x = 2 ^ 2 ^ 12
      let xStr = show x
      putStrLn xStr

(try this yourself at code.world: https://www.code.world/haskell#Pb_gPCQuqY7r77v1IHH_vWg)

What's more, Haskell's Num abstraction lets you defer deciding what type to use as long as possible.

    -- Define a function to make big numbers. The (inferred) type is generic.
    Prelude> superbig n = 2 ^ 2 ^ n

    -- We can call this function with different concrete types and get different results.
    Prelude> superbig 5 :: Int
    4294967296

    Prelude> superbig 5 :: Float
    4.2949673e9

    -- The `Int` type is not arbitrary precision, and we might overflow.
    Prelude> superbig 6 :: Int
    0

    -- `Double` can hold bigger numbers.
    Prelude> superbig 6 :: Double
    1.8446744073709552e19

    Prelude> superbig 9 :: Double
    1.3407807929942597e154

    -- But it is also not arbitrary precision, and can still overflow.
    Prelude> superbig 10 :: Double
    Infinity

    -- The Integer type is arbitrary-precision though, and can go as big as we have memory for and patience to wait for the result.
    Prelude> superbig 12 :: Integer
    1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336

    -- If we don't specify a type, Haskell will infer one with arbitrary precision.
    Prelude> superbig 12
    1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
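For comparison, the same computation in Python: the built-in int type is arbitrary precision by default, and the decimal module lets you choose how many significant digits to carry for non-integer arithmetic.

```python
from decimal import Decimal, getcontext

# The same 2 ^ 2 ^ 12 as in the Haskell session; int never overflows.
x = 2 ** 2 ** 12
print(len(str(x)))  # 1234 digits

# For non-integers, pick the number of significant digits to carry.
getcontext().prec = 50
print(Decimal(1) / Decimal(3))  # 0.333... to 50 significant digits
```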
{ "language": "en", "url": "https://stackoverflow.com/questions/142855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How do I import a pre-existing Java project into Eclipse and get up and running?

Comment on duplicate reference: why would this be marked duplicate when it was asked years prior to the question referenced as a duplicate? I also believe the question, detail, and responses are much better than the referenced question.

I've been a C++ programmer for quite a while, but I'm new to Java and new to Eclipse. I want to use the TouchGraph "Graph Layout" code to visualize some data I'm working with. This code is organized like this:

    ./com
    ./com/touchgraph
    ./com/touchgraph/graphlayout
    ./com/touchgraph/graphlayout/Edge.java
    ./com/touchgraph/graphlayout/GLPanel.java
    ./com/touchgraph/graphlayout/graphelements
    ./com/touchgraph/graphlayout/graphelements/GESUtils.java
    ./com/touchgraph/graphlayout/graphelements/GraphEltSet.java
    ./com/touchgraph/graphlayout/graphelements/ImmutableGraphEltSet.java
    ./com/touchgraph/graphlayout/graphelements/Locality.java
    ./com/touchgraph/graphlayout/graphelements/TGForEachEdge.java
    ./com/touchgraph/graphlayout/graphelements/TGForEachNode.java
    ./com/touchgraph/graphlayout/graphelements/TGForEachNodePair.java
    ./com/touchgraph/graphlayout/graphelements/TGNodeQueue.java
    ./com/touchgraph/graphlayout/graphelements/VisibleLocality.java
    ./com/touchgraph/graphlayout/GraphLayoutApplet.java
    ./com/touchgraph/graphlayout/GraphListener.java
    ./com/touchgraph/graphlayout/interaction
    ./com/touchgraph/graphlayout/interaction/DragAddUI.java
    ./com/touchgraph/graphlayout/interaction/DragMultiselectUI.java
    ./com/touchgraph/graphlayout/interaction/DragNodeUI.java
    ./com/touchgraph/graphlayout/interaction/GLEditUI.java
    ./com/touchgraph/graphlayout/interaction/GLNavigateUI.java
    ./com/touchgraph/graphlayout/interaction/HVRotateDragUI.java
    ./com/touchgraph/graphlayout/interaction/HVScroll.java
    ./com/touchgraph/graphlayout/interaction/HyperScroll.java
    ./com/touchgraph/graphlayout/interaction/LocalityScroll.java
    ./com/touchgraph/graphlayout/interaction/RotateScroll.java
    ./com/touchgraph/graphlayout/interaction/TGAbstractClickUI.java
    ./com/touchgraph/graphlayout/interaction/TGAbstractDragUI.java
    ./com/touchgraph/graphlayout/interaction/TGAbstractMouseMotionUI.java
    ./com/touchgraph/graphlayout/interaction/TGAbstractMousePausedUI.java
    ./com/touchgraph/graphlayout/interaction/TGSelfDeactivatingUI.java
    ./com/touchgraph/graphlayout/interaction/TGUIManager.java
    ./com/touchgraph/graphlayout/interaction/TGUserInterface.java
    ./com/touchgraph/graphlayout/interaction/ZoomScroll.java
    ./com/touchgraph/graphlayout/LocalityUtils.java
    ./com/touchgraph/graphlayout/Node.java
    ./com/touchgraph/graphlayout/TGAbstractLens.java
    ./com/touchgraph/graphlayout/TGException.java
    ./com/touchgraph/graphlayout/TGLayout.java
    ./com/touchgraph/graphlayout/TGLensSet.java
    ./com/touchgraph/graphlayout/TGPaintListener.java
    ./com/touchgraph/graphlayout/TGPanel.java
    ./com/touchgraph/graphlayout/TGPoint2D.java
    ./com/touchgraph/graphlayout/TGScrollPane.java
    ./TG-APACHE-LICENSE.txt
    ./TGGL ReleaseNotes.txt
    ./TGGraphLayout.html
    ./TGGraphLayout.jar

How do I add this project to Eclipse and get it compiling and running quickly?

A: In the menu go to: File -> Import -> select 'Existing Projects into Workspace' as the filter -> click Next -> browse to the project directory at 'Select root directory' -> click Finish.

A:

1. Create a new Java project in Eclipse. This will create a src folder (to contain your source files).
2. Also create a lib folder (the name isn't that important, but it follows standard conventions).
3. Copy the ./com/* folders into the /src folder (you can just do this using the OS; no need to do any fancy importing or anything from the Eclipse GUI).
4. Copy any dependencies (jar files that your project itself depends on) into /lib (note that this should NOT include the TGGL jar - thanks to commenter Mike Deck for pointing out my misinterpretation of the OP's post!)
5. Copy the other TGGL stuff into the root project folder (or some other folder dedicated to licenses that you need to distribute in your final app).
6. Back in Eclipse, select the project you created in step 1, then hit the F5 key (this refreshes Eclipse's view of the folder tree with the actual contents).
7. The content of the /src folder will get compiled automatically (with class files placed in the /bin folder that Eclipse generated for you when you created the project). If you have dependencies (which you don't in your current project, but I'll include this here for completeness), the compile will fail initially because you are missing the dependency jar files from the project classpath.
8. Finally, open the /lib folder in Eclipse, right click on each required jar file and choose Build Path -> Add to Build Path. That will add that particular jar to the classpath for the project.

Eclipse will detect the change and automatically compile the classes that failed earlier, and you should now have an Eclipse project with your app in it.

A: I think you'll have to import the project via the File -> Import wizard: http://www.coderanch.com/t/419556/vc/Open-existing-project-Eclipse

It's not the last step, but it will start you on your way. I also feel your pain - there is really no excuse for making it so difficult to do a simple thing like opening an existing project. I truly hope that the Eclipse designers focus on making the IDE simpler to use (though I applaud their efforts at trying different approaches - but please, Eclipse designers, if you are listening, never complicate something simple).

A: This assumes Eclipse and an appropriate JDK are installed on your system.

1. Open Eclipse and create a new workspace by specifying an empty directory.
2. Make sure you're in the Java perspective by selecting Window -> Open Perspective, then Other..., and then Java.
3. Right click anywhere in the Package Explorer pane and select New -> Java Project.
4. In the dialog that opens, give the project a name and then click the option that says "Create project from existing sources."
5. In the text box below the option you selected in step 4, point to the root directory where you checked out the project. This should be the directory that contains "com".
6. Click Finish.

For this particular project you don't need to do any additional setup for your classpath, since it only depends on classes that are part of the Java SE API.
{ "language": "en", "url": "https://stackoverflow.com/questions/142863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Change Oracle port from port 8080

How do I change Oracle from port 8080? My Eclipse is using 8080, so I can't use that.

A: I assume you're talking about the Apache server that Oracle installs. Look for the file httpd.conf. Open this file in a text editor and look for the line

    Listen 8080

or

    Listen {ip address}:8080

Change the port number and either restart the web server or just reboot the machine.

A: Oracle (the database) can use many ports. When you install the software it scans for free ports and decides which ports to use then. The database listener defaults to 1521 but will use another port (e.g. 1522) if that one is not available. This can be adjusted in the listener.ora file. Enterprise Manager, the web-based database administration tool, defaults to port 80 but will use 8080 if 80 is not available. See here for details on how to change the port number for Enterprise Manager: http://download-uk.oracle.com/docs/cd/B14099_19/integrate.1012/b19370/manage_oem.htm#i1012853

A: From this blog post: XE: Changing the default http port

Oracle XE uses the embedded HTTP listener that comes with XML DB (XDB) to serve HTTP requests. The default port for HTTP access is 8080. To change 8080 to whichever port you like (9090, for example):

    SQL> -- set http port
    SQL> begin
      2    dbms_xdb.sethttpport('9090');
      3  end;
      4  /

After changing the port, the web interface will no longer be on 8080; type the new port (9090) manually in the address bar to reach Oracle XE.

A: From Start | Run open a command window. Assuming your environment variables are set correctly, start with the following:

    C:\>sqlplus /nolog

    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Aug 26 10:40:44 2008
    Copyright (c) 1982, 2005, Oracle. All rights reserved.

    SQL> connect
    Enter user-name: system
    Enter password: <enter password; it will not be visible>
    Connected.
    SQL> Exec DBMS_XDB.SETHTTPPORT(3010);   [assuming you want HTTP going to this port]

    PL/SQL procedure successfully completed.
SQL> quit Then open a browser and use port 3010. A: Just open Run SQL Command Line, log in as an administrator, and then enter the command below: Exec DBMS_XDB.SETHTTPPORT(8181); That's it. You are done. A: Execute Exec DBMS_XDB.SETHTTPPORT(8181); as SYS/SYSTEM. Replace 8181 with the port you'd like to change to. Tested this with Oracle 10g. Source: http://hodentekhelp.blogspot.com/2008/08/my-oracle-10g-xe-is-on-port-8080-can-i.html A: There are many Oracle components that run a web service, so it's not clear which you are referring to. For example, the web site port for standalone OC4J is configured in the j2ee/home/config/default-web-site.xml file: <web-site xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlns.oracle.com/oracleas/schema/web-site-10_0.xsd" port="8888" display-name="OC4J 10g (10.1.3) Default Web Site" schema-major-version="10" schema-minor-version="0" > A: Log in with a system administrator account and execute the SQL procedure below. begin dbms_xdb.sethttpport('Your Port Number'); end; Then open the browser and access the URL below: http://127.0.0.1:YourPortNumber/apex/
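Editorial note: before picking a replacement port for DBMS_XDB.SETHTTPPORT, it can help to confirm the candidate port is actually free. A minimal sketch (Python purely for illustration; the host and port numbers are examples, nothing Oracle-specific):

```python
import socket

def port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something already accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: pick the candidate ports nothing is listening on yet.
candidates = [8080, 9090, 8181]
free_ports = [p for p in candidates if not port_in_use("127.0.0.1", p)]
```

Running the check before and after calling SETHTTPPORT also verifies that the listener really moved to the new port.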
{ "language": "en", "url": "https://stackoverflow.com/questions/142868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: Can the C preprocessor be used to tell if a file exists? I have a very large codebase (read: thousands of modules) that has code shared across numerous projects that all run on different operating systems with different C++ compilers. Needless to say, maintaining the build process can be quite a chore. There are several places in the codebase where it would clean up the code substantially if only there were a way to make the pre-processor ignore certain #includes if the file didn't exist in the current folder. Does anyone know a way to achieve that? Presently, we use an #ifdef around the #include in the shared file, with a second project-specific file that #defines whether or not the #include exists in the project. This works, but it's ugly. People often forget to properly update the definitions when they add or remove files from the project. I've contemplated writing a pre-build tool to keep this file up to date, but if there's a platform-independent way to do this with the preprocessor I'd much rather do it that way instead. Any ideas? A: Another possibility: populate a directory somewhere with zero-length versions of all of the headers you wish to optionally include. Pass a -I argument to this directory as the last such option. The GCC cpp searches its include directories in order, if it finds a header file in an earlier directory it will use it. Otherwise, it will eventually find the zero-length file, and be happy. I presume that other cpp implementations also search their include directories in the order specified. 
A: Create a special folder for missing headers, and make that folder be searched last (how is compiler-specific - the last item in the "INCLUDES" environment variable, something like that). Then, if some header1.h may be missing, create a stub header1.h in that folder: #define header1_is_missing Now you can always write #include <header1.h> #ifdef header1_is_missing // there is no header1.h #endif A: You could have a pre-build step run that generates an include file that contains a list of #defines that represent the names of the files existing in the current directory: #define EXISTS_FILE1_C #define EXISTS_FILE1_H #define EXISTS_FILE2_C Then, include that file from within your source code, and your source can test the EXISTS_* defines to see whether a file exists or not. A: So far as I know, cpp does not have a directive regarding the existence of a file. You might be able to accomplish this with a bit of help from the Makefile, if you're using the same make across platforms. You can detect the presence of a file in the Makefile: ifneq ($(wildcard header1.h),) CFLAGS += -DHEADER1_INC endif As @Greg Hewgill mentions, you can then make your #includes conditional: #ifdef HEADER1_INC #include <header1.h> #endif A: Generally this is done by using a script that tries running the preprocessor on an attempt at including the file. Depending on whether the preprocessor returns an error, the script updates a generated .h file with an appropriate #define (or #undef). In bash, the script might look vaguely like this: cat > .test.h <<'EOM' #include <asdf.h> EOM if gcc -E .test.h then echo '#define HAVE_ASDF_H 1' >> config.h else echo '#ifdef HAVE_ASDF_H' >> config.h echo '# undef HAVE_ASDF_H' >> config.h echo '#endif' >> config.h fi A pretty thorough framework for portably working with portability checks like this (as well as thousands of others) is autoconf. 
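Editorial note: the "pre-build step that generates an include file" suggested above is only a few lines of scripting in any language. A sketch of such a generator (Python for illustration; the candidate file names and the EXISTS_* macro scheme are just the examples from the answer):

```python
import os
import re

def make_exists_header(src_dir, candidates):
    """Emit one '#define EXISTS_<NAME>' line per candidate file found in src_dir."""
    lines = ["/* generated by the pre-build step - do not edit */"]
    for name in candidates:
        if os.path.exists(os.path.join(src_dir, name)):
            # Turn "file1.h" into a macro-safe name like EXISTS_FILE1_H
            macro = "EXISTS_" + re.sub(r"[^A-Za-z0-9]", "_", name).upper()
            lines.append("#define " + macro)
    return "\n".join(lines) + "\n"
```

Writing the result to, say, exists_config.h and including it from the shared code lets each translation unit test EXISTS_FILE1_H with a plain #ifdef, with no hand-maintained definitions file.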
A: Contrary to some claims here and on the internet, Visual Studio 2015 does NOT support the __has_include feature - at least according to my experience. Tested with Update 3. The rumors may have arisen from the fact that VS 2017 is also referred to as "Version 15"; VS 2015 is instead referred to as "Version 14". Support for the feature seems to have been officially introduced with "Visual Studio 2017 Version 15.3". A: Little update: some compilers might support __has_include ( header-name ). The extension was added to the C++17 standard (P0061R1). Compiler support: * *Clang *GCC from 5.X *Visual Studio from VS2015 Update 2 (?) Example (from the clang website): // Note the two possible file name string formats. #if __has_include("myinclude.h") && __has_include(<stdint.h>) # include "myinclude.h" #endif Sources * *SD-6: SG10 Feature Test Recommendations *Clang Language Extensions A: The preprocessor itself cannot identify the existence of files, but you certainly can use the build environment to do so. I'm mostly familiar with make, which would allow you to do something like this in your makefile: ifneq ($(shell test -f filename && echo present),) DEFINE = -DFILENAME_PRESENT endif Of course, you'd have to find an analog to this in other build environments like Visual Studio, but I'm sure they exist. A: I had to do something similar for the Symbian OS. This is how I did it: let's say you want to check if the file "file_strange.h" exists and you want to include some headers or link to some libraries depending on the existence of that file. First, create a small batch file for checking the existence of that file. autoconf is good but overkill for many small projects. 
----------check.bat @echo off IF EXIST \epoc32\include\domain\middleware\file_strange.h GOTO NEW_API GOTO OLD_API GOTO :EOF :NEW_API echo.#define NEW_API_SUPPORTED>../inc/file_strange_supported.h GOTO :EOF :OLD_API echo.#define OLD_API_SUPPORTED>../inc/file_strange_supported.h GOTO :EOF ----------check.bat ends Then I created a gnumake file: ----------checkmedialist.mk do_nothing : @rem do_nothing MAKMAKE : check.bat BLD : do_nothing CLEAN : do_nothing LIB : do_nothing CLEANLIB : do_nothing RESOURCE : do_nothing FREEZE : do_nothing SAVESPACE : do_nothing RELEASABLES : do_nothing FINAL : do_nothing ----------checkmedialist.mk ends Include the checkmedialist.mk file in your bld.inf file; it MUST come before your MMP files: PRJ_MMPFILES gnumakefile checkmedialist.mk Now, at compile time, the file file_strange_supported.h will have an appropriate flag set. You can use this flag in your .cpp files or even in the .mmp file. For example, in the .mmp: #include "../inc/file_strange_supported.h" #ifdef NEW_API_SUPPORTED LIBRARY newapi.lib #else LIBRARY oldapi.lib #endif And in the .cpp: #include "../inc/file_strange_supported.h" #ifdef NEW_API_SUPPORTED CStrangeApi* api = Api::NewLC(); #else // .. #endif
{ "language": "en", "url": "https://stackoverflow.com/questions/142877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87" }
Q: Which language is better for general purpose programming, F# or Haskell? I'm currently learning Haskell. Which language (F# or Haskell) do you prefer for programming general purpose applications? Which do you think is the stronger language? A: I'd say F#, as you can access the entire .Net framework. However, that's more of a library thing. A: It depends what you want to do: Haskell is the more purely functional language of the two. F# is more of a hybrid language, and not purely functional, but has a great set of base class libraries that you can use to do modern things easily on Windows or Mono. A: I think Jon Harrop has a serious downer on Haskell for some reason. It's simply not true that it is not used outside of academia; in fact, it is widely used in investment banking - far more so than F# and OCaml are, and for good reason. If you want a job in functional programming, then learn Haskell, as there are far more posts advertised for Haskell programmers than for F# or OCaml. I'm sure F# will gain popularity as it has Microsoft behind it and it is starting from zero, but at the moment Haskell has a clear lead. Probably 2 or 3 years ago, OCaml led the field in practical functional languages, but since then Haskell has overtaken it with more libraries, more features, better performance and wider commercial use. A: You might find this blog post by Neil Mitchell informative: F# From a Haskell Perspective The comments are also illuminating. A: I prefer Haskell. Jon Harrop's claim that Haskell has poor tools caused me to think a bit, since I quite disagree with this. I think that the issue here is partly one of development style. Let's compare a few tool-related characteristics of F# and GHC: * *F# has extensive visual tools and GHC has none. For me, the lack of visual tools is irrelevant: I work with vi, a Unix command line, and a heavily custom build system. The lack of support for my style of development in F# would be very trying for me. 
On the other hand, if you prefer working under a Visual-Studio-type environment, you'd have quite the reverse opinion. *F# and/or .NET I understand has a very good debugger. GHC has only a limited debugger that runs in the interpreter. I've not used a debugger in years (much of this due to using test-driven development) and when you work mostly with pure functions, as in Haskell, a debugger is much less necessary. So for me, the lack of this tool is fairly irrelevant. *Libraries. This depends mostly on what libraries you need, doesn't it? Lots of good ones doesn't help if the one you need isn't there, and having lots of poorly-designed libraries may not be so helpful. Haskell certainly has fewer libraries than .NET, but it does have a reasonable selection, and the quality of the API design in many of them is very, very high. I don't know what F#'s interface into native code libraries is like, but GHC is great for this, due to the fantastic FFI. I wrote a Windows DDE server entirely in Haskell (yes—not a line of C, not even to deal with callbacks from Windows C libraries) and it took considerably less time and was considerably simpler than doing the same thing in C or C++. If you need native code interfaces, Haskell is certainly the better choice. The "unpredictability" of memory usage and performance is a good point. Haskell seems to me actually reasonably predictable if you know what you're doing, but you won't know what you're doing when you start out, and you'll have a lot to learn. F# is much more similar to other .NET languages. Overall, this question probably comes down more to the platform than the language: the huge difference between the "Unixy world" of GHC generating native code and the "Windowsy world" of F# running on .NET is not a language issue. A: I'd say it depends on why you are learning it. If you are doing it for the experience of a pure functional language, go for Haskell. 
But if you are definitely going to use the language for more than that, F# might be the better choice. A: I'd go for Haskell. HackageDB is a great collection of libraries that are written specifically for the language. In the case of F#, you'd have to use mostly libraries that are not written with a functional language in mind, so they will not be as 'elegant' to use. But, of course, it depends largely on how much functional programming you want to do and the constraints of the project you want to use it for. Even 'general purpose' does not mean it should be used in all cases ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/142878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Working with FedEx API and .Net I am trying to get a list of rates for all available services from FedEx using one call, and having a tough time with their documentation. Anyone have some code snippets of how you interfaced with them? .Net code is preferable, but anything will help more than their crappy code samples! Thanks A: Maybe have a look over here: http://fedex.com/us/solutions/shipapi/sample_code.html/ If you want to consume a web service, Visual Studio makes your life easy, as all you have to do is import a web reference (or service reference, as it is called with WCF) and consume it.
{ "language": "en", "url": "https://stackoverflow.com/questions/142886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is the PageAction.Details route necessary in the default Dynamic Data template? In the default Visual Studio template for a dynamic data web application, Global.asax includes the following two sample routes. // route #1 routes.Add(new DynamicDataRoute("{table}/ListDetails.aspx") { Action = PageAction.List, ViewName = "ListDetails", Model = model }); // route #2 routes.Add(new DynamicDataRoute("{table}/ListDetails.aspx") { Action = PageAction.Details, ViewName = "ListDetails", Model = model }); They only differ by the Action property. The comments in Global.asax indicate the two routes are used to configure a single page that handles all CRUD behaviors. Why is route #2 necessary? Does it do anything? ListDetails.aspx does not look at the Action property of the route. It seems that everything runs fine when I comment out route #2 and only have route #1 in Global.asax. Route #2 looks like it's not used. A: You're right, route #2 isn't going to be used in this instance. The only time route #2 would come into play is if you were requesting a details page URL from the route engine. Because the ListDetails.aspx page template handles both the list and details views, it never requests a details template URL.
{ "language": "en", "url": "https://stackoverflow.com/questions/142890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: LINQ group by question I started playing around with LINQ today and ran into a problem I couldn't find an answer to. I was querying a simple SQL Server database that had some employee records. One of the fields is the full name (cn). I thought it would be interesting to group by the first name by splitting the full name at the first space. I tried group by person.cn.Split(separators)[0] but ran into a lengthy runtime exception (looked a lot like a C++ template instantiation error). Then I tried grouping by a few letters of the first name: group by person.cn.Substring(0,5) and that worked fine, but is not what I want. I'm wondering about two things: * *Why does the first example not work when it looks so close to the second? *Knowing that behind the scenes it's SQL stuff going on, what's a good way to do this kind of thing efficiently? Thanks, Andrew A: Split has no translation into SQL. So, how to do this string manipulation without Split? Cheat like hell (untested): string oneSpace = " "; string fiftySpace = new string(' ', 50); var query = from person in db.Persons let firstname = person.cn.Replace(oneSpace, fiftySpace).Substring(0, 50).Trim() group person by firstname into g select new { Key = g.Key, Count = g.Count() }; A: The reason your first attempt didn't work is because LINQ to SQL uses expression trees to translate your query into SQL. As a result, any code that isn't directly translatable into SQL is an exception - this includes the call to Split. A: Thanks guys, I'll try the "Replace" trick to see if that runs. I'm very intrigued by LINQ, but now it looks like there's some hidden mysteriousness where you have to know what your LINQ queries translate into before being able to use it effectively. The core problem is of course that I don't know SQL very well, so that's where I'll start. Edit: I finally tried the "Replace" today and it works. I even got to sort the grouped results by count, so now I have a Pareto of first names in my company. It's horrendously slow, though. 
Much faster to select everything and do the bucketing in C# directly. Thanks again, Andrew
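Editorial note: the reason the Replace trick above is equivalent to Split for this purpose is easy to check outside SQL - padding each space to a fixed width pushes everything after the first space beyond the window you keep. A quick sketch (Python just for demonstration; it only holds while the first token is shorter than the pad width):

```python
def first_token_via_pad(s, width=50):
    """Mimic the LINQ-to-SQL trick: widen each space, take a fixed-width
    prefix, then trim - no Split() needed."""
    return s.replace(" ", " " * width)[:width].strip()

names = ["Andrew Smith", "Mary Jane Watson", "Cher"]
# Same result as taking the first token with split()
assert [first_token_via_pad(n) for n in names] == [n.split(" ")[0] for n in names]
```

Because Replace, Substring, and Trim all translate to SQL (REPLACE, SUBSTRING, LTRIM/RTRIM), the whole grouping can run server-side, unlike Split.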
{ "language": "en", "url": "https://stackoverflow.com/questions/142903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Image Recognition I'd like to do some work with the nitty-gritty of computer imaging. I'm looking for a way to read single pixels of data, analyze them programmatically, and change them. What is the best language to use for this (Python, C++, Java...)? What is the best file format? I don't want any super fancy software/APIs... I'm looking for the bare basics. A: If you need speed (you'll probably always want speed with image processing) you definitely have to work with raw pixel data. Java has some real disadvantages, as you cannot access memory directly, which makes pixel access quite slow compared to accessing the memory directly. C++ is definitely the language of choice for production-use image processing. But you can, for example, also use C#, as it allows for unsafe code in specific areas. (Take a look at the Scan0 pointer property of the BitmapData class.) I've used C# successfully for image processing applications, and they are definitely much faster than their Java counterparts. I would not use any scripting language or Java for such a purpose. A: It's very easy to manipulate the large multi-dimensional or complex arrays of pixel information that are pictures using high-level languages such as Python. There's a library called PIL (the Python Imaging Library) that is quite useful and will let you do general filters and transformations (change the brightness, soften, desaturate, crop, etc.) as well as manipulate the raw pixel data. It is the easiest and simplest image library I've used to date and can be extended to do whatever it is you're interested in (edge detection in very little code, for example). A: I studied Artificial Intelligence and Computer Vision, thus I know pretty well the kind of tools that are used in this field. 
Now, depending on what you want to achieve, you can either use: * *The C language, but you will lose a lot of time in bug checking and memory management when implementing your algorithms. So theoretically, this is the fastest language for that kind of job, but if your algorithms are not computationally efficient (in terms of complexity) or if you lose too much time in bug checking, this is clearly not worth it. So I would advise first implementing your application in another language, and then later you can always optimize small parts of your code with C bindings. *Octave/MatLab: a very efficient language, almost as efficient as C, and you can write very elegant and succinct algorithms. If you are into vectorization, matrix and linear operations, you should go with that. However, you won't be able to develop a whole application with this language; it's more focused on algorithms, but you can always develop an interface using another language later. *Python: an all-in-one, elegant and accessible language, used in gigantically large-scale applications such as Google and Facebook. You can do pretty much everything you want with Python, any kind of application. It will be perfectly adapted if you want to make a full application (with client interaction and all, not only algorithms), or if you want to quickly draft a prototype using existing libraries, since Python has a very large set of high-quality libraries, like OpenCV. However, if you only want to write algorithms, you are better off using Octave/MatLab. The answer that was selected as a solution is very biased, and you should be careful about this kind of archaic comment. Nowadays, hardware is cheaper than wetware (humans), and thus, you should use languages where you will be able to produce results faster, even if it's at the cost of a few CPU cycles or memory space. Also, a lot of people tend to think that as long as you implement your software in C/C++, you are making the Holy Grail of speed: this is just not true. 
First, because algorithm complexity matters a lot more than the language you are using (a bad algorithm will never beat a better algorithm, even if implemented in the slowest language in the universe), and secondly because high-level languages nowadays do a lot of caching and speed optimization for you, which can make your program run even faster than in C/C++. Of course, you can always do everything of the above in C/C++, but how much of your time are you willing to waste reinventing the wheel? A: Not only will C/C++ be faster, but most of the image processing sample code you find out there will be in C as well, so it will be easier to incorporate things you find. A: (This might not apply for the OP, who only wanted the bare basics -- but now that the speed issue was brought up, I do need to write this, just for the record.) If you really need speed, it's better to forget about working on the pixel-by-pixel level and rather see whether the operations that you need to perform could be vectorized. For example, for your C/C++ code you could use the excellent Intel IPP library (no, I don't work for Intel). A: If you are looking to do numerical work on your images (think matrix) and you're into Python, check out http://www.scipy.org/PyLab - this is basically the ability to do MATLAB in Python; a buddy of mine swears by it. A: It depends a little on what you're trying to do. If runtime speed is your issue, then C++ is the best way to go. If speed of development is an issue, though, I would suggest looking at Java. You said that you wanted low-level manipulation of pixels, which Java will do for you. But the other thing that might be an issue is the handling of the various file formats. Java does have some very nice APIs to deal with the reading and writing of various image formats to file (in particular the Java2D library. 
You can choose to ignore the higher levels of the API.) If you do go for the C++ option (or Python, come to think of it), I would again suggest the use of a library to get you over the startup issues of reading and writing files. I've previously had success with libgd. A: What language do you know the best? To me, this is the real question. If you're going to spend months and months learning one particular language, then there's no real advantage in using Python or Java just for their (to be proven) development speed. I'm particularly proficient in C++, and I think that for this particular task I can be as speedy as a Java programmer, for example. With the aid of some good library (OpenCV comes to mind) you can create anything you need in a matter of a couple of lines of C++ code, really. A: Short answer: C++ and OpenCV A: Short answer? I'd say C++; you have far more flexibility in manipulating raw chunks of memory than in Python or Java.
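Editorial note: as for the "bare basics" the question asks about - stripped of any library, per-pixel processing is just a loop over a 2-D array of intensity values. A minimal sketch of a threshold filter (pure Python; the grayscale values 0-255 are made up for the example), showing the read/analyze/change cycle that every imaging library, PIL's getpixel/putpixel included, wraps for you:

```python
def threshold(image, cutoff=128):
    """Read each pixel, test it against the cutoff, and write 0 or 255."""
    return [[255 if px >= cutoff else 0 for px in row] for row in image]

# A tiny 2x3 grayscale "image"
gray = [
    [10, 200, 90],
    [130, 40, 255],
]
binary = threshold(gray)  # [[0, 255, 0], [255, 0, 255]]
```

With a real library you would do the same thing over pixels loaded from a PNG or BMP; the inner logic does not change, only where the array comes from.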
{ "language": "en", "url": "https://stackoverflow.com/questions/142928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you implement a highpass filter for the iPhone accelerometer? I remember seeing the code for a highpass filter a few days back somewhere in the samples; however, I can't find it anywhere now! Could someone remind me where the highpass filter implementation code was? Or better yet, post the algorithm? Thanks! A: Just in case someone wants to know, the highpass filter can be found in the Accelerometer Graph sample. A: From the idevkit.com forums: #define kFilteringFactor 0.1 static UIAccelerationValue rollingX=0, rollingY=0, rollingZ=0; - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration { // Calculate low-pass values rollingX = (acceleration.x * kFilteringFactor) + (rollingX * (1.0 - kFilteringFactor)); rollingY = (acceleration.y * kFilteringFactor) + (rollingY * (1.0 - kFilteringFactor)); rollingZ = (acceleration.z * kFilteringFactor) + (rollingZ * (1.0 - kFilteringFactor)); // Subtract the low-pass value from the current value to get a simplified high-pass filter float accelX = acceleration.x - rollingX; float accelY = acceleration.y - rollingY; float accelZ = acceleration.z - rollingZ; // Use the acceleration data. } A: Here's the link; I was looking for this one too. This is an example of adaptive and non-adaptive highpass and lowpass filters: Apple iOS Reference Library - AccelerometerGraph Example
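Editorial note: the math in the snippet above is easy to sanity-check in isolation. The rolling value is an exponentially-weighted low-pass, and subtracting it from the sample gives the high-pass output, which decays to zero for a constant input such as gravity. A sketch of one axis (Python, same 0.1 filtering factor as the Objective-C sample):

```python
K = 0.1  # kFilteringFactor from the Objective-C sample

def step(sample, rolling):
    """One filter update: returns (high-pass output, new low-pass accumulator)."""
    rolling = sample * K + rolling * (1.0 - K)
    return sample - rolling, rolling

rolling = 0.0
highs = []
for sample in [1.0] * 50:  # a constant 1 g input, e.g. gravity on one axis
    hp, rolling = step(sample, rolling)
    highs.append(hp)
# The high-pass output starts at 0.9 and decays toward zero,
# which is exactly why the filter removes the gravity component.
```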
{ "language": "en", "url": "https://stackoverflow.com/questions/142944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How can I use functional programming in the real world? Functional languages are good because they avoid bugs by eliminating state, but also because they can be easily parallelized automatically for you, without you having to worry about the thread count. As a Win32 developer, though, can I use Haskell for some DLL files of my application? And if I do, is there a real advantage that would be taken automatically for me? If so, what gives me this advantage? The compiler? Does F# parallelize functions you write across multiple cores and CPUs automatically for you? Would you ever see the thread count in Task Manager increase? How can I start using Haskell in a practical way, and will I really see some benefits if I do? A: You didn't mention, but I'm assuming, that you're using C++. One potentially easy way to get into functional is via C++/CLI to F#. C++ contains "magic pixie dust" (called IJW: It Just Works) to allow you to call into and out of managed code. With this, calling F# code is almost as simple as it is from C#. I've used this in one program (FreeSWITCH), which is written entirely in C/C++. With a single managed C++/CLI file (use the /clr switch), it magically transitions into managed code, and from there, I can go load my F# plugins and execute them. To make things even easier for deployment, F# can statically link all its dependencies, so you don't need to deploy the F# runtime files. One other thing that makes CLR code attractive is that you can pass managed code (delegates) to C code, and the runtime automatically makes a thunk for you. If you decide to go the Haskell way, the feature you'll be looking for is FFI: Foreign Function Interface. However, I don't think it'll give you the same level of integration as C++/CLI with F#. A: I'm currently learning Haskell myself. When you start out learning it, it doesn't seem very intriguing, because the learning experience is nothing like learning a language like C#. 
It's a whole new world, but I noticed I could write very very complex expressions in just a few lines of code. When I looked back at the code, it was much more concise; it was small and tight. I'm absolutely loving it! You can indeed write real-world programs that will be smaller, easier to maintain, and much more complex than most other languages allow. I vote for you to learn it!! A: It seems like the book Real World Haskell is just what you're looking for. You can read it free online. A: Since you mention Win32 and DLLs, I presume you're working with unmanaged code. In that case, GHC will work very well for you. Late last year I wrote a DDE server under Windows using FFI to talk to the Microsoft DDE libraries, and, surprisingly, it was an extremely pleasant experience (especially given that I'm a Unix guy). Haskell's FFI is powerful (even supporting, e.g., callbacks into Haskell functions from C or other libraries), and having Haskell's type checking when writing C-level code is like a dream come true. That last point is one of the major advantages of Haskell: the type system is amazing. That said, it's like any powerful tool; it needs time and effort to make good use of it. So yes, it is possible to start out writing small bits of code in Haskell that link into the rest of your code (though you may find it easier to start with small Haskell programs that link to your other code), and it's well worth spending a fair amount of time learning about this and using it wherever you can. You may end up like me, planning a fairly major project tightly integrated with Windows code (in my case, a sophisticated Excel add-in) in Haskell. A: F# does not contain any magic pixie dust that will pass functions off to different CPUs or machines. F#/Haskell and other functional programming languages make it easier for you to write functions that can be processed independent of the thread or CPU they were created on. 
I don't feel right posting a link here to a podcast I participate in. It seems a little off, but in the Herding Code episode, where we talked with Matt Podwysocki, we asked the same question and he gave some interesting answers. There are also a lot of good links relating to functional programming in that episode. I found one link titled "Why Functional Programming Matters". That may provide some answers for you. A: This might also be interesting: "Real World Functional Programming" Examples are in F# and C#, but the theory is fairly generic. From what I've read (pre-release) it is definitely interesting, but so far I think it is making me want to stick more and more with C#, using libraries like Parallel Extensions.
{ "language": "en", "url": "https://stackoverflow.com/questions/142948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: Handling context-path refs when migrating "/" site to Java EE packaging An existing Java site is designed to run under "/" on Tomcat, and there are many specific references to fixed absolute paths like "/dir/dir/page". I want to migrate this to Java EE packaging, where the site will need to run under a context root, e.g. "/dir/dir/page" becomes "/my-context-root/dir/dir/page". Now, the context root can easily be obtained with ServletRequest.getContextPath(), but that still means a lot of code changes to migrate a large code base. Most of these references are in literal HTML. I've experimented with using servlet filters to do rewrites on the outbound HTML, and that seems to work fine. But it does introduce some overhead, and I wouldn't see it as a permanent solution. (See EnforceContextRootFilter-1.0-src.zip for the servlet filter approach.) Are there any better approaches to solving this problem? Anything obvious I'm missing? All comments appreciated! A: Check out a related question. Also consider URLRewriteFilter. Another thing (I keep editing this darn post): if you're using JSP (versus static HTML or something else) you could also create a Tag File to replace the common HTML tags with links (notably a, img, form). So <a href="/root/path">link</a> can become <t:a href="/root/path">link</t:a>. Then the tag can do the translation for you. This change can be easily done "en masse", using something like sed. sed -e 's/<a/<t:a/g' -e 's/<\/a>/<\/t:a>/g' old/x.jsp > new/x.jsp Form actions may be a bit trickier than sed, but you get the idea. A: The Apache world used redirects (mod_rewrite) to do the same. The servlet world started using filters. The Ruby world (or RoR) does more of the same stuff, and they call it routing. So, there's no getting around it (unless you want to use smart regex throughout -- which has been tried and works just fine).
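Editorial note: the outbound-HTML rewrite that the servlet filter performs boils down to prefixing root-relative URL attributes with the context root. A deliberately naive sketch of that transformation (Python regex for illustration only; a real filter also has to cope with single quotes, inline JavaScript, redirect headers, and so on):

```python
import re

def add_context_root(html, context_root):
    """Prefix root-relative href/src/action values with the context root.
    Leaves protocol-relative URLs ("//...") alone."""
    return re.sub(
        r'\b(href|src|action)="/(?!/)',
        lambda m: '%s="%s/' % (m.group(1), context_root),
        html,
    )

page = '<a href="/dir/dir/page">x</a> <img src="/img/logo.png">'
out = add_context_root(page, "/my-context-root")
# '<a href="/my-context-root/dir/dir/page">x</a> <img src="/my-context-root/img/logo.png">'
```

This is essentially what URLRewriteFilter-style response wrapping does for you, which is also why it carries a per-response cost.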
{ "language": "en", "url": "https://stackoverflow.com/questions/142965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: C#: Convert COMP-3 Packed Decimal to Human-Readable Value I have a series of ASCII flat files coming in from a mainframe to be processed by a C# application. A new feed has been introduced with a Packed Decimal (COMP-3) field, which needs to be converted to a numerical value. The files are being transferred via FTP, using ASCII transfer mode. I am concerned that the binary field may contain what will be interpreted as very-low ASCII codes or control characters instead of a value - Or worse, may be lost in the FTP process. What's more, the fields are being read as strings. I may have the flexibility to work around this part (i.e. a stream of some sort), but the business will give me pushback. The requirement read "Convert from HEX to ASCII", but clearly that didn't yield the correct values. Any help would be appreciated; it need not be language-specific as long as you can explain the logic of the conversion process. A: I have been watching the posts on numerous boards concerning converting Comp-3 BCD data from "legacy" mainframe files to something useable in C#. First, I would like to say that I am less than enamoured by the responses that some of these posts have received - especially those that have said essentially "why are you bothering us with these non-C#/C++ related posts" and also "If you need an answer about some sort of COBOL convention, why don't you go visit a COBOL oriented site". This, to me, is complete BS as there is going to be a need for probably many years to come, (unfortunately), for software developers to understand how to deal with some of these legacy issues that exist in THE REAL WORLD. So, even if I get slammed on this post for the following code, I am going to share with you a REAL WORLD experience that I had to deal with regarding COMP-3/EBCDIC conversion (and yes, I am he who talks of "floppy disks, paper-tape, Disc Packs etc... - I have been a software engineer since 1979"). 
First - understand that any file that you read from a legacy main-frame system like IBM is going to present the data to you in EBCDIC format, and in order to convert any of that data to a C#/C++ string you can deal with, you are going to have to use the proper code page translation to get the data into ASCII format. A good example of how to handle this would be: StreamReader readFile = new StreamReader(path, Encoding.GetEncoding(37)); // code page 37 = EBCDIC to ASCII translation. This will ensure that anything that you read from this stream will then be converted to ASCII and can be used in a string format. This includes "Zoned Decimal" (Pic 9) and "Text" (Pic X) fields as declared by COBOL. However, this does not necessarily convert COMP-3 fields to the correct "binary" equivalent when read into a char[] or byte[] array. The only way that you are ever going to get those translated properly (regardless of code page - UTF-8, UTF-16, Default or whatever) is to open the file like this: FileStream fileStream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read); Of course, the "FileShare.Read" option is optional. When you have isolated the field that you want to convert to a decimal value (and then subsequently to an ASCII string if need be), you can use the following code - and this has been basically stolen from the Microsoft "UnpackDecimal" posting that you can get at: http://www.microsoft.com/downloads/details.aspx?familyid=0e4bba52-cc52-4d89-8590-cda297ff7fbd&displaylang=en I have isolated (I think) what are the most important parts of this logic and consolidated it into a method that you can do with what you want. For my purposes, I chose to leave this as returning a Decimal value which I could then do with what I wanted.
Basically, the method is called "unpack" and you pass it a byte[] array (no longer than 12 bytes) and the scale as an int, which is the number of decimal places you want to have returned in the Decimal value. I hope this works for you as well as it did for me. private Decimal Unpack(byte[] inp, int scale) { long lo = 0; long mid = 0; long hi = 0; bool isNegative; // this nybble stores only the sign, not a digit. // "C" hex is positive, "D" hex is negative, and "F" hex is unsigned. switch (nibble(inp, 0)) { case 0x0D: isNegative = true; break; case 0x0F: case 0x0C: isNegative = false; break; default: throw new Exception("Bad sign nibble"); } long intermediate; long carry; long digit; for (int j = inp.Length * 2 - 1; j > 0; j--) { // multiply by 10 intermediate = lo * 10; lo = intermediate & 0xffffffff; carry = intermediate >> 32; intermediate = mid * 10 + carry; mid = intermediate & 0xffffffff; carry = intermediate >> 32; intermediate = hi * 10 + carry; hi = intermediate & 0xffffffff; carry = intermediate >> 32; // By limiting input length to 14, we ensure overflow will never occur digit = nibble(inp, j); if (digit > 9) { throw new Exception("Bad digit"); } intermediate = lo + digit; lo = intermediate & 0xffffffff; carry = intermediate >> 32; if (carry > 0) { intermediate = mid + carry; mid = intermediate & 0xffffffff; carry = intermediate >> 32; if (carry > 0) { intermediate = hi + carry; hi = intermediate & 0xffffffff; carry = intermediate >> 32; // carry should never be non-zero. Back up with validation } } } return new Decimal((int)lo, (int)mid, (int)hi, isNegative, (byte)scale); } private int nibble(byte[] inp, int nibbleNo) { int b = inp[inp.Length - 1 - nibbleNo / 2]; return (nibbleNo % 2 == 0) ? (b & 0x0000000F) : (b >> 4); } If you have any questions, post them on here - because I suspect that I am going to get "flamed" like everyone else who has chosen to post questions that are pertinent to todays issues... Thanks, John - The Elder. 
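Since the question says the answer need not be language-specific, here is the same nibble-walking logic as a short Python sketch. This is a hypothetical helper (not the Microsoft "UnpackDecimal" sample), assuming the common sign-nibble convention described above (0xC positive, 0xD negative, 0xF unsigned):

```python
def unpack_comp3(data, scale=0):
    """Decode an IBM COMP-3 (packed decimal) field.

    Every byte holds two 4-bit digits; the very last nibble is the
    sign, not a digit: 0xC positive, 0xD negative, 0xF unsigned.
    """
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)     # high nibble first
        nibbles.append(b & 0x0F)   # then low nibble
    sign = nibbles.pop()           # trailing sign nibble
    if sign not in (0x0C, 0x0D, 0x0F):
        raise ValueError("bad sign nibble: %X" % sign)
    value = 0
    for digit in nibbles:
        if digit > 9:
            raise ValueError("bad digit: %X" % digit)
        value = value * 10 + digit
    value /= 10 ** scale           # apply the implied decimal places
    return -value if sign == 0x0D else value
```

For example, the bytes X'01234F' with one implied decimal place decode to 123.4, and X'01230D' decodes to -123.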
A: First of all you must eliminate the end of line (EOL) translation problems that will be caused by ASCII transfer mode. You are absolutely right to be concerned about data corruption when the BCD values happen to correspond to EOL characters. The worst aspect of this problem is that it will occur rarely and unexpectedly. The best solution is to change the transfer mode to BIN. This is appropriate since the data you are transferring is binary. If it is not possible to use the correct FTP transfer mode, you can undo the ASCII mode damage in code. All you have to do is convert \r\n pairs back to \n. If I were you I would make sure this is well tested. Once you've dealt with the EOL problem, the COMP-3 conversion is pretty straightforward. I was able to find this article in the MS knowledge base with sample code in BASIC. See below for a VB.NET port of this code. Since you're dealing with COMP-3 values, the file format you're reading almost surely has fixed record sizes with fixed field lengths. If I were you, I would get my hands on a file format specification before you go any further with this. You should be using a BinaryReader to work with this data. If someone is pushing back on this point, I would walk away. Let them find someone else to indulge their folly. Here's a VB.NET port of the BASIC sample code. I haven't tested this because I don't have access to a COMP-3 file. If this doesn't work, I would refer back to the original MS sample code for guidance, or to references in the other answers to this question. Imports Microsoft.VisualBasic Module Module1 'Sample COMP-3 conversion code 'Adapted from http://support.microsoft.com/kb/65323 'This code has not been tested Sub Main() Dim Digits%(15) 'Holds the digits for each number (max = 16). Dim Basiceqv#(1000) 'Holds the Basic equivalent of each COMP-3 number. 
'Added to make code compile Dim MyByte As Char, HighPower%, HighNibble% Dim LowNibble%, Digit%, E%, Decimal%, FileName$ 'Clear the screen, get the filename and the amount of decimal places 'desired for each number, and open the file for sequential input: FileName$ = InputBox("Enter the COBOL data file name: ") Decimal% = InputBox("Enter the number of decimal places desired: ") FileOpen(1, FileName$, OpenMode.Binary) Do Until EOF(1) 'Loop until the end of the file is reached. Input(1, MyByte) If MyByte = Chr(0) Then 'Check if byte is 0 (ASC won't work on 0). Digits%(HighPower%) = 0 'Make next two digits 0. Increment Digits%(HighPower% + 1) = 0 'the high power to reflect the HighPower% = HighPower% + 2 'number of digits in the number 'plus 1. Else HighNibble% = Asc(MyByte) \ 16 'Extract the high and low LowNibble% = Asc(MyByte) And &HF 'nibbles from the byte. The Digits%(HighPower%) = HighNibble% 'high nibble will always be a 'digit. If LowNibble% <= 9 Then 'If low nibble is a 'digit, assign it and Digits%(HighPower% + 1) = LowNibble% 'increment the high HighPower% = HighPower% + 2 'power accordingly. Else HighPower% = HighPower% + 1 'Low nibble was not a digit but a Digit% = 0 '+ or - signals end of number. 'Start at the highest power of 10 for the number and multiply 'each digit by the power of 10 place it occupies. For Power% = (HighPower% - 1) To 0 Step -1 Basiceqv#(E%) = Basiceqv#(E%) + (Digits%(Digit%) * (10 ^ Power%)) Digit% = Digit% + 1 Next 'If the sign read was negative, make the number negative. If LowNibble% = 13 Then Basiceqv#(E%) = Basiceqv#(E%) - (2 * Basiceqv#(E%)) End If 'Give the number the desired amount of decimal places, print 'the number, increment E% to point to the next number to be 'converted, and reinitialize the highest power. Basiceqv#(E%) = Basiceqv#(E%) / (10 ^ Decimal%) Print(Basiceqv#(E%)) E% = E% + 1 HighPower% = 0 End If End If Loop FileClose() 'Close the COBOL data file, and end. 
End Sub End Module A: If the original data was in EBCDIC your COMP-3 field has been garbled. The FTP process has done an EBCDIC to ASCII translation of the byte values in the COMP-3 field which isn't what you want. To correct this you can: 1) Use BINARY mode for the transfer so you get the raw EBCDIC data. Then you convert the COMP-3 field to a number and translate any other EBCDIC text on the record to ASCII. A packed field stores each digit in a half byte, with the low half of the final byte as the sign (F is positive and other values, usually D or B, are negative). Storing 123.4 in a PIC S9(4)V9 USAGE COMP-3 field would be X'01234F' (three bytes) and -123 in the same field is X'01230D'. 2) Have the sender convert the field into a USAGE IS DISPLAY SIGN IS LEADING (or TRAILING) numeric field. This stores the number as a string of EBCDIC numeric digits with the sign as a separate negative (-) or blank character. All digits and the sign translate correctly to their ASCII equivalent on the FTP transfer. A: I apologize if I am way off base here, but perhaps this code sample I'll paste here could help you. This came from VBRocks... Imports System Imports System.IO Imports System.Text Imports System.Text.Encoding '4/20/07 submission includes a line spacing addition when a control character is used: ' The line spacing is calculated off of the 3rd control character. ' ' Also includes the 4/18 modification of determining end of file. '4/26/07 submission includes an addition of 6 to the record length when the 4th control ' character is an 8. This is because these records were being truncated. 'Authored by Gary A. Lima, aka. VBRocks ''' <summary> ''' Translates an EBCDIC file to an ASCII file. 
''' </summary> ''' <remarks></remarks> Public Class EBCDIC_to_ASCII_Translator #Region " Example" Private Sub Example() 'Set your source file and destination file paths Dim sSourcePath As String = "c:\Temp\MyEBCDICFile" Dim sDestinationPath As String = "c:\Temp\TranslatedFile.txt" Dim trans As New EBCDIC_to_ASCII_Translator() 'If your EBCDIC file uses Control records to determine the length of a record, then this to True trans.UseControlRecord = True 'If the first record of your EBCDIC file is filler (junk), then set this to True trans.IgnoreFirstRecord = True 'EBCDIC files are written in block lengths, set your block length (Example: 134, 900, Etc.) trans.BlockLength = 900 'This method will actually translate your source file and output it to the specified destination file path trans.TranslateFile(sSourcePath, sDestinationPath) 'Here is a alternate example: 'No Control record is used 'trans.UseControlRecord = False 'Translate the whole file, including the first record 'trans.IgnoreFirstRecord = False 'Set the block length 'trans.BlockLength = 134 'Translate... 'trans.TranslateFile(sSourcePath, sDestinationPath) '*** Some additional methods that you can use are: 'Trim off leading characters from left side of string (position 0 to...) 'trans.LTrim = 15 'Translate 1 EBCDIC character to an ASCII character 'Dim strASCIIChar as String = trans.TranslateCharacter("S") 'Translate an EBCDIC character array to an ASCII string 'trans.TranslateCharacters(chrEBCDICArray) 'Translates an EBCDIC string to an ASCII string 'Dim strASCII As String = trans.TranslateString("EBCDIC String") End Sub #End Region 'Example 'Translate characters from EBCDIC to ASCII Private ASCIIEncoding As Encoding = Encoding.ASCII Private EBCDICEncoding As Encoding = Encoding.GetEncoding(37) 'EBCDIC 'Block Length: Can be fixed (Ex: 134). 
Private miBlockLength As Integer = 0 Private mbUseControlRec As Boolean = True 'If set to False, will return exact block length Private mbIgnoreFirstRecord As Boolean = True 'Will Ignore first record if set to true (First record may be filler) Private miLTrim As Integer = 0 ''' <summary> ''' Translates SourceFile from EBCDIC to ASCII. Writes output to file path specified by DestinationFile parameter. ''' Set the BlockLength Property to designate block size to read. ''' </summary> ''' <param name="SourceFile">Enter the path of the Source File.</param> ''' <param name="DestinationFile">Enter the path of the Destination File.</param> ''' <remarks></remarks> Public Sub TranslateFile(ByVal SourceFile As String, ByVal DestinationFile As String) Dim iRecordLength As Integer 'Stores length of a record, not including the length of the Control Record (if used) Dim sRecord As String = "" 'Stores the actual record Dim iLineSpace As Integer = 1 'LineSpace: 1 for Single Space, 2 for Double Space, 3 for Triple Space... Dim iControlPosSix As Byte() 'Stores the 6th character of a Control Record (used to calculate record length) Dim iControlRec As Byte() 'Stores the EBCDIC Control Record (First 6 characters of record) Dim bEOR As Boolean 'End of Record Flag Dim bBOF As Boolean = True 'Beginning of file Dim iConsumedChars As Integer = 0 'Stores the number of consumed characters in the current block Dim bIgnoreRecord As Boolean = mbIgnoreFirstRecord 'Ignores the first record if set. Dim ControlArray(5) As Char 'Stores Control Record (first 6 bytes) Dim chrArray As Char() 'Stores characters just after read from file Dim sr As New StreamReader(SourceFile, EBCDICEncoding) Dim sw As New StreamWriter(DestinationFile) 'Set the RecordLength to the RecordLength Property (below) iRecordLength = miBlockLength 'Loop through entire file Do Until sr.EndOfStream = True 'If using a Control Record, then check record for valid data. 
If mbUseControlRec = True Then 'Read the Control Record (first 6 characters of the record) sr.ReadBlock(ControlArray, 0, 6) 'Update the value of consumed (read) characters iConsumedChars += ControlArray.Length 'Get the bytes of the Control Record Array iControlRec = EBCDICEncoding.GetBytes(ControlArray) 'Set the line spacing (position 3 divided by 64) ' (64 decimal = Single Spacing; 128 decimal = Double Spacing) iLineSpace = iControlRec(2) / 64 'Check the Control record for End of File 'If the Control record has a 8 or 10 in position 1, and a 1 in postion 2, then it is the end of the file If (iControlRec(0) = 8 OrElse iControlRec(0) = 10) AndAlso _ iControlRec(1) = 1 Then If bBOF = False Then Exit Do Else 'The Beginning of file flag is set to true by default, so when the first ' record is encountered, it is bypassed and the bBOF flag is set to False bBOF = False End If 'If bBOF = Fals End If 'If (iControlRec(0) = 8 OrElse 'Set the default value for the End of Record flag to True ' If the Control Record has all zeros, then it's True, else False bEOR = True 'If the Control record contains all zeros, bEOR will stay True, else it will be set to False For i As Integer = 0 To 5 If iControlRec(i) > 0 Then bEOR = False Exit For End If 'If iControlRec(i) > 0 Next 'For i As Integer = 0 To 5 If bEOR = False Then 'Convert EBCDIC character to ASCII 'Multiply the 6th byte by 6 to get record length ' Why multiply by 6? Because it works. iControlPosSix = EBCDICEncoding.GetBytes(ControlArray(5)) 'If the 4th position of the control record is an 8, then add 6 ' to the record length to pick up remaining characters. If iControlRec(3) = 8 Then iRecordLength = CInt(iControlPosSix(0)) * 6 + 6 Else iRecordLength = CInt(iControlPosSix(0)) * 6 End If 'Add the length of the record to the Consumed Characters counter iConsumedChars += iRecordLength Else 'If the Control Record had all zeros in it, then it is the end of the Block. 
'Consume the remainder of the block so we can continue at the beginning of the next block. ReDim chrArray(miBlockLength - iConsumedChars - 1) 'ReDim chrArray(iRecordLength - iConsumedChars - 1) 'Consume (read) the remaining characters in the block. ' We are not doing anything with them because they are not actual records. 'sr.ReadBlock(chrArray, 0, iRecordLength - iConsumedChars) sr.ReadBlock(chrArray, 0, miBlockLength - iConsumedChars) 'Reset the Consumed Characters counter iConsumedChars = 0 'Set the Record Length to 0 so it will not be processed below. iRecordLength = 0 End If ' If bEOR = False End If 'If mbUseControlRec = True If iRecordLength > 0 Then 'Resize our array, dumping previous data. Because Arrays are Zero (0) based, subtract 1 from the Record length. ReDim chrArray(iRecordLength - 1) 'Read the specfied record length, without the Control Record, because we already consumed (read) it. sr.ReadBlock(chrArray, 0, iRecordLength) 'Copy Character Array to String Array, Converting in the process, then Join the Array to a string sRecord = Join(Array.ConvertAll(chrArray, New Converter(Of Char, String)(AddressOf ChrToStr)), "") 'If the record length was 0, then the Join method may return Nothing If IsNothing(sRecord) = False Then If bIgnoreRecord = True Then 'Do nothing - bypass record 'Reset flag bIgnoreRecord = False Else 'Write the line out, LTrimming the specified number of characters. If sRecord.Length >= miLTrim Then sw.WriteLine(sRecord.Remove(0, miLTrim)) Else sw.WriteLine(sRecord.Remove(0, sRecord.Length)) End If ' If sRecord.Length >= miLTrim 'Write out the number of blank lines specified by the 3rd control character. For i As Integer = 1 To iLineSpace - 1 sw.WriteLine("") Next 'For i As Integer = 1 To iLineSpace End If 'If bIgnoreRecord = True 'Obviously, if we have read more characters from the file than the designated size of the block, ' then subtract the number of characters we have read into the next block from the block size. 
If iConsumedChars > miBlockLength Then 'If iConsumedChars > iRecordLength Then iConsumedChars = iConsumedChars - miBlockLength 'iConsumedChars = iConsumedChars - iRecordLength End If End If 'If IsNothing(sRecord) = False End If 'If iRecordLength > 0 'Allow computer to process (works in a class module, not in a dll) 'Application.DoEvents() Loop 'Destroy StreamReader (sr) sr.Close() sr.Dispose() 'Destroy StreamWriter (sw) sw.Close() sw.Dispose() End Sub ''' <summary> ''' Translates 1 EBCDIC Character (Char) to an ASCII String ''' </summary> ''' <param name="chr"></param> ''' <returns></returns> ''' <remarks></remarks> Private Function ChrToStr(ByVal chr As Char) As String Dim sReturn As String = "" 'Convert character into byte Dim EBCDICbyte As Byte() = EBCDICEncoding.GetBytes(chr) 'Convert EBCDIC byte to ASCII byte Dim ASCIIByte As Byte() = Encoding.Convert(EBCDICEncoding, ASCIIEncoding, EBCDICbyte) sReturn = Encoding.ASCII.GetString(ASCIIByte) Return sReturn End Function ''' <summary> ''' Translates an EBCDIC String to an ASCII String ''' </summary> ''' <param name="sStringToTranslate"></param> ''' <returns>String</returns> ''' <remarks></remarks> Public Function TranslateString(ByVal sStringToTranslate As String) As String Dim i As Integer = 0 Dim sReturn As New System.Text.StringBuilder() 'Loop through the string and translate each character For i = 0 To sStringToTranslate.Length - 1 sReturn.Append(ChrToStr(sStringToTranslate.Substring(i, 1))) Next Return sReturn.ToString() End Function ''' <summary> ''' Translates 1 EBCDIC Character (Char) to an ASCII String ''' </summary> ''' <param name="sCharacterToTranslate"></param> ''' <returns>String</returns> ''' <remarks></remarks> Public Function TranslateCharacter(ByVal sCharacterToTranslate As Char) As String Return ChrToStr(sCharacterToTranslate) End Function ''' <summary> ''' Translates an EBCDIC Character (Char) Array to an ASCII String ''' </summary> ''' <param name="sCharacterArrayToTranslate"></param> ''' 
<returns>String</returns> ''' <remarks>Remarks</remarks> Public Function TranslateCharacters(ByVal sCharacterArrayToTranslate As Char()) As String Dim sReturn As String = "" 'Copy Character Array to String Array, Converting in the process, then Join the Array to a string sReturn = Join(Array.ConvertAll(sCharacterArrayToTranslate, _ New Converter(Of Char, String)(AddressOf ChrToStr)), "") Return sReturn End Function ''' <summary> ''' Block Length must be set. You can set the BlockLength for specific block sizes (Ex: 134). ''' Set UseControlRecord = False for files with specific block sizes (Default is True) ''' </summary> ''' <value>0</value> ''' <returns>Integer</returns> ''' <remarks></remarks> Public Property BlockLength() As Integer Get Return miBlockLength End Get Set(ByVal value As Integer) miBlockLength = value End Set End Property ''' <summary> ''' Determines whether a ControlKey is used to calculate RecordLength of valid data ''' </summary> ''' <value>Default value is True</value> ''' <returns>Boolean</returns> ''' <remarks></remarks> Public Property UseControlRecord() As Boolean Get Return mbUseControlRec End Get Set(ByVal value As Boolean) mbUseControlRec = value End Set End Property ''' <summary> ''' Ignores first record if set (Default is True) ''' </summary> ''' <value>Default is True</value> ''' <returns>Boolean</returns> ''' <remarks></remarks> Public Property IgnoreFirstRecord() As Boolean Get Return mbIgnoreFirstRecord End Get Set(ByVal value As Boolean) mbIgnoreFirstRecord = value End Set End Property ''' <summary> ''' Trims the left side of every string the specfied number of characters. Default is 0. 
''' </summary> ''' <value>Default is 0.</value> ''' <returns>Integer</returns> ''' <remarks></remarks> Public Property LTrim() As Integer Get Return miLTrim End Get Set(ByVal value As Integer) miLTrim = value End Set End Property End Class A: Some useful links for EBCDIC translation: Translation table - useful for checking some of the values in the packed decimal fields: http://www.simotime.com/asc2ebc1.htm List of code pages on MSDN: http://msdn.microsoft.com/en-us/library/dd317756(VS.85).aspx And a piece of code to convert the byte array fields in C#: // 500 is the code page for IBM EBCDIC International System.Text.Encoding enc = System.Text.Encoding.GetEncoding(500); string value = enc.GetString(byteArrayField); A: The packed fields are the same in EBCDIC or ASCII. Do not run the EBCDIC to ASCII conversion on them. In .Net dump them into a byte[]. You use bitwise masks and shifts to pack/unpack. -- But bitwise ops only apply to integer types in .Net so you need to jump through some hoops! A good COBOL or C artist can point you in the right direction. Find one of the old guys and pay your dues (about three beers should do it). A: The "ASCII transfer type" will transfer the files as regular text files, so files become corrupt when we transfer packed decimal or binary data files in ASCII transfer mode. The "Binary transfer type" transfers the data in binary mode, which handles the files as binary data instead of text data. So we have to use the Binary transfer type here. Reference: https://www.codeproject.com/Tips/673240/EBCDIC-to-ASCII-Converter Once your file is ready, here is the code to convert packed decimal to a human-readable decimal. 
using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; namespace ConsoleApp2 { class Program { static void Main(string[] args) { var path = @"C:\FileName.BIN.dat"; var templates = new List<Template> { new Template{StartPos=1,CharLength=4,Type="AlphaNum"}, new Template{StartPos=5,CharLength=1,Type="AlphaNum"}, new Template{StartPos=6,CharLength=8,Type="AlphaNum"}, new Template{StartPos=14,CharLength=1,Type="AlphaNum"}, new Template{StartPos=46,CharLength=4,Type="Packed",DecimalPlace=2}, new Template{StartPos=54,CharLength=5,Type="Packed",DecimalPlace=0}, new Template{StartPos=60,CharLength=4,Type="Packed",DecimalPlace=2}, new Template{StartPos=64,CharLength=1,Type="AlphaNum"} }; var allBytes = File.ReadAllBytes(path); for (int i = 0; i < allBytes.Length; i += 66) { var IsLastline = (allBytes.Length - i) < 66; var lineLength = IsLastline ? 64 : 66; byte[] lineBytes = new byte[lineLength]; Array.Copy(allBytes, i, lineBytes, 0, lineLength); var outArray = new string[templates.Count]; int index = 0; foreach (var temp in templates) { byte[] amoutBytes = new byte[temp.CharLength]; Array.Copy(lineBytes, temp.StartPos - 1, amoutBytes, 0, temp.CharLength); var final = ""; if (temp.Type == "Packed") { final = Unpack(amoutBytes, temp.DecimalPlace).ToString(); } else { final = ConvertEbcdicString(amoutBytes); } outArray[index] = final; index++; } Console.WriteLine(string.Join(" ", outArray)); } Console.ReadLine(); } private static string ConvertEbcdicString(byte[] ebcdicBytes) { if (ebcdicBytes.All(p => p == 0x00 || p == 0xFF)) { //Every byte is either 0x00 or 0xFF (fillers) return string.Empty; } Encoding ebcdicEnc = Encoding.GetEncoding("IBM037"); string result = ebcdicEnc.GetString(ebcdicBytes); // convert EBCDIC Bytes -> Unicode string return result; } private static Decimal Unpack(byte[] inp, int scale) { long lo = 0; long mid = 0; long hi = 0; bool isNegative; // this nybble stores only the 
sign, not a digit. // "C" hex is positive, "D" hex is negative, and "F" hex is unsigned. var ff = nibble(inp, 0); switch (ff) { case 0x0D: isNegative = true; break; case 0x0F: case 0x0C: isNegative = false; break; default: throw new Exception("Bad sign nibble"); } long intermediate; long carry; long digit; for (int j = inp.Length * 2 - 1; j > 0; j--) { // multiply by 10 intermediate = lo * 10; lo = intermediate & 0xffffffff; carry = intermediate >> 32; intermediate = mid * 10 + carry; mid = intermediate & 0xffffffff; carry = intermediate >> 32; intermediate = hi * 10 + carry; hi = intermediate & 0xffffffff; carry = intermediate >> 32; // By limiting input length to 14, we ensure overflow will never occur digit = nibble(inp, j); if (digit > 9) { throw new Exception("Bad digit"); } intermediate = lo + digit; lo = intermediate & 0xffffffff; carry = intermediate >> 32; if (carry > 0) { intermediate = mid + carry; mid = intermediate & 0xffffffff; carry = intermediate >> 32; if (carry > 0) { intermediate = hi + carry; hi = intermediate & 0xffffffff; carry = intermediate >> 32; // carry should never be non-zero. Back up with validation } } } return new Decimal((int)lo, (int)mid, (int)hi, isNegative, (byte)scale); } private static int nibble(byte[] inp, int nibbleNo) { int b = inp[inp.Length - 1 - nibbleNo / 2]; return (nibbleNo % 2 == 0) ? (b & 0x0000000F) : (b >> 4); } class Template { public string Name { get; set; } public string Type { get; set; } public int StartPos { get; set; } public int CharLength { get; set; } public int DecimalPlace { get; set; } } } } A: Files must be transferred as binary. 
Here's a much shorter way to do it: using System.Linq; namespace SomeNamespace { public static class SomeExtensionClass { /// <summary> /// computes the actual decimal value from an IBM "Packed Decimal" 9(x)v9 (COBOL) format /// </summary> /// <param name="value">byte[]</param> /// <param name="precision">byte; decimal places, default 2</param> /// <returns>decimal</returns> public static decimal FromPackedDecimal(this byte[] value, byte precision = 2) { if (value.Length < 1) { throw new System.InvalidOperationException("Cannot unpack empty bytes."); } double power = System.Math.Pow(10, precision); if (power > long.MaxValue) { throw new System.InvalidOperationException( $"Precision too large for valid calculation: {precision}"); } string hex = System.BitConverter.ToString(value).Replace("-", ""); var bytes = Enumerable.Range(0, hex.Length) .Select(x => System.Convert.ToByte($"0{hex.Substring(x, 1)}", 16)) .ToList(); long place = 1; decimal ret = 0; for (int i = bytes.Count - 2; i > -1; i--) { ret += (bytes[i] * place); place *= 10; } ret /= (long)power; return bytes.Last() == 0x0D || bytes.Last() == 0x0B ? ret * -1 : ret; /* the 0xB and 0xD sign nibbles mean negative; each entry in `bytes` is a single nibble, so a bit test against 1 << 7 would never fire */ } } }
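To sanity-check any of the unpackers above without a mainframe handy, it also helps to be able to fabricate COMP-3 test bytes. Here is a minimal Python packer sketch (a hypothetical helper, assuming the common sign-nibble convention: 0xC positive, 0xD negative):

```python
def pack_comp3(value, digits, scale=0):
    """Encode a number as IBM COMP-3 (packed decimal) bytes.

    `digits` is the total digit count of the field; an extra leading
    zero is added if needed so the digits plus the trailing sign
    nibble fill whole bytes.
    """
    sign = 0x0D if value < 0 else 0x0C
    text = "%0*d" % (digits, round(abs(value) * 10 ** scale))
    if len(text) % 2 == 0:          # pad so (digits + sign) is even
        text = "0" + text
    nibbles = [int(c) for c in text] + [sign]
    out = bytearray()
    for hi, lo in zip(nibbles[::2], nibbles[1::2]):
        out.append((hi << 4) | lo)  # two nibbles per byte
    return bytes(out)
```

For instance, packing -123 into a five-digit field with one implied decimal place yields X'01230D', matching the example given in an earlier answer.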
{ "language": "en", "url": "https://stackoverflow.com/questions/142972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Different dependency settings for 'Debug' and 'Release' build configuration in VisualC++ 6.0 I'm using VisualC++ 6.0. (and, yes, I'm using Mosaic browser..;) My VC++ project has different dependency settings for the 'Debug' and 'Release' build configurations. So, when I switch from one configuration to the other, I have to change the dependencies by hand every time. Is there any better way to do this? Can I keep separate dependency settings for each configuration? A: Which dependency are you dealing with? Under build, there is a configuration for debug/release/both. If you can supply more details, I might be able to help.
{ "language": "en", "url": "https://stackoverflow.com/questions/142996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why does my MFC app hang when I throw an exception? If you throw an exception from inside an MFC dialog, the app hangs, even if you have a catch block in your code. It refuses to respond to the mouse or keyboard, and the only way to shut it down is to use Task Manager. Why I'm posting this question To my shame, there is a popular shrink-wrapped application that hangs every time it encounters an exceptional error in a modal dialog. When we made a massive shift from integer error codes to exceptions, I was responsible for choosing std::exception as the base class for the thrown exceptions. It wasn't until a huge amount of work went into the conversion that our testing uncovered this problem, and by then it was too late to change. Hopefully this question/answer will keep someone from making the same mistake. A: The code for CDialog::DoModal makes the dialog modal by disabling the parent window. When the dialog code returns, the window is re-enabled. There is an explicit catch for CException* errors, but not for any other kind of thrown exception; thus the parent window never gets re-enabled. Change your code to throw a pointer to any exception derived from CException, and you'll fix the problem. A: If you are interested in learning about how Windows detects app hangs, we have added some posts about this on the Windows Error Reporting blog: Let there be hangs part 1 of 4 Let there be hangs part 2 of 4 Let there be hangs part 3 of 4 Let there be hangs part 4 of 4 Important to note is that this information, when sent through Microsoft's Windows Error Reporting, gets communicated to the software developers to try and fix these issues. If you are sending in your error reports you WILL help fix issues that are occurring on your PC! I am a Program Manager at Microsoft on the Windows Error Reporting team. A: Mark's answer is correct. 
For a much more rigorous analysis of this problem and a detailed suggestion for dealing with it in your own code, see this FAQ by Doug Harrison (section Q6 in particular).
{ "language": "en", "url": "https://stackoverflow.com/questions/143006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Defaulting to different URLs when Generating Web Service Proxy with WSDL.exe I have numerous Web Services in my project that share types. For simplicity I will demonstrate with two Web Services. WebService1 at http://MyServer/WebService1.asmx WebService2 at http://MyServer/WebService2.asmx When I generate the proxy for these two services I use: wsdl /sharetypes http://MyServer/WebService1.asmx http://MyServer/WebService2.asmx /appsettingurlkey:WebServiceUrl /namespace:MyNamespace /out:MyProxy.cs The problem is that the appsettingurlkey is the same for each Web Service in the proxy file. I want to be able to specify multiple appsettingurlkey parameters. How is this accomplished? I figure since the /sharetypes parameter became available, there should be a solution for specifying the appsettingurlkey specifically for each Web Service identified. If this is not possible with wsdl.exe, what would you propose I do? I would rather not update the generated code that wsdl.exe outputs, and I don't want to go through my whole application passing in the Url to each instance of the Web Services. A: The proxy classes generated are partial classes, so my solution would be to add your own constructor in a different (non-generated) code file, which explicitly reads a different setting for each proxy. A: To supplement Elijah's own answer, here's the email answer I gave him. I had to blog it because the XML didn't paste well into this text box: http://www.rickdoes.net/blog/archive/2008/09/29/wsdl-shared-types-and-configuration.aspx A: Ahh, instead of creating another partial class with an overloaded constructor passing in the Url, the following additional parameters to wsdl.exe will solve my problem... 
wsdl /sharetypes http://MyServer/WebService1.asmx http://MyServer/WebService2.asmx /appsettingurlkey:WebServiceUrl /namespace:MyNamespace /out:MyProxy.cs /appsettingurlkey:BaseSoapUrl /appsettingbaseurl:http://MyServer/ If the web.config has a BaseSoapUrl appSetting, then it will use that to replace the http://MyServer/ substring in MyProxy.cs. If the appSetting is not present, then it will just use the path provided to wsdl.exe (example: {BaseSoapUrl}/WebService1.asmx when using the appSetting, or http://MyServer/WebService1.asmx when not using the appSetting). A thanks goes out to Rick Kierner for pointing me in the right direction.
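For clarity, the endpoint selection that the generated proxy ends up performing with /appsettingurlkey plus /appsettingbaseurl can be sketched like this (a hypothetical simplification in Python, not the actual generated C#):

```python
def resolve_service_url(app_settings, url_key, wsdl_url, base_url):
    """If the config holds `url_key`, splice the runtime base URL in
    front of the part of the WSDL-time URL that followed `base_url`;
    otherwise fall back to the compile-time URL."""
    runtime_base = app_settings.get(url_key)
    if runtime_base is None:
        return wsdl_url                      # no appSetting: use WSDL path
    relative = wsdl_url[len(base_url):]      # e.g. "WebService1.asmx"
    return runtime_base.rstrip("/") + "/" + relative
```

With a single BaseSoapUrl setting, every shared-type proxy resolves its own relative path under one configurable host, which is why this avoids touching each service instance's Url property.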
{ "language": "en", "url": "https://stackoverflow.com/questions/143019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: mathematical optimization library for Java --- free or open source recommendations? Does anyone know of such a library that performs mathematical optimization (linear programming, convex optimization, or more general types of problems)? I'm looking for something like MATLAB, but with the ability to handle larger problems. Do I have to write my own implementations, or buy one of those commercial products (CPLEX and the like)? A: There is a linear optimization tool called lpsolve. It's written in C (I think) but comes with a Java/JNI wrapper (the API is not very OO but it does the job). It's pretty easy to use and I have had it running quite happily and stably in a live system for the last year. A: OptaPlanner (Java, open source, ASL) can handle large problems and doesn't have constraint-type limitations (such as linear vs. convex). A: You may try JOptimizer, open source and suitable for general convex optimization problems (linear programming, quadratic programming, QCQP, cone programming, semidefinite programming, etc.). A: A good answer is dependent on what you mean by "convex" and "more general". If you are trying to solve large or challenging linear or convex-quadratic optimization problems (especially with a discrete component to them), then it's hard to beat the main commercial solvers, gurobi, cplex and Dash, unless money is a big issue for you. They all have clean JNI interfaces and are available on most major platforms. The coin-or project has several optimizers and has a project for a JNI interface. It is totally free (EPL license), but will take more work to set up and probably not give you the same performance. A: You may want to look at JScience, it looks pretty complete. (Mathematical structures, linear algebra solving, etc.) A: IPOPT has an interface for Java. You may also be able to adapt the APMonitor modeling language for Java. I develop this platform so I'll be glad to work with someone if they'd like to create a new interface to Java. 
It already has a Python API and MATLAB interface and includes solvers such as IPOPT, APOPT, BPOPT, and others that can handle large-scale systems. A: Look into AMPL. The basic edition is free, but it costs money for larger problems. You don't pay for the language; you pay for solvers. It is also possible to upload your code and have it run on their servers.
{ "language": "en", "url": "https://stackoverflow.com/questions/143020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How do I find the size of a struct? struct a { char *c; char b; }; What is sizeof(a)? A: Contrary to what some of the other answers have said, on most systems, in the absence of a pragma or compiler option, the size of the structure will be at least 6 bytes and, on most 32-bit systems, 8 bytes. For 64-bit systems, the size could easily be 16 bytes. Alignment does come into play; always. The size of a single struct has to be such that an array of those structs can be allocated and the individual members of the array are sufficiently aligned for the processor in question. Consequently, if the size of the struct were 5 as others have hypothesized, then an array of two such structures would be 10 bytes long, and the char pointer in the second array member would be aligned on an odd byte, which would (on most processors) cause a major performance penalty. A: If you want to count it manually, the size of a struct is just the size of each of its data members after accounting for alignment. There are no magic overhead bytes for a struct. A: #include <stdio.h> typedef struct { char* c; char b; } a; int main() { printf("sizeof(a) == %zu", sizeof(a)); } I get "sizeof(a) == 8" on a 32-bit machine. The total size of the structure will depend on the packing: In my case, the default packing is 4, so 'c' takes 4 bytes, 'b' takes one byte, leaving 3 padding bytes to bring it to the next multiple of 4: 8. If you want to alter this packing, most compilers have a way to alter it, for example, on MSVC: #pragma pack(1) typedef struct { char* c; char b; } a; gives sizeof(a) == 5. If you do this, be careful to reset the packing before any library headers! A: The exact value is sizeof(a). You might also take a risk and assume that it is in this case no less than 2 and no greater than 16. A: This will vary depending on your architecture and how it treats basic data types. It will also depend on whether the system requires natural alignment. 
A: I assume you mean struct and not strict, but on a 32-bit system it'll be either 5 or 8 bytes, depending on whether the compiler pads the struct. A: I suspect you mean 'struct', not 'strict', and 'char' instead of 'Char'. The size will be implementation dependent. On most 32-bit systems, it will probably be 5 -- 4 bytes for the pointer, one for the char. I don't believe alignment will come into play here. If you swapped 'c' and 'b', however, the size may grow to 8 bytes. Ok, I tried it out (g++ 4.2.3, with -g option) and I get 8. A: The size of the structure should be 8 bytes on a 32-bit system, so that the size of the structure is a multiple of the alignment of its most strictly aligned member (the 4-byte pointer). This makes individual structures available at the correct byte boundaries when an array of structures is declared. This is achieved by padding the structure with 3 bytes at the end. If the structure had the pointer declared after the char, it would still be 8 bytes in size, but the 3-byte padding would have been added to keep the pointer (which is a 4-byte element) aligned at a 4-byte address boundary. The rule of thumb is that elements should be at an offset which is a multiple of their byte size, and the structure itself should have a size which is a multiple of the alignment of its most strictly aligned member.
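The layout rules the answers describe can be checked directly with sizeof and offsetof. Here is a small sketch (the exact numbers vary by platform, as the answers note: expect 8 on common 32-bit targets and 16 on 64-bit ones):

```c
#include <stddef.h>  /* offsetof, size_t */

struct a {
    char *c;  /* 4 bytes on a 32-bit target, 8 on 64-bit */
    char  b;  /* 1 byte, followed by tail padding up to the pointer's alignment */
};

/* The tail padding exists so that in an array of struct a, every
   element's pointer member stays properly aligned. */
size_t struct_size(void) { return sizeof(struct a); }
size_t offset_of_c(void) { return offsetof(struct a, c); }
size_t offset_of_b(void) { return offsetof(struct a, b); }
```

On typical platforms, offset_of_b() lands right after the pointer, and struct_size() rounds up to a multiple of the pointer's size.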
{ "language": "en", "url": "https://stackoverflow.com/questions/143025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Does VB6 have a #pragma pack equivalent? I am developing a TCP/IP client that has to deal with a proprietary binary protocol. I was considering using user-defined types to represent the protocol headers, and using CopyMemory to shuffle data to and from the UDT and a byte array. However, it appears that VB6 adds padding bytes to align user-defined types. Is there any way to force VB6 to not pad UDT's, similar to the #pragma pack directive available in many C/C++ compilers? Perhaps a special switch passed to the compiler? A: No. Your best bet is to write the low level code in C or C++ (where you do have #pragma pack), then expose the interface via COM. A: There is not any way to force VB6 to not pad UDT's, similar to the #pragma pack directive available in many C/C++ compilers, but you can do it the other way around. According to Q194609 Visual Basic uses 4 bytes alignment and Visual C++ uses 8 bytes by default. When using VB6 to call out to a C DLL, I used the MS "pshpack4.h" header files to handle the alignment because various compilers do this in different ways, as shown in this (rather edited) example: // this is in a header file called vbstruct.h ... # define VBSTRING char # define VBFIXEDSTRING char # define VBDATE double # define VBSINGLE float # ifdef _WIN32 # define VBLONG long # define VBINT short # else // and this was for 16bit code not 64bit!!!! # define VBLONG long # define VBINT int # endif ... # include "pshpack4.h" ... typedef struct VbComputerNameStruct { VBLONG sName; VBSTRING ComputerName[VB_COMPUTERNAME_LENGTH]; } VbComputerNameType; typedef struct VbNetwareLoginInfoStruct { VBLONG ObjectId; VBINT ObjectType; VBSTRING ObjectName[48]; } VbNetwareLoginInfoType; ... # include "poppack.h"
{ "language": "en", "url": "https://stackoverflow.com/questions/143032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Showing a tooltip on a non-focused ToolStripItem ToolStripItems show Active highlighting when you mouse over them, even if the form they are in is not in focus. They do not, however, show their tooltips unless the form is focused. I have seen the ToolStrip 'click-through' hack. Anyone know how to make a ToolStripButton show its tooltip when its parent form is not in focus? Thanks! A: The problem is that the ToolStrip "controls" like ToolStripButton or ToolStripDropDownButton don't inherit from Control. For now I addressed the problem by focusing the ToolStrip whenever a user hovers over a button. The button's MouseHover event is fired too late -- after the "show tooltip" code would have been run -- so I extended the ToolStripDropDownButton class and used my new button. This method should work for any of the other button-like classes inheriting from ToolStripItem: public class ToolStripDropDownEx : ToolStripDropDownButton { public ToolStripDropDownEx(string text) { } protected override void OnMouseHover(EventArgs e) { if (this.Parent != null) Parent.Focus(); base.OnMouseHover(e); } } A: Perhaps one of the two approaches in this code will kick you off in the right direction... public Form1() { InitializeComponent(); tooltip = new ToolTip(); tooltip.ShowAlways = true; } private ToolTip tooltip; private void toolStripButton_MouseHover(object sender, EventArgs e) { if (!this.Focused) { ToolStripItem tsi = (ToolStripItem)sender; tooltip.SetToolTip(toolStrip1, tsi.AutoToolTip ? tsi.ToolTipText : tsi.Text); /*tooltip.Show(tsi.AutoToolTip ? tsi.ToolTipText : tsi.Text, this, new Point(toolStrip1.Left, toolStrip1.Bottom));*/ } } private void toolStripButton_MouseLeave(object sender, EventArgs e) { tooltip.RemoveAll(); } The problem with the first is you can't set it to the button directly, it doesn't inherit from Control, and the tooltip won't show up unless you're over the strip but not over a button. 
The problem with the second (commented out way) is it doesn't display at all. Not quite sure why, but maybe you can debug it out. A: I tried a few things and found this to be the simplest. When I create the ToolStripButton items, I add an event handler to their hover event: Private Sub SomeCodeSnippet() Me.tooltipMain.ShowAlways = True Dim tsi As New ToolStripButton(String.Empty, myImage) tsi.ToolTipText = "my tool tip text" toolstripMain.Items.Add(tsi) AddHandler tsi.MouseHover, AddressOf ToolStripItem_MouseHover End Sub Then the event handler: Private Sub ToolStripItem_MouseHover(ByVal sender As Object, ByVal e As System.EventArgs) If TypeOf sender Is ToolStripButton Then Me.tooltipMain.SetToolTip(Me.toolstripMain, CType(sender, ToolStripButton).ToolTipText) End If End Sub This works really nicely, although I do notice a tiny initial delay when you hover over the toolstrip for the first time. A: I was trying to do the same thing and determined it was going to be pretty challenging and not worth it. The reason is that internally, the .NET code is specifically designed to only show the tooltip if the window is active - they are checking this at a Win32 level, so it's going to be hard to fake the code out. Here is the code snippet in ToolTip.cs that checks "GetActiveWindow()" and returns false. You can see the comment in the code "ToolTips should be shown only on active Windows." By the way, you can see all the source code for the .NET BCL with Visual Studio 2008, here are the instructions I used: http://blogs.msdn.com/sburke/archive/2008/01/16/configuring-visual-studio-to-debug-net-framework-source-code.aspx // refer VsWhidbey 498263: ToolTips should be shown only on active Windows. private bool IsWindowActive(IWin32Window window) { Control windowControl = window as Control; // We want to enter in the IF block only if ShowParams does not return SW_SHOWNOACTIVATE. 
// for ToolStripDropDown ShowParams returns SW_SHOWNOACTIVATE, in which case we DONT want to check IsWindowActive and hence return true. if ((windowControl.ShowParams & 0xF) != NativeMethods.SW_SHOWNOACTIVATE) { IntPtr hWnd = UnsafeNativeMethods.GetActiveWindow(); IntPtr rootHwnd = UnsafeNativeMethods.GetAncestor(new HandleRef(window, window.Handle), NativeMethods.GA_ROOT); if (hWnd != rootHwnd) { TipInfo tt = (TipInfo)tools[windowControl]; if (tt != null && (tt.TipType & TipInfo.Type.SemiAbsolute) != 0) { tools.Remove(windowControl); DestroyRegion(windowControl); } return false; } } return true; }
{ "language": "en", "url": "https://stackoverflow.com/questions/143058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: EventWaitHandle behavior for pthread_cond_t I've recently seen the light of EventWaitHandle's powerful behavior in C# and decided to move some functionality in a sister application to do the same. The only problem is that the sister app is written in C. No big deal, I'm using pthreads, which have a pthread_cond_t datatype that allows for signalling. My only question is, is it possible for a cond to be 'signalled' before something is waiting on it? Right now my testing says no. That is, if ThreadA fires a signal before ThreadB is waiting, ThreadB will wait indefinitely. Is there another pthread type that I can use that behaves closer to the functionality of the EventWaitHandle in C#? An object is signalled, meaning that the first thread to wait on it will pass immediately, and set it to unsignalled. Wrapping the pthread_cond into another data structure wouldn't be too hard to achieve this. But again, is this functionality already available in the pthread library? A: If you're using condition variables correctly, this won't matter. The basic flow of your code should be (in pseudocode): lock(lockobj); while (!signalled) { wait(condvar); } signalled = false; unlock(lockobj); on the waiting side, and: lock(lockobj); signalled = true; notify(condvar); unlock(lockobj); on the signalling side. (Of course, the lock object and condition variable used have to be the same on both sides.) Hope this helps! A: Alternative answer (also in pseudocode) if you want multiple signallings (i.e., if signalled twice, then two threads can wait before the state is unsignalled again). Waiting side: lock(lockobj); while (signalled == 0) { wait(condvar); } --signalled; unlock(lockobj); Signalling side: lock(lockobj); ++signalled; notify(condvar); unlock(lockobj); A: I ended up just wrapping a condition type in a new structure and created some simple functions to behave much like the EventWaitHandle from C#. I needed two mutexes to achieve proper serialized access. 
The cond_mutex is used for waiting on the condition variable, while the data_mutex is used when setting the state from signaled to not signaled. The reset mode is the same as in C#: AUTO or MANUAL. This allows the event_wait_t to reset itself automatically after waiting, or lets the programmer do it manually with a call to event_wait_reset(event_wait_t *ewh);
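The wrapper described above can also be sketched with a single mutex. The names event_wait_t, event_wait_reset, and the AUTO/MANUAL reset modes follow the answer, but the rest of this implementation is an assumption, not the poster's actual two-mutex code:

```c
#include <pthread.h>
#include <stdbool.h>

typedef enum { RESET_AUTO, RESET_MANUAL } reset_mode_t;

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    bool            signaled;  /* the persistent state the raw cond lacks */
    reset_mode_t    mode;
} event_wait_t;

void event_wait_init(event_wait_t *ev, reset_mode_t mode) {
    pthread_mutex_init(&ev->mutex, NULL);
    pthread_cond_init(&ev->cond, NULL);
    ev->signaled = false;
    ev->mode = mode;
}

/* Blocks until the event is signaled. With RESET_AUTO the waking
   waiter consumes the signal, like C#'s AutoResetEvent. */
void event_wait(event_wait_t *ev) {
    pthread_mutex_lock(&ev->mutex);
    while (!ev->signaled)
        pthread_cond_wait(&ev->cond, &ev->mutex);
    if (ev->mode == RESET_AUTO)
        ev->signaled = false;
    pthread_mutex_unlock(&ev->mutex);
}

/* Signaling before any thread waits is fine: the flag stays set,
   which is exactly what a bare pthread_cond_signal cannot do. */
void event_set(event_wait_t *ev) {
    pthread_mutex_lock(&ev->mutex);
    ev->signaled = true;
    pthread_cond_broadcast(&ev->cond);
    pthread_mutex_unlock(&ev->mutex);
}

void event_wait_reset(event_wait_t *ev) {
    pthread_mutex_lock(&ev->mutex);
    ev->signaled = false;
    pthread_mutex_unlock(&ev->mutex);
}
```

Because event_wait() rechecks the flag in a loop, spurious wakeups and the broadcast in RESET_AUTO mode are both handled: extra waiters simply go back to sleep.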
{ "language": "en", "url": "https://stackoverflow.com/questions/143063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In Emacs, what is the opposite function of other-window (C-x o)? Possible Duplicate: Emacs, switch to previous window other-window advances me to the next window in the current frame, but I also want a way to move back to the previous window. Emacs has next-buffer and previous-buffer, but no analogous interactive functions for window navigation. Just other-window. A: A slightly less annoying shortcut available by default is C-- C-x o. That way you don't have to switch between Meta and Control while typing the prefixes. A: Different from what you asked for, but the windmove package lets you move between windows according to their relative screen locations, which can be much easier than repeatedly doing C-x o. A: Provide a negative argument with C-u - ("Control+U" then "minus"), or even more simply C-- ("Control minus"). * *Move to previous window: C-- C-x o *Move to previous frame: C-- C-x 5 o From code, (other-window -1) or (other-frame -1) will do the same thing. Check out the help for the key you want to reverse (e.g. C-h k C-x o to show help for C-x o) and if it says "A negative argument..." you know you can use C--. A: Instead of C-u -, you can also give a negative prefix argument with just M-- (Meta-Minus), i.e. switch to the previous window with M-- C-x o. A: This is an old post, but I just wondered the same. It seems there now is a function for this in Emacs: previous-multiframe-window. I have it bound to C-x O, as in uppercase letter o. Now I just throw in shift when I want to go backwards. (global-set-key (kbd "C-x O") 'previous-multiframe-window) A: Put this in your .emacs, and bind it to whatever key you like (defun back-window () (interactive) (other-window -1))
{ "language": "en", "url": "https://stackoverflow.com/questions/143072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Debugging C++ STL containers in Windbg Windbg fans claim that it is quite powerful and I tend to agree. But when it comes to debugging STL containers, I am always stuck. If the variable is on the stack, the !stl extension sometimes figures it out, but when a container with a complex type (e.g. std::vector<TemplateField, std::allocator<TemplateField> >) is on the heap or part of some other structure, I just don't know how to view its contents. Appreciate any tips, pointers. A: I often find debugger support for STL data types inadequate. For this reason I'm increasingly using logging frameworks and logging statements. I used to think that these are for people who can't use a debugger, but I now realize that they offer real value. They allow you to embed portable debugging knowledge in your code and maintain it together with the code. In contrast, work you do in the debugger is typically ephemeral. A: You might also want to give this debugger extension a try. It is a library called SDbgExt, developed by Skywing. A: The Python extension for WinDbg (pykd) has a snippet, stlp.py, which can dump map contents. Currently it supports the STLPort map implementation. Tested on x86 and x64. This article demonstrates how to use it (it's in Russian, but the examples are self-explanatory). A: I had the exact same question some time ago. My answer is that Visual Studio is truly a better debugger for STL and complex types (just like Visual Studio is just a plain better debugger than MDbg). This is not to say WinDBG is less powerful, just that it's lower level (e.g. try doing anything useful with crash dumps using Visual Studio -- you can't). Anyway, to answer your question, you can use Visual Studio to look at the data types using some tricks: * *Start another instance of WinDBG, attach non-invasively: cdb -p <PID> -pv. This will suspend the threads of the debuggee. Now you can safely detach the original WinDBG with qd *Attach Visual Studio to it, and then detach the non-invasive WinDBG with qd. 
Look at the STL and continue as you wish. *When you need to go back to WinDBG, go to step 1, swap with an invasive WinDBG. A: I usually end up sticking a toString() method in a lot of my classes. This shows all the info that I deem important; any containers can then call this to display the class information in the console. A: Use dt -r, i.e. dt yourapp!class 7ffdf000 -r5
{ "language": "en", "url": "https://stackoverflow.com/questions/143073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: NSDateFormatter, am I doing something wrong or is this a bug? I'm trying to print out the date in a certain format: NSDate *today = [[NSDate alloc] init]; NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; [dateFormatter setDateFormat:@"yyyyMMddHHmmss"]; NSString *dateStr = [dateFormatter stringFromDate:today]; If the iPhone is set to 24-hour time, this works fine; if, on the other hand, the user has set it to 24-hour time and then back to AM/PM (it works fine until you toggle this setting), then it appends the AM/PM on the end even though I didn't ask for it: 20080927030337 PM Am I doing something wrong or is this a bug with firmware 2.1? Edit 1: Made description clearer Edit 2 workaround: It turns out this is a bug; to fix it I set the AM and PM characters to "": [dateFormatter setAMSymbol:@""]; [dateFormatter setPMSymbol:@""]; A: Setting the locale on the date formatter to en_US fixes the problem for me: NSDateFormatter * f = [[NSDateFormatter alloc] init]; [f setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss'Z'"]; f.timeZone = [NSTimeZone timeZoneForSecondsFromGMT:0]; f.calendar = [[[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar] autorelease]; f.locale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease]; I'm not sure if adding the calendar is also needed, but this works well. A: I think this is the solution. NSDateFormatter *df = [[NSDateFormatter alloc] init]; [df setDateFormat:@"yyyy-MM-dd HH:mm:ss"]; NSLocale *usLocale = [[NSLocale alloc] initWithLocaleIdentifier:@"en_US"]; [df setLocale: usLocale]; [usLocale release]; NSDate *documento_en_Linea = [[[NSDate alloc] init] autorelease]; documento_en_Linea = [df dateFromString:@"2010-07-16 21:40:33"]; [df release]; NSLog(@"fdocumentoenLineaUTC:%@!", documento_en_Linea); // output fdocumentoenLineaUTC:2010-07-16 09:40:33 p.m. -0500! A: The reason for this behaviour is the locale. Setting the locale of your NSDateFormatter to en_US_POSIX will fix this. 
It works for both 24-hour and 12-hour formats. On iPhone OS, the user can override the default AM/PM versus 24-hour time setting (via Settings > General > Date & Time > 24-Hour Time), which causes NSDateFormatter to rewrite the format string you set. From the Apple doc. Try this: NSDate *today = [[NSDate alloc] init]; NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; [dateFormatter setLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_US_POSIX"]]; [dateFormatter setDateFormat:@"yyyyMMddHHmmss"]; NSString *dateStr = [dateFormatter stringFromDate:today]; A: Here's the explanation of the iPhone SDK bug (also still there in the 3.1 beta SDK). First, a little background on the iPhone user interface. When iPhone users change their region format between, say, “United States” and “France”, the users’ “24-Hour Time” setting is automatically switched to the mode that is most prevalent in that region. In France, that would set 24-Hour Time to “ON”, and in the U.S., that would set it to “OFF”. The users can then manually override that setting and that’s where trouble starts. The problem comes from NSDateFormatter somehow “getting stuck” in the 12 or 24-hour time mode that the user has manually selected. So if a French user manually selects 12-hour mode, and the application requested NSDateFormatter to output time with the 24-hour format “HHmm”, it would actually receive time in a 12-hour format, e.g. “01:00 PM”, as if the application had instead requested “hhmm aa”. The reverse would happen if a US user manually selected 24-hour mode: outputting time with the 12-hour format “hhmm aa” would actually get you time in the 24-hour format instead, e.g. “17:00”. More details and a possible workaround can be found on this blog. 
A: Using the code you posted on both the simulator and a phone with the 2.1 firmware and 24-hour time set to off, I never had an AM/PM appended to dateStr when I do: NSLog(@"%@", dateStr); Are you doing anything else with dateStr that you didn't post here? How are you checking the value? Follow up Try turning the am/pm setting on then off. I didn't have the problem either, until I did that. I am printing it out the same way you are. Okay, I see it when I do this also. It's gotta be a bug. I recommend you file a bug report and just check for and filter out the unwanted characters in the meantime. A: For those finding this question who want to use NSDateFormatter to parse 24-hour time and are hitting this bug, using NSDateComponents to parse dates and times which have a known format sidesteps this issue: NSString *dateStr = @"2010-07-05"; NSString *timeStr = @"13:30"; NSDateComponents *components = [[NSDateComponents alloc] init]; components.year = [[dateStr substringToIndex:4] intValue]; components.month = [[dateStr substringWithRange:NSMakeRange(5, 2)] intValue]; components.day = [[dateStr substringFromIndex:8] intValue]; components.hour = [[timeStr substringToIndex:2] intValue]; components.minute = [[timeStr substringFromIndex:3] intValue]; NSCalendar *calendar = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]; NSDate *date = [calendar dateFromComponents:components]; [components release]; [calendar release]; A: This should also work (I am seeing some bizarre results though). -(NSString*)lowLevTime:(NSString*)stringFormat { char buffer[50]; const char *format = [stringFormat UTF8String]; time_t rawtime; struct tm * timeinfo; time(&rawtime); timeinfo = localtime(&rawtime); strftime(buffer, sizeof(buffer), format, timeinfo); return [NSString stringWithCString:buffer encoding:NSASCIIStringEncoding]; } A: Short answer: try [dateFormatter setDateFormat:@"yyyyMMddhhmmss"]; for 12-hour format (note the lowercase hh). 
It's been a frustrating topic because so many websites indicate to use HH for hours (including the official Apple documentation), but that sets it to 24-hour format, whereas hh uses 12-hour format. See http://unicode.org/reports/tr35/tr35-6.html#Date_Format_Patterns for more details. As a bonus, note that you can also use KK or kk for hour-of-the-day format, which will likely be off by one. Update: I was recently looking at NSLocale (https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSLocale_Class/Reference/Reference.html) and it would seem that you can use autoupdatingCurrentLocale to apply changes made from within the app to the locale. The upshot of this is that even if the phone is set to use a 24-hour clock (like when you switched to France), you can make a 12/24 toggle for the app that won't impact any other apps on the phone, or require you to leave the app to make the change.
{ "language": "en", "url": "https://stackoverflow.com/questions/143075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Is it OK to have an 'empty' class that extends another class? Let's say I have one class Foo that has a bunch of logic in it and another class Bar which is essentially the same. However, as Foo and Bar are different (but related) entities, I need the difference to be apparent from my code (i.e. I can tell whether an instance is a Foo or a Bar). As I was whacking this together without much thought I ended up with the following: public class Foo { /* constructors, fields, method, logic and what-not */ } public class Bar extends Foo { /* nothing here but constructors */ } Is this OK? Is it better to make Bar a composite class? e.g.: public class Bar { private Foo foo; /* constructors and a bunch of wrapper methods that call into foo */ } Or even, while we're at it, something much more low-tech: public class Foo { /* constructors, fields, method, logic and what-not */ private boolean isABar; // Could be an enum } What do you think? How do you deal with these 'marker classes'? As an example of how my code may wish to treat Foo and Bar differently, my code would need to be able to do stuff like List<Foo> and List<Bar>. A Foo couldn't go in a List<Bar> and vice versa. A: It all depends on the meaning of the Foo and Bar classes: what they represent, and what their purpose is. Please clarify. I can imagine situations where each of your proposed solutions is the right one. A: If there is any likelihood that Foo and Bar could someday diverge in implementation, then your question is answered - use inheritance in whatever way seems best. But if you're absolutely sure that they'll never diverge, then clearly you're looking at something that should be represented by a single class, such as ThingThatIsEitherFooOrBar. And with that class made, rather than giving it a boolean property like isFoo, it would be much better to take another look at why you need to differentiate Foo from Bar. What is it about Foos that makes you handle them differently than Bars? 
Figure that out, and make a property that specifies the information that differs. Are Foos bigger? Then make a property for size (even if it's an enum with values "Foo-sized" and "Bar-sized"). That's about as much as one can say without specific examples of what Foo and Bar might be. A: Foo and Bar inherit from FooBarImplementation I'd make a class FooBarImplementation that would implement the common features of Foo and Bar. Foo and Bar would derive from it. But in your code, never ever use the type FooBarImplementation. My Java days are somewhat behind me, but I guess there must be some kind of way to hide FooBarImplementation from the user code (making it protected, or package visible only, depending on your project organization). This way, no user code will mistake a Foo for a Bar (or vice versa). class FooBarImplementation { public void doSomething() { /* etc. */ } /* etc. */ } class Foo extends FooBarImplementation { /* etc. */ } class Bar extends FooBarImplementation { /* etc. */ } Foo and Bar composed with FooBarImplementation Another possibility would be to make Foo and Bar forward each of their methods to an internal class (again, FooBarImplementation). This way, there's no way the user code could mix up Foo and Bar. class FooBarImplementation { public void doSomething() { /* etc. */ } /* etc. */ } class Foo { private FooBarImplementation fooBarImplementation = new FooBarImplementation(); public void doSomething() { this.fooBarImplementation.doSomething(); } /* etc. */ } class Bar { private FooBarImplementation fooBarImplementation = new FooBarImplementation(); public void doSomething() { this.fooBarImplementation.doSomething(); } /* etc. */ } Do NOT make Foo inherit from Bar (or vice versa) Should Foo inherit from Bar, Foo would be a Bar, as far as the language is concerned. Don't do it: you'll lose the difference between the objects, and that is exactly what you don't want. Do not use a boolean or whatever type field This is the worst idea you could come across. 
Bjarne Stroustrup warned against this kind of antipattern for C++, and C++ is not all about OOP. So I guess this pattern is even more "anti" for Java... :-) A: In my opinion, it's best if Foo and Bar subclass off a common ancestor class (maybe AbstractFoo), which has all the functionality. What difference in behaviour should exist between Foo and Bar? Code that difference as an abstract method in AbstractFoo, not by using an if statement in your code. Example: Rather than this: if (foo instanceof Bar) { // Do Bar-specific things } Do this instead: class Bar extends AbstractFoo { public void specialOp() { // Do Bar-specific things } } // ... foo.specialOp(); The benefit of this approach is that if you need a third class, that's much like Foo but has just a little bit of difference, you don't have to go through all your code and edit all the if statements. :-) A: Basically you need to apply the Strategy Pattern. A: Definitely use a boolean property. It's the simplest solution, unless you foresee the Bar class needing to change its interface later (e.g. override its methods). A: Inheritance is best. With a boolean property, the class must know about the existence of two different types of objects, and this isn't easily extensible to more than two. Moreover, this approach doesn't let you override functions. Composition makes you write wrappers for all functions. A: Your data was not clear enough, but based on what I think you need, I am puzzled why you don't simply go with a 4th option: Class MainStuff; Class TypeA; Class TypeB; Now either make TypeA and B inherit from MainStuff, or make MainStuff a data member of TypeA and TypeB. This depends on the meaning of what these 3 classes are. A: As others said, it depends, but if you have common functionality between Foo and Bar and the difference in functionality can be expressed as parameters, then I vote for subclasses. This was basically what we were doing at the end of a practical software course. 
We were supposed to implement a sudoku game, and in the end we had an "AbstractPlayfield", which was able to apply an arbitrary set of rules to an arbitrary playfield. This AbstractPlayfield was subclassed by the individual variants we were supposed to implement. Those subclasses set parameters (mostly the rules and the shape of the board) for the abstract playfield and everything worked like a charm. We even ended up with more inheritance in those subclasses, because several of the variants contained the rules "Numbers must be unique in a row" and "Numbers must be unique in a column". Using that, we were able to finish the work that was estimated at about two months in about 3 days :) (And they annoyed us with "Test those tiny attribute-setting classes, because you might have bugs in there! Test them! Test them! We don't care that all important logic is tested!".) On the other hand, if the class Bar has no special functionality different from Foo, I do not see the point of adding it - at least from the data you give me. It might make sense if you wanted to do some operations based on types and dispatch on type, but I cannot read that from Foo and Bar. In this case, I'd not create Bar, due to YAGNI.
A: If there is no behavioral difference between Foo and Bar, then the class name "Foo" is not abstract enough. Identify the common abstraction between Foo and Bar and rename the class accordingly. Then provide a member field in the class to identify instances as "Foo", "Bar", etc. Use an enum if you wish to limit possible values to "Foo" and "Bar".
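One of the answers above just names the Strategy Pattern without showing code. A minimal Java sketch of what it could look like for the Foo/Bar case (all class and method names here are invented for illustration, not taken from the question):

```java
// Hypothetical sketch: the shared class delegates the single varying
// behaviour to an injected strategy object instead of a type flag.
interface SizeStrategy {
    String describeSize();
}

class FooBar {
    private final SizeStrategy sizeStrategy;

    FooBar(SizeStrategy sizeStrategy) {
        this.sizeStrategy = sizeStrategy;
    }

    // Common behaviour lives here exactly once...
    String describe() {
        return "I am " + sizeStrategy.describeSize();
    }
}

public class Main {
    public static void main(String[] args) {
        // ...while the Foo/Bar difference is injected, not switched on.
        FooBar foo = new FooBar(() -> "Foo-sized");
        FooBar bar = new FooBar(() -> "Bar-sized");
        System.out.println(foo.describe());
        System.out.println(bar.describe());
    }
}
```

Compared with the boolean-property option, adding a third variant here means writing one new strategy, not editing any existing code.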
{ "language": "en", "url": "https://stackoverflow.com/questions/143084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: JavaScript object browser? I was recently tasked with documenting a large JavaScript application I have been maintaining for some time, so I do have a good knowledge of the system. But due to the sheer size of the application, it will probably take a lot of time, even with prior knowledge of the code and the source itself in uncompressed form. So I'm looking for tools that would help me explore classes and methods and their relationships in JavaScript and, if possible, document them along the way. Is there one available? Something like the object browser in VS would be nice, but any tools that help me get things done faster will do. Thanks!
A: Firebug's DOM tab lets you browse the contents of the global window object, and you can inspect a particular object by entering inspect(whatever) in the command line. You won't be able to use it to detect relationships unless an instance of one object holds an instance of a related object, but it's a start. You can also use the Options menu on the DOM tab to restrict what's shown to user-defined functions and properties, which should help reduce clutter.
A: Take a look at Aptana; they have an outline that can help you determine what the objects are and sometimes their relationships.
A: Firebug + uneval(obj) is a simple trick that is often helpful.
A: I see a lot of people talking about examining the DOM within Firebug. However, from your question it looks like you want something like jsdoc? Just add type and class information through comments and jsdoc generates documentation including class relationships. http://jsdoc.sourceforge.net/ Google has a fork of it with added functionality http://code.google.com/p/jsdoc-toolkit/ UPDATE: It's not a fork, it's a rewrite by the developer who wrote jsdoc originally as a Perl script. It aims at being more adaptable so you can use whatever JS inheritance/events/properties style you'd like.
Another feature is that it lets you modify the templates used to generate the HTML in a much simpler way.
A: We don't know if this JS application is designed to run in a Web browser... If yes, as advised, Firebug (a Firefox extension) is excellent at debugging JS and exploring the DOM. On the IE side, you have some tools like IEDocMon, the Web Accessibility Toolbar (it does more than its name suggests) or Fiddler (unrelated to your question, but still a good tool to have).
A: Firebug (Firefox) / Dragonfly (Opera) can help you with viewing objects in real time. Aptana / JS/UML (Eclipse) can help with relationships of objects.
A: This is an old question, but let me answer it anyway.
* Use an IDE. Integrated Development Environments were made for jumping around rapidly among the code. The key features you will exercise during exploration are viewing the file structure or outline, jumping to a declaration or usage, and searching the entire project for all instances of a string. If you are using WebStorm, set up a custom scope for files except generated files and node.js to aid in searching.
* Run 'npm la | less', which lists all your dependent modules with one-line descriptions. You may have never seen moment.js and never need to read the documentation, but taking the time to read a one-line summary of it is worthwhile. If you need more information on a tool than a one-line summary, search for the term on SlideShare. Slides are faster than ReadTheDocs.
* Document a little as you go. I'm a fan of forcing people to use notebooks constantly rather than scratch paper. Also, I find adding a one-line comment to each JavaScript file is worthwhile. You want to know what should be in each directory of your project. I also recommend building a glossary of the exact meaning of domain terms in your system, e.g., what "job" means in your system.
* Finally, you may need to just fire up the application in a debugger and start stepping through parts of it.
Most large projects have accreted work from programmers of various skill levels and motivations. You are aiming for a level of "conceptual integrity" (to quote Yourdon) or to "grok" the software (to quote Heinlein). It does take some time, cannot be bypassed, and can be done efficiently.
{ "language": "en", "url": "https://stackoverflow.com/questions/143087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you use Presentation Model with Webforms? I have started using Presentation Model w/ ASP.NET webforms and like the pattern quite a bit. What I am really having an issue with is where to instantiate some of my classes, mostly the presentation model, my business object, and things like the data for drop-down lists. So I could use some tips or a complete example of the Presentation Model (I have only found pieces so far).
A: Video links - Rik Bardof on Design Patterns (including MVP), Jean Paul Boohdoo on MVP. As far as the rest of your architecture goes, I would recommend reading Jeffrey Palermo's blog series on Onion Architecture. He has a sample project, CodeCampServer, that illustrates some good practices. This is a web app using ASP.NET MVC, but the principles still apply.
{ "language": "en", "url": "https://stackoverflow.com/questions/143098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is Windows' rand_s thread-safe? Just as in the title. I suspect it is, but I couldn't find it anywhere explicitly stated, and I wouldn't like to rely on speculation for this property.
A: Chris said: rand() is not thread-safe because its internal state is static, but rand_s() should be thread-safe. Jeff added, however, that with the multithreaded version of MSVCRT, rand()'s state is held in thread-local storage, so it's okay still.
A: Visual Studio comes with the source to the runtime library. While some of it can be rather painful to wade through, rand_s() is pretty simple. All rand_s() does is call SystemFunction036() in ADVAPI32.DLL to get the random value. Anything in ADVAPI32.DLL should be thread-safe. For its part, rand_s() gets the pointer to that function in a thread-safe manner.
A: If you use the multithreaded version of the CRT, all functions are thread-safe, because any thread-specific information is stored in TLS. rand_s actually doesn't use state information in the first place, since it just calls an OS API, so the question of thread-safety doesn't arise for rand_s. rand(), however, depends on a seed value to generate a random number.
A: I don't know if rand_s is thread-safe, but it seems like it probably is, since it seems to make a round-trip to the OS for entropy. (As long as you link to the VC++ multithreaded CRT; all bets are off if you link to the single-threaded one.) If it's supported by the Windows CRT, you can try a call to rand_r, which is the POSIX reentrant version of rand. Or even better, boost::random, if you're already using Boost. Considering how pervasive multi-threading will be soon, no one should be using rand() anymore in new code - always try to use rand_r/rand_s/boost/various platform-dependent secure rands/etc.
A: I can't think of any reason why rand_s() or even rand() wouldn't be thread-safe.
{ "language": "en", "url": "https://stackoverflow.com/questions/143108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Using SimpleXML to create an XML object from scratch Is it possible to use PHP's SimpleXML functions to create an XML object from scratch? Looking through the function list, there are ways to import an existing XML string into an object that you can then manipulate, but if I just want to generate an XML object programmatically from scratch, what's the best way to do that? I figured out that you can use simplexml_load_string() and pass in the root string that you want, and then you've got an object you can manipulate by adding children... although this seems like kind of a hack, since I have to actually hardcode some XML into the string before it can be loaded. I've done it using the DOMDocument functions, although it's a little confusing because I'm not sure what the DOM has to do with creating a pure XML document... so maybe it's just badly named :-)
A: In PHP5, you should use the Document Object Model class instead. Example:
$domDoc = new DOMDocument;
$rootElt = $domDoc->createElement('root');
$rootNode = $domDoc->appendChild($rootElt);
$subElt = $domDoc->createElement('foo');
$attr = $domDoc->createAttribute('ah');
$attrVal = $domDoc->createTextNode('OK');
$attr->appendChild($attrVal);
$subElt->appendChild($attr);
$subNode = $rootNode->appendChild($subElt);
$textNode = $domDoc->createTextNode('Wow, it works!');
$subNode->appendChild($textNode);
echo htmlentities($domDoc->saveXML());
A: Please see my answer here. As dreamwerx.myopenid.com points out, it is possible to do this with SimpleXML, but the DOM extension would be the better and more flexible way. Additionally there is a third way: using XMLWriter. It's much simpler to use than the DOM and therefore it's my preferred way of writing XML documents from scratch.
$w = new XMLWriter();
$w->openMemory();
$w->startDocument('1.0', 'UTF-8');
$w->startElement("root");
$w->writeAttribute("ah", "OK");
$w->text('Wow, it works!');
$w->endElement();
echo htmlentities($w->outputMemory(true));
By the way: DOM stands for Document Object Model; this is the standardized API into XML documents.
A: Sure you can. E.g.
<?php
$newsXML = new SimpleXMLElement("<news></news>");
$newsXML->addAttribute('newsPagePrefix', 'value goes here');
$newsIntro = $newsXML->addChild('content');
$newsIntro->addAttribute('type', 'latest');
Header('Content-type: text/xml');
echo $newsXML->asXML();
?>
Output
<?xml version="1.0"?>
<news newsPagePrefix="value goes here">
<content type="latest"/>
</news>
Have fun.
{ "language": "en", "url": "https://stackoverflow.com/questions/143122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: How to create a buffer for reading socket data in C Using C / C++ socket programming, and the "read(socket, buffer, BUFSIZE)" method. What exactly is the "buffer"? I know that char and byte are the same thing, but does it matter how many elements the byte array has in it? Does the buffer need to be able to hold the entire message until the null character?
A: As always, use sizeof when you have the chance. Using the built-in operator sizeof, you ask the compiler to compute the size of a variable, rather than specify it yourself. This reduces the risk of introducing bugs when the size of the actual variable is different from what you think. So, instead of doing
#define BUFSIZE 1500
char buffer[BUFSIZE];
int n = read(sock, buffer, BUFSIZE);
you really should use
char buffer[1500];
int n = read(sock, buffer, sizeof buffer);
Notice how you don't need parentheses around the argument to sizeof, unless the argument is the name of a type.
A: BUFSIZE should be equal to the size of your buffer in bytes. read() will stop reading when the buffer is full. Here is an example:
#define MY_BUFFER_SIZE 1024
char mybuffer[MY_BUFFER_SIZE];
int nBytes = read(sck, mybuffer, MY_BUFFER_SIZE);
A: Your sockets implementation doesn't require the buffer to be big enough to hold the entire message, but it might be convenient depending on what you are doing.
{ "language": "en", "url": "https://stackoverflow.com/questions/143123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to synchronize two Subversion repositories? My company has a subsidiary with a slow Internet connection. Our developers there struggle to interact with our central Subversion server. Is it possible to configure a slave/mirror for them? They would interact locally with the server and all the commits would be automatically synchronized to the master server. This should work as transparently as possible for the developers. Usability is a must. Please, no suggestions to change our version control system.
A: You should try the SVK version control system. SVK is a decentralized version control system built with the robust Subversion filesystem. It supports repository mirroring, disconnected operation, history-sensitive merging, and integrates with other version control systems, as well as popular visual merge tools. At this link there is text about Using SVK to Synchronize SVN Repositories.
A: It is possible but not necessarily simple: the problem you are trying to solve is dangerously close to setting up a distributed development environment, which is not exactly what SVN is designed for.
The SVN-mirror way
You can use svn mirror as explained in the SVN book documentation to create a read-only mirror of your master repository. Your developers each interact with the mirror closest to them. However, users of the slave repository will have to use svn switch --relocate master_url before they can commit, and they will have to remember to relocate back to the slave once they are done. This could be automated using a wrapper script around the repository-modifying commands of SVN if you use the command-line client. Keep in mind that the relocate operation, while fast, adds a bit of overhead. (And be careful to duplicate the repository uuid - see the SVN documentation.)
[Edit - Checking the TortoiseSVN documentation, it seems that you can have TortoiseSVN execute hook scripts client side. You may be able to create a pre/post commit script at this point.
Either that or try to see if you can use the TortoiseSVN automation interface to do it].
The SVK way
svk is a set of Perl scripts which emulate a distributed mirroring service over SVN. You can set it up so that the local branch (the mirror) is shared by multiple developers. Then basic usage for the developers will be completely transparent. You will have to use the svk client for cherry-picking, merging and star-merging. It is doable if you can get your head around the distributed concepts.
The git-svn way
While I have never used it myself, you could also have distant developers use git locally and use the git-svn gateway for synchronization.
Final words
It all depends on your development environment and the level of integration you require. Depending on your IDE (and if you can change SCM) you might want to have a look at other fully distributed SCMs (think Mercurial/Bazaar/Git/...) which support distributed development out of the box.
A: Subversion 1.5 introduced proxy support when you're using http to host your repository. Developers can check out their working copies from the slave. Then all read-only operations (diff, log, update, etc.) will use the slave. When committing, the slave transparently passes all write operations to the master.
A: If one of the repositories is fully read-only you can use 'svnsync' to keep it up to date with the master repository. This tool is often used in combination with the proxy support to create a master-slave setup. E.g. Apache does this to mirror their repository to different continents. The master repository is located in the US, but if I access the repository from the EU I get a local mirror that works just as well as the master server.
A: The inotify-tools work well for me; details are mentioned on this website: http://planet.admon.org/synchronize-subversion-repositories-with-inotify-tools/
A: There is a commercial solution, called "Subversion MultiSite", that provides true active-active replication (not master-slave) of Subversion repositories if you need performance and data safety beyond what svnsync provides.
Disclaimer: I work for the company that makes this solution.
A: VisualSVN Server's Multisite Repository Replication was designed for this case. You can keep the master repository in your main office and set up multiple writable slave repositories at the remote locations. This should work as transparently as possible for the developers. Usability is a must.
* The replication between the slaves and the master is transparent and automatic,
* Each master and slave repository is a writable Subversion repository from the user's standpoint,
* Works out of the box and can be configured in a couple of clicks via the VisualSVN Server Manager MMC console.
{ "language": "en", "url": "https://stackoverflow.com/questions/143130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: Bron-Kerbosch algorithm for clique finding Can anyone tell me where on the web I can find an explanation of the Bron-Kerbosch algorithm for clique finding, or explain here how it works? I know it was published in the paper "Algorithm 457: finding all cliques of an undirected graph", but I can't find a free source that describes the algorithm. I don't need source code for the algorithm; I need an explanation of how it works.
A: I found the explanation of the algorithm here: http://www.dfki.de/~neumann/ie-seminar/presentations/finding_cliques.pdf It's a good explanation... but I need a library or an implementation in C# -.-'
A: Try finding someone with an ACM student account who can give you a copy of the paper, which is here: http://portal.acm.org/citation.cfm?doid=362342.362367 I just downloaded it, and it's only two pages long, with an implementation in Algol 60!
A: There is the algorithm right here. I have rewritten it using Java LinkedLists as the sets R, P, X, and it works like a charm (a good thing is to use the method "retainAll" when doing set operations according to the algorithm). I suggest you think a little about the implementation because of the optimization issues when rewriting the algorithm.
A: I was also trying to wrap my head around the Bron-Kerbosch algorithm, so I wrote my own implementation in Python. It includes a test case and some comments. Hope this helps.
class Node(object):
    def __init__(self, name):
        self.name = name
        self.neighbors = []

    def __repr__(self):
        return self.name

A = Node('A')
B = Node('B')
C = Node('C')
D = Node('D')
E = Node('E')

A.neighbors = [B, C]
B.neighbors = [A, C]
C.neighbors = [A, B, D]
D.neighbors = [C, E]
E.neighbors = [D]

all_nodes = [A, B, C, D, E]

def find_cliques(potential_clique=[], remaining_nodes=[], skip_nodes=[], depth=0):
    # To understand the flow better, uncomment this:
    # print(' ' * depth, 'potential_clique:', potential_clique, 'remaining_nodes:', remaining_nodes, 'skip_nodes:', skip_nodes)

    if len(remaining_nodes) == 0 and len(skip_nodes) == 0:
        print('This is a clique:', potential_clique)
        return

    # Iterate over a copy: the original list is mutated inside the loop,
    # and removing from a list while iterating over it skips elements.
    for node in list(remaining_nodes):
        # Try adding the node to the current potential_clique to see if we can make it work.
        new_potential_clique = potential_clique + [node]
        new_remaining_nodes = [n for n in remaining_nodes if n in node.neighbors]
        new_skip_list = [n for n in skip_nodes if n in node.neighbors]
        find_cliques(new_potential_clique, new_remaining_nodes, new_skip_list, depth + 1)

        # We're done considering this node. If there was a way to form a clique with it, we
        # already discovered its maximal clique in the recursive call above. So, go ahead
        # and remove it from the list of remaining nodes and add it to the skip list.
        remaining_nodes.remove(node)
        skip_nodes.append(node)

find_cliques(remaining_nodes=all_nodes)
A: For what it is worth, I found a Java implementation: http://joelib.cvs.sourceforge.net/joelib/joelib2/src/joelib2/algo/clique/BronKerbosch.java?view=markup HTH.
A: I have implemented both versions specified in the paper. I learned that the unoptimized version, if solved recursively, helps a lot in understanding the algorithm.
Here is a Python implementation of version 1 (unoptimized):
def removeDisconnected(nodes, sel, graph):
    # Keep only the nodes adjacent to sel. (This helper was implied
    # but not defined in the original answer.)
    return [n for n in nodes if graph[sel][n]]

def bron(compsub, _not, candidates, graph, cliques):
    if len(candidates) == 0 and len(_not) == 0:
        cliques.append(tuple(compsub))
        return
    if len(candidates) == 0:
        return
    sel = candidates[0]
    candidates.remove(sel)
    newCandidates = removeDisconnected(candidates, sel, graph)
    newNot = removeDisconnected(_not, sel, graph)
    compsub.append(sel)
    bron(compsub, newNot, newCandidates, graph, cliques)
    compsub.remove(sel)
    _not.append(sel)
    bron(compsub, _not, candidates, graph, cliques)
And you invoke this function:
graph = ...  # NxN boolean adjacency matrix
cliques = []
bron([], [], list(range(len(graph))), graph, cliques)  # candidates start as all vertex indices
The variable cliques will contain the cliques found. Once you understand this, it's easy to implement the optimized one.
A: Boost::Graph has an excellent implementation of the Bron-Kerbosch algorithm; give it a check.
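For comparison, version 2 of the paper prunes the recursion with a pivot vertex: only candidates not adjacent to the pivot need to be tried at each level. A compact Python 3 sketch over an adjacency-set representation (the R/P/X names follow the common modern notation rather than the paper's compsub/candidates/not):

```python
def bron_kerbosch_pivot(R, P, X, adj, cliques):
    """Report every maximal clique extending R, where P are candidate
    vertices and X are already-processed vertices (the 'not' set)."""
    if not P and not X:
        cliques.append(frozenset(R))
        return
    # Pivot: the vertex covering the most candidates; only recurse
    # on candidates *not* adjacent to it.
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in P - adj[pivot]:
        bron_kerbosch_pivot(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P = P - {v}
        X = X | {v}

# Example: a triangle 0-1-2 plus a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch_pivot(set(), set(adj), set(), adj, cliques)
print(sorted(sorted(c) for c in cliques))  # -> [[0, 1, 2], [2, 3]]
```

Rebinding P and X inside the loop (rather than mutating them) keeps the iteration over the precomputed set `P - adj[pivot]` safe.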
{ "language": "en", "url": "https://stackoverflow.com/questions/143140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: splice() on std::list and iterator invalidation The 3-argument form of list::splice() moves a single element from one list to the other. SGI's documentation explicitly states that all iterators, including the one pointing to the element being moved, remain valid. Roguewave's documentation does not say anything about the iterator invalidation properties of the splice() methods, whereas the C++ standard explicitly states that it invalidates all iterators and references to the element being spliced. splice() in practice works as defined by SGI, but I get an assertion failure (dereferencing an invalid iterator) in the debug/secure SCL versions of Microsoft's STL implementation (which strictly follows the letter of the standard). Now, I'm using list exactly because I want to move an element between lists while preserving the validity of the iterator pointing to it. The standard has made an extremely unhelpful change to the original SGI specification. How can I work around this problem? Or should I just be pragmatic and stick my head in the sand (because splicing does not invalidate iterators in practice -- not even in MS's implementation, once iterator debugging is turned off)?
A: The problem is that if the iterator still points to the element that was moved, then the "end" iterator previously associated with the "moved" iterator has changed. Unless you write some complex loop, this is actually a bad thing to do -- especially since it will be more difficult for other developers to understand. A better way in my opinion is to use the iterators pointing to the elements before and after the moved iterator.
A: Ok, this seems to be a defect in the standard, according to this and this link. It seems that "sticking the head in the sand" is a good strategy, since it will be fixed in new library versions.
A: I have an array of lists (equivalence classes of elements), and I'm using splice to move elements between the lists.
I have an additional array of iterators which gives me direct access to any element in any of the lists, and lets me move it to another list. None of the lists is searched and modified at the same time. I could reinitialize the element iterator after the splice, but it's kind of ugly. I guess I'll do that for the time being.
{ "language": "en", "url": "https://stackoverflow.com/questions/143156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Have you moved MOSS SharePoint 2007 out of the C:\Inetpub\wwwroot\wss\ folder? Is it best practice to not use C:\Inetpub\wwwroot\wss\ for SharePoint? My concern is that the configuration wizard seems to look for this C: path and it may be too complicated to not use the default path(s).
A: What would be the reason for using an alternate location?
A: Having failed miserably in the past merely trying to change machine names on a VM after SharePoint was installed, it is hard to imagine a goal more likely to frustrate than this idea!
A: The only argument I've heard for not running IIS websites out of the Inetpub directory is that it's a commonly known location for evildoers to look at when attacking a system, but if security is your concern you're far past screwing the pooch if an attacker has file system access.
A: You should not be changing anything in the SharePoint IIS sites through IIS Manager, except through the SharePoint Central Admin site. There are dependencies in the SharePoint configuration that are not just stored in IIS, especially around the users that are applied for app pools, etc. This website does most of the things you need to do (i.e. host headers, etc.). So best practice is to create a folder in C:\Inetpub\wwwroot\wss\ that is easy to map to the web application and then leave the folder as is. Although it is hard to find stuff in the Central Admin site, the Infrastructure Update for SharePoint helps.
A: We've always let the configuration wizard pick that location for us. There are a lot of aspects of the underlying configuration that rely on that location and it's never seemed worthwhile to explore changing the home directory.
{ "language": "en", "url": "https://stackoverflow.com/questions/143162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I use xargs to copy files that have spaces and quotes in their names? I'm trying to copy a bunch of files below a directory, and a number of the files have spaces and single quotes in their names. When I try to string together find and grep with xargs, I get the following error:
find .|grep "FooBar"|xargs -I{} cp "{}" ~/foo/bar
xargs: unterminated quote
Any suggestions for a more robust usage of xargs? This is on Mac OS X 10.5.3 (Leopard) with BSD xargs.
A: Look into using the --null command-line option for xargs with the -print0 option in find.
A: For those who rely on commands other than find, e.g. ls:
find . | grep "FooBar" | tr \\n \\0 | xargs -0 -I{} cp "{}" ~/foo/bar
A: This is more efficient as it does not run "cp" multiple times:
find -name '*FooBar*' -print0 | xargs -0 cp -t ~/foo/bar
A: I ran into the same problem. Here's how I solved it:
find . -name '*FooBar*' | sed 's/.*/"&"/' | xargs cp ~/foo/bar
I used sed to substitute each line of input with the same line, but surrounded by double quotes. From the sed man page, "...An ampersand (``&'') appearing in the replacement is replaced by the string matching the RE..." -- in this case, .*, the entire line. This solves the xargs: unterminated quote error.
A: find | perl -lne 'print quotemeta' | xargs ls -d
I believe that this will work reliably for any character except line-feed (and I suspect that if you've got line-feeds in your filenames, then you've got worse problems than this). It doesn't require GNU findutils, just Perl, so it should work pretty much anywhere.
A: This method works on Mac OS X v10.7.5 (Lion):
find . | grep FooBar | xargs -I{} cp {} ~/foo/bar
I also tested the exact syntax you posted. That also worked fine on 10.7.5.
A: I have found that the following syntax works well for me.
find /usr/pcapps/ -mount -type f -size +1000000c | perl -lpe ' s{ }{\\ }g ' | xargs ls -l | sort +4nr | head -200
In this example, I am looking for the largest 200 files over 1,000,000 bytes in the filesystem mounted at "/usr/pcapps". The Perl one-liner between "find" and "xargs" escapes/quotes each blank so "xargs" passes any filename with embedded blanks to "ls" as a single argument.
A: Frame challenge: you're asking how to use xargs. The answer is: you don't use xargs, because you don't need it. The comment by user80168 describes a way to do this directly with cp, without calling cp for every file:
find . -name '*FooBar*' -exec cp -t /tmp -- {} +
This works because:
* The cp -t flag lets you give the target directory near the beginning of cp, rather than near the end. From man cp:
-t, --target-directory=DIRECTORY
copy all SOURCE arguments into DIRECTORY
* The -- flag tells cp to interpret everything after as a filename, not a flag, so files starting with - or -- do not confuse cp; you still need this because the -/-- characters are interpreted by cp, whereas any other special characters are interpreted by the shell.
* The find -exec command {} + variant essentially does the same as xargs. From man find:
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command, and (when find is being invoked from a shell) it should be quoted (for example, '{}') to protect it from interpretation by shells. The command is executed in the starting directory. If any invocation returns a non-zero value as exit status, then find returns a non-zero exit status.
If find encounters an error, this can sometimes cause an immediate exit, so some pending commands may not be run at all. This variant of -exec always returns true.
By using this in find directly, you avoid the need for a pipe or a shell invocation, so you don't need to worry about any nasty characters in filenames.
A: With Bash (not POSIX) you can use process substitution to get the current line inside a variable. This enables you to use quotes to escape special characters:
while read line ; do cp "$line" ~/bar ; done < <(find . | grep foo)
A: You can combine all of that into a single find command:
find . -iname "*foobar*" -exec cp -- "{}" ~/foo/bar \;
This will handle filenames and directories with spaces in them. You can use -name to get case-sensitive results. Note: The -- flag passed to cp prevents it from processing files starting with - as options.
A: Be aware that most of the options discussed in other answers are not standard on platforms that do not use the GNU utilities (Solaris, AIX, HP-UX, for instance). See the POSIX specification for 'standard' xargs behaviour. I also find the behaviour of xargs whereby it runs the command at least once, even with no input, to be a nuisance. I wrote my own private version of xargs (xargl) to deal with the problems of spaces in names (only newlines separate names), though the 'find ... -print0' and 'xargs -0' combination is pretty neat given that file names cannot contain ASCII NUL '\0' characters. My xargl isn't as complete as it would need to be to be worth publishing - especially since GNU has facilities that are at least as good.
A: For me, I was trying to do something a little different. I wanted to copy my .txt files into my tmp folder. The .txt filenames contain spaces and apostrophe characters. This worked on my Mac.
$ find . -type f -name '*.txt' | sed 's/'"'"'/\'"'"'/g' | sed 's/.*/"&"/' | xargs -I{} cp -v {} ./tmp/
A: Just don't use xargs.
It is a neat program, but it doesn't go well with find when faced with non-trivial cases. Here is a portable (POSIX) solution, i.e. one that doesn't require GNU-specific extensions to find, xargs or cp:
find . -name "*FooBar*" -exec sh -c 'cp -- "$@" ~/foo/bar' sh {} +
Note the ending + instead of the more usual ;. This solution:
* correctly handles files and directories with embedded spaces, newlines or whatever exotic characters;
* works on any Unix and Linux system, even those not providing the GNU toolkit;
* doesn't use xargs, which is a nice and useful program but requires too much tweaking and non-standard features to properly handle find output;
* is also more efficient (read: faster) than the accepted and most if not all of the other answers.
Note also that, despite what is stated in some other replies or comments, quoting {} is useless (unless you are using the exotic fish shell).
A: find . -print0 | grep --null 'FooBar' | xargs -0 ...
I don't know whether grep supports --null, nor whether xargs supports -0, on Leopard, but on GNU it's all good.
A: The easiest way to do what the original poster wants is to change the delimiter from any whitespace to just the end-of-line character like this:
find whatever ... | xargs -d "\n" cp -t /var/tmp
A: bill_starr's Perl version won't work well for embedded newlines (it only copes with spaces). For those on e.g. Solaris where you don't have the GNU tools, a more complete version might be (using sed):
find -type f | sed 's/./\\&/g' | xargs grep string_to_find
Adjust the find and grep arguments or other commands as you require, but the sed will fix your embedded newlines/spaces/tabs.
A: I used Bill Star's answer slightly modified on Solaris:
find . -mtime +2 | perl -pe 's{^}{\"};s{$}{\"}' > ~/output.file
This will put quotes around each line. I didn't use the '-l' option although it probably would help. The file list I was going through might have '-', but not newlines.
I haven't used the output file with any other commands, as I want to review what was found before I just start massively deleting them via xargs. A: If the find and xargs versions on your system don't support the -print0 and -0 switches (for example AIX find and xargs), you can use this terrible-looking code: find . -name "*foo*" | sed -e "s/'/\\\'/g" -e 's/"/\\"/g' -e 's/ /\\ /g' | xargs cp /your/dest Here sed will take care of escaping the spaces and quotes for xargs. Tested on AIX 5.3. A: I played with this a little, started contemplating modifying xargs, and realised that for the kind of use case we're talking about here, a simple reimplementation in Python is a better idea. For one thing, having ~80 lines of code for the whole thing means it is easy to figure out what is going on, and if different behaviour is required, you can just hack it into a new script in less time than it takes to get a reply on somewhere like Stack Overflow. See https://github.com/johnallsup/jda-misc-scripts/blob/master/yargs and https://github.com/johnallsup/jda-misc-scripts/blob/master/zargs.py. With yargs as written (and Python 3 installed) you can type: find .|grep "FooBar"|yargs -l 203 cp --after ~/foo/bar to do the copying 203 files at a time. (Here 203 is just a placeholder, of course, and using a strange number like 203 makes it clear that this number has no other significance.) If you really want something faster and without the need for Python, take zargs and yargs as prototypes and rewrite in C++ or C. A: I created a small portable wrapper script called "xargsL" around "xargs" which addresses most of the problems. Contrary to xargs, xargsL accepts one pathname per line. The pathnames may contain any character except (obviously) newline or NUL bytes. No quoting is allowed or supported in the file list - your file names may contain all sorts of whitespace, backslashes, backticks, shell wildcard characters and the like - xargsL will process them as literal characters, no harm done. 
As an added bonus feature, xargsL will not run the command at all if there is no input! Note the difference: $ true | xargs echo no data no data $ true | xargsL echo no data # No output Any arguments given to xargsL will be passed through to xargs. Here is the "xargsL" POSIX shell script: #! /bin/sh # Line-based version of "xargs" (one pathname per line which may contain any # amount of whitespace except for newlines) with the added bonus feature that # it will not execute the command if the input file is empty. # # Version 2018.76.3 # # Copyright (c) 2018 Guenther Brunthaler. All rights reserved. # # This script is free software. # Distribution is permitted under the terms of the GPLv3. set -e trap 'test $? = 0 || echo "$0 failed!" >& 2' 0 if IFS= read -r first then { printf '%s\n' "$first" cat } | sed 's/./\\&/g' | xargs ${1+"$@"} fi Put the script into some directory in your $PATH and don't forget to run $ chmod +x xargsL on the script there to make it executable. A: You might need to grep for the FooBar directory, like: find . -name "file.ext"| grep "FooBar" | xargs -i cp -p "{}" . A: If you are using Bash, you can convert stdout to an array of lines with mapfile: find . | grep "FooBar" | (mapfile -t; cp "${MAPFILE[@]}" ~/foobar) The benefits are: * *It's built-in, so it's faster. *It executes the command with all file names at once, so it's faster. *You can append other arguments to the file names. For cp, you can also: find . -name '*FooBar*' -exec cp -t ~/foobar -- {} + however, some commands don't have such a feature. The disadvantages: * *It may not scale well if there are too many file names. (The limit? I don't know, but I tested with a 10 MB list file containing 10,000+ file names with no problem, under Debian.) Well... who knows if Bash is available on OS X?
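An editorial aside: all of the quoting gymnastics in this thread exist because the shell re-parses filenames as words. If a shell pipeline is not a hard requirement, a scripting language sidesteps the problem entirely, because each filename travels as a single string value. Here is a minimal, hedged Python sketch of the same copy-files-matching-a-pattern task (the pattern and destination are placeholders, not anything from the answers above):

```python
import shutil
from pathlib import Path

def copy_matching(src_root, pattern, dest_dir):
    """Copy every file under src_root whose name contains `pattern`
    into dest_dir. Spaces, quotes and even newlines in filenames are
    safe because no shell word-splitting ever happens."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for path in Path(src_root).rglob("*"):
        if path.is_file() and pattern in path.name:
            shutil.copy(path, dest / path.name)
            copied.append(path.name)
    return copied
```

Like the find ... -exec cp ... {} + answer, this makes a single pass and never re-parses names, so there is nothing to escape.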
{ "language": "en", "url": "https://stackoverflow.com/questions/143171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "249" }
Q: How do I get the directory that a program is running from? Is there a platform-agnostic and filesystem-agnostic method to obtain the full path of the directory from where a program is running using C/C++? Not to be confused with the current working directory. (Please don't suggest libraries unless they're standard ones like clib or STL.) (If there's no platform/filesystem-agnostic method, suggestions that work in Windows and Linux for specific filesystems are welcome too.) A: Maybe concatenate the current working directory with argv[0]? I'm not sure if that would work in Windows but it works in Linux. For example: #include <stdio.h> #include <unistd.h> #include <string.h> int main(int argc, char **argv) { char the_path[256]; getcwd(the_path, 255); strcat(the_path, "/"); strcat(the_path, argv[0]); printf("%s\n", the_path); return 0; } When run, it outputs: jeremy@jeremy-desktop:~/Desktop$ ./test /home/jeremy/Desktop/./test A: For Win32 GetCurrentDirectory should do the trick. A: You cannot use argv[0] for that purpose: it usually contains the full path to the executable, but not necessarily - a process could be created with an arbitrary value in that field. Also mind you, the current directory and the directory with the executable are two different things, so getcwd() won't help you either. On Windows use GetModuleFileName(), on Linux read the /proc/<pid>/ files (for example, the /proc/self/exe symlink). A: This is from the cplusplus forum. On Windows: #include <string> #include <windows.h> std::string getexepath() { char result[ MAX_PATH ]; return std::string( result, GetModuleFileName( NULL, result, MAX_PATH ) ); } On Linux: #include <string> #include <limits.h> #include <unistd.h> std::string getexepath() { char result[ PATH_MAX ]; ssize_t count = readlink( "/proc/self/exe", result, PATH_MAX ); return std::string( result, (count > 0) ? 
count : 0 ); } On HP-UX: #include <string> #include <limits.h> #define _PSTAT64 #include <sys/pstat.h> #include <sys/types.h> #include <unistd.h> std::string getexepath() { char result[ PATH_MAX ]; struct pst_status ps; if (pstat_getproc( &ps, sizeof( ps ), 0, getpid() ) < 0) return std::string(); if (pstat_getpathname( result, PATH_MAX, &ps.pst_fid_text ) < 0) return std::string(); return std::string( result ); } A: Just my two cents, but doesn't the following code portably work in C++17? #include <iostream> #include <filesystem> namespace fs = std::filesystem; int main(int argc, char* argv[]) { std::cout << "Path is " << fs::path(argv[0]).parent_path() << '\n'; } Seems to work for me on Linux at least. Based on the previous idea, I now have: std::filesystem::path prepend_exe_path(const std::string& filename, const std::string& exe_path = ""); With implementation: fs::path prepend_exe_path(const std::string& filename, const std::string& exe_path) { static auto exe_parent_path = fs::path(exe_path).parent_path(); return exe_parent_path / filename; } And initialization trick in main(): (void) prepend_exe_path("", argv[0]); Thanks @Sam Redway for the argv[0] idea. And of course, I understand that C++17 was not around for many years when the OP asked the question. A: Just to belatedly pile on here,... there is no standard solution, because the languages are agnostic of underlying file systems, so as others have said, the concept of a directory-based file system is outside the scope of the C/C++ languages. On top of that, you want not the current working directory, but the directory the program is running in, which must take into account how the program got to where it is - i.e. was it spawned as a new process via a fork, etc. To get the directory a program is running in, as the solutions have demonstrated, requires that you get that information from the process control structures of the operating system in question, which is the only authority on this question. 
Thus, by definition, it's an OS-specific solution. A: #include <windows.h> using namespace std; // The directory path returned by native GetCurrentDirectory() has no end backslash string getCurrentDirectoryOnWindows() { const unsigned long maxDir = 260; char currentDir[maxDir]; GetCurrentDirectory(maxDir, currentDir); return string(currentDir); } A: If you want a standard way without libraries: No. The whole concept of a directory is not included in the standard. If you agree that some (portable) dependency on a near-standard lib is okay: Use Boost's filesystem library and ask for the initial_path(). IMHO that's as close as you can get, with good karma (Boost is a well-established, high-quality set of libraries) A: I know it is very late in the day to throw an answer at this one, but I found that none of the answers were as useful to me as my own solution. A very simple way to get the path from your CWD to your bin folder is like this: int main(int argc, char* argv[]) { std::string argv_str(argv[0]); std::string base = argv_str.substr(0, argv_str.find_last_of("/")); } You can now just use this as a base for your relative path. So for example I have this directory structure: main ----> test ----> src ----> bin and I want to compile my source code to bin and write a log to test, I can just add this line to my code. std::string pathToWrite = base + "/../test/test.log"; I have tried this approach on Linux using full paths, aliases etc. and it works just fine. NOTE: If you are on Windows you should use a '\' as the file separator, not '/'. You will have to escape this too, for example: std::string base = argv_str.substr(0, argv_str.find_last_of("\\")); I think this should work but haven't tested, so comments would be appreciated if it works, or a fix if not. 
A: Filesystem TS is now a standard ( and supported by gcc 5.3+ and clang 3.9+ ), so you can use the current_path() function from it: std::string path = std::experimental::filesystem::current_path(); In gcc (5.3+) to include Filesystem you need to use: #include <experimental/filesystem> and link your code with the -lstdc++fs flag. If you want to use Filesystem with Microsoft Visual Studio, then read this. A: For Windows systems at the console you can use the system("dir") command. The console then gives you information about the directory, etc. Read about the dir command in cmd. For Unix-like systems I don't know... if such a command is run, look at the bash commands; ls does not display the directory the same way... Example: int main() { system("dir"); system("pause"); //this waits for an Enter key press return 0; } A: Works starting from C++11, using experimental filesystem, and C++14-C++17 as well using the official filesystem. application.h: #pragma once // // https://en.cppreference.com/w/User:D41D8CD98F/feature_testing_macros // #ifdef __cpp_lib_filesystem #include <filesystem> #else #include <experimental/filesystem> namespace std { namespace filesystem = experimental::filesystem; } #endif std::filesystem::path getexepath(); application.cpp: #include "application.h" #ifdef _WIN32 #include <windows.h> //GetModuleFileNameW #else #include <limits.h> #include <unistd.h> //readlink #endif std::filesystem::path getexepath() { #ifdef _WIN32 wchar_t path[MAX_PATH] = { 0 }; GetModuleFileNameW(NULL, path, MAX_PATH); return path; #else char result[PATH_MAX]; ssize_t count = readlink("/proc/self/exe", result, PATH_MAX); return std::string(result, (count > 0) ? count : 0); #endif } A: Here's code to get the full path to the executing app: Variable declarations: char pBuf[256]; size_t len = sizeof(pBuf); Windows: int bytes = GetModuleFileName(NULL, pBuf, len); return bytes ? 
bytes : -1; Linux: int bytes = MIN(readlink("/proc/self/exe", pBuf, len), len - 1); /* MIN as in <sys/param.h> */ if(bytes >= 0) pBuf[bytes] = '\0'; return bytes; A: If you fetch the current directory when your program first starts, then you effectively have the directory your program was started from. Store the value in a variable and refer to it later in your program. This is distinct from the directory that holds the current executable program file. It isn't necessarily the same directory; if someone runs the program from a command prompt, then the program is being run from the command prompt's current working directory even though the program file lives elsewhere. getcwd is a POSIX function and supported out of the box by all POSIX-compliant platforms. You would not have to do anything special (apart from including the right headers: unistd.h on Unix and direct.h on Windows). Since you are creating a C program it will link with the default C runtime library, which is linked to by ALL processes in the system (specially crafted exceptions avoided), and it will include this function by default. The CRT is never considered an external library because that provides the basic standard-compliant interface to the OS. On Windows the getcwd function has been deprecated in favour of _getcwd. I think you could use it in this fashion. #include <stdio.h> /* defines FILENAME_MAX */ #ifdef WINDOWS #include <direct.h> #define GetCurrentDir _getcwd #else #include <unistd.h> #define GetCurrentDir getcwd #endif char cCurrentPath[FILENAME_MAX]; if (!GetCurrentDir(cCurrentPath, sizeof(cCurrentPath))) { return errno; } cCurrentPath[sizeof(cCurrentPath) - 1] = '\0'; /* not really required */ printf ("The current working directory is %s", cCurrentPath); 
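The distinction this answer draws between the current working directory and the executable's directory is easy to demonstrate in any language. A small illustrative Python sketch of the same two notions (not a substitute for the C code above, just the concept):

```python
import os
import sys

def cwd_vs_program_dir():
    """Return the pair the thread keeps distinguishing: the directory
    the process was started from, and the directory of its program file."""
    working_dir = os.getcwd()  # analogous to getcwd()/_getcwd()
    # sys.argv[0] may be relative; resolving it against the cwd mirrors
    # the argv[0]-concatenation trick from the first answer.
    program_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
    return working_dir, program_dir
```

Run the same script from two different directories and the first value changes while the second does not (as long as argv[0] is honest, which, as noted above, is not guaranteed).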
On Windows the GetModuleFileName() will return the full path to the executable file of the current process when the hModule parameter is set to NULL. I can't help with Linux. Also you should clarify whether you want the current directory or the directory in which the program image/executable resides. As it stands your question is a little ambiguous on this point. A: On Windows the simplest way is to use the _get_pgmptr function in stdlib.h to get a pointer to a string which represents the absolute path to the executable, including the executable's name. char* path; _get_pgmptr(&path); printf("%s", path); // Example output: C:/Projects/Hello/World.exe A: For relative paths, here's what I did. I am aware of the age of this question; I simply want to contribute a simpler answer that works in the majority of cases: Say you have a path like this: "path/to/file/folder" For some reason, Linux-built executables made in Eclipse work fine with this. However, Windows gets very confused if given a path like this to work with! As stated above there are several ways to get the current path to the executable, but the easiest way I find works a charm in the majority of cases is appending this to the FRONT of your path: "./path/to/file/folder" Just adding "./" should get you sorted! :) Then you can start loading from whatever directory you wish, so long as it is with the executable itself. EDIT: This won't work if you try to launch the executable from code::blocks if that's the development environment being used, as for some reason, code::blocks doesn't load stuff right... :D EDIT2: Some new things I have found are that if you specify a static path like this one in your code (Assuming Example.data is something you need to load): "resources/Example.data" If you then launch your app from the actual directory (or in Windows, you make a shortcut, and set the working dir to your app dir) then it will work like that. Keep this in mind when debugging issues related to missing resource/file paths. 
(Especially in IDEs that set the wrong working dir when launching a built exe from the IDE) A: A library solution (although I know this was not asked for). If you happen to use Qt: QCoreApplication::applicationDirPath() A: Path to the current .exe #include <Windows.h> std::wstring getexepathW() { wchar_t result[MAX_PATH]; return std::wstring(result, GetModuleFileNameW(NULL, result, MAX_PATH)); } std::wcout << getexepathW() << std::endl; // -------- OR -------- std::string getexepathA() { char result[MAX_PATH]; return std::string(result, GetModuleFileNameA(NULL, result, MAX_PATH)); } std::cout << getexepathA() << std::endl; A: This question was asked 15 years ago, so the existing answers are now incorrect. If you're using C++17 or greater, the solution is very straightforward today: #include <filesystem> std::cout << std::filesystem::current_path(); See cppreference.com for more information. A: On POSIX platforms, you can use getcwd(). On Windows, you may use _getcwd(), as use of getcwd() has been deprecated. For standard libraries, if Boost were standard enough for you, I would have suggested Boost::filesystem, but they seem to have removed path normalization from the proposal. You may have to wait until TR2 becomes readily available for a fully standard solution. A: Boost Filesystem's initial_path() behaves like POSIX's getcwd(), and neither does what you want by itself, but appending argv[0] to either of them should do it. You may note that the result is not always pretty--you may get things like /foo/bar/../../baz/a.out or /foo/bar//baz/a.out, but I believe that it always results in a valid path which names the executable (note that consecutive slashes in a path are collapsed to one). 
I previously wrote a solution using envp (the third argument to main()), which worked on Linux but didn't seem workable on Windows, so I'm essentially recommending the same solution as someone else did previously, but with the additional explanation of why it is actually correct even if the results are not pretty. A: As Minok mentioned, there is no such functionality specified in the C standard or the C++ standard. This is considered to be a purely OS-specific feature, and it is specified in the POSIX standard, for example. Thorsten79 has given a good suggestion: the Boost.Filesystem library. However, it may be inconvenient in case you don't want to have any link-time dependencies in binary form for your program. A good alternative I would recommend is the collection of 100% header-only STLSoft C++ libraries by Matthew Wilson (author of must-read books about C++). There is a portable facade, PlatformSTL, that gives access to the system-specific API: WinSTL for Windows and UnixSTL on Unix, so it is a portable solution. All the system-specific elements are specified with the use of traits and policies, so it is an extensible framework. A filesystem library is provided, of course. A: The Linux bash command 'which progname' will report the path to a program. Even if one could issue the which command from within your program, direct the output to a tmp file, and have the program subsequently read that tmp file, it would not tell you if that program is the one executing. It only tells you where a program having that name is located. What is required is to obtain your process id number and to parse out the path to the name. In my program I want to know if the program was executed from the user's bin directory, from another one in the path, or from /usr/bin. /usr/bin would contain the supported version. My feeling is that in Linux there is one solution that is portable. 
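On Linux specifically, there is no need to go through `which` and temp files at all: the kernel exposes the running program file as the /proc/self/exe symlink, which several answers above rely on. A hedged Python sketch of that one portable-on-Linux idea (Linux-only; it raises OSError elsewhere):

```python
import os

def linux_exe_path():
    """Resolve the running program's true path via /proc/self/exe.
    Unlike `which` or argv[0], this cannot name the wrong binary,
    because the kernel maintains the link for the current process."""
    return os.readlink("/proc/self/exe")
```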
A: Use realpath() in stdlib.h like this: char *working_dir_path = realpath(".", NULL); A: The following worked well for me on macOS 10.15.7 brew install boost main.cpp #include <iostream> #include <boost/filesystem.hpp> int main(int argc, char* argv[]){ boost::filesystem::path p{argv[0]}; p = absolute(p).parent_path(); std::cout << p << std::endl; return 0; } Compiling g++ -Wall -std=c++11 -l boost_filesystem main.cpp
{ "language": "en", "url": "https://stackoverflow.com/questions/143174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "285" }
Q: library- vs. application-version If you have a project that releases a library and an application, how do you handle version numbers between the two? Example: Your project delivers a library that converts different file formats into each other. The library is released for inclusion in other applications. But you also release a command-line application that uses this library and implements an interface to the functionality. New releases of the library lead to new releases of the application (to make use of all new features), but new releases of the application may not trigger new releases of the library. Now how are the version numbers handled: completely independently, or should library and application versions be coupled in some way? A: Completely independent version numbers, but the command-line (or any other dependent) app should say which version of the library it was compiled against in the help section or a banner. That way you will be able to tell which functionality the apps will have and reduce potential confusion, especially given that somebody could compile a newer app version against an old library for any reason. Also, you decouple them and can add features to the library without depending on the release of a new app version, and so on. If you are sure you will always want all the apps and the library to go in lockstep, then you could use the same numbers, but that's adding a constraint for no strong reason. A: I'd say use separate version numbers, and of course document what minimum library version is required for each release of the app. If they always have the same version number, and you only ever test the app against the equal-numbered library version, then they aren't really separate components, so don't say they are. Release the whole lot as one lump. If you make them separate, you can still give them the same version number when it's appropriate - for example after a major compatibility break you might release Version 2.0 of both simultaneously. 
The following example illustrates: xsltproc (a command-line app) is released as part of libxslt (a library), so it doesn't have its own version number. But libxslt depends on two other libraries, and the version numbers of those are independent. $ xsltproc --version Using libxml 20628, libxslt 10120 and libexslt 813 xsltproc was compiled against libxml 20628, libxslt 10120 and libexslt 813 libxslt 10120 was compiled against libxml 20628 libexslt 813 was compiled against libxml 20628 A: We built an application that uses a framework. We keep separate version numbers for both. This works well, especially now that the framework and application have grown large enough to be developed by different teams. So my opinion... keep the version numbers separate.
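The 'document what minimum library version is required' advice from the answers above can also be enforced at startup instead of merely documented. A small hedged sketch of such a check (the version strings are invented examples; compare components numerically, since plain string comparison would rank '1.10' below '1.2'):

```python
def meets_minimum(lib_version, minimum):
    """True if a dotted version string satisfies a minimum version.
    Components are compared as integers, so '1.10.0' >= '1.2.0'."""
    def to_tuple(v):
        return tuple(int(part) for part in v.split("."))
    return to_tuple(lib_version) >= to_tuple(minimum)
```

An app could run this against the version its library reports in the banner and refuse to start, or just warn, on a mismatch.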
{ "language": "en", "url": "https://stackoverflow.com/questions/143181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unit/Automated Testing in a workflow system Do you do automated testing on a complex workflow system like K2? We are building a system with extensive integration between SharePoint 2007 and K2. I can't even imagine where to start with automated testing as the workflow involves multiple users interacting with SharePoint, K2 workflows and custom web pages. Has anyone done automated testing on a workflow server like K2? Is it more effort than it's worth? A: I'm having a similar problem testing a workflow-heavy MOSS-based application. Workflows in our case are based on WF (Windows Workflow Foundation). My idea is to mock pretty much everything that you can't control from unit tests - document storage, authentication, user rights and actions, SharePoint-specific parts of the workflows (these mocks should be thoroughly tested to mirror the behavior of the real components). You use inversion of control to make the code choose which component to use at runtime - real or mock. Then you can write system-wide tests to test workflow behavior - setting up your own environment, checking how the workflow engine reacts. These tests are too big to call unit tests, but it is still automated testing. This approach seems to work in trivial cases, but I still have to prove it is worth using in real-world workflows. A: Here's the solution I use. It is a simple wrapper around the runtime that allows executing a single activity, simplifies passing the parameters, blocks the invoking thread until the workflow or activity is done, and translates/rethrows exceptions if any. Since my workflow only sends or waits for messages through a custom workflow service, I can mock out the service to expect certain messages from the workflow and post certain messages to it, and now I have real unit tests for my WF! The credit for the technique goes to Michael Kennedy. A: If you are going to do unit testing, Typemock Isolator is the only tool that can currently mock SharePoint objects. 
And by the way, Richard Fennell is working on a workflow mocking solution here. A: We've just today written an application that monitors our K2 worklist, picks up certain tasks from it, fills in some data and submits the tasks for completion. This is allowing us to perform automated testing, find regressions, and run through as many different paths of the workflow in a fraction of the time that it would take people to do it. I'd imagine a similar program could be written to pretend to be SharePoint. As for the unit testing of the workflow items themselves, we have a DLL referenced from K2 which contains all of our line rule and processing logic. We don't have any code in the K2 workflows themselves; it is all referenced from these DLLs. This allows us to easily write unit tests on them to test all of the individual line rules. A: I've done automated integration testing on K2 workflows using the K2ROM API (probably SourceCode.Workflow.Client if you're using K2 blackpearl). Basically you start a process on a test server with a known folio (I generate a GUID), then use the management API to delete it afterwards. I wrote helper methods like AssertAtClientActivity (basically calls ProvideWorkItem with criteria). Use the IsSynchronous parameter to StartProcessInstance, WorklistItem.Finish, etc. so that relevant method calls will not return until the process instance has reached a stable state. Expect tests to be slow and to occasionally fail. These are not unit tests. If you want to write unit tests against other systems, you'll probably want to wrap the K2 API. Consider looking at Windows Workflow 4 and the new workflow features in SharePoint 2010. You may not need K2.
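The common thread in these answers is the same testing seam: put the workflow engine's services behind an interface you control, then substitute a recording fake in tests. A minimal, hedged Python sketch of that pattern (the task names and methods are invented for illustration; real K2 or WF APIs look nothing like this):

```python
class RecordingWorklist:
    """Stand-in for a real worklist service: it records what the
    workflow asked it to do, so a test can assert on the sequence."""
    def __init__(self):
        self.completed = []

    def complete_task(self, task_name):
        self.completed.append(task_name)

def approve_document(worklist, needs_review):
    # The "workflow" under test: route through review only when needed.
    if needs_review:
        worklist.complete_task("review")
    worklist.complete_task("approve")
```

The production code receives the real service through the same parameter (inversion of control, as the first answer puts it), so nothing in the workflow logic knows it is being tested.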
{ "language": "en", "url": "https://stackoverflow.com/questions/143183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Getting a Linq-to-SQL query to show up on a GridView I have a pretty complicated Linq query that I can't seem to get into a LinqDataSource for use in a GridView: IEnumerable<ticket> tikPart = ( from p in db.comments where p.submitter == me.id && p.ticket.closed == DateTime.Parse("1/1/2001") && p.ticket.originating_group != me.sub_unit select p.ticket ).Distinct(); How can I get this into a GridView? Thank you! A: gridview.DataSource = tikPart.ToList(); gridview.DataBind(); A: @leppie - There is no need to call a ToList() on the IQueryable when attaching it as a data source. Provided your DataContext has not been disposed of prior to the DataBind method being called, ToList is a redundant call. By default a DataContext uses lazy-loading, so that the data is only fetched from the database when the IQueryable is enumerated. ToList() performs an enumeration and does the call; so does DataBind(). So you can do something like this: using(MyDataContext ctx = new MyDataContext()){ this.MyGridView.DataSource = from something in ctx.Somethings where something.SomeProperty == someValue select something; this.MyGridView.DataBind(); } How you dispose of your DataContext determines what to bind to a data source. You can then either use auto-generated columns on the GridView, so that every property in your returned object is turned into a column, or you can write the columns with the designer and set up the binding rules there. A: You can set up your GridView with no DataSource. Set up the GridView columns, and in code-behind bind that result to the grid view. A: You can bind an IQueryable<> type to a GridView using the LinqDataSource control. http://johnsobrepena.blogspot.com/2010/01/data-bind-coolgridview-to-iqueryable.html
{ "language": "en", "url": "https://stackoverflow.com/questions/143194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you obtain Current Window Handle Count and Window Handle Limit in .NET? I want to obtain the current number of window handles and the system-wide window handle limit in C#. How do I go about this? A: As Raymond Chen put it some time ago, if you're thinking about window handle limits, you're probably doing something wrong :) Anyway, I bet there's no special C# way to do it, because it's very system-specific. You can use the same functions that you would use in a C++ application. Call the functions using P/Invoke. To learn how to write the imports, go to pinvoke.net. Edit: As I understand your question, I assume you already know how to do that in a Win32 application. A: If you read Raymond Chen's post, you'll probably find it as annoying as I did. You're only "probably doing something wrong" because you're doing something Windows isn't capable of. In my application, the first time a user visits a tab page, I create and lay out all the controls on that page. This takes a noticeable amount of time - there can easily be 50 controls on a page. So I don't discard the controls on a tab page after populating it, if it's at all possible, and leave closing sets of tab pages up to the user. As it happens, some users never want to close any sets of tab pages. Why should I be forcing them to? With my UI, they can navigate very quickly to any one of the 300+ sets of transactions that they're responsible for managing. Their machines are fast enough, and have enough memory, to make this all very responsive. The only problem is that Windows can't support it. Why am I using controls, and not some other UI technology? Because they work. I need to support focus events, tab order, validation events, dynamic layout, and data binding - the users are actually managing thousands of records, in dozens of tables, in an in-memory DataSet. The amount of development I'd have to do to - say - implement something using windowless controls is astronomical. 
I'm only "doing it wrong" because Windows has a hard limit on the number of window handles that it can support. That hard limit is based on a bunch of decade-old assumptions about how a computer's UI might be built. It's not me who's "doing something wrong." At any rate, my solution to this is in two parts. First, a class that can tell you how many GDI and USER objects (which include window handles) your process is using: using System; using System.Runtime.InteropServices; namespace StreamWrite.Proceedings.Client { public class HWndCounter { [DllImport("kernel32.dll")] private static extern IntPtr GetCurrentProcess(); [DllImport("user32.dll")] private static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags); private enum ResourceType { Gdi = 0, User = 1 } public static int GetWindowHandlesForCurrentProcess() { IntPtr processHandle = GetCurrentProcess(); uint gdiObjects = GetGuiResources(processHandle, (uint)ResourceType.Gdi); uint userObjects = GetGuiResources(processHandle, (uint)ResourceType.User); return Convert.ToInt32(gdiObjects + userObjects); } } } Second, I maintain a least-recently-used cache of my tab page objects. The .NET framework doesn't provide a generic LRU cache class, so I built one, which you can get here if you need one. Every time the user visits a tab page, I add it to the LRU Cache. Then I check to see if I'm running low on window handles. If I am, I throw away the controls on the least-recently-used tab page, and keep doing that until I have enough window handles again. 
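The eviction bookkeeping described above is independent of .NET; the whole least-recently-used mechanism fits in a few lines. A hedged Python sketch of the idea (the author's linked C# class is not reproduced here; this just illustrates the shape):

```python
from collections import OrderedDict

class LruCache:
    """Keep at most `capacity` items, evicting the least recently
    used one. `on_evict` is the hook where the tab-page controls
    from the answer above would be thrown away."""
    def __init__(self, capacity, on_evict=None):
        self.capacity = capacity
        self.on_evict = on_evict
        self.items = OrderedDict()

    def touch(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
        self.items[key] = value
        while len(self.items) > self.capacity:
            old_key, old_value = self.items.popitem(last=False)
            if self.on_evict:
                self.on_evict(old_key, old_value)
```

In the answer's scheme, `capacity` would not be fixed: eviction would instead run while the object count reported by GetGuiResources stays too high.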
{ "language": "en", "url": "https://stackoverflow.com/questions/143206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Full screen mode in Silverlight Would it be possible to show an image in full screen mode using Silverlight? I'm looking for something like the full-screen option of Flash video players. A: You can set Application.Current.Host.Content.IsFullScreen = true; this has to be done from a mouse button event or a click; you can't force the user into full screen without some interaction on their part. Then you'll need to scale the image. If it's in an element that scales automatically, like a Grid cell where the Grid resizes automatically (like if it's the root element on the page and the page doesn't have a width or height specified), then you're good, but otherwise you'll need to handle the Application.Current.Host.Content.FullScreenChanged event and either resize or apply a scale transform to the image or its container to make it fill the screen, and do the same when you go back to non-full-screen mode. A: Set System.Windows.Interop.BrowserHost.IsFullScreen = true.
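The 'resize or apply a scale transform' step in the first answer reduces to one ratio: scale uniformly by the smaller of the width and height ratios so the whole image stays visible, and center it with the leftover margin. A small language-neutral sketch of that arithmetic in Python (in Silverlight the scale would feed a ScaleTransform; the function name is invented):

```python
def fit_scale(content_w, content_h, screen_w, screen_h):
    """Uniform scale that fits content inside the screen without
    cropping (letterboxing), plus the centering offsets."""
    scale = min(screen_w / content_w, screen_h / content_h)
    offset_x = (screen_w - content_w * scale) / 2
    offset_y = (screen_h - content_h * scale) / 2
    return scale, offset_x, offset_y
```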
{ "language": "en", "url": "https://stackoverflow.com/questions/143212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to display text using Quartz on the iPhone? I've been trying to display text using a Quartz context, but no matter what I've tried I simply haven't had luck getting the text to display (I'm able to display all sorts of other Quartz objects though). Does anybody know what I might be doing wrong? Example:

-(void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSelectFont(context, "Arial", 24, kCGEncodingFontSpecific);
    CGContextSetTextPosition(context, 80, 80);
    CGContextShowText(context, "hello", 6);
    // not even this works
    CGContextShowTextAtPoint(context, 1, 1, "hello", 6);
}

A: OK, I got it. First off, change your encoding mode to kCGEncodingMacRoman. Secondly, insert this line underneath it:

    CGContextSetTextMatrix(context, CGAffineTransformMake(1, 0, 0, -1, 0, 0));

This sets the conversion matrix for text so that it is drawn correctly. If you don't put that line in, your text will be upside down and back to front. No idea why this isn't the default. Finally, make sure you've set the right fill colour. It's an easy mistake to make if you forget to change from the backdrop colour to the text colour and end up with white-on-white text.

A: Here is a fragment of code that I'm using:

UIColor *mainTextColor = [UIColor whiteColor];
[mainTextColor set];
drawTextLjust(@"Sample Text", 8, 50, 185, 18, 16);

And:

static void drawTextLjust(NSString *text, CGFloat y, CGFloat left, CGFloat right, int maxFontSize, int minFontSize)
{
    CGPoint point = CGPointMake(left, y);
    UIFont *font = [UIFont systemFontOfSize:maxFontSize];
    [text drawAtPoint:point
             forWidth:right - left
             withFont:font
          minFontSize:minFontSize
       actualFontSize:NULL
        lineBreakMode:UILineBreakModeTailTruncation
   baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
}
{ "language": "en", "url": "https://stackoverflow.com/questions/143215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: cloning hierarchical data Let's assume I have a self-referencing hierarchical table built the classical way, like this one:

CREATE TABLE test (name text, id serial primary key, parent_id integer references test);

insert into test (name,id,parent_id) values
('root1',1,NULL),('root2',2,NULL),('root1sub1',3,1),('root1sub2',4,1),('root2sub1',5,2),('root2sub2',6,2);

testdb=# select * from test;
   name    | id | parent_id
-----------+----+-----------
 root1     |  1 |
 root2     |  2 |
 root1sub1 |  3 |         1
 root1sub2 |  4 |         1
 root2sub1 |  5 |         2
 root2sub2 |  6 |         2

What I need now is a function (preferably in plain SQL) that would take the id of a test record and clone all attached records (including the given one). The cloned records need to have new ids, of course. The desired result would look like this, for example:

Select * from cloningfunction(2);

   name    | id | parent_id
-----------+----+-----------
 root2     |  7 |
 root2sub1 |  8 |         7
 root2sub2 |  9 |         7

Any pointers? I'm using PostgreSQL 8.3.

A: Pulling this result in recursively is tricky (although possible). However, it's typically not very efficient and there is a much better way to solve this problem. Basically, you augment the table with an extra column which traces the tree to the top - I'll call it the "upchain". It's just a long string that looks something like this:

     name      | id | parent_id | upchain
 root1         |  1 | NULL      | 1:
 root2         |  2 | NULL      | 2:
 root1sub1     |  3 | 1         | 1:3:
 root1sub2     |  4 | 1         | 1:4:
 root2sub1     |  5 | 2         | 2:5:
 root2sub2     |  6 | 2         | 2:6:
 root1sub1sub1 |  7 | 3         | 1:3:7:

It's very easy to keep this field updated by using a trigger on the table. (Apologies for terminology, but I have always done this with SQL Server.) Every time you add or delete a record, or update the parent_id field, you just need to update the upchain field on that part of the tree. That's a trivial job because you just take the upchain of the parent record and append the id of the current record.
All child records are easily identified using LIKE to check for records with the starting string in their upchain. What you're doing, effectively, is trading a bit of extra write activity for a big saving when you come to read the data. When you want to select a complete branch in the tree it's trivial. Suppose you want the branch under node 1. Node 1 has an upchain '1:', so you know that any node in the branch of the tree under that node must have an upchain starting '1:...'. So you just do this:

SELECT * FROM table WHERE upchain LIKE '1:%'

This is extremely fast (index the upchain field, of course). As a bonus it also makes a lot of activities extremely simple, such as finding partial trees, level within the tree, etc. I've used this in applications that track large employee reporting hierarchies, but you can use it for pretty much any tree structure (parts breakdown, etc.)

Notes (for anyone who's interested):
* I haven't given a step-by-step of the SQL code, but once you get the principle, it's pretty simple to implement. I'm not a great programmer so I'm speaking from experience.
* If you already have data in the table you need to do a one-time update to get the upchains synchronised initially. Again, this isn't difficult as the code is very similar to the UPDATE code in the triggers.
* This technique is also a good way to identify circular references, which can otherwise be tricky to spot.

A: Joe Celko's method, which is similar to njreed's answer but more generic, can be found here:
* Nested-Set Model of Trees (at the middle of the article)
* Nested-Set Model of Trees, part 2
* Trees in SQL -- Part III

A: @Maximilian: You are right, we forgot your actual requirement. How about a recursive stored procedure?
I am not sure if this is possible in PostgreSQL, but here is a working SQL Server version:

CREATE PROCEDURE CloneNode
    @to_clone_id int,
    @parent_id int
AS
    SET NOCOUNT ON
    DECLARE @new_node_id int, @child_id int

    INSERT INTO test (name, parent_id)
    SELECT name, @parent_id FROM test WHERE id = @to_clone_id
    SET @new_node_id = @@IDENTITY

    DECLARE @children_cursor CURSOR
    SET @children_cursor = CURSOR FOR
        SELECT id FROM test WHERE parent_id = @to_clone_id
    OPEN @children_cursor
    FETCH NEXT FROM @children_cursor INTO @child_id
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXECUTE CloneNode @child_id, @new_node_id
        FETCH NEXT FROM @children_cursor INTO @child_id
    END
    CLOSE @children_cursor
    DEALLOCATE @children_cursor

Your example is accomplished by EXECUTE CloneNode 2, null (the second parameter is the new parent node).

A: This sounds like an exercise from "SQL For Smarties" by Joe Celko... I don't have my copy handy, but I think it's a book that'll help you quite a bit if this is the kind of problem you need to solve.
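Independent of the database engine, the recursive clone algorithm itself fits in a few lines. Here is a hedged Python sketch over an in-memory copy of the example table; the dict layout and the max-plus-one id allocation are illustrative assumptions standing in for the table and its serial sequence, not PostgreSQL code:

```python
# rows mirrors the test table: {id: (name, parent_id)}
rows = {
    1: ("root1", None), 2: ("root2", None),
    3: ("root1sub1", 1), 4: ("root1sub2", 1),
    5: ("root2sub1", 2), 6: ("root2sub2", 2),
}

def clone_node(rows, to_clone_id, new_parent_id=None):
    """Copy one node, then recursively copy its whole subtree."""
    new_id = max(rows) + 1               # stand-in for the serial sequence
    name, _ = rows[to_clone_id]
    rows[new_id] = (name, new_parent_id)
    # snapshot the child list before recursing, since rows grows
    children = [cid for cid, (_, pid) in sorted(rows.items())
                if pid == to_clone_id]
    for child_id in children:
        clone_node(rows, child_id, new_id)
    return new_id

new_root = clone_node(rows, 2)
print(new_root)  # 7
print(sorted(cid for cid, (_, pid) in rows.items() if pid == new_root))  # [8, 9]
```

This mirrors the SQL Server cursor procedure above: insert a copy of the node, remember its new id, and recurse over the original node's children.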
{ "language": "en", "url": "https://stackoverflow.com/questions/143226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Protection against automation One of our next projects is supposed to be an MS Windows based game (written in C#, with a WinForms GUI and an integrated DirectX display control) for a customer who wants to give away prizes to the best players. This project is meant to run for a couple of years, with championships, ladders, tournaments, player-vs-player action and so on. One of the main concerns here is cheating, as a player would benefit dramatically if he was able to - for instance - let a custom-made bot play the game for him (more in terms of strategy decisions than in terms of playing many hours). So my question is: what technical possibilities do we have to detect bot activity? We can of course track the number of hours played, analyze strategies to detect anomalies and so on, but as far as this question is concerned, I would be more interested in knowing details like:
* how to detect if another application makes periodical screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (position of controls etc.)?
* what other ways exist in which a cheater could gather information about the current game state, feed it to a bot and send the determined actions back to the client?
Your feedback is highly appreciated!

A: I wrote d2botnet, a .NET Diablo 2 automation engine, a while back, and something you can add to your list of things to watch out for is malformed/invalid/forged packets. I assume this game will communicate over TCP. Packet sniffing and forging are usually the first way games (online ones, anyway) are automated. I know Blizzard would detect malformed packets, something I tried to stay away from doing in d2botnet. So make sure you detect invalid packets. Encrypt them. Hash them. Do something to make sure they are valid. If you think about it, if someone knows exactly what every packet sent back and forth means, they don't even need to run the client software, which then makes any process-based detection a moot point. So you can also add in some sort of packet-based challenge-response that your client must know how to respond to.

A: Just an idea: what if the 'cheater' runs your software in a virtual machine (like VMware) and makes screenshots of that window? I doubt you can defend against that. You obviously can't defend against the 'analog gap', e.g. the cheater's system makes external screenshots with a high-quality camera - I guess it's only a theoretical issue. Maybe you should investigate chess sites. There is a lot of money in chess, and they don't like bots either - maybe they have come up with a solution already.

A: The best protection against automation is to not have tasks that require grinding. That being said, the best way to detect automation is to actively engage the user and require periodic CAPTCHA-like tests (except without the image and so forth). I'd recommend utilizing a database of several thousand simple one-off questions that get posed to the user every so often. However, based on your question, I'd say your best bet is to not implement the anti-automation features in C#. You stand very little chance of detecting well-written hacks/bots from within managed code, especially when all the hacker has to do is simply go into ring0 to avoid detection via any standard method. I'd recommend a Warden-like approach (a downloadable module that you can update whenever you feel like it) combined with a kernel-mode driver that hooks all of the Windows API functions and watches them for "inappropriate" calls. Note, however, that you're going to run into a lot of false positives, so you need to not base your banning system on your automated data. Always have a human look over it before banning.
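The "hash them" advice from the packet answer above can be sketched with an HMAC, so that a forged or tampered packet fails validation. The fixed key below is a simplification for illustration; a real game would negotiate a per-session secret at login:

```python
import hmac
import hashlib

SECRET_KEY = b"per-session secret"  # assumption: agreed during login

def sign_packet(payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the server can verify integrity."""
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_packet(packet: bytes):
    """Return the payload if the tag checks out, otherwise None."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):  # constant-time comparison
        return payload
    return None

packet = sign_packet(b"MOVE 10 20")
print(verify_packet(packet))       # b'MOVE 10 20'
tampered = b"MOVE 99 99" + packet[10:]
print(verify_packet(tampered))     # None
```

A signature only proves the sender knows the key, so it stops casual packet forging; a cracked client that still holds the key is exactly why the answers also suggest challenge-response and server-side validation.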
A: A common method of listening to keyboard and mouse input in an application is setting a Windows hook using SetWindowsHookEx. Vendors usually try to protect their software during installation so that hackers can't automate it or crack/find a serial for their application. Google the term "key loggers"... Here's an article that describes the problem and methods to prevent it.

A: I have no deeper understanding of how PunkBuster and such software works, but this is the way I'd go: intercept calls to the API functions that handle the memory stuff, like ReadProcessMemory, WriteProcessMemory and so on. You'd detect if your process is involved in the call, log it, and trampoline the call back to the original function. This should work for the screenshot taking too, but you might want to intercept the BitBlt function. Here's a basic tutorial concerning function interception: Intercepting System API Calls

A: You should look into what goes into PunkBuster, Valve Anti-Cheat, and some other anti-cheat stuff for some pointers. Edit: What I mean is, look into how they do it; how they detect that stuff.

A: I don't know the technical details, but the Internet Chess Club's BlitzIn program seems to have integrated program-switching detection. That's of course for detecting people running a chess engine on the side and not directly applicable to your case, but you may be able to extrapolate the approach to something like "if process X takes more than Z% CPU time over the next Y cycles, it's probably a bot running". That, in addition to a "you must not run anything else while playing the game to be eligible for prizes" clause as part of the contest rules, might work. Also, a draconian "we may decide at any time, for any reason, that you have been using a bot and disqualify you" rule also helps with the heuristic approach above (used in prized ICC chess tournaments). All these questions are easily addressed by the rule above:
* how to detect if another application makes periodical screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (position of controls etc.)?
I think a good way to make the problem harder for crackers is to keep the only authoritative copies of the game state on your servers, only sending updates to and receiving updates from the clients. That way you can embed client validation in the communication protocol itself (that the client hasn't been cracked and thus the detection rules are still in place). That, plus actively monitoring for new weird behavior, might get you close to where you want to be.
{ "language": "en", "url": "https://stackoverflow.com/questions/143231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Which are the "must follow" FxCop rules for any C# developer? I'm planning to start using FxCop in one of our ongoing projects. But when I tried it with all available rules selected, it looked like I would have to make lots of changes to my code. Being a "team member" I can't start making these changes right away, like naming-convention changes etc. Anyway, I would like to start using FxCop with a minimal rule set and gradually increase the rule set as we go on. Can you suggest some must-have FxCop rules that I should start following, or do you suggest a better approach? Note: Most of my code is in C#.

A: In my opinion, do the following: For any new project, follow all FxCop rules. You may want to disable some of them, since not everything will make sense for your project. For an existing project, follow the rules from these categories as a minimum set:
* Globalization
* Interoperability
* Security
* Performance
* Portability
There are typically only a few violations of these rules in an existing project, compared to the other categories, but fixing them may improve the quality of your application. When these rules are clear, try to fix the following categories:
* Design
* Usage
These will make it easier for you to spot bugs that have to do with the violations, but you will have a large number of violations in existing code. Always sort the violations by level/fix category and start with the critical ones. Skip the warnings for now. In case you didn't know, there's also StyleCop available from Microsoft, checking your code on the source level. Be sure to enable MSBuild integration during installation.

A: Some of the rules help us avoid bugs or leaks:
* Do not catch general exception types (maybe the best rule for us; depending on the case, it can be easy or difficult to enforce)
* Test for NaN correctly (easy to enforce)
* Disposable fields should be disposed (quite easy to enforce)
* Dispose should call base dispose (quite easy to enforce)
* Disposable types should declare finalizer (quite easy to enforce)
Some help us have a better design, but be careful: they may lead you into big refactorings when a central API is impacted. We like:
* Collection properties should be readonly (difficult to enforce in our case)
* Do not expose generic lists
* Members should not expose certain concrete types
* Review unused parameters (easily improves your API)
Someone on our project tried the performance rules with no improvement. (These rules are about micro-optimization, which gives no results unless bottleneck identification shows that micro-optimizing is needed.) I would suggest not starting with these ones.

A: An alternative to FxCop would be the tool NDepend, which lets you write Code Rules over C# LINQ Queries (namely CQLinq). Disclaimer: I am one of the developers of the tool. More than 200 code rules are proposed by default. Customizing existing rules or creating your own rules is straightforward thanks to the well-known C# LINQ syntax. NDepend overlaps with FxCop on some code rules, but proposes plenty of unique code rules.
Here are a few rules that I would classify as must-follow:
* Avoid decreasing code coverage by tests of types
* Avoid making complex methods even more complex (Source CC)
* Avoid transforming an immutable type into a mutable one
* Overrides of Method() should call base.Method()
* Avoid the Singleton pattern
* Types with disposable instance fields must be disposable
* Disposable types with unmanaged resources should declare finalizer
* Avoid namespaces mutually dependent
* Avoid namespaces dependency cycles
* UI layer shouldn't use directly DB types
* API Breaking Changes: Methods
* Complex methods partially covered by tests should be 100% covered
* Potentially dead Types
* Structures should be immutable
* Avoid naming types and namespaces with the same identifier
Notice that rules can be verified live in Visual Studio and at build time, in a generated HTML+JavaScript report.

A: Turn on one rule at a time. Fix or exclude any warnings it reports, then start on the next one.

A: The minimal FxCop (and also Code Analysis, if using VS2010 Premium or Ultimate) rule set is the following: http://msdn.microsoft.com/en-us/library/dd264893.aspx

A: On our most important code:
* Treat warnings as errors (level 4)
* FxCop must pass 100% (no ignores generally allowed)
* Gendarme used as a guideline (sometimes it conflicts with FxCop)
Believe it or not, FxCop teaches you a hell of a lot about how to write better code... great tool! So for us, all rules are equally important.

A: We're a web shop, so we drop the following rules:
* Anything with Interop (we don't support COM integration unless a client pays for it!)
* Key signing (web apps shouldn't need high security privileges)
Occasionally we'll drop the rule about using higher framework versions in dependencies, as some of our CMSes are still .NET 2.0, but that doesn't mean the DAL/business layers can't be .NET 3.5, as long as you're not trying to return an IQueryable (or anything .NET 3, 3.5).
A: In our process, we enabled all the rules and then we have to justify any suppressions as part of our review process. Often it's just not possible to fix the error in a time-efficient manner with regard to deadlines, or it's an error raised in error (this sometimes occurs - especially if your architecture handles plug-ins via reflection). We also wrote a custom rule for globalization to replace an existing one, because we didn't want to globalize the strings passed to exceptions. In general, I'd say it's best to try to adhere to all rules. In my current home project, I have four build configurations - one set that specifies the CODE_ANALYSIS define and one set that doesn't. That way, I can see all the messages I have suppressed just by building a non-CODE_ANALYSIS configuration. This means that suppressed messages can be periodically reviewed and potentially addressed or removed as required. What I'd like to do in the long run is have a build step that analyzes the SuppressMessage attributes against the actual errors and highlights those suppressions that are no longer required, but that's not currently possible with my setup.
{ "language": "en", "url": "https://stackoverflow.com/questions/143232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How to redirect data to stdin within a single executable? I am using CxxTest as the test framework for my C++ classes, and would like to figure out a way to simulate sending data to classes which would normally expect to receive it from standard input. I have several different files which I would like to send to the classes during different tests, so redirection from the command line to the test suite executable is not an option. Basically, what I would really like to do is find a way to redefine or redirect the 'stdin' handle to some other value that I create inside of my program, and then use fwrite() from these tests so that the corresponding fread() inside of the class pulls the data from within the program, not from the actual standard I/O handles associated with the executable. Is this even possible? Bonus points for a platform-independent solution, but at a very minimum, I need this to work with Visual Studio 9 under Windows.

A: You should be able to use freopen() to point stdin to an arbitrary file.

A: rdbuf does exactly what you want. You can open a file for reading and replace cin's rdbuf with the one from the file (see the link for an example using cout). On Unix-like OSes you could close the 0 file handle (stdin) and open another file; it will get the lowest available handle, which in this case would be 0. Or use one of the POSIX calls that do exactly this. I'm not sure, but this may also work on Windows.

A: I think you want your classes to use an input stream instead of std::cin directly. You'll want to either pass the input stream into the classes or set it on them via some method. You could, for example, use a stringstream to pass in your test input:

std::istringstream iss("1.0 2 3.1415");
some_class.parse_nums(iss, one, two, pi);

A: We can redirect cin so that it reads data from a file.
Here is an example:

#include <iostream>
#include <fstream>
#include <string>

int main()
{
    std::ifstream inputFile("Main.cpp");
    std::streambuf *inbuf = std::cin.rdbuf(inputFile.rdbuf());

    std::string str;
    // print the content of the file without whitespace characters
    while (std::cin >> str) {
        std::cout << str;
    }

    // restore the initial buffer
    std::cin.rdbuf(inbuf);
}

A: The appropriate method is to rewrite your classes so that they are testable. They should accept as a parameter the handle, stream or file from which they are supposed to read data - in your test framework, you can then mock in the stream or supply the path to the file containing the test data.
{ "language": "en", "url": "https://stackoverflow.com/questions/143233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Lua = operator as print In Lua, using the = operator without an l-value seems to be equivalent to a print(r-value); here are a few examples run in the Lua standalone interpreter:

> = a
nil
> a = 8
> = a
8
> = 'hello'
hello
> = print
function: 003657C8

And so on... My question is: where can I find a detailed description of this use of the = operator? How does it work? Is it by implying a special default l-value? I guess the root of my problem is that I have no clue what to type into Google to find info about it :-)

edit: Thanks for the answers, you are right, it's a feature of the interpreter. Silly question; for some reason I completely overlooked the obvious. I should avoid posting before the morning coffee :-) For completeness, here is the code dealing with this in the interpreter:

while ((status = loadline(L)) != -1) {
    if (status == 0) status = docall(L, 0, 0);
    report(L, status);
    if (status == 0 && lua_gettop(L) > 0) {  /* any result to print? */
        lua_getglobal(L, "print");
        lua_insert(L, 1);
        if (lua_pcall(L, lua_gettop(L)-1, 0, 0) != 0)
            l_message(progname, lua_pushfstring(L,
                "error calling " LUA_QL("print") " (%s)",
                lua_tostring(L, -1)));
    }
}

edit2: To be really complete, the whole trick about pushing values on the stack is in the "pushline" function:

if (firstline && b[0] == '=')  /* first line starts with `=' ? */
    lua_pushfstring(L, "return %s", b+1);  /* change it to `return' */

A: I think that must be a feature of the standalone interpreter. I can't make that work in anything I have compiled Lua into.

A: Quoting the man page: "In interactive mode ... If a line starts with '=', then lua displays the values of all the expressions in the remainder of the line. The expressions must be separated by commas."

A: I wouldn't call it a feature - the interpreter just returns the result of the statement. That's its job, isn't it?

A: Assignment isn't an expression that returns something in Lua like it is in C.
{ "language": "en", "url": "https://stackoverflow.com/questions/143234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Silverlight controls in ASP.NET application I need my ASP.NET web application to use Silverlight controls in my web page. Please let me know how to use them. Do I need to add any reference for them in Visual Studio 2005? Environment: .NET Framework 2.0 and ASP.NET 2.0.

A: A nice post over here which takes care of your problem. Cheers.
{ "language": "en", "url": "https://stackoverflow.com/questions/143281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How much memory do Enums take? For example, if I have an Enum with two cases, does it take more memory than a boolean? Languages: Java, C++

A: You would only worry about this when storing large quantities of enums. For Java, you may be able to use an EnumSet in some cases. It uses a bit vector internally, which is very space-efficient and fast. http://java.sun.com/j2se/1.5.0/docs/api/java/util/EnumSet.html

A: bool might be implemented as a single byte, but typically in a structure it would be surrounded by other elements that have alignment requirements, which would mean that the boolean would effectively occupy at least as much space as an int. Modern processors load data from main memory as a whole cache line, 64 bytes. The difference between loading one byte from L1 cache and loading four bytes is negligible. If you're trying to optimise for cache lines in a very high-performance application, then you might worry about how big your enum is, but generally I'd say it's clearer to define an enum than to use a boolean.

A: In Java, an enum is a full-blown class: "Java programming language enum types are much more powerful than their counterparts in other languages. The enum declaration defines a class (called an enum type). The enum class body can include methods and other fields." In order to see the actual size of each enum, let's make an actual enum and examine the contents of the class file it creates.
Let's say we have the following Constants enum class:

public enum Constants {
    ONE, TWO, THREE;
}

Compiling the above enum and disassembling the resulting class file with javap gives the following:

Compiled from "Constants.java"
public final class Constants extends java.lang.Enum{
    public static final Constants ONE;
    public static final Constants TWO;
    public static final Constants THREE;
    public static Constants[] values();
    public static Constants valueOf(java.lang.String);
    static {};
}

The disassembly shows that each field of the enum is an instance of the Constants enum class. (Further analysis with javap will reveal that each field is initialized by creating a new object by calling the new Constants(String) constructor in the static initialization block.) Therefore, we can tell that each enum field that we create will cost at least as much as the overhead of creating an object in the JVM.

A: In Java, it would take more memory. In C++, it would take no more memory than required for a constant of the same type (it's evaluated at compile time and has no residual significance at runtime). In C++, this means that the default type for an enum will occupy the same space as an int.

A: In ISO C++ there is no obligation for an enum to be larger than its largest enumerator requires. In particular, enum {TRUE, FALSE} may have a sizeof of 1 even when sizeof(bool)==sizeof(int). There is simply no requirement. Some compilers make enums the same size as an int. That is a compiler feature, which is allowed because the standard only imposes a minimum. Other compilers use extensions to control the size of an enum.

A: In Java, there should only be one instance of each of the values of your enum in memory. A reference to the enum then requires only the storage for that reference. Checking the value of an enum is as efficient as any other reference comparison.

A: printf("%zu", sizeof(enum MyEnum)); /* sizeof needs a named enum type; plain sizeof(enum) won't compile */

A: In C++ an enum is typically the same size as an int.
That said, it is not uncommon for compilers to provide a command-line switch to allow the size of the enum to be set to the smallest size that fits the range of values defined.

A: No, an enum is generally the same size as an int, same as boolean.

A: If your enum will ever have only two cases, using a boolean instead might indeed be a better idea (memory size, performance, usage/logic), even more so in Java. If you are wondering about memory cost, it might imply you plan to use a lot of them. In Java you can use the BitSet class, or on a smaller scale, in both languages you can manipulate bits with bitwise operations.

A: sizeof(enum) depends upon what you have in the enum. I was recently trying to find the size of an ArrayList with default constructor params and no objects stored inside (which means the capacity to store is 10). It turned out that an ArrayList is not too big, < 100 bytes. So the size of a very simple enum should be less than 10 bytes. You can write a small program, give it a certain amount of memory and then try allocating enums; you should be able to figure it out (that's how I found out the memory of ArrayList). BR, ~A

A: In C/C++ an enum will be the same size as an int. With gcc you can add __attribute__((packed)) to the enum definition to make it take the minimum footprint. If the largest value in the enum is < 256 this will be one byte, two bytes if the largest value is < 65536, etc.

typedef enum {
    MY_ENUM0,
    MY_ENUM1,
    MY_ENUM2,
    MY_ENUM3,
    MY_ENUM4,
    MY_ENUM5
} __attribute__((packed)) myEnum_e;
{ "language": "en", "url": "https://stackoverflow.com/questions/143285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Pseudo class :hover does not work in IE7 I've got this simple piece of code:

<div class="div1">
    <div class="div2">Foo</div>
    <div class="div3">
        <div class="div4">
            <div class="div5">
                Bar
            </div>
        </div>
    </div>
</div>

and this CSS:

.div1 {
    position: relative;
}
.div1 .div3 {
    position: absolute;
    top: 30px;
    left: 0px;
    width: 250px;
    display: none;
}
.div1:hover .div3 {
    display: block;
}
.div2 {
    width: 200px;
    height: 30px;
    background: red;
}
.div4 {
    background-color: green;
    color: #000;
}
.div5 {}

The problem is: when I move the cursor from .div2 to .div3, the hover is disabled (.div3 should stay visible because it's a child of .div1). I'm testing it in IE7; in FF it works fine. What am I doing wrong? I've also realized that when I remove the .div5 tag it works. Any ideas?

A: IE7 won't allow you to apply :hover pseudo-classes to non-anchor elements unless you explicitly specify a doctype. Just add a doctype declaration to your page and it should work perfectly.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

More on IE7/quirks mode can be found in this blog post.

A: I found that this solution worked better and was a bit cleaner:

<style type="text/css">
    * { color: #fff; }
    .wrapper { }
    .trigger { background: #223; }
    .appear {
        background: #334;
        display: none;
    }
    .trigger:hover .appear { display: block; }
</style>
</head>
<body>
    <div class="wrapper">
        <div class="trigger">
            <p>This is the trigger for the hover element.</p>
            <div class="appear">
                <p>I'm <strong>alive!</strong></p>
            </div>
        </div>
    </div>
</body>

pastebin.

A: Could it be the double margin problem? I had it happen with an li and fixed it with display: inline-block. http://www.positioniseverything.net/explorer/doubled-margin.html
{ "language": "en", "url": "https://stackoverflow.com/questions/143296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Create cronjob with Zend Framework I am trying to write a cronjob controller, so I can call one website and have all modules' cronjob.php executed. Now my problem is how do I do that? Would curl be an option, so I can also count the errors and successes? [Update] I guess I have not explained it enough. What I want to do is have one file which I can call like from http://server/cronjob and then make it execute every /application/modules/*/controller/CronjobController.php, or have another way of doing it so all the cronjobs aren't in one place but in the same place the module is located. This would offer me the advantage that if a module does not exist it does not try to run its cronjob. Now my question is how would you execute all the modules' CronjobControllers, or would you do it a completely different way so it still stays modular? And I want to be able to report how many cronjobs ran successfully and how many didn't. A: I would caution against making your cronjobs accessible to the public, because they could be triggered outside their normal times and, depending on what they do, cause problems (I know that is not what you intend, but by putting them into an actual controller they become reachable from the browser). For example, I have one cron that sends e-mails. I would be spammed constantly if someone found the cron URL and just began hitting it. What I did was make a cron folder and in there created a heartbeat.php which bootstraps Zend Framework (minus MVC) for me. It checks a database which has a list of all the installed cron jobs and, if it is time for them to run, generates an instance of the cron job's class and runs it. The cron jobs are just child classes of an abstract cron class that has methods like install(), run(), deactivate(), etc. To fire off my jobs I just have a simple crontab entry that runs every 5 minutes and hits heartbeat.php. So far it's worked wonderfully on two different sites. 
A: Someone mentioned this blog entry a couple days ago on fw-general (a mailing list which I recommend reading when you use the Zend Framework). There is also a proposal for Zend_Controller_Request_Cli, which should address this sooner or later. A: I have access to a dedicated server and I initially had a different bootstrap for the cron jobs. I eventually hated the idea, just wishing I could do this within the existing MVC setup and not have to bother about moving things around. I created a file cron.sh, saved it within my site root (not public), and in it I put a series of commands I would like to run. As I wanted to run many commands at once I wrote the PHP within my controllers as usual and added curl calls to those URLs within cron.sh. For example curl http://www.mysite.com/cron_controller/action Then on the cron interface I ran bash /path/to/cron.sh. As pointed out by others, your crons can be fired by anyone who guesses the URL, so there's always that caveat. You can solve that in many different ways. A: After some research and a lot of procrastination I came to the simple conclusion that a ZF-ized cron script should contain all the functionality of your Zend Framework app - without all the view stuff. I accomplished this by creating a new cronjobfoo.php file in my application directory. Then I took the bare minimum from: -my front controller (index.php) -my bootstrap.php I took out all the view stuff and focused on keeping the environment setup, db setup, autoloader, & registry setup. I had to take a little time to correct the document root variable and remove some of the OO functionality copied from my bootstrap. After that I just coded away.. in my case it was compiling and emailing out nightly reports. It was great to use Zend_Mail. When I was confident that my script was working the way I wanted, I just added it to my crontab. Good luck! A: For Zend Framework I am currently using the code outlined below. 
The script only includes the portal file index.php, where all the paths, environment and other Zendy code are bootstrapped. By defining a constant in the cron script we cancel the final step, where the application is run. This means the application is only set up, not even bootstrapped. At this point we start bootstrapping the resources we need, and that is that //public/index.php if(!defined('DONT_RUN_APP') || DONT_RUN_APP == false) { $application->bootstrap()->run(); } // application/../cron/cronjob.php define("DONT_RUN_APP",true); require(realpath('/srv/www/project/public/index.php')); $application->bootstrap('config'); $application->bootstrap('db'); //cron code follows A: Why not just create a crontab.php, including, or requiring, the index.php bootstrap file? Considering that the bootstrap is executing Zend_Loader::registerAutoload(), you can start working directly with the modules, for instance, myModules_MyClass::doSomething(); That way you are skipping the controllers. The controller's job is to control access via HTTP. In this case, you don't need the controller approach because you are accessing locally. A: Take a look at zf-cli: * *scripts at master from padraic/ZFPlanet - GitHub This handles all cron jobs well. A: Do you have filesystem access to the modules' directories? You could iterate over the directories and determine where a CronjobController.php is available. Then you could either use Zend_Http_Client to access the controller via HTTP or use an approach like Zend_Test_PHPUnit: simulate the actual dispatch process locally. A: You could set up a database table to hold references to the cronjob scripts (in your modules), then use an exec command with a return value on pass/fail. A: I extended gregor's answer with this post. This is what came out: //public/index.php // Run application, only if not started from command line (cli) if (php_sapi_name() != 'cli' || !empty($_SERVER['REMOTE_ADDR'])) { $application->run(); } Thanks gregor! 
A: My solution: * *curl /cron *Global cron method will include_once all controllers *Check whether each of the controllers has a ->cron method *If they have, run those. A public cron URL (for curl) is not a problem, there are many ways to avoid abuse. As said, checking the remote IP is the easiest. A: This is my way to run Cron Jobs with Zend Framework In Bootstrap I will keep environment setup as it is minus MVC: public static function setupEnvironment() { ... self::setupFrontController(); self::setupDatabase(); self::setupRoutes(); ... if (PHP_SAPI !== 'cli') { self::setupView(); self::setupDbCaches(); } ... } Also in Bootstrap, I will modify setupRoutes and add a custom route: public function setupRoutes() { ... if (PHP_SAPI == 'cli') { self::$frontController->setRouter(new App_Router_Cli()); self::$frontController->setRequest(new Zend_Controller_Request_Http()); } } App_Router_Cli is a new router type which determines the controller, action, and optional parameters based on this type of request: script.php controller=mail action=send. 
I found this new router here: Setting up Cron with Zend Framework 1.11 : class App_Router_Cli extends Zend_Controller_Router_Abstract { public function route (Zend_Controller_Request_Abstract $dispatcher) { $getopt = new Zend_Console_Getopt (array()); $arguments = $getopt->getRemainingArgs(); $controller = ""; $action = ""; $params = array(); if ($arguments) { foreach($arguments as $index => $command) { $details = explode("=", $command); if($details[0] == "controller") { $controller = $details[1]; } else if($details[0] == "action") { $action = $details[1]; } else { $params[$details[0]] = $details[1]; } } if($action == "" || $controller == "") { die("Missing Controller and Action Arguments == You should have: php script.php controller=[controllername] action=[action]"); } $dispatcher->setControllerName($controller); $dispatcher->setActionName($action); $dispatcher->setParams($params); return $dispatcher; } echo "Invalid command.\n", exit; echo "No command given.\n", exit; } public function assemble ($userParams, $name = null, $reset = false, $encode = true) { throw new Exception("Assemble isnt implemented ", print_r($userParams, true)); } } In CronController I do a simple check: public function sendEmailCliAction() { if (PHP_SAPI != 'cli' || !empty($_SERVER['REMOTE_ADDR'])) { echo "Program cannot be run manually\n"; exit(1); } // Each email sent has its status set to 0; Crontab runs a command of this kind: * * * * * php /var/www/projectname/public/index.php controller=name action=send-email-cli >> /var/www/projectname/application/data/logs/cron.log A: It doesn't make sense to run the bootstrap in the same directory or in cron job folder. I've created a better and easy way to implement the cron job work. Please follow the below things to make your work easy and smart: * *Create a cron job folder such as "cron" or "crobjob" etc. whatever you want. 
*Sometimes we need the cron job to run on a server with different interval like for 1 hr interval or 1-day interval that we can setup on the server. *Create a file in cron job folder like I created an "init.php", Now let's say you want to send a newsletter to users in once per day. You don't need to do the zend code in init.php. *So just set up the curl function in init.php and add the URL of your controller action in that curl function. Because our main purpose is that an action should be called on every day. for example, the URL should be like this: https://www.example.com/cron/newsletters So set up this URL in curl function and call this function in init.php in the same file. In the above link, you can see "cron" is the controller and newsletters is the action where you can do your work, in the same way, don't need to run the bootstrap file etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/143320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Which games include coding in gameplay? One title per answer. A: http://gr1d.org is a persistent online programming rpg, where you write your own agents, advance in levels and attack or defend other players A: RoboRally! A: A little Flash game I found the other day: http://www.gameroo.nl/games/light-bot Be careful, you will likely lose about an hour of your life ;) A: Lists and Lists: An interactive tutorial by Andrew Plotkin. "An introductory course in the Scheme programming language (a dialect of Lisp) presented as a text adventure - or, to put it another way, a Scheme interpreter with a wee scrap of text adventure wrapped around it. Since it's Z-code, and the first Z-code games were written in another Lisp variant, there's an odd circularity to it all." - Carl Muckenhoupt A: Years ago, I wasted way too much time on Omega A: Second Life A: Core War (http://en.wikipedia.org/wiki/Core_war) is the classic, where two programs run in a simulated machine, each trying to halt the other one. A: Robocode is a great way to learn Java and have fun doing it. You write Java code to program a robot, which then battles it out against one or more other robots. It's years ago I tried it, but I remember having great fun doing it. We even programmed robots at work (in between the real work ;) ) and had a small competition going to see who could come up with the best algorithms. Recommended! A: Life?  A: Crobots A: Discover fundamentals of computer programming by playing a board game; c-jump helps children to learn basics of programming languages, such as C, C++ and Java. http://www.c-jump.com/ A: GNU Robots (http://en.wikipedia.org/wiki/GNU_Robots) is a great way to get stuck into Lisp. A: Starship Soccer (http://www.geocities.com/siliconvalley/horizon/8596/StarshipSoccer.html) pits C++ controlled teams against each other, playing a mix of Space War and football. A: Good list on Wikipedia! 
Programming games of note include Core War, Robocode, RoboWar, Robot Battle, Crobots and AI Wars. Final Fantasy XII also includes some elements of a programming game, as the player creates the AI of his characters, although the player can also choose to directly control the action. A: Epsitec CeeBot A: Microsoft's Terrarium, which involved programming the intelligence of a creature using any .Net language, has now been open sourced to CodePlex
{ "language": "en", "url": "https://stackoverflow.com/questions/143322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Displaying controls on an alpha-blended form I tried the Visual C# Kicks code for an alpha-blended form. This works (as soon as I remove the TransparencyKey property); that is, I can use the W3C's PNG alpha test image and see other windows underneath, but it makes all controls on the form invisible. Presumably, they simply aren't painted, as OnPaint is overridden. I tried calling the superclass's OnPaint: Protected Overrides Sub OnPaint(ByVal e As System.Windows.Forms.PaintEventArgs) UpdateFormDisplay() MyBase.OnPaint(e) End Sub , but this didn't change anything. (Calling MyBase.OnPaint first doesn't make any difference either.) Unfortunately most articles about alpha-blended forms focus on pure splash screens without any controls on them, but we need a panel which first shows sign-in fields, then a progress bar. The controls, by the way, do not need transparency; it's only on the outer edges that the PNG's transparency truly matters. So faking this by adding another form on top of this all (with the two always moving in tandem) might suffice, but I'd prefer a smoother solution. A: Try putting this in your form ctor after InitializeComponent(); base.SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.AllPaintingInWmPaint | ControlStyles.UserPaint, true);
{ "language": "en", "url": "https://stackoverflow.com/questions/143338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Are there any frameworks for handling database requests in swing applications? I believe any programmer who has been dealing with database requests in a GUI application has run into some or all of the following problems: * *Your GUI freezes because you call the database layer from within the event dispatch thread *When you have multiple windows/panels/jframes where the user can start a db request, your performance degrades because you don't have any control over the threads your user creates *The user may be able to lock up the application and even the database, because they can trigger an action many times before the first action has finished What I'd like to know: are there any frameworks for handling an ordered set of long-running actions (including but not limited to database calls, e.g. calculations) outside the event dispatch thread? Note: I know of SwingWorker ;-) A: Naked Objects facilitates a clean domain model and it also has a GUI-to-DB mapping layer -- http://www.nakedobjects.org/home/index.shtml A: I doubt you will find something specific for database requests. You can try to reuse existing generic task scheduling libraries. An example is the Eclipse jobs API. This does not depend on the IDE. See http://www.eclipse.org/articles/Article-Concurrency/jobs-api.html A: Such a thing can be found in NetBeans, for example. See RequestProcessor. But in simpler cases this is not required. Last time I needed something like thread scheduling and control I simply used the new concurrency packages included in Java 5 (I used Java 6). With its Executors factory methods you can achieve basic control over tasks. You can also use some queues. This PDF can help. The PDF is written in Slovak, but the Single/Multiple task workers there are written in Java ;)
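To sketch the kind of control the question asks for without a full framework: a single-threaded ExecutorService gives you an ordered queue of long-running actions off the event dispatch thread, no matter how many windows submit work. This is only an outline under assumed names (DbTaskQueue and the two callbacks are invented, not part of any library):

```java
import javax.swing.SwingUtilities;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DbTaskQueue {
    // One worker thread: submitted actions run off the EDT, strictly in
    // submission order, so repeated clicks cannot spawn competing queries.
    private static final ExecutorService QUEUE = Executors.newSingleThreadExecutor();

    /** Runs dbCall on the worker thread, then uiUpdate back on the EDT. */
    public static void submit(Runnable dbCall, Runnable uiUpdate) {
        QUEUE.execute(() -> {
            dbCall.run();                         // long-running work
            SwingUtilities.invokeLater(uiUpdate); // publish result to the GUI
        });
    }
}
```

Because all requests funnel through one queue, the "user clicks five times and locks the database" scenario degrades into five requests executed one after another, which you can additionally deduplicate or cancel if needed.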
{ "language": "en", "url": "https://stackoverflow.com/questions/143341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to control mobile softkeys from Flash application embedded in HTML I have a Flash application built with Flash 9 (CS3). The application is able to control the softkeys when it is loaded directly on a supported mobile device, but it loses that control when it is embedded in an HTML page and browsed on the same device. Any ideas how to make this work? Thanks Keerthi A: There is no special way to receive soft key events when embedded in HTML - if the browser/OS gives the events to Flash, then you can catch them like any other key event: var myListener = new Object(); myListener.onKeyDown = function() { var code = Key.getCode(); if (code==ExtendedKey.SOFT1) { trace("I got a soft key event"); } } Key.addListener(myListener); However, you'll find that most phones/browsers will not give you soft key events when your SWF is embedded in HTML. This isn't part of the Flash Lite spec - strictly speaking I believe they could give you those events if they wanted to, but most phones simply use those keys for browser functions, and consume them before they get to Flash. Note that you can check at runtime whether or not softkeys are available: trace(System.capabilities.hasMappableSoftKeys); trace(System.capabilities.softKeyCount); A: If you use a switch statement, you can have more than one keycode associated with an action, so you can make a desktop version for testing too. I have done it myself.
{ "language": "en", "url": "https://stackoverflow.com/questions/143365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Recommended HTML formatter script/utility? Simple question - I've got a bucketload of cruddy HTML pages to clean up and I'm looking for an open source or freeware script/utility to remove any junk and reformat them into nicely laid out, consistent code. Any recommendations? If it's relevant, I generally manipulate HTML inside Dreamweaver - but by editing the code and using the wysiwyg window as preview rather than vice versa - so a Dreamweaver compatible script would be a plus. A: I second HTML Tidy. I just wanted to add that it is a library with various ports and bindings. As such it is also integrated in some editors like HTML-Kit or NoteTab, and it has a GUI front end. All these are linked in the page given above. Note also that the W3C Markup Validation Service has an option to "Clean up Markup with HTML Tidy" (after validation result display). A: I don't think it plugs into Dreamweaver, but whenever I need HTML cleaned up, HTML Tidy is my go-to guy A: Dreamweaver CS3 has a built-in "Clean up HTML" choice under the "Commands" menu item. I don't think it is nearly as comprehensive as HTML Tidy though. From the Adobe site: Clean up code You can automatically remove empty tags, combine nested font tags, and otherwise improve messy or unreadable HTML or XHTML code. For information on how to clean up HTML generated from a Microsoft Word document, see Open and edit existing documents. * *Open a document: * *If the document is in HTML, select Commands > Clean Up HTML. *If the document is in XHTML, select Commands > Clean Up XHTML. -- For an XHTML document, the Clean Up XHTML command fixes XHTML syntax errors, sets the case of tag attributes to lowercase, and adds or reports the missing required attributes for a tag in addition to performing the HTML cleanup operations. *In the dialog box that appears, select any of the options, and click OK. 
-- Note: Depending on the size of your document and the number of options selected, it may take several seconds to complete the cleanup. Remove Empty Container Tags Removes any tags that have no content between them. For example, <b></b> and <font color="#FF0000"></font> are empty tags, but the <b> tag in <b>some text</b> is not. Remove Redundant Nested Tags Removes all redundant instances of a tag. For example, in the code <b>This is what I <b>really</b> wanted to say</b>, the b tags surrounding the word really are redundant and would be removed. Remove Non-Dreamweaver HTML Comments Removes all comments that were not inserted by Dreamweaver. For example, <!--begin body text--> would be removed, but <!-- TemplateBeginEditable name="doctitle" --> wouldn’t, because it’s a Dreamweaver comment that marks the beginning of an editable region in a template. Remove Dreamweaver Special Markup Removes comments that Dreamweaver adds to code to allow documents to be automatically updated when templates and library items are updated. If you select this option when cleaning up code in a template-based document, the document is detached from the template. For more information, see Detach a document from a template. Remove Specific Tag(s) Removes the tags specified in the adjacent text box. Use this option to remove custom tags inserted by other visual editors and other tags that you don’t want to appear on your site (for example, blink). Separate multiple tags with commas (for example, font,blink). Combine Nested <font> Tags When Possible Consolidates two or more font tags when they control the same range of text. For example, <font size="7"><font color="#FF0000">big red</font></font> would be changed to <font size="7" color="#FF0000">big red</font>. Show Log On Completion Displays an alert box with details about the changes made to the document as soon as the cleanup is finished. A: I use the HTML Formatter...it does exactly what you are looking for. 
A: I definitely think the best tool out there is the HTML Formatter from Logichammer.com. It does exactly what you need and is dead simple to use. Worth it to check out...the guy even has a video on his site showing how easy it is to use. I've been using it for two years now and couldn't live with out it...I get lots of messy code. A: I use Cleanup HTML it does the job well cleaning and formatting HTML A: I would suggest purehtml.in...it beautifies html, style and JavaScript tags... A: You can even buffer your existing HTML through HTML Tidy before it reaches the browser - if it's a low traffic site, then this will make things neat without any effort. A: I too recommend HTML Tidy, whilst its not maintained by Dave Ragett anymore the tool is definitely being updated frequently with tweaks. I use HTML Trim which is a win32 app to cleanup some awful autogenerated blobs of code that some of our devs knock up. You can also grab the command line version which you may able to integrate into Dreamweaver. Sorry i cant post more than one hyperlink - still a n00b here. A: I've been using Polystyle for a long time, and I'm quite happy. It's fairly flexible about formatting rules and costs around $15. A trial version is available. A: I would recommend vim. You could format a block of code with v to select the block and '=' to indent the code.
{ "language": "en", "url": "https://stackoverflow.com/questions/143367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: CALL -151 What did it do on the APPLE ][ A long time ago I had an Apple ][. I remember the command CALL -151, but I cannot remember what it did. A: May I also add that -151 is the Apple ][ way of expressing a hex number, which here means $FF69 (Apple II hex syntax, i.e. 0xFF69). CALL is an Apple BASIC command that invokes an assembly subroutine at the address given by the argument (-151 here). IIRC, this command can accept an address as a negative decimal value for addresses between $8000-$FFFF using 2's complement interpretation. For those who are interested in history, here is the Apple ]['s monitor ROM listing (in 6502 assembly); address $FF69 has the label MONZ, which is the start of the command prompt that processes machine code commands from the user. One that uses a '*' as the prompt. A very primitive command prompt. Apple II System Monitor A: CALL -151 Enter the machine code monitor - http://www.skepticfiles.org/cowtext/apple/memorytx.htm Update: That link appears to be dead, here's a Wayback Machine alternative: http://web.archive.org/web/20090315100335/http://www.skepticfiles.org/cowtext/apple/memorytx.htm Here's the full article just in case Wayback goes away: APPLE CALL, PEEK, POKE LIST ------------------------------------------------------------------------------ CALL -144 SCAN THE INPUT BUFFER CALL -151 ENTER THE MONITOR NORMALLY CALL -155 ENTER THE MONITOR & SOUND BELL CALL -167 ENTER MONITOR AND RESET CALL -198 RING BELL (SIMULATE CONTROL G) CALL -211 PRINT "ERR" AND RING BELL CALL -259 READ FROM TAPE CALL -310 WRITE TO TAPE CALL -321 DISPLAYS A, S, Y, P, & S REGISTERS CALL -380 SET NORMAL VIDEO MODE CALL -384 SET INVERSE VIDEO MODE CALL -415 DISASSEMBLE 20 INSTRUCTIONS CALL -458 VERIFY (COMPARE & LIST DIFFERENCES) CALL -468 MEMORY MOVE AFTER POKING 60,61 OLD START - 62,63 OLD END 64,65 NEW END - 66,67 NEW STAR CALL -484 MOVE CALL -517 DISPLAY
CHARACTER & UPDATE SCREEN LOCATION CALL -531 DISPLAY CHARACTER, MASK CONTROL CHAR., & SAVE 7 REG. & ACCU CALL -550 DISPLAY HEX VALUE OF A-REGISTER (ACCUMULATOR) CALL -656 RING BELL AND WAIT FOR A CARRIAGE RETURN CALL -657 GET LINE OF INPUT, NO PROMPT, NO L/F, & WAIT(COMMA,COLON OK CALL -662 GET LINE OF INPUT, WITH PROMPT, NO L/F, & WAIT CALL -665 GET LINE OF INPUT, WITH PROMPT, LINE FEED, & WAIT THE ABOVE 3 CALLS (-657, -662, -665) REFER TO THE INPUT BUFFER FROM 512-767 CALL -715 GET CHARACTER CALL -756 WAIT FOR KEY PRESS CALL -856 TIME DELAY (POKE 69,XX TO SET TIME OF DELAY) CALL -868 CLEARS CURSOR LINE FROM CURSOR TO END OF LINE CALL -912 SCROLLS TEXT UP 1 LINE CALL -922 LINE FEED CALL -936 CLEAR SCREEN (HOME) CALL -958 CLEAR SCREEN FROM CURSOR TO BOTTOM OF SCREEN CALL -998 MOVES CURSOR UP 1 LINE CALL -1008 MOVES CURSOR BACKWARD 1 SPACE CALL -1024 DISPLAY CHARACTER ONLY CALL -1036 MOVES CURSOR FORWARD 1 SPACE CALL -1063 SEND BELL TO CURRENT OUTPUT DEVICE CALL -1216 TEXT & GRAPHICS MODE CALL -1233 MOVE CURSOR TO BOTTOM OF SCREEN CALL -1321 CONTROL E CALL -1717 MOVES CURSOR DOWN 5 LINES CALL -1840 DISASSEMBLE 1 INSTRUCTION CALL -1953 CHANGE COLOR BY +3 CALL -1994 CLEAR LO-RES SCREEN (TOP 40 LINES) CALL -1998 CLEAR GRAPHIC SCREEN (LO-RES) CALL -2007 VERTICAL LINE CALL -2023 HORIZONTAL LINE CALL -2458 ENTER MINI ASSEMBLER CALL -3100 TURNS ON HIRES PAGE 1, WITHOUT CLEARING IT CALL -3776 SAVE INTEGER CALL -3973 LOAD INTEGER CALL -6090 RUN INTEGER CALL -8117 LIST INTEGER CALL -8189 ENTER BASIC & CONTINUE CALL -8192 ENTER BASIC AND RESET (INTEGER BASIC KILL) CALL -16303 TEXT MODE CALL -16304 GRAPHICS MODE CALL -16336 TOGGLE SPEAKER CALL 42350 CATALOGS DISK CALL 54915 CLEANS STACK, CLEARS THE "OUT OF MEMORY" ERROR CALL 64166 INITIATES A COLD START (BOOT OF THE DISK) CALL 64246 BRAND NEW-YOU FIGURE IT OUT CALL 64367 SCANS MEMORY LOC 1010 & 1011 & POKES VALUE INTO LOCATIONS 1012 THAT IS EQUAL TO (PEEK(1011)-165) 
------------------------------------------------------------------------------ PEEK 33 WIDTH OF TEXT WINDOW (1-40) PEEK 34 TOP EDGE OF TEXT WINDOW (0-22) PEEK 35 BOTTOM OF TEXT WINDOW (1-24) PEEK 36 HORIZONTAL CURSOR POSITION (0-39) PEEK 37 VERTICAL CURSOR POSITION (0-23) PEEK 43 BOOT SLOT X 16 (AFTER BOOT) PEEK 44 END POINT OF LAST HLIN, VLIN, OR PLOT PEEK 48 LO-RES COLOR VALUE X 17 PEEK 50 TEXT OUTPUT FORMAT: 63=INVERSE 255=NORMAL 127=FLASH ( WITH PEEK 243 SET TO 64) PEEK 51 PROMPT CHARACTER PEEK 74,75 LOMEM ADDRESS (INT) PEEK 76,77 HIMEM ADDRESS (INT) PEEK 103,104 FP PROGRAM STARTING ADDRESS PEEK 104 IF 8 IS RETURNED, THEN FP IS IN ROM PEEK 105,106 FP VARIABLE SPACE STARTING ADDRESS PEEK 107,108 FP ARRAY STARTING ADDRESS PEEK 109,110 FP END OF NUMERIC STORAGE ADDRESS PEEK 111,112 FP STRING STORAGE STARTING ADDRESS PEEK 115,116 FP HIMEM ADDRESS PEEK 117,118 FP LINE NUMBER BEING EXECUTED PEEK 119,120 FP LINE WHERE PROGRAM STOPPED PEEK 121,122 FP LINE BEING EXECUTED ADDRESS PEEK 123,124 LINE WHERE DATA BEING READ PEEK 125,126 DATA LOCATION ADDRESS PEEK 127,128 INPUT OR DATA ADDRESS PEEK 129,130 FP LAST USED VARIABLE NAME PEEK 131,132 FP LAST USED VARIABLE ADDRESS PEEK 175,176 FP END OF PROGRAM ADDRESS PEEK 202,203 INT PROGRAM STARTING ADDRESS PEEK 204,205 INT END OF VARIABLE STORAGE PEEK 214 FP RUN FLAG (AUTO-RUN IF >127) PEEK 216 ONERR FLAG (>127 IF ONERR IS ACTIVE) PEEK 218,219 LINE WHERE ONERR OCCURED PEEK 222 ONERR ERROR CODE PEEK 224,225 X-COORDINATE OF LAST HPLOT PEEK 226 Y-COORDINATE OF LAST HPLOT PEEK 228 HCOLOR VALUE 0=0 85=2 128=4 213=6 42=1 127=3 170=5 255=7 PEEK 230 HI-RES PLOTING PAGE (32=PAGE 1 64=PAGE 2 96=PAGE 3) PEEK 231 SCALE VALUE PEEK 232,233 SHAPE TABLE STARTING ADDRESS PEEK 234 HI-RES COLLISION COUNTER PEEK 241 256 MINUS SPEED VALUE PEEK 243 FLASH MASK (64=FLASH WHEN PEEK 50 SET TO 127) PEEK 249 ROT VLAUE PEEK 976-978 DOS RE-ENTRY VECTOR PEEK 1010-1012 RESET VECTOR PEEK 1013-1015 AMPERSAND (&) VECTOR PEEK 1016-1018 CONTROL-Y VECTOR PEEK 
43140-43271 DOS COMMAND TABLE
PEEK 43378-43582 DOS ERROR MESSAGE TABLE
PEEK 43607 MAXFILES VALUE
PEEK 43616,43617 LENGTH OF LAST BLOAD
PEEK 43624 DRIVE NUMBER
PEEK 43626 SLOT NUMBER
PEEK 43634,43635 STARTING ADDRESS OF LAST BLOAD
PEEK 43697 MAXFILES DEFAULT VALUE
PEEK 43698 DOS COMMAND CHARACTER
PEEK 43702 BASIC FLAG (0=INT 64=FP ROM 128=FP RAM)
PEEK 44033 CATALOG TRACK NUMBER (17 IS STANDARD)
PEEK 44567 NUMBER OF CHARACTERS MINUS 1 IN CATALOG FILE NAMES
PEEK 44611 NUMBER OF DIGITS MINUS 1 IN SECTOR AND VOLUME NUMBERS
PEEK 45991-45998 FILE-TYPE CODE TABLE
PEEK 45999-46010 DISK VOLUME HEADING
PEEK 46017 DISK VOLUME NUMBER
PEEK 46064 NUMBER OF SECTORS (13=DOS 3.2 16=DOS 3.3)
PEEK 49152 READ KEYBOARD (IF >127 THEN KEY HAS BEEN PRESSED)
PEEK 49200 TOGGLE SPEAKER (CLICK)
PEEK 49248 CASSETTE INPUT (>127=BINARY 1)
PEEK 49249 PADDLE 0 BUTTON (>127 IF BUTTON PRESSED)
PEEK 49250 PADDLE 1 BUTTON (>127 IF BUTTON PRESSED)
PEEK 49251 PADDLE 2 BUTTON (>127 IF BUTTON PRESSED)
PEEK 49252 READ GAME PADDLE 0 (0-255)
PEEK 49253 READ GAME PADDLE 1 (0-255)
PEEK 49254 READ GAME PADDLE 2 (0-255)
PEEK 49255 READ GAME PADDLE 3 (0-255)
PEEK 49408 READ SLOT 1
PEEK 49664 READ SLOT 2
PEEK 49920 READ SLOT 3
PEEK 50176 READ SLOT 4
PEEK 50432 READ SLOT 5
PEEK 50688 READ SLOT 6 (162=DISK CONTROLLER CARD)
PEEK 50944 READ SLOT 7
PEEK 64899 INDICATES WHICH COMPUTER YOU'RE USING (223=APPLE II OR II+, 234=FRANKLIN ACE OR ?, 255=APPLE IIE)
POKE 33,33 SCRUNCH LISTING AND REMOVE SPACES IN QUOTE STATEMENTS
POKE 36,X USE AS PRINTER TAB (X=TAB - 1)
POKE 50,128 MAKES ALL OUTPUT TO THE SCREEN INVISIBLE
POKE 50,RANDOM SCRAMBLES OUTPUT TO SCREEN
POKE 51,0 DEFEATS "NOT DIRECT COMMAND", SOMETIMES DOESN'T WORK
POKE 82,128 MAKE CASSETTE PROGRAM AUTO-RUN WHEN LOADED
POKE 214,255 SETS RUN FLAG IN FP & ANY KEY STROKES WILL RUN DISK PROGRAM
POKE 216,0 CANCEL ONERR FLAG
POKE 1010,3 SETS THE RESET VECTOR TO INITIATE
POKE 1011,150 A COLD START (BOOT)
POKE 1010,102 MAKE
POKE 1011,213 RESET
POKE 1012,112 RUN
POKE 1014,165 SETS THE AMPERSAND (&) VECTOR
POKE 1015,214 TO LIST YOUR PROGRAM
POKE 1014,110 SETS THE AMPERSAND (&) VECTOR
POKE 1015,165 TO CATALOG A DISK
POKE 1912+SLOT,1 ON APPLE PARALLEL CARD (WITH P1-02 PROM) WILL ENABLE L/F'S
POKE 1912+SLOT,0 ON APPLE PARALLEL CARD (WITH P1-02 PROM) WILL DISABLE L/F'S
POKE 2049,1 THIS WILL CAUSE THE FIRST LINE OF PROGRAM TO LIST REPEATEDLY
POKE 40514,20 ALLOWS TEXT FILE GREETING PROGRAM
POKE 40514,52 ALLOWS BINARY FILE GREETING PROGRAM
POKE 40993,24 THIS ALLOWS
POKE 40994,234 DISK COMMANDS IN
POKE 40995,234 THE DIRECT MODE
POKE 42319,96 DISABLES THE INIT COMMAND
POKE 42768,234 CANCEL ALL
POKE 42769,234 DOS ERROR
POKE 42770,234 MESSAGES
POKE 43624,X SELECTS DISK DRIVE WITHOUT EXECUTING A COMMAND (48K SYSTEM)
POKE 43699,0 TURNS AN EXEC FILE OFF BUT LEAVES IT OPEN UNTIL A FP, CLOSE
POKE 43699,1 TURNS AN EXEC FILE BACK ON. INIT, OR MAXFILES IS ISSUED
POKE 44452,24 ALLOWS 20 FILE NAMES (2 EXTRA)
POKE 44605,23 BEFORE CATALOG PAUSE
POKE 44505,234 REVEALS DELETED FILE
POKE 44506,234 NAMES IN CATALOG
POKE 44513,67 CATALOG WILL RETURN ONLY LOCKED FILES
POKE 44513,2 RETURN CATALOG TO NORMAL
POKE 44578,234 CANCEL CARRIAGE
POKE 44579,234 RETURNS AFTER CATALOG
POKE 44580,234 FILE NAMES
POKE 44596,234 CANCEL
POKE 44597,234 CATALOG-STOP
POKE 44598,234 WHEN SCREEN IS FULL
POKE 44599,234 STOP CATALOG AT EACH FILE
POKE 44600,234 NAME AND WAIT FOR A KEYPRESS
POKE 46922,96 THIS ALLOWS DISK
POKE 46923,234 INITIALIZATION
POKE 46924,234 WITHOUT PUTTING
POKE 44723,4 DOS ON THE DISK
POKE 49107,234 PREVENT LANGUAGE
POKE 49108,234 CARD FROM LOADING
POKE 49109,234 DURING RE-BOOT
POKE 49168,0 CLEAR KEYBOARD
POKE 49232,0 DISPLAY GRAPHICS
POKE 49233,0 DISPLAY TEXT
POKE 49234,0 DISPLAY FULL GRAPHICS
POKE 49235,0 DISPLAY TEXT/GRAPHICS
POKE 49236,0 DISPLAY GRAPHICS PAGE 1
POKE 49237,0 DISPLAY GRAPHICS PAGE 2
POKE 49238,0 DISPLAY LORES
POKE 49239,0 DISPLAY HIRES
------------------------------------------------------------------------------
48K MEMORY MAP

DECIMAL        HEX            USAGE
------------------------------------------------------------------------------
0-255          $0-$FF         ZERO-PAGE SYSTEM STORAGE
256-511        $100-$1FF      SYSTEM STACK
512-767        $200-$2FF      KEYBOARD CHARACTER BUFFER
768-975        $300-$3CF      OFTEN AVAILABLE AS FREE SPACE FOR USER PROGRAMS
976-1023       $3D0-$3FF      SYSTEM VECTORS
1024-2047      $400-$7FF      TEXT AND LO-RES GRAPHICS PAGE 1
2048-LOMEM     $800-LOMEM     PROGRAM STORAGE
2048-3071      $800-$BFF      TEXT AND LO-RES GRAPHICS PAGE 2 OR FREE SPACE
3072-8191      $C00-$1FFF     FREE SPACE UNLESS RAM APPLESOFT IS IN USE
8192-16383     $2000-$3FFF    HI-RES PAGE 1 OR FREE SPACE
16384-24575    $4000-$5FFF    HI-RES PAGE 2 OR FREE SPACE
24576-38399    $6000-$95FF    FREE SPACE AND STRING STORAGE
38400-49151    $9600-$BFFF    DOS
49152-53247    $C000-$CFFF    I/O HARDWARE (RESERVED)
53248-57343    $D000-$DFFF    APPLESOFT IN LANGUAGE CARD OR ROM
57344-63487    $E000-$F7FF    APPLESOFT OR INTEGER BASIC IN LANGUAGE CARD OR ROM
63488-65535    $F800-$FFFF    SYSTEM MONITOR

PEEK: TO EXAMINE ANY MEMORY LOCATION L, PRINT PEEK (L), WHERE L IS A DECIMAL NUMBER 0-65535. TO PEEK AT A TWO-BYTE NUMBER AT CONSECUTIVE LOCATIONS L AND L+1, PRINT PEEK (L) + PEEK (L+1) * 256

POKE: TO ASSIGN A VALUE X (0-255) TO LOCATION L; POKE L,X. TO POKE A TWO-BYTE NUMBER (NECESSARY IF X>255), POKE L,X-INT(X/256)*256, AND POKE L+1,INT(X/256).

CALL: TO EXECUTE A MACHINE LANGUAGE SUBROUTINE AT LOCATION L, CALL L.

JUST FOR FUN TRY THIS: POKE 33,90. THEN TRY LISTING YOUR PROGRAM. OR TRY: 0,99 OR POKE 50,250 OR POKE 50,127. USE RESET TO RETURN TO NORMAL.

FOR TRUE RANDOM NUMBER GENERATION TRY THIS: X=RND(PEEK(78)+PEEK(79)*256)

TO LOCATE THE STARTING ADDRESS OF THE LAST BLOADED FILE USE: PEEK(-21902)+PEEK(-21901)*256 (RESULT IS IN HEX)

TO DETERMINE THE LENGTH OF THE LAST BLOADED FILE USE: PEEK(-21920)+PEEK(-21919)*256 (RESULT IS IN HEX)

TO DETERMINE THE LINE NUMBER THAT CAUSED AN ERROR TO OCCUR, SET X TO: PEEK(218)+PEEK(219)*256
------------------------------------------------------------------------------
E-Mail Fredric L.
Rice / The Skeptic Tank A: Crikey, that's a blast from the past. I think it entered the monitor ROM (I was torn between this and Integer BASIC but I'm pretty certain it was the monitor). You could download an Apple II emulator and find out. A: As a side note, the reason why this is a negative number and not the proper CALL 65385 is because the very first form of BASIC for the Apple II was known as Integer BASIC. It only understood signed 16-bit Integer values from -32768 to 32767, and so it is impossible to directly address memory beyond 32767 in the normal positive value manner. If you tried actually typing POKE 49200,0 or CALL 65385 in Integer BASIC you will get a message like ">32767 ERR" When the replacement Microsoft Applesoft BASIC (yes, from them) with floating point numbers was introduced, they included support for the negative POKE values for some degree of backwards compatibility for the older Integer BASIC programs. Though this compatibility is limited, as Applesoft lacks other programming features of Integer like the MOD division remainder. Due to the strong influence of early Integer BASIC programming methods, there are many PEEK POKE and CALL commands that are generally only known by their hexadecimal and negative decimal values, but not by their positive decimal values. A: Call -151 enters the monitor, 3D0G brings you back to BASIC, and typing a slot # in the monitor followed by Ctrl-P will boot that device. Amazing what one remembers after 20 years!
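The negative-address trick in the answers above is plain two's-complement arithmetic: a negative Integer BASIC address N names the same location as N + 65536, and the two-byte PEEK/POKE formulas from the table just split a 16-bit value into a low and a high byte. A quick sketch of that arithmetic (Java here purely as a checking tool; the original is of course Applesoft BASIC):

```java
public class Main {
    // A negative Integer BASIC address is the signed 16-bit view of
    // the real location; add 65536 to recover the unsigned address.
    static int toUnsigned(int signedAddr) {
        return signedAddr < 0 ? signedAddr + 65536 : signedAddr;
    }

    // Low/high bytes, as in POKE L,X-INT(X/256)*256 and POKE L+1,INT(X/256).
    static int lowByte(int x)  { return x % 256; }
    static int highByte(int x) { return x / 256; }

    // Recombine, as in PEEK(L) + PEEK(L+1)*256.
    static int combine(int lo, int hi) { return lo + hi * 256; }

    public static void main(String[] args) {
        System.out.println(toUnsigned(-151));   // CALL -151 -> 65385, the monitor entry
        System.out.println(toUnsigned(-21902)); // -> 43634, the BLOAD start-address location
        int x = 43634;                          // round-trip a two-byte value
        System.out.println(combine(lowByte(x), highByte(x))); // 43634
    }
}
```

This is why CALL -151 and CALL 65385 name the same routine, and why PEEK(-21902) reads the same byte as PEEK(43634) in the table above.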
{ "language": "en", "url": "https://stackoverflow.com/questions/143374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Servlet constructor and init() method Why do we need an init() method in a servlet? Can't we use the constructor for initialization? A: Because Servlet is an interface, not an abstract class. Constructor arguments cannot be specified on an interface, so the ServletContext needs to be specified on a normal method signature. This allows the application server to know how to initialize any Servlet implementation properly. Another solution would have been to require, but not enforce at compile time, a constructor taking ServletContext. The application server would then call the constructor via reflection. However, the designers of the Servlet specification did not choose this path.
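To make the distinction concrete, here is a minimal mock of the container-side flow in plain Java. There is no servlet-api dependency here; MiniServlet, HelloServlet, and deploy are made-up stand-ins for the real javax.servlet types, shown only to illustrate why construction and initialization are separate steps:

```java
import java.util.Map;

// Stand-in for javax.servlet.Servlet: an interface can declare an
// init() lifecycle method, but it cannot declare a constructor.
interface MiniServlet {
    void init(Map<String, String> config);
    String service(String request);
}

class HelloServlet implements MiniServlet {
    private String greeting;

    // The container can only rely on a no-argument constructor,
    // so configuration must arrive through init(), not here.
    @Override
    public void init(Map<String, String> config) {
        greeting = config.getOrDefault("greeting", "Hello");
    }

    @Override
    public String service(String request) {
        return greeting + ", " + request;
    }
}

public class Main {
    // What a container does for any implementation it has never seen:
    // construct reflectively, then hand over context via init().
    static MiniServlet deploy(Class<? extends MiniServlet> type,
                              Map<String, String> config) throws Exception {
        MiniServlet servlet = type.getDeclaredConstructor().newInstance();
        servlet.init(config); // possible because init() is part of the interface
        return servlet;
    }

    public static void main(String[] args) throws Exception {
        MiniServlet servlet = deploy(HelloServlet.class, Map.of("greeting", "Hi"));
        System.out.println(servlet.service("world")); // Hi, world
    }
}
```

The real API differs in detail (init takes a ServletConfig, and GenericServlet adds a convenience no-arg init() overload), but the shape is the same: the container owns construction, and the interface's lifecycle method is the only portable way to deliver configuration.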
{ "language": "en", "url": "https://stackoverflow.com/questions/143386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Java: Text to Speech engines overview I'm now in search of a Java Text to Speech (TTS) framework. During my investigation I've found several JSAPI1.0-(partially)-compatible frameworks listed on the JSAPI Implementations page, as well as a pair of Java TTS frameworks which do not appear to follow the JSAPI spec (Mary, Say-It-Now). I've also noted that currently no reference implementation exists for JSAPI. Brief tests I've done with FreeTTS (the first one listed on the JSAPI impls page) show that it is far from reading simple and obvious words (examples: ABC, blackboard). Other tests are currently in progress. And here goes the question (6, actually):

* Which of the Java-based TTS frameworks have you used?
* Which ones, in your opinion, are capable of reading the largest wordbase?
* What about their voice quality?
* What about their performance?
* Which non-Java frameworks with Java bindings are there on the scene?
* Which of them would you recommend?

Thank you in advance for your comments and suggestions.

A: Actually, there is not a big choice:

* Festival, the oldest. Written in C++ but has bindings to Java.
* eSpeak, quick and simple, used by Google Translate
* mbrola

Pure Java:

* FreeTTS, whose code was ported from Festival, then open-sourced; development has since stopped.
* MaryTTS - more powerful and looks production-ready.

Also there are other proprietary programs like:

* Acapella
* Nuance Vocalizer

If your software is Windows-only, you can use the Microsoft Speech API.

A: I've used Mary before and I was very impressed with the quality of the voices. Unfortunately, I haven't used any of the other ones.

A: I've used AT&T Natural Voices, which provides JSAPI and MS SAPI hooks. It provides excellent quality voices, a good "general" speech dictionary, many controls over pronunciation, and multiple languages. It's a little pricey, but works very well. I used it to read important sensor telemetry to drivers in a mobile sensor application. We had no complaints about the voice quality.
It had about 75% out-of-the-box accuracy with scientific terms and much higher (maybe 90%+) with normal dialogue. We got it up to about 99+% accuracy by using markups (most errors were on scientific terms with unusual phoneme combinations). It was a bit hard on the processor (we were running on a Pentium III-equivalent machine and it was pushing 50%-75% peak CPU). This uses a native speech engine (Windows, Linux, and Mac compatible) with a Java interface. There's a huge variety of voices and languages...

A: I've actually had pretty good luck with FreeTTS

A: Google Translate has a secret TTS API: https://translate.google.com/translate_tts?ie=utf-8&tl=en&q=Hello%20World

A: Thanks a lot everyone, the trick is in the FreeTTS source. Briefly: if run as java -jar freetts.jar some-more-args-here, it spells fewer words than when executed via bin/Server.jar and bin/Client.jar.

A: I used FreeTTS but had a major problem getting the MBrola voices to run on my MacBook Pro. I did get MBrola voices to run on Windows (painfully) and Linux. I've had no luck loading any other voice packages on FreeTTS, which is a shame because the supplied voices are horrible IMO. Outside of that I had a little success with Cloudgarden as well, but that only runs on Windows AFAIK. I'd be interested to hear others' successes/failures with voice engines, as this type of work is particularly challenging. I'm also toying a bit with Sphinx4. I just pulled down JVXML (which appears to be based on Sphinx4) last night but could not get it to run for some strange reason.

A: I've contributed to Mary. I feel it has potential if someone smarter than me separated the HMM voices out of the core (those voices don't need large data sets and sound OK). I'm also trying to add an event system to FreeTTS to send events when it says a word. I've had success, but it is broken on Linux now (probably because of a timer bug).
A: I found MaryTTS comfortable to work with. It is multilanguage and its voice is clear and easy to understand. To convert speech to text, the better option is sphinx4-5prealpha. I give it a thumbs up because its recognizer and grammar are adjustable, flexible, and modifiable.
{ "language": "en", "url": "https://stackoverflow.com/questions/143390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Does A#.Net work in Visual Studio 2008? As per the title. For those who don't know, it's basically Ada for .NET.

A: According to [wikipedia](http://en.wikipedia.org/wiki/A_Sharp_(.NET)), A#.Net has been folded into "GNAT for .Net" for future releases:

AdaCore has taken over this development, and announced "GNAT for .NET", which is a fully supported .NET product with all of the features of A# and more.

A: The 2010 .NET version is supposed to work with MS Visual Studio 2008.
{ "language": "en", "url": "https://stackoverflow.com/questions/143396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How does the portability of PureMVC benefit the application developer? One of the stated goals of the PureMVC framework is to avoid platform dependencies in order to be portable. Considering that because of language and API differences application code will always be heavily dependent on the platform, and that avoiding platform dependencies makes the framework reinvent the wheel and/or only provide a least-common-denominator feature set, in what way does the portability of the framework benefit me as an application developer? A: I've worked with PureMVC. They're trying to implement their stuff in quite a lot of languages. You may be right about the least common denominator, but overall, it's not a bad framework, and I've seen a really nice AS3 app in PureMVC. I don't think they're talking about portability in terms of porting actual code. The idea there is more that you're using a generalized MVC architecture, which you could apply to other projects and other languages. They're trying to say that if you become familiar with the PureMVC pattern, you could potentially come into a new PureMVC codebase, even in another language, and you would already know the lay of the land. You might also say that developers who develop good PureMVC skills are likely to develop good habits which will translate as they go from language to language. But then again, maybe not, for the reasons you mentioned. A: We've been using PureMVC on two projects now and in my opinion the attempted language-independence is quite a burden. The promise of jumping straight into a project because the framework is already known does not seem relevant to me if the languages are not already pretty similar (C# to Java would make sense, AS3 to PHP not) -- I agree that it is useful to have known ways of solving things, but for that the 'plain' patterns are good enough.
However, I also don't really agree with the usage of the various patterns the project uses, so our choice to not use it on the next project might be related to both issues, and not just the attempt at language/platform independence. A: PureMVC's portability will help you when you migrate to or reimplement in another language. I can't count the number of platforms and languages I've written code for that are now extinct and for which, even if I still had the source code it would be mostly worthless and have to be rewritten from the ground up today, since the code was usually 100% platform specific. But all application code need not be heavily dependent upon the platform. View components and services (the boundaries of your application) will necessarily be, but your application logic which is sandwiched between the boundaries need not be. The scope of PureMVC is really quite narrow; merely to help you split your code into the three tiers prescribed by the MVC meta-pattern. There is no reason why this code has to be tied deeply to your platform in order to be optimal. When it comes time to migrate, you'll appreciate that the framework actors and their roles, responsibilities and collaborations remain the same. This leaves you to deal with syntactic differences of the language, recreating the view components and services. At least you won't have to completely re-architect. And for the case of reimplementing in a different language, imagine you're trying to capture a significant part of the mobile market with your app. The market is so fractured, you'll have to implement the same program on 2 or more of Windows Mobile, iPhone, Flash, and Java. Sure you'll probably have separate teams in charge of the apps, but why have a totally different architecture? With PureMVC, you could have a single architecture for all versions of your application. -=Cliff> A: PureMVC is the only real option for Flash Platform developers who choose not to use the Flex Framework.
For certain projects the size cost of Flex is too expensive (it happens!). I like to prototype in Flex and then rip it out and replace my views with custom components when the application is near completion. PureMVC makes this really easy to do with its Mediator pattern. I'm not sure there is any other framework that would allow me this workflow. Personally, I think PureMVC went too far with its portability goals: I enjoy the fact that it works with Flash AND Flex (for the reasons mentioned above), but feel that it should have stopped there, and made use of the native Flash Player event architecture.

A: Are there examples of people using PureMVC to build and port applications across multiple platforms? My company is building a Flex application that we may need to port to other platforms:

* Silverlight (likely)
* Mobile (maybe)
* Desktop (maybe -- not just AIR!)
* TV sets (maybe eventually)

I am considering PureMVC as a framework if it can ease porting and maintenance. I am curious to know if other people have ported a PureMVC app to a different platform and what their experience was with porting and then having development proceed in parallel for the app on multiple platforms.

Cheers, Karthik

A: PureMVC does not rely on a platform for its internal workings (Flash Events etc). So, while it does not make porting any easier per se, it can assist simply by showing us its friendly and familiar face wherever we may choose to go ;-)
{ "language": "en", "url": "https://stackoverflow.com/questions/143403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: C# Interfaces. Implicit implementation versus Explicit implementation What are the differences in implementing interfaces implicitly and explicitly in C#? When should you use implicit and when should you use explicit? Are there any pros and/or cons to one or the other? Microsoft's official guidelines (from the first edition of Framework Design Guidelines) state that using explicit implementations is not recommended, since it gives the code unexpected behaviour. I think this guideline was very valid in pre-IoC times, when you don't pass things around as interfaces. Could anyone touch on that aspect as well? A: If you implement explicitly, you will only be able to reference the interface members through a reference that is of the type of the interface. A reference that is the type of the implementing class will not expose those interface members. If your implementing class is not public, except for the method used to create the class (which could be a factory or IoC container), and except for the interface methods (of course), then I don't see any advantage to explicitly implementing interfaces. Otherwise, explicitly implementing interfaces makes sure that references to your concrete implementing class are not used, allowing you to change that implementation at a later time. "Makes sure", I suppose, is the "advantage". A well-factored implementation can accomplish this without explicit implementation. The disadvantage, in my opinion, is that you will find yourself casting types to/from the interface in the implementation code that does have access to non-public members. Like many things, the advantage is the disadvantage (and vice-versa). Explicitly implementing interfaces will ensure that your concrete class implementation code is not exposed. A: In addition to excellent answers already provided, there are some cases where explicit implementation is REQUIRED for the compiler to be able to figure out what is required.
Take a look at IEnumerable<T> as a prime example that will likely come up fairly often. Here's an example:

using System.Collections;
using System.Collections.Generic;

public class StringList : IEnumerable<string>
{
    private string[] _list = new string[] { "foo", "bar", "baz" };

    // ...

    #region IEnumerable<string> Members
    public IEnumerator<string> GetEnumerator()
    {
        foreach (string s in _list)
        {
            yield return s;
        }
    }
    #endregion

    #region IEnumerable Members
    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }
    #endregion
}

Here, IEnumerable<string> implements IEnumerable, hence we need to too. But hang on, the generic and the normal version both implement functions with the same method signature (C# ignores return type for this). This is completely legal and fine. How does the compiler resolve which to use? It forces you to only have, at most, one implicit definition, then it can resolve whatever it needs to. ie.

StringList sl = new StringList();

// uses the implicit definition.
IEnumerator<string> enumerableString = sl.GetEnumerator();

// same as above, only a little more explicit.
IEnumerator<string> enumerableString2 = ((IEnumerable<string>)sl).GetEnumerator();

// returns the same as above, but via the explicit definition
IEnumerator enumerableStuff = ((IEnumerable)sl).GetEnumerator();

PS: The little piece of indirection in the explicit definition for IEnumerable works because inside the function the compiler knows that the actual type of the variable is a StringList, and that's how it resolves the function call. Nifty little fact for implementing some of the layers of abstraction some of the .NET core interfaces seem to have accumulated.

A: An implicit interface implementation is where you have a method with the same signature as the interface. An explicit interface implementation is where you explicitly declare which interface the method belongs to.
interface I1
{
    void implicitExample();
}

interface I2
{
    void explicitExample();
}

class C : I1, I2
{
    // implicit implementation of I1.implicitExample (must be public)
    public void implicitExample()
    {
        Console.WriteLine("I1.implicitExample()");
    }

    // explicit implementation of I2.explicitExample
    void I2.explicitExample()
    {
        Console.WriteLine("I2.explicitExample()");
    }
}

MSDN: implicit and explicit interface implementations

A: Every class member that implements an interface exports a declaration which is semantically similar to the way VB.NET interface declarations are written, e.g.

Public Overridable Function Foo() As Integer Implements IFoo.Foo

Although the name of the class member will often match that of the interface member, and the class member will often be public, neither of those things is required. One may also declare:

Protected Overridable Function IFoo_Foo() As Integer Implements IFoo.Foo

In which case the class and its derivatives would be allowed to access a class member using the name IFoo_Foo, but the outside world would only be able to access that particular member by casting to IFoo. Such an approach is often good in cases where an interface method will have specified behavior on all implementations, but useful behavior on only some [e.g. the specified behavior for a read-only collection's IList<T>.Add method is to throw NotSupportedException]. Unfortunately, the only proper way to implement the interface in C# is:

int IFoo.Foo() { return IFoo_Foo(); }
protected virtual int IFoo_Foo() { ... real code goes here ... }

Not as nice.

A: Implicit is when you define your interface via a member on your class. Explicit is when you define methods within your class on the interface.
I know that sounds confusing but here is what I mean: IList.CopyTo would be implicitly implemented as:

public void CopyTo(Array array, int index)
{
    throw new NotImplementedException();
}

and explicitly as:

void ICollection.CopyTo(Array array, int index)
{
    throw new NotImplementedException();
}

The difference is that implicit implementation allows you to access the interface through the class you created by casting the interface as that class and as the interface itself. Explicit implementation allows you to access the interface only by casting it as the interface itself.

MyClass myClass = new MyClass(); // Declared as concrete class
myClass.CopyTo               // invalid with explicit
((IList)myClass).CopyTo      // valid with explicit.

I use explicit primarily to keep the implementation clean, or when I need two implementations. Regardless, I rarely use it. I am sure there are more reasons to use/not use explicit that others will post. See the next post in this thread for excellent reasoning behind each.

A: The previous answers explain why implementing an interface explicitly in C# may be preferable (for mostly formal reasons). However, there is one situation where explicit implementation is mandatory: in order to avoid leaking the encapsulation when the interface is non-public, but the implementing class is public.

// Given:
internal interface I
{
    void M();
}

// Then explicit implementation correctly observes encapsulation of I:
// Both ((I)CExplicit).M and CExplicit.M are accessible only internally.
public class CExplicit : I
{
    void I.M() { }
}

// However, implicit implementation breaks encapsulation of I, because
// ((I)CImplicit).M is only accessible internally, while CImplicit.M is accessible publicly.
public class CImplicit : I
{
    public void M() { }
}

The above leakage is unavoidable because, according to the C# specification, "All interface members implicitly have public access."
As a consequence, implicit implementations must also give public access, even if the interface itself is e.g. internal. Implicit interface implementation in C# is a great convenience. In practice, many programmers use it all the time/everywhere without further consideration. This leads to messy type surfaces at best and leaked encapsulation at worst. Other languages, such as F#, don't even allow it.

A: Reason #1 I tend to use explicit interface implementation when I want to discourage "programming to an implementation" (Design Principles from Design Patterns). For example, in an MVP-based web application:

public interface INavigator
{
    void Redirect(string url);
}

public sealed class StandardNavigator : INavigator
{
    void INavigator.Redirect(string url)
    {
        Response.Redirect(url);
    }
}

Now another class (such as a presenter) is less likely to depend on the StandardNavigator implementation and more likely to depend on the INavigator interface (since the implementation would need to be cast to an interface to make use of the Redirect method).

Reason #2 Another reason I might go with an explicit interface implementation would be to keep a class's "default" interface cleaner. For example, if I were developing an ASP.NET server control, I might want two interfaces:

* The class's primary interface, which is used by web page developers; and
* A "hidden" interface used by the presenter that I develop to handle the control's logic

A simple example follows. It's a combo box control that lists customers. In this example, the web page developer isn't interested in populating the list; instead, they just want to be able to select a customer by GUID or to obtain the selected customer's GUID. A presenter would populate the box on the first page load, and this presenter is encapsulated by the control.
public sealed class CustomerComboBox : ComboBox, ICustomerComboBox
{
    private readonly CustomerComboBoxPresenter presenter;

    public CustomerComboBox()
    {
        presenter = new CustomerComboBoxPresenter(this);
    }

    protected override void OnLoad()
    {
        if (!Page.IsPostBack) presenter.HandleFirstLoad();
    }

    // Primary interface used by web page developers
    public Guid ClientId
    {
        get { return new Guid(SelectedItem.Value); }
        set { SelectedItem.Value = value.ToString(); }
    }

    // "Hidden" interface used by presenter
    IEnumerable<CustomerDto> ICustomerComboBox.DataSource { set; }
}

The presenter populates the data source, and the web page developer never needs to be aware of its existence.

But It's Not a Silver Cannonball I wouldn't recommend always employing explicit interface implementations. Those are just two examples where they might be helpful.

A: To quote Jeffrey Richter from CLR via C# (EIMI means Explicit Interface Method Implementation):

It is critically important for you to understand some ramifications that exist when using EIMIs. And because of these ramifications, you should try to avoid EIMIs as much as possible. Fortunately, generic interfaces help you avoid EIMIs quite a bit. But there may still be times when you will need to use them (such as implementing two interface methods with the same name and signature). Here are the big problems with EIMIs:

* There is no documentation explaining how a type specifically implements an EIMI method, and there is no Microsoft Visual Studio IntelliSense support.
* Value type instances are boxed when cast to an interface.
* An EIMI cannot be called by a derived type.

If you use an interface reference, ANY virtual chain can be explicitly replaced with an EIMI on any derived class, and when an object of such a type is cast to the interface, your virtual chain is ignored and the explicit implementation is called. That's anything but polymorphism.
EIMIs can also be used to hide non-strongly typed interface members from basic Framework Interfaces' implementations such as IEnumerable<T>, so your class doesn't expose a non-strongly typed method directly, but is syntactically correct.

A: Implicit definition would be to just add the methods/properties, etc. demanded by the interface directly to the class as public methods. Explicit definition forces the members to be exposed only when you are working with the interface directly, and not the underlying implementation. This is preferred in most cases.

* By working directly with the interface, you are not acknowledging, and coupling your code to, the underlying implementation.
* In the event that you already have, say, a public property Name in your code and you want to implement an interface that also has a Name property, doing it explicitly will keep the two separate. Even if they were doing the same thing I'd still delegate the explicit call to the Name property. You never know, you may want to change how Name works for the normal class and how Name, the interface property, works later on.
* If you implement an interface implicitly then your class now exposes new behaviours that might only be relevant to a client of the interface, and it means you aren't keeping your classes succinct enough (my opinion).

A: One important use of explicit interface implementation is when you need to implement interfaces with mixed visibility. The problem and solution are well explained in the article C# Internal Interface. For example, if you want to protect against leakage of objects between application layers, this technique allows you to specify different visibility of members that could cause the leakage.

A: In addition to the other reasons already stated, this is the situation in which a class is implementing two different interfaces that have a property/method with the same name and signature.
/// <summary>
/// This is a Book
/// </summary>
interface IBook
{
    string Title { get; }
    string ISBN { get; }
}

/// <summary>
/// This is a Person
/// </summary>
interface IPerson
{
    string Title { get; }
    string Forename { get; }
    string Surname { get; }
}

/// <summary>
/// This is some freaky book-person.
/// </summary>
class Class1 : IBook, IPerson
{
    /// <summary>
    /// This method is shared by both Book and Person
    /// </summary>
    public string Title
    {
        get
        {
            string personTitle = "Mr";
            string bookTitle = "The Hitchhikers Guide to the Galaxy";
            // What do we do here?
            return null;
        }
    }

    #region IPerson Members
    public string Forename
    {
        get { return "Lee"; }
    }

    public string Surname
    {
        get { return "Oades"; }
    }
    #endregion

    #region IBook Members
    public string ISBN
    {
        get { return "1-904048-46-3"; }
    }
    #endregion
}

This code compiles and runs OK, but the Title property is shared. Clearly, we'd want the value of Title returned to depend on whether we were treating Class1 as a Book or a Person. This is when we can use the explicit interface.

string IBook.Title
{
    get { return "The Hitchhikers Guide to the Galaxy"; }
}

string IPerson.Title
{
    get { return "Mr"; }
}

public string Title
{
    get { return "Still shared"; }
}

Notice that the explicit interface definitions are inferred to be Public - and hence you can't declare them to be public (or otherwise) explicitly. Note also that you can still have a "shared" version (as shown above), but whilst this is possible, the existence of such a property is questionable. Perhaps it could be used as a default implementation of Title - so that existing code would not have to be modified to cast Class1 to IBook or IPerson. If you do not define the "shared" (implicit) Title, consumers of Class1 must explicitly cast instances of Class1 to IBook or IPerson first - otherwise the code will not compile.

A: I use explicit interface implementation most of the time. Here are the main reasons.
Refactoring is safer

When changing an interface, it's better if the compiler can check it. This is harder with implicit implementations. Two common cases come to mind:

* Adding a function to an interface, where an existing class that implements this interface already happens to have a method with the same signature as the new one. This can lead to unexpected behavior, and has bitten me hard several times. It's difficult to "see" when debugging because that function is likely not located with the other interface methods in the file (the self-documenting issue mentioned below).
* Removing a function from an interface. Implicitly implemented methods will be suddenly dead code, but explicitly implemented methods will get caught by compile error. Even if the dead code is good to keep around, I want to be forced to review it and promote it.

It's unfortunate that C# doesn't have a keyword that forces us to mark a method as an implicit implementation, so the compiler could do the extra checks. Virtual methods don't have either of the above problems due to required use of 'override' and 'new'.

Note: for fixed or rarely-changing interfaces (typically from vendor API's), this is not a problem. For my own interfaces, though, I can't predict when/how they will change.

It's self-documenting

If I see 'public bool Execute()' in a class, it's going to take extra work to figure out that it's part of an interface. Somebody will probably have to comment it saying so, or put it in a group of other interface implementations, all under a region or grouping comment saying "implementation of ITask". Of course, that only works if the group header isn't offscreen.

Whereas: 'bool ITask.Execute()' is clear and unambiguous.

Clear separation of interface implementation

I think of interfaces as being more 'public' than public methods because they are crafted to expose just a bit of the surface area of the concrete type. They reduce the type to a capability, a behavior, a set of traits, etc.
And in the implementation, I think it's useful to keep this separation. As I am looking through a class's code, when I come across explicit interface implementations, my brain shifts into "code contract" mode. Often these implementations simply forward to other methods, but sometimes they will do extra state/param checking, conversion of incoming parameters to better match internal requirements, or even translation for versioning purposes (i.e. multiple generations of interfaces all punting down to common implementations).

(I realize that publics are also code contracts, but interfaces are much stronger, especially in an interface-driven codebase where direct use of concrete types is usually a sign of internal-only code.)

Related: Reason 2 above by Jon.

And so on

Plus the advantages already mentioned in other answers here:

* When required, as per disambiguation or needing an internal interface
* Discourages "programming to an implementation" (Reason 1 by Jon)

Problems

It's not all fun and happiness. There are some cases where I stick with implicits:

* Value types, because that will require boxing and lower perf. This isn't a strict rule, and depends on the interface and how it's intended to be used. IComparable? Implicit. IFormattable? Probably explicit.
* Trivial system interfaces that have methods that are frequently called directly (like IDisposable.Dispose).

Also, it can be a pain to do the casting when you do in fact have the concrete type and want to call an explicit interface method. I deal with this in one of two ways:

* Add publics and have the interface methods forward to them for the implementation. Typically happens with simpler interfaces when working internally.
* (My preferred method) Add a public IMyInterface I { get { return this; } } (which should get inlined) and call foo.I.InterfaceMethod(). If multiple interfaces need this ability, expand the name beyond I (in my experience it's rare that I have this need).
A: I've found myself using explicit implementations more often recently, for the following practical reasons: * *Always using explicit implementations from the start prevents naming collisions, in which explicit implementation would be required anyway *Consumers are "forced" to use the interface instead of the implementation (aka not "programming to an implementation"), which they should/must do anyway when you're using DI *No "zombie" members in the implementations - removing any member from the interface declaration will result in compiler errors if it is not removed from the implementation too *Default values for optional parameters, as well as constraints on generic arguments, are automatically adopted - no need to write them twice and keep them in sync
{ "language": "en", "url": "https://stackoverflow.com/questions/143405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "680" }
Q: How can I modify a saved Microsoft Access 2007 or 2010 Import Specification? Does anyone know how to modify an existing import specification in Microsoft Access 2007 or 2010? In older versions there used to be an Advanced button presented during the import wizard that allowed you to select and edit an existing specification. I no longer see this feature but hope that it still exists and has just been moved somewhere else. A: Tim Lentine's answer seems to be true even in the full release. There is just one other thing I would like to mention. If you complete your import without going into "Advanced..." and saving the spec, but you do save the import for reuse at the end of the wizard (new feature AFAIK), you will not be able to go back and edit that spec. It is built into the "Saved Import". This may be what Knox was referring to. You can, however, do a partial workaround: * *Import a new file (or the same one all over again) but, *This time choose to append, instead of making a new table *Click OK. *Go into "Advanced...". All your column headings and data types will be there. *Now you can make the changes you need and save the spec inside that dialog. Then cancel out of that import (that is not what you wanted anyway, right?) *You can then use that spec for any further imports. It's not a full solution, but it saves some of the work. A: Below are three functions you can use to alter and use the MS Access 2010 Import Specification. The third sub changes the name of an existing import specification. The second sub allows you to change any XML text in the import spec. This is useful if you need to change column names, data types, add columns, change the import file location, etc. In essence, anything you want to modify for an existing spec. 
The first Sub is a routine that allows you to call an existing import spec, modify it for a specific file you are attempting to import, import that file, and then delete the modified spec, keeping the import spec "template" unaltered and intact. Enjoy.

Public Sub MyExcelTransfer(myTempTable As String, myPath As String)
On Error GoTo ERR_Handler:
    Dim mySpec As ImportExportSpecification
    Dim myNewSpec As ImportExportSpecification
    Dim x As Integer

    For x = 0 To CurrentProject.ImportExportSpecifications.Count - 1
        If CurrentProject.ImportExportSpecifications.Item(x).Name = "TemporaryImport" Then
            CurrentProject.ImportExportSpecifications.Item("TemporaryImport").Delete
            x = CurrentProject.ImportExportSpecifications.Count
        End If
    Next x

    Set mySpec = CurrentProject.ImportExportSpecifications.Item(myTempTable)
    CurrentProject.ImportExportSpecifications.Add "TemporaryImport", mySpec.XML
    Set myNewSpec = CurrentProject.ImportExportSpecifications.Item("TemporaryImport")
    myNewSpec.XML = Replace(myNewSpec.XML, "\\MyComputer\ChangeThis", myPath)
    myNewSpec.Execute
    myNewSpec.Delete
    Set mySpec = Nothing
    Set myNewSpec = Nothing

exit_ErrHandler:
    For x = 0 To CurrentProject.ImportExportSpecifications.Count - 1
        If CurrentProject.ImportExportSpecifications.Item(x).Name = "TemporaryImport" Then
            CurrentProject.ImportExportSpecifications.Item("TemporaryImport").Delete
            x = CurrentProject.ImportExportSpecifications.Count
        End If
    Next x
    Exit Sub

ERR_Handler:
    MsgBox Err.Description
    Resume exit_ErrHandler
End Sub

Public Sub fixImportSpecs(myTable As String, strFind As String, strRepl As String)
    Dim mySpec As ImportExportSpecification
    Set mySpec = CurrentProject.ImportExportSpecifications.Item(myTable)
    mySpec.XML = Replace(mySpec.XML, strFind, strRepl)
    Set mySpec = Nothing
End Sub

Public Sub MyExcelChangeName(OldName As String, NewName As String)
    Dim mySpec As ImportExportSpecification
    Dim myNewSpec As ImportExportSpecification
    Set mySpec = CurrentProject.ImportExportSpecifications.Item(OldName)
    CurrentProject.ImportExportSpecifications.Add NewName, mySpec.XML
    mySpec.Delete
    Set mySpec = Nothing
    Set myNewSpec = Nothing
End Sub

A: I am able to use this feature on my machine using MS Access 2007. * *On the Ribbon, select External Data *Select the "Text File" option *This displays the Get External Data Wizard *Specify the location of the file you wish to import *Click OK. This displays the "Import Text Wizard" *On the bottom of this dialog screen is the Advanced button you referenced *Clicking on this button should display the Import Specification screen and allow you to select and modify an existing import spec. For what it's worth, I'm using Access 2007 SP1 A: When I want to examine or change an import / export specification I query the tables in MS Access where the specification is defined.

SELECT MSysIMEXSpecs.SpecName, MSysIMEXColumns.*
FROM MSysIMEXSpecs
LEFT JOIN MSysIMEXColumns
    ON MSysIMEXSpecs.SpecID = MSysIMEXColumns.SpecID
WHERE SpecName = 'MySpecName'
ORDER BY MSysIMEXSpecs.SpecID, MSysIMEXColumns.Start;

You can also use an UPDATE or INSERT statement to alter existing columns or insert and append new columns to an existing specification. You can create entirely new specifications using this methodology. A: Another great option is the free V-Tools addin for Microsoft Access. Among other helpful tools it has a form to edit and save the Import/Export specifications. Note: As of version 1.83, there is a bug in enumerating the code pages on Windows 10. (Apparently due to a missing/changed API function in Windows 10.) The tool still works great, you just need to comment out a few lines of code or step past it in the debug window. This has been a real life-saver for me in editing a complex import spec for our online orders. A: I don't believe there is a directly supported way. However, if you are desperate, then under navigation options, select to show system objects. Then in your table list, system tables will appear. 
Two tables are of interest here: MSysIMEXSpecs and MSysIMEXColumns. You'll be able to edit import and export information. Good luck! A: Why so complicated? Just check System Objects in Access-Options/Current Database/Navigation Options/Show System Objects. Open the table "MSysIMEXSpecs" and change according to your needs - it's easy to read... A: Tim Lentine's answer works IF you have your specs saved. Your question did not specify that; it only stated you had imported the data. His method would not save your specs that way. The way to save the spec of that current import is to re-open the import, hit "append", and that will allow you to use your current import settings that MS Access picked up. (This is useful if you want to keep the import specs from an Excel format you worked on prior to importing into MS ACCESS.) Once you're in the append option, use Tim's instructions, which is using the advanced option and "Save As." From there, simply click cancel, and you can now import any other similar data to various tables, etc. A: I have just discovered an apparent bug in the whole Saved Import/XML setup in Access. Also frustrated by the rigidity of the Saved Import system, I created forms and wrote code to pick apart the XML in which the Saved Import specs are stored, to the point that I could use this tool to actually create a Saved Import from scratch via coded examination of a source Excel workbook. What I've found out is that, while Access correctly imports a worksheet per modifications of default settings by the user (for example, it likes to take any column with a header name ending with "ID" and make it an indexed field in the resulting table, but you can cancel this during the import process), and while it also correctly creates XML in accordance with the user changes, if you then drop the table and use the Saved Import to re-import the worksheet, it ignores the XML import spec and reverts to using its own invented defaults, at least in the case of the "ID" columns. 
You can try this on your own: import an Excel worksheet with at least one column header name ending with "ID" ("OrderID", "User ID", or just plain "ID"). During the process, be sure to set "Indexed" to No for those columns. Execute the import and check "Save import steps" in the final dialog window. If you inspect the resulting table design, you will see there is no index on the field(s) in question. Then delete the table, find the saved import and execute it again. This time, those fields will be set as Indexed in the table design, even though the XML still says no index. I was pulling my hair out until I discovered what was going on, comparing the XML I built from scratch with examples created through the Access tool. A: I used Mike Hansen's solution, it is great. I modified his solution in one respect: instead of replacing parts of the string, I modified the XML attribute. Maybe it is too much of an effort when you can just modify the string, but anyway, here is my solution for that. This could easily be further modified to change the table etc. too, which is very nice imho. 
What was helpful for me was a helper sub to write the XML to a file so I could check the structure and content of it:

Sub writeStringToFile(strPath As String, strText As String)
    '#### writes a given string into a given filePath, overwriting a document if it already exists
    Dim objStream
    Set objStream = CreateObject("ADODB.Stream")
    objStream.Charset = "utf-8"
    objStream.Open
    objStream.WriteText strText
    objStream.SaveToFile strPath, 2
End Sub

The XML of an/my ImportExportSpecification for a table with 2 columns looks like this:

<?xml version="1.0"?>
<ImportExportSpecification Path="mypath\mydocument.xlsx" xmlns="urn:www.microsoft.com/office/access/imexspec">
    <ImportExcel FirstRowHasNames="true" AppendToTable="myTableName" Range="myExcelWorksheetName">
        <Columns PrimaryKey="{Auto}">
            <Column Name="Col1" FieldName="SomeFieldName" Indexed="NO" SkipColumn="false" DataType="Double"/>
            <Column Name="Col2" FieldName="SomeFieldName" Indexed="NO" SkipColumn="false" DataType="Text"/>
        </Columns>
    </ImportExcel>
</ImportExportSpecification>

Then I wrote a function to modify the path. 
I left out error handling here:

Function modifyDataSourcePath(strNewPath As String, strXMLSpec As String) As String
    '#### Changes the path-name of an import-export specification
    Dim xDoc As MSXML2.DOMDocument60
    Dim childNodes As IXMLDOMNodeList
    Dim nodeImExSpec As MSXML2.IXMLDOMNode
    Dim childNode As MSXML2.IXMLDOMNode
    Dim attributesImExSpec As IXMLDOMNamedNodeMap
    Dim attributeImExSpec As IXMLDOMAttribute

    Set xDoc = New MSXML2.DOMDocument60
    xDoc.async = False: xDoc.validateOnParse = False
    xDoc.LoadXML (strXMLSpec)

    Set childNodes = xDoc.childNodes
    For Each childNode In childNodes
        If childNode.nodeName = "ImportExportSpecification" Then
            Set nodeImExSpec = childNode
            Exit For
        End If
    Next childNode

    Set attributesImExSpec = nodeImExSpec.Attributes
    For Each attributeImExSpec In attributesImExSpec
        If attributeImExSpec.nodeName = "Path" Then
            attributeImExSpec.Value = strNewPath
            Exit For
        End If
    Next attributeImExSpec

    modifyDataSourcePath = xDoc.XML
End Function

I use this in Mike's code before the newSpec is executed, and instead of the replace statement. Also I write the XML string into an XML file in a location relative to the database, but that line is optional:

Set myNewSpec = CurrentProject.ImportExportSpecifications.Item("TemporaryImport")
myNewSpec.XML = modifyDataSourcePath(myPath, myNewSpec.XML)
Call writeStringToFile(Application.CurrentProject.Path & "\impExpSpec.xml", myNewSpec.XML)
myNewSpec.Execute
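For anyone who wants to prototype this XML edit outside of Access before porting it to VBA, here is a minimal Python sketch of the same idea - setting the Path attribute on the root ImportExportSpecification element. The spec string is condensed from the sample above; the helper name mirrors the VBA function but is otherwise my own:

```python
import xml.etree.ElementTree as ET

# Namespace used by Access import/export specs (taken from the sample XML above).
NS = "urn:www.microsoft.com/office/access/imexspec"
ET.register_namespace("", NS)  # keep the default namespace on re-serialization

SPEC_XML = (
    '<?xml version="1.0"?>'
    '<ImportExportSpecification Path="mypath\\mydocument.xlsx" '
    'xmlns="urn:www.microsoft.com/office/access/imexspec">'
    '<ImportExcel FirstRowHasNames="true" AppendToTable="myTableName" Range="myExcelWorksheetName">'
    '<Columns PrimaryKey="{Auto}">'
    '<Column Name="Col1" FieldName="SomeFieldName" Indexed="NO" SkipColumn="false" DataType="Double"/>'
    '</Columns></ImportExcel></ImportExportSpecification>'
)

def modify_data_source_path(new_path: str, spec_xml: str) -> str:
    """Set the Path attribute on the root ImportExportSpecification element."""
    root = ET.fromstring(spec_xml)
    root.set("Path", new_path)  # Path is an un-namespaced attribute on the root
    return ET.tostring(root, encoding="unicode")

new_xml = modify_data_source_path(r"\\server\share\orders.xlsx", SPEC_XML)
```

As with the MSXML version, everything else in the spec is left untouched; only the root's Path attribute changes.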
{ "language": "en", "url": "https://stackoverflow.com/questions/143420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: What's the least useful comment you've ever seen? We all know that commenting our code is an important part of coding style for making our code understandable to the next person who comes along, or even ourselves in 6 months or so. However, sometimes a comment just doesn't cut the mustard. I'm not talking about obvious jokes or vented frustration, I'm talking about comments that appear to be making an attempt at explanation, but do it so poorly they might as well not be there. Comments that are too short, are too cryptic, or are just plain wrong. As a cautionary tale, could you share something you've seen that was really just that bad, and if it's not obvious, show the code it was referring to and point out what's wrong with it? What should have gone in there instead? See also: * *When NOT to comment your code *How do you like your comments? (Best Practices) *What is the best comment in source code you have ever encountered? A: The worst comment is one that gives a wrong explanation of what the code does. That is worse than no comment at all. I've seen this kind of thing in code with way too many comments (that shouldn't be there because the code is clear enough on its own), and it happens mostly when the code is updated (refactored, modified, etc.) but the comments aren't updated along with it. A good rule of thumb is: only write comments to explain why code is doing something, not what it does. A: Would definitely have to be comments that stand in place of error handling.

if(some_condition){
    do_stuff();
}
else{
    //An error occurred!
}

A: I just found this one, written on the line before a commented-out line of code:

//This causes a crash for some reason. I know the real reason but it doesn't fit on this line.

A: Unfilled javadoc boilerplate comments are particularly useless. They consume a lot of screen real estate without contributing anything useful. And the worst part is that where one such comment appears, hundreds of others are surely lurking behind. 
/**
 * Method declaration
 *
 *
 * @param table
 * @param row
 *
 * @throws SQLException
 */
void addTransactionDelete(Table table, Object row[]) throws SQLException {

A: 100k LOC application that was ported from vb6 to vb.net. It looks as though a previous developer had put a comment header on one method and then copied and pasted the exact comment onto every method he wrote from then on. Hundreds of methods, and each one incorrectly commented... When I first saw it I laughed... 6 months later the joke is wearing thin. A: This is an absolutely real example from a database trigger:

/******************************************************************************
   NAME:       (repeat the trigger name)
   PURPOSE:    To perform work as each row is inserted or updated.
   REVISIONS:
   Ver        Date        Author           Description
   ---------  ----------  ---------------  ------------------------------------
   1.0        27.6.2000                    1. Created this trigger.
   PARAMETERS:
   INPUT:
   OUTPUT:
   RETURNED VALUE:
   CALLED BY:
   CALLS:
   EXAMPLE USE:
   ASSUMPTIONS:
   LIMITATIONS:
   ALGORITHM:
   NOTES:
******************************************************************************/

A: /** function header comments required to pass checkstyle */ A: I've found myself writing this little gem before:

//@TODO: Rewrite this, it sucks. Seriously.

Usually it's a good sign that I've reached the end of my coding session for the night. A:

// remember to comment code

wtf? :D A: The two most unhelpful comments I've ever seen...

try
{
    ...
}
catch
{
    // TODO: something catchy
}

I posted this one at the Daily WTF also, so I'll trim it to just the comment...

// TODO: The following if block should be reduced to one return statememt:
// return Regex.IsMatch(strTest, NAME_CHARS);
if (!Regex.IsMatch(strTest, NAME_CHARS))
    return false;
else
    return true;

A: One I've never found very helpful:

<!--- Lasciate ogne speranza, voi ch'intrate ---> ("Abandon all hope, ye who enter here")

A: Something like this:

// This method takes two integer values and adds them together via the built-in
// .NET functionality. 
// It would be possible to code the arithmetic function
// by hand, but since .NET provides it, that would be a waste of time
private int Add(int i, int j) // i is the first value, j is the second value
{
    // add the numbers together using the .NET "+" operator
    int z = i + j;
    // return the value to the calling function
    // return z;
    // this code was updated to simplify the return statement, eliminating the need
    // for a separate variable.
    // this statement performs the add functionality using the + operator on the two
    // parameter values, and then returns the result to the calling function
    return i + j;
}

And so on. A: Every comment that just repeats what the code says is useless. Comments should not tell me what the code does. If I don't know the programming language well enough to understand what's going on by just reading the code, I should not be reading that code at all. Comments like

// Increase i by one
i++;

are completely useless. I see that i is increased by one, that is what the code says, I don't need a comment for that! Comments should be used to explain why something is done (in case it is far from being obvious) or why something is done that way and not any other way (so I can understand certain design decisions another programmer made that are by far not obvious at once). Further comments are useful to explain tricky code, where it is absolutely not possible to determine what's going on by having a quick look at the code (e.g. there are tricky algorithms to count the number of bits set in a number; if you don't know what this code does, you have no chance of guessing what goes on there). A: Just the typical Comp Sci 101 type comments: I have threatened my students with random acts of extreme violence if they ever did this in assignments. And they still did. The sense in proper indentation, however, seemed to be totally lost on them. Goes to show why Python would be the ideal language for beginners, I guess. 
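To make the bit-counting example above concrete, here is a Python sketch of one such trick (Kernighan's bit count; the function name is my own). The why-comment is exactly the kind that earns its keep:

```python
def popcount(n: int) -> int:
    """Count the bits set in a non-negative integer."""
    count = 0
    while n:
        # Why this works: n & (n - 1) clears the lowest set bit of n,
        # so the loop runs once per set bit. Without this comment, the
        # line below is the kind of code you have no chance of guessing.
        n &= n - 1
        count += 1
    return count
```

A comment saying "clear the lowest set bit" would merely restate the code; explaining that the loop therefore runs once per set bit is the part a reader cannot guess.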
A: Comments generated by an auto-javadoc tool (e.g. JAutoDoc). I had a team member submit a large amount of code that was commented like:

/**
 * Gets the something
 *
 * @param num The num
 * @param offset The offset
 */
public void getSomething(int num, bool offset)

Maybe it's helpful as a starting point, but by definition if the program is parsing the variable and method names to make its comments it can't be doing much useful. A: Whenever I teach OOP in C++ or Java, I typically get the following:

// My class!
Class myclass
{
    //Default constructor
    public myClass()
    {
        ...
    }
}

My policy is to announce to students that they would lose points for both insufficient and superfluous documentation. A: I have a lot of these:

# For each pose in the document
doc.elements.each('//pose') do |pose|
...

# For each sprite in sprites
@sprites.each do |sprite|
...

# For each X in Y
for X in Y do
...

I'm trying to cut back on that, though. :( A:

#include <stdio.h> //why isn't this working!

With a C compiler that only supports /*-style */ comments. A: We are maintaining a terrible mess of a PHP application, and the original developer had a habit of leaving 'debugging code' commented out all over the place. As he always said, it was because "in case he ever needs them again, he just uncomments them and voila, so it saves him a lot of work". So all the scripts are literally riddled with lines like:

//echo "asdfada";
//echo $query."afadfadf";

None of them is actually functional in any way. They are mostly there to confirm that code execution reaches that point. On a related note, he never deleted any obsolete script or database table. So we have directories filled with files like dosomething.php, dosomething1.php, dosomething1.bak, dosomething1.bak.php etc... Everybody is scared to delete anything because nobody knows what is really used. A:

Thread.Sleep(1000); // this will fix .NET's crappy threading implementation

A: I once worked on a project with a strange C compiler. 
It gave an error on a valid piece of code unless a comment was inserted between two statements. So I changed the comment to:

// Do not remove this comment else compilation will fail.

And it worked great. A: My research deals with API usability, and I've encountered a lot of comments which are bad simply because they are misleading, misplaced, incorrect, or incomplete. For example, in Java Messaging Service (JMS or within J2EE), the QueueReceiver.receive method contains the following gem: "This call blocks until a message arrives, the timeout expires, or this message consumer is closed. A timeout of zero never expires and the call blocks indefinitely." Sounds great, right? The problem is, as my lab studies show, that users believe that comments cover everything. Faced with a situation where messages are not received, they refuse to look elsewhere for the explanation. In this case, when you create a QueueConnection from the QueueConnectionFactory, it tells you that the messages would not be delivered until start is called. But that does not appear in the receive method. I believe that if that line wasn't there, more people would have searched for it elsewhere. By the way, my study deals with JavaDoc usability in general, and with whether people actually find the important directives in JavaDocs. If anybody wants to take a look, a related paper is here. A: I have a very bad habit of doing this, especially when I'm on a roll:

// TODO: Documentation.

A: A very large source file, implementing multi-threading in a single process. In the midst of all the call-stack switching and semaphore grabbing and thread suspension and resumption was a simple comment regarding a particularly obscure bit of pointer manipulation:

/* Trickiness */

Gee, thanks for sharing. A: Extraneous comment breaks. 
Normally, if there's a logical separation of flow, a line of comments like:

/***************************************************************************/

above and below that section of code can be helpful. It's also nice for when you need to come back later and split apart a large function (that started out small) into several smaller functions to keep the code easy to read. A former programmer, who shall remain nameless, decided to add the following two lines:

//-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
//-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

After every single line of code. A: Once I saw the following comment in some code:

//I know that this is very ugly, but I am tired and in a hurry.
//You would do the same if you were me...
//...
//[A piece of nasty code here]

A: Here are my two favorites:

// do nothing

This doesn't really help as it just takes up space. Then somewhere further along:

// TODO: DAN to fix this. Not Wes. No sir. Not Wes.

I guess if I'm not Dan or Wes, I should just ignore this, right? A:

cntrVal = ""+ toInteger(cntrVal) //<---MAYBE THIS IS THE WAY I'M GOING THROUGH CHANGES (comin' up comin' up) THIS IS THE WAY I WANNA LIVE

That's lyrics from an E-type song btw... A: Excessive redundancy doesn't clarify what's going on. This one is from mobile phone firmware:

/*========================================================================
FUNCTION DtFld_SetMin

DESCRIPTION
  This local function sets a nMin member of the Dtfld struct.

DEPENDENCIES
  None

ARGUMENTS
  [in] me   Pointer to the Dtfld struct.
  [in] val  Value to set

RETURN VALUE
  None.

SIDE EFFECTS
  None

NOTE
  None
========================================================================*/
/**
  @brief This local function sets a nMin member of the Dtfld struct.

  @param [in] me Pointer to the Dtfld struct. 
  @param [in] val Value to set
  @retval None
  @note None
  @see None
*/
static __inline void DtFld_SetMin(DtFld *me, int val)
{
    me->nMin = val;
}

A:

// initialise the static variable to 0
count = 1;

A: I don't believe it. I came into this question after it had 22 answers, and no one pointed out the least possibly useful type of comment: comments that are wrong. It's bad enough that people write superfluous comments that get in the way of understanding code, but when someone writes a detailed comment explaining how something works, and it's either wrong in the first place, or wrong after the code was changed without changing the comment (a much more likely scenario), that is definitely the worst kind of comment. A: GhostDoc comes up with some pretty interesting ones on its own.

/// <summary>
/// Toes the foo.
/// </summary>
/// <returns></returns>
public Foo ToFoo()

A: // secret sauce A: // Don't know why we have to do this A:

try
{
    ...some code...
}
catch
{
    // Just don't crash, it wasn't that important anyway.
}

*sigh A: Not quite a comment, but from the JavaDoc that described the API of a system I once had to work with.

setAttribute(attributeName, attributeValue)
    Sets an attribute

Nowhere was it documented what an attribute was (they were not HTML/XML/etc attributes), what attributes existed, or what values they could have. A:

/* FIXME: documentation for the bellow functionality - and why are we doing it this way */

It was a huge statistical program for an accounting application. We never figured out why she had done it that - wrong - way. But we had to rewrite it, and paid a penalty to the customer. A:

// Magic
menu.Visible = False
menu.Visible = True

This is from the UI framework in some PowerBuilder code I used to work on. The framework created menu items dynamically (from database data). However, when PowerBuilder was upgraded from 16-bit to 32-bit, the menu code stopped working. 
The lead developer somehow determined that hiding the menu and then showing it caused it to display properly. A: Once upon a time, I saw:

#region This is ugly but a mas has to do what a man has to do
Initialization of a gigantic array
(...)
#endregion // Aren't you glad this has ended?

I was glad I was not that developer. A: I'm surprised nobody posted one like this before.

#contentWrapper{
    position:absolute;
    top: 150px; /*80 = 30 + 50 where 50 is margin and 30 is the height of the header*/
}

Plain wrong comments are the worst kind of comments. A: // Good luck A:

/// <summary>
/// Disables the web part. (Virtual method)
/// </summary>
public virtual void EnableWebPart()
{
    /* nothing - you have to override it */
}

A: I work in two languages (English and French), but my favorite comment was in French:

/* La passe du coyote qui tousse */

Translated, it would give something like this:

/* The coughing coyote trick */

It usually represented a segment of code that either seemed like a clever idea to the author and was completely obscure, or was a weird bugfix that even the author did not understand why it worked (think fixing a race condition by moving if statements around). In all cases it was poorly written code that scared anybody who had to refactor it, because it was very hard to predict the effect of changing it. A:

add ax,1 ;add 1 to the accumulator

Seriously? That comment wasted 5 seconds of my life. Also, outdated comments FTL:

//the system can only handle 5 people right now. make sure where not over
if(num_people>20){

A: One of the funniest I have ever come across.

// HACK HACK HACK. REMOVE THIS ONCE MARLETT IS AROUND

One that made me wonder.

// this is a comment - don't delete

A:

// yes, this is going to break in 2089, but, one, I'll be dead, and two, we really ought to be using
// a different system by then
if (yearPart >= 89)
{
    // naughty bits removed....
}

(Not useful as comments go, but both are truthful statements.) 
A: I just saw this in an INI file for a piece of software (one of several dumped on me not long ago) I'm maintaining:

;--- if LOGERR=1, errors are logged but debugging is difficult
;--- if LOGERR=0, errors are not logged but debugging is easy
LOGERR=1

Well, debugging was indeed difficult, but I did not dare change the setting. A: Came across a file once. Thousands of lines of code, most of it quite horrendous. Badly named variables, tricky conditionals on loops, and one comment buried in the middle of the file.

/* Hmmm. A bit tricky. */

A:

//' OOOO oooo that smell!! Can't you smell that smell!??!??!!!!11!??/!!!!!1!!!!!!1
If Not Me.CurrentMenuItem.Parent Is Nothing Then
    For Each childMenuItem As MenuItem In aMenuItem.Children
        do something
    Next
    If Not Me.CurrentMenuItem.Parent.Parent Is Nothing Then //'item is at least a grand child
        For Each childMenuItem As MenuItem In aMenuItem.Children
            For Each grandchildMenuItem As MenuItem In childMenuItem.Children
                do something
            Next
        Next
        If Not Me.CurrentMenuItem.Parent.Parent.Parent Is Nothing Then //'item is at least a grand grand child
            For Each childMenuItem As MenuItem In aMenuItem.Children
                For Each grandchildMenuItem As MenuItem In childMenuItem.Children
                    For Each grandgrandchildMenuItem As MenuItem In grandchildMenuItem.Children
                        do something
                    Next
                Next
            Next
        End If
    End If
End If

A: Default comments inserted by IDEs. The last project I worked on which used WebSphere Application Developer had plenty of maintenance developers and contractors who didn't seem to be bothered by the hundreds, if not thousands, of Java classes which contained the likes of this:

/**
 * @author SomeUserWhoShouldKnowBetter
 *
 * To change this generated comment edit the template variable "typecomment":
 * Window>Preferences>Java>Templates.
 * To enable and disable the creation of type comments go to
 * Window>Preferences>Java>Code Generation. 
 */

There was always that split-second between thinking you'd actually found a well-commented source file and realising that, yup, it's another default comment, which forced you to use SWEAR_WORD_OF_CHOICE. A: I saw this comment yesterday in a C# app:

//TODO: Remove this comment.

A: My favorite all-time comment.

/* our second do loop */
do {

Whoever wrote it - you know who you are. A: Just the typical Comp Sci 101 type comments:

$i = 0; //set i to 0
$i++; //use sneaky trick to add 1 to i!
if ($i==$j) { // I made sure to use == rather than = here to avoid a bug

That sort of thing. A: a very large database engine project in C many many years ago - thousands of lines of code with short and misspelled variable names, and no comments... until way deep in nested if-conditions, several thousand lines into the module, the following comment appeared:

//if you get here then you really f**ked

By that time, I think we knew that already! A: In a huge VB5 application:

dim J
J = 0 'magic
J = J 'more magic
for J=1 to 100
...do stuff...

The reference is obviously THIS ... and yes, the application without those two lines fails at runtime with an unknown error code. We still don't know why. A: Taken from one of my blog posts: In the process of cleaning up some of the source code for one of the projects I manage, I came across the following comments:

/* MAB 08-05-2004: Who wrote this routine? When did they do it? Who should I call if I have
   questions about it? It's worth it to have a good header here. It should helps to set context,
   it should identify the author (hero or culprit!), including contact information, so that anyone
   who has questions can call or email. It's useful to have the date noted, and a brief statement
   of intention. On the other hand, this isn't meant to be busy work; it's meant to make
   maintenance easier--so don't go overboard. One other good reason to put your name on it:
   take credit! 
   This is your craft */

and then a little further down:

#include "xxxMsg.h" // xxx messages
/* MAB 08-05-2004: With respect to the comment above, I gathered that from the filename. I
   think I need either more or less here. For one thing, xxxMsg.h is automatically generated
   from the .mc file. That might be interesting information. Another thing is that xxxMsg.h
   should NOT be added to source control, because it's auto-generated. Alternatively, don't
   bother with a comment at all. */

and then yet again:

/* MAB 08-05-2004: Defining a keyword?? This seems problemmatic [sic], in principle if not
   in practice. Is this a common idiom? */

A: AHHHRRRGGHHH Just found this in some ancient code, bet the guy thought he was pretty funny:

private //PRIVATE means PRIVATE so no comments for you
  function LoadIt(IntID: Integer): Integer;

A: I have removed the name to avoid embarrassment, but this is a comment found in some production code. Unfortunately, as this was ASP code, referring to a VB6 module, and the customer was quite inquisitive, it was she who pointed out the comment to me whilst I was on-site during a consultancy visit. Luckily she had a sense of humour about it.

'I don't know how the help this @"%& works. It is a load of &£$! created by that contractor ---------. I will just leave it in place and hope nobody ever needs it changing.

Unfortunately for me the code did need changing about a year later, at which point we found we had no source code and had to junk it and rewrite for free. A: I would have to say that the least useful type of commenting I have encountered is second-language commenting. I would rather see the comments written clearly in someone's native language than scrawled in a very poor approximation of English. At least then a native speaker of that language could translate it. ESL comments are often unreadable to everyone on the planet except the person who wrote them, and sometimes not even by them. 
A: Taken from legacy code, this was the only description of the following if condition's purpose (the condition spanned 4 rows at 120 cols): #-- Whoa, now that's a big if condition. A: Quoting this from memory so it might not be exact. I don't know what the f*ck this does, but it seems to work so I am not touching it. The funny thing is the way I found out about it. This comment was embedded in an access application some developer in our company had written for a client and distributed in an MDB. Unfortunately the code that "seems to work" bombed and Access dutifully opened the code window with the debugger highlighting the line right below the comment. It didn't exactly inspire confidence with that customer. A: Someone's name or initials, and that's it. Sometimes these signatures define a block of code... //SFD Start ...code... //SFD End Like the code is such a work of art they have to sign it! Plus, what if someone else needs to change code marked this way? This should not be confused with the "blame" or "annotate" feature in source control systems - they rock! A: Ran across a doozy today. I should have expected it given that it was part of a VBA macro in an excel workbook. a.writeline s 'write line I found it particularly charming that whomever wrote this took the time to write a comment that used a space to clear up the incredibly confusing jumbled together "writeline" command, but didn't find it necessary to use meaningful variable names. Best I can tell a is short for "a file", and s is short for "a String" (because "a" was already taken). A: Randomly, in the middle of code: //??? A: if (someFlag) { // YES DoSomething(); } else { // NO DoSomethingElse(); } There was one guy who did that constantly, the rest of the team eventually convinced him to stop doing it! A: This: Yup, a blank space, left as a subversion change log. 
A: I found this in a sample application for a mapping product: // Return value does not matter return 0; A: I found this in a twisted program # Let them send messages to the world #del self.irc_PRIVMSG # By deleting the method? Hello? A: /** * Implements the PaymentType interface. */ public class PaymentTypePo implements PaymentType A: I once worked with a very talented assembly language programmer who had augmented the basic ARM instruction set with a number of macros. His code was made up of tens of thousands of instructions and looked something like the following - with macro instructions that I (a competent ARM programmer) couldn't read represented by ??? and an occasional regular ARM instruction like ADD: ... ??? R0,R0,#1 ??? R0,R1 ADD R0,R0,#6 ; Add 6 ??? R1,R0 ??? R0,R0,R1 ... I can only presume that when you have a brain the size of a planet, it is too high brow to cope with those pesky instructions that are just too damn simple. A: Found this one today in the middle of a block of declarations: // other variables Gee, really? Thanks. A: I one time came across this little beauty in a VB.NET app Dim huh as String 'Best name for a variable ever. A: // return return; A: Just found this one today... // TODO: this is basically a copy of the code at line 743!!! A: This is a comment I wrote in a file in my group's final project in college /* http://youtube.com/watch?v=oHg5SJYRHA0 */ A: A classic that we always joke about at my job (complete with typos): // Its stupid but it work This was found multiple times in an old code base. A: I had to fix a bug in 2000 lines of code that transcoded audio from GSM into mu-law (mostly using bit shifting and arrays of conversion values). The only comment in the file was at the top of the only method defined in it. It was: /* Do the business */ A: // Undocumented A: I was once maintaining the operating system code we customized for a Harris minicomputer (yes, this was a long time ago). 
Going through the scheduler code (assembler) one day, I came across a code block that had the comment "Begin Magic" at the top and about 25 lines later the comment "End Magic." The code in-between was some of the tightest, most complicated, elegant code I've ever seen. It took 4 of us to figure out what that little section of code was actually doing (VM switching). A: I'm making some changes in a Java class that has more than 1000 lines but no comments at all. I'm a newbie to their coding style, so I can't help adding a comment like /*Added because someone asked me to add it*/ A: if (returnValue ==0) doStuff(); else System.out.println("Beware of you, the Dragons are coming!"); A: /* this is a hack. ToDo: change this code */ A: //I am not sure why this works but it fixes the problem. This one tops the list for my useless comments. A: // this is messed up, and no one actually knows how it works anymore... A: Someone sent me a C file which described a binary file his program created. It contained no comments except, somewhere in the writing of the real data: SwapArray(..); // Big endian ??? write(); I asked about the implementation of SwapArray and he told me I didn't need it; it's just to make sure it works on Linux machines. After experimenting I found out that he used little endian everywhere (which is normal) but only the real data was written in big endian. Normally you could see it in a hex editor, but the data was stored in floating point, so it's not that easy to notice the mixed endianness. A: Top of the Pops surely has to be // This code should never be called A: My favorite from when I worked on a legacy communications application. // Magic happens here... A: Came across this one today: /// <summary> /// The Page_Load runs when the page loads /// </summary> private void Page_Load(Object sender, EventArgs e) {} A: Another one I remember: //TODO: This needs to be reworked. THIS CRAP NEEDS TO STOP!!!
A: { Long complicated code logic //Added this } A: {Some Code;} // I don't Remember why I do this, but it works... A: Actually I have a few of these, // 18042009: (Name here) made me do this Not very proud of those comments but I keep them to remind me why I did WTF code that particular section, so useful in that aspect. A: I recently found this in some code I wrote aeons ago: // it's a kind of magic (number) $descr_id = 2; $url_id = 34; A: This comment was actually written in a different language, but I'll try to get the effect across in a translation: //we trick it, if forbidden, as if it had already existed What the comment was trying to describe was the way it was dealing with list items that were turned off - the code marked the item as a duplicate which should therefore be skipped. Yes, a very bass-ackwards way of doing things, but it paled in comparison to the nonsensical comment. A: [some code] // [a commented out code line] // this line added 2004-10-24 by JD. // removed again 2004-11-05 by JD. // [another commented out code line] [some more code] a) WHY? b) Which line? A: I saw an awesome code inside the AI part of a game: ..."AI code"... if(something) goto MyAwesomeLabel; //Who's gonna be the first to dump crap on me for this? ..."More Ai code"... MyAwesomeLabel: //It wasn't that hard to get here, right? ..."Even more AI code"... A: //URGENT TODO: Reimplement this shit, the old code is as broken as hell... and we tought we solved all the problems Just found that in one of my old projects. At first I laughed but in the end I was bitching because I still couldn't find the bug. A: # Below is stub documentation for your module. You'd better edit it A: Not quite fitting to the question, but I hate when I see: try { someSeeminglyTrivialMethod(); } catch (Exception e) { //Ignore. Should never happen. 
} Whenever I see that during a code review, I tell them to replace the catch with: catch (Exception e) { System.exit(0); } A: I thought this was about the worst comment on a SO post, and was disappointed to find otherwise. A: Commented code is the least useful comment :)
{ "language": "en", "url": "https://stackoverflow.com/questions/143429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Libraries of audio samples (spoken text) For a project we're currently working on, we need a library of spoken words in many different languages. Two options seem possible: text-to-speech or "real" recordings by native speakers. As the quality is important to us, we're thinking about going the latter path. In order to create a prototype for our application, we're looking for libraries that contain as many words in different languages as possible. To get a feeling for the quality of our approach, this library should not be made up of synthesized speech. Do you know of any available/accessible libraries? A: A co-worker just found this community-based library, which is nice, but rather small in size: Forvo.com A: I've just found this on the Audacity wiki: VoxForge. From their site: VoxForge was set up to collect transcribed speech for use with Free and Open Source Speech Recognition Engines (on Linux, Windows and Mac). We will make available all submitted audio files under the GPL license, and then 'compile' them into acoustic models for use with Open Source speech recognition engines such as Sphinx, ISIP, Julius and HTK (note: HTK has distribution restrictions). A: There is also Old time radio, not sure if this is the sort of spoken word you're after though. A: My guess is that you won't find a library anywhere that consists of just individual words. Whatever you find, you're going to have to open the audio up in an editor (like Pro Tools or Cool Edit) and chop it up into individual words. You would probably be better off creating a list of all the words you need for each language, and then finding native speakers to read them while you record. You can have them read slowly, so that you'll have an easy time chopping up each individual word. A: One I used to use a lot: http://shtooka.net/index.php Easy access to the recordings.
{ "language": "en", "url": "https://stackoverflow.com/questions/143431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Can Greasemonkey cause the displayed title to change I want to change the title showing in a page based on information I pick up from within the page (e.g. to show the number of inbox messages) document.getElementsByTagName('title')[0].innerHTML="foo"; does change the title tag, but Firefox does not update the displayed title (in the window and tabs) when this happens. Is this possible? A: Try setting document.title to the new value A: Try using this instead: document.title = "MyTitle";
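A minimal sketch of the document.title approach from the answers above. The helper name makeTitle and the "unread" class selector are illustrative assumptions, not taken from the original question:

```javascript
// Hypothetical sketch: derive a title from an inbox count found on the page,
// then assign document.title, which Firefox reflects in the window and tabs.
function makeTitle(baseTitle, unreadCount) {
  // Show the inbox count in the title, e.g. "(3) Inbox"
  return unreadCount > 0 ? "(" + unreadCount + ") " + baseTitle : baseTitle;
}

// In a Greasemonkey script this runs against the live page; the guard just
// lets the helper be exercised outside a browser as well.
if (typeof document !== "undefined") {
  var unread = document.getElementsByClassName("unread").length;
  document.title = makeTitle(document.title, unread);
}
```

Assigning document.title (rather than rewriting the title tag's innerHTML) is what makes the browser repaint the window and tab labels.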
{ "language": "en", "url": "https://stackoverflow.com/questions/143484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: implicit operator using interfaces I have a generic class that I'm trying to implement implicit type casting for. While it mostly works, it won't work for interface casting. Upon further investigation, I found that there is a compiler error: "User-defined conversion from interface" that applies. While I understand that this should be enforced in some cases, what I'm trying to do does seem like a legitimate case. Here's an example: public class Foo<T> where T : IBar { private readonly T instance; public Foo(T instance) { this.instance = instance; } public T Instance { get { return instance; } } public static implicit operator Foo<T>(T instance) { return new Foo<T>(instance); } } Code to use it: var concreteReferenceToBar = new ConcreteBar(); IBar interfaceReferenceToBar = concreteReferenceToBar; Foo<ConcreteBar> concreteFooFromConcreteBar = concreteReferenceToBar; Foo<IBar> fooFromConcreteBar = concreteReferenceToBar; Foo<IBar> fooFromInterfaceBar = interfaceReferenceToBar; // doesn't work Does anyone know a workaround, or can anyone explain in a satisfactory way why I shouldn't be able to cast interfaceReferenceToBar implicitly to Foo<IBar>, since in my case it is not being converted, but only contained within Foo? EDIT: It looks like covariance might offer salvation. Let's hope the C# 4.0 specification allows for implicit casting of interface types using covariance. A: The reason you can't do this is because it is specifically forbidden in the C# language specification: Source: ECMA-334 Section 15.10.4 A class or struct is permitted to declare a conversion from a source type S to a target type T provided all of the following are true: * *... *Neither S nor T is object or an interface-type. and User-defined conversions are not allowed to convert from or to interface-types.
In particular, this restriction ensures that no user-defined transformations occur when converting to an interface-type, and that a conversion to an interface-type succeeds only if the object being converted actually implements the specified interface-type.
{ "language": "en", "url": "https://stackoverflow.com/questions/143485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Unobtrusive JavaScript: A: There are two possibilities for truly unobtrusive scripts: * *including an external script file via a script tag in the head section *including an external script file via a script tag at the bottom of the body (before </body></html>) The second one can be faster as the original Yahoo research showed some browsers try to load script files when they hit the script tag and therefore don't load the rest of the page until they have finished. However, if your script has a 'ready' portion which must execute as soon as the DOM is ready you may need to have it in the head. Another issue is layout - if your script is going to change the page layout you want it loaded as early as possible so your page does not spend a long time redrawing itself in front of your users. If the external script site is on another domain (like external widgets) it may be worth putting it at the bottom to avoid it delaying loading of the page. And for any performance issues do your own benchmarks - what may be true at one time when a study is done might change with your own local setup or changes in browsers. A: If you want to tinker with the position of your scripts, YSlow is a great tool for giving you a flavour if it's going to improve or hurt performance. Putting javascript in certain document positions can really kill page load times. http://developer.yahoo.com/yslow/ A: No it should not be after the </html> as that would be invalid. The best place to put scripts is right before the </body> This is basically because most browsers stop rendering the page while they eval the script that you provide. So its OK to put non-blocking code anywhere in the page (I'm mainly thinking of things that attach functions to the onLoad event, since event binding is so fast as to effectively be free). 
A big killer here is at the beginning of the page putting in some ad server script, which can prevent any of the page loading before the ads have totally downloaded, making your page load times balloon A: It's never so cut and dry - Yahoo recommends putting the scripts just before the closing </body> tag, which will create the illusion that the page loads faster on an empty cache (since the scripts won't block downloading the rest of the document). However, if you have some code you want to run on page load, it will only start executing after the entire page has loaded. If you put the scripts in the <head> tag, they would start executing before - so on a primed cache the page would actually appear to load faster. Also, the privilege of putting scripts at the bottom of the page is not always available. If you need to include inline scripts in your views that depend on a library or some other JavaScript code being loaded before, you must load those dependencies in the <head> tag. All in all Yahoo's recommendations are interesting but not always applicable and should be considered on a case-by-case basis. A: As others have said, place it before the closing body html tags. The other day we had numerous calls from clients complaining their sites were extremely slow. We visited them locally and found they took 20-30 seconds to load a single page. Thinking it was the servers performing badly, we logged on - but both web and sql servers were ~0% activity. After a few minutes, we realised an external site was down, which we were linking to for Javascript tracking tags. This meant browsers were hitting the script tag in the head section of the site and waiting to download the script file. So, for 3rd party/external scripts at least, I'd recommend putting them as the last thing on the page. Then if they were unavailable, the browser would at least load the page up until that point - and the user would be oblivious to it. 
A: If you put it at the bottom, it loads last, so the user sees the page content sooner. It does need to be before the final </html> though, otherwise it won't be part of the DOM. If the code is needed instantly though, then put it in the head. It's best to put things like blog widgets at the bottom so that if they don't load, it doesn't affect the usability of the page. A: To summarize, based on the suggestions above: * *For external scripts (Google analytics, 3rd party marketing trackers, etc.) place them before the </body> tag. *For scripts that affect page layout, place in head. *For scripts that rely on 'dom ready' (like jquery), consider placing before </body> unless you have an edge-case reason to place scripts in head. *If there are inline scripts with dependencies, place the required scripts in head.
{ "language": "en", "url": "https://stackoverflow.com/questions/143486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "92" }
Q: How can I add fonts to NetBeans? I'm using NetBeans on Ubuntu, and I would like to add some fonts to it. Could anyone tell me how this is done? A: I assume you mean the IDE's editor font? I'm on Windows with 6.1 but I assume the process will be the same. Tools > Options > Fonts & Colours > Syntax Category: default Font: ([...]) > Select Font A: Adding them to the .fonts/ directory did the trick.
{ "language": "en", "url": "https://stackoverflow.com/questions/143487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: OCSP libraries for python / java / c? Going back to my previous question on OCSP, does anybody know of "reliable" OCSP libraries for Python, Java and C? I need "client" OCSP functionality, as I'll be checking the status of Certs against an OCSP responder, so responder functionality is not that important. Thanks A: Java 5 has support for revocation checking via OCSP built in. If you want to build an OCSP responder, or have finer control over revocation checking, check out Bouncy Castle. You can use this to implement your own CertPathChecker that, for example, uses non-blocking I/O in its status checks. A: Have you checked pyOpenSSL? I'm sure OpenSSL supports OCSP, and the Python binding may expose it. A: OpenSSL is the most widely used product for OCSP in C. It's quite reliable, although incredibly obtuse. I'd recommend looking at apps/ocsp.c for a pretty good example of how to make OCSP requests and validate responses. Vista and Server 2008 have built-in OCSP support in CAPI; check out CertVerifyRevocation.
{ "language": "en", "url": "https://stackoverflow.com/questions/143515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: x509 certificate parsing libraries for Java Any recommended crypto libraries for Java? What I need is the ability to parse X.509 certificates to extract the information contained in them. Thanks A: In Java, java.security.cert.CertificateFactory. "A certificate factory for X.509 must return certificates that are an instance of java.security.cert.X509Certificate" A: There's a lot more in most certificates than what's handled by java.security.cert.X509Certificate. If you need to parse extension values, check out the Bouncy Castle Crypto API. (A C# version is offered too.) A: Java doesn't need crypto libraries; it ships with that functionality already. In particular, java.security.cert.X509Certificate. A: You might also have a look at Keyczar, developed by Google. This library tries to make security as simple as possible and might be easier to use than the standard Java libraries...
{ "language": "en", "url": "https://stackoverflow.com/questions/143523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: linker woes - undefined reference I'm having a problem with my compiler telling me there is an 'undefined reference to' a function I want to use in a library. Let me share some info on the problem: * *I'm cross compiling with gcc for C. *I am calling a library function which is accessed through an included header which includes another header, which contains the prototype. *I have included the headers directory using -I and I'm sure it's being found. *I'm first creating the .o files then linking them in a separate command. So my thought is it might be the order in which I include the library files, but I'm not sure what is the correct way to order them. I tried including the headers folder both before and after the .o file. Some suggestions would be great, and maybe an explanation of how the linker does its thing. Thanks! Response to answers * *there is no .a library file, just .h and .c in the library, so -l isn't appropriate *my understanding of a library file is that it is just a collection of header and source files, but maybe it's a collection of .o files created from the source?! *there is no library object file being created, maybe there should be?? Yes, it seems I don't understand the difference between includes and libraries... I'll work on that :-) Thanks for all the responses! I learned a lot about libraries. I'd like to put all the responses as the accepted answer :-)
You can also permanently affect the library path through the LIBRARY_PATH environment variable. Here are some additional details to help you debug your problem. By convention the names of library files are prefixed with lib and (in their static form) have a .a extension. Thus, the statically linked version of the system's default math library (the one you link with -lm) typically resides in /usr/lib/libm.a. To see what symbols a given library defines you can run nm --defined-only on the library file. On my system, running the command on libm.a gives me output like the following. e_atan2.o: 00000000 T atan2 e_asinf.o: 00000000 T asinf e_asin.o: 00000000 T asin To see the library path that your compiler uses and which libraries it loads by default you can invoke gcc with the -v option. Again on my system this gives the following output. GNU assembler version 2.15 [FreeBSD] 2004-05-23 (i386-obrien-freebsd) using BFD version 2.15 [FreeBSD] 2004-05-23 /usr/bin/ld -V -dynamic-linker /libexec/ld-elf.so.1 /usr/lib/crt1.o /usr/lib/crti.o /usr/lib/crtbegin.o -L/usr/lib /var/tmp//ccIxJczl.o -lgcc -lc -lgcc /usr/lib/crtend.o /usr/lib/crtn.o A: It sounds like you are not compiling the .c file in the library to produce a .o file. The linker would look for the prototype's implementation in the .o file produced by compiling the library Does your build process compile the library .c file? Why do you call it a "library" if it's actually just source code? A: I fear you mixed the library and header concepts. Let's say you have a library libmylib.a that contains the function myfunc() and a corresponding header mylib.h that defines its prototype. In your source file myapp.c you include the header, either directly or including another header that includes it. For example: /* myapp.h ** Here I will include and define my stuff */ ... #include "mylib.h" ... your source file looks like: /* myapp.c ** Here is my real code */ ... #include "myapp.h" ... 
/* Here I can use the function */ myfunc(3,"XYZ"); Now you can compile it to obtain myapp.o: gcc -c -I../mylib/includes myapp.c Note that the -I just tells gcc where the header files are; they have nothing to do with the library itself! Now you can link your application with the real library: gcc -o myapp -L../mylib/libs myapp.o -lmylib Note that the -L switch tells gcc where the library is, and the -l tells it to link your code to the library. If you don't do this last step, you may encounter the problem you described. There might be other more complex cases but from your question, I hope this would be enough to solve your problem. A: Post your makefile, and the library function you are trying to call. Even simple gcc makefiles usually have a line like this: LIBFLAGS =-lc -lpthread -lrt -lstdc++ -lShared -L../shared In this case, it means link the standard C library, among others A: I guess you have to add the path where the linker can find the library. In gcc/ld you can do this with -L, and name the library with -l. -Ldir, --library-path=dir Search directory dir before standard search directories (this option must precede the -l option that searches that directory). -larch, --library=archive Include the archive file arch in the list of files to link. Response to answers - there is no .a library file, just .h and .c in the library, so -l isn't appropriate Then you may have to create the library first? gcc -c mylib.c -o mylib.o ar rcs libmylib.a mylib.o A: I have encountered this problem when building a program with a new version of gcc. The problem was fixed by calling gcc with the -std=gnu89 option. Apparently this was due to inline function declarations. I have found this solution at https://gcc.gnu.org/gcc-5/porting_to.html
{ "language": "en", "url": "https://stackoverflow.com/questions/143530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Comparing date ranges In MySQL, if I have a list of date ranges (range-start and range-end). e.g. 10/06/1983 to 14/06/1983 15/07/1983 to 16/07/1983 18/07/1983 to 18/07/1983 And I want to check if another date range contains ANY of the ranges already in the list, how would I do that? e.g. 06/06/1983 to 18/06/1983 = IN LIST 10/06/1983 to 11/06/1983 = IN LIST 14/07/1983 to 14/07/1983 = NOT IN LIST A: Taking your example range of 06/06/1983 to 18/06/1983 and assuming you have columns called start and end for your ranges, you could use a clause like this where ('1983-06-06' <= end) and ('1983-06-18' >= start) i.e. check the start of your test range is before the end of the database range, and that the end of your test range is after or on the start of the database range. A: If your RDBMS supports the OVERLAP() function then this becomes trivial -- no need for homegrown solutions. (In Oracle it apparently works but is undocumented). A: This is a classical problem, and it's actually easier if you reverse the logic. Let me give you an example. I'll post one period of time here, and all the different variations of other periods that overlap in some way.

          |-------------------|             compare to this one
              |---------|                   contained within
          |----------|                      contained within, equal start
                  |-----------|             contained within, equal end
          |-------------------|             contained within, equal start+end
    |------------|                          not fully contained, overlaps start
                    |---------------|       not fully contained, overlaps end
  |-------------------------|               overlaps start, bigger
            |-----------------------|       overlaps end, bigger
      |------------------------------|      overlaps entire period

on the other hand, let me post all those that don't overlap:

          |-------------------|             compare to this one
 |---|                                      ends before
                                 |---|      starts after

So if you simply reduce the comparison to:

    starts after end
    ends before start

then you'll find all those that don't overlap, and then you'll find all the non-matching periods.
For your final NOT IN LIST example, you can see that it matches those two rules. You will need to decide whether the following periods are IN or OUTSIDE your ranges:

          |-------------|
                        |-------|      equal end with start of comparison period
  |-------|                            equal start with end of comparison period

If your table has columns called range_end and range_start, here's some simple SQL to retrieve all the matching rows: SELECT * FROM periods WHERE NOT (range_start > @check_period_end OR range_end < @check_period_start) Note the NOT in there. Since the two simple rules find all the non-matching rows, a simple NOT will reverse it to say: if it's not one of the non-matching rows, it has to be one of the matching ones. Applying simple reversal logic to get rid of the NOT, you'll end up with: SELECT * FROM periods WHERE range_start <= @check_period_end AND range_end >= @check_period_start A: In your expected results you say 06/06/1983 to 18/06/1983 = IN LIST However, this period does not contain nor is contained by any of the periods in your table (not list!) of periods. It does, however, overlap the period 10/06/1983 to 14/06/1983. You may find the Snodgrass book (http://www.cs.arizona.edu/people/rts/tdbbook.pdf) useful: it pre-dates MySQL but the concept of time hasn't changed ;-) A: I created a function to deal with this problem in MySQL. Just convert the dates to seconds before use. DELIMITER ;; CREATE FUNCTION overlap_interval(x INT,y INT,a INT,b INT) RETURNS INTEGER DETERMINISTIC BEGIN DECLARE overlap_amount INTEGER; IF (((x <= a) AND (a < y)) OR ((x < b) AND (b <= y)) OR (a < x AND y < b)) THEN IF (x < a) THEN IF (y < b) THEN SET overlap_amount = y - a; ELSE SET overlap_amount = b - a; END IF; ELSE IF (y < b) THEN SET overlap_amount = y - x; ELSE SET overlap_amount = b - x; END IF; END IF; ELSE SET overlap_amount = 0; END IF; RETURN overlap_amount; END ;; DELIMITER ; A: Look into the following example. It will be helpful for you.
SELECT DISTINCT RelatedTo,CAST(NotificationContent as nvarchar(max)) as NotificationContent, ID, Url, NotificationPrefix, NotificationDate FROM NotificationMaster as nfm inner join NotificationSettingsSubscriptionLog as nfl on nfm.NotificationDate between nfl.LastSubscribedDate and isnull(nfl.LastUnSubscribedDate,GETDATE()) where ID not in(SELECT NotificationID from removednotificationsmaster where Userid=@userid) and nfl.UserId = @userid and nfl.RelatedSettingColumn = RelatedTo A: CREATE FUNCTION overlap_date(s DATE, e DATE, a DATE, b DATE) RETURNS BOOLEAN DETERMINISTIC RETURN s BETWEEN a AND b or e BETWEEN a and b or a BETWEEN s and e; A: Try This on MS SQL WITH date_range (calc_date) AS ( SELECT DATEADD(DAY, DATEDIFF(DAY, 0, [ending date]) - DATEDIFF(DAY, [start date], [ending date]), 0) UNION ALL SELECT DATEADD(DAY, 1, calc_date) FROM date_range WHERE DATEADD(DAY, 1, calc_date) <= [ending date]) SELECT P.[fieldstartdate], P.[fieldenddate] FROM date_range R JOIN [yourBaseTable] P on Convert(date, R.calc_date) BETWEEN convert(date, P.[fieldstartdate]) and convert(date, P.[fieldenddate]) GROUP BY P.[fieldstartdate], P.[fieldenddate]; A: Another method by using BETWEEN sql statement Periods included : SELECT * FROM periods WHERE @check_period_start BETWEEN range_start AND range_end AND @check_period_end BETWEEN range_start AND range_end Periods excluded : SELECT * FROM periods WHERE (@check_period_start NOT BETWEEN range_start AND range_end OR @check_period_end NOT BETWEEN range_start AND range_end) A: SELECT * FROM tabla a WHERE ( @Fini <= a.dFechaFin AND @Ffin >= a.dFechaIni ) AND ( (@Fini >= a.dFechaIni AND @Ffin <= a.dFechaFin) OR (@Fini >= a.dFechaIni AND @Ffin >= a.dFechaFin) OR (a.dFechaIni>=@Fini AND a.dFechaFin <=@Ffin) OR (a.dFechaIni>=@Fini AND a.dFechaFin >=@Ffin) )
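The simple predicate (range_start <= @check_period_end AND range_end >= @check_period_start) can be sanity-checked against the question's sample data. This sketch uses Python's sqlite3 rather than MySQL, with illustrative table and column names; ISO-formatted date strings compare correctly as text:

```python
import sqlite3

# In-memory table holding the three ranges from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE periods (range_start TEXT, range_end TEXT)")
conn.executemany(
    "INSERT INTO periods VALUES (?, ?)",
    [
        ("1983-06-10", "1983-06-14"),
        ("1983-07-15", "1983-07-16"),
        ("1983-07-18", "1983-07-18"),
    ],
)

def overlaps_any(start, end):
    # The overlap predicate from the accepted reasoning above.
    row = conn.execute(
        "SELECT COUNT(*) FROM periods WHERE range_start <= ? AND range_end >= ?",
        (end, start),
    ).fetchone()
    return row[0] > 0

print(overlaps_any("1983-06-06", "1983-06-18"))  # True  (IN LIST)
print(overlaps_any("1983-06-10", "1983-06-11"))  # True  (IN LIST)
print(overlaps_any("1983-07-14", "1983-07-14"))  # False (NOT IN LIST)
```

The same two-comparison WHERE clause carries over to MySQL unchanged; only the parameter placeholder syntax differs.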
{ "language": "en", "url": "https://stackoverflow.com/questions/143552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "126" }
Q: firefox not opening - cron, ruby, firewatir I have written a ruby script which opens up the dlink admin page in firefox and does an ADSL connection or disconnection. I can run this script in the terminal without any problem. But if I put it as a cron job, it doesn't fire up firefox. This is the entry I have in crontab:

# connect to dataone
55 17 * * * ruby /home/raguanu/Dropbox/nettie.rb >> /tmp/cron_test

I see the following entries in /tmp/cron_test, so it looks like the script indeed ran:

PROFILE: i486-linux
/usr/bin/firefox -jssh

But I couldn't figure out why I didn't see firefox opening up, for this automation to work. Here is /home/raguanu/Dropbox/nettie.rb:

#!/usr/bin/ruby -w
require 'rubygems'
require 'firewatir'
require 'optiflag'

module Options
  extend OptiFlagSet
  character_flag :d do
    long_form 'disconnect'
    description 'Mention this flag if you want to disconnect dataone'
  end
  flag :l do
    optional
    long_form 'admin_link'
    default 'http://192.168.1.1'
    description 'Dlink web administration link. Defaults to http://192.168.1.1'
  end
  flag :u do
    optional
    long_form 'user'
    default 'admin'
    description 'Dlink administrator user name. Defaults to "admin"'
  end
  flag :p do
    optional
    long_form 'password'
    default 'admin'
    description 'Dlink administrator password. Defaults to "admin"'
  end
  flag :c do
    optional
    long_form 'connection_name'
    default 'bsnl'
    description 'Dataone connection name. Defaults to "bsnl"'
  end
  extended_help_flag :h do
    long_form 'help'
  end
  and_process!
end

class DlinkAdmin
  include FireWatir

  def initialize(admin_link = "http://192.168.1.1", user = 'admin', pwd = 'admin')
    @admin_link, @user, @pwd = admin_link, user, pwd
  end

  def connect( connection_name = 'bsnl' )
    goto_connection_page connection_name
    # disconnect prior to connection
    @browser.button(:value, 'Disconnect').click
    # connect
    @browser.button(:value, 'Connect').click
    # done!
    @browser.close
  end

  def disconnect( connection_name = 'bsnl' )
    goto_connection_page connection_name
    # disconnect
    @browser.button(:value, 'Disconnect').click
    # done!
    @browser.close
  end

  private

  def goto_connection_page( connection_name = 'bsnl' )
    @browser ||= Firefox.new
    @browser.goto(@admin_link)
    # login
    @browser.text_field(:name, 'uiViewUserName').set(@user)
    @browser.text_field(:name, 'uiViewPassword').set(@pwd)
    @browser.button(:value, 'Log In').click
    # setup > dataone
    @browser.image(:alt, 'Setup').click
    @browser.link(:text, connection_name).click
  end
end

admin = DlinkAdmin.new(Options.flags.l, Options.flags.u, Options.flags.p)
unless Options.flags.d?
  admin.connect( Options.flags.c )
else
  admin.disconnect( Options.flags.c )
end

Any help is appreciated.

A: You need to have a DISPLAY environment variable pointing at a valid X server. One option is to set it to ":0.0" (without quotes), so that it refers to your local standard DISPLAY. There are a few things to keep in mind, though:

You could run an X virtual frame buffer (Xvfb), so that Firefox simply uses that as its display. This would mean that Firefox would be able to do all its graphical operations, but that it would be independent of your standard graphical environment. You'll have to set the DISPLAY variable appropriately so that it points to the Xvfb instance. For instance, if you invoke Xvfb as follows:

Xvfb :1 -screen 0 1600x1200x32

then you'll be able to use it by setting the DISPLAY variable to :1

Also, you're starting a full-blown Firefox instance simply to connect or disconnect your modem. You would most likely be able to use "curl" to send the appropriate HTTP requests to the server, such that it performs a connect or disconnect for you. One way to see what you should recreate would be to install a Firefox plugin such as LiveHTTPHeaders and note down the most important HTTP requests as you perform the actions manually. There's even a ruby binding for curl: libcurl for Ruby.
The resulting script should be much smaller than your current script.

A: Programs run from cron don't have your interactive environment. Therefore they don't have a DISPLAY variable, and so you can't run any X (graphical) programs, e.g. Firefox. I would suggest doing the HTTP connections yourself, in Ruby, rather than trying to automate Firefox.

A: The crontab entry is wrong for the system crontab, which takes a user field:

# min hour day month dow user command
55 17 * * * ur_user_is_missing ruby /home/raguanu/Dropbox/nettie.rb >> /tmp/cron_test

(Note that this only applies to /etc/crontab; per-user crontabs installed with crontab -e have no user field, so the original entry is fine there.)
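To make the "skip the browser" suggestion concrete, the login request could be built with nothing but a standard library, shown here in Python for brevity rather than Ruby/curl. The form field names are taken from the FireWatir script above; the target URL and the assumption that the router accepts a plain POST are guesses — capture the real request with LiveHTTPHeaders before relying on this:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_login_request(admin_link, user, pwd):
    # Field names match the FireWatir script above; the exact POST
    # target path is an assumption -- verify it against the real router.
    data = urlencode({'uiViewUserName': user, 'uiViewPassword': pwd})
    return Request(admin_link, data=data.encode('ascii'))

req = build_login_request('http://192.168.1.1', 'admin', 'admin')
print(req.get_method())  # a Request with a body defaults to POST
```

Nothing is sent until the request is passed to urlopen, so the headless cron environment only matters for the X-dependent browser approach, not for this one.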
{ "language": "en", "url": "https://stackoverflow.com/questions/143554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Software Synth Library for Java I've been thinking a lot lately about a music-oriented project I'd like to work on. Kind of like a game... kind of like a studio workstation (FL Studio, Reason). I guess the best way to describe it would be: like "Guitar Hero", but with no canned tracks. All original music--composed by you, on the fly--but the software would use its knowledge of music theory (as well as some supervised learning algorithms) to make sure that your input gets turned into something that sounds great. It sounds a little silly, explaining it like that, but there ya go. It's something I think would make an interesting side project. Anyhow, I'm looking for a Java library for generating the actual audio. Browsing around on sourceforge, there are countless software synths, and I have no idea which to choose. My top priority is that it should sound incredible... Really rich, layered, textured synths, with gobs of configurable parameters. Emulation of acoustic instruments is not important to me. My second priority is that it ought to be straightforward to use strictly as a library, with no GUI involved at all. (If there's a synth with really breathtaking output, but it's tightly-coupled with a GUI, then I might consider ripping the audio portion out of the application, but I'd rather start with a nicely contained library). I know I could send MIDI to a standalone synth, but I think it'd be cool to read the actual synth code and learn a little DSP while I'm at it. Any suggestions? Oh yeah, I'm on Windows, so posix-only stuff is a no go. Thanks! A: Have you checked out JFugue? It's an "open-source Java API for programming music without the complexities of MIDI". 
Additional information: Found a couple of other resources referenced in the JFugue documentation (pdf): * *Audio Synthesis Engine Project: open source version of Java’s MIDI synthesizer *Gervill: open source software synthesizer created as a proposal for the Audio Synthesis Engine Project A: Yeah, I noticed JFugue a few years ago. It's on my list of interesting computer/music bookmarks: http://delicious.com/BenjiSmith/computermusic http://delicious.com/BenjiSmith/programming.java.libraries.music But JFugue is all about the structure of the music itself... the melodies, harmonies, rhythms, etc.... What I'm looking for right now is just the synthesizer. Something like this... Synth s = new Synth(); Instrument i = s.getInstrument("Robot Bass"); i.makeAwesome(true); And then I'll send my events into the MIDI stream (or into whatever control API the synth library provides). A: If Clojure is an acceptable option (runs on the JVM, easy to integrate with Java), then it's definitely worth checking out Overtone. It uses SuperCollider as the synthesis engine, but wraps it all in a nice DSL and interactive programming environment. A: minim isn't exactly a java synth, but it is a processing lib and I imagine that it should be pretty easy to use with vanilla java too.
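Whichever library ends up driving the notes, the DSP core of any software synth boils down to filling a buffer with samples. A minimal sketch (in Python rather than Java, purely to keep it short) that renders one second of a 440 Hz sine wave to a WAV file — the filename and amplitude choice are arbitrary:

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def sine_samples(freq, seconds, rate=RATE):
    n = int(seconds * rate)
    # 16-bit signed samples; the 0.8 factor leaves headroom against clipping
    return [int(32767 * 0.8 * math.sin(2 * math.pi * freq * i / rate))
            for i in range(n)]

samples = sine_samples(440.0, 1.0)
with wave.open('a440.wav', 'wb') as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit
    w.setframerate(RATE)
    w.writeframes(struct.pack('<%dh' % len(samples), *samples))
```

Rich, layered synth voices are built by summing and filtering many such oscillators per note, which is exactly the kind of code you would get to read inside any of the libraries above.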
{ "language": "en", "url": "https://stackoverflow.com/questions/143566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Including Partials, ASP.NET MVC I'm building my first ASP.NET MVC application and I am having some troubles with Partial Views. If I, as an example, want to put a "Footer" as a Partial I create an "MVC View User Control" in "/Views/Shared/Footer.ascx". (I leave it empty for now) What is the correct way for adding it to my Layout? I have tried: <%=Html.RenderPartial("Footer")%> and: <%=Html.RenderPartial("~/Views/Shared/Footer.ascx")%> For each one I get an exception: "CS1502: The best overloaded method match for 'System.IO.TextWriter.Write(char)' has some invalid arguments" What is the correct way to deal with partials in ASP.NET MVC? A: In this case don't use the <%= syntax. Just use the <% %> syntax. Then the first form in your examples should work. For more info, check here: http://bradwilson.typepad.com/blog/2008/08/partial-renderi.html A: Do what @BenScheirman said, and add a semi-colon at the end of your statement :) <% Html.RenderPartial("~/Views/Shared/Footer.ascx"); %> Update: I guess VB doesn't require the semi-colon. So you would only need that if you are programming in C#.
{ "language": "en", "url": "https://stackoverflow.com/questions/143571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Where can I find open source 2d bin packing algorithms? I'm looking for open source (preferably c++) algorithms for 2d bin packing of rectangular and or irregular shapes. I've found several papers on the subject but no code. A: Here is the best to my knowledge about rectangular bin packing: http://clb.demon.fi/projects/rectangle-bin-packing
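While that project covers the serious algorithms, the simplest rectangular heuristic — first-fit shelf packing — fits in a few lines and is a reasonable baseline before reaching for a library; porting this sketch to C++ is mechanical. It assumes axis-aligned rectangles, no rotation, and that every rectangle fits within the bin width:

```python
def shelf_pack(rects, bin_width):
    """First-fit shelf packing: place each (w, h) rectangle left to right
    on the current shelf, opening a new shelf when it won't fit.
    Returns (placements, used_height); no rotation, no backtracking,
    and every width is assumed to be <= bin_width."""
    x = y = shelf_h = 0
    placements = []
    for w, h in rects:
        if x + w > bin_width:        # start a new shelf
            y += shelf_h
            x = shelf_h = 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

pos, height = shelf_pack([(4, 3), (4, 2), (5, 2)], bin_width=10)
print(pos, height)
```

Shelf packing wastes the space above short rectangles on tall shelves; the maximal-rectangles and guillotine algorithms in the linked project exist precisely to reclaim that waste, and irregular (non-rectangular) shapes need a different approach entirely (no-fit polygons).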
{ "language": "en", "url": "https://stackoverflow.com/questions/143580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Exception thrown inside catch block - will it be caught again? This may seem like a programming 101 question and I had thought I knew the answer, but now I find myself needing to double check. In the piece of code below, will the exception thrown in the first catch block then be caught by the general Exception catch block below?

try {
    // Do something
} catch(IOException e) {
    throw new ApplicationException("Problem connecting to server");
} catch(Exception e) {
    // Will the ApplicationException be caught here?
}

I always thought the answer would be no, but now I have some odd behaviour that could be caused by this. The answer is probably the same for most languages but I'm working in Java.

A: If you want to throw an exception from the catch block you must inform your method/class/etc. that it needs to throw said exception. Like so:

public void doStuff() throws MyException {
    try {
        //Stuff
    } catch(StuffException e) {
        throw new MyException();
    }
}

And now your compiler will not yell at you :)

A: No. It's very easy to check.

public class Catch {
    public static void main(String[] args) {
        try {
            throw new java.io.IOException();
        } catch (java.io.IOException exc) {
            System.err.println("In catch IOException: " + exc.getClass());
            throw new RuntimeException();
        } catch (Exception exc) {
            System.err.println("In catch Exception: " + exc.getClass());
        } finally {
            System.err.println("In finally");
        }
    }
}

Should print:

In catch IOException: class java.io.IOException
In finally
Exception in thread "main" java.lang.RuntimeException
        at Catch.main(Catch.java:8)

Technically that could have been a compiler bug, implementation dependent, unspecified behaviour, or something. However, the JLS is pretty well nailed down and the compilers are good enough for this sort of simple thing (generics corner cases may be a different matter). Also note: if you swap the two catch blocks around, it won't compile. The second catch would be completely unreachable.
Note the finally block always runs even if a catch block is executed (other than silly cases, such as infinite loops, attaching through the tools interface and killing the thread, rewriting bytecode, etc.). A: The Java Language Specification says in section 14.19.1: If execution of the try block completes abruptly because of a throw of a value V, then there is a choice: * *If the run-time type of V is assignable to the Parameter of any catch clause of the try statement, then the first (leftmost) such catch clause is selected. The value V is assigned to the parameter of the selected catch clause, and the Block of that catch clause is executed. If that block completes normally, then the try statement completes normally; if that block completes abruptly for any reason, then the try statement completes abruptly for the same reason. Reference: http://java.sun.com/docs/books/jls/second_edition/html/statements.doc.html#24134 In other words, the first enclosing catch that can handle the exception does, and if an exception is thrown out of that catch, that's not in the scope of any other catch for the original try, so they will not try to handle it. One related and confusing thing to know is that in a try-[catch]-finally structure, a finally block may throw an exception and if so, any exception thrown by the try or catch block is lost. That can be confusing the first time you see it. A: No -- As Chris Jester-Young said, it will be thrown up to the next try-catch in the hierarchy. A: No, since the new throw is not in the try block directly. A: As said above... I would add that if you have trouble seeing what is going on, if you can't reproduce the issue in the debugger, you can add a trace before re-throwing the new exception (with the good old System.out.println at worse, with a good log system like log4j otherwise). A: It won't be caught by the second catch block. Each Exception is caught only when inside a try block. 
You can nest tries, though (not that it's a good idea generally):

try {
    doSomething();
} catch (IOException e) {
    try {
        doSomething();
    } catch (IOException e2) {
        throw new ApplicationException("Failed twice at doSomething: " + e2.toString());
    }
} catch (Exception e) {
}

A: No, since the catches all refer to the same try block, so throwing from within a catch block would be caught by an enclosing try block (probably in the method that called this one).

A: Old post, but a note on the catch parameter names: each catch block declares its own variable, so reusing "e" across sibling blocks is legal, though distinct names make it clearer which exception you're handling (and a nested catch cannot redeclare a name from its enclosing catch):

try {
    // Do something
} catch(IOException ioE) {
    throw new ApplicationException("Problem connecting to server");
} catch(Exception e) {
    // Will the ApplicationException be caught here?
}
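The rule holds outside Java too; Python's except clauses behave the same way, which makes for a quick executable check of the accepted answer (the exception messages here are invented):

```python
def demo():
    try:
        raise IOError("boom")
    except IOError:
        # Raised from inside a handler: sibling except clauses of this
        # same try are NOT consulted, exactly as in the Java example.
        raise RuntimeError("from handler")
    except Exception:
        return "caught by sibling"  # never reached

try:
    demo()
except RuntimeError as e:
    print("escaped to caller:", e)
```

The RuntimeError propagates out of demo() and is only caught by the enclosing try in the caller, mirroring how the Java ApplicationException escapes to the calling method.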
{ "language": "en", "url": "https://stackoverflow.com/questions/143622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "239" }
Q: Binsor and log4net I'm using Castle Windsor and Binsor for dependency injection in my application. I'm no expert at either one. Usually I can figure out how to bend Windsor to my will, but I find Binsor much harder, especially since I haven't found any decent documentation for it. I'm trying to create a Binsor configuration file where I use logging. I configure logging using the following Binsor code:

facility LoggingFacility:
    loggingApi = LoggerImplementation.Log4net
    configFile = "ParasiteLogConf.log4net"

This works great: all components that are registered with the container and that take an ILogger object as a constructor argument will receive the correct ILogger instance. However, what I want to do now is use another logger for one specific component. I want that component to log to a file, whereas the other components should only log to screen. How would I go about expressing that in Binsor code?

A: Ayende @ Rahien is your friend here. He has many blog posts on using and configuring Binsor. For the special logger, you need to add it as a component and then explicitly set the logger property of the dependent component to the id of the special logger component.
{ "language": "en", "url": "https://stackoverflow.com/questions/143623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Crypto/X509 certificate parsing libraries for Python Any recommended crypto libraries for Python. I know I've asked something similar in x509 certificate parsing libraries for Java, but I should've split the question in two. What I need is the ability to parse X.509 Certificates to extract the information contained in them. Looking around, I've found two options: * *Python OpenSSL Wrappers (http://sourceforge.net/projects/pow) *pyOpenSSL Of the two, pyOpenSSL seems to be the most "maintained", but I'd like some feedback on anybody who might have experience with them? A: Use M2Crypto, it is the most complete tool IMHO A: You might want to try keyczar as mentioned by me in your other post, since that library actually has implementations for both python and java. That would make it easier to use it in both contexts. A word of warning: I have not actually used this library 8(, so please take this with a grain of salt. A: My experience is that most crypto libraries are focused on a particular workflow - making a certain set of tasks easier and others hard or perhaps impossible. The exception to this would be ones that have really been around a long time and have matured (e.g. openssl, bounceycastle, but none of the python libraries in my experience). So, you really need to evaluate libraries in the context of what you are trying to do. More specifically, I've used pyOpenSSL for simple generation of private keys and certificates requests (i.e. being a client to a CA) and am quite happy with it. A: The keyczar project is deprecated. You can check out tink.
{ "language": "en", "url": "https://stackoverflow.com/questions/143632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Window styles / Minimal titlebar/borders I'm looking for some kind of a resource (website) that would list all possible window/dialog frame styles and their respective combinations with images. I'm only really interested in Vista, as my software won't support older platforms anyway. I have a more specific case here too: I'm wondering if there are other ways to achieve a smaller-than-normal titlebar for my window than WS_EX_TOOLWINDOW? The tool window style would otherwise suit my needs, but in addition to the normal window border, it seems to add this one-pixel wide white border inside the black outline, and that just looks really ugly for my purposes. I remember older versions of Adobe Photoshop (CS2?) having these ridiculously tiny titlebars on the tool windows, like 8-10px wide. I'm wondering if those can be done with normal winapi, since IIRC they came in vista flavour too, and conformed to whatever windows skin was in use..? A: If it's Vista-only, you can try to use your own window decoration and use the Desktop Window Manager (DWM) API to still provide Aero Glass Theming. On the other hand, if you're targeting Vista and later, you'll most likely not have to deal with low resolutions. Don't think too much about a few pixels more or less.
{ "language": "en", "url": "https://stackoverflow.com/questions/143633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Keep focus on an application I have written an application which has a modal form. How can I ensure that this form does not lose focus even when another application is started?

A: You can set the "Topmost" property to true to keep the form in front of all others, but that doesn't make it keep focus.

A: Actually, this is exactly the sort of thing you shouldn't be doing. There are too many programs around that assume they control the computer they're installed on. It is the user of your application that should be in control. That's why later versions of Windows disallowed stealing of focus, instead just blinking the entry in the taskbar. You may well find a way to do it (though I doubt it), but I urge you to rethink it. I'd be interested in knowing why you thought it was necessary.

A: You must make the dialog system modal.

A: I use SetForegroundWindow(Me.Handle), where Me.Handle is the handle of your form. You need to declare the following somewhere in your class or WinForm, but not inside a function:

Declare Unicode Function SetForegroundWindow Lib "user32.dll" (ByVal hWnd As IntPtr) As Boolean

You might need to start a timer and call SetForegroundWindow on every tick of, say, 10 seconds, depending on your preference.

EDIT: It works for me; if it doesn't, add the following:

Declare Unicode Function SystemParametersInfo Lib "user32.dll" Alias "SystemParametersInfoW" (ByVal uiAction As Int32, ByVal uiParam As Int32, ByRef pvParam As Int32, ByVal fWinIni As Int32) As Int32

And surround SetForegroundWindow with these:

Dim _timeout As Int32
SystemParametersInfo(&H2000, 0, _timeout, 0)
SystemParametersInfo(&H2001, 0, 0, 3)
SetForegroundWindow(Me.Handle)
SystemParametersInfo(&H2001, 0, _timeout, 2)

That's the last resort.
{ "language": "en", "url": "https://stackoverflow.com/questions/143651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's the best system for installing a Perl web app? It seems that most of the installers for Perl are centered around installing Perl modules, not applications. Things like ExtUtils::MakeMaker and Module::Build are very well suited for modules, but require some additional work for Web Apps. Ideally it would be nice to be able to do the following after checking out the source from the repository: * *Have missing dependencies detected *Download and install dependencies from CPAN *Run a command to "Build" the source into a final state (perform any source parsing or configuration necessary for the local environment). *Run a command to install the built files into the appropriate locations. Not only the perl modules, but also things like template (.tt) files, and CGI scripts, JS and image files that should be web-accessible. *Make sure proper permissions are set on installed files (and SELinux context if necessary). Right now we have a system based on Module::Build that does most of this. The work was done by done by my co-worker who was learning to use Module::Build at the time, and we'd like some advice on generalizing our solution, since it's fairly app-specific right now. In particular, our system requires us to install dependencies by hand (although it does detect them). Is there any particular system you've used that's been particularly successful? Do you have to write an installer based on Module::Build or ExtUtils::MakeMaker that's particular to your application, or is something more general available? EDIT: To answer brian's questions below: * *We can log into the machines *We do not have root access to the machines *The machines are all (ostensibly) identical builds of RHEL5 with SELinux enabled *Currently, the people installing the machines are only programmers from our group, and our source is not available to the general public. 
However, it's conceivable our source could eventually be installed on someone else's machines in our organization, to be installed by their programmers or systems people. *We install by checking out from the repository, though we'd like to have the option of using a distributed archive (see above). A: The answer suggesting RPM is definitely a good one. Using your system's package manager can definitely make your life easier. However, it might mean you also need to package up a bunch of other Perl modules. You might also take a look at Shipwright. This is a Perl-based tool for packaging up an app and all its Perl module dependencies. It's early days yet, but it looks promising. As far as installing dependencies, it wouldn't be hard to simply package up a bunch of tarballs and then have your Module::Build-based solution install them. You should take a look at pip, which makes installing a module from a tarball quite trivial. You could package this with your code base and simply call it from your own installer to handle the deps. I question whether relying on CPAN is a good idea. The CPAN shell always fetches the latest version of a distro, rather than a specific version. If you're interested in ensuring repeatable installs, it's not the right tool. A: What are your limitations for installing web apps? Can you log into the machine? Are all of the machines running the same thing? Are the people installing the web apps co-workers or random people from the general public? Are the people installing this sysadmins, programmers, web managers, or something else? Do you install by distributing an archive or checking out from source control? For most of my stuff, which involves sysadmins familiar with Perl installing in controlled environments, I just use MakeMaker. It's easy to get it to do all the things you listed if you know a little about MakeMaker. If you want to know more about that, ask another question.
;) Module::Build is just as easy, though, and the way to go if you don't already like using MakeMaker. Module::Build would be a good way to go to handle lots of different situations if the people are moderately clueful about the command line and installing software. You'll have a lot of flexibility with Module::Build, but also a bit more work. And, the cpan tool (which comes with Perl), can install from the current directory and handle dependencies for you. Just tell it to install the current directory: $ cpan . If you only have to install on a single platorm, you'll probably have an easier time making a package in the native format. You could even have Module::Build make that package for you so the developers have the flexibility of Module::Build, but the installers have the ease of the native process. Sticking with Module::Build also means that you could create different packages for different platforms from a single build tool. If the people installing the web application really have no idea about command lines, CPAN, and other things, you'll probably want to use a packager and installer that doesn't scare them or make them think about what is going on, and can accurately report problems to you automatically. As Dave points out, using a real CPAN mirror always gets you the latest version of a module, but you can also make your own "fake" CPAN mirror with exactly the distributions you want and have the normal CPAN tools install from that. For our customers, we make "CPAN on a CD" (although thumb drives are good now too). With a simple "run me" script everything gets installed in exactly the versions they need. See, for instance, my Making my own CPAN talk if you're interested in that. Again, consider the audience when you think about that. It's not something you'd hand to the general public. Good luck, :) A: I'd recommend seriously considering a package system such as RPM to do this. 
Even if you're running on Windows I'd consider RPM and Cygwin to do the installation. You could even set up a yum or apt repository to deliver the packages to remote systems. If you're looking for a general installer for customers running any number of OSes and distros, then the problem becomes much harder.

A: Take a look at PAR. Jonathan Rockway has a small section on using this with Catalyst in his book.
{ "language": "en", "url": "https://stackoverflow.com/questions/143680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the best method to gather data about the use of your application? My company releases a small software product for which I've recently been taking over the development side. It is a C# Windows Forms application. One of the things I've noticed is that much of the information about how the software is used is filtered through my superiors and I get the feeling that I'm missing important detail in some of the messages. I realise I'll have to work on the management issues with this situation, however in order to give another view on the problem I've been considering a technological solution. Perhaps something similar to the "Microsoft Customer Experience Improvement Program". I was wondering if anyone out there had any experience or advice monitoring and reporting on user behaviour in their applications? A: I would suggest you get your application to write its "usage information" somewhere and then, with the users permission, transmit it electronically every so often. Note the emphasis above. Depending on your jurisdiction, you could get into serious trouble transmitting any sort of data from someone else's computer without permission. You're more likely to get permission if: 1/ You make it clear, on install or update, that your program will collect information and transmit it only with permission. 2/ You explain clearly what the information is and that it only holds "usage information", nothing that can be traced back to the user of the software (NO serial numbers, etc). 3/ You request permission to transmit infrequently. If I had an obnoxious program that asked me daily, I'd soon stop using it altogether. A: I believe that you've already received a good answer to your question re the privacy aspect of the technological solution. I would also like to mention that you should try to use HTTP or HTTPS over normal 80/443 ports - these are the least likely to cause problems with the firewalls and proxy. Use MS IE proxy settings as they are usually set properly. 
From a totally different perspective, I would like to say that the best way to learn about the usage of your software is to check if you have any 'friendly' users in your install base and interview them. It could be some partner company, or people who are your 'pilot group' for the betas of a new release. Talk to them. Grab your manager and spend a day going to their site if possible, and just see for yourself how and when they click the buttons. Make notes. Listen to the feedback. Establish a personal relationship so that next time they talk to you directly.

A: One idea is to send anonymous statistics with your users' permission. Another idea is to provide a big "report bug/ask question" button in your app so they can tell you when something they think is wrong happens; you'd send app state along with the report. Always be clear on what you are going to be sending, and give users the option to be non-anonymous (always set anonymized data as the default); you might be surprised to get a lot of non-anonymous data by choice of the users. And be clear by using the users' language. NEVER say things like "I'm going to send a Blowfish encrypted memory dump of the current state of the application's stack and heap. Yes/no?", but things like "I'm going to send a list of your activity in the program: the buttons you clicked and the type and amount of files you opened. This will help us to create a better program for you, but you can choose what you want us to receive."

A: Disclaimer: I am a developer on this product, so I may be a bit biased on how great it is :) There is currently a product on the market that can provide you with this functionality for both .NET and Java applications, which we call Runtime Intelligence.
See: http://www.preemptive.com/runtime-intelligence-services.html for details. This product is currently shipping for both .NET and Java, and a free version offering a limited feature set will be included in Visual Studio 2010. Usage of an application can be tracked, ranging from high-level statistics on the number of times executed and on which Operating System/Framework versions, down to the frequency of usage of individual features. We provide a way for you to give your users either an opt-in or opt-out choice in the transmission of the usage data, and the data is sent either over SSL (the default) or standard HTTP. The performance impact on your application is minimal and we take great care not to impact the responsiveness of your code. This solution is great for evaluations or beta testing, as you can track the usage of your application directly rather than relying on surveys or guessing at what the users are doing. For released applications this provides the scaffolding of a customer experience improvement program, and the visibility into the accumulated data is valuable to everyone from upper management down to the developer. We have found that it takes less than an hour to set up an application for basic usage reporting, with no code changes required, since we can work directly on the assembly binaries. For pricing information or to obtain an evaluation please contact sales from our website, as I'm just a developer :) For detailed technical information or any other questions feel free to contact me.

A: Actually, I think you're trying to do simple Business Intelligence. Don't forget to set up an appropriate dashboard to track your collected data, and think carefully about the indicators you'll implement.
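To make the "write usage information locally, transmit only with permission" suggestion from the first answer concrete, here is a small hedged sketch — in Python rather than C#, and with invented event names and file layout. The key design point is that recording and uploading are separate steps, so consent gates the upload, not the logging:

```python
import json
import tempfile
from pathlib import Path

class UsageLog:
    """Append anonymous events to a local file; nothing leaves the
    machine unless the caller separately obtains consent and uploads."""
    def __init__(self, path):
        self.path = Path(path)

    def record(self, event):
        # One JSON object per line keeps the file append-only and robust
        with self.path.open('a') as f:
            f.write(json.dumps({'event': event}) + '\n')

    def pending(self):
        if not self.path.exists():
            return []
        with self.path.open() as f:
            return [json.loads(line) for line in f]

log = UsageLog(Path(tempfile.mkdtemp()) / 'usage.jsonl')
log.record('export_clicked')
print(log.pending())
```

A real implementation would also strip anything traceable to the user before writing (no serial numbers, as the first answer stresses) and would ask permission infrequently before transmitting the pending batch.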
{ "language": "en", "url": "https://stackoverflow.com/questions/143681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the performance hit of using TLS with Apache? How much of a performance hit will running everything over TLS have on my server? I would assume this is completely ignorable in this day and age? I heard once that servers today can encrypt gigabytes of data per second, is that true? And if so, is it linearly scalable so that if the top speed is 10GB/second, encrypting 1GB would take 0.1 second? I'm not in some kind of pickle with any admin over this (yet). I'm just curious and if I can mostly ignore the hit, why not just encrypt everything? A: Performance Analysis of TLS Web Servers (pdf), a paper written at Rice University, covered this topic back in 2002, and they came to this conclusion: Apache TLS without the AXL300 served between 149 hits/sec and 259 hits/sec for the CS trace, and between 147 hits/sec and 261 hits/sec for the Amazon trace. This confirms that TLS incurs a substantial cost and reduces the throughput by 70 to 89% relative to the insecure Apache. So without the AXL300 board, which offloads encryption, there was a reduction in throughput of 70-89% on a PIII-933MHz. However, they note in the next section that as CPU speeds increase, the throughput is expected to increase accordingly. So since 2002, you may find that there is no noticeable difference for your workload.
{ "language": "en", "url": "https://stackoverflow.com/questions/143692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it possible that F# will be optimized more than other .Net languages in the future? Is it possible that Microsoft will be able to make F# programs, either at VM execution time, or more likely at compile time, detect that a program was built with a functional language and automatically parallelize it better? Right now I believe there is no such effort to try and execute a program that was built as single threaded program as a multi threaded program automatically. That is to say, the developer would code a single threaded program. And the compiler would spit out a compiled program that is multi-threaded complete with mutexes and synchronization where needed. Would these optimizations be visible in task manager in the process thread count, or would it be lower level than that? A: Being that F# is derived from Ocaml and Ocaml compilers can optimize your programs far better than other compilers, it probably could be done. A: I don't believe it is possible to autovectorize code in a generally-useful way and the functional programming facet of F# is essentially irrelevant in this context. The hardest problem is not detecting when you can perform subcomputations in parallel, it is determining when that will not degrade performance, i.e. when the subtasks will take sufficiently long to compute that it is worth taking the performance hit of a parallel spawn. We have researched this in detail in the context of scientific computing and we have adopted a hybrid approach in our F# for Numerics library. Our parallel algorithms, built upon Microsoft's Task Parallel Library, require an additional parameter that is a function giving the estimated computational complexity of a subtask. This allows our implementation to avoid excessive subdivision and ensure optimal performance. Moreover, this solution is ideal for the F# programming language because the function parameter describing the complexity is typically an anonymous first-class function. Cheers, Jon Harrop. 
A: I think the question misses the point of the .NET architecture-- F#, C# and VB (etc.) all get compiled to IL, which then gets compiled to machine code via the JIT compiler. The fact that a program was written in a functional language isn't relevant-- if there are optimizations (like tail recursion, etc.) available to the JIT compiler from the IL, the compiler should take advantage of it. Naturally, this doesn't mean that writing functional code is irrelevant-- obviously, there are ways to write IL which will parallelize better-- but many of these techniques could be used in any .NET language. So, there's no need to flag the IL as coming from F# in order to examine it for potential parallelism, nor would such a thing be desirable. A: There's active research into autoparallelization and autovectorization for a variety of languages. And one could hope (since I really like F#) that they would conceive a way to determine if a "pure" side-effect-free subset was used and then parallelize that. Also, since Simon Peyton-Jones, the father of Haskell, is working at Microsoft, I have a hard time not believing there's some fantastic stuff coming. A: It's possible but unlikely. Microsoft spends most of its time supporting and implementing features requested by their biggest clients. That usually means C#, VB.Net, and C++ (not necessarily in that order). F# doesn't seem like it's high on the list of priorities. A: Microsoft is currently developing 2 avenues for parallelisation of code: PLINQ (Parallel LINQ, which owes much to functional languages) and the Task Parallel Library (TPL) which was originally part of Robotics Studio. A beta of PLINQ is available here. I would put my money on PLINQ becoming the norm for auto-parallelisation of .NET code. A: I think this is unlikely in the near future. And if it does happen, I think it would be more likely at the IL level (assembly rewriting) rather than language level (e.g. something specific to F#/compiler).
It's an interesting question, and I expect that some fine minds have been looking at this and will continue to look at this for a while, but in the near-term, I think the focus will be on making it easier for humans to direct the threading/parallelization of programs, rather than just having it all happen as if by magic. (Language features like F# async workflows, and libraries like the task-parallel library and others, are good examples of near-term progress here; they can do most of the heavy lifting for you, especially when your program is more declarative than imperative, but they still require the programmer to opt-in, do analysis for correctness/meaningfulness, and probably make slight alterations to the structure of the code to make it all work.) Anyway, that's all speculation; who can say what the future will bring? I look forward to finding out (and hopefully making some of it happen). :)
{ "language": "en", "url": "https://stackoverflow.com/questions/143708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Comparing two bitmasks in SQL to see if any of the bits match Is there a way of comparing two bitmasks in Transact-SQL to see if any of the bits match? I've got a User table with a bitmask for all the roles the user belongs to, and I'd like to select all the users that have any of the roles in the supplied bitmask. So using the data below, a roles bitmask of 6 (designer+programmer) should select Dave, Charlie and Susan, but not Nick. User Table ---------- ID Username Roles 1 Dave 6 2 Charlie 2 3 Susan 4 4 Nick 1 Roles Table ----------- ID Role 1 Admin 2 Programmer 4 Designer Any ideas? Thanks. A: The answer to your question is to use the Bitwise & like this: SELECT * FROM UserTable WHERE Roles & 6 != 0 The 6 can be exchanged for any combination of your bitfield where you want to check that any user has one or more of those bits. When trying to validate this I usually find it helpful to write this out longhand in binary. Your user table looks like this: 1 2 4 ------------------ Dave 0 1 1 Charlie 0 1 0 Susan 0 0 1 Nick 1 0 0 Your test (6) is this 1 2 4 ------------------ Test 0 1 1 If we go through each person doing the bitwise AND against the test we get these: 1 2 4 ------------------ Dave 0 1 1 Test 0 1 1 Result 0 1 1 (6) Charlie 0 1 0 Test 0 1 1 Result 0 1 0 (2) Susan 0 0 1 Test 0 1 1 Result 0 0 1 (4) Nick 1 0 0 Test 0 1 1 Result 0 0 0 (0) The above should demonstrate that any record where the result is not zero has one or more of the requested flags. Edit: Here's the test case, should you want to check this: with test (id, username, roles) AS ( SELECT 1,'Dave',6 UNION SELECT 2,'Charlie',2 UNION SELECT 3,'Susan',4 UNION SELECT 4,'Nick',1 ) select * from test where (roles & 6) != 0 -- returns Dave, Charlie & Susan or select * from test where (roles & 2) != 0 -- returns Dave & Charlie or select * from test where (roles & 7) != 0 -- returns Dave, Charlie, Susan & Nick A: Use the Transact-SQL bitwise AND operator "&" and compare the result to zero.
Even better, instead of coding the roles as bits of an integer column, use boolean columns, one for each role. Then your query would simply be designer AND programmer friendly. If you expect the roles to change a lot over the lifetime of your application, then use a many-to-many table to map the association between users and their roles. both alternatives are more portable than relying on the existence of the bitwise-AND operator. A: SELECT * FROM UserTable WHERE Roles & 6 > 0 A: SELECT * FROM table WHERE mask1 & mask2 > 0 A: example: DECLARE @Mask int SET @Mask = 6 DECLARE @Users TABLE ( ID int, Username varchar(50), Roles int ) INSERT INTO @Users (ID, Username, Roles) SELECT 1, 'Dave', 6 UNION SELECT 2, 'Charlie', 2 UNION SELECT 3, 'Susan', 4 UNION SELECT 4, 'Nick', 1 SELECT * FROM @Users WHERE Roles & @Mask > 0 A: To find all programmers use: SELECT * FROM UserTable WHERE Roles & 2 = 2
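The mask check is easy to sanity-test outside the database; here's a quick Python sketch of the same logic, using names that mirror the sample data above:

```python
# Role flags, matching the Roles table above.
ADMIN, PROGRAMMER, DESIGNER = 1, 2, 4

users = [("Dave", 6), ("Charlie", 2), ("Susan", 4), ("Nick", 1)]

mask = DESIGNER | PROGRAMMER  # 6: match anyone holding either role

# Equivalent of: SELECT Username FROM UserTable WHERE Roles & 6 != 0
matches = [name for name, roles in users if roles & mask]
print(matches)  # ['Dave', 'Charlie', 'Susan']
```

A nonzero result of `roles & mask` means at least one bit is shared, which is exactly what the `!= 0` test in the SQL expresses.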
{ "language": "en", "url": "https://stackoverflow.com/questions/143712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: Is there any difference between "string" and 'string' in Python? In PHP, a string enclosed in "double quotes" will be parsed for variables to replace whereas a string enclosed in 'single quotes' will not. In Python, does this also apply? A: In some other languages, meta characters are not interpreted if you use single quotes. Take this example in Ruby: irb(main):001:0> puts "string1\nstring2" string1 string2 => nil irb(main):002:0> puts 'string1\nstring2' string1\nstring2 => nil In Python, if you want the string to be taken literally, you can use raw strings (a string preceded by the 'r' character): >>> print 'string1\nstring2' string1 string2 >>> print r'string1\nstring2' string1\nstring2 A: Single and double quoted strings in Python are identical. The only difference is that single-quoted strings can contain unescaped double quote characters, and vice versa. For example: 'a "quoted" word' "another 'quoted' word" Then again, there are triple-quoted strings, which allow both quote chars and newlines to be unescaped. You can substitute variables in a string using named specifiers and the locals() builtin: name = 'John' lastname = 'Smith' print 'My name is %(name)s %(lastname)s' % locals() # prints 'My name is John Smith' A: The interactive Python interpreter prefers single quotes: >>> "text" 'text' >>> 'text' 'text' This could be confusing to beginners, so I'd stick with single quotes (unless you have different coding standards). A: The difference between " and ' string quoting is just in style - except that the one removes the need for escaping the other inside the string content. Style PEP8 recommends a consistent rule, PEP257 suggests that docstrings use triple double quotes. In Python, single-quoted strings and double-quoted strings are the same. This PEP does not make a recommendation for this. Pick a rule and stick to it. When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string. 
It improves readability. For triple-quoted strings, always use double quote characters to be consistent with the docstring convention in PEP 257. A widely used practice, however, is to prefer double-quotes for natural language strings (including interpolation) - thus anything which is potentially a candidate for I18N - and single quotes for technical strings: symbols, chars, paths, command-line options, technical REGEXes, ... (For example, when preparing code for I18N, I run a semi-automatic REGEX converting double quoted strings quickly for using e.g. gettext) A: Python is one of the few (?) languages where ' and " have identical functionality. The choice for me usually depends on what is inside. If I'm going to quote a string that has single quotes within it I'll use double quotes and vice versa, to cut down on having to escape characters in the string. Examples: "this doesn't require escaping the single quote" 'she said "quoting is easy in python"' This is documented on the "String Literals" page of the Python documentation: * *http://docs.python.org/2/reference/lexical_analysis.html#string-literals (2.x) *http://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals (3.x) A: No: 2.4.1. String and Bytes literals ...In plain English: Both types of literals can be enclosed in matching single quotes (') or double quotes ("). They can also be enclosed in matching groups of three single or double quotes (these are generally referred to as triple-quoted strings). The backslash (\) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character... A: There are 3 ways you can quote strings in Python: "string" 'string' """ string string """ They all produce the same result. A: There is no difference in Python, and you can really use it to your advantage when generating XML.
Correct XML syntax requires double-quotes around attribute values, and in many languages, such as Java, this forces you to escape them when creating a string like this: String HtmlInJava = "<body bgcolor=\"Pink\">" But in Python, you simply use the other quote and make sure to use the matching end quote like this: html_in_python = '<body bgcolor="Pink">' Pretty nice huh? You can also use three double quotes to start and end multi-line strings, with the EOL's included like this: multiline_python_string = """ This is a multi-line Python string which contains line breaks in the resulting string variable, so this string has a '\n' after the word 'resulting' and the first word 'word'.""" A: Yes. Those claiming single and double quotes are identical in Python are simply wrong. Otherwise in the following code, the double-quoted string would not have taken an extra 4.5% longer for Python to process: import time time_single = 0 time_double = 0 for i in range(10000000): # String Using Single Quotes time1 = time.time() str_single1 = 'Somewhere over the rainbow dreams come true' str_single2 = str_single1 time2 = time.time() time_elapsed = time2 - time1 time_single += time_elapsed # String Using Double Quotes time3 = time.time() str_double1 = "Somewhere over the rainbow dreams come true" str_double2 = str_double1 time4 = time.time() time_elapsed = time4 - time3 time_double += time_elapsed print 'Time using single quotes: ' + str(time_single) print 'Time using double quotes: ' + str(time_double) Output: >python_quotes_test.py Time using single quotes: 13.9079978466 Time using double quotes: 14.5360121727 So if you want fast clean respectable code where you seem to know your stuff, use single quotes for strings whenever practical. You will also expend less energy by skipping the shift key.
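Consistent with the answers above that say the two quoting styles are identical, you can check for yourself that CPython produces the same code object either way - quoting is resolved at compile time, so any timing difference between the two forms is run-to-run noise:

```python
# Both literals compile to the same constants and identical bytecode.
single = compile("s = 'hello'", "<test>", "exec")
double = compile('s = "hello"', "<test>", "exec")

print(single.co_consts == double.co_consts)  # True
print(single.co_code == double.co_code)      # True
print('hello' == "hello")                    # True
```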
{ "language": "en", "url": "https://stackoverflow.com/questions/143714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: How do I access a MessageBox with White? I have a simple message box in a WPF application that is launched as below: private void Button_Click(object sender, RoutedEventArgs e) { MessageBox.Show("Howdy", "Howdy"); } I can get White to click my button and launch the message box. UISpy shows it as a child of my window, but I couldn't work out the method to access it. How do I get access to my MessageBox to verify its contents? A: Found it! The window class has a MessageBox method that does the trick: var app = Application.Launch(@"c:\ApplicationPath.exe"); var window = app.GetWindow("Window1"); var helloButton = window.Get<Button>("Hello"); Assert.IsNotNull(helloButton); helloButton.Click(); var messageBox = window.MessageBox("Howdy"); Assert.IsNotNull(messageBox); A: Please try this Window messageBox = window.MessageBox(""); var label = messageBox.Get<Label>(SearchCriteria.Indexed(0)); Assert.AreEqual("Hello", label.Text); A: Contained in the White source code are some UI test projects (to test White itself). One of the tests includes MessageBox tests, which shows a way to obtain the displayed message. [TestFixture, WinFormCategory, WPFCategory] public class MessageBoxTest : ControlsActionTest { [Test] public void CloseMessageBoxTest() { window.Get<Button>("buttonLaunchesMessageBox").Click(); Window messageBox = window.MessageBox("Close Me"); var label = window.Get<Label>("65535"); Assert.AreEqual("Close Me", label.Text); messageBox.Close(); } [Test] public void ClickButtonOnMessageBox() { window.Get<Button>("buttonLaunchesMessageBox").Click(); Window messageBox = window.MessageBox("Close Me"); messageBox.Get<Button>(SearchCriteria.ByText("OK")).Click(); } } Evidently, the label used to display the text message is owned by the window displaying the messagebox, and its primary identification is the max word value (65535). A: window.MessageBox() is a good solution! But this method would get stuck for a long time if the messagebox doesn't appear.
Sometimes I want to check for the "non-appearance" of a messagebox (Warning, Error, etc.), so I wrote a method that sets a timeout using threading. [TestMethod] public void TestMethod() { // arrange var app = Application.Launch(@"c:\ApplicationPath.exe"); var targetWindow = app.GetWindow("Window1"); Button button = targetWindow.Get<Button>("Button"); // act button.Click(); var actual = GetMessageBox(targetWindow, "Application Error", 1000L); // assert Assert.IsNotNull(actual); // I want to see the messagebox appear. // Assert.IsNull(actual); // I don't want to see the messagebox appear. } private Window GetMessageBox(Window targetWindow, string title, long timeOutInMilliseconds) { Window window = null; Thread t = new Thread(delegate() { window = targetWindow.MessageBox(title); }); t.Start(); long start = DateTimeUtil.CurrentTimeMillis(); while (window == null && DateTimeUtil.CurrentTimeMillis() - start <= timeOutInMilliseconds) { } if (window == null) t.Abort(); return window; } public static class DateTimeUtil { private static DateTime Jan1st1970 = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc); public static long CurrentTimeMillis() { return (long)((DateTime.UtcNow - Jan1st1970).TotalMilliseconds); } }
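The timeout idea above isn't specific to C#; here is a minimal Python sketch of the same pattern, where the hypothetical blocking_call stands in for the targetWindow.MessageBox(title) call:

```python
# Run a blocking call on a worker thread and give up after a timeout.
import threading

def get_with_timeout(blocking_call, timeout_seconds):
    """Return the call's result, or None if it didn't finish in time."""
    result = []
    worker = threading.Thread(target=lambda: result.append(blocking_call()),
                              daemon=True)
    worker.start()
    worker.join(timeout_seconds)  # returns when done OR when time runs out
    return result[0] if result else None
```

Thread.join(timeout) replaces the busy-wait loop, and the daemon flag takes the place of Thread.Abort: Python threads can't be killed, so an abandoned worker is simply left to die with the process.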
{ "language": "en", "url": "https://stackoverflow.com/questions/143736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Font graphics routines How do you do your own fonts? I don't want a heavyweight algorithm (freetype, truetype, adobe, etc) and would be fine with pre-rendered bitmap fonts. I do want anti-aliasing, and would like proportional fonts if possible. I've heard I can use Gimp to do the rendering (with some post processing?) I'm developing for an embedded device with an LCD. It's got a 32 bit processor, but I don't want to run Linux (overkill - too much code/data space for too little functionality that I would use) C. C++ if necessary, but C is preferred. Algorithms and ideas/concepts are fine in any language... -Adam A: In my old demo-scene days I often drew all characters in the font in one big bitmap image. In the code, I stored the (X,Y) coordinates of each character in the font, as well as the width of each character. The height was usually constant throughout the font. If space isn't an issue, you can put all characters in a grid, that is - have a constant distance between the top-left corner of each character. Rendering the text then becomes a matter of copying one letter at a time to the destination position. At that time, I usually reserved one color as being the "transparent" color, but you could definitely use an alpha-channel for this today. A simpler approach, that can be used for small b/w fonts, is to define the characters directly in code: LetterA db 01111100b db 11000110b db 11000110b db 11111110b db 11000110b db 11000110b The XPM file format is actually a file format with C syntax that can be used as a hybrid solution for storing the characters. A: Pre-rendered bitmap fonts are probably the way to go. Render your font using whatever, arrange the characters in a grid, and save the image in a simple uncompressed format like PPM, BMP or TGA. If you want antialiasing, make sure to use a format that supports transparency (BMP and TGA do; PPM does not). 
In order to support proportional widths, you'll need to extract the widths of each character from the grid. There's no simple way to do this; it depends on how you generate the grid. You could probably write some short little program to analyze each character and find the minimal bounding box. Once you have the width data, you put it in an auxiliary file which contains the coordinates and sizes of each character. Finally, to render a string, you look up each character and bitblit its rectangle from the font bitmap onto your frame buffer, advancing the raster position by the width of the character. A: We have successfully used the SRGP package for fonts. We did use fixed-pitch fonts, so I'm not sure if it can handle proportional fonts. A: We're using bitmap fonts generated by AngelCode's bitmap font generator: http://www.angelcode.com/products/bmfont/ This is very usable as it has XML output which will be easy to convert to any data format you need. AngelCode's bmfont also adds kerning and better packing compared to the old alternative, MudFont.
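The atlas-plus-widths scheme described above can be sketched in a few lines. Nothing here comes from a real library - the names are illustrative - it just shows the per-character blit and the proportional advance:

```python
# Proportional bitmap-font rendering from a glyph atlas (illustrative only).
# atlas: flat list of pixels, atlas_w pixels per row; glyph height is fixed.
# glyphs: char -> (x, y, width) rectangle inside the atlas.

GLYPH_H = 4  # tiny for the example; a real font would be taller

def render_text(atlas, atlas_w, glyphs, text):
    """Blit each character's rectangle side by side into a fresh buffer."""
    out_w = sum(glyphs[c][2] for c in text)
    out = [[0] * out_w for _ in range(GLYPH_H)]
    pen_x = 0
    for c in text:
        gx, gy, gw = glyphs[c]
        for row in range(GLYPH_H):
            for col in range(gw):
                out[row][pen_x + col] = atlas[(gy + row) * atlas_w + gx + col]
        pen_x += gw  # advance by this character's own width: proportional
    return out
```

A pixel value of 0 can double as the transparent colour, exactly as in the demo-scene trick above; with an alpha channel you would blend instead of copy.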
{ "language": "en", "url": "https://stackoverflow.com/questions/143739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Refactoring "include file hell" One thing that's really been making life difficult in getting up to speed on the codebase on an ASP classic project is that the include file situation is kind of a mess. I sometimes find the function I was looking for being included in an include file that is totally unrelated. Does anyone have any advice on how to refactor this such that one can more easily tell where a function is if they need to find it? EDIT: One thing I forgot to ask: does VBScript have any kind of mechanism for preventing a file from being included twice? Sorta like #ifndef's from C? A: @MusiGenisis' bullet-point list is good advice to follow but I'd disagree with - "I wouldn't do any of this. To paraphrase Steve Yegge (I think), "there's nothing wrong with a classic ASP application that can't be fixed with a total rewrite". I'm very serious about this - I don't think there's a bigger waste of a programmer's time in this world than maintaining an ASP app, and the problem just gets worse as ASP gets more and more out of date." All very well, but if it's a sizable legacy app, doing complete re-writes is often not possible due to a lack of developer time/resource. We have a fairly large classic ASP app which has grown arms and legs over the years; it's not pretty, but it does serve the business needs. We have no time to spend the next six months doing a complete re-write; it would be nice, but just not possible. Our approach is - * *Where there's new functionality required, it's implemented in ASP.NET. This happens 95% of the time. The 5% edge cases are usually ones where there are a large number of points where the new app code touches the old app, requiring us to do a lot of classic ASP re-work and potentially making the app more fragile. *Where there's a change in functionality we assess whether we can refactor to ASP.NET with minimal impact. If this isn't possible then we'll implement the change in classic ASP and tidy up existing code as we go along e.g.
simplifying include file nesting, replacing JavaScript with more cross-browser-friendly code, that kinda thing. In answer to your question about #ifndef's, there isn't an equivalent I'm afraid. A: * *Use one file for global headings and includes (let's name it t-head.asp). This file is included in all ASP files. *Use one file to make the site's global visual header (logos, menus, etc) and include it right behind the opening <body> tag. Let's call it t-begin.asp *Use one file to make the site's global visual footer (copyright, Google Analytics, etc.), closing all divs or tables opened in t-begin.asp. Let's call this file t-end.asp *Use one folder to put the business logic files in, called BUS. The files in this folder cannot have includes. Every function inside the file must be preceded by the name of the logic unit (i.e. all functions in products.asp must begin with product_*) *Use one folder to put some reused UI code in, called UI. The files in this folder cannot have includes. Example: <%@ Language=VBScript %> <% Option Explicit %> <% Response.Buffer = true%> <html> <head> <!--#include file="../general/t-head.asp"--> <!--#include file="../bus/product.asp"--> <title>Products page</title> </head> <body> <!--#include file="../general/t-begin.asp"--> <% 'all your code %> <!--#include file="../general/t-end.asp"--> </body> </html> A: Wow. It constantly surprises me how many people have a hate for ASP. In decent hands it's a perfectly capable language for designing web applications. However, I will concede that the way include files are managed in ASP can be a bit of a brainache -- because (depending on how you use them) they have to be loaded and parsed even if you're not using half the functions contained within. I tend to have one include file (initialise.asp or some such) that itself includes links to several function libraries (lib_http.asp, lib_mssql.asp or similar) and all library functions are self-contained so there is no worry about crossing variables.
Any global vars are declared and set in the master file. This means I can use a function anywhere, any time and not worry about where it was defined, it's just there for use. And IDEs such as Visual Studio and Primalscript have the ability to "jump to definition" when you find a call to a function that you don't recognise. Then, any script-specific includes are included in the script after the call to this master include file. I concede that this is a memory-hungry approach as all the functions in all the libraries are compiled for every script call, so the method needs refining for each site you develop -- decide what to call via the master include and what is more page-specific. It would be nice to be able to only load what you need -- but that's the DLL approach and is not available for the majority of real-world developments, and also you'd have to weigh up the processor cost of compiling small scripts vs loading components. A concise directory structure is requisite and easily developed, but it can be a chore to wade through all the code in an existing site and change any links or mappath calls. Also, be aware that some IIS administrators disallow the '..\' method of traversing directories via VBScript, so then all file references have to be absolute paths. A: There are a few basic things you can do when taking over a classic ASP application, but you will probably end up regretting doing them. * *Eliminate duplicate include files. Every classic ASP app I've ever seen has had 5 "login.asp" pages and 7 "datepicker.js" files and so forth. Hunt down and remove all the duplicates, and then change references in the rest of the app as necessary. Be careful to do a diff check on each file as you remove it - often the duplicated files have slight differences because the original author copied it and then changed just the copy. This is a great thing for Evolution, but not so much for code. *Create a rational folder structure and move all the files into it. 
This one is obvious, but it's the one you will most regret doing. Whether the links in the application are relative or absolute, you'll have to change most of them. *Combine all of your include files into one big file. You can then re-order all the functions logically and break them up into separate, sensibly-named files. You'll then have to go through the app page by page and figure out what the include statements on each page need to be (or stick with the one file, and just include it on every page - I can't remember whether or not that's a good idea in ASP). I can't comprehend the pain level involved here, and that's assuming that the existing include files don't make heavy use of same-named globals. I wouldn't do any of this. To paraphrase Steve Yegge (I think), "there's nothing wrong with a classic ASP application that can't be fixed with a total rewrite". I'm very serious about this - I don't think there's a bigger waste of a programmer's time in this world than maintaining an ASP app, and the problem just gets worse as ASP gets more and more out of date. A: I think you should consider moving your code from ASP VBScript to Visual Basic COM DLLs. That'll spare you from having too many includes. A: I don't know of a way to prevent a double inclusion, other than getting an error message that is. Are you seeing includes placed throughout the page, which is making them difficult to spot? Just as an aside, are you working with a copy of the code and the database on a development server? From my experience, the first thing to do is separate yourself from the live site ASAP. While a hassle initially, it'll give you the freedom to make changes without messing up the live site. It's easy to make that one tiny change in an include and BAM! the whole site goes down.
I've worked through a few projects like you've described and used the following strategies: Complete rewrite - perfect when there's time/money, but usually I get the call when something has gone wrong and results are needed ASAP. Smaller projects - I open up everything in the IDE and just start searching all the project files for the functions/sub, in order to build a knowledge of the include logic. Pretty much each time, everything is spread out everywhere, so I start rebuilding the includes organized by business logic. I've also run across inline code (raw code, not subs or functions) thrown into an include, so I'll usually just pull the code back into the page for refactoring later. Larger projects - I'll use some code I have laying around to parse the includes for lines with sub/function headers and dump those to a text file to build up a list of what routines are where and refer to that. This comes in handy when you've got a ton of includes on each page and can't get your head around the codebase.
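The "dump the routine names to a text file" helper mentioned above is only a few lines in any scripting language. Here is a hypothetical Python version - the regex and all names are mine, not from the original poster:

```python
# Scan every .asp file under a root folder for Sub/Function declarations
# and build an index of routine name -> (file, line number).
import os
import re

DECL = re.compile(r'^\s*(?:Public\s+|Private\s+)?(Sub|Function)\s+(\w+)',
                  re.IGNORECASE)

def index_routines(root):
    index = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".asp"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    m = DECL.match(line)
                    if m:
                        index[m.group(2)] = (path, lineno)
    return index
```

Dumping `index_routines(".")` to a text file gives exactly the "what routine lives where" list described above, which makes tracing a deeply nested include hierarchy much less painful.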
{ "language": "en", "url": "https://stackoverflow.com/questions/143745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What do you think about throwing an exception for "not found" in C++? I know most people think of this as a bad practice, but when you are trying to make your class's public interface only work with references, keeping pointers inside and only when necessary, I think there is no way to return something telling that the value you are looking for doesn't exist in the container. class list { public: value &get(type key); }; Let's say that you don't want to have dangerous pointers being seen in the public interface of the class; how do you return a "not found" in this case - by throwing an exception? What is your approach to that? Do you return an empty value and check for the empty state of it? I actually use the throw approach but I introduce a checking method: class list { public: bool exists(type key); value &get(type key); }; So when I forget to check that the value exists first I get an exception, that is really an exception. How would you do it? A: The correct answer (according to Alexandrescu) is: Optional and Enforce First of all, do use the Accessor, but in a safer way without reinventing the wheel: boost::optional<X> get_X_if_possible(); Then create an enforce helper: template <class T, class E> T& enforce(boost::optional<T>& opt, E e = std::runtime_error("enforce failed")) { if(!opt) { throw e; } return *opt; } // and an overload for T const & This way, depending on what the absence of the value might mean, you either check explicitly: if(boost::optional<X> maybe_x = get_X_if_possible()) { X& x = *maybe_x; // use x } else { oops("Hey, we got no x again!"); } or implicitly: X& x = enforce(get_X_if_possible()); // use x You use the first way when you’re concerned about efficiency, or when you want to handle the failure right where it occurs. The second way is for all other cases.
C++ has a nontrivial performance overhead for such exceptions, even if no exception is thrown, and it additionally makes reasoning about the code much harder (cf. exception safety). Best practice in C++ is one of the two following ways. Both get used in the STL: * *As Martin pointed out, return an iterator. Actually, your iterator can well be a typedef for a simple pointer; there's nothing wrong with that. In fact, since this is consistent with the STL, you could even argue that this way is superior to returning a reference. *Return a std::pair<bool, yourvalue>. This makes it impossible to modify the value, though, since a copy constructor of the pair is called, which doesn't work with reference members. /EDIT: This answer has spawned quite some controversy, visible from the comments and not so visible from the many downvotes it got. I've found this rather surprising. This answer was never meant as the ultimate point of reference. The “correct” answer had already been given by Martin: exceptions reflect the behaviour in this case rather poorly. It's semantically more meaningful to use some other signalling mechanism than exceptions. Fine. I completely endorse this view. No need to mention it once again. Instead, I wanted to give an additional facet to the answers. While minor speed boosts should never be the first rationale for any decision-making, they can provide further arguments, and in some (few) cases they may even be crucial. Actually, I've mentioned two facets: performance and exception safety. I believe the latter to be rather uncontroversial. While it's extremely hard to give strong exception guarantees (the strongest, of course, being “nothrow”), I believe it's essential: any code that is guaranteed to not throw exceptions makes the whole program easier to reason about. Many C++ experts emphasize this (e.g. Scott Meyers in item 29 of “Effective C++”). About speed: Martin York has pointed out that this no longer applies in modern compilers. 
I respectfully disagree. The C++ language makes it necessary for the environment to keep track, at runtime, of code paths that may be unwound in the case of an exception. Now, this overhead isn't really all that big (and it's quite easy to verify this). “nontrivial” in my above text may have been too strong. However, I find it important to draw the distinction between languages like C++ and many modern, “managed” languages like C#. The latter has no additional overhead as long as no exception is thrown, because the information necessary to unwind the stack is kept anyway. By and large, I stand by my choice of words. A: The problem with exists() is that you'll end up searching twice for things that do exist (first check if it's in there, then find it again). This is inefficient, particularly if (as its name of "list" suggests) your container is one where searching is O(n). Sure, you could do some internal caching to avoid the double search, but then your implementation gets messier, your class becomes less general (since you've optimised for a particular case), and it probably won't be exception-safe or thread-safe. A: STL Iterators? The "iterator" idea proposed before me is interesting, but the real point of iterators is navigation through a container, not serving as a simple accessor. If your accessor is one among many, then iterators are the way to go, because you will be able to use them to move around the container. But if your accessor is a simple getter, able to return either the value or the fact that there is no value, then your iterator is perhaps only a glorified pointer... Which leads us to... Smart pointers? The point of smart pointers is to simplify pointer ownership. With a shared pointer, you'll get a resource (memory) which will be shared, at the cost of an overhead (shared pointers need to allocate an integer as a reference counter...). 
You have to choose: either your Value is already inside a shared pointer, and then you can return this shared pointer (or a weak pointer), or your Value is inside a raw pointer, and then you can return the raw pointer. You don't want to return a shared pointer if your resource is not already inside a shared pointer: a world of funny things will happen when your shared pointer goes out of scope and deletes your Value without telling you... :-p Pointers? If your interface is clear about its ownership of its resources, and about the fact that the returned value can be NULL, then you could return a simple, raw pointer. If the user of your code is dumb enough to ignore the interface contract of your object, or to play arithmetic or whatever with your pointer, then he/she will be dumb enough to break any other way you'll choose to return the value, so don't bother with the mentally challenged... Undefined Value Unless your Value type really already has some kind of "undefined" value, and the user knows that and will accept to handle it, this is a possible solution, similar to the pointer or iterator solution. But do not add an "undefined" value to your Value class because of the problem you asked about: you'll end up raising the "references vs. pointers" war to another level of insanity. Code users want the objects you give them to either be OK, or to not exist. Having to test on every other line of code whether this object is still valid is a pain, and will needlessly complicate the user's code, by your fault. Exceptions Exceptions are usually not as costly as some people would like them to be. But for a simple accessor, the cost could be non-trivial, if your accessor is used often. For example, the STL std::vector has two accessors to its values through an index: T & std::vector::operator[]( /* index */ ) and: T & std::vector::at( /* index */ ) The difference is that [] is non-throwing. 
So, if you access outside the range of the vector, you're on your own, probably risking memory corruption and a crash sooner or later. So you should really be sure you verified the code using it. On the other hand, at is throwing. This means that if you access outside the range of the vector, then you'll get a clean exception. This method is better if you want to delegate the processing of an error to other code. I personally use [] when I'm accessing the values inside a loop, or something similar. I use at when I feel an exception is the right way to tell the current code (or the calling code) that something went wrong. So what? In your case, you must choose: if you really need lightning-fast access, then the throwing accessor could be a problem. But this means you already used a profiler on your code to determine this is a bottleneck, didn't you? ;-) If you know that not having a value can happen often, and/or you want your client to propagate a possible null/invalid/whatever semantic pointer to the value accessed, then return a pointer (if your value is inside a simple pointer) or a weak/shared pointer (if your value is owned by a shared pointer). But if you believe the client won't propagate this "null" value, or that they should not propagate a NULL pointer (or smart pointer) in their code, then use the reference protected by the exception. Add a "hasValue" method returning a boolean, and throw should the user try to get the value even if there is none. Last but not least, consider the code that will be used by the user of your object: // If you want your user to have this kind of code, then choose either // pointer or smart pointer solution void doSomething(MyClass & p_oMyClass) { MyValue * pValue = p_oMyClass.getValue() ; if(pValue != NULL) { // Etc. } } MyValue * doSomethingElseAndReturnValue(MyClass & p_oMyClass) { MyValue * pValue = p_oMyClass.getValue() ; if(pValue != NULL) { // Etc. 
} return pValue ; } // ========================================================== // If you want your user to have this kind of code, then choose the // throwing reference solution void doSomething(MyClass & p_oMyClass) { if(p_oMyClass.hasValue()) { MyValue & oValue = p_oMyClass.getValue() ; } } So, if your main problem is choosing between the two user codes above, your problem is not about performance, but "code ergonomics". Thus, the exception solution should not be put aside because of potential performance issues. :-) A: Accessor? The "iterator" idea proposed before me is interesting, but the real point of iterators is navigation through a container, not serving as a simple accessor. I agree with paercebal: an iterator is to iterate. I don't like the way the STL does it. But the idea of an accessor seems more appealing. So what do we need? A container-like class that feels like a boolean for testing but behaves like the original return type. That would be feasible with cast operators. template <typename T> class Accessor { public: Accessor(): _value(NULL) {} Accessor(T &value): _value(&value) {} operator T &() const { if (!_value) throw std::runtime_error("that is a problem and you made a mistake somewhere."); else return *_value; } operator bool () const { return _value != NULL; } private: T *_value; }; Now, any foreseeable problem? An example usage: Accessor<type> value = list.get(key); if (value) { type &v = value; v.doSomething(); } A: The STL deals with this situation by using iterators. For example, the std::map class has a similar function: iterator find( const key_type& key ); If the key isn't found, it returns end(). You may want to use this iterator approach, or to use some sort of wrapper for your return value. A: How about returning a shared_ptr as the result? This can be null if the item wasn't found. It works like a pointer, but it will take care of releasing the object for you. 
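To make the shared_ptr suggestion concrete, here is a minimal sketch; the container and member names are invented for illustration and are not from the question. "Not found" is signalled by an empty pointer rather than an exception:

```cpp
#include <map>
#include <memory>
#include <string>

// Illustrative container (names invented for this sketch): get() returns
// a shared_ptr owning the stored value, or an empty pointer when the key
// is absent -- no exception, no dangling reference.
class List {
public:
    void put(const std::string& key, const std::string& value) {
        items_[key] = std::make_shared<std::string>(value);
    }
    std::shared_ptr<std::string> get(const std::string& key) const {
        auto it = items_.find(key);
        return it == items_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, std::shared_ptr<std::string>> items_;
};
```

The caller tests the result like a pointer (`if (auto v = l.get("a")) ...`), and the reference count keeps the value alive for as long as the caller holds it.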
A: (I realize this is not always the right answer, and my tone is a bit strong, but you should consider this question before deciding on other, more complex alternatives): So, what's wrong with returning a pointer? I've seen this one many times in SQL, where people will do their utmost to never deal with NULL columns, like they have some contagious disease or something. Instead, they cleverly come up with a "blank" or "not-there" artificial value like -1, 9999 or even something like '@X-EMPTY-X@'. My answer: the language already has a construct for "not there"; go ahead, don't be afraid to use it. A: What I prefer doing in situations like this is having a throwing "get", and for those circumstances where performance matters or failure is common, having a "tryGet" function along the lines of "bool tryGet(type key, value **pp)", whose contract is that if true is returned then *pp is a valid pointer to some object; otherwise *pp is null. A: @aradtke, you said: I agree with paercebal, an iterator is to iterate. I don't like the way the STL does it. But the idea of an accessor seems more appealing. So what do we need? A container-like class that feels like a boolean for testing but behaves like the original return type. That would be feasible with cast operators. [..] Now, any foreseeable problem? First, YOU DO NOT WANT OPERATOR bool. See the Safe Bool idiom for more info. But about your question... Here's the problem: users now need to cast explicitly in some cases. Pointer-like proxies (such as iterators, ref-counted pointers, and raw pointers) have a concise 'get' syntax. Providing a conversion operator is not very useful if callers have to invoke it with extra code. 
Starting with your reference-like example, the most concise way to write it: // 'reference' style, check before use if (Accessor<type> value = list.get(key)) { type &v = value; v.doSomething(); } // or if (Accessor<type> value = list.get(key)) { static_cast<type&>(value).doSomething(); } This is okay, don't get me wrong, but it's more verbose than it has to be. Now consider if we know, for some reason, that list.get will succeed. Then: // 'reference' style, skip check type &v = list.get(key); v.doSomething(); // or static_cast<type&>(list.get(key)).doSomething(); Now let's go back to iterator/pointer behavior: // 'pointer' style, check before use if (Accessor<type> value = list.get(key)) { value->doSomething(); } // 'pointer' style, skip check list.get(key)->doSomething(); Both are pretty good, but pointer/iterator syntax is just a bit shorter. You could give the 'reference' style a member function 'get()'... but that's already what operator*() and operator->() are for. The 'pointer' style Accessor now has operator 'unspecified bool', operator*, and operator->. And guess what... a raw pointer meets these requirements, so for prototyping, list.get() returns T* instead of Accessor. Then, when the design of list is stable, you can come back and write the Accessor, a pointer-like proxy type. A: Interesting question. It's a problem in C++ to exclusively use references, I guess; in Java the references are more flexible and can be null. I can't remember if it's legal C++ to force a null reference: MyType *pObj = nullptr; return *pObj; But I consider this dangerous. Again, in Java I'd throw an exception, as this is common there, but I rarely see exceptions used so freely in C++. If I were making a public API for a reusable C++ component and had to return a reference, I guess I'd go the exception route. My real preference is to have the API return a pointer; I consider pointers an integral part of C++.
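As a footnote to the thread, the std::vector::at versus operator[] contrast raised in one of the answers is easy to demonstrate. A minimal sketch (the helper function is invented for illustration):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Returns true if bounds-checked access via at() throws std::out_of_range
// for index i. operator[] performs no check: the same out-of-range index
// would be undefined behaviour instead of a catchable error.
bool at_throws(const std::vector<int>& v, std::size_t i) {
    try {
        (void)v.at(i);
        return false;
    } catch (const std::out_of_range&) {
        return true;
    }
}
```

This is the same trade-off the answer describes: [] is fast and unchecked, at delegates error handling to whoever catches the exception.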
{ "language": "en", "url": "https://stackoverflow.com/questions/143746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is it possible to trigger a link's (or any element's) click event through JavaScript? I'm writing some JavaScript code that needs to fire the click event for a link. In Internet Explorer I can do this var button = document.getElementById('myButton'); button.click(); But this doesn't work in Firefox, and I assume in any other browser. In Firefox, I've done this var button = document.getElementById('myButton'); window.location = button.href; I feel like this is not the best way to do this. Is there a better way to trigger a click event? Preferably something that works regardless of the type of element or the browser. A: http://jehiah.cz/archive/firing-javascript-events-properly function fireEvent(element,event) { if (document.createEvent) { // dispatch for firefox + others var evt = document.createEvent("HTMLEvents"); evt.initEvent(event, true, true ); // event type,bubbling,cancelable return !element.dispatchEvent(evt); } else { // dispatch for IE var evt = document.createEventObject(); return element.fireEvent('on'+event,evt) } } A: I wouldn't recommend it, but you can call the onclick attribute of an HTML element as a method. <a id="my-link" href="#" onclick="alert('Hello world');">My link</a> document.getElementById('my-link').onclick(); A: It's not generally possible, AFAIK; Mozilla has the click() method, but only for input elements, not links. Why don't you just create a function that the button calls in its onClick handler and, whenever you want to 'click' the button, call that function instead? A: Mozilla has a stricter policy for allowed JS actions/events; I had similar problems with the click() event too. It's disabled on some elements to prevent XSS. What is wrong with redirecting the browser? This should work everywhere. 
A: Hey, I don't mean to dig up an old thread, but I was searching for an answer to this same problem as well, and found a function new to jQuery 1.3x (I was having a problem with Ajax-loaded content). Here's how I implemented it: HTML <a class="navlink" href="mypage.html">Online Estimate</a> LOADED SCRIPT $(".pagelink").click(function(){ $(".navlink[href="+$(this).attr("href")+"]").trigger('click'); return false; }); LOADED HTML <a class="pagelink" href="mypage.html">Online Estimate</a> The function is the 'Trigger Event'... More details on it here: http://docs.jquery.com/Events/trigger#eventdata A: I was searching for this one quite desperately, and the simplest one seemed to work! document.getElementById('foo').onclick(); It worked in Chrome 7.0.5 and IE 8.0.6.
{ "language": "en", "url": "https://stackoverflow.com/questions/143747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: MySQL and data file encryption Is there a way to encrypt the data file that MySQL uses? I have a MySQL server on an open machine, and I would like to encrypt the data file so that even if someone copies the data files, they cannot read the data. Thanks A: To anyone researching a transparent MySQL encryption solution for Linux, there's a relatively new product on the block that we've been working with: http://www.gazzang.com/ I am not affiliated with Gazzang... just a happy customer. A: MySQL doesn't support data file encryption natively. There are 3rd-party products out there such as: http://www.vormetric.com/products/vormetric_database_encryption_expert.html There's a 'white paper' on the topic here: http://www.vormetric.com/documents/FINALPart2DatabaseEncryptionCoreGuardvsColumnLevelWhitePaper7.pdf To be honest, if the database content has any commercial value or contains personal data about individuals, you should really control who has access to the data files (whether encrypted or not). In the UK, leaving such data files open to casual passers-by would be a data protection no-no. A: I am not sure what you mean when you say that your machine is open. If people have access to the console, or to your account, protecting the file with encryption is a much harder task. Did you look at TrueCrypt? It works on most popular operating systems and allows you to create a virtual encrypted partition, or lock down a hard drive partition, an external drive, or a USB device. A: You can use an encrypted filesystem, like the native one for NTFS on Windows or one of the various options for Linux. In addition, you can store the data encrypted. A: If you are using Windows EFS and starting MySQL as a service, you will need to do the following: * *go to Services and find the MySQL service *stop the service *right-click -> properties -> LogON TAB *check "This account" *fill in your Windows account name, e.g. ".\username" *provide your password *start the service The MySQL service should now start without errors. 
To use the Windows EFS encryption: http://windows.microsoft.com/en-us/windows/encrypt-decrypt-folder-file#1TC=windows-7 Read more about it: http://www.petri.co.il/how_does_efs_work.htm# !!! Don't forget to export the certificate !!! A: You could encrypt the data within MySQL using the built-in encryption functionality. As for the files, any file-level solution should work fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/143750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: PostgreSQL - Rename database I need to rename the database, but when I do this in pgAdmin: ALTER DATABASE "databaseName" RENAME TO "databaseNameOld" it tells me that it cannot. How can I do it? (Version 8.3 on Windows XP) Update * *The first error message: it cannot because I was connected to it. So I selected another database and ran the query. *I get a second error message telling me that users are still connected. I see in the pgAdmin screen that there are many PIDs, but they are inactive... I do not see how to kill them. A: I just ran into this and below is what worked: 1) pgAdmin is one of the sessions. Use psql instead. 2) Stop the pgBouncer and/or scheduler services on Windows, as these also create sessions. A: Unexist told me in a comment to restart the database, and it works! Restarting the database kills all existing connections; I then connected to another database and was able to rename it with my initial query. Thx all. A: Instead of deploying a nuke (restarting the server), you should try to close those connections that bother you, either by finding where they are from and shutting down the client processes, or by using the pg_cancel_backend() function. A: Try not quoting the database name: ALTER DATABASE people RENAME TO customers; Also ensure that there are no other clients connected to the database at the time. Lastly, try posting the error message it returns so we can get a bit more information. A: For future reference, you should be able to: -- disconnect from the database to be renamed \c postgres -- force disconnect all other clients from the database to be renamed SELECT pg_terminate_backend( pid ) FROM pg_stat_activity WHERE pid <> pg_backend_pid( ) AND datname = 'name of database'; -- rename the database (it should now have zero clients) ALTER DATABASE "name of database" RENAME TO "new name of database"; Note that in the table pg_stat_activity, the column pid was named procpid in versions prior to 9.2. 
So if your PostgreSQL version is lower than 9.2, use procpid instead of pid. A: When connected via pgAdmin, the default database will be postgres. ALTER DATABASE postgres RENAME TO pgnew; This will not work. You need to right-click on the server in pgAdmin, set Maintenance DB to some other DB, and save. Then retry, and it should work if no other connections exist. A: For anyone running into this issue using DBeaver and getting an error message like this: ERROR: database "my_stubborn_db" is being accessed by other users Detail: There is 1 other session using the database. Disconnect your current connection, and reconnect to the same server with a connection that doesn't target the database you are renaming. Changing the active database is not enough. A: ALTER DATABASE old_database RENAME TO new_database; Here old_database is the name of the existing database, and new_database is the new name you want it to have. Example: ALTER DATABASE profile RENAME TO address;
{ "language": "en", "url": "https://stackoverflow.com/questions/143756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "160" }
Q: What books should I read to have an undergraduate education in Computer Science? I've always been a largely independent learner gleaning what I can from Wikipedia and various books. However, I fear that I may have biased my self-education by inadvertent omission of topics and concepts. My goal is to teach myself the equivalent of an undergraduate degree in Computer Science from a top university (doesn't matter which one). To that end, I've purchased and started reading a few academic textbooks: * *Structure and Interpretation of Computer Programs *Introduction to Algorithms *Artificial Intelligence: A Modern Approach As well as a few textbooks I have left over from classes I've taken at a mediocre-at-best state university: * *An Introduction to Computer Simulation Methods *Calculus: Concepts and Connections *Computer Organization and Architecture *Operating System Concepts *A First Course in Database Systems *Formal Languages and Automata My questions are: * *What topics aren't covered by this collection? *Are there any books that are more rigorous or thorough (or even easier to read) than a book listed here? *Are there any books that are a waste of my time? *In what order should I read the books? *What does an MIT or Stanford (or UCB or CMU ...) undergrad learn that I might miss? Software engineering books are welcome, but in the context of academic study only please. I'm aware of Code Complete and the Pragmatic Programmer, but I'm looking for a more theoretical approach. Thanks! A: The Art of Computer Programming by Don Knuth A: MIT introduced their "OpenCourseWare" program several years ago. They put books/homework assignments/lectures on-line and so you can follow along with the MIT curriculum: http://web.mit.edu/catalogue/degre.engin.ch6.shtml A: Many courses at top universities don't use a textbook because none of the available books are good enough. (I was educated at Princeton and taught for 8 years at Harvard.) 
When someone recommends a book, ask if the book is really good or if it is just the best available in the field. For example, in compilers, I'm not a big fan of the 'Dragon Book'; I never liked the approach, and the current edition is very outdated. I think you'd be better off with a book like Michael Scott's Programming Language Pragmatics which although a bit scattershot is a lovely book to read. (I've never taught from it, so I can't say what students think of it.) I don't know of a really good book on compiler construction for the autodidact, although you might look at Cooper and Torczon's Engineering a Compiler because it is up to date and written by two of the best compiler engineers in the business. A: Sorry, you can't replace four years of university by reading a book or a number of books, no matter how good the books are. If you could, why would anyone go to university? A: First, I wouldn't worry about it. But if you'd like a book to learn some of the abstract CS ideas, I'd recommend The Turing Omnibus or Theoretical Introduction to Programming. If I were deciding between hiring two programmers and neither had much experience, but one had a CS degree and the other didn't, I'd hire the one with the CS degree. But when you get to comparing two programmers with a dozen years of experience, the degree hardly matters. A: I probably can't suggest the best books, but there are several important subjects you are missing: * *Statistics *Linear Algebra *Graph Theory and Discrete Math *Computer Graphics *Scientific Computing *Computer Networks *Software Engineering *Data Structures Some topics that might be considered more "optional" might be: * *Cryptography *Image/Pattern processing and recognition *Bioinformatics *Internet Computing *Classical Physics A: The Elements of Computing Systems This book takes you from the basics of hardware design all the way to writing programs in object oriented languages. Using a simulator, you build a complete computer. 
Then you write an assembler, an operating system, a compiler (for an object-oriented language compiled to run on a VM) and then a game written using that language. It's a lot of work, but the authors have carefully made the task as simple as possible. You'll have to work hard to get through this one, but it gives you a complete perspective of computer programming. You can view some sample chapters, as well as play with the simulators here. Highly recommended! (Even for CS graduates) A: * *Discrete Math I & II (my private school) *Physics I (most Missouri state schools) *Statistics I *Cryptography (optional but I liked it) *OpenGL in C++ (optional but neat) *Systems Analysis and Design (software engineering from the business perspective; the class was so-so) *Ethics *Networking (teaches network algorithms, bit patterns dealing with network data). There are some things that require help to comprehend; not necessarily a professor (though they do that for a living), but maybe a friend who has taken the class or studied the field. Reading books on these subjects doesn't mean you know them. Doing the math, writing the code, and so on is what helps you understand, and shows you know the subject in the end. 
And let's not forget some database theory books! A: The best approach I've found is to pick your favorite University, go to their website, and check out the courses for the degree. Many of the big ones have their required readings published for each course. MIT's Open Course Ware is a good example. This, by the way, works for non-CS degree programs as well. A: I don't know how it is in the US, but in my country we study discrete mathematics and an introduction to graph theory before formal languages and automata. Also, I don't see any book covering computer networks... why don't you try Andrew Tanenbaum's Computer Networks? A: Before anything else, read Computer Science: a Modern Introduction. This will give you a good grounding and overview of the subjects there are to pursue. Introduction to Algorithms is very good. For an introduction to functional programming, I recommend working through ML for the Working Programmer. Areas that differentiate the computer scientist from the programmer: a grounding in discrete mathematics, a basic understanding of VLSI and systems architecture, an understanding of the basics of cryptography and security, an understanding of computability theory, an understanding of information theory. A: This is a pretty good list. The two topics I would definitely add to the mix are discrete math and networks. Other topics that may be interesting to you are compilers, computer graphics, and distributed operating systems. There are also cool sub-fields of AI, like computer vision and machine learning. And in order to handle all that, you definitely need linear algebra and probability. And it goes without saying that you cannot really do computer science by just reading books. To really understand each topic, you have to do projects in it. I would also suggest looking at MIT's Open Courseware, where professors post syllabi, lecture notes, and assignments. 
A: Concrete Mathematics A: You should also have a book on general databases without going deep into the specifics of Oracle, MySQL, SQL Server, etc. I'd recommend: Database Systems: The Complete Book A: Concepts, Techniques and Models of Computer Programming seems to have the broadest overview I've seen of the various higher-level language programming styles and techniques. A: Books on professional software development covering how software projects work, different methodologies, and design patterns are great. Web-design knowledge is also very useful when it comes to employment. I don't understand why you are trying to do this yourself though. Even a 'mediocre-at-best' university will be able to teach you the skills far better than you can teach yourself. It's all about meeting people who have experience actually working in the computing industry. It's not about the university; it's the effort you put in that determines how well you do. My answer is perhaps unhelpful to you though, because I don't know where you are from. In Scotland where I live I got to go to university for free; this may not be the case for you. A: File Structures: An Object-Oriented Approach with C++ A lot of good info about block devices and file structuring which you won't find in any of the books you listed. It got a few critical reviews on Amazon because people didn't like his code examples, but the point of the book is to teach the concepts, not give cut-and-paste code examples. Also make sure to get a book on compilers. A: I would add Introduction to the Theory of Computation to the list. A: The "Gang of Four" Design Patterns book. The Design Patterns course I took in college was probably the most beneficial class I've ever taken. 
A: I'm in the same boat: studying computer science in my free time after work. These are some of the books I have on my shelf right now: * *Applying UML and Patterns - Larman *Introduction to Algorithms - Cormen *Discrete Mathematics and Its Applications - Rosen *Software Engineering *Advanced Programming in the UNIX Environment Will update this list further as soon as I finish them... :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/143760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: What is search.twitter.com's "trending topics" algorithm? What algorithm does Twitter use to determine the 10 topics that you can see at search.twitter.com? I would like to implement that algorithm, and I would also like to show the 50 most popular topics (instead of 10). Can you describe the most efficient algorithm? Thanks! (Twitter's API can be found at http://apiwiki.twitter.com/REST%20API%20Documentation) Also, I would like to be able to implement the algorithm by searching through the public timeline: http://twitter.com/statuses/public_timeline.rss A: Twitter's trending algorithm is not just volume of keywords. That's part of it, but there's also a decay factor so that "justin beiber" isn't top trending forever. This post on Quora backs this up. http://www.quora.com/Trending-Topics-Twitter/What-is-the-basis-of-Twitters-current-Trending-Topics-algorithm?q=trending+algorithm Decay is typically done by using the relative age of the post in the algorithm, giving more weight to newer topics/posts/etc. See also http://www.quora.com/What-tools-algorithms-or-data-structures-would-you-use-to-build-a-Trending-Topics-algorithm-for-a-high-velocity-stream?q=trending+algorithm A: So what Twitter probably does is count the number of mentions of a particular term, minus stop words (stop words like: do, me, you, I, not, on, etc.). So "the cat is out of the bag" and "my dog ate my cat" would mean that cat, dog and bag are the terms it extracted (the rest are all stop words). It then counts 'cat' as 2 references, so 'cat' would be a trending topic in this case.
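A minimal sketch of the counting step described in that last answer; the stop-word list here is illustrative only, and a real implementation would also apply the time-decay weighting discussed above:

```cpp
#include <map>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// Naive trend counting: tally term frequencies across messages, skipping
// stop words. The stop-word set is made up for this sketch; decay and
// ranking are omitted.
std::map<std::string, int> count_terms(const std::vector<std::string>& messages) {
    static const std::set<std::string> stop_words{
        "the", "is", "out", "of", "my", "a", "an",
        "do", "me", "you", "i", "not", "on", "ate"};
    std::map<std::string, int> counts;
    for (const auto& msg : messages) {
        std::istringstream words(msg);
        std::string w;
        while (words >> w) {
            if (stop_words.count(w) == 0) {
                ++counts[w];  // only non-stop-words are tallied
            }
        }
    }
    return counts;
}
```

Running it on the two example sentences from the answer yields cat: 2, dog: 1, bag: 1, matching the hand count above; sorting the map by count and taking the top N would give the trending list.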
{ "language": "en", "url": "https://stackoverflow.com/questions/143781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: When to pass function arguments by reference and when by address? Could anyone explain with some examples when it is better to call functions by reference and when it is better to call by address? A: This has already been discussed. See Pointer vs. Reference. A: Pass your arguments to functions by reference whenever possible. Passing arguments by reference eliminates the chance of them being NULL. If you want it to be possible to pass a NULL value to a function, then use a pointer. A: One nice convention is to: * *Pass objects by pointer whenever they may be manipulated (side-effect or as output) by the function. *Pass all other objects by const reference. This makes it very clear to the caller, with minimal documentation and zero performance cost, which parameters are const or not. You can apply this to primitive types as well, but it's debatable whether you need to use const references for non-output parameters, since they are clearly pass-by-value and cannot act as output of the function in any way (for direct types - not pointers/references - of course).
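A small sketch of the convention described above (the function names are invented for illustration): pass by reference when the argument must exist, and by pointer when "no object" is a legitimate input:

```cpp
// The reference version cannot receive "nothing": the caller must
// supply a valid object, so no null check is needed inside.
void increment(int& value) {
    ++value;
}

// The pointer version admits NULL as a meaningful input: "there is
// nothing to increment", reported through the return value.
bool try_increment(int* value) {
    if (value == nullptr) {
        return false;
    }
    ++*value;
    return true;
}
```

The signatures alone document the contract: a reference parameter promises the object exists, while a pointer parameter warns the callee to check.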
{ "language": "en", "url": "https://stackoverflow.com/questions/143788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }