Prompt
stringlengths
10
31k
Chosen
stringlengths
3
29.4k
Rejected
stringlengths
3
51.1k
Title
stringlengths
9
150
Tags
listlengths
3
7
Is there a better way to find which *X* gives me the *Y* I am looking for in SciPy? I just began using SciPy and I am not too familiar with each function. ``` import numpy as np import matplotlib.pyplot as plt from scipy import interpolate x = [70, 80, 90, 100, 110] y = [49.7, 80.6, 122.5, 153.8, 163.0] tck = interpolate.splrep(x,y,s=0) xnew = np.arange(70,111,1) ynew = interpolate.splev(xnew,tck,der=0) plt.plot(x,y,'x',xnew,ynew) plt.show() t,c,k=tck yToFind = 140 print interpolate.sproot((t,c-yToFind,k)) #Lowers the spline at the abscissa ```
The UnivariateSpline class in scipy makes doing splines much more pythonic. ``` x = [70, 80, 90, 100, 110] y = [49.7, 80.6, 122.5, 153.8, 163.0] f = interpolate.UnivariateSpline(x, y, s=0) xnew = np.arange(70,111,1) plt.plot(x,y,'x',xnew,f(xnew)) ``` To find x at y then do: ``` yToFind = 140 yreduced = np.array(y) - yToFind freduced = interpolate.UnivariateSpline(x, yreduced, s=0) freduced.roots() ``` I thought interpolating x in terms of y might work but it takes a somewhat different route. It might be closer with more points.
If all you need is linear interpolation, you could use the [interp](http://docs.scipy.org/doc/numpy/reference/generated/numpy.interp.html?highlight=interp#numpy.interp) function in numpy.
Interpolation in SciPy: Finding X that produces Y
[ "", "python", "numpy", "scipy", "interpolation", "scientific-computing", "" ]
*I've found a solution, see my own answer below. Does anyone have a more elegant one?* I want to do this to increase code-coverage and find subtle bugs. Assume the following class to be tested: ``` public class Foo { private final Logger logger = LoggerFactory.getLogger(Foo.class); public void bar() { String param=[..]; if(logger.isInfoEnabled()) logger.info("A message with parameter {}", param); if(logger.isDebugEnabled()) { // some complicated preparation for the debug message logger.debug([the debug message]); } } } ``` and the following test-class: ``` public class FooTest { @Test public void bar() { Foo foo=new Foo(); foo.bar(); } } ``` A code-coverage tool like e.g. Cobertura will correctly report that only some of the conditional branches have been checked. info and debug are either activated or deactivated for the logger. Besides looking bad in your coverage score, this poses a real risk. What if there is some side effect caused by code inside if(logger.isDebugEnabled())? What if your code does only work if DEBUG is enabled and fails miserably if the log level is set to INFO? (This actually happened in one of our projects :p) **So my conclusion is that code containing logger statements should always be tested once with all logging enabled and once with all logging disabled...** Is there a way to do something like that with JUnit? I know how to globally enable or disable all my logging in Logback so the problem is: How can I execute the tests twice, once with logging enabled, once with logging disabled. p.s. I'm aware of [this question](https://stackoverflow.com/questions/265363/100-code-coverage-with-logging-code) but I don't think this is a duplicate. I'm less concerned about the absolute coverage values but about subtle, hard-to-find bugs that might be contained inside of a if(logger.isDebugEnabled()).
I've solved this problem by implementing a base class that test classes should extend if such functionality is desired. The article [Writing a parameterized JUnit test](http://ourcraft.wordpress.com/2008/08/27/writing-a-parameterized-junit-test/) contained the solution. See [LoggingTestBase](http://apps.sourceforge.net/trac/sulky/browser/trunk/sulky-junit/src/main/java/de/huxhorn/sulky/junit/LoggingTestBase.java) for the logging base class and [LoggingTestBaseExampleTest](http://apps.sourceforge.net/trac/sulky/browser/trunk/sulky-junit/src/test/java/de/huxhorn/sulky/junit/LoggingTestBaseExampleTest.java) for a simple example that's using it. Every contained test method is executed three times: **1.** It's executed using the logging as defined in logback-test.xml as usual. This is supposed to help while writing/debugging the tests. **2.** It's executed with all logging enabled and written to a file. This file is deleted after the test. **3.** It's executed with all logging disabled. Yes, LoggingTestBase needs documentation ;)
Have you tried simply maintaining two separate log configuration files? Each one would log at different levels from the root logger. **All logging disabled**: ``` ... <root> <priority value="OFF"/> <appender-ref ref="LOCAL_CONSOLE"/> </root> ... ``` **All logging enabled**: ``` ... <root> <priority value="ALL"/> <appender-ref ref="LOCAL_CONSOLE"/> </root> ... ``` Execution would specify different configurations on the classpath via a system parameter: ``` -Dlog4j.configuration=path/to/logging-off.xml -Dlog4j.configuration=path/to/logging-on.xml ```
Can I automatically execute JUnit testcases once with all logging enabled and once with all logging disabled?
[ "", "java", "junit", "code-coverage", "logback", "cobertura", "" ]
I've been looking through some similar questions without any luck. What I'd like to do is have a gridview which for certain items shows a linkbutton and for other items shows a hyperlink. This is the code I currently have: ``` public void gv_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { var data = (FileDirectoryInfo)e.Row.DataItem; var img = new System.Web.UI.HtmlControls.HtmlImage(); if (data.Length == null) { img.Src = "/images/folder.jpg"; var lnk = new LinkButton(); lnk.ID = "lnkFolder"; lnk.Text = data.Name; lnk.Command += new CommandEventHandler(changeFolder_OnCommand); lnk.CommandArgument = data.Name; e.Row.Cells[0].Controls.Add(lnk); } else { var lnk = new HyperLink(); lnk.Text = data.Name; lnk.Target = "_blank"; lnk.NavigateUrl = getLink(data.Name); e.Row.Cells[0].Controls.Add(lnk); img.Src = "/images/file.jpg"; } e.Row.Cells[0].Controls.AddAt(0, img); } } ``` where the first cell is a TemplateField. Currently, everything displays correctly, but the linkbuttons don't raise the Command event handler, and all of the controls disappear on postback. Any ideas?
I think you should try forcing a rebind of the GridView *upon* postback. This will ensure that any dynamic controls are recreated and their event handlers reattached. This should also prevent their disappearance after postback. IOW, call `DataBind()` on the GridView upon postback.
You can also add these in the Row\_Created event and then you don't have to undo !PostBack check
Can I programmatically add a linkbutton to gridview?
[ "", "c#", "asp.net", "gridview", "templatefield", "" ]
I'd like to start using iReport (netbeans edition) and replace the good old classic iReport 3.0.x. Seems like the classic iReport won't be improved anymore and abandoned at some point. The point is that I need to start iReport from another java application. With iReport 3.0 it was pretty easy and straightforward: just invoke `it.businesslogic.ireport.gui.MainFrame.main(args);` and iReport is up and running. The problem is I have no clue how to do the same thing in iReport-nb. The netbeans platform is a completely unkown for me and I could not find anything that looks like a main method or application starting point. It seems to load a lot of net beans platform stuff first and somehow hides the iReport starting point.
iReport based on the NetBeans platform works as standalone application (just like the classic one), even if it can be installed and used as NetBeans plugin too. Soon iR 3.5.2 will be released, it will cover all the remanining features present in iR classic that have been not covered yet in the previous versions, but on the other hand it provides plenty of new features and support for JasperReports 3.5.2 including a full new implementation of Barcode component, List (which are kind of light subreports), new chart types, support for multi-bands for detail and group header/footer, integrated preview and so on. Here you can find some tips about how to start a NetBeans platform based applications from another java application. Not trivial, since you need to set up a little bit the environment, but definitively doable: <http://wiki.netbeans.org/DevFaqPlatformAppAuthStrategies> Giulio
1)Designing:For designing the report the idea is pretty much the same. After installing plugin make new->report and start designing.When you finish choose preview and the iReport will compile your report resulting a .jasper file. 2)Execute:Write the code to pass the data and run the .jasper from your java code Something like that: ``` JasperPrint print=null; ResultSet rs=null; try { Statement stmt = (Statement) myConnection.createStatement (ResultSet.TYPE_SCROLL_SENSITIVE,//Default either way ResultSet.CONCUR_READ_ONLY); rs = stmt.executeQuery("select * from Table"); } catch (SQLException sQLException) { } try { print = JasperFillManager.fillReport(filename, new HashMap(), new JRResultSetDataSource(rs)); } catch (JRException ex) { } try{ JRExporter exporter=new net.sf.jasperreports.engine.export.JRPdfExporter(); exporter.setParameter(JRExporterParameter.OUTPUT_FILE_NAME, pdfOutFileName); exporter.setParameter(JRExporterParameter.JASPER_PRINT, print); exporter.exportReport(); ``` }..........
How to run iReport-nb 3.x.x from whithin another java application?
[ "", "java", "netbeans", "jasper-reports", "ireport", "" ]
I have been testing inline function calls in C++. ``` Thread model: win32 gcc version 4.3.3 (4.3.3-tdm-1 mingw32) ``` Stroustrup in The C++ Programming language wirtes: > The inline specifier is a hint to the compiler that it should attempt to generate code [...] inline rather than laying down the code for the function once and then calling through the usual function call mechanism. However, I have found out that the generated code is simply not inline. There is a *CALL* instrction for the **isquare** function. [![alt text](https://i.stack.imgur.com/LbyAx.jpg)](https://i.stack.imgur.com/LbyAx.jpg) **Why is this happening? How can I use inline functions then?** **EDIT:** The command line options used: ``` **** Build of configuration Debug for project InlineCpp **** **** Internal Builder is used for build **** g++ -O0 -g3 -Wall -c -fmessage-length=0 -osrc\InlineCpp.o ..\src\InlineCpp.cpp g++ -oInlineCpp.exe src\InlineCpp.o ```
There is no generic C++ way to FORCE the compiler to create inline functions. Note the word 'hint' in the text you quoted - the compiler is not obliged to listen to you. If you really, absolutely have to make something be in-line, you'll need a compiler specific keyword, OR you'll need to use macros instead of functions. EDIT: njsf gives the proper gcc keyword in his response.
Like Michael Kohne mentioned, the inline keyword is always a hint, and GCC in the case of your function decided not to inline it. Since you are using Gcc you can force inline with the \_\_attribute((always\_inline)). Example: ``` /* Prototype. */ inline void foo (const char) __attribute__((always_inline)); ``` Source:[GCC inline docs](http://gcc.gnu.org/onlinedocs/gcc/Inline.html)
C++ inline functions using GCC - why the CALL?
[ "", "c++", "inline", "tdm-mingw", "" ]
Can structs contain fields of reference types? And if they can is this a bad practice?
Yes, they can. Is it a good idea? Well, that depends on the situation. Personally I rarely create my own structs in the first place... I would treat any new user-defined struct with a certain degree of scepticism. I'm not suggesting that it's *always* the wrong option, just that it needs more of a clear argument than a class. It would be a bad idea for a struct to have a reference to a mutable object though... otherwise you can have two values which *look* independent but aren't: ``` MyValueType foo = ...; MyValueType bar = foo; // Value type, hence copy... foo.List.Add("x"); // Eek, bar's list has now changed too! ``` Mutable structs are evil. Immutable structs with references to mutable types are sneakily evil in different ways.
Sure thing and it's not bad practice to do so. ``` struct Example { public readonly string Field1; } ``` The readonly is not necessary but it is good practice to make struct's immutable.
Can structs contain fields of reference types
[ "", "c#", "struct", "reference-type", "" ]
I am working on a daemon where I need to embed a HTTP server. I am attempting to do it with BaseHTTPServer, which when I run it in the foreground, it works fine, but when I try and fork the daemon into the background, it stops working. My main application continues to work, but BaseHTTPServer does not. I believe this has something to do with the fact that BaseHTTPServer sends log data to STDOUT and STDERR. I am redirecting those to files. Here is the code snippet: ``` # Start the HTTP Server server = HTTPServer((config['HTTPServer']['listen'],config['HTTPServer']['port']),HTTPHandler) # Fork our process to detach if not told to stay in foreground if options.foreground is False: try: pid = os.fork() if pid > 0: logging.info('Parent process ending.') sys.exit(0) except OSError, e: sys.stderr.write("Could not fork: %d (%s)\n" % (e.errno, e.strerror)) sys.exit(1) # Second fork to put into daemon mode try: pid = os.fork() if pid > 0: # exit from second parent, print eventual PID before print 'Daemon has started - PID # %d.' % pid logging.info('Child forked as PID # %d' % pid) sys.exit(0) except OSError, e: sys.stderr.write("Could not fork: %d (%s)\n" % (e.errno, e.strerror)) sys.exit(1) logging.debug('After child fork') # Detach from parent environment os.chdir('/') os.setsid() os.umask(0) # Close stdin sys.stdin.close() # Redirect stdout, stderr sys.stdout = open('http_access.log', 'w') sys.stderr = open('http_errors.log', 'w') # Main Thread Object for Stats threads = [] logging.debug('Kicking off threads') while ... lots of code here ... server.serve_forever() ``` Am I doing something wrong here or is BaseHTTPServer somehow prevented from becoming daemonized? Edit: Updated code to demonstrate the additional, previously missing code flow and that log.debug shows in my forked, background daemon I am hitting code after fork.
After a bit of googling I [finally stumbled over this BaseHTTPServer documentation](http://pymotw.com/2/BaseHTTPServer/index.html#module-BaseHTTPServer) and after that I ended up with: ``` from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer from SocketServer import ThreadingMixIn class ThreadedHTTPServer(ThreadingMixIn, HTTPServer): """Handle requests in a separate thread.""" server = ThreadedHTTPServer((config['HTTPServer']['listen'],config['HTTPServer']['port']), HTTPHandler) server.serve_forever() ``` Which for the most part comes after I fork and ended up resolving my problem.
Here's how to do this with the [python-daemon](http://pypi.python.org/pypi/python-daemon) library: ``` from BaseHTTPServer import (HTTPServer, BaseHTTPRequestHandler) import contextlib import daemon from my_app_config import config # Make the HTTP Server instance. server = HTTPServer( (config['HTTPServer']['listen'], config['HTTPServer']['port']), BaseHTTPRequestHandler) # Make the context manager for becoming a daemon process. daemon_context = daemon.DaemonContext() daemon_context.files_preserve = [server.fileno()] # Become a daemon process. with daemon_context: server.serve_forever() ``` As usual for a daemon, you need to decide how you will interact with the program after it becomes a daemon. For example, you might register a systemd service, or write a PID file, etc. That's all outside the scope of the question though. In particular, it's outside the scope of the question to ask: once it's become a daemon process (necessarily detached from any controlling terminal), [how do I stop the daemon process](https://stackoverflow.com/questions/12517482/how-to-stop-a-daemon)? That's up to you to decide, as part of defining the program's behaviour.
Daemonizing python's BaseHTTPServer
[ "", "python", "daemon", "basehttpserver", "" ]
Although this seems like a trivial question, I am quite sure it is not :) I need to validate names and surnames of people from all over the world. Imagine a huge list of miilions of names and surnames where I need to remove as well as possible any cruft I identify. How can I do that with a regular expression? If it were only English ones I think that this would cut it: ``` ^[a-z -']+$ ``` However, I need to support also these cases: * other punctuation symbols as they might be used in different countries (no idea which, but maybe you do!) * different Unicode letter sets (accented letter, greek, japanese, chinese, and so on) * no numbers or symbols or unnecessary punctuation or runes, etc.. * titles, middle initials, suffixes are not part of this data * names are already separated by surnames. * we are prepared to force ultra rare names to be simplified (there's a person named '@' in existence, but it doesn't make sense to allow that character everywhere. Use pragmatism and good sense.) * note that many countries have laws about names so there are standards to follow Is there a standard way of validating these fields I can implement to make sure that our website users have a great experience and can actually *use their name* when registering in the list? I would be looking for something similar to the many "email address" regexes that you can find on google.
I'll try to give a proper answer myself: The only punctuations that should be allowed in a name are full stop, apostrophe and hyphen. I haven't seen any other case in the list of corner cases. Regarding numbers, there's only one case with an 8. I think I can safely disallow that. Regarding letters, any letter is valid. I also want to include space. This would sum up to this regex: ``` ^[\p{L} \.'\-]+$ ``` This presents one problem, i.e. the apostrophe can be used as an attack vector. It should be encoded. So the validation code should be something like this (untested): ``` var name = nameParam.Trim(); if (!Regex.IsMatch(name, "^[\p{L} \.\-]+$")) throw new ArgumentException("nameParam"); name = name.Replace("'", "&#39;"); //&apos; does not work in IE ``` Can anyone think of a reason why a name should not pass this test or a XSS or SQL Injection that could pass? --- complete tested solution ``` using System; using System.Text.RegularExpressions; namespace test { class MainClass { public static void Main(string[] args) { var names = new string[]{"Hello World", "John", "João", "タロウ", "やまだ", "山田", "先生", "мыхаыл", "Θεοκλεια", "आकाङ्क्षा", "علاء الدين", "אַבְרָהָם", "മലയാളം", "상", "D'Addario", "John-Doe", "P.A.M.", "' --", "<xss>", "\"" }; foreach (var nameParam in names) { Console.Write(nameParam+" "); var name = nameParam.Trim(); if (!Regex.IsMatch(name, @"^[\p{L}\p{M}' \.\-]+$")) { Console.WriteLine("fail"); continue; } name = name.Replace("'", "&#39;"); Console.WriteLine(name); } } } } ```
I sympathize with the need to constrain input in this situation, but I don't believe it is possible - Unicode is vast, expanding, and so is the subset used in names throughout the world. Unlike email, there's no universally agreed-upon standard for the names people may use, or even which representations they may register as official with their respective governments. I suspect that any regex will eventually fail to pass a name considered valid by *someone*, *somewhere* in the world. Of course, you do need to sanitize or escape input, to avoid the [Little Bobby Tables](http://xkcd.com/327/ "Little Bobby Tables") problem. And there may be other constraints on which input you allow as well, such as the underlying systems used to store, render or manipulate names. As such, I recommend that you determine first the restrictions necessitated by the system your validation belongs to, and create a validation expression based on those alone. This may still cause inconvenience in some scenarios, but they should be rare.
Regular expression for validating names and surnames?
[ "", "regex", "c#", "globalization", "" ]
i seem to write this code over and over again and wanted to see if there was a better way of doing it more generically. I start out with a list of Foo objects ``` Foo[] foos = GenerateFoos(); ``` I think want to create a dictionary where the key and value are both properties of Foo for example: ``` Dictionary<string, string> fooDict = new Dictionary<string, string>(): foreach (Foo foo in foos) { fooDict[foo.Name] = foo.StreetAddress; } ``` is there anyway of writing this code generically as it seems like a basic template where there is an array of objects, a key property a value property and a dictionary. Any suggestions? **I am using VS 2005 (C#, 2.0)**
With LINQ: ``` var fooDict = foos.ToDictionary(x=>x.Name,x=>x.StreetAddress); ``` (and yes, `fooDict` is `Dictionary<string, string>`) --- edit to show the pain in VS2005: ``` Dictionary<string, string> fooDict = Program.ToDictionary<Foo, string, string>(foos, delegate(Foo foo) { return foo.Name; }, delegate(Foo foo) { return foo.StreetAddress; }); ``` where you have (in `Program`): ``` public static Dictionary<TKey, TValue> ToDictionary<TSource, TKey, TValue>( IEnumerable<TSource> items, Converter<TSource, TKey> keySelector, Converter<TSource, TValue> valueSelector) { Dictionary<TKey, TValue> result = new Dictionary<TKey, TValue>(); foreach (TSource item in items) { result.Add(keySelector(item), valueSelector(item)); } return result; } ```
If you are using framework 3.5, you can use the `ToDictionary` extension: ``` Dictionary<string, string> fooDict = foos.ToDictionary(f => f.Name, f => f.StreetAddress); ``` For framework 2.0, the code is pretty much as simple as it can be. You can improve the performance a bit by specifying the capacity for the dictionary when you create it, so that it doesn't have to do any reallocations while you fill it: ``` Dictionary<string, string> fooDict = new Dictionary<string, string>(foos.Count): ```
Elegant way to go from list of objects to dictionary with two of the properties
[ "", "c#", "arrays", "visual-studio-2005", "dictionary", "" ]
I've been desperately looking for an easy way to display HTML in a WPF-application. There are some options: 1) use the WPF WebBrowser Control 2) use the Frame Control 3) use a third-party control but, i've ran into the following problems: 1) the WPF WebBrowser Control is not real WPF (it is a Winforms control wrapped in WPF). I have found a way to create a wrapper for this and use DependencyProperties to navigate to HTML text with bindings and propertychanged. The problem with this is, if you put a Winforms control in WPF scrollviewer, it doesnt respect z-index, meaning the winform is always on top of other WPF control. This is very annoying, and I have tried to work around it by creating a WindowsFormsHost that hosts an ElemenHost etc.. but this completely breaks my binding obviously. 2) Frame Control has the same problems with displaying if it shows HTML content. Not an option. 3) I haven't found a native HTML-display for WPF. All options are winforms, and with the above mentioned problems. the only way out I have at the moment is using Microsoft's buggy HtmlToXamlConverter, which crashes hard sometimes. ([MSDN](http://msdn.microsoft.com/en-us/library/aa972129.aspx)) Does anybody have any other suggestions on how to display HTLM in WPF, without these issues? sorry for the long question, hope anyone knows what i'm talking about...
If you can't use WebBrowser, your best bet is to probably rewrite your HTML content into a FlowDocument (if you're using static HTML content). Otherwise, as you mention, you kind of have to special-case WebBrowser, you're right that it doesn't act like a "real" WPF control. You should probably create a ViewModel object that you can bind to that represents the WebBrowser control where you can hide all of the ugly non-binding code in one place, then never open it again :)
have you tried the Awesomium? please refer to : <http://chriscavanagh.wordpress.com/2009/08/25/a-real-wpf-webbrowser/>
WPF WebBrowser (3.5 SP1) Always on top - other suggestion to display HTML in WPF
[ "", "c#", "html", "wpf", ".net-3.5", "" ]
All of a sudden my printStackTrace's stopped printing anything. When I give it a different output stream to print to it works (like e.printStackTrace(System.out)) but obviously I would like to get it figured out.
In your launch profile, on the common tab, check to make sure "Allocate console" is checked in the "Standard Input and Output" section.
Check to see if some library you are using is not redirecting the standard err with the System.setErr(PrintStream) method.
Eclipse Stacktrace System.err problem
[ "", "java", "eclipse", "error-handling", "printstacktrace", "" ]
I have eclipse and I can test run java apps but I am not sure how to compile them. I read that I should type javac -version into my cmd.exe and see if it is recognized. It is not. So I went to sun's website and downloaded/installed JDK v6. Yet it still says 'javac' is an unrecognized command. What am I doing wrong? Thanks! **UPDATE** OK after reading some replies it seems like what I am trying to do is create a .jar file that can be ran on another computer (with the runtime). However I am having trouble figuring out how to do that. This might be because I am using Flex Builder(eclipse), but I added the ability to create java projects as well. Thanks **UPDATE** OK I do not want to make a JAR file, I am not trying to archive it...the whole point of making a program is to send it to users so they can use the program...THAT is what I am trying to do...why is this so hard?
A JAR file can function as an executable, when you export your project as a JAR file in Eclipse (as Michael Borgwardt pointed out) you can specify what's the executable class, that meaning which one has the entry point [aka `public static void main(String[] args)`] If the user installed the JRE he/she can double-click it and the application would be executed. ***EDIT:*** For a detailed explanation of how this works, see the ["How do I create executable Java program?"](https://stackoverflow.com/questions/804466/how-do-i-create-executable-java-program)
To setup Eclipse to use the JDK you must follow these steps. 1.**Download the JDK** First you have to download the JDK from Suns [site](http://java.sun.com/javase/downloads/index.jsp). (Make sure you download one of them that has the JDK) 2.**Install JDK** Install it and it will save some files to your hard drive. On a Windows machine this could be in c:\program files\java\jdk(version number) 3.**Eclipse Preferences** Go to the Eclipse Preferences -> Java -> Installed JREs 4.**Add the JDK** Click Add JRE and you only need to located the Home Directory. Click **Browse...** and go to where the JDK is installed on your system. The other fields will be populated for you after you locate the home directory. 5.**You're done** Click Okay. If you want that JDK to be the default then put a Check Mark next to it in the Installed JRE's list.
How do you install JDK?
[ "", "eclipse", "java", "" ]
I have an file uploading site, it has an option of uploading through urls, what I am trying to do is whenever a user uploads through url, I check my database if a file exists that was uploaded through same url it displays the download url directly instead of uploading it again. The data sent to uploading script is in array form like: ``` Array ( [0] => http://i41.tinypic.com/3342r93.jpg [1] => http://i41.tinypic.com/28cfub7.jpg [2] => http://i41.tinypic.com/12dsa32.jpg ) ``` and the array used for outputing the results is in form like this: ``` Array ( [0] => Array ( [id] => 43 [name] => 3342r93.jpg [size] => 362750 [descr] => [password] => [delete_id] => 75CE [upload_id] => 75F45CAE1 ) [1] => Array ( [id] => 44 [name] => 28cfub7.jpg [size] => 105544 [descr] => [password] => [delete_id] => D392 [upload_id] => 6676FD881 ) [2] => Array ( [id] => 45 [name] => 12dsa32.jpg [size] => 49000 [descr] => [password] => [delete_id] => 54C9 [upload_id] => A58614C01 ) ) ``` Now I want is that if the link <http://i41.tinypic.com/28cfub7.jpg> is already upload I just add it to output array but maintain it in a order (if the link added was 2nd in array the output result should also show it in 2nd) So what function should be used to remove the matched urls from input array and a function to add it output array in the order no. **// edited** Yes unset will do the thing but I want to maintain the order: For example after unsetting the array looks like this: ``` Array ( [0] => http://i41.tinypic.com/3342r93.jpg // [1] was removed [2] => http://i41.tinypic.com/12dsa32.jpg ) ``` but the output array would be ``` Array ( [0] => Array ( [id] => 43 [name] => 3342r93.jpg [size] => 362750 [descr] => [password] => [delete_id] => 75CE [upload_id] => 75F45CAE1 ) // this will become [1], so how can i add another output[1] and shift other // items after it to [2], [3] and so on... [1] => Array ( [id] => 45 [name] => 12dsa32.jpg [size] => 49000 [descr] => [password] => [delete_id] => 54C9 [upload_id] => A58614C01 ) ) ```
Well, you can add it to the output array by doing something like: ``` $OutArray[2] = $element; ``` Where $element is another Array with the id, name, size (etc...) elements. As for removing from the array: ``` unset($OutArray[2]); ``` You may want to read [Array (PHP manual)](http://php.net/manual/en/language.types.array.php).
If you have an indexed array, you can remove a value by doing: ``` unset ($array[2]); ``` If you want to add an item to an array, use this shorthand of [array\_push](http://php.net/array_push) (you don't need to specify an index!): ``` $array[] = "new object"; ``` All documentation is on [php.net/arrays](http://php.net/arrays)
Manipulating arrays in php
[ "", "php", "" ]
In ActiveRecord there are two ways to declare indexes for multiple columns: ``` add_index :classifications, [:species, :family, :trivial_names] ``` ``` add_index :classifications, :species add_index :classifications, :family add_index :classifications, :trivial_names ``` Is there any difference between the first approach and the second one? If so, when should I use the first and when the second?
You are comparing a composite index with a set of independent indices. They are just different. Think of it this way: a compound index gives you rapid look-up of the first field in a nested set of fields followed by rapid look-up of the second field *within ONLY the records already selected by the first field*, followed by rapid look-up of the third field - again, only within the records selected by the previous two indices. Lets take an example. Your database engine will take no more than 20 steps to locate a unique value within 1,000,000 records (if memory serves) **if** you are using an index. This is true whether you are using a composite or and independent index - but ONLY for the first field ("species" in your example although I'd think you'd want Family, Species, and then Common Name). Now, let's say that there are 100,000 matching records for this first field value. If you have only single indices, then any lookup within these records will take 100,000 steps: one for each record retrieved by the first index. This is because the second index will not be used (in most databases - this is a bit of a simplification) and a brute force match must be used. If you have a *composite index* then your search is much faster because your second field search will have an index *within* the first set of values. In this case you'll need no more than 17 steps to get to your first matching value on field 2 within the 100,000 matches on field 1 (log base 2 of 100,000). So: steps needed to find a unique record out of a database of 1,000,000 records using a composite index on 3 nested fields where the first retrieves 100,000 and the second retrieves 10,000 = 20 + 17 + 14 = 51 steps. Steps needed under the same conditions with just independent indices = 20 + 100,000 + 10,000 = 110,020 steps. Big difference, eh? Now, *don't* go nuts putting composite indices everywhere. First, they are expensive on inserts and updates. Second, they are only brought to bear if you are truly searching across nested data (for another example, I use them when pulling data for logins for a client over a given date range). Also, they are not worth it if you are working with relatively small data sets. Finally, check your database documentation. Databases have grown extremely sophisticated in the ability to deploy indices these days and the Database 101 scenario I described above may not hold for some (although I always develop as if it does just so I know what I am getting).
The two approaches are different. The first creates a single index on three attributes, the second creates three single-attribute indices. Storage requirements will be different, although without distributions it's not possible to say which would be larger. Indexing three columns [A, B, C] works well when you need to access for values of A, A+B and A+B+C. It won't be any good if your query (or find conditions or whatever) doesn't reference A. When A, B and C are indexed separately, some DBMS query optimizers will consider combining two or more indices (subject to the optimizer's estimate of efficiency) to give a similar result to a single multi-column index. Suppose you have some e-commerce system. You want to query orders by purchase\_date, customer\_id and sometimes both. I'd start by creating two indices: one for each attribute. On the other hand, if you always specify purchase\_date *and* customer\_id, then a single index on both columns would probably be most efficient. The order is significant: if you also wanted to query orders for all dates for a customer, then make the customer\_id the first column in the index.
Index for multiple columns in ActiveRecord
[ "", "sql", "ruby-on-rails", "activerecord", "indexing", "" ]
I am using Hibernate + JPA as my ORM solution. I am using HSQL for unit testing and PostgreSQL as the real database. I want to be able to use Postgres's native [UUID](http://www.postgresql.org/docs/8.3/static/datatype-uuid.html) type with Hibernate, and use the UUID in its String representation with HSQL for unit testing (since HSQL does not have a UUID type). I am using a persistence XML with different configurations for Postgres and HSQL Unit Testing. Here is how I have Hibernate "see" my custom UserType: ``` @Id @Column(name="UUID", length=36) @org.hibernate.annotations.Type(type="com.xxx.UUIDStringType") public UUID getUUID() { return uuid; } public void setUUID(UUID uuid) { this.uuid = uuid; } ``` and that works great. But what I need is the ability to swap out the "com.xxx.UUIDStringType" part of the annotation in XML or from a properties file that can be changed without re-compiling. Any ideas?
This question is really old and has been answered for a long time, but I recently found myself in this same situation and found a good solution. For starters, I discovered that Hibernate has three different built-in UUID type implementations: 1. `binary-uuid` : stores the UUID as binary 2. `uuid-char` : stores the UUID as a character sequence 3. `pg-uuid` : uses the native Postgres UUID type These types are registered by default and can be specified for a given field with a `@Type` annotation, e.g. ``` @Column @Type(type = "pg-uuid") private UUID myUuidField; ``` There's *also* a mechanism for overriding default types in the `Dialect`. So if the final deployment is to talk to a Postgres database, but the unit tests use HSQL, you can override the `pg-uuid` type to read/write character data by writing a custom dialect like so: ``` public class CustomHSQLDialect extends HSQLDialect { public CustomHSQLDialect() { super(); // overrides the default implementation of "pg-uuid" to replace it // with varchar-based storage. addTypeOverride(new UUIDCharType() { @Override public String getName() { return "pg-uuid"; } }); } } ``` Now just plug in the custom dialect, and the the `pg-uuid` type is available in both environments.
Hy, for those who are seeking for a solution in Hibernate 4 (because the Dialect#addTypeOverride method is no more available), I've found one, underlying on [this Steve Ebersole's comment](https://hibernate.atlassian.net/browse/HHH-9574?focusedCommentId=65721&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-65721) You have to build a custom user type like this one : ``` public class UUIDStringCustomType extends AbstractSingleColumnStandardBasicType { public UUIDStringCustomType() { super(VarcharTypeDescriptor.INSTANCE, UUIDTypeDescriptor.INSTANCE); } @Override public String getName() { return "pg-uuid"; } } ``` And to bind it to the HSQLDB dialect, you must build a custom dialect that override the Dialect#contributeTypes method like this : ``` public class CustomHsqlDialect extends HSQLDialect { @Override public void contributeTypes(TypeContributions typeContributions, ServiceRegistry serviceRegistry) { super.contributeTypes(typeContributions,serviceRegistry); typeContributions.contributeType(new UUIDStringCustomType()); } } ``` Then you can use the @Type(type="pg-uuid") with the two databases. Hope it will help someone...
Using Different Hibernate User Types in Different Situations
[ "", "java", "hibernate", "jpa", "annotations", "hsqldb", "" ]
I have problem when i using ListView and Linq as datasource. The error down: ``` Specified cast is not valid. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidCastException: Specified cast is not valid. System.Data.SqlClient.SqlBuffer.get_Int64() +58 System.Data.SqlClient.SqlDataReader.GetInt64(Int32 i) +38 Read_ForumThreadPostDetail(ObjectMaterializer`1 ) +95 System.Data.Linq.SqlClient.ObjectReader`2.MoveNext() +29 System.Linq.WhereSelectEnumerableIterator`2.MoveNext() +96 System.Collections.Generic.List`1..ctor(IEnumerable`1 collection) +7667556 System.Linq.Enumerable.ToList(IEnumerable`1 source) +61 ``` Source code ``` Public IEnumerable<IForumThreadPost> GetForumPostByThreadAndPost() { ScoutDataDataContext sd = new ScoutDataDataContext(); long ThreadId = Convert.ToInt64(HttpContext.Current.Request.QueryString["id"]); long PostId = Convert.ToInt64(HttpContext.Current.Request.QueryString["postId"]); ///.Skip((pageIndex - 1)*pageSize).Take(pageSize) + int pageIndex, int pageSize return sd.ForumThreadPostDetails .AsEnumerable() .Where(f => f.ThreadId.Equals(ThreadId) && f.PostId.Equals(PostId)) .Select(f => new IForumThreadPost { Id = f.Id, ThreadId = f.ThreadId, PostId = f.PostId, Title = f.Title, ThreadTitle = f.ThreadTitle, Content = f.Content, UserFullName = f.UserFullName, UserId = f.UserId }).ToList(); // error here } ``` This function has work before, so i don't can figure out what the problem is. Thanks for your help.
Without seeing more of the code or database structure, it will be hard to come to a great solution. Seeing the ForumThreadPostDetails table and generated LinqToSql class would be ideal. Are the ThreadId and PostId both 'BigInt's in the database? Do the types match up between the properties within IForumThreadPost and ForumThreadPostDetails (and does the details class inherit from this interface, if it even is an interface)? Have any changes been made to the ForumThreadPostDetails table in the database (fields changing type, etc)? Why are you calling .AsEnumerable() on your table?
The problem was that in the database the Id was int and in the code it was long.
Specified cast is not valid. ListView and Linq
[ "", "c#", "linq", "" ]
I'm working on a one-time PHP (5.2.6) script that migrates several million of MySQL (5.0.45) database rows to another format in another table while keeping (a lot) of relevant data in memory for incremental calculations. The data is calculated incrementally. (in chunks of about 1000 lines) The script stops unexpectedly in random points without an error message. My question is- how can I find out whats the reason for the script stopping. (memory outage? timeout by MySQL etc...) I have set\_time\_limit (0); so its not PHP timeout.
see the log file, probably is memory you need to add more memory in php.ini to **`memory_limit`** parameter
You could try turning the error reporting level (in php.ini) up really high, so it complains about more things. My first guess would have been that you hit your execution time limit or memory limit and the script was terminated, but you covered that.
Update script stops randomly
[ "", "php", "mysql", "" ]
> **Possible Duplicate:** > [Javascript === vs ==](https://stackoverflow.com/questions/359494/javascript-vs) What's the diff between "===" and "==" ? Thanks!
'===' means *equality without type coersion*. In other words, if using the triple equals, the values must be equal in type as well. e.g. ``` 0==false // true 0===false // false, because they are of a different type 1=="1" // true, auto type coersion 1==="1" // false, because they are of a different type ``` Source: <http://longgoldenears.blogspot.com/2007/09/triple-equals-in-javascript.html>
> Ripped from my blog: keithdonegan.com **The Equality Operator (==)** The equality operator (==) checks whether two operands are the same and returns true if they are the same and false if they are different. **The Identity Operator (===)** The identity operator checks whether two operands are “identical”. These rules determine whether two values are identical: * They have to have the same type. * If number values have the same value they are identical, unless one or both are NaN. * If string values have the same value they are identical, unless the strings differ in length or content. * If both values refer to the same object, array or function they are identical. * If both values are null or undefined they are identical.
What is exactly the meaning of "===" in javascript?
[ "", "javascript", "" ]
I was wondering if there was some sort of cheat sheet for which objects go well with the using statement... `SQLConnection`, `MemoryStream`, etc. Taking it one step further, it would be great to even show the other "pieces of the puzzle", like how you should actually call `connection.Close()` before the closing using statement bracket. Anything like that exist? If not, maybe we should make one.
Microsoft FxCop has a rule checking that you use an IDisposbale in a using block.
Perhaps glance at my post on this at <http://www.lancemay.com/2010/01/idisposable-cheat-sheet/>. Not sure if that's what you're looking for, but based on the original question, it sounds like it may be.
Is there a list of common object that implement IDisposable for the using statement?
[ "", "c#", "asp.net", "dispose", "idisposable", "using-statement", "" ]
So I have a program that has a list of timers. Each of the timers has a tick event and lets just say for example, i have 10 timers started (all are in the List). What is the best way to sit forever (or until i tell it to stop)? Should I just have a while loop? ``` foreach(Timer t in _timers) { t.Start(); } while(true) { Application.DoEvents(); System.Threading.Thread.Sleep(5000); } ``` I have a feeling that this isn't the best way... -- Update Here's my entire program: ``` public static void Main() { // set some properties and set up the timers foreach(Timer t in _timers) { t.Start(); } while(true) { Application.DoEvents(); System.Threading.Thread.Sleep(5000); } } ``` Thats it. There is no UI, there's nothing else. If I don't have the while loop, then the program just finishes.
Use an `EventWaitHandle` or array of `EventWaitHandle`s to block thread execution by using the `WaitOne()` or `WaitAll()` methods. <http://msdn.microsoft.com/en-us/library/kad9xah9.aspx> So for example ``` ManualResetEvent mre = new ManualResetEvent(false); mre.WaitOne(); ``` will wait for eternity. **edit** Since you're making a service, you might want to read [this article.](http://www.c-sharpcorner.com/UploadFile/mahesh/window_service11262005045007AM/window_service.aspx)
By the `Application.DoEvents`, I assume you are on a UI thread here. It is never a good idea to keep the UI thread active (even with `DoEvents`). Why not just start the timers and release control back to the message pump. When the events tick it'll pick up the events. Why do you want to loop? --- Re the update; which `Timer` are you using? If you use `System.Timers.Timer` (with the `Elapsed` event) then it isn't bound to the message-loop (it fires on a separate thread): you can just hang the main thread, perhaps waiting on some exit condition: ``` using System; using System.Timers; static class Program { static void Main() { using (Timer timer = new Timer()) { timer.Interval = 2000; timer.Elapsed += delegate { Console.Error.WriteLine("tick"); }; timer.Start(); Console.WriteLine("Press [ret] to exit"); Console.ReadLine(); timer.Stop(); } } } ```
Correct way to have an endless wait
[ "", "c#", ".net", "" ]
I have an HTML page and and external JavaScript file. How do I access the value of an `input` tag from the HTML page in JavaScript? My HTML is as follows: ``` <form name="formxml"> <input type="text" name="xmlname"/> <input type="submit" value="Click me please" onclick="loadxml()"/> </form> ``` In the Javascript file, I was trying: ``` var name = document.formxml.xmlname.value ``` but this gives an error of "document.formxml is undefined" What should I do?
Looks like the external js can't find the form yet because it's parsed before the page is rendered? Not sure about it, but putting it in a function and calling the function on the page (when it's done loading) does work: ``` function foo () { var name = document.formxml.xmlname.value; alert(name); } ``` and ``` <form action="" method="post" name="formxml"> <input type="text" name="xmlname" value="123" id="xmlname"> <input type="button" onclick="foo();"> </form> ```
You can use the following after you add an id attribute to input tag ``` document.getElementById('id').value ```
Getting value from input box in html in external Javascript
[ "", "javascript", "html", "" ]
I'm having difficulty understanding why a particular selector isn't working for me. Admittedly I'm a bit of a JQuery newbie. This correctly selects the **first** div.editbox on the page and colors it yellow: ``` $('div.editbox:first').css("background-color","yellow"); ``` However, this `if ... is` construct makes the highlighted border appear for **each** box as it is moused-over. ``` $('div.editbox').bind('mouseover', function(e) { if ($(this).is('div.editbox:first')) {$(this).css("border", "1px solid red");} }); ``` I have tried variations such as '.editbox :first', '.editbox div:first', etc. Essentially, I want to be able to reliably test whether this is the first or last element of with the class name. Thanks! Edit: here's the HTML I'm using: ``` <body> <div class="container"> <p></p> <div class="editbox" id="box1">Foo</div> <div class="editbox" id="box2">Bar</div> <div class="editbox" id="box3">Baz</div> <div class="responsebox" id="rbox"></div> </div> </body> ``` This is just a proof-of-concept page; the actual page will of course be much more complex. Again: what I want is to reliably detect if I am in the first or last "div.editbox". A workaround I used is: ``` $('div.editbox:last').addClass("lasteditbox"); ``` Then test for `if ($(this).is(".lasteditbox"))` which works, but it seems clumsy and I'm trying to learn the proper way to do this with JQuery.
**UPDATE:** This works for the first element. ``` $('div.editbox').bind('mouseover', function(e) { if ($("div.editBox").index(this) == 0) { $(this).css("border", "1px solid red"); } }); ``` And for the last element, this selector works: ``` if($("div.editBox").index(this) == ($("div.editBox").length-1)){ $(this).css("color","red"); } ```
If you want the mouseover on just the first occurence of the class editbox inside div ``` $('div.editbox:first').mouseover(function() { $(this).css("border", "1px solid red"); }); ``` **Edit** ``` $('div.editbox').mouseover(function() { $(this).css("border", "1px solid yellow"); }).filter(':first').mouseover(function(){ $(this).css("border", "1px solid red"); }).filter(':last').mouseover(function(){ $(this).css("border", "1px solid blue"); }) ```
Problems with JQuery selectors and "is"
[ "", "javascript", "jquery", "jquery-selectors", "" ]
I need to store key/value info in some type of collection. In C#, I'd define a dictionary like this: ``` var entries = new Dictionary<string, int>(); entries.Add("Stop me", 11); entries.Add("Feed me", 12); entries.Add("Walk me", 13); ``` Then I would access the values so: ``` int value = entries["Stop me"]; ``` How do I do this in Java? I've seen examples with `ArrayList`, but I'd like the solution with generics, if possible.
You want to use a `Map` ``` Map<String, Integer> m = new HashMap<String, Integer>(); m.put("Stop me", 11); Integer i = m.get("Stop me"); // i == 11 ``` Note that on the last line, I could have said: ``` int i = m.get("Stop me"); ``` Which is shorthand for (with Java's auto-unboxing): ``` int i = m.get("Stop me").intValue() ``` If there is no value in the map at the given key, the `get` returns `null` and this expression throws a `NullPointerException`. Hence it's **always** a good idea to use the *boxed type* `Integer` in this case
Use a `java.util.Map`. There are several implementations: * `HashMap`: O(1) lookup, does not maintain order of keys * `TreeMap`: O(log n) lookup, maintains order of keys, so you can iterate over them in a guaranteed order * `LinkedHashMap`: O(1) lookup, iterates over keys in the order they were added to the map. You use them like: ``` Map<String,Integer> map = new HashMap<String,Integer>(); map.put("Stop me", 11); map.put("Feed me", 12); int value = map.get("Stop me"); ``` For added convenience working with collections, have a look at the [Google Collections library](http://code.google.com/p/google-collections/). It's excellent.
C# refugee seeks a bit of Java collections help
[ "", "java", "collections", "dictionary", "" ]
Which version of Python is recommended for Pylons, and why?
[You can use Python 2.3 to 2.6](http://pylonshq.com/docs/en/0.9.7/gettingstarted/#requirements), though 2.3 support will be dropped in the next version. [You can't use Python 3 yet](http://wiki.pylonshq.com/display/pylonscommunity/Pylons+Roadmap+to+1.0). There's no real reason to favor Python 2.5 or 2.6 at this point. Use what works best for you.
Pylons itself [says](http://pylonshq.com/docs/en/0.9.7/gettingstarted/) it needs at least 2.3, and recommends 2.4+. Since 2.6 is [production ready](http://python.org/download/), I'd use that.
Pylons - use Python 2.5 or 2.6?
[ "", "python", "pylons", "" ]
**Please note that [`Object.Watch`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/watch) and [`Object.Observe`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/observe) are both deprecated now (as of Jun 2018).** --- I was looking for an easy way to monitor an object or variable for changes, and I found `Object.watch()`, that's supported in Mozilla browsers, but not IE. So I started searching around to see if anyone had written some sort of equivalent. About the only thing I've found has been [a jQuery plugin](http://plugins.jquery.com/watch/), but I'm not sure if that's the best way to go. I certainly use jQuery in most of my projects, so I'm not worried about the jQuery aspect... Anyway, the question: Can someone show me a working example of that jQuery plugin? I'm having problems making it work... Or, does anyone know of any better alternatives that would work cross browser? **Update after answers**: Thanks everyone for the responses! I tried out the code posted here: <http://webreflection.blogspot.com/2009/01/internet-explorer-object-watch.html> But I couldn't seem to make it work with IE. The code below works fine in Firefox, but does nothing in IE. In Firefox, each time `watcher.status` is changed, the `document.write()` in `watcher.watch()` is called and you can see the output on the page. In IE, that doesn't happen, but I can see that `watcher.status` is updating the value, because the last `document.write()` call shows the correct value (in both IE and FF). But, if the callback function isn't called, then that's kind of pointless... :) Am I missing something? ``` var options = {'status': 'no status'}, watcher = createWatcher(options); watcher.watch("status", function(prop, oldValue, newValue) { document.write("old: " + oldValue + ", new: " + newValue + "<br>"); return newValue; }); watcher.status = 'asdf'; watcher.status = '1234'; document.write(watcher.status + "<br>"); ```
(Sorry for the cross-posting, but this answer I gave to a similar question works fine here) I have created a small [object.watch shim](http://gist.github.com/384583) for this a while ago. It works in IE8, Safari, Chrome, Firefox, Opera, etc.
That plugin simply uses a timer/interval to repeatedly check for changes on an object. Maybe good enough but personally I would like more immediacy as an observer. Here's an attempt at bringing `watch`/`unwatch` to IE: <http://webreflection.blogspot.com/2009/01/internet-explorer-object-watch.html>. It does change the syntax from the Firefox way of adding observers. Instead of : ``` var obj = {foo:'bar'}; obj.watch('foo', fooChanged); ``` You do: ``` var obj = {foo:'bar'}; var watcher = createWatcher(obj); watcher.watch('foo', fooChanged); ``` Not as sweet, but as an observer you are notified immediately.
Object.watch() for all browsers?
[ "", "javascript", "jquery", "dom-events", "" ]
`urllib.urlretrieve` returns silently even if the file doesn't exist on the remote http server, it just saves a html page to the named file. For example: ``` urllib.urlretrieve('http://google.com/abc.jpg', 'abc.jpg') ``` just returns silently, even if abc.jpg doesn't exist on google.com server, the generated `abc.jpg` is not a valid jpg file, it's actually a html page . I guess the returned headers (a httplib.HTTPMessage instance) can be used to actually tell whether the retrieval successes or not, but I can't find any doc for `httplib.HTTPMessage`. Can anybody provide some information about this problem?
Consider using `urllib2` if it possible in your case. It is more advanced and easy to use than `urllib`. You can detect any HTTP errors easily: ``` >>> import urllib2 >>> resp = urllib2.urlopen("http://google.com/abc.jpg") Traceback (most recent call last): <<MANY LINES SKIPPED>> urllib2.HTTPError: HTTP Error 404: Not Found ``` `resp` is actually `HTTPResponse` object that you can do a lot of useful things with: ``` >>> resp = urllib2.urlopen("http://google.com/") >>> resp.code 200 >>> resp.headers["content-type"] 'text/html; charset=windows-1251' >>> resp.read() "<<ACTUAL HTML>>" ```
I keep it simple: ``` # Simple downloading with progress indicator, by Cees Timmerman, 16mar12. import urllib2 remote = r"http://some.big.file" local = r"c:\downloads\bigfile.dat" u = urllib2.urlopen(remote) h = u.info() totalSize = int(h["Content-Length"]) print "Downloading %s bytes..." % totalSize, fp = open(local, 'wb') blockSize = 8192 #100000 # urllib.urlretrieve uses 8192 count = 0 while True: chunk = u.read(blockSize) if not chunk: break fp.write(chunk) count += 1 if totalSize > 0: percent = int(count * blockSize * 100 / totalSize) if percent > 100: percent = 100 print "%2d%%" % percent, if percent < 100: print "\b\b\b\b\b", # Erase "NN% " else: print "Done." fp.flush() fp.close() if not totalSize: print ```
How to know if urllib.urlretrieve succeeds?
[ "", "python", "networking", "urllib", "" ]
I have three tables something like the following: ``` Customer (CustomerID, AddressState) Account (AccountID, CustomerID, OpenedDate) Payment (AccountID, Amount) ``` The Payment table can contain multiple payments for an Account and a Customer can have multiple accounts. What I would like to do is retrieve the total amount of all payments on a State by State and Month by Month basis. E.g. ``` Opened Date| State | Total -------------------------- 2009-01-01 | CA | 2,500 2009-01-01 | GA | 1,000 2009-01-01 | NY | 500 2009-02-01 | CA | 1,500 2009-02-01 | NY | 2,000 ``` In other words, I'm trying to find out what States paid the most for each month. I'm only interested in the month of the OpenedDate but I get it as a date for processing afterwards. I was trying to retrieve all the data I needed in a single query. I've been trying something along the lines of: ``` select dateadd (month, datediff(month, 0, a.OpenedDate), 0) as 'Date', c.AddressState as 'State', ( select sum(x.Amount) from ( select p.Amount from Payment p where p.AccountID = a.AccountID ) as x ) from Account a inner join Customer c on c.CustomerID = a.CustomerID where *** group by dateadd(month, datediff(month, 0, a.OpenedDate), 0), c.AddressState ``` The where clause includes some general stuff on the Account table. The query won't work because the a.AccountID is not included in the aggregate function. Am I approaching this the right way? How can I retrieve the data I require in order to calculate which States' customers pay the most?
If you want the data grouped by month, you need to group by month: ``` SELECT AddressState, DATEPART(mm, OpenedDate), SUM(Amount) FROM Customer c INNER JOIN Account a ON a.CustomerID = c.CustomerID INNER JOIN Payments p ON p.AccountID = a.AccountID GROUP BY AddressState, DATEPART(mm, OpenedDate) ``` This shows you the monthnumber (1-12) and the total amount per state. Note that this example doesn't include years: all amounts of month 1 are summed regardless of year. Add a datepart(yy, OpenedDate) if you like.
> In other words, I'm trying to find out what States paid the most for each month This one will select the most profitable state for each month: ``` SELECT * FROM ( SELECT yr, mon, AddressState, amt, ROW_NUMBER() OVER (PARTITION BY yr, mon, addressstate ORDER BY amt DESC) AS rn FROM ( SELECT YEAR(OpenedDate) AS yr, MONTH(OpenedDate) AS mon, AddressState, SUM(Amount) AS amt FROM Customer c JOIN Account a ON a.CustomerID = c.CustomerID JOIN Payments p ON p.AccountID = a.AccountID GROUP BY YEAR(OpenedDate), MONTH(OpenedDate), AddressState ) ) q WHERE rn = 1 ``` Replace the last condition with `ORDER BY yr, mon, amt DESC` to get the list of all states like in your resultset: ``` SELECT * FROM ( SELECT yr, mon, AddressState, amt, ROW_NUMBER() OVER (PARTITION BY yr, mon, addressstate ORDER BY amt DESC) AS rn FROM ( SELECT YEAR(OpenedDate) AS yr, MONTH(OpenedDate) AS mon, AddressState, SUM(Amount) AS amt FROM Customer c JOIN Account a ON a.CustomerID = c.CustomerID JOIN Payments p ON p.AccountID = a.AccountID GROUP BY YEAR(OpenedDate), MONTH(OpenedDate), AddressState ) ) q ORDER BY yr, mon, amt DESC ```
Sum a subquery and group by customer info
[ "", "sql", "sql-server", "" ]
I am sending and receiving binary data to/from a device in packets (64 byte). The data has a specific format, parts of which vary with different request / response. Now I am designing an interpreter for the received data. Simply reading the data by positions is OK, but doesn't look that cool when I have a dozen different response formats. I am currently thinking about creating a few structs for that purpose, but I don't know how will it go with padding. Maybe there's a better way? --- Related: * [Safe, efficient way to access unaligned data in a network packet from C](https://stackoverflow.com/questions/529327/safe-efficient-way-to-access-unaligned-data-in-a-network-packet-from-c)
I've done this innumerable times before: it's a very common scenario. There's a number of things which I virtually always do. Don't worry too much about making it the most efficient thing available. If we do wind up spending a lot of time packing and unpacking packets, then we can always change it to be more efficient. Whilst I've not encountered a case where I've had to as yet, I've not been implementing network routers! Whilst using structs/unions is the most efficient approach in term of runtime, it comes with a number of complications: convincing your compiler to pack the structs/unions to match the octet structure of the packets you need, work to avoid alignment and endianness issues, and a lack of safety since there is no or little opportunity to do sanity checks on debug builds. I often wind up with an architecture including the following kinds of things: * A packet base class. Any common data fields are accessible (but not modifiable). If the data isn't stored in a packed format, then there's a virtual function which will produce a packed packet. * A number of presentation classes for specific packet types, derived from common packet type. If we're using a packing function, then each presentation class must implement it. * Anything which can be inferred from the specific type of the presentation class (i.e. a packet type id from a common data field), is dealt with as part of initialisation and is otherwise unmodifiable. * Each presentation class can be constructed from an unpacked packet, or will gracefully fail if the packet data is invalid for the that type. This can then be wrapped up in a factory for convenience. * If we don't have RTTI available, we can get "poor-man's RTTI" using the packet id to determine which specific presentation class an object really is. In all of this, it's possible (even if just for debug builds) to verify that each field which is modifiable is being set to a sane value. Whilst it might seem like a lot of work, it makes it very difficult to have an invalidly formatted packet, a pre-packed packets contents can be easilly checked by eye using a debugger (since it's all in normal platform-native format variables). If we do have to implement a more efficient storage scheme, that too can be wrapped in this abstraction with little additional performance cost.
You need to use structs and or unions. You'll need to make sure your data is properly packed on both sides of the connection and you may want to translate to and from network byte order on each end if there is any chance that either side of the connection could be running with a different endianess. As an example: ``` #pragma pack(push) /* push current alignment to stack */ #pragma pack(1) /* set alignment to 1 byte boundary */ typedef struct { unsigned int packetID; // identifies packet in one direction unsigned int data_length; char receipt_flag; // indicates to ack packet or keep sending packet till acked char data[]; // this is typically ascii string data w/ \n terminated fields but could also be binary } tPacketBuffer ; #pragma pack(pop) /* restore original alignment from stack */ ``` and then when assigning: ``` packetBuffer.packetID = htonl(123456); ``` and then when receiving: ``` packetBuffer.packetID = ntohl(packetBuffer.packetID); ``` Here are some discussions of [Endianness](http://en.wikipedia.org/wiki/Endianness) and [Alignment and Structure Packing](http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Structure_002dPacking-Pragmas.html) If you don't pack the structure it'll end up aligned to word boundaries and the internal layout of the structure and it's size will be incorrect.
How to interpret binary data in C++?
[ "", "c++", "embedded", "byte", "" ]
Here's my situation. I'm developing an ASP.NET (2.0) app for internal use. In it, I've got a number of pages with GridViews. I've included an option to export the data from the GridView to Excel (client-side using Javascript). The Excel workbook has 2 tabs - one tab is formatted like the GridView, the other tab contains the raw data. Everything works great and looks good from the client's standpoint. The issue is that the Javascript is pretty ugly. When I fill my dataset with the data the GridView is bound to, I also build a delimited string that's used for the export. I store the delimited string in a HiddenField, and retrieve the value of the HiddenField when the export button is pressed. I have several different delimiters, and it generally has that hacked together feel. Is there a better way to store the data for export and is there a more standard method of storing it instead of a roll-your-own delimited string? I haven't dug into JSON yet. Is this the right path to go down?
JSON is an excellent solution and very fast, although splitting strings is very fast too. You may also want to look into using window.name as a client-based storage solution. The window.name property can easily store a few megabytes worth of data. I've played around with my own implementation. You "String-ify" your JSON data and stash it in window.name. When your page loads, you grab window.name and "JSON-ify" it, assign it to a JavaScript variable and see if you got what you expected, if not, go grab it from the server via AJAX. I use [Prototype](http://prototypejs.org) for my JSON-string conversion and AJAX, but you can just as easily use jQuery. <http://www.thomasfrank.se/sessionvars.html>
I abhor delimited strings, if not the people who rely on them ! JSON is a pretty good bet, though in my opinion an AJAX call to the server to get the converted data might also be worth looking into. You should research your options starting with "[Client-side Persistent Data (CSPD)](http://en.wikipedia.org/wiki/Client-side_persistent_data)". [Here's a JS implementation](http://blogs.vinuthomas.com/2008/05/27/persistjs-cross-browser-client-side-persistent-storage/) by Vinu Thomas that simplifies this task (though I haven't tried it) Additionally, the spec for HTML 5 includes options for [DOM based data storage](http://en.wikipedia.org/wiki/DOM_storage). I think that's so exciting! ;-)
Storing data for client-side export. I've got a mess
[ "", "asp.net", "javascript", "" ]
I'm trying to add FileInfo details to my xml file by recursively scanning and collecting the files. However, i'm having a bit of trouble and i'm not quite sure how to resolve this issue. Here's my code: ``` public static class ScanAndSerialize { public static void Serialize() { List<string> dirs = FileHelper.GetFilesRecursive(@"s:\project\"); XmlSerializer SerializeObj = new XmlSerializer(dirs.GetType()); string sDay = DateTime.Now.ToString("MMdd"); string fileName = string.Format(@"s:\project\{0}_file.xml", sDay); TextWriter WriteFileStream = new StreamWriter(fileName); SerializeObj.Serialize(WriteFileStream, dirs); WriteFileStream.Close(); } } static class FileHelper { public static List<string> GetFilesRecursive(string b) { List<string> result = new List<string>(); Stack<string> stack = new Stack<string>(); stack.Push(b); while (stack.Count > 0) { string dir = stack.Pop(); try { result.AddRange(Directory.GetFiles(dir, "*.*")); foreach (string dn in Directory.GetDirectories(dir)) { DirectoryInfo dirInfo = new DirectoryInfo(@"s:\project\"); dirInfo.Name.ToString(); dirInfo.Attributes.ToString(); dirInfo.CreationTime.ToString(); dirInfo.Exists.ToString(); dirInfo.LastAccessTime.ToString(); dirInfo.LastWriteTime.ToString(); stack.Push(dn); } } catch { } } return result; } } ``` } Ok where i run into an error is where I'm trying to take the dirInfo.Name.ToString() and the other lines of code and somehow incorporate those into rendering my XML file with these details. The error is that Push cannot be overloaded as i tried: ``` stack.Push(dn, dirInfo.Name.ToString(), dirInfo.Attributes.ToString().....) ``` Anybody have any ideas? Am i doing this backwards?
**UPDATE** Fixed error Just use a `Stack<DirectoryInfo>` instead of a `Stack<string>` ``` public static class ScanAndSerialize { public static void Serialize() { List<string> dirs = FileHelper.GetFilesRecursive(@"s:\project\"); XmlSerializer SerializeObj = new XmlSerializer(dirs.GetType()); string sDay = DateTime.Now.ToString("MMdd"); string fileName = string.Format(@"s:\project\{0}_file.xml", sDay); TextWriter WriteFileStream = new StreamWriter(fileName); SerializeObj.Serialize(WriteFileStream, dirs); WriteFileStream.Close(); } } static class FileHelper { public static List<string> GetFilesRecursive(string b) { List<string> result = new List<string>(); var stack = new Stack<DirectoryInfo>(); stack.Push(new DirectoryInfo (b)); while (stack.Count > 0) { var actualDir = stack.Pop(); string dir = actualDir.FullName; try { result.AddRange(Directory.GetFiles(dir, "*.*")); foreach (string dn in Directory.GetDirectories(dir)) { DirectoryInfo dirInfo = new DirectoryInfo(dn); dirInfo.Name.ToString(); dirInfo.Attributes.ToString(); dirInfo.CreationTime.ToString(); dirInfo.Exists.ToString(); dirInfo.LastAccessTime.ToString(); dirInfo.LastWriteTime.ToString(); stack.Push(dirInfo); } } catch { } } return result; } } ```
I'm not sure what all is going on here... First off, Directory.GetFiles can already get all files recursively: ``` Directory.GetFiles(path, "*.*", SearchOptions.AllDirectories); ``` GetDirectories can do the same thing. Judging from your code, your function is named GetFiles but it is getting Directories instead? Second off, for pushing information onto a stack, why not have a `Stack<FileInfo>` or `Stack<DirectoryInfo>`, then manually pop off each one while writing your XML file, reading the information you need and writing it to XML? **Edit:** Here's a simpler example. I'm assuming you're trying to get all files from a directory (recursively) and output information about them. I'll combine your two functions together (since a good GetFiles already exists). ``` using System.Xml; public static void WriteXMLForAllFiles(string directory, string outputFilePath) { XmlWriterSettings settings = new XmlWriterSettings(); settings.Indent = true; XmlWriter writer = XmlTextWriter.Create(outputFilePath, settings); writer.WriteStartDocument(); writer.WriteStartElement("Files"); foreach( string file in Directory.GetFiles(directory, "*.*", SearchOptions.AllDirectories) ) { FileInfo fileInfo = new FileInfo(file); writer.WriteStartElement("file"); writer.WriteAttributeString("path", file); writer.WriteAttributeString("creationTime", fileInfo.CreationTimeUtc.ToString()); writer.WriteAttributeString("lastWriteTime", fileInfo.LastWriteTimeUtc.ToString()); writer.WriteEndElement(); } writer.WriteEndElement(); writer.WriteEndDocument(); writer.Close(); } ``` After i looked through your code, you're not using that Stack<> for anything, so ignore that. If you wanted an example in relation to your original code: ``` public static List<FileInfo> GetFilesRecursive(string b) { List<FileInfo> fileList = new List<FileInfo>(); foreach( string file in Directory.GetFiles(directory, "*.*", SearchOptions.AllDirectories) ) { fileList.Add(new FileInfo(file)); } return fileList; } ```
Overloading Stack.Push?
[ "", "c#", "" ]
I am using VB6 for my application. I've populated Excel with the RecordSet obtained from a SQL query. One column called `Time_period` has values like ``` "2/31/2006" "12/29/2000" etc. ``` I need to pass these inputs to another SQL query for processing. I am little confused with the formats, as Oracle accepts inputs of type "23-Jul-2009", "02-Jan-1998" and so on. Can you help converting from the one format to the other in VB6?
Use Format. ``` Debug.Print Format$(Now, "dd-mmm-yyyy") ``` This will work in your case. You could try using the following format (ISO standard): ``` Debug.Pring Format$(Now, "yyyy-MM-dd") ```
Dim oracleDate As String Dim excelDate As DateTime ``` oracleDate = Format$(excelDate , "dd-mmm-yyyy") ```
Date format in VB6
[ "", "sql", "excel", "datetime", "vb6", "" ]
Consider a database with tables Products and Employees. There is a new requirement to model current product managers, being the sole employee responsible for a product, noting that some products are simple or mature enough to require no product manager. That is, each product can have zero or one product manager. Approach 1: alter table `Product` to add a new `NULL`able column `product_manager_employee_ID` so that a product with no product manager is modelled by the `NULL` value. Approach 2: create a new table `ProductManagers` with non-`NULL`able columns `product_ID` and `employee_ID`, with a unique constraint on `product_ID`, so that a product with no product manager is modelled by the absence of a row in this table. There are other approaches but these are the two I seem to encounter most often. Assuming these are both legitimate design choices (as I'm inclined to believe) and merely represent differing styles, do they have names? I prefer approach 2 and find it hard to convey the difference in style to someone who prefers approach 1 without employing an actual example (as I have done here!) I'd would be nice if I could say, "I'm prefer the inclination-towards-6NF (or whatever) style myself." Assuming one of these approaches is in fact an anti-pattern (as I merely suspect may be the case for approach 1 by modelling a relationship between two entities as an attribute of one of those entities) does this anti-pattern have a name?
Well the first is nothing more than a one-to-many relationship (one employee to many products). This is sometimes referred to as a O:M relationship (zero to many) because it's optional (not every product has a product manager). Also not every employee is a product manager so its optional on the other side too. The second is a join table, usually used for a many-to-many relationship. But since one side is only one-to-one (each product is only in the table once) it's really just a convoluted one-to-many relationship. Personally I prefer the first one but neither is wrong (or bad). The second would be used for two reasons that come to mind. 1. You envision the possibility that a product will have more than one manager; or 2. You want to track the history of who the product manager is for a product. You do this with, say a current\_flag column set to 'Y' (or similar) where only one at a time can be current. This is actually a pretty common pattern in database-centric applications.
It looks to me like the two model different behaviour. In the first example, you can have one product manager per product and one employee can be product manager for more than one product (one to many). The second appears to allow for more than one product manager per product (many to many). This would suggest the two solutions are equally valid in different situations and which one you use would depend on the business rule.
Do these database design styles (or anti-pattern) have names?
[ "", "sql", "design-patterns", "database-design", "anti-patterns", "" ]
I'm suppose to capture everything inside a tag and the next lines after it, but it's suppose to stop the next time it meets a bracket. What am i doing wrong? ``` import re #regex regex = re.compile(r""" ^ # Must start in a newline first \[\b(.*)\b\] # Get what's enclosed in brackets \n # only capture bracket if a newline is next (\b(?:.|\s)*(?!\[)) # should read: anyword that doesn't precede a bracket """, re.MULTILINE | re.VERBOSE) haystack = """ [tab1] this is captured but this is suppose to be captured too! @[this should be taken though as this is in the content] [tab2] help me write a better RE """ m = regex.findall(haystack) print m ``` **what im trying to get is:** **[('tab1', 'this is captured\nbut this is suppose to be captured too!\n@[this should be taken though as this is in the content]\n', '[tab2]','help me\nwrite a better RE\n')]** edit: ``` regex = re.compile(r""" ^ # Must start in a newline first \[(.*?)\] # Get what's enclosed in brackets \n # only capture bracket if a newline is next ([^\[]*) # stop reading at opening bracket """, re.MULTILINE | re.VERBOSE) ``` this seems to work but it's also trimming the brackets inside the content.
Python regex doesn't support recursion afaik. EDIT: but in your case this would work: ``` regex = re.compile(r""" ^ # Must start in a newline first \[(.*?)\] # Get what's enclosed in brackets \n # only capture bracket if a newline is next ([^\[]*) # stop reading at opening bracket """, re.MULTILINE | re.VERBOSE) ``` EDIT 2: yes, it doesn't work properly. ``` import re regex = re.compile(r""" (?:^|\n)\[ # tag's opening bracket ([^\]\n]*) # 1. text between brackets \]\n # tag's closing bracket (.*?) # 2. text between the tags (?=\n\[[^\]\n]*\]\n|$) # until tag or end of string but don't consume it """, re.DOTALL | re.VERBOSE) haystack = """[tag1] this is captured [not a tag[ but this is suppose to be captured too! [another non-tag [tag2] help me write a better RE[[[] """ print regex.findall(haystack) ``` I do agree with viraptor though. Regex are cool but you can't check your file for errors with them. A hybrid perhaps? :P ``` tag_re = re.compile(r'^\[([^\]\n]*)\]$', re.MULTILINE) tags = list(tag_re.finditer(haystack)) result = {} for (mo1, mo2) in zip(tags[:-1], tags[1:]): result[mo1.group(1)] = haystack[mo1.end(1)+1:mo2.start(1)-1].strip() result[mo2.group(1)] = haystack[mo2.end(1)+1:].strip() print result ``` EDIT 3: That's because `^` character means negative match only inside `[^squarebrackets]`. Everywhere else it means string start (or line start with `re.MULTILINE`). There's no good way for negative string matching in regex, only character.
First of all why a regex if you're trying to parse? As you can see you cannot find the source of the problem yourself, because regex gives no feedback. Also you don't have any recursion in that RE. Make your life simple: ``` def ini_parse(src): in_block = None contents = {} for line in src.split("\n"): if line.startswith('[') and line.endswith(']'): in_block = line[1:len(line)-1] contents[in_block] = "" elif in_block is not None: contents[in_block] += line + "\n" elif line.strip() != "": raise Exception("content out of block") return contents ``` You get error handling with exceptions and the ability to debug execution as a bonus. Also you get a dictionary as a result and can handle duplicate sections while processing. My result: ``` {'tab2': 'help me\nwrite a better RE\n\n', 'tab1': 'this is captured\nbut this is suppose to be captured too!\n@[this should be taken though as this is in the content]\n\n'} ``` RE is much overused these days...
My regex in python isn't recursing properly
[ "", "python", "regex", "recursion", "" ]
I'm looking for a Java API to convert ICS (aka iCal) attachments to nicely formatted HTML or plaintext for display purposes. Ideally, it would be able to handle: * Converting dates to a specified timezone. * Expanding recurrence patterns into human readable sentences. * Multiple VCALENDAR records in a single file. I'm looking at iCal4j, which has a nice DOM parser, but no way to serialize to anything but iCal.
Sorry mate, if you Googled around and found nothing, then its a sure set of unique requirements you got there, time to innovate. Take what you have, think up some ideas, and try them out, comes with the job!
I didn't find a better alternative to ical4j. I used it fairly successfully. Unfortunately, as you point out, all it does is to bind to XML, with no other way to output it to something else. You could walk the DOM after creation and output the relevant text - although this seems a bit strange since all you want is text/html, I had the same issue and just ended up parsing out the XML. The iCal4j API is a bit strange and you might want to relax the parsing and enable outlook/notes compatibility to help you along the way. You could write your own Parser and that implements `net.fortuna.ical4j.data.CalendarParser` and pull out the necessary information into plain text that way. I think the default `net.fortuna.ical4j.data.CalendarParserImpl` is about 500 lines of code, you could alternatively hack that.
Convert iCal to HTML or plaintext in Java
[ "", "java", "icalendar", "" ]
How would I go about marking one folder for deletion when the system reboots, using C#. Thanks,
Originally from: <http://abhi.dcmembers.com/blog/2009/03/24/mark-file-for-deletion-on-reboot/> Documentation: <https://learn.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-movefileexa#parameters> ``` /// /// Consts defined in WINBASE.H /// [Flags] internal enum MoveFileFlags { MOVEFILE_REPLACE_EXISTING = 1, MOVEFILE_COPY_ALLOWED = 2, MOVEFILE_DELAY_UNTIL_REBOOT = 4, //This value can be used only if the process is in the context of a user who belongs to the administrators group or the LocalSystem account MOVEFILE_WRITE_THROUGH = 8 } /// <summary> /// Marks the file for deletion during next system reboot /// </summary> /// <param name="lpExistingFileName">The current name of the file or directory on the local computer.</param> /// <param name="lpNewFileName">The new name of the file or directory on the local computer.</param> /// <param name="dwFlags">MoveFileFlags</param> /// <returns>bool</returns> /// <remarks>http://msdn.microsoft.com/en-us/library/aa365240(VS.85).aspx</remarks> [System.Runtime.InteropServices.DllImportAttribute("kernel32.dll",EntryPoint="MoveFileEx")] internal static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, MoveFileFlags dwFlags); //Usage for marking the file to delete on reboot MoveFileEx(fileToDelete, null, MoveFileFlags.MOVEFILE_DELAY_UNTIL_REBOOT); ```
Use PInvoke and call MoveFileEx, passing null in as the destination.... [This link](http://www.eggheadcafe.com/conversation.aspx?messageid=31656648&threadid=31656596) has some sample code: ``` [DllImport("kernel32.dll", CharSet = CharSet.Unicode)] public static extern bool MoveFileEx(string lpExistingFileName, string lpNewFileName, int dwFlags); public const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4; MoveFileEx(filename, null, MOVEFILE_DELAY_UNTIL_REBOOT); ```
how to mark folders for deletion C#
[ "", "c#", "windows", "" ]
I am building a Qt 4.5 application on Windows using Visual Studio 2008. Whenever I run my application in Debug mode and then close it, Visual Studio prints the following to the output pane: > Detected memory leaks! > Dumping objects -> > {696512} normal block at 0x01981AB0, 24 bytes long. > Data: < > 00 CD CD CD 00 00 00 00 00 00 00 00 00 00 00 00 > {696511} normal block at 0x02E59B70, 12 bytes long. > Data: < U2g U2g> B0 1A 98 01 E8 55 32 67 E8 55 32 67 And the output reports hundreds of such blocks. I have noticed this particularly when using Qt 4's Model/View framework. Does Qt in fact have memory leaks, or are there circumstances under which Visual Studio misreports leaks?
I had a chance to profile my project using DevPartner. The surprising thing is that it reports memory leaks in QtGuid4.dll and QtCored4.dll; however, after manually looking at each case, I discovered that they were all false positives. As a side note, there were no memory leaks reported in the code using Qt.
The memory leak info is provided by the debug windows runtime. Your program can interact and configure this. The number in braces `{696512}` is the allocation order number. If this number is always the same, then you can set a break point on this allocation by passing the number to [`_CrtSetBreakAlloc`](https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/crtsetbreakalloc?view=msvc-170). Run the program in the debugger again and the debugger will stop when the leaked memory is allocated. Call this function early in main. If the number is not always the same, try to reproduce the memory leak with reduced code until it is always the same. For more information see [Find memory leaks with the CRT library](https://learn.microsoft.com/en-us/visualstudio/debugger/finding-memory-leaks-using-the-crt-library?view=vs-2022)
On closing a Qt 4.5 application, Visual Studio reports that it has detected memory leaks
[ "", "c++", "visual-studio-2008", "memory-leaks", "qt4", "" ]
When dividing by 1000 I sometime run across a bug that doesn't do the division "correctly". For example, when dividing 90.55 by 1000 instead of getting .09055 I get .090549999999. ``` <c:out value="${bean.paPrice / 1000}" /> ``` Why is this happening? Is this a result of floating point math? A google search seemed to indicate that this may be a culprit, but I have found no concrete answer. To fix this I can round the answer to 5 digits, but it seems like this is just a patch to a deeper problem. ``` <fmt:formatNumber type="number" maxFractionDigits="5" value="${bean.paPrice / 1000}" /> ```
It's not a problem, it's just a natural result of the binary representation of floating point. Round your values and don't worry about it.
Why do computers suck at math? <http://www.codinghorror.com/blog/archives/001266.html>
Why does JSP/JSTL division by 1000 sometimes give remainder?
[ "", "java", "jsp", "floating-point", "jstl", "" ]
If I have access only keys from a `Dictionary<TKey, TValue>` what is better to use: ``` Dictionary<TKey, TValue>.ForEach(pair => action(pair.Key)) ``` or ``` Dictionary<TKey, TValue>.Keys.ForEach(key => action(key)) ``` Which method is more 'best-practice' ? Speed in both cases I think seems to be very similar.
I think this depends entirely on your use case. If you only need to use the key in the predicate, I would use the second version. Otherwise you're adding more information to the lambda than is strictly necessary. But I don't think there is a hard and fast rule here. Probably just whatever flows off the keyboard more naturally. Likewise, if you need to use both the key and the value, go with the first.
If speed is what you are after then I would suggest that you do this: ``` foreach (TKey key in yourDictionary.Keys) action(key) ``` This does not require the creation of a delegate for whatever method you are using as `action`. *Note that this will be a minimal performance benefit since in the case where you create a delegate (as in your two examples) the compiler will hoist the creation of the delegate out of the loop and only create one delegate instance. Still I find `foreach` cleaner and easier to read than any `ForEach` extension method.*
What is better to use: Dictionary<>.ForEach(pair =>) or Dictionary<>.Keys.ForEach(key =>)?
[ "", "c#", ".net", "generics", "extension-methods", "" ]
This is for MSVC ``` #define Get64B(hi, lo) ((((__int64)(hi)) << 32) | (unsigned int)(lo)) ``` Specifically, what is the role of the 'operator <<' ? Thanks for your help
<< is the left shift operator. This macro is intended to make a 64-bit value from two 32 bit values, using the first argument as the top 32 bits and the second argument as the bottom 32 bits of the new value.
It takes two 32 bit integers and returns a 64 bit integer, with the first parameter as the 32 high bits and the second as the 32 low bits. << is the left shift operator. It takes the high 32 bits, shifts them over, and then ORs the that result with the low bits.
what does this C++ macro do?
[ "", "c++", "operators", "macros", "" ]
> **Possible Duplicate:** > [Automatically re-direct a user when session Times out or goes idle…](https://stackoverflow.com/questions/1003001/automatically-re-direct-a-user-when-session-times-out-or-goes-idle) I have a Log In system and the session expires, but they need to refresh the page to be shown the login in screen again. Instead, my users enter data and hit submit to find out that they have been logged out. Is there any way to make the page automatically redirect to the log-in page once the session has expired? Thanks! **EDIT:::** From reviewing the previously asked question found [Here](https://stackoverflow.com/questions/1003001/automatically-re-direct-a-user-when-session-times-out-or-goes-idle) I have used the accepted answer for this application. Thank you all for your suggestions.
Well, optimally the data the user sent using your form is saved temporarily, the user gets a chance to log in again and then gets redirected to the page he came from, already filled with the old data. Having to type data twice (or having the page itself redirecting after some amount of time if you're in the middle of something) is rather annoying. If you want to redirect without retaining the entered data on the page you should at least show the user an indication how much time he has left until the page expires. Using JavaScript should be a good option for that.
You could use a [meta-refresh](http://en.wikipedia.org/wiki/Meta_refresh) tag, e.g. to redirect after 10 minutes: ``` <meta http-equiv="refresh" content="600;url=http://example.com/" /> ``` This isn't a very user friendly way to handle session expiry, particularly for the use case you've highlighted. A better technique would be to track user activity with Javascript by picking up keypress and mousemove events. Every minute, if there has been some activity, fire off an XMLHttpRequest to keep the session alive. Say your sessions expire after 10 minutes, and this JS notices no user activity for that time, it can inside a banner into your page alerting the user that their session has expired and offering ways to re-establish the session etc. That way, people performing data entry or (whatever the form is for) don't lose their session if they taking their time, and aren't redirected if they leave their desk for lunch!
PHP Session Expire Redirect
[ "", "php", "" ]
I have telerik RadGrid which is in edit mode. Each cell contains NumericTextBox. Is it possible to calculate one cell based on other cells in the same row (on client side). For example if I have a row which contains cells like price and item I want on every change to calculate total price but on client side, without going to server side. Is this possible with RadGrid?
Thanks for all your answers but I found the solution here at [telerik forum](http://www.telerik.com/community/forums/aspnet/grid/i-have-telerik-radgrid-which-is-in-edit-mode-each-cell-contains-numerictextbox-is-it-possible-to-calculate-one-cell-based-on-other-cells-in-the-same-r.aspx). I'll just paste the solution here in case that somebody get stuck on the same issue. ASPX: ``` <Columns> <rad:GridTemplateColumn UniqueName="Price" HeaderText="Price"> <EditItemTemplate> <radI:RadNumericTextBox ID="txtPrice" runat="server"> </radI:RadNumericTextBox> </EditItemTemplate> </rad:GridTemplateColumn> <rad:GridTemplateColumn UniqueName="Quantity" HeaderText=" Number of Items"> <EditItemTemplate> <radI:RadNumericTextBox ID="txtQuantity" runat="server"> </radI:RadNumericTextBox> </EditItemTemplate> </rad:GridTemplateColumn> <rad:GridTemplateColumn UniqueName="TotalAmount" HeaderText="Total"> <EditItemTemplate> <radI:RadNumericTextBox ID="txtTotalAmount" runat="server"> </radI:RadNumericTextBox> </EditItemTemplate> </rad:GridTemplateColumn> </Columns> ``` C# ``` protected void RadGrid1_ItemDataBound(object sender, GridItemEventArgs e) { if (e.Item is GridDataItem && e.Item.IsInEditMode) { GridDataItem item = (GridDataItem)e.Item; RadNumericTextBox txtPrice= item.FindControl("txtPrice") as RadNumericTextBox; // Get the textbox for column Price RadNumericTextBox txtQuantity= item.FindControl("txtQuantity") as RadNumericTextBox; // Get the textbox for column Quantity RadNumericTextBox txtTotalAmount= item.FindControl("txtTotalAmount") as RadNumericTextBox; // Get the textbox for column "TotalAmount", if it is template as shown in aspx txtPrice.Attributes.Add("onFocusout", "return calculate('" + txtPrice.ClientID + "','" + txtQuantity.ClientID + "','" + txtTotalAmount.ClientID + "')"); txtQuantity.Attributes.Add("onFocusout", "return calculate('" + txtPrice.ClientID + "','" + txtQuantity.ClientID + "','" + txtTotalAmount.ClientID + "')"); txtTotalAmount.Attributes.Add("onfocus", "return calculate('" + txtPrice.ClientID + "','" + txtQuantity.ClientID + "','" + txtTotalAmount.ClientID + "')"); } } ``` JavaScript: ``` <script type="text/javascript"> function calculate(price, quantity, totalAmount) { var text1 = $find(price); //I used Asp.net Ajax find method var text2 = $find(quantity); var text3 = $find(totalAmount); var total = text1.GetValue() * text2.GetValue(); text3.SetValue(total); } </script> ```
Check out the demo on the Telerik site from RadGrid -> Application scenarios section which uses numeric textboxes and illustrates what you are looking for, dude. Dick
How to calculate RadGrid cell value on Client side?
[ "", "asp.net", "javascript", "jquery", "telerik", "radgrid", "" ]
What is the preferable way for transferring some items (not all) from one list to another. What I am doing is the following: ``` var selected = from item in items where item.something > 10 select item; otherList.AddRange(selected); items.RemoveAll(item => selected.Contains(item)); ``` In the interest of having the fastest/best code there is, is there a better way?
I'd try @Mehrdad's answer, and maybe test it against this one too... ``` var selected = items.Where(item => item.Something > 10).ToList(); selected.ForEach(item => items.Remove(item)); otherList.AddRange(selected); ```
I suggest: ``` var selected = items.Where(item => item.Something > 10).ToList(); items = items.Except(selected).ToList(); otherList.AddRange(selected); ```
How do I move items from a list to another list in C#?
[ "", "c#", "linq", "list", "" ]
I need to execute some JavaScript code when the page has fully loaded. This includes things like images. I know you can check if the DOM is ready, but I don’t know if this is the same as when the page is fully loaded.
That's called `load`. It came waaaaay before DOM ready was around, and DOM ready was actually created for the exact reason that `load` waited on images. ``` window.addEventListener('load', function () { alert("It's loaded!") }) ```
For completeness sake, you might also want to bind it to DOMContentLoaded, which is now widely supported ``` document.addEventListener("DOMContentLoaded", function(event){ // your code here }); ``` More info: <https://developer.mozilla.org/en-US/docs/Web/Events/DOMContentLoaded>
How to execute a function when page has fully loaded?
[ "", "javascript", "" ]
Perhaps I'm going about this all wrong (and please tell me if I am), but I'm hitting my head against a wall with something that seems like a really simple concept. This Render override is coming from a User Control. ``` protected override void Render(HtmlTextWriter writer) { string htmlAboutToBeRendered = writer.GetWhatHasBeenWrittenToTheWriterSoFar(); // Do something nefarious, yet unrelated with htmlAboutToBeRendered } ``` This seems like a there would be an obvious way to do this, but I can't seem to find it. Can anyone shed some light on this for me, please?
Try this: ``` protected override void RenderContents(HtmlTextWriter output) { StringBuilder sb = new StringBuilder(); HtmlTextWriter htw = new HtmlTextWriter(new System.IO.StringWriter(sb, System.Globalization.CultureInfo.InvariantCulture)); foreach (Control ctrl in Controls) { ctrl.RenderControl(htw); } string strContents = sb.ToString(); ``` }
You can derive from HttpTextWriter and override OnTagRender, OnAttributeRender and OnStyleRender methods and modify the tags as they are rendered. The MSDN docs for OnTagRender show a sample where the tag is modified during rendering: <http://msdn.microsoft.com/en-us/library/system.web.ui.htmltextwriter.ontagrender.aspx>
HtmlTextWriter to String - Am I overlooking something?
[ "", "c#", "asp.net", "" ]
Let's say I want to be able to log to file every time any exception is raised, anywhere in my program. I don't want to modify any existing code. Of course, this could be generalized to being able to insert a hook every time an exception is raised. Would the following code be considered safe for doing such a thing? ``` class MyException(Exception): def my_hook(self): print('---> my_hook() was called'); def __init__(self, *args, **kwargs): global BackupException; self.my_hook(); return BackupException.__init__(self, *args, **kwargs); def main(): global BackupException; global Exception; BackupException = Exception; Exception = MyException; raise Exception('Contrived Exception'); if __name__ == '__main__': main(); ```
If you want to log *uncaught* exceptions, just use [sys.excepthook](http://docs.python.org/library/sys.html#sys.excepthook). I'm not sure I see the value of logging *all* raised exceptions, since lots of libraries will raise/catch exceptions internally for things you probably won't care about.
Your code as far as I can tell would not work. 1. `__init__` has to return None and you are trying to return an instance of backup exception. In general if you would like to change what instance is returned when instantiating a class you should override `__new__`. 2. Unfortunately you can't change any of the attributes on the `Exception` class. If that was an option you could have changed `Exception.__new__` and placed your hook there. 3. the "`global Exception`" trick will only work for code in the current module. `Exception` is a builtin and if you really want to change it globally you need to `import __builtin__; __builtin__.Exception = MyException` 4. Even if you changed `__builtin__.Exception` it will only affect future uses of `Exception`, subclasses that have already been defined will use the original Exception class and will be unaffected by your changes. You could loop over `Exception.__subclasses__` and change the `__bases__` for each one of them to insert your `Exception` subclass there. 5. There are subclasses of `Exception` that are also built-in types that you also cannot modify, although I'm not sure you would want to hook any of them (think `StopIterration`). I think that the only decent way to do what you want is to patch the Python sources.
Calling a hook function every time an Exception is raised
[ "", "python", "exception", "" ]
I have the following code ``` <div class="item" id="item1">Text Goes Here</div> <div class="admin_tools" id="tools1">Link 1 | Link 2 | Link 3</div> <div class="item" id="item2">Text Goes Here</div> <div class="admin_tools" id="tools2">Link 1 | Link 2 | Link 3</div> ``` *admin\_tools* div is hidden from view by default. When a mouse is moved over the *item* div, it should be replaced with the contents of *admin\_tools*. How would I go about doing that? Preferably... a CSS only solution. The layout isnt fixed either. Can be altered if necessary.
I changed a couple of things, but what about something like this? ``` <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>Hidden/Visible</title> <style type="text/css"> .admin_tools{ display:none; } .item:hover span{ display:none; } .item:hover .admin_tools{ display:block; } </style> <body> <div class="item" id="item1"> <span>Text Goes Here</span> <span class="admin_tools" id="tools1">Link 1 | Link 2 | Link 3</span> </div> </body> </html> ```
``` <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>Hidden/Visible</title> <style type="text/css"> .outer { width:200px; } .admin_tools{ display:none; } .item { position:relative; } .item:hover .admin_tools{ display:inline; position:absolute; width:200px; top:0; right:0; background-color:#333; color:#FFF; } </style> <body> <div class="outer"> <div class="item" id="item1"> <span>Text Goes Here</span> <span class="admin_tools" id="tools1">Link 1 | Link 2 | Link 3</span> </div> <div class="item" id="item2"> <span>Text Goes Here</span> <span class="admin_tools" id="tools2">Link 1 | Link 2 | Link 3</span> </div> <div class="item" id="item3"> <span>Text Goes Here</span> <span class="admin_tools" id="tools3">Link 1 | Link 2 | Link 3</span> </div> </div> </body> </html> ```
How do you change a div with another div, onmouseover?
[ "", "javascript", "css", "" ]
I'm trying to create a view out of quite a complex select query and it won't let me put a clustered index on it because I have to use subqueries and some aggregate functions. I have to get a clustered index on it otherwise the queries that use the view will take forever. Apparently SQL Server will only store the result set if you meet a specific criteria. The base tables for the view are read-only and and only get updated by a bulk import once a day. I can't see why the results can't be cached. Does anyone know of any way to get SQL Server to cache the results of a view so they can in turn be queried later? I don't really want to create another table because that would snowball into a bunch of changes all over the place. Thanks in advance.
I think the answer you are looking for is: * Don't use a view to do this. * Use a table with the fields corresponding to the returned fields form the SQL query. * Automate the query to populate this table
when using aggregates inside a indexed view you need to use COUNT\_BIG() instead of COUNT() otherwise the view won't be created Also if you are not on Enterprise Edition you need to provide the NOEXPAND hint otherwise the optimizer will not use the view ``` SELECT * FROM YourView WITH(NOEXPAND) WHERE .... ``` Maybe you don't need a view but you just don't have th correct indexes on the tables, can you post the DDL of the tables (including indexes and constraints)
SQL Server clustered index on a view
[ "", "sql", "sql-server", "t-sql", "view", "clustered-index", "" ]
Is there a good way to get my IoC to resolve dependencies on views? I have my own IoC resolver based on Castle Windsor. I have a `IResourceService` that I would like to have access in my views to resolve some strings.
I would go for an extention method, then resolve my dependencies within that: ``` public static class LocalizationExtentions { public static string Localize(this HtmlHelper html, string resource) { var localize = IoC.Resolve<ILocalize>(); return localize.For(resource); } } ``` In my view: ``` <h1><%= Html.Localize("MainTitle") %></h1> ```
Could you use a wrapper, and inside that wrapper directly ask Windsor to resolve the service for you. Then from your view, just use the ResourceHelper class which passes everything to your resolved service? You could use the ResourceHelper class just like an HtmlHelper class in your view. ``` public interface IInjectionWrapper { T Resolve<T>(); object Resolve(Type service, object view); object Resolve(Type service); } public class WindsorWrapper: IInjectionWrapper { private readonly static IWindsorContainer windsor; static WindsorWrapper() { string config = ConfigurationManager.AppSettings["WindsorConfig"]; FileResource resource = new FileResource(config); windsor = new WindsorContainer(new XmlInterpreter(resource)); } public T Resolve<T>() { T result = windsor.Resolve<T>(); return result; } public object Resolve(Type service) { return windsor.Resolve(service); } } public interface IResourceService { string LookupString(string key); } public class ResourceHelper : IResourceService { private IResourceService _resources; public ResourceHelper() { IInjectionWrapper ioc = new WindsorWrapper(); _resources = ioc.Reslove<IResourceService>(); } public string LookupString(string key) { return _resources.LookupString(key); } } ```
IoC and ASP.NET MVC Views
[ "", "c#", "asp.net-mvc", "inversion-of-control", "" ]
I'm making a gallery website. Photos and albums should be hidden from the public by default, but the owner can share a public URL if he wants. I also want to use short URLs for this. I'm trying to figure out the best way to do this. At first I thought I could have a MySQL table with the short URL code (something like `Zneg8rjK`), and the full URL (`/album/5/`). When the user visits this link (`mysite.com/Zneg8rjK`) it would set a session variable which is just a hash of `/album/5` plus a salt, and then it redirects them to the full URL. Then on that page I can just rehash the current page and check if it's in their session or not. No problem... but what happens when they click on a photo in that album? It breaks down. So, the next solution I thought of was that I should store a secret key with each album and each photo, and then have a public address like `/album/5/secretkey`. The short URL could just point to this instead. But then how do I give the user to permission to view all the photos in the album? Should I just link each photo to point to the secret version of the photo URL instead, but only if they accessed the album via the secret album URL? My concern is that I want to keep the secret keys/URLs out of the address bar as much as possible so that users don't accidentally share it with people they shouldn't... but it's not a huge concern if there aren't any better solutions. Thoughts? --- Wait a sec, do I even need to hash the URL? There's no way for clients to modify session variables is there?
I guess it depends a bit on how you check permissions - I'd suggest having a key for a whole album, and a key for individual photo's. The redirect thing in the first part of your post actually doesn't sound that bad to me, only you'd have to check the supplied key in the session against 2 valid options - one for the individual photo, and one for the album that contains that photo. If either one validates, show the photo (instead of 'break down'). Hope this helps!
You would probably want to do the two keys like the above person said. You might not want to restrict access going through one URL like mysite.com/hashhere instead you should have the short URL be able to include the hash, so if they have the right hash no matter what link they go to, they can view it. To make it even more secure, give the owner of the gallery the option to refresh the hash key as well so if they have shared with users and someone posts the public URL or something they can lock it down again.
Temporary permission to view photos and albums
[ "", "php", "mysql", "authentication", "permissions", "" ]
I have dropdown on my page, I am changing selected value of dropdown from popup window using Javascript. I have some logic in dropdown `SelectedIndexChanged` event, so I need to fire the `SelectedIndexChanged` event when dropdown selection changed from Javascript.
``` document.getElementById('<%= yourDropdown.ClientID %>').onchange(); ``` This should work, if you are still getting some error, you can try like this: ``` setTimeout('__doPostBack(\'yourcontrolClientSideID\',\'\')', 0); ``` `yourcontrolClientSideID` is the ID of Rendered Client ID.
Call onchange method like that at client side : ``` document.getElementById('yourDropdownsClientId').onchange(); ``` **EDIT :** If you set your dropdown's AutoPostBack property to true, the code above will post your page to server, than your server side event will be called. But If you want to call your event manually, you can all it anywhere in your page's codebehind like that : ``` myDropDownList_SelectedIndexChanged(null, new EventArgs()); ```
Want to fire Dropdown SelectedIndexChanged Event in Javascript
[ "", "javascript", "html", "asp.net", "dom-events", "" ]
I am new to Java and I came across a statement in a Java project which says: ``` Digester digester = DigesterLoader.createDigester(getClass() .getClassLoader().getResource("rules.xml")); ``` rules.xml file contains various patterns and every pattern has different attributes like classname, methodname and some another properties. i googled about digester but couldn't found anything useful that could help me with the statement above. can anyone just tell me what are the steps followed in executing above statement ? In fact what is the advantage of this XML stuff ?
swapnil, as a user of Digester back in my Struts days I can honestly say it's tricky to learn/debug. It's a tough library to familiarize yourself with, essentially you are setting up event handlers for certain elements kinda like a SAX parser (in fact it's using SAX in behind the scenes). So you feed a rules engine some XPath for nodes you are interested in and setup rules which will instantiate, and set properties on some POJOs with data it finds in the XML file. Great idea, and once you get used to it it's good, however if you have an xsd for your input xml file I'd sooner recommend you use JAXB. The one thing that is nice about Digester is it will only do things with elements you are interested in, so memory footprint ends up being nice and low.
This is the method that's getting called [here](http://commons.apache.org/digester/commons-digester-1.6/docs/api/org/apache/commons/digester/xmlrules/DigesterLoader.html#createDigester(java.net.URL)). Xml is used commonly in Java for configurations, since xml files do not need to be compiled. Having the same configuration in a java file would mean you have to compile the file.
IN java, how a commons-Digester process an input XML file?
[ "", "java", "xml", "apache-commons-digester", "" ]
I want to join two tables, with the number of records for each type being counted. If there are no records of that type in the left table I want a 0 to be returned, not a null. How can I do this?
Use: ``` ISNULL(count(*), 0) ```
You can use "CASE" ``` SELECT T1.NAME, CASE WHEN T2.DATA IS NULL THEN 0 ELSE T2.DATA END FROM T1 LEFT JOIN T2 ON T1.ID = T2.ID ```
SQL LEFT JOIN return 0 rather than NULL
[ "", "sql", "null", "left-join", "" ]
I'm trying to learn how to use the Entity framework but I've hit an issue I can't solve. What I'm doing is that I'm walking through a list of Movies that I have and inserts each one into a simple database. This is the code I'm using ``` private void AddMovies(DirectoryInfo dir) { MovieEntities db = new MovieEntities(); foreach (DirectoryInfo d in dir.GetDirectories()) { Movie m = new Movie { Name = d.Name, Path = dir.FullName }; db.AddToMovies(movie); } db.SaveChanges(); } ``` When I do this I get an exception at db.SaveChanges() that read. > The changes to the database were > committed successfully, but an error > occurred while updating the object > context. The ObjectContext might be in > an inconsistent state. Inner exception > message: AcceptChanges cannot continue > because the object's key values > conflict with another object in the > ObjectStateManager. Make sure that the > key values are unique before calling > AcceptChanges. I haven't been able to find out what's causing this issue. My database table contains three columns Id int autoincrement Name nchar(255) Path nchar(255) Update: I Checked my edmx file and the SSDL section have the StoreGeneratedPattern="Identity" as suggested. I also followed the blog post and tried to add ClientAutoGenerated="true" and StoreGenerated="true" in the CSDL as suggested there. This resulted in compile errors ( Error 5: The 'ClientAutoGenerated' attribute is not allowed.). Since the blog post is from 2006 and it has a link to a follow up post I assume it's been changed. However, I cannot read the followup post since it seems to require an msdn account.
I found the solution to this. What happened was that when I created my table I forgot to add the primary key and set (Is Identity) property to yes. I then created my Entity model and got this error. I went back and fixed my database table but I still hade the weird Exception. What solved the problem in the end was to remove the entity and re-create it after the table was fixed. No more exceptions :)
last time I tried the following code and I said it is working fine ``` bs.SuspendBinding(); Data.SaveChanges(); bs.ResumeBinding(); ``` The important things which I wana tell you today are that: 1- if we use the above code to suspend binding we have to do more code to fix a lot of scenarios like index lost in the collections and the master detail bindings 2- if we use the following code instead of the above code we will see the exception gone and every thing will be ok where no need to write more code ``` Data.SaveChanges(System.Data.Objects.SaveOptions.None); ``` I hope this solves your similar problems thank you friends
InvalidOperationException when calling SaveChanges in .NET Entity framework
[ "", "c#", ".net", "entity-framework", "" ]
I'm using jQuery to make ajax requests. The data is getting to PHP nicely, but the response isn't getting back to javascript properly. Somehow there is a space before the response. I know this because Firebug says so and my code doesn't work because the space is there. When I expect there to be a space it works fine. Any ideas as to what could be adding the space? Here is my ajax function: ``` function my_ajax (aurl, adata, aretfunc) { $.ajax({ type: "POST", url: aurl, data: adata, success: function(msg) { eval(aretfunc+'(msg);'); } }); } ```
Look for spurious whitespace characters outside of the `<?php ?>` tags in your PHP file. Any such whitespace will get output when the PHP script is executed. If your PHP file includes other PHP files, the same thing applies to those files as well.
Agreed, look for spurious whitespace character outside of the `<?php ?>`. One suggestion, and one that is completely legit is to simply remove the trailing `?>`, as they are unnecessary. [In fact, it's a coding standard for Drupal.](http://drupal.org/node/246079)
Space Before Ajax Response (jQuery, PHP)
[ "", "php", "jquery", "ajax", "" ]
Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary and I'm afraid I can't show *exactly* what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application: 1. User clicks a button 2. Server retrieves a list of images in the form of a data-table 3. Each row in the table contains 8 data-cells that in turn each contain one hyperlink * Each request by the user can contain up to 50 rows (I can change this number if need be) * That means the table contains upwards of 800 individual DOM elements * My analysis shows that `jQuery("#dataTable").empty()` and `jQuery("#dataTable).replaceWith(tableCloneObject)` take up 97% of my overall processing time and take on average 4 - 6 seconds to complete. I'm looking for a way to speed up either of the above mentioned jQuery functions when dealing with massive DOM elements that need to be removed / replaced. I hope my explanation helps.
jQuery `empty()` is taking a long time on your table because it does a truly monumental amount of work with the contents of the emptied element in the interest of preventing memory leaks. If you can live with that risk, you can skip the logic involved and just do the part that gets rid of the table contents like so: ``` while ( table.firstChild ) table.removeChild( table.firstChild ); ``` or ``` table.children().remove(); ```
I recently had very large data-tables that would eat up 15 seconds to a minute of processing when making changes due to all the DOM manipulation being performed. I got it down to <1 second in all browsers but IE (it takes 5-10 seconds in IE8). The largest speed gain I found was to remove the parent element I was working with from the DOM, performing my changes to it, then reinserting it back into the DOM (in my case the `tbody`). Here you can see the two relevant lines of code which gave me huge performance increases (using Mootools, but can be ported to jQuery). ``` update_table : function(rows) { var self = this; this.body = this.body.dispose(); //<------REMOVED HERE rows.each(function(row) { var active = row.retrieve('active'); self.options.data_classes.each(function(hide, name) { if (row.retrieve(name) == true && hide == true) { active = false; } }); row.setStyle('display', (active ? '' : 'none')); row.store('active', active); row.inject(self.body); //<--------CHANGES TO TBODY DONE HERE }) this.body.inject(this.table); //<-----RE-INSERTED HERE this.rows = rows; this.zebra(); this.cells = this._update_cells(); this.fireEvent('update'); }, ```
Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements
[ "", "javascript", "jquery", "dom", "performance", "" ]
Suppose I have a table of customers: ``` CREATE TABLE customers ( customer_number INTEGER, customer_name VARCHAR(...), customer_address VARCHAR(...) ) ``` This table does **not** have a primary key. However, `customer_name` and `customer_address` *should* be unique for any given `customer_number`. It is not uncommon for this table to contain many duplicate customers. To get around this duplication, the following query is used to isolate only the unique customers: ``` SELECT DISTINCT customer_number, customer_name, customer_address FROM customers ``` Fortunately, the table has traditionally contained accurate data. That is, there has never been a conflicting `customer_name` or `customer_address` for any `customer_number`. However, suppose conflicting data did make it into the table. I wish to write a query that will fail, rather than returning multiple rows for the `customer_number` in question. For example, I tried this query with no success: ``` SELECT customer_number, DISTINCT(customer_name, customer_address) FROM customers GROUP BY customer_number ``` Is there a way to write such a query using standard SQL? If not, is there a solution in Oracle-specific SQL? **EDIT: The rationale behind the bizarre query:** Truth be told, this customers table does not actually exist (thank goodness). I created it hoping that it would be clear enough to demonstrate the needs of the query. However, people are (fortunately) catching on that the need for such a query is the least of my worries, based on that example. Therefore, I must now peel away some of the abstraction and hopefully restore my reputation for suggesting such an abomination of a table... I receive a flat file containing invoices (one per line) from an external system. I read this file, line-by-line, inserting its fields into this table: ``` CREATE TABLE unprocessed_invoices ( invoice_number INTEGER, invoice_date DATE, ... // other invoice columns ... customer_number INTEGER, customer_name VARCHAR(...), customer_address VARCHAR(...) ) ``` As you can see, the data arriving from the external system is denormalized. That is, the external system includes both the invoice data and its associated customer data on the same line. It is possible that multiple invoices will share the same customer, therefore it is possible to have duplicate customer data. The system cannot begin processing the invoices until all customers are guaranteed to be registered with the system. Therefore, the system must identify the unique customers and register them as necessary. This is why I wanted the query: **because I was working with denormalized data I had no control over**. ``` SELECT customer_number, DISTINCT(customer_name, customer_address) FROM unprocessed_invoices GROUP BY customer_number ``` Hopefully this helps clarify the original intent of the question. **EDIT: Examples of good/bad data** To clarify: `customer_name` and `customer_address` only have to be unique **for a particular `customer_number`**. ``` customer_number | customer_name | customer_address ---------------------------------------------------- 1 | 'Bob' | '123 Street' 1 | 'Bob' | '123 Street' 2 | 'Bob' | '123 Street' 2 | 'Bob' | '123 Street' 3 | 'Fred' | '456 Avenue' 3 | 'Fred' | '789 Crescent' ``` The first two rows are fine because it is the same `customer_name` and `customer_address` for `customer_number` 1. The middle two rows are fine because it is the same `customer_name` and `customer_address` for `customer_number` 2 (even though another `customer_number` has the same `customer_name` and `customer_address`). The last two rows are **not okay** because there are two different `customer_address`es for `customer_number` 3. The query I am looking for would fail if run against all six of these rows. However, if only the first four rows actually existed, the view should return: ``` customer_number | customer_name | customer_address ---------------------------------------------------- 1 | 'Bob' | '123 Street' 2 | 'Bob' | '123 Street' ``` I hope this clarifies what I meant by "conflicting `customer_name` and `customer_address`". They have to be unique per `customer_number`. **I appreciate those that are explaining how to properly import data from external systems.** In fact, I am already doing most of that already. I **purposely** hid all the details of what I'm doing so that it would be easier to focus on the question at hand. This query is not meant to be the only form of validation. I just thought it would make a nice finishing touch (a last defense, so to speak). This question was simply designed to investigate just what was possible with SQL. :)
A scalar sub-query must only return one row (per result set row...) so you could do something like: ``` select distinct customer_number, ( select distinct customer_address from customers c2 where c2.customer_number = c.customer_number ) as customer_address from customers c ```
Your approach is flawed. You do not want data that was successfully able to be stored to then throw an error on a select - that is a land mine waiting to happen and means you never know when a select could fail. What I recommend is that you add a unique key to the table, and slowly start modifying your application to use this key rather than relying on any combination of meaningful data. You can then stop caring about duplicate data, which is not really duplicate in the first place. It is entirely possible for two people with the same name to share the same address. You will also gain performance improvements from this approach. As an aside, I highly recommend you normalize your data, that is break up the name into FirstName and LastName (optionally MiddleName too), and break up the address field into separate fields for each component (Address1, Address2, City, State, Country, Zip, or whatever) **Update:** If I understand your situation correctly (which I am not sure I do), you want to prevent duplicate combinations of name and address from ever occurring in the table (even though that is a possible occurrence in real life). This is best done by a unique constraint or index on these two fields to prevent the data from being inserted. That is, catch the error **before** you insert it. That will tell you the import file or your resulting app logic is bad and you can choose to take the appropriate measures then. I still maintain that throwing the error when you query is too late in the game to do anything about it.
Safely normalizing data via SQL query
[ "", "sql", "denormalization", "" ]
I'm trying to provide two classes to my users - one that is read-only and one that is writable. The r/o will only have getters, while the writable will inherit from it and add the setters. I was under the impression that C# should be able to handle it, but the compiler disagreed. Why doesn't this work? Any workarounds? ``` class A { protected int m_val; public int Val { get { return m_val; } } } class B : A { public int Val { set { m_val = value; } } } class Test { static void Main(string[] args) { B b = new B(); b.Val++; // <-- WHY DOESN'T THIS WORK?! } } ``` P.S. the protected variable in the example above is artificial. My class actually wraps some native resources and the getting/setting happens on either const or mutable native pointer.
`partial` applies to a single type - not 2 types (A and B). You would need something more like below, ideally keeping the field private: ``` class A { private int m_val; public int Val { get { return m_val; } protected set { m_val = value; } } } class B : A { public new int Val { get { return base.Val;} set { base.Val = value; } } } ```
I'm not sure about why you need this, but a possibly better design would be to have two **interfaces** rather than two classes, and a single class that implements both. Then you could hand your client code whichever interface you'd like them to use, with the added bonus of being able to use the values set on a writable interface and hand it over to someone else as a read-only interface.
C# splitting property get and set between classes. Why doesn't it work?
[ "", "c#", "properties", "" ]
We have a large C++ application, which sometimes we need to run as a debug build in order to investigate bugs. The debug build is much much slower than the release build, to the point of being almost unusable. What tricks are available for making MSVC Debug builds execute faster without sacrificing too much on the debugability?
Use `#pragma optimize("", off)` at the top of selected files that you want to debug in release. This gives better stack trace/variable view. Works well if it's only a few files you need to chase the bug in.
We turned off Iterator debugging with the preprocessor symbols: ``` _HAS_ITERATOR_DEBUGGING=0 _SCL_SECURE=0 ``` It helped a bit, but was still not as fast as we'd like. We also ended up making our debug build more release-like by defining NDEBUG instead of \_DEBUG. There were a couple other options that we changed too, but I'm not remembering them. Its unfortunate that we needed to do all this, but our application has a certain amount of work needed to be done every 50ms or its unusable. VS2008 out of the box would give us ~60ms times for debug and ~6ms times for release. With the tweaks mentioned above we could get debug down to ~20ms or so, which is at least usable.
How to make MSVC debug builds run faster
[ "", "c++", "performance", "debugging", "visual-c++", "" ]
I'm trying to implement a single-sign-on link from application written in JAVA, to another web app written in PHP. I'd like a way to encrypt the username in .JSP and then decrypt in PHP. I need to find functions matching functions that will allow this.
Well pick any publicly available encryption method. An encryption is just an algorithm, in most cases should be possible to implement it in any language. There's not really an encryption that works in PHP and not JSP, or vice versa.
The encryption algorithm, block mode, and padding just need to be defined the same. PHP its own set of mcrypt libraries that supports many common symmetrical encryption algorithms. Although if possible i'd suggest using a different token like system for sharing authentication. Have a unique token that is only valid for that session not for that user, then store it in the server side database which both your php and jsp pages have access to. This way no decryption need take place.
Encrypt in JSP, Decrypt in PHP
[ "", "php", "jsp", "encryption", "" ]
I have a C++ project (VC++ 2008) that only uses the std namespace in many of the source files, but I can't find the "right" place to put "using namespace std;". If I put it in main.cpp, it doesn't seem to spread to my other source files. I had it working when I put this in a header file, but I've since been told that's bad. If I put it in all of my .cpp files, the compiler doesn't recognize the std namespace. How should this be done?
You generally have three accepted options: 1. Scope usage (std::Something) 2. Put using at the top of a source file 3. Put using in a common header file I think the most commonly accepted best practice is to use #1 - show exactly where the method is coming from. In some instances a file is so completely dependent on pulling stuff in from a namespace that it's more readable to put a using namespace at the top of the source file. While it's easy to do this due to being lazy, try not to succumb to this temptation. At least by having it inside the specific source files it's visible to someone maintaining the code. The third instance is generally poor practice. It can lead to problems if you're dependent on more than one external source file both of which may define the same method. Also, for someone maintaining your code it obfuscates where certain declarations are coming from. This should be avoided. **Summary:** Prefer to use scoped instances (std::Something) unless the excessive use of these decreases the legibility and maintainability of your code.
In your headers, the best thing I think is just to fully qualify the namespace, say of a member ``` #include <list> class CYourClass { std::list<int> myListOfInts; ... }; ``` You can continue to fully qualify in any function in a cpp ``` int CYourClass::foo() { std::list<int>::iterator iter = myListOfInts.begin(); ... } ``` you really never need "using namespace std" anywhere. Only if you find you are typing std:: too much, thats when its good to throw in a "using namespace std" to save your own keystrokes and improve the readability of your code. This will be limited to the scope of the statement. ``` int CYourClass::foo() { using namespace std; list<int>::iterator iter = myListOfInts.begin(); ... } ```
How to use a single namespace across files?
[ "", "c++", "visual-c++", "namespaces", "std", "" ]
Let's say I have a URL that looks something like this: > <http://www.mywebsite.com/param1:set1/param2:set2/param3:set3/> I've made it a varaible in my javascript but now I want to change "param2:set2" to be "param2:set5" or whatever. How do I grab that part of the string and change it? One thing to note is where "param2..." is in the string can change as well as the number of characters after the ":". I know I can use substring to get part of the string from the front but I'm not sure how to grab it from the end or anywhere in the middle.
How about this? ``` >>> var url = 'http://www.mywebsite.com/param1:set1/param2:set2/param3:set3/'; >>> url.replace(/param2:[^/]+/i, 'param2:set5'); "http://www.mywebsite.com/param1:set1/param2:set5/param3:set3/" ```
Use regular expressions ;) ``` url.replace(/param2:([\d\w])+/, 'param2:new_string') ```
Replacing part of a string with javascript?
[ "", "javascript", "string", "" ]
I've been learning how to use `Serializable`. I know if I create a class 'A' with different variables who implements `Serializable` and I add `Serializable` to my class, it's also `Serializable`. But, who is actually implementing those two methods to serialize? Does `Object` take care of everything or different classes overloads them when necessary?
The serialization is actually implemented in `java.io.ObjectOutputStream` (and java.io.ObjectInputStream) and some of its helper classes. In many cases, this built-in support is sufficient, and the developer simply needs to implement the marker interface `Serializable`. This interface is called a "marker" because it doesn't declare any methods, and thus doesn't require any special API on implementation classes. A programmer can add or replace default serialization mechanism with their own methods if needed. For example, if some additional initialization is required after deserializing an object, a method can be added with the following signature: ``` private void readObject(java.io.ObjectInputStream s) throws java.io.IOException, java.lang.ClassNotFoundException ``` For total control over serialization and deserialization, implement `java.io.Externalizable` instead of `Serializable`. There are many other extension points in Java serialization, if needed. The [serialization specification](http://java.sun.com/javase/6/docs/platform/serialization/spec/serialTOC.html) is an authoritative and complete source for learning about all of them.
I suppose the methods you are talking about are `readObject()` and `writeObject()`. You only need to implement those if you need to do custom serialization, for example when you have fields in your object that aren't serializable. If you only have serializable fields and primitives, you don't have to implement custom serialization methods. Also, you can skip some fields on serialization by adding the `transient` keyword to them.
Who actually implements serializable methods?
[ "", "java", "serializable", "" ]
My understanding is that you're typically supposed to use xor with GetHashCode() to produce an int to identify your data by its value (as opposed to by its reference). Here's a simple example: ``` class Foo { int m_a; int m_b; public int A { get { return m_a; } set { m_a = value; } } public int B { get { return m_b; } set { m_b = value; } } public Foo(int a, int b) { m_a = a; m_b = b; } public override int GetHashCode() { return A ^ B; } public override bool Equals(object obj) { return this.GetHashCode() == obj.GetHashCode(); } } ``` The idea being, I want to compare one instance of Foo to another based on the value of properties A and B. If Foo1.A == Foo2.A and Foo1.B == Foo2.B, then we have equality. **Here's the problem:** ``` Foo one = new Foo(1, 2); Foo two = new Foo(2, 1); if (one.Equals(two)) { ... } // This is true! ``` These both produce a value of 3 for GetHashCode(), causing Equals() to return true. Obviously, this is a trivial example, and with only two properties I could simply compare the individual properties in the Equals() method. However, with a more complex class this would get out of hand quickly. I know that sometimes it makes good sense to set the hash code only once, and always return the same value. However, for mutable objects where an evaluation of equality is necessary, I don't think this is reasonable. **What's the best way to handle property values that could easily be interchanged when implementing GetHashCode()?** > ### See Also > > [What is the best algorithm for an overridden System.Object.GetHashCode?](https://stackoverflow.com/questions/263400/what-is-the-best-algorithm-for-an-overridden-system-object-gethashcode)
First off - Do not implement Equals() only in terms of GetHashCode() - hashcodes will sometimes collide even when objects are not equal. The contract for GetHashCode() includes the following: * different hashcodes means that objects are definitely not equal * same hashcodes means objects *might* be equal (but possibly might not) Andrew Hare suggested I incorporate his answer: I would recommend that you read [this solution](https://stackoverflow.com/questions/263400/what-is-the-best-algorithm-for-an-overridden-system-object-gethashcode) (by our very own [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet), by the way) for a "better" way to calculate a hashcode. > No, the above is relatively slow and > doesn't help a lot. Some people use > XOR (eg a ^ b ^ c) but I prefer the > kind of method shown in Josh Bloch's > "Effective Java": > > ``` > public override int GetHashCode() > { > int hash = 23; > hash = hash*37 + craneCounterweightID; > hash = hash*37 + trailerID; > hash = hash*37 + craneConfigurationTypeCode.GetHashCode(); > return hash; > } > ``` > > The 23 and 37 are arbitrary numbers > which are co-prime. > > The benefit of the above over the XOR > method is that if you have a type > which has two values which are > frequently the same, XORing those > values will always give the same > result (0) whereas the above will > differentiate between them unless > you're very unlucky. As mentioned in the above snippet, you might also want to look at [Joshua Bloch's book, Effective Java,](https://rads.stackoverflow.com/amzn/click/com/0321356683) which contains a nice treatment of the subject (the hashcode discussion applies to .NET as well).
Andrew has posted a good example for generating a better hash code, but also bear in mind that you shouldn't use hash codes as an equality check, since they are not guaranteed to be unique. For a trivial example of why this is consider a double object. It has more possible values than an int so it is impossible to have a unique int for each double. Hashes are really just a first pass, used in situations like a dictionary when you need to find the key quickly, by first comparing hashes a large percentage of the possible keys can be ruled out and only the keys with matching hashes need to have the expense of a full equality check (or other [collision resolution](http://en.wikipedia.org/wiki/Hash_table#Collision_resolution) methods).
GetHashCode() problem using xor
[ "", "c#", ".net", "gethashcode", "" ]
Suppose I have a table named [ProductPriceHistory] like the following: ``` HistoryID..ProductCode..EffectDate.... Price.... IsActive...ProductName 1----------11-----------1 Jan 09-------100-------true-------AAA 2----------11-----------1 Feb 09-------150-------true-------AAA 3----------11-----------1 Mar 09-------200-------false------AAA 4----------22-----------1 Jan 09-------150-------true-------BBB 5----------22-----------1 Feb 09-------200-------true-------BBB 6----------22-----------1 Mr 09--------250-------true-------AAA ``` How can I find the final status of all active products on latest date? That is, my query will find the row: ``` 6----------22-----------1 Mr 09--------250-------true-------AAA ```
to get the given product code's value use: ``` DECLARE @ProcuctCode int SET @ProductCode=11 SELECT h.* FROM ProductPriceHistory h INNER JOIN (SELECT ProductCode ,MAX(EffectDate) AS MaxEffectDate FROM ProductPriceHistory WHERE ProductCode=@ProductCode AND IsActive='true' GROUP BY ProductCode ) dt ON h.ProductCode=dt.ProductCode AND h.EffectDate=dt.MaxEffectDate WHERE h.ProductCode =@ProductCode ``` to find all products use: ``` SELECT h.* FROM ProductPriceHistory h INNER JOIN (SELECT ProductCode ,MAX(EffectDate) AS MaxEffectDate FROM ProductPriceHistory WHERE IsActive='true' GROUP BY ProductCode ) dt ON h.ProductCode=dt.ProductCode AND h.EffectDate=dt.MaxEffectDate ORDER BY h.ProductCode ```
``` select * from ProductPriceHistory p1 where EffectDate = (select max(EffectDate) from ProductPriceHistory p2 where p1.ProductCode = p2.ProductCode and p2.EffectDate<=getdate()) ```
SQL - Problem with SQL Query ; group by and join
[ "", "sql", "grouping", "" ]
I have a project like this: Test Solution ``` Project TestApplication References: TestFunctions.dll(ver 1.0.0.0),Project TestDLL Project TestDLL References: TestFunctions.dll(ver 1.0.0.1) ``` In the application when i make a call to TestDLL.Methodx() inside it calls TestFunctions.HelloWorld() but it gives a MissingMethodException because TestFunctions.HelloWorld() only exists in TestFunctions.dll(ver 1.0.0.1) and it tries to call the function in the ver 1.0.0.0 dll... How can I force it to call to the correct version? I tried using "extern alias" to no avail...
Rename referenced dlls to TestFunctions1.0.0.0.dll and TestFunctions1.0.0.1.dll If the two references have the same name one will be overriden by the other one on compile
At the end I solved this as in my other question, renaming the `TestFunctions.dll` according to the project that uses it. It's more handwork but at least it works. I don't know if some of the other answers will work too because I don't have much time for testing them. Sorry people. Thanks for the help!
C# dll version conflict
[ "", "c#", "dll", "reference", "version", "conflict", "" ]
Am using following .net code to add objects to cache: ``` public static void Add<T>(string key, T dataToCache) { try { ApplicationLog.Instance.WriteInfoFormat("Inserting item with key {0} into Cache...", key); HttpRuntime.Cache.Insert( key, dataToCache, null, DateTime.Now.AddDays(7), System.Web.Caching.Cache.NoSlidingExpiration); } catch (Exception ex) { ApplicationLog.Instance.WriteException(ex); } } ``` and here is my code to retrieve values from cache: ``` public static T Get<T>(string key) { try { if (Exists(key)) { ApplicationLog.Instance.WriteInfoFormat("Retrieving item with key {0} from Cache...", key); return (T)HttpRuntime.Cache[key]; } else { ApplicationLog.Instance.WriteInfoFormat("Item with key {0} does not exist in Cache.", key); return default(T); } } catch(Exception ex) { ApplicationLog.Instance.WriteException(ex); return default(T); } } public static bool Exists(string key) { bool retVal = false; try { retVal= HttpRuntime.Cache[key] != null; } catch (Exception ex) { ApplicationLog.Instance.WriteException(ex); } return retVal; } ``` But i find that after every 2 minutes or so,the cached object value is getting set to null resulting in pulling that value from database again. What am i missing here?
When you say every two minutes the value inserted is set to null, does that mean just the item you're interested in or every single item in the cache? I ask this because the cache only exists as long as the application is running. If the application is restarted, the cache goes away. This would explain the behavior if everything goes away every 2 minutes. In that case you have a different problem on your hands: why does the application restart every 2 minutes. If it's only SOME items then it could be a memory issue. The cache cleans itself up in response to low memory. I believe there's a way to set priority on values inserted. But this should only be a problem when you're low on memory. If this still doesn't solve your problem, there is a way to discover why an item is being removed. It is explained [here](http://msdn.microsoft.com/en-us/library/7kxdx246(VS.80).aspx).
Well first of all your access isn't synchronized so that's a great source of problems. Reading from the HttpRuntime Cache is guarantied to be thread safe so you should really try reading your item as your first step on each and every cache operation. Between checking if `Exists` and actually retrieving the item lots of things can happen (such as your item not beeing there anymore). You should get a handle of the item you're looking for, and if it isn't there provide thread-safe insert by fetching it from your persistent data store. So your `Add` logic would get inside your `Get` IF the data isn't there. There's nothing fundamentally wrong in providing separate `Add` logic and you should measure the cost of hitting the database multiple times compared to blocking further requests for that specific piece of data. ``` T GetT(string key) { T item = (cache.Get(key) as T); if (item == null) { lock (yourSyncRoot) { // double check it here item = (cache.Get(key) as T); if (item != null) return item; item = GetMyItemFromMyPersistentStore(key); // db? if (item == null) return null; string[] dependencyKeys = {your, dependency, keys}; cache.Insert(key, item, new CacheDependency(null, dependencyKeys), absoluteExpiration, slidingExpiration, priority, null); } } return item; } ``` Depending on your expiration policy you'll get your data in memory and provide fast & synchronized access to it, but as I said, measure it and adjust it to your needs. In your business logic after updating your item and properly saving it to your persistent store, just remove it from cache and the next call to your `Get` will fetch it again.
Issue using HttpRuntime.Cache
[ "", "c#", "asp.net", "caching", "" ]
Can anyone tell me if its possible to redeclare a C# class in IronPython? If I have a C# class, would I be able to monkey-patch it from IronPython?
You cannot monkey patch from IronPython. IronPython treats all .NET classes just like CPython treats built-in types: they cannot be monkey patched. IronRuby on the other hand does support this.
You can monkey-patch from IronPython, but IPy is the only environment that will respect your changes; i.e. if you tried to mock out File.Create from IronPython, this would work fine for any IPy code, but if you called a C# method which called File.Create, it would get the real one, not the mock.
Redeclare .net classes in IronPython
[ "", "c#", "ironpython", "monkeypatching", "" ]
I'm trying to create a javascript error logging infrastructure. I'm trying to set `window.onerror` to be my error handler. It works in IE 6, but when I run it in Firefox, it runs into some conflicting `onerror` method. ``` var debug = true; MySite.Namespace.ErrorLogger.prototype = { //My error handling function. //If it's not in debug mode, I should get an alert telling me the error. //If it is, give a different alert, and let the browser handle the error. onError: function(msg, url, lineNo) { alert('onError: ' + msg); if (!debug) { alert('not debug mode'); return true; } else { alert(msg); return false; } } } //Document ready handler (jQuery shorthand) $(function() { log = $create(MySite.Namespace.ErrorLogger); window.onerror = log.onError; $(window).error(function(msg, url, line) { log.onError(msg, url, line); }); }); ``` If I use `setTimeout("eval('a')", 1);` where '`a`' is an undefined variable, my error handler is what's fired (it works). However, my error-logger needs to catch all errors thrown by clients accessing the website, not just incorrect code in one place. The code is on a .js page that is being called from the base page (C#) of a website. The site also uses jQuery, so I have a function that overrides the jQuery bind function, and that function works fine in both Firefox 3 and IE 6. I know that Firefox is seeing the error because it shows up in both the Error Console and Firebug, but *my* `window.onerror` function is still not being called. Any thoughts on how to override what Firefox is doing?
The following is tested and working in IE 6 and Firefox 3.0.11: ``` <html> <head> <title>Title</title> </head> <body> <script type="text/javascript"> window.onerror = function (msg, url, num) { alert(msg + ';' + url + ';' + num); return true; } </script> <div> ...content... </div> <script type="text/javascript"> blah; </script> </body> </html> ``` If some other JavaScript library you are loading is also attaching itself to `window.onerror` you can do this: ``` <script type="text/javascript"> function addHandler(obj, evnt, handler) { if (obj.addEventListener) { obj.addEventListener(evnt.replace(/^on/, ''), handler, false); // Note: attachEvent fires handlers in the reverse order they // were attached. This is the opposite of what addEventListener // and manual attachment do. //} else if (obj.attachEvent) { // obj.attachEvent(evnt, handler); } else { if (obj[evnt]) { var origHandler = obj[evnt]; obj[evnt] = function(evt) { origHandler(evt); handler(evt); } } else { obj[evnt] = function(evt) { handler(evt); } } } } addHandler(window, 'onerror', function (msg, url, num) { alert(msg + ';' + url + ';' + num); return true; }); addHandler(window, 'onerror', function (msg, url, num) { alert('and again ' + msg + ';' + url + ';' + num); return true; }); </script> ``` The above lets you attach as many `onerror` handlers as you want. If there is already an existing custom `onerror` handler it will invoke that one, then yours. Note that `addHandler()` can be used to bind multiple handlers to any event: ``` addHandler(window, 'onload', function () { alert('one'); }); addHandler(window, 'onload', function () { alert('two'); }); addHandler(window, 'onload', function () { alert('three'); }); ``` This code is new and somewhat experimental. I'm not 100% sure [addEventListener](https://developer.mozilla.org/en/DOM/element.addEventListener) does precisely what the manual attachment does, and as commented, [attachEvent](http://msdn.microsoft.com/en-us/library/ms536343.aspx) fires the handlers in the reverse order they were attached in (so you would see 'three, two, one' in the example above). While not necessarily "wrong" or "incorrect", it is the opposite of what the other code in `addHandler` does and as a result, could result in inconsistent behaviour from browser to browser, which is why I removed it. EDIT: This is a full test case to demonstrate the onerror event: ``` <html> <head> <title>Title</title> </head> <body> <script type="text/javascript"> function addHandler(obj, evnt, handler) { if (obj.addEventListener) { obj.addEventListener(evnt.replace(/^on/, ''), handler, false); } else { if (obj[evnt]) { var origHandler = obj[evnt]; obj[evnt] = function(evt) { origHandler(evt); handler(evt); } } else { obj[evnt] = function(evt) { handler(evt); } } } } addHandler(window, 'onerror', function (msg, url, num) { alert(msg + ';' + url + ';' + num); return true; }); </script> <div> ...content... </div> <script type="text/javascript"> blah; </script> </body> </html> ``` When the above code is put in test.htm and loaded into Internet Explorer from the local idks, you should see a dialog box that says `'blah' is undefined;undefined;undefined`. When the above code is put in test.htm and loaded into Firefox 3.0.11 (and the latest 3.5 as of this edit - Gecko/20090616) from the local disk, you should see a dialog box that says `[object Event];undefined;undefined`. If that is not happening then your copy of Firefox is not configured correctly or otherwise broken. All I can suggest is that you remove Firefox, remove your local profile(s) (information about how to find your profile is available [here](http://support.mozilla.com/en-US/kb/Profiles)) and reinstall the latest version and test again.
Remember to return **true** from any custom window.onerror handler, or firefox will handle it anyway.
window.onerror not firing in Firefox
[ "", "javascript", "jquery", "firefox", "onerror", "" ]
Say, I have a reference to a Class object with SomeType having a static method. Is there a way to call that method w/o instantiating SomeType first? Preferably not escaping strong typing. EDIT: OK, I've screwed up. ``` interface Int{ void someMethod(); } class ImplOne implements Int{ public void someMethod() { // do something } } Class<? extends Int> getInt(){ return ImplOne.class; } ``` In this case someMethod() can't be static anyways.
A static method, by definition, is called on a class and not on an instance of that class. So if you use: ``` SomeClass.someStaticMethod() ``` you are instantiating nothing (leave aside the class loading and instantiation of the `SomeClass` class itself, which the JVM handles and is way out of your scope). This is opposed to a regular method called on an object, which has already been instantiated: ``` SomeObject o = someObject; // had to be instantiated *somewhere* o.someMethod(); ```
I'm not sure exactly what the situation is, but if you're looking to execute the static method on a class without knowing the class type (i.e. you don't know it's SomeType, you just have the Class object), if you know the name and parameters of the method you could use reflection and do this: ``` Class c = getThisClassObjectFromSomewhere(); //myStaticMethod takes a Double and String as an argument Method m = c.getMethod("myStaticMethod", Double.class, String.class); Object result = m.invoke(null, 1.5, "foo"); ```
Calling static method on a class?
[ "", "java", "class", "static-methods", "" ]
Let's say I have a function call on a **select** or **where** clause in Oracle like this: ``` select a, b, c, dbms_crypto.hash(utl_raw.cast_to_raw('HELLO'),3) from my_table ``` A similar example can be constructed for MS SQLServer. What's the expected behavior in each case? Is the **HASH** function going to be called once for each row in the table, or DBMS will be smart enough to call the function just once, since it's a function with constant parameters and no *side-effects*? Thanks a lot.
The answer for Oracle is it depends. The function will be called for every row selected UNLESS the Function is marked 'Deterministic' in which case it will only be called once. ``` CREATE OR REPLACE PACKAGE TestCallCount AS FUNCTION StringLen(SrcStr VARCHAR) RETURN INTEGER; FUNCTION StringLen2(SrcStr VARCHAR) RETURN INTEGER DETERMINISTIC; FUNCTION GetCallCount RETURN INTEGER; FUNCTION GetCallCount2 RETURN INTEGER; END TestCallCount; CREATE OR REPLACE PACKAGE BODY TestCallCount AS TotalFunctionCalls INTEGER := 0; TotalFunctionCalls2 INTEGER := 0; FUNCTION StringLen(SrcStr VARCHAR) RETURN INTEGER AS BEGIN TotalFunctionCalls := TotalFunctionCalls + 1; RETURN Length(SrcStr); END; FUNCTION GetCallCount RETURN INTEGER AS BEGIN RETURN TotalFunctionCalls; END; FUNCTION StringLen2(SrcStr VARCHAR) RETURN INTEGER DETERMINISTIC AS BEGIN TotalFunctionCalls2 := TotalFunctionCalls2 + 1; RETURN Length(SrcStr); END; FUNCTION GetCallCount2 RETURN INTEGER AS BEGIN RETURN TotalFunctionCalls2; END; END TestCallCount; SELECT a,TestCallCount.StringLen('foo') FROM( SELECT 0 as a FROM dual UNION SELECT 1 as a FROM dual UNION SELECT 2 as a FROM dual ); SELECT TestCallCount.GetCallCount() AS TotalFunctionCalls FROM dual; ``` Output: ``` A TESTCALLCOUNT.STRINGLEN('FOO') ---------------------- ------------------------------ 0 3 1 3 2 3 3 rows selected TOTALFUNCTIONCALLS ---------------------- 3 1 rows selected ``` So the StringLen() function was called three times in the first case. Now when executing with StringLen2() which is denoted deterministic: ``` SELECT a,TestCallCount.StringLen2('foo') from( select 0 as a from dual union select 1 as a from dual union select 2 as a from dual ); SELECT TestCallCount.GetCallCount2() AS TotalFunctionCalls FROM dual; ``` Results: ``` A TESTCALLCOUNT.STRINGLEN2('FOO') ---------------------- ------------------------------- 0 3 1 3 2 3 3 rows selected TOTALFUNCTIONCALLS ---------------------- 1 1 rows selected ``` So the StringLen2() function was only called once since it was marked deterministic. For a function not marked deterministic, you can get around this by modifying your query as such: ``` select a, b, c, hashed from my_table cross join ( select dbms_crypto.hash(utl_raw.cast_to_raw('HELLO'),3) as hashed from dual ); ```
For SQL server, it will be evaluated for every single row. You will be MUCH better off by running the function once and assigning to a variable and using the variable in the query.
Oracle and SQLServer function evaluation in queries
[ "", "sql", "sql-server", "oracle", "rdbms", "" ]
Assigning a `QTextStream` to a `QFile` and reading it line-by-line is easy and works fine, but I wonder if the performance can be inreased by first storing the file in memory and then processing it line-by-line. Using [FileMon](http://technet.microsoft.com/en-us/sysinternals/bb896642.aspx) from sysinternals, I've encountered that the file is read in chunks of 16KB and since the files I've to process are not that big (~2MB, but many!), loading them into memory would be a nice thing to try. Any ideas how can I do so? `QFile` is inhereted from `QIODevice`, which allows me to `ReadAll()` it into `QByteArray`, but how to proceed then and divide it into lines?
QTextStream has a ReadAll function: <http://doc.qt.io/qt-4.8/qtextstream.html#readAll> Surely that's all you need? Or you could read all into the [QByteArray](http://doc.qt.io/qt-4.8/qtextstream.html#QTextStream-5) and QTextStream can take that as an input instead of a QFile.
Be careful. There are many effects to consider. For the string processing involved (or whatever you are doing with the file) there is likely no performance difference between doing it from memory and doing it from a file line by line provided that the file buffering is reasonable. Actually calling your operating system to do a low level read is VERY expensive. That's why we have buffered I/O. For small I/O sizes the overhead of the call dominates. So, reading 64 bytes at a time is likely 1/4 as efficient as reading 256 bytes at a time. (And I am talking about read() here, not fgets() or fread() both of which are buffered.) At a certain point the time required for the physical I/O starts to dominate, and when the performance doesn't increase much for a larger buffer you have found your buffer size. Very old data point: 7MHz Amiga 500, 100MB SCSI hard disk (A590+Quantum): my I/O performance really only hit maximum with a 256KB buffer size. Compared to the processor, that disk was FAST!!! (The computer had only 3MB of RAM. 256KB is a BIG buffer!) However, you can have too much of a good thing. Once your file is in memory, the OS can page that file back out to disk at its leisure. And if it does so, you've lost any benefit of buffering. If you make your buffers too big then this may happen under certain load situations and your performance goes down the toilet. So consider your runtime environment carefully, and limit memory footprint if need be. An alternative is to use mmap() to map the file into memory. Now the OS won't page your file out - rather, it will simply not page in, or if it needs memory it will discard any pieces of file cached in core. But it won't need to write anything to swap space - it has the file available. I'm not sure if this would result in better performance, however, because it's still better to do I/O in big chunks, and virtual memory tends to move things in page-sized chunks. Some memory managers may do a decent job of moving pages in chunks to increase I/O bandwidth, and prefetching pages. But I haven't really studied this in detail. Get your program working correctly first. Then optimize.
Reading from a file not line-by-line
[ "", "c++", "performance", "qt", "file", "filesystems", "" ]
Do you know any step by step tutorial about installing Liferay cluster in Glassfish ?
Google found me this writeup called [how-to-install-and-configure-a-liferay-cluster](http://www.randombugs.com/java/glassfish/how-to-install-and-configure-a-liferay-cluster.html) Enjoy!
I am working on the same problem, or very similar one -- deploying the Liferay WAR file to a glassfish cluster with two nodes. I don't have it configured completely correctly yet, but I do have it deployed successfully. Maybe this will help you, too, and we can compare notes. Here's what I had to do. First, the groundwork. GlassFish is a bit weird to me in the way it deploys the WAR. As I understand it, WAR files are exploded somewhere by the node-agent, but you don't get access to poke at the files once they are deployed. This means as you tweak your configuration files (portal-ext.properties) you are going to need to re-deploy every time -- and Liferay is pretty big at ~73MB. This is going to cause PermGen out-of-space exceptions periodically, and require you to reboot your cluster. So you'd be wise to set the JVM option to increase the size of the PermGen space in glassfish. There is a good explanation of the problem here: <http://www.freshblurbs.com/explaining-java-lang-outofmemoryerror-permgen-space> That JVM option won't solve the problem, but it will increase the delay between cluster reboots (glassfish console didn't work to reboot, btw; we had to do it by command-line). The next question was: where do the dependency JAR files go? We're operating in a shared cluster running other services, so putting it in the domains/domain1/lib folder won't work. We stuck the dependency JAR files in the liferay war file, in WEB-INF/lib, and it seems to be happy with that. Next: where does portal-ext.properties override file go? The answer is again in the liferay war file, in WEB-INF/classes. This is also a contributing reason why we need to re-deploy every time we modify a property as discussed above. Next: context. By default, liferay tries to deploy to the root context "/". We're in a shared environment, so we deployed the WAR to the context /lr1. In portal-ext.properties we had to set the property > portal.ctx=/lr1 Next: It doesn't make much sense to use the embedded HSQL in a clustered environment; we set up a JNDI name for our databse connection pool using GlassFish. There are instructions on how to do this in the Liferay documentation guides. In the portal-ext.properties file, we were then able to put > jdbc.default.jndi.name=jdbc/LiferayPool We also don't want to store Lucene indexes on the filesystem. We overrode these properties in the portal-ext.properties file to fix that: > lucene.store.type=jdbc > > lucene.store.jdbc.auto.clean.up=true > > lucene.store.jdbc.dialect.oracle=org.apache.lucene.store.jdbc.dialect.OracleDialect Similar logic applies to JackRabbit repository; I currently have the following property set-up (i don't know if this is correct, but the document library is working): > jcr.jackrabbit.repository.root=WEB-INF/classes/ I had to put jackrabbit's repository.xml file in WEB-INF/classes too. That xml file tells jackrabbit what database connection parameters to use (see Apache's Jackrabbit configuration page for more details). Again, I'm not sure putting that in WEB-INF/classes was the right idea, but it probably has to go somewhere in the WAR file or be on some shared file-system for all nodes in your cluster to share the same data. I have not messed with EHcache yet, but I did put in the the hibernate property: > hibernate.dialect=org.hibernate.dialect.Oracle10gDialect for our oracle db. I believe it uses the default JDBC property above to reference our JNDI database connection. The concept of "Liferay Home Directory" variable being "one folder above the server home" is something I'm still wrestling with, and it is causing me to have errors every time an HTTP request is sent relating to /opt/ee/license. The user that liferay is running as does not have permission to modify /opt, and in any case that's a bad idea in a clustered environment. I'm not sure where the setting is, because when I look all I see is > liferay.home=${resource.repositories.root} and > resource.repositories.root=${default.liferay.home} I don't know where default.liferay.home is defined yet; still working on that. Deploying liferay to a clustered environment is unfortunately not that well documented yet, but I hope sharing this helps you in some small way. Good luck!
Liferay Cluster
[ "", "java", "glassfish", "liferay", "" ]
I have to implement a read/write lock in C++ using the Win32 api as part of a project at work. All of the existing solutions use kernel objects (semaphores and mutexes) that require a context switch during execution. This is far too slow for my application. I would like implement one using only critical sections, if possible. The lock does not have to be process safe, only threadsafe. Any ideas on how to go about this?
I don't think this can be done without using at least one kernel-level object (Mutex or Semaphore), because you need the help of the kernel to make the calling process block until the lock is available. Critical sections do provide blocking, but the API is too limited. e.g. you cannot grab a CS, discover that a read lock is available but not a write lock, and wait for the other process to finish reading (because if the other process has the critical section it will block other readers which is wrong, and if it doesn't then your process will not block but spin, burning CPU cycles.) However what you can do is use a spin lock and fall back to a mutex whenever there is contention. The critical section is itself implemented this way. I would take an existing critical section implementation and replace the PID field with separate reader & writer counts.
If you can target Vista or greater, you should use the built-in [SRWLock's](http://msdn.microsoft.com/en-us/library/aa904937(VS.85).aspx). They are lightweight like critical sections, entirely user-mode when there is no contention. Joe Duffy's blog has some recent [entries](http://www.bluebytesoftware.com/blog/2009/06/05/AScalableReaderwriterSchemeWithOptimisticRetry.aspx) on implementing different types of non-blocking reader/writer locks. These locks do spin, so they would not be appropriate if you intend to do a lot of work while holding the lock. The code is C#, but should be straightforward to port to native. You can implement a reader/writer lock using critical sections and events - you just need to keep enough state to only signal the event when necessary to avoid an unnecessary kernel mode call.
Win32 Read/Write Lock Using Only Critical Sections
[ "", "c++", "multithreading", "winapi", "critical-section", "" ]
EDIT: declaring them private was a typo, I fixed it: Relating to another question, if I declared a static variable in a class, then derived a class from that, is there any way to declare the static variable as individual per each class. Ie: ``` class A: { public: static int x; }; class B:A { public: const static int x; }; ``` does that define TWO DIFFERENT static variables `x`, one for A and one for B, or will I get an error for redefining `x`, and if I do get an error, how do a I create two seperate static variables?
When you're using static variables, it might be a good idea to refer to them explicitly: ``` public class B:A { public const static int x; public int foo() { return B::x; } } ``` That way, even if the class "above" yours in the hierarchy decides to create a similarly-named member, it won't break your code. Likewise, I usually try to us the `this` keyword when accessing normal member fields. **Updated** to use C++ syntax.
That creates two separate static variables.
How to declare two different static variables? (C++)
[ "", "c++", "inheritance", "static-variables", "" ]
I intend to write a small application to scratch a personal itch and probably make the life of some colleagues easier. Here is what I have: * 10+ years of experience in C * Plenty of experience in programming against the Win16/32 API in C from the Win3.1 to 2000 days. * C library written by myself already doing about 75% of what the application shall do. What the application shall do: * open a binary, feed it into the mentioned library. * take the resulting text output and feed it into a new Excel Workbook. * apply some formating. * integrate nicely with the Windows environment (availability in "Open With...", remember some stuff using the registry etc.) * (maybe later) before giving the CSV data to Excel, parse it by looking up the meaning of some values in an XML file. Except for the XML parsing part I have done all of that stuff before including COM / Office Automation in C/Win32. There is a lot of boilerplate code involved, but it is doable and the result will be a pretty small application without the need for an installer. So why even think about C# / .Net? * no experience with parsing XML * the promise of less boilerplate code for the Windows and Excel stuff (yes, I have done C++ with OWL, MFC, ATL etc. but I am not going there anymore - not for free/fun) * Since I have also experience with C++, VB(not .Net) and a little Java / Objective-C I suppose learning C# will all be about the .Net libraries and not actually about the language. My considerations so far: * Learning .NET might be fun and might result in less code / first steps in a more modern environment. * Sticking with what I know will lead to a predictable outcome in terms of effort and function (except for the optional XML stuff) * VB looked great at the beginning until the projects where about 80% done, then the pain started and the DLL coding in C. I am concerned history could repeat itself if I choose .Net. My primary objective is the functionality. Effort is a concern. The XML parsing is optional. Please advice. **Update:** one thing I forgot to mention explicitly is that I am also worried about easy deployment of the tool to my co-workers. With Win32 I am pretty sure I can come up with an EXE file < 1Mb that can be easily emailed and does not require installation. With .Net not so much. Can I create the necessary MSI or whatever in Visual Studio *Express* (free) or do I need 3rd party tools?
as others have your question mostly covered, I'd just like to quickly comment on your considerations: > * Learning .NET might be fun and might result in less code / first steps in a more modern environment. Totally agreed. It is definitely fun and usually it does result in less code. The investment you make now will certainly benefit you in future projects. It is way faster to program in .Net than in C. Not only it is easier, but it is also safer. You are isolated from many programming errors common in C mostly related to memory mismanagement. You also get a very complete managed API to do stuff you would usually need to build your own framework. > * Sticking with what I know will lead to a predictable outcome in terms of effort and function (except for the optional XML stuff) Hence your indecision. :-) > * VB looked great at the beginning until the projects where about 80% done, then the pain started and the DLL coding in C. I am concerned history could repeat itself if I choose .Net. My primary objective is the functionality. Effort is a concern. The XML parsing is optional. .Net is an entirely different beast from VB. Most of the things you wouldn't be able to do in VB, or at least do them easily, are supported by .Net. For instance, Windows Services are a snap to build in .Net. Socket programming is also supported, but there are very few reasons to do it yourself, as you've got loads of communication APIs with .Net. You've got web-services, .Net Remoting, MSMQ management, and more recently WCF. Proper multithreading is supported by .Net, unlike the idiotic apartment model in VB. In case you really need to go low level, you can also actually use pointers in C#, inside of unsafe code blocks, even though I would never advise to do so. If you really need to do things in C, then integrating is also relatively easy. You can create COM objects and use interop to work with them from .Net. You can also interact directly with plain ol' dlls using DllImport. Using [www.pinvoke.net](http://www.pinvoke.net) makes it easier. When I developed in VB, sometimes I also had to go back to C++ to do stuff that I wasn't able of doing in VB. Since I began programming in .Net, the only extremely rare scenarios I would need to go back to C++ were when I needed to use legacy COM components that used types I was having a hard time to marshal via interop. I wouldn't worry about history repeating itself.
If you're using COM, you may be interested in using C# 4.0 instead of earlier versions - the downside being that it's only in beta. But basically it makes COM stuff somewhat less ugly for various reasons. I'd expect there to be plenty of good C libraries for XML parsing by now. I would expect the *main* benefit to actually be the knowledge gained. I doubt that you'll actually produce the code faster for *this* project, but the next one may well be a lot quicker. How much do you care about learning new stuff?
Effort estimation: using C / Win32 or learning C# / .NET
[ "", "c#", ".net", "c", "xml", "winapi", "" ]
I am working on a project, which is having huge database. [ around 32gb data in one week ]. We are using DB2, and spring-framework + jdbc. I just wanted to know is JDBC capable of handling this much of data? Or should i use something else? Or if JDBC is capable of doing this then should i use some specific technique for this thing.
JDBC is just the connection - it doesn't care how much data is in the database. I'd expect it to be more of an issue at the database side, if anywhere. If you've got indexes which are expensive to create etc, you're more likely to have issues - but to be honest, 32GB in a week isn't really that big. I'd expect any "real" server-side database to handle it fairly easily. I suggest you try it before committing yourself too far down any particular path. Chuck data at it as quickly as you can. I'd be slightly worried if you couldn't create 32GB of data in a few hours.
JDBC is just the interface between the database and the java program. It's up to the database to handle that amount of data. In the java world, there is hardly an alternative to using JDBC when it comes to database connectivity.
Is JDBC capable of handling huge database?
[ "", "java", "spring", "jdbc", "" ]
I am trying to make a decision whether I should use a REST service or a SOAP service for some web facing functions that I am producing. This decision is based on whether I can easily use the REST service in implementation. I would prefer to use REST, though I don't want to spend days coding the object model in PHP. The services are being developed in .NET but will be consumed mainly by PHP. Basically it has come down to one point: Ease of integration. Using SOAP in PHP I can use the NuSOAP library, which will generate the object model. However with REST I cannot seem to find a way to easily generate that model, if this is possible I would use REST services as they are easier to document and extend, and also have the JSON abilities as well. Can I generate an object model in PHP from an XML file/schema that I could then serialize with the REST service?
You might not even have to go the class route. Simply ingest the data using simplexml and then traverse it as if it were an object. Or if you have json, `json_decode($data, TRUE)` would do the same thing (without attributes in brackets). ``` $ch = curl_init("http://example.com/some/rest/endpoint"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); $data = curl_exec($ch); $obj = simplexml_load_string($data); print $obj->some->data->you['need']; ``` That would print `here` if your XML was something like ``` <_> <some> <data> <you need="here" /> </data> </some> </_> ```
If I were you I would: * investigate if NuSOAP tools can be used on just XSD. In the .NET world, you have svcutil (or in the ASMX days, wsdl.exe) to digest .wsdl files to produce proxy classes. But if you have only .xsd files, you can use the xsd.exe tool, or the "aftermarket" [XsdObjectGen](http://msdn.microsoft.com/en-us/xml/bb190622.aspx), which is like a supercharged xsd.exe. Are there similar tools in NuSOAP to do the same? Maybe this is obvious and you've already done it. * if that doesn't pan out, produce a dummy WSDL, and stuff the XSD you have into it. Then, process the .wsdl file with the NuSOAP tools. Grab the generated code, and remove the soap envelope and communications stuff, but retain the object serialization stuff. Not sure if this is possible, the way the PHP (or is it C?) code is generated by the NuSOAP tools. In .NET, it's easy to break the different pieces out.
Generating objects in PHP using REST
[ "", "php", "xml", "rest", "soap", "" ]
In the [ArduinoUnit](http://code.google.com/p/arduinounit) unit testing library I have provided a mechanism for giving a TestSuite a name. A user of the library can write the following: ``` TestSuite suite("my test suite"); // ... suite.run(); // Suite name is used here ``` This is the expected usage - the name of the TestSuite is a string literal. However to prevent hard-to-find bugs I feel obliged to cater for different usages, for example: ``` char* name = (char*) malloc(14); strcpy(name, "my test suite"); TestSuite suite(name); free(name); // ... suite.run(); // Suite name is used here ``` As such I have implemented TestSuite like this: ``` class TestSuite { public: TestSuite(const char* name) { name_ = (char*) malloc(strlen(name) + 1); strcpy(name_, name); } ~TestSuite() { free(name_); } private: char* name_; }; ``` Putting aside the issue of failing to deal with memory allocation failures in the constructor I'd prefer to simply allocate the pointer to a member variable like this: ``` class TestSuite { public: TestSuite(const char* name) : name_(name) { } private: const char* name_; }; ``` Is there any way I can change the interface to force it to be used 'correctly' so that I can do away with the dynamic memory allocation?
What if you provide two overloaded constructors? ``` TestSuite(const char* name) ... TestSuite(char* name) ... ``` If called with a `const char*`, then the constructor could make a copy of the pointer, assuming that the string will not go away. If called with a `char*`, the constructor could make a copy of the whole string. Note that it is still possible to subvert this mechanism by passing a `const char*` to the constructor when the `name` is in fact dynamically allocated. However, this may be sufficient for your purposes. I should note that I have never actually seen this technique used in an API, it was just a thought that occurred to me as I was reading your question.
Well, you can use a std::string that will take care of all memory allocation ``` class TestSuite { public: TestSuite(const std::string &name):name_(name) { } ~TestSuite() { } private: std::string name_; }; ``` **Edit** : If it is the call to malloc() that you want to avoid you could do this : ``` class TestSuite { public: TestSuite(const char *name){ memcpy(name_, name, min(16, strlen(name)); } private: char name_[16]; }; ``` This will waste some memory however, which can be an issue on embedded platforms.
How can I prevent the need to copy strings passed to a avr-gcc C++ constructor?
[ "", "c++", "embedded", "arduino", "avr", "avr-gcc", "" ]
I have a Unicode string consisting of letters, digits and punctuation marks. Ho can I detect characters that are digits and letters (not necessarily ASCII) with a C++ standard library or Win32 API?
**iswdigit()**, **iswalpha()** and **iswalnum()** are the functions you are looking for. Cheers !
Another option is the Win32 API call [GetStringTypeW()](http://msdn.microsoft.com/en-us/library/dd318119(VS.85).aspx)
Is there a way to detect an alphanumeric Unicode symbol?
[ "", "c++", "string", "winapi", "unicode", "" ]
I'm trying my hand at behavior driven development and I'm finding myself second guessing my design as I'm writing it. This is my first greenfield project and it may just be my lack of experience. Anyway, here's a simple spec for the class(s) I'm writing. It's written in NUnit in a BDD style instead of using a dedicated behavior driven framework. This is because the project targets .NET 2.0 and all of the BDD frameworks seem to have embraced .NET 3.5. ``` [TestFixture] public class WhenUserAddsAccount { private DynamicMock _mockMainView; private IMainView _mainView; private DynamicMock _mockAccountService; private IAccountService _accountService; private DynamicMock _mockAccount; private IAccount _account; [SetUp] public void Setup() { _mockMainView = new DynamicMock(typeof(IMainView)); _mainView = (IMainView) _mockMainView.MockInstance; _mockAccountService = new DynamicMock(typeof(IAccountService)); _accountService = (IAccountService) _mockAccountService.MockInstance; _mockAccount = new DynamicMock(typeof(IAccount)); _account = (IAccount)_mockAccount.MockInstance; } [Test] public void ShouldCreateNewAccount() { _mockAccountService.ExpectAndReturn("Create", _account); MainPresenter mainPresenter = new MainPresenter(_mainView, _accountService); mainPresenter.AddAccount(); _mockAccountService.Verify(); } } ``` None of the interfaces used by MainPresenter have any real implementations yet. AccountService will be responsible for creating new accounts. There can be multiple implementations of IAccount defined as separate plugins. At runtime, if there is more than one then the user will be prompted to choose which account type to create. Otherwise AccountService will simply create an account. One of the things that has me uneasy is how many mocks are required just to write a single spec/test. Is this just a side effect of using BDD or am I going about this thing the wrong way? [Update] Here's the current implementation of MainPresenter.AddAccount ``` public void AddAccount() { IAccount account; if (AccountService.AccountTypes.Count == 1) { account = AccountService.Create(); } _view.Accounts.Add(account); } ``` Any tips, suggestions or alternatives welcome.
When doing top to down development it's quite common to find yourself using a lot of mocks. The pieces you need aren't there so naturally you need to mock them. With that said this does feel like an acceptance level test. In my experience BDD or Context/Specification starts to get a bit weird at the unit test level. At the unit test level I'd probably be doing something more along the lines of... ``` when_adding_an_account should_use_account_service_to_create_new_account should_update_screen_with_new_account_details ``` You may want to reconsider your usage of an interface for IAccount. I personally stick with keeping interfaces for services over domain entities. But that's more of a personal preference. A few other small suggestions... * You may want to consider using a Mocking framework such as Rhino Mocks (or Moq) which allow you to avoid using strings for your assertions. ``` _mockAccountService.Expect(mock => mock.Create()) .Return(_account); ``` * If you are doing BDD style one common pattern I've seen is using chained classes for test setup. In your example... ``` public class MainPresenterSpec { // Protected variables for Mocks [SetUp] public void Setup() { // Setup Mocks } } [TestFixture] public class WhenUserAddsAccount : MainPresenterSpec { [Test] public void ShouldCreateNewAccount() { } } ``` * Also I'd recommend changing your code to use a guard clause.. ``` public void AddAccount() { if (AccountService.AccountTypes.Count != 1) { // Do whatever you want here. throw a message? return; } IAccount account = AccountService.Create(); _view.Accounts.Add(account); } ```
The test life support is a lot simpler if you use an auto mocking container such as RhinoAutoMocker (part of [StructureMap](http://sourceforge.net/project/showfiles.php?group_id=104740)) . You use the auto mocking container to create the class under test and ask it for the dependencies you need for the test(s). The container might need to inject 20 things in the constructor but if you only need to test one you only have to ask for that one. ``` using StructureMap.AutoMocking; namespace Foo.Business.UnitTests { public class MainPresenterTests { public class When_asked_to_add_an_account { private IAccountService _accountService; private IAccount _account; private MainPresenter _mainPresenter; [SetUp] public void BeforeEachTest() { var mocker = new RhinoAutoMocker<MainPresenter>(); _mainPresenter = mocker.ClassUnderTest; _accountService = mocker.Get<IAccountService>(); _account = MockRepository.GenerateStub<IAccount>(); } [TearDown] public void AfterEachTest() { _accountService.VerifyAllExpectations(); } [Test] public void Should_use_the_AccountService_to_create_an_account() { _accountService.Expect(x => x.Create()).Return(_account); _mainPresenter.AddAccount(); } } } } ``` Structurally I prefer to use underscores between words instead of RunningThemAllTogether as I find it easier to scan. I also create an outer class named for the class under test and multiple inner classes named for the method under test. The test methods then allow you to specify the behaviors of the method under test. When run in NUnit this gives you a context like: ``` Foo.Business.UnitTests.MainPresenterTest When_asked_to_add_an_account Should_use_the_AccountService_to_create_an_account Should_add_the_Account_to_the_View ```
Is this a poor design?
[ "", "c#", "unit-testing", "bdd", "" ]
What are good things to check for, with respect to error-handling, when you are dealing with the data-access-layer? For example, let's assume I have this function.. ``` Public Function UserExists(ByVal userName As String) As DataTable Dim dt As Object = Nothing Dim arSqlParameters(0) As SqlParameter arSqlParameters(0) = New SqlParameter("@UserName", SqlDbType.NVarChar, 50) arSqlParameters(0).value = userName dt = ABC.APP.DAL.DALHelper.ExecuteDatatable(ConnectionString, CommandType.StoredProcedure, "dbo.aspnet_sprGetUserByUsername", arSqlParameters) Return dt End Function ``` This seems like very lazy and unsafe coding. How would you go about ensuring that your code elegantly handles anything unexpected in a situation like this? I'm new to vb.net and the app that I'm working on has no error handling, so I figured this would be the best place to look for advice. Thanks in advance :)
Not sure why you're not declaring dt as a datatable - what is the motiviation for "dim dt as object = nothing"? Really the only line that can reasonably fail is the "dt=ABC.APP.DAL...." call. In that line, you could have a few errors: * The stored procedure or parameter names are wrong. This should be caught at design-time (not by a built-in checking mechanism, but the first time that you try to run the code) * An error occurs in the stored procedure. Sprocs use deferred name resolution, which can lead to runtime errors if (for instance) objects don't exist at the time that the sproc is called. Again, this is most likely to rear it's head in testing. * A deadlock. In this case, you should catch and resubmit the batch. [Here](http://technet.microsoft.com/en-us/library/ms189583.aspx)'s an intro and set of links on handling SQL errors in application code. * A parameter that is passed is invalid (too long, wrong datatype, etc). This should be checked *before* you call the sproc.
Opinions are likely to vary wildly on a topic like this, but here's my take. Only try to deal with the exceptions that you relate to this area. That's a vague statement, but what I mean is, did the user pass a string with more chars than the column in the db. Or did they violate some other business rule. I would *not* catch errors here that imply the database is down. Down this far in the code, You catch errors that you can deal with, and your app needs it's database. Declare a more global exception handler that logs the problem, notifies someone, whatever... and present the user with a "graceful" exit. I don't see any value in catching in each method a problem with the database. You're just repeating code for a scenario that could bomb any part of your datalayer. That being said, if you choose to catch db errors (or other general errors) in this method, at least set the innerexception to the caught exception if you throw a new exception. Or better, log what you want and then just "throw", not "throw ex". It will keep the original stack trace. Again, you will get a lot of varying thoughts on this, and there is no clear right and wrong, just preferences.
Correct error-handling practices for the data-layer
[ "", "sql", "vb.net", "stored-procedures", "" ]
I want to remove, if possible, the includes of both <vector> and <string> from my class header file. Both string and vector are return types of functions declared in the header file. I was hoping I could do something like: ``` namespace std { template <class T> class vector; } ``` And, declare the vector in the header and include it in the source file. Is there a reference covering situations where you must include in the header, and situations where you can pull the includes into the source file?
You cannot safely forward declare STL templates, at least if you want to do it portably and safely. The standard is clear about the minimum requirements for each of the STL element, but leaves room for implemtation extensions that might add extra template parameters *as long as those have default values*. That is: the standard states that std::vector is a template that takes at least 2 parameters (type and allocator) but can have any number of extra arguments in a standard compliant implementation. What is the point of not including string and vector headers? Surely whoever is going to use your class must have already included it since it is on your interface. When you ask about a reference to decide when to include and when to forward declare, my advice would be: include everything that is part of your interface, forward declare internal details. There are more issues here that plain compilation performance. If you push the include of a type that is in your public (or protected) interface outside of the header you will be creating dependencies on the order of includes. Users must know that they must include *string* before including your header, so you are giving them one more thing to worry about. What things should be included in the implementation file: implementation details, loggers, elements that don't affect the interface (the database connectors, file headers), internal implementation details (i.e. using STL algorithms for your implementation does not affect your interface, functors that are created for a simple purpose, utilities...)
With a very few exceptions, you are not allowed to add things to the std:; namespace. For classes like vector and string, you therefore have no option but to #include the relevant Standard header files. Also, notice that string is not a class, but a typedef for `basic_string<char>`.
C++, removing #include<vector> or #include<string> in class header
[ "", "c++", "templates", "include", "" ]
I'm in the process of learning php and creating themes. Unfortunately, while I was editing a theme that i was currently using in drupal, I made a mistake in the theme such that nothing shows up anymore, even if i were to hit drupal/index.php. I want to change my broken drupal theme to a working one but i'm unable to do so because I can't even view the administration section.
The *[How To reset your theme via the database](http://drupal.org/node/200774)* page on Drupal.org has instructions for changing your theme directly from the SQL prompt. It's not immediately clear whether this will work in the most recent version of Drupal, so **back up your database before attempting this**.
The easiest way to change your frontend theme is to set it in your sites/default/settings.php: ``` $conf['theme_default'] = 'minelli'; ```
Switching themes in Drupal without the web interface
[ "", "php", "drupal", "content-management-system", "themes", "" ]
I have a string and I need to see if it contains the following "\_archived". I was using the following: ``` preg_match('(.*)_archived$',$string); ``` but get: ``` Warning: preg_match() [function.preg-match]: Unknown modifier '_' in /home/storrec/classes/class.main.php on line 70 ``` I am new to Regular Expressions so this is probably very easy. Or should I be using something a lot simpler like ``` strstr($string, "_archived"); ``` Thanks in advance
strstr is enough in this case, but to solve your problem, you need to add delimiters to your regex. A delimiter is a special character that starts and ends the regex, like so: ``` preg_match('/_archived/',$string); ``` The delimiter can be a lot of different characters, but usual choices are /, # and !. From the PHP manual: > Any character can be used for delimiter as long as it's not alphanumeric, backslash (), or the null byte. If the delimiter character has to be used in the expression itself, it needs to be escaped by backslash. Since PHP 4.0.4, you can also use Perl-style (), {}, [], and <> matching delimiters. Read all about PHP regular expression syntax [here](http://se.php.net/manual/en/regexp.reference.php). You can see some examples of valid (and invalid) patterns in the [PHP manual here](http://se.php.net/manual/en/pcre.examples.php).
strstr($string, "\_archived"); Is going to be way easier for the problem you describe. As is often quoted > Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. - Jamie Zawinski
Simple RegEx PHP
[ "", "php", "regex", "" ]
The situation goes like this, I have two network interfaces in my macbook pro. One is a statically defined Ethernet interface and the other one is a DHCP configured wireless interface. I am trying to figure out how to configure a specific java project to use my statically wired interface only and not simply pick the first one in the list either through the JVM or through my IDEA ( IntelliJ 8 ) I have managed to achieve this via an instance of vmware where I have my virtual environment feeding off my wired interface only but this ads performance problems and just general headaches of flipping between windows / osx. I have so far been unable to find a jvm property that let's me specify which nic gets used nor have I seen documentation for IntelliJ that defines that. If anyone has any ideas on how to solve this as painlessly as possible I would appreciate the help.
Yes, you can specify by following way: ``` int port= 52000; //some free port MulticastSocket msocket = new MulticastSocket(port); msocket.setInterface(InetAddress.getByName("172.16.30.205")); ``` Where `172.16.30.205` is one of my PC's IP address and I would like to communicate through interface.
`java.net.Socket` has a constructor which specifies which local IP address to bind to, and since each network interface gets its own IP address, you can use that. But getting from a `java.net.Socket` to a higher-level protocol (e.g. HTTP) is another matter entirely, but you don't specify what that would be. I don't know of a JVM-level way of doing this, sadly, just the above programmatic approach.
Is it possible to specify which network interface for a JVM ( or IDE ) to use
[ "", "java", "networking", "" ]
I know how to do this in an ugly way, but am wondering if there is a more elegant and succinct method. I have a string array of e-mail addresses. Assume the string array is of arbitrary length -- it could have a few items or it could have a great many items. I want to build another string consisting of say, 50 email addresses from the string array, until the end of the array, and invoke a send operation after each 50, using the string of 50 addresses in the Send() method. The question more generally is what's the cleanest/clearest way to do this kind of thing. I have a solution that's a legacy of my VBScript learnings, but I'm betting there's a better way in C#.
You want elegant and succinct, I'll give you elegant and succinct: ``` var fifties = from index in Enumerable.Range(0, addresses.Length) group addresses[index] by index/50; foreach(var fifty in fifties) Send(string.Join(";", fifty.ToArray()); ``` Why mess around with all that awful looping code when you don't have to? You want to group things by fifties, *then group them by fifties.* That's what the group operator is for! UPDATE: commenter MoreCoffee asks how this works. Let's suppose we wanted to group by threes, because that's easier to type. ``` var threes = from index in Enumerable.Range(0, addresses.Length) group addresses[index] by index/3; ``` Let's suppose that there are nine addresses, indexed zero through eight What does this query mean? The `Enumerable.Range` is a range of nine numbers starting at zero, so `0, 1, 2, 3, 4, 5, 6, 7, 8`. Range variable `index` takes on each of these values in turn. We then go over each corresponding `addresses[index]` and assign it to a group. What group do we assign it to? To group `index/3`. Integer arithmetic rounds towards zero in C#, so indexes 0, 1 and 2 become 0 when divided by 3. Indexes 3, 4, 5 become 1 when divided by 3. Indexes 6, 7, 8 become 2. So we assign `addresses[0]`, `addresses[1]` and `addresses[2]` to group 0, `addresses[3]`, `addresses[4]` and `addresses[5]` to group 1, and so on. The result of the query is a sequence of three groups, and each group is a sequence of three items. Does that make sense? Remember also that the result of the *query expression* is a *query which represents this operation*. It does not *perform* the operation until the `foreach` loop executes.
Seems similar to this question: [Split a collection into n parts with LINQ?](https://stackoverflow.com/questions/438188/split-a-collection-into-n-parts-with-linq) A modified version of *Hasan Khan*'s answer there should do the trick: ``` public static IEnumerable<IEnumerable<T>> Chunk<T>( this IEnumerable<T> list, int chunkSize) { int i = 0; var chunks = from name in list group name by i++ / chunkSize into part select part.AsEnumerable(); return chunks; } ``` Usage example: ``` var addresses = new[] { "a@example.com", "b@example.org", ...... }; foreach (var chunk in Chunk(addresses, 50)) { SendEmail(chunk.ToArray(), "Buy V14gr4"); } ```
C#: Cleanest way to divide a string array into N instances N items long
[ "", "c#", "arrays", "string", "loops", "" ]
So App Verifier is throwing this exception. From what I gather, the text of this message is a little misleading. The problem appears to be that the the critical section was created by a thread that is being destroyed before the critical section is destroyed. It's a relatively simple fix but does anyone know what the ramifications are for having a thread other than the creating one destroy the crticial section? How dangerous is it? Is the concern only that the critical section handle will "leak" or is there a more insidious side-effect? Some other info: * App written in C++ (on Windows, of course) * Critical section created with InitializeCriticalSelection * Critical section is eventually deleted with DeleteCriticalSection
I believe you are correct on the interpretation of the message. The only reference I can find is as follows. The stack trace is a good clue as the author suggests * <http://jpassing.wordpress.com/2008/02/18/application-verifier-thread-cannot-own-a-critical-section/> I dug around for a bit and cannot find any specific reason why you cannot create and delete a critical section on different threads. However I do wonder why it is that you want to do so? It seems like best practice to have one thread own a critical section so to speak. Handing off the critical section between threads introduces another means of communication and potential error (can be done, just more fun).
Seems accepted answer talks about creation of critical section, which is not what this message is about. Eran's answer covered the real cause of the message, but here it is in *TL;DR* terms: **Application verifier detected a thread, that acquired a critical section lock, is attempting to exit while the section is still locked** It is not complaining that thread *created* critical section and is now terminating. This has nothing to do with who creates and destroys the section. It is complaining, and very legitimately, that the thread *owns the lock* on that critical section and terminating. They could've made the wording of that message so much clearer.
App Verifier reporting "Thread cannot own a critical section."
[ "", "c++", "multithreading", "critical-section", "" ]
I need help making this method generic. It is repeated about ten times to get lists for different web list controls (substituting "MyType" for the type used in the particular control). ``` private static IList<MyType> GetList(RequestForm form) { // get base list IMyTypeRepository myTypeRepository = new MyTypeRepository(new HybridSessionBuilder()); IList<MyType> myTypes = myTypeRepository.GetAll(); // create results list IList<MyType> result = new List<MyType>(); // iterate for active + used list items foreach (MyType myType in myTypes) { if (myType.Active || form.SolutionType.Contains(myType.Value)) { result.Add(myType); } } // return sorted results result.OrderBy(o => o.DisplayOrder); return result; } ``` Let me know if this isn't enough information. I think this requires more advanced language features that I'm just getting acquainted with. Maybe I should make them all use the same repository? Thanks for your help. EDIT: Thanks for your help. I don't have any peer support, so this board is fantastic and I learned something from each of you. I wish I could accept all the answers.
You could firstly make your function a bit more terse like this: ``` private static IList<MyType> GetList(RequestForm form) { // get base list IMyTypeRepository myTypeRepository = new MyTypeRepository(new HybridSessionBuilder()); IList<MyType> myTypes = myTypeRepository.GetAll(); return myTypes.Where(x => x.Active || form.SolutionType.Contains(x.Value)) .OrderBy(x => x.DisplayOrder).ToList(); } ``` At that point, most of the content of the function is directly related to `MyType`, so how you can further improve it depends largely on how `MyType` relates to the other types involved. For example, here is a hypothetical version that you could write if your other types followed a reasonable-looking (to me) contract: ``` private static IList<T> GetList(RequestForm form) where T : OrderedValueContainer { // we'll want to somehow genericize the idea of a TypeRepository that can // produce these types; if that can't be done, we're probably better off // passing a repository into this function rather than creating it here var repository = new TypeRepository<T>(new HybridSessionBuilder()); IList<T> myTypes = repository.GetAll(); // the hypothetical OrderedValueContainer class/interface // contains definitions for Active, Value, and DisplayOrder return myTypes.Where(x => x.Active || form.SolutionType.Contains(x.Value)) .OrderBy(x => x.DisplayOrder).ToList(); } ```
If all the types implement the same interface, (if they don't then make them, and make sure to add all the properties to the interface that are needed in this method) then you can do something like this: ``` private static IList<T> GetList(RequestForm form) where T: IMyInterface { // get base list IMyTypeRepository myTypeRepository = new MyTypeRepository(new HybridSessionBuilder()); IList<T> myTypes = myTypeRepository.GetAll(); // create results list IList<T> result = new List<T>(); // iterate for active + used list items foreach (T myType in myTypes) { if (myType.Active || form.SolutionType.Contains(myType.Value)) { result.Add(myType); } } // return sorted results return result.OrderBy(o => o.DisplayOrder).ToList(); } ``` One other change I made is the last line, where you had the orderby on a seperate line and were never actually capturing the Ordered list. **EDIT:** To solve the repository problem, you can have a repository factory of sorts that returns the correct repository based on the type of T: ``` public static IMyTypeRepository GetRepository(Type t) { if(t == typeof(Type1)) { return Type1Repository(); } if(t == typeof(Type2)) { return Type2Repository(); } ....... } ``` Assuming of course that all your repositories implement the IMyRepository interface.
DRY this method
[ "", "c#", "dry", "" ]
The problem is that, as you know, there are thousands of characters [in the Unicode chart](http://www.ssec.wisc.edu/~tomw/java/unicode.html) and I want to convert all the similar characters to the letters which are in English alphabet. For instance here are a few conversions: ``` ҥ->H Ѷ->V Ȳ->Y Ǭ->O Ƈ->C tђє Ŧค๓เℓy --> the Family ... ``` and I saw that there are more than 20 versions of letter A/a. and I don't know how to classify them. They look like needles in the haystack. The complete list of unicode chars is at <http://www.ssec.wisc.edu/~tomw/java/unicode.html> or <http://unicode.org/charts/charindex.html> . Just try scrolling down and see the variations of letters. How can I convert all these with Java? Please help me :(
Reposting my post from [How do I remove diacritics (accents) from a string in .NET?](https://stackoverflow.com/questions/249087/how-do-i-remove-diacritics-accents-from-a-string-in-net) This method works fine in java *(purely for the purpose of removing diacritical marks aka accents)*. It basically converts all accented characters into their deAccented counterparts followed by their combining diacritics. Now you can use a regex to strip off the diacritics. ``` import java.text.Normalizer; import java.util.regex.Pattern; public String deAccent(String str) { String nfdNormalizedString = Normalizer.normalize(str, Normalizer.Form.NFD); Pattern pattern = Pattern.compile("\\p{InCombiningDiacriticalMarks}+"); return pattern.matcher(nfdNormalizedString).replaceAll(""); } ```
It's a part of [Apache Commons Lang](https://commons.apache.org/proper/commons-lang/javadocs/api-release/org/apache/commons/lang3/StringUtils.html#stripAccents%28java.lang.String%29) as of ver. 3.0. ``` org.apache.commons.lang3.StringUtils.stripAccents("Añ"); ``` returns `An` Also see <http://www.drillio.com/en/software-development/java/removing-accents-diacritics-in-any-language/>
Converting Symbols, Accent Letters to English Alphabet
[ "", "java", "unicode", "special-characters", "diacritics", "" ]
I run ``` import sys print "x \tx^3\tx^3+x^3\t(x+1)^3\tcube+cube=cube+1" for i in range(sys.argv[2]): // mistake here cube=i*i*i cube2=cube+cube cube3=(i+1)*(i+1)*(i+1) truth=(cube2==cube3) print i, "\t", cube, "\t", cube + cube, "\t", cube3, "\t", truth ``` I get ``` Traceback (most recent call last): File "cube.py", line 5, in <module> for i in range(sys.argv[2]): IndexError: list index out of range ``` **How can you use command line parameter as follows in the code?** **Example of the use** ``` python cube.py 100 ``` It should give ``` x x^3 x^3+x^3 (x+1)^3 cube+cube=cube+1 0 0 0 1 False 1 1 2 8 False 2 8 16 27 False --- cut --- 97 912673 1825346 941192 False 98 941192 1882384 970299 False 99 970299 1940598 1000000 False ```
I'd like to suggest having a look at Python's [`argparse`](https://docs.python.org/dev/library/argparse.html) module, which is a giant improvement in parsing commandline parameters - it can also do the conversion to `int` for you including type-checking and error-reporting / generation of help messages.
Use: ``` sys.argv[1] ``` also note that arguments are always strings, and `range` expects an integer. So the correct code would be: ``` for i in range(int(sys.argv[1])): ```
Unable to have a command line parameter in Python
[ "", "python", "" ]
I'm using C# and ASP.NET 2.5. I want a simple way to generate a "file" on-the-fly (let's say a csv file for this example) and transmit it to the client without actually writing it to the server file system.
After some searching and trial and error, I developed the following. It seems to fit the bill exactly. It should be very easily adaptable to PHP or any other server-side software since it mostly involves modifying headers. ``` protected void streamToResponse() { Response.Clear(); Response.AddHeader("content-disposition", "attachment; filename=testfile.csv"); Response.AddHeader("content-type", "text/csv"); using(StreamWriter writer = new StreamWriter(Response.OutputStream)) { writer.WriteLine("col1,col2,col3"); writer.WriteLine("1,2,3"); } Response.End(); } ```
May I also suggest, that if you have something other than text, say, binary, you use the Response.WriteBinary() method I've often created a JPG on the fly, wrote it to a MemoryStream using the Bitmap.Save() method, turned the MemoryStream into a byte[] array using .ToArray() and then Response.WriteBinary() that array.
Write a file "on-the-fly" to the client with C#
[ "", "c#", "asp.net", "" ]
One of our old C++ projects is still with Visual Studio 6. Once a year I try to convert it in to a higher Visual Studio Version but it's not easy because not all the code is written by us. Anyway, I finally succeeded in converting the project to VS2005 after fixing a few hundred lines of code. But compiling the projects takes a very long time! Much longer than in VS6. Some classes have a lot of codelines, a few thousands even. These are just arrays to be filled in the code with a lot of items. I know it's not the perfect solution but this is how it is at the moment and VS6 never had a problem with that. Maybe there are just some settings I have to adjust to speed things up but if it stays like it is now I will keep it as an VS6 project since I don't want to sit at my desk all day doing nothing. Any ideas?
VS2005 produces more optimized code and thus has to spend extra time figuring out how to make it faster.
Differences in compile times are normal. The C++ compiler from VS2005 is significantly more compliant to standard C++ than VC6 was. There is a huge difference between these two compilers.
Visual Studio 2005 C++ Compiler slower that Visual Studio 6 Compiler?
[ "", "c++", "visual-studio-2005", "visual-c++-2005", "" ]
I want to generate unique hexadecimal numbers in SQL. How can I do this? If you know how to generate in c# please also include that.
**SQL Server**: ``` SELECT NEWID() ``` **Oracle**: ``` SELECT SYS_GUID() FROM dual ``` **MySQL**: ``` SELECT UUID() ``` In **PostgreSQL**, you'll have to use an external function, though it has `UUID` type.
In SQL Server: newid() In C#: System.Guid.NewGuid() These get you [GUID's](http://en.wikipedia.org/wiki/Guid), which are unique, hexidecimal numbers; however, they tend to have dashes (-) in the middle of them, so you may have to do a bit of string parsing to get what you want. Other than that, though, this should work for you.
Create hexadecimal number in SQL
[ "", "sql", "sql-server", "database", "unique-key", "" ]
I have some jar files that will be distributed to clients that are using log4j for logging. My question is should I include a log4j.xml configuration in the jar file or have the client provide one if they want logging? My feeling is to leave the log4j.xml configuration file **out** of the client jars, since the apache jar files all come with log4j logging, but sans log4j.xml.
Yes, leave it out. It's an utter nuisance when your log4j configuration file is ignored because one of the 60 third-party libraries of your app contains its own.
The good thing about log4j in your case is that your jar really shouldn't have to worry about it. The basic use case of log4j is: 1. Obtain a logger object for the current class 2. Call one of the methods on that logger, such as `debug("some message");` If the jars you are shipping are to be used by a larger application, then ideally your code will only do the two steps listed above. In this way, your code will simply obtain logger objects from the already-configured log4j instance in the client's application. Your production code is then decoupled from having to know how to configure log4j. Any logging you need to see for your development of the jars can be accomplished by configuring a log4j instance in unit test setUp() methods or something similar that won't get bundled with the production code going to the client.
log4j.xml in client jars
[ "", "java", "client", "log4j", "" ]
I have loaded a jpg image in which I want to draw letters and circles, given a x,y coordinate. I have been trying to figure out the paintIcon of the [ImageIcon](http://java.sun.com/j2se/1.4.2/docs/api/javax/swing/ImageIcon.html) class ``` public void paintIcon(Component c, Graphics g, int x, int y) ``` Does this method allow me to edit jpg images the way I want to? What are supposd to be the Component c and Graphics g paramethers? What would I add to its body to paint circles or letters? I'm working on Netbeans 6.5, do I have anything builtin for this task (instead of ImageIcon)?
The pure-Java way is to use [`ImageIO`](http://java.sun.com/javase/6/docs/api/javax/imageio/ImageIO.html) to [load](http://java.sun.com/javase/6/docs/api/javax/imageio/ImageIO.html#read(java.io.File)) the image as a [`BufferedImage`](http://java.sun.com/javase/6/docs/api/java/awt/image/BufferedImage.html). Then you can call [`createGraphics()`](http://java.sun.com/javase/6/docs/api/java/awt/image/BufferedImage.html#createGraphics()) to get a `Graphics2D` object; you can then draw whatever you want onto the image. You can use an `ImageIcon` embedded in a `JLabel` to do the displaying, and you can add a `MouseListener` and/or a `MouseMotionListener` to the `JLabel` if you're trying to allow the user to edit the image.
Manipulating images in Java can be achieved by using the [`Graphics`](http://java.sun.com/javase/6/docs/api/java/awt/Graphics.html) or [`Graphics2D`](http://java.sun.com/javase/6/docs/api/java/awt/Graphics2D.html) contexts. Loading images such as JPEG and PNG can be performed by using the [`ImageIO`](http://java.sun.com/javase/6/docs/api/javax/imageio/ImageIO.html) class. The `ImageIO.read` method takes in a `File` to read in and returns a [`BufferedImage`](http://java.sun.com/javase/6/docs/api/java/awt/image/BufferedImage.html), which can be used to manipulate the image via its [`Graphics2D`](http://java.sun.com/javase/6/docs/api/java/awt/Graphics2D.html) (or the [`Graphics`](http://java.sun.com/javase/6/docs/api/java/awt/Graphics.html), its superclass) context. The `Graphics2D` context can be used to perform many image drawing and manipulation tasks. For information and examples, the [Trail: 2D Graphics](http://java.sun.com/docs/books/tutorial/2d/index.html) of [The Java Tutorials](http://java.sun.com/docs/books/tutorial/index.html) would be a very good start. Following is a simplified example (untested) which will open a JPEG file, and draw some circles and lines (exceptions are ignored): ``` // Open a JPEG file, load into a BufferedImage. BufferedImage img = ImageIO.read(new File("image.jpg")); // Obtain the Graphics2D context associated with the BufferedImage. Graphics2D g = img.createGraphics(); // Draw on the BufferedImage via the graphics context. int x = 10; int y = 10; int width = 10; int height = 10; g.drawOval(x, y, width, height); g.drawLine(0, 0, 50, 50); // Clean up -- dispose the graphics context that was created. g.dispose(); ``` The above code will open an JPEG image, and draw an oval and a line. Once these operations are performed to manipulate the image, the `BufferedImage` can be handled like any other `Image`, as it is a subclass of `Image`. For example, by creating an [`ImageIcon`](http://java.sun.com/javase/6/docs/api/javax/swing/ImageIcon.html) using the `BufferedImage`, one can embed the image into a [`JButton`](http://java.sun.com/javase/6/docs/api/javax/swing/JButton.html) or [`JLabel`](http://java.sun.com/javase/6/docs/api/javax/swing/JLabel.html): ``` JLabel l = new JLabel("Label with image", new ImageIcon(img)); JButton b = new JButton("Button with image", new ImageIcon(img)); ``` The `JLabel` and `JButton` both have constructors which take in an `ImageIcon`, so that can be an easy way to add an image to a Swing component.
How can I edit a jpg image through Java?
[ "", "java", "image", "netbeans", "" ]
I have a drop down list that is populated by inspecting a class's methods and including those that match a specific signature. The problem is in taking the selected item from the list and getting the delegate to call that method in the class. The first method works, but I cannot figure out part of the second. For example, ``` public delegate void MyDelegate(MyState state); public static MyDelegate GetMyDelegateFromString(string methodName) { switch (methodName) { case "CallMethodOne": return MyFunctionsClass.CallMethodOne; case "CallMethodTwo": return MyFunctionsClass.CallMethodTwo; default: return MyFunctionsClass.CallMethodOne; } } public static MyDelegate GetMyDelegateFromStringReflection(string methodName) { MyDelegate function = MyFunctionsClass.CallMethodOne; Type inf = typeof(MyFunctionsClass); foreach (var method in inf.GetMethods()) { if (method.Name == methodName) { //function = method; //how do I get the function to call? } } return function; } ``` How do I get the commented out section of the second method to work? How do I cast the `MethodInfo` into the delegate? Thanks! **Edit: Here is the working solution.** ``` public static MyDelegate GetMyDelegateFromStringReflection(string methodName) { MyDelegate function = MyFunctionsClass.CallMethodOne; Type inf = typeof(MyFunctionsClass); foreach (var method in inf.GetMethods()) { if (method.Name == methodName) { function = (MyDelegate)Delegate.CreateDelegate(typeof(MyDelegate), method); } } return function; } ```
You'll need to call some form of [Delegate.CreateDelegate()](https://learn.microsoft.com/en-us/dotnet/api/system.delegate.createdelegate), depending on whether the method in question is a static or instance method.
``` public static Delegate CreateDelegate(this MethodInfo methodInfo, object target) { Func<Type[], Type> getType; var isAction = methodInfo.ReturnType.Equals((typeof(void))); var types = methodInfo.GetParameters().Select(p => p.ParameterType); if (isAction) { getType = Expression.GetActionType; } else { getType = Expression.GetFuncType; types = types.Concat(new[] { methodInfo.ReturnType }); } if (methodInfo.IsStatic) { return Delegate.CreateDelegate(getType(types.ToArray()), methodInfo); } return Delegate.CreateDelegate(getType(types.ToArray()), target, methodInfo.Name); } ```
Getting a delegate from methodinfo
[ "", "c#", "reflection", "delegates", "" ]
I am quite new to Android and Java. Before I was working with C++ where the events where dispatched with messages. Now I would like to create the same user experience for Android platform and I would appreciate any of your suggestions or comments on what is the best way to bind events to user controls. Here is an example from C++: ``` ON_MESSAGE(WM_RECORD_START, &CMainFrame::OnRecordStart)//Method OnRecordStarts() executes on WM_RECORD_START_MESSAGE ``` ... ``` LRESULT CMainFrame::OnRecordStart(WPARAM wParam, LPARAM lParam) { m_pNetworkCtrl->SetGeoLocationInfo(); ... } ``` ... ``` void CMainFrame::RecordStart() { PostMessage(WM_RECORD_START); } ``` In the case above the method `RecordStart()` is bound to a `Button` (it is executed when a `Button` is pressed) and posts the message `WM_RECORD_START`. When the message `WM_RECORD_START` is received, the method `OnRecordStart()` is executed. As mentioned before I would like to create a responsive user interface and am not sure if it would be good enough if the method `OnRecordStart()` is called directly from `RecordStart()`: ``` void RecordStart() { OnRecordStart(); } ``` I would really appreciate any of your suggestions.
You can emulator the MFC style behavior in Android by using a [Handler](http://developer.android.com/reference/android/os/Handler.html) to post a [Runnable](http://developer.android.com/reference/java/lang/Runnable.html) into the message queue. Here is a brief example ``` class MyClass { Handler m_myHandler; Runnable m_myRunnable; MyClass() { m_myHandler = new Handler(); m_myRunnable = new RUnnable() { public void run() { // do your stuff here } }; } public void onclickListener(...) { // push the runnable into the message queue m_myHandler.post(m_myRUnnable); } } ```
You have 2 questions here. 1. How to bind controls. I use anonymous inner classes everywhere. It is a bit verbose to type in but auto completion makes it a snap. 2. how to make the UI responsive. The key is not to do anything time consuming in the UI thread. If it takes more than .1s, do the work in a worker thread and notify the UI to update when the task is done. The other thing is to make sure you don't generate a lot of garbage since the android GC is pretty primitive right now.
Android: Best practice for responsive user interfaces
[ "", "java", "android", "message", "" ]
Anyone know if there are plans for LINQ to MDX . Does any way currently exist to use LINQ with MDX / CUBES
The answer is definately no. If you drink the Entity Framework koolaid, you'll believe that if you start using EF now (instead of linq2sql) you'll get OLAP/BI for free down the road.
I think they'd have to add more than a few new operators and methods to LINQ before they could support even a significant subset of MDX: ``` WITH SET c0 AS 'HIERARCHIZE({[Measures].[Internet Sales Amount], [Measures].[Internet Gross Profit], [Measures].[Reseller Gross Profit], [Measures].[Average Sales Amount], [Measures].[Gross Profit]})' SET r0_1 AS 'HIERARCHIZE(Filter(Filter({[Customer].[Customer Geography].[All Customers], AddCalculatedMembers(DESCENDANTS([Customer].[Customer Geography].[All Customers], 1))}, NOT [Customer].[Customer Geography].currentmember.parent is [Customer].[Customer Geography].[All Customers] OR vba!ucase(left([Customer].[Customer Geography].currentmember.properties("CAPTION"),1)) = "u"), NOT [Customer].[Customer Geography].currentmember.level is [Customer].[Customer Geography].[Country] OR vba!ucase(left([Customer].[Customer Geography].currentmember.properties("CAPTION"),1)) = "u"))' SELECT NON EMPTY {[c0]} ON COLUMNS, NON EMPTY VISUALTOTALS(FILTER({[r0_1]} ,vba!ucase(right([Customer].[Customer Geography].currentmember.properties("CAPTION"),8)) = "SUBTOTAL" OR [Measures].[Internet Gross Profit] > 80000) , "* SUBTOTAL") ON ROWS FROM [Adventure Works] CELL PROPERTIES VALUE, FORMATTED_VALUE, FORMAT_STRING, FORE_COLOR, BACK_COLOR ```
Is there any way to use LINQ for MDX queries?
[ "", "sql", "database", "linq", "mdx", "cubes", "" ]
Single inheritance is easy to implement. For example, in C, the inheritance can be simulated as: ``` struct Base { int a; } struct Descendant { Base parent; int b; } ``` But with multiple inheritance, the compiler has to arrange multiple parents inside newly constructed class. How is it done? The problem I see arising is: should the parents be arranged in AB or BA, or maybe even other way? And then, if I do a cast: ``` SecondBase * base = (SecondBase *) &object_with_base1_and_base2_parents; ``` The compiler must consider whether to alter or not the original pointer. Similar tricky things are required with virtuals.
The following paper from the creator of C++ describes a possible implementation of multiple inheritance: [Multiple Inheritance for C++](http://www-plan.cs.colorado.edu/diwan/class-papers/mi.pdf) - Bjarne Stroustrup
There was [this pretty old MSDN article](http://www.openrce.org/articles/files/jangrayhood.pdf) on how it was implemented in VC++.
How is C++'s multiple inheritance implemented?
[ "", "c++", "inheritance", "compiler-construction", "multiple-inheritance", "language-implementation", "" ]
I'm participating in online judge contests and I want to test my code with a .in file full of testcases to time my algorithm. How can I get my script to take input from this .in file?
So the script normally takes test cases from stdin, and now you want to test using test cases from a file? If that is the case, use the `<` redirection operation on the cmd line: ``` my_script < testcases.in ```
Read from file(s) and/or stdin: ``` import fileinput for line in fileinput.input(): process(line) ```
How to test a Python script with an input file filled with testcases?
[ "", "python", "input", "" ]