Using Firebird, I want to combine the results of two queries using UNION ALL, then sort the resulting output on a given column. ``` (select C1, C2, C3 from T1) union all (select C1, C2, C3 from T2) order by C3 ``` The parentheses came from valid syntax for other databases, and are needed to make sure the arguments to UNION ALL (an operation that's defined to work on tables - i.e. an *unordered* set of records) don't try to be ordered individually. However I couldn't get this syntax to work in Firebird--how can it be done?
``` SELECT C1, C2, C3 FROM ( select C1, C2, C3 from T1 union all select C1, C2, C3 from T2 ) order by C3 ```
The field names of the two queries are not required to match, which is why you can't use a field name in the ORDER BY. You may use the field index instead. As in: ``` (select C1, C2, C3 from T1) union all (select C7, C8, C9 from T2) order by 3 ```
Combining UNION ALL and ORDER BY in Firebird
[ "", "sql", "database", "database-design", "firebird", "" ]
I need to include a copyright statement at the top of every Python source file I produce: ``` # Copyright: © 2008 etc. ``` However, when I then run such a file I get this message: SyntaxError: Non-ASCII character '\xa9' in file MyFile.py on line 3, but no encoding declared; see <http://www.python.org/peps/pep-0263.html> for details. Apparently Python isn't happy about the copyright symbol because it assumes the source file is all in ASCII. Either I need to make my first line be: ``` # -*- coding: iso-8859-1 -*- ``` to tell Python I'm using Latin encoding, or I can change the copyright statement to: ``` # Copyright: \xa9 2008 etc. ``` which just possibly doesn't have the same legal standing. Is there a more elegant solution?
The copyright symbol in ASCII is spelled `(c)` or "`Copyright`". See circular 61, [Copyright Registration for Computer Programs](http://www.copyright.gov/circs/circ61.pdf). While it's true that the legal formalism (see Circular 1, [Copyright Basics](http://www.copyright.gov/circs/circ01.pdf)) is > The symbol © (the letter C in a > circle), or the word “Copyright,” or > the abbreviation “Copr.”; and... And it's also true that > To guarantee protection for a > copyrighted work in all UCC member > countries, the notice must consist of > the symbol © (the word “Copyright” or > the abbreviation is not acceptable) You can dig through circular [3](http://www.copyright.gov/circs/circ03.html) and [38a](http://www.copyright.gov/circs/circ38a.html). This has, however, already been tested in court. It isn't an interesting issue. If you do a search for "(c) acceptable for c-in-a-circle", you'll find that lawyers all agree that (c) is an acceptable substitute. See Perle and Williams. See Scott on Information Technology Law.
Contrary to the accepted answer, AFAIK, (c) is not an officially recognized alternative to the copyright symbol, although I'm not sure it's been tested in court. However, © is just an abbreviation of the word Copyright. Saying "Copyright 2008 Robert Munro" is identical to saying "© 2008 Robert Munro". Your "Copyright: © 2008 etc." expands to "Copyright: Copyright 2008 etc." Wikipedia's page seems to agree with me <http://en.wikipedia.org/wiki/Copyright_symbol> In the United States, the copyright notice consists of three elements: 1. the © symbol, **or** the word "Copyright" or abbreviation "Copr."; ...
Putting Copyright Symbol into a Python File
[ "", "python", "encoding", "" ]
Is it okay to run Hibernate applications configured with `hbm2ddl.auto=update` to update the database schema in a production environment?
No, it's unsafe. Despite the best efforts of the Hibernate team, you simply cannot rely on automatic updates **in production**. Write your own patches, review them with your DBA, test them, then apply them manually. Theoretically, if **hbm2ddl update** worked in development, it should work in production too. But in reality, that's not always the case. Even if it worked OK, it may be sub-optimal. DBAs are paid that much for a reason.
We do it in production albeit with an application that's not mission critical and with no highly paid DBAs on staff. It's just one less manual process that's subject to human error - the application can detect the difference and do the right thing, plus you've presumably tested it in various development and test environments. One caveat - in a clustered environment you may want to avoid it because multiple apps can come up at the same time and try to modify the schema which could be bad. Or put in some mechanism where only one instance is allowed to update the schema.
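For reference, a commonly used middle ground (property name per Hibernate's documented `hbm2ddl.auto` options; treat this as a sketch, since where it goes depends on your configuration style) is `validate`, which checks the mappings against the schema at startup but never issues DDL:

```properties
# hibernate.properties (or the equivalent <property> in hibernate.cfg.xml)
# "validate" verifies that the schema matches the mappings and fails fast
# on mismatch, but never alters a production database
hibernate.hbm2ddl.auto=validate
```

That way you still catch drift between code and schema early, while DDL changes stay in hand-reviewed patches.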
Hibernate: hbm2ddl.auto=update in production?
[ "", "java", "hibernate", "hbm2ddl", "" ]
Here is the situation: I have been called upon to work with InstallAnywhere 8, a Java-based installer IDE, of sorts, that allows starting and stopping of windows services, but has no built-in method to query their states. Fortunately, it allows you to create custom actions in Java which can be called at any time during the installation process (by way of what I consider to be a rather convoluted API). I just need something that will tell me if a specific service is started or stopped. The IDE also allows calling batch scripts, so this is an option as well, although once the script is run, there is almost no way to verify that it succeeded, so I'm trying to avoid that. Any suggestions or criticisms are welcome.
Here's what I had to do. It's ugly, but it works beautifully.

```
String STATE_PREFIX = "STATE : ";
String s = runProcess("sc query \"" + serviceName + "\"");
// check that the output contains the state prefix
int ix = s.indexOf(STATE_PREFIX);
if (ix >= 0) {
    // compare the state number to one of the known states
    String stateStr = s.substring(ix + STATE_PREFIX.length(), ix + STATE_PREFIX.length() + 1);
    int state = Integer.parseInt(stateStr);
    switch (state) {
        case 1: // SERVICE_STOPPED
            break;
        case 4: // SERVICE_RUNNING
            break;
    }
}
```

`runProcess` is a private method that runs the given string as a command line process and returns the resulting output. As I said, ugly, but it works. Hope this helps.
You can create a small VBS on-the-fly, launch it, and capture its return code.

```
import java.io.File;
import java.io.FileWriter;

public class VBSUtils {

  private VBSUtils() { }

  public static boolean isServiceRunning(String serviceName) {
    try {
      File file = File.createTempFile("realhowto", ".vbs");
      file.deleteOnExit();
      FileWriter fw = new java.io.FileWriter(file);

      String vbs = "Set sh = CreateObject(\"Shell.Application\") \n"
                 + "If sh.IsServiceRunning(\"" + serviceName + "\") Then \n"
                 + "   wscript.Quit(1) \n"
                 + "End If \n"
                 + "wscript.Quit(0) \n";
      fw.write(vbs);
      fw.close();
      Process p = Runtime.getRuntime().exec("wscript " + file.getPath());
      p.waitFor();
      return (p.exitValue() == 1);
    }
    catch (Exception e) {
      e.printStackTrace();
    }
    return false;
  }

  public static void main(String[] args) {
    //
    // DEMO
    //
    String result = "";
    msgBox("Check if service 'Themes' is running (should be yes)");
    result = isServiceRunning("Themes") ? "" : " NOT ";
    msgBox("service 'Themes' is " + result + " running ");

    msgBox("Check if service 'foo' is running (should be no)");
    result = isServiceRunning("foo") ? "" : " NOT ";
    msgBox("service 'foo' is " + result + " running ");
  }

  public static void msgBox(String msg) {
    javax.swing.JOptionPane.showConfirmDialog((java.awt.Component) null,
        msg, "VBSUtils", javax.swing.JOptionPane.DEFAULT_OPTION);
  }
}
```
Need a way to check status of Windows service programmatically
[ "", "java", "command-line", "windows-services", "" ]
So regular expressions seem to match on the longest possible match. For instance: ``` public static void main(String[] args) { String s = "ClarkRalphKentGuyGreenGardnerClarkSupermanKent"; Pattern p = Pattern.compile("Clark.*Kent", Pattern.CASE_INSENSITIVE); Matcher myMatcher = p.matcher(s); int i = 1; while (myMatcher.find()) { System.out.println(i++ + ". " + myMatcher.group()); } } ``` generates output 1. ClarkRalphKentGuyGreenGardnerClarkSupermanKent I would like this output 1. ClarkRalphKent 2. ClarkSupermanKent I have been trying Patterns like: ``` Pattern p = Pattern.compile("Clark[^((Kent)*)]Kent", Pattern.CASE_INSENSITIVE); ``` that don't work, but you see what I'm trying to say. I want the string from Clark to Kent that doesn't contain any occurrences of Kent. This string: ClarkRalphKentGuyGreenGardnerBruceBatmanKent should generate output 1. ClarkRalphKent
Greedy vs. reluctant matching is your friend here. Try: `Clark.+?Kent`
You want a "reluctant" rather than a "greedy" quantifier. Simply putting a ? after your \* should do the trick.
Negating literal strings in a Java regular expression
[ "", "java", "regex", "" ]
What exactly are the Python scoping rules? If I have some code: ``` code1 class Foo: code2 def spam..... code3 for code4..: code5 x() ``` Where is `x` found? Some possible choices include the list below: 1. In the enclosing source file 2. In the class namespace 3. In the function definition 4. In the for loop index variable 5. Inside the for loop Also there is the context during execution, when the function `spam` is passed somewhere else. And maybe [lambda functions](https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions) pass a bit differently? There must be a simple reference or algorithm somewhere. It's a confusing world for intermediate Python programmers.
Actually, a concise rule for Python Scope resolution, from [Learning Python, 3rd. Ed.](https://rads.stackoverflow.com/amzn/click/com/0596513984). (These rules are specific to variable names, not attributes. If you reference it without a period, these rules apply.) **LEGB Rule** * **L**ocal — Names assigned in any way within a function (`def` or `lambda`), and not declared global in that function * **E**nclosing-function — Names assigned in the local scope of any and all statically enclosing functions (`def` or `lambda`), from inner to outer * **G**lobal (module) — Names assigned at the top-level of a module file, or by executing a `global` statement in a `def` within the file * **B**uilt-in (Python) — Names preassigned in the built-in names module: `open`, `range`, `SyntaxError`, etc So, in the case of ``` code1 class Foo: code2 def spam(): code3 for code4: code5 x() ``` The `for` loop does not have its own namespace. In LEGB order, the scopes would be * L: Local in `def spam` (in `code3`, `code4`, and `code5`) * E: Any enclosing functions (if the whole example were in another `def`) * G: Were there any `x` declared globally in the module (in `code1`)? * B: Any builtin `x` in Python. `x` will never be found in `code2` (even in cases where you might expect it would, see [Antti's answer](https://stackoverflow.com/a/23471004/2810305) or [here](https://stackoverflow.com/q/13905741/2810305)).
Essentially, the only thing in Python that introduces a new scope is a function definition. Classes are a bit of a special case in that anything defined directly in the body is placed in the class's namespace, but they are not directly accessible from within the methods (or nested classes) they contain. In your example there are only 3 scopes in which x will be searched:

* spam's scope - containing everything defined in code3 and code5 (as well as code4, your loop variable)
* The global scope - containing everything defined in code1, as well as Foo (and whatever changes after it)
* The builtins namespace. A bit of a special case - this contains the various Python builtin functions and types such as len() and str(). Generally this shouldn't be modified by any user code, so expect it to contain the standard functions and nothing else.

More scopes only appear when you introduce a nested function (or lambda) into the picture. These will behave pretty much as you'd expect, however. The nested function can access everything in the local scope, as well as anything in the enclosing function's scope. E.g.

```
def foo():
    x = 4
    def bar():
        print x  # Accesses x from foo's scope
    bar()  # Prints 4
    x = 5
    bar()  # Prints 5
```

**Restrictions:** Variables in scopes other than the local function's can be accessed, but can't be rebound to new values without further syntax. Instead, assignment will create a new **local** variable instead of affecting the variable in the parent scope. For example:

```
global_var1 = []
global_var2 = 1

def func():
    # This is OK: it's just accessing, not rebinding
    global_var1.append(4)

    # This won't affect global_var2. Instead it creates a new local variable
    global_var2 = 2

    local1 = 4
    def embedded_func():
        # Again, this doesn't affect func's local1 variable. It creates a
        # new local variable also called local1 instead.
        local1 = 5
        print local1

    embedded_func()  # Prints 5
    print local1     # Prints 4
```

In order to actually modify the bindings of global variables from within a function scope, you need to specify that the variable is global with the global keyword. E.g.:

```
global_var = 4
def change_global():
    global global_var
    global_var = global_var + 1
```

Currently there is no way to do the same for variables in enclosing *function* scopes, but Python 3 introduces a new keyword, `nonlocal`, which will act in a similar way to global, but for nested function scopes.
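A minimal sketch of that `nonlocal` behaviour (Python 3 syntax; the names here are illustrative):

```python
def make_counter():
    count = 0  # lives in make_counter's enclosing scope

    def increment():
        nonlocal count  # rebind the enclosing function's variable
        count += 1      # without nonlocal, this would raise UnboundLocalError
        return count

    return increment

counter = make_counter()
print(counter())  # 1
print(counter())  # 2
```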
Short description of the scoping rules
[ "", "python", "scope", "" ]
I would like to trim long sequences of the same value from a binary file in python. A simple way of doing it is simply reading in the file and using re.sub to replace the unwanted sequence. This will of course not work on large binary files. Can it be done in something like numpy?
If you don't have the memory to do `open("big.file").read()`, then numpy won't really help. It uses the same memory as Python variables do (if you have 1GB of RAM, you can only load 1GB of data into numpy). The solution is simple - read the file in chunks: `f = open("big.file", "rb")`, then do a series of `f.read(500)`, remove the sequence, and write it back out to another file object. Pretty much how you do file reading/writing in C. The problem then is if you miss the pattern you are replacing. For example:

```
target_seq = "567"
# input file contents: "1234567890"
input_file.read(5) # reads 12345, doesn't contain 567
input_file.read(5) # reads 67890, doesn't contain 567
```

The obvious solution is to start at the first character in the file, check `len(target_seq)` characters, then go forward one character and check again. For example (pseudo code!):

```
seek_start = 0
chunk_size = len(target_seq)
while True:
    input_file.seek(seek_start, 0)  # whence=0 means seek relative to the start of the file
    cur_data = input_file.read(chunk_size)
    if cur_data == "":
        break
    if cur_data == target_seq:
        # Found it! Write the replacement and jump past the match
        out_file.write("replacement_string")
        seek_start += chunk_size
    else:
        # not it, keep one character and move forward by one
        out_file.write(cur_data[0])
        seek_start += 1
```

It's not exactly the most efficient way, but it will work, and not require keeping a copy of the file in memory (or two).
If two copies fit in memory, then you can easily make a copy. The second copy is the compressed version. Sure, you can use numpy, but you can also use the [array](http://www.python.org/doc/2.5.2/lib/module-array.html) package. Additionally, you can treat your big binary object as a string of bytes and manipulate it directly. It sounds like your file may be *REALLY* large, and you can't fit two copies into memory. (You didn't provide a lot of details, so this is just a guess.) You'll have to do your compression in chunks. You'll read in a chunk, do some processing on that chunk and write it out. Again, numpy, array or simple string of bytes will work fine.
Removing a sequence of characters from a large binary file using python
[ "", "python", "numpy", "binaryfiles", "" ]
Here is what I have: JAVA\_HOME=C:\Software\Java\jdk1.5.0\_12 (points to JDK 5.0) In Eclipse "Installed Runtimes" I have: jre 1.5.0\_12 (points to JRE 5.0) jre 1.6.0\_3 (points to JRE 6.0) (this one is default) I do not have "javac" on my PATH (i.e. I cannot run javac -version from command line if I am not in JDK/bin). My project is set to use jre 1.6.0\_3 and compiler's Compliance Level is set to 6.0. The question is: how is Eclipse able to compile my project? Where does it find a compiler for 6.0? Moreover, is there a way to tell precisely which compiler (i.e. path to javac) is used by Eclipse? P.S. OS XP SP2 if it is relevant P.P.S. I do not have any -vm set in eclipse.ini. In eclipse I see eclipse.vm=C:\Program Files\Java\jre6\bin\client\jvm.dll and it is still a JRE.
Eclipse has the [JDT](http://www.eclipse.org/jdt/overview.php) which includes the incremental compiler so it does not need an external one unless that is your wish :)
Eclipse has a list of installed JRE's under **window->preferences->java->Installed JRE's**. The one selected as the default will be the one included with Eclipse, but you can easily add any other JRE's from this same preference pane, and select any default you wish. This will be the system wide default, which can be overridden on a project by project basis from the Build Path->Libraries tab. To change, select **Add Library->JRE System Library** and choose from your configured JRE's. Then remove the library for the default.
Where does Eclipse find javac to compile a project?
[ "", "java", "eclipse", "" ]
I'm using Spring's dependency injection but I'm running into difficulty loading a resource in my Spring config file. The resource is an XML file and is in a JAR file on my classpath. I try to access it as follows: ``` <import resource="classpath:com/config/resources.xml" /> ``` however I keep encountering the following error: > Failed to import bean definitions from URL location [classpath:com/config/resources.xml] The JAR file is on the classpath of a Java project, which is in turn used by my web app. Should I really be doing my Spring configuration in the web project as opposed to the Java project, or does that matter?
If it needs to be in the classpath of your webapp, then you should stick the JAR containing the config file into your WEB-INF/lib directory. If you're using a webapp, then the common convention is use a ContextLoaderListener to ensure a WebApplicationContext is inserted into a standard place in the ServletContext: ``` <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <context-param> <param-name>contextConfigLocation</param-name> <param-value>classpath:/com/config/resources.xml</param-value> </context-param> ``` Then use [WebApplicationContextUtils](http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/web/context/support/WebApplicationContextUtils.html) to fish the application context out of the servlet context using: ``` WebApplicationContext wac = WebApplicationContextUtils.getRequiredWebApplicationContext(getServletContext()); ```
I ran into a similar issue with a red5 plugin. I resolved it like so: ``` try { subContext = new FileSystemXmlApplicationContext(new String[] { "classpath*:/myconfig.xml" }, true, context); } catch (Exception fnfe) { subContext = new FileSystemXmlApplicationContext(new String[] { "plugins/myconfig.xml" }, true, context); } ``` This will look anywhere on the classpath first, including within the jar that contains my code. If an exception occurs the plugin directory is checked. It may not be the best solution but it works.
spring beans configuration
[ "", "java", "spring", "" ]
I have a web-app that I would like to extend to support multiple languages with new URLs. For example, www.example.com/home.do stays English, but www.example.com/es/home.do is Spanish. My first thought was to create a Filter which rewrites incoming urls like /es/home.do to /home.do (and sets the Locale in the Request); this works fine. The Filter wraps the ServletRequest with an HttpServletRequestWrapper which overrides getContextPath() to return the language: ``` class FakeContextRequest extends HttpServletRequestWrapper { private String context = ""; FakeContextRequest(HttpServletRequest request, String context) { super(request); // snip some validation code this.context = request.getContextPath() + context; } @Override public String getContextPath() { return this.context; } } ``` My Filter forwards to the appropriate request as follows: ``` FakeContextRequest fr = new FakeContextRequest(request, lang); fr.getRequestDispatcher(newResourceName).forward(fr, response); ``` My problem is that the next servlet doesn't forward properly. The next servlet (typically a Struts ActionServlet) forwards to a JSP (often using Struts Tiles); when I get to the JSP the HttpServletRequest has been wrapped several times and the object in question reports the context to be empty (the root context, which is where the application is actually deployed). I want the context to be re-written so that all my context-aware code that already exists can automatically insert the language into the URLs that are written. Is this possible? **Edit:** I solved my problem by using a wrapped HttpServletResponse instead of a wrapped HttpServletRequest; I rewrite the URL in the response.encodeURL() method.
I'm not sure that overriding `getContextPath()` is enough to solve your problem. What if Struts is calling `ServletContext.getContextPath()` under the covers, or uses `getRequestURI()`, etc?
As far as I know, the conventional way to do this is with the Accept-Language HTTP header. The presentation language is a presentation detail, which shouldn't be represented by the set of URLs to navigate through the application.
Override getContextPath in an HttpServletRequest (for URL rewriting)
[ "", "java", "jakarta-ee", "url-rewriting", "servlet-filters", "" ]
I'm writing a simple C++ class in which I would like to cache thumbnail versions of images downloaded from the web. As such, I would like to use a hash function which takes in URL strings and outputs a unique string suitable as a filename. Is there a simple way to do this without writing the function myself? I searched around for a simple library, but couldn't find anything. Surely this is a common problem.
A simpler approach is to replace everything which is not a letter or a number with an underscore. EDIT: Here's a naive implementation in C:

```
#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Caller is responsible for free()ing the returned string. */
char *safe_url(const char *str)
{
    char *safe = strdup(str);
    size_t len = strlen(str);
    for (size_t i = 0; i < len; i++) {
        if (isalnum((unsigned char)str[i]))
            safe[i] = str[i];
        else
            safe[i] = '_';
    }
    return safe;
}
```
In a similar situation I encoded the key's bytes in hex (where, in your case, the key is the hash of the URL). This doubles the size but is simple, avoids any possible problems with your filesystem mangling the characters, and sorts in the same order as the original key. (Originally I tried a slightly fancier, more efficient encoding, which I thought escaped any problematic characters, but OS X's filesystem turns out to be crazier than I assumed.)
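A sketch of that hex-digest approach, shown in Python for brevity (a C++ version would do the same with a hashing library such as OpenSSL's); SHA-1 and the `.thumb` suffix are my own illustrative choices:

```python
import hashlib

def url_to_filename(url):
    # A hex digest contains only [0-9a-f], so the result is safe on any
    # filesystem, fixed-length, and deterministic for a given URL.
    return hashlib.sha1(url.encode("utf-8")).hexdigest() + ".thumb"

print(url_to_filename("http://example.com/images/cat.png"))
```

The same URL always maps to the same filename, so a cache lookup is just a file-existence check.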
Encode URLs into safe filename string
[ "", "c++", "url", "caching", "hash", "filenames", "" ]
I'm trying to start `iexplore.exe`, let it run for 5 seconds, and then close it again. `iexplore` opens just fine; however, it doesn't close when I call PostThreadMessage. Can anyone see what I'm doing wrong? Here is my code:

```
CString IEPath = "C:\\Program Files\\Internet Explorer\\IEXPLORE.EXE"; //GetIEPath();
//IEPath += ' ' + url;
std::string strCommand((LPCTSTR)IEPath);

PROCESS_INFORMATION procinfo;
STARTUPINFO startupinfo;
GetStartupInfo(&startupinfo);
CreateProcess(
    NULL,
    (char *)strCommand.c_str(), // name of executable module
    NULL,                   // lpProcessAttributes
    NULL,                   // lpThreadAttributes
    false,                  // handle inheritance option
    CREATE_SHARED_WOW_VDM,  // creation flags
    NULL,                   // new environment block
    NULL,                   // current directory name
    &startupinfo,           // startup information
    &procinfo               // process information
);

Sleep(5000);
::PostThreadMessage(procinfo.dwThreadId, WM_QUIT, 0, 0); // <--- Doesn't close Internet Explorer!
```

Anyone have an idea of what I'm doing wrong? Or is there a better way to do the trick?
If you can enumerate the windows on the desktop and send a WM\_CLOSE to the IE window, it might work. You can use the Spy++ program to get the window class of the IE window.
What is the return value from the PostThreadMessage call? That might give a clue.
How do I use PostThreadMessage to close internet explorer from C++
[ "", "c++", "windows", "multithreading", "messaging", "" ]
How to convert a Unicode string into a UTF-8 or UTF-16 string? My VS2005 project is using the Unicode char set, while sqlite in cpp provides

```
int sqlite3_open(
  const char *filename,   /* Database filename (UTF-8) */
  sqlite3 **ppDb          /* OUT: SQLite db handle */
);
int sqlite3_open16(
  const void *filename,   /* Database filename (UTF-16) */
  sqlite3 **ppDb          /* OUT: SQLite db handle */
);
```

for opening a database. How can I convert a string, CString, or wstring into the UTF-8 or UTF-16 charset? Thanks very much!
Short answer: No conversion required if you use Unicode strings such as CString or wstring. Use sqlite3\_open16(). You will have to make sure you pass a WCHAR pointer (casted to `void *`. Seems lame! Even if this lib is cross platform, I guess they could have defined a wide char type that depends on the platform and is less unfriendly than a `void *`) to the API. Such as for a CString: `(void*)(LPCWSTR)strFilename` The longer answer: You don't have a Unicode string that you want to convert to UTF8 or UTF16. You have a Unicode string represented in your program using a given encoding: Unicode is not a binary representation per se. Encodings say how the Unicode code points (numerical values) are represented in memory (binary layout of the number). UTF8 and UTF16 are the most widely used encodings. They are very different though. When a VS project says "Unicode charset", it actually means "characters are encoded as UTF16". Therefore, you can use sqlite3\_open16() directly. No conversion required. Characters are stored in WCHAR type (as opposed to `char`) which takes 16 bits (Fallsback on standard C type `wchar_t`, which takes 16 bits on Win32. Might be different on other platforms. Thanks for the correction, Checkers). There's one more detail that you might want to pay attention to: UTF16 exists in 2 flavors: Big Endian and Little Endian. That's the byte ordering of these 16 bits. The function prototype you give for UTF16 doesn't say which ordering is used. But you're pretty safe assuming that sqlite uses the same endian-ness as Windows (Little Endian IIRC. I know the order but have always had problem with the names :-) ). EDIT: Answer to comment by Checkers: UTF16 uses 16 bits *code units*. Under Win32 (and *only* on Win32), `wchar_t` is used for such storage unit. The trick is that some Unicode characters require a sequence of 2 such 16-bits code units. They are called Surrogate Pairs. The same way an UTF8 represents 1 character using a 1 to 4 bytes sequence. 
Yet UTF-8 is used with the `char` type.
Use the [WideCharToMultiByte](http://msdn.microsoft.com/en-us/library/ms776420(VS.85).aspx) function. Specify `CP_UTF8` for the `CodePage` parameter. ``` CHAR buf[256]; // or whatever WideCharToMultiByte( CP_UTF8, 0, StringToConvert, // the string you have -1, // length of the string - set -1 to indicate it is null terminated buf, // output __countof(buf), // size of the buffer in bytes - if you leave it zero the return value is the length required for the output buffer NULL, NULL ); ``` Also, the default encoding for unicode apps in windows is UTF-16LE, so you might not need to perform any translation and just use the second version `sqlite3_open16`.
How to convert Unicode string into a utf-8 or utf-16 string?
[ "", "c++", "unicode", "utf-8", "character-encoding", "utf-16", "" ]
I am developing a .NET CF 3.5 network game. My issue is that the app loads all the resources fine on the first launch. However, upon subsequent launches, the app gives me an out-of-memory exception while loading resources, especially sounds or big images. Please guide me.
I assume you're not attempting to lauch multiple instances of the game at a time. This sounds like memory is not being returned to the OS after your game shuts down. One simple way to determine if you have a leak is: 1. Restart the device 2. Check the memory usage 3. Start your game, play it for a few minutes 4. Close the game 5. Wait a few minutes, then check the memory usage again If you can get a few launches in before you run out of memory, you have a small leak. If you can only launch once before restarting the device, you have a BIG one. Garbage collection will only go so far. If you are making any calls to unmanaged code (Win32 or PInvoke calls that instantiate unmanaged objects,) then you need to be sure to release those resources when your game shuts down.
How much memory is your application taking after loading all the resources? On default settings I have been getting this error at around 1.3 GB of private bytes (checking Task Manager and the process's memory allocation).
.NET CF Application and Out of Memory Exception
[ "", "c#", ".net", "memory-management", "" ]
I'd like to make an (MS)SQL query that returns something like this:

```
Col1 Col2  Col3
---- ----- -----
AAA  18.92 18.92
BBB  20.00 40.00
AAA  30.84 30.84
BBB  06.00 12.00
AAA  30.84 30.84
AAA  46.79 46.79
AAA  86.40 86.40
```

where Col3 is equal to Col2 when Col1 = AAA and Col3 is twice Col2 when Col1 = BBB. Can someone point me in the right direction please?
You didn't mention what kind of database you're using. Here's something that will work in SQL Server: ``` SELECT Col1, Col2, CASE WHEN Col1='AAA' THEN Col2 WHEN Col1='BBB' THEN Col2*2 ELSE NULL END AS Col3 FROM ... ```
You can also use the `ISNULL` or `COALESCE` functions like this, should the values be null:

```
SELECT ISNULL(Col1, 'AAA') AS Col1,
       ISNULL(Col2, 0) AS Col2,
       CASE WHEN ISNULL(Col1, 'AAA') = 'BBB'
            THEN ISNULL(Col2, 0) * 2
            ELSE ISNULL(Col2, 0)
       END AS Col3
FROM Tablename
```
SQL inline if statement type question
[ "", "sql", "sql-server", "" ]
I am trying to have a tooltip on multiple lines. How do I do this?
Put a newline (use `Environment.NewLine`) into the actual tooltip text.
You can enter a newline in the designer also *(for static-text only, obviously)* by clicking the dropdown arrow near the tooltip property-box, and hitting enter where you want the newline.
Multiline tooltipText
[ "", "c#", "winforms", "" ]
Consider the following code: ``` template <int dim> struct vec { vec normalize(); }; template <> struct vec<3> { vec cross_product(const vec& second); vec normalize(); }; template <int dim> vec<dim> vec<dim>::normalize() { // code to normalize vector here return *this; } int main() { vec<3> direction; direction.normalize(); } ``` Compiling this code produces the following error: > 1>main.obj : error LNK2019: unresolved external symbol "public: struct vec<3> \_\_thiscall vec<3>::normalize(void)" (?normalize@?$vec@$02@@QAE?AU1@XZ) referenced in function \_main
You can't :) What you want is to specialize the member functions instead:

```
template <int dim>
struct vec
{
    // leave the function undefined for everything except dim==3
    vec cross_product(const vec& second);
    vec normalize();
};

template<>
vec<3> vec<3>::cross_product(const vec& second)
{
    // ...
}

template <int dim>
vec<dim> vec<dim>::normalize()
{
    // code to normalize vector here
    return *this;
}
```

Another, slightly more complicated solution is to use `boost::enable_if`:

```
template <int dim>
struct vec
{
    // function can't be called for dim != 3. Error at compile-time
    template<int dim1>
    typename boost::enable_if_c<
        dim == dim1 && dim1 == 3,
        vec<dim1>
    >::type cross_product(const vec<dim1>& second)
    {
        // ...
    }

    vec normalize();

    // delegate to the template version
    void without_params()
    {
        // delegate
        this->without_params<dim>();
    }

private:
    // function can't be called for dim != 3. Error at compile-time
    template<int dim1>
    typename boost::enable_if_c<
        dim == dim1 && dim1 == 3
    >::type without_params()
    {
        // ...
    }
};

template <int dim>
vec<dim> vec<dim>::normalize()
{
    // code to normalize vector here
    return *this;
}
```

That will cause a compile-time error if cross\_product is called for any dim != 3. Note that that 'trick' only works for functions with parameters, since only then can the template parameter be auto-deduced. For cases without parameters, I have provided a function `without_params` above :).
You haven't supplied a definition of vec<3>::normalize, so the linker obviously can't link to it. The entire point of a template specialization is that you can supply specialized versions of each method. Except you don't actually do that in this case.
How can I get a specialized template to use the unspecialized version of a member function?
[ "", "c++", "templates", "specialization", "" ]
I have a Windows Mobile application using the compact framework (NETCF) that I would like to respond to someone pressing the send key and have the phone dial the number selected in my application. Is there a way using the compact framework to trap the send key? I have looked at several articles on capturing keys, but I have not found one that includes the "Send" key. **Update**: I found an article describing SetWindowsHookEx as an undocumented API on Windows Mobile. If this is the case then I really don't want to use it. [SetWindowsHookEx on Windows Mobile](http://blogs.msdn.com/raffael/archive/2008/05/12/setwindowshookex-on-windows-mobile.aspx) After doing more searching I found out that the "Send" key is called the "Talk" key in Windows Mobile lingo. I then found a blog post about using the SHCMBM\_OVERRIDEKEY message to signal the OS to send my app a WM\_HOTKEY message when the user presses the Talk key. [Jason Fuller Blog post about using the Talk button](http://blogs.msdn.com/windowsmobile/archive/2005/09/02/460327.aspx) The blog post and the documentation it points to seem like exactly what I'm looking for. I'm unable to find a working example, and I find a lot of people unable to make it work. It also looks like VK\_TTALK is not supported in SmartPhones. I would love to hear from someone that actually has this working on both Smartphones and PocketPC phones.
I can confirm that using SHCMBM\_OVERRIDEKEY works on both PPC and SP devices. I have tested it on WM5 PPC, WM5 SP, WM6 PPC, WM6 SP. I have not tried WM6.1 or WM6.5 yet but I kind-of assume that they work since WM6 works. Also you may need to support DTMF during the call as well? Since I was writing a LAP dll I followed the following page which you may find useful: [LAP Implementation Issues](http://msdn.microsoft.com/en-us/library/aa923659.aspx) These examples are in C so you will have to translate them into C#. To setup trapping of the "talk" key for a specific window you need to do: ``` SendMessage(SHFindMenuBar(window_hwnd), SHCMBM_OVERRIDEKEY, VK_TTALK, MAKELPARAM((SHMBOF_NODEFAULT|SHMBOF_NOTIFY), (SHMBOF_NODEFAULT|SHMBOF_NOTIFY)); ``` You can turn on/off the trap at any time. To turn the trap off it easy as well: ``` SendMessage(SHFindMenuBar(window_hwnd), SHCMBM_OVERRIDEKEY, VK_TTALK, MAKELPARAM(0, (SHMBOF_NODEFAULT|SHMBOF_NOTIFY)); ``` To detect when the 'Talk' key is pressed you need to trap the WM\_HOTKEY window message on the window proc: ``` case WM_HOTKEY: switch(HIWORD(lParam)) { case VK_TTALK: // make ph call break; } return TRUE; ``` To make a phone call you need to use the "PhoneMakeCall" API: ``` #include <phone.h> void MakePhoneCall(const wchar_t* number) { PHONEMAKECALLINFO call; memset(&call, 0x0, sizeof(PHONEMAKECALLINFO)); call.cbSize = sizeof(PHONEMAKECALLINFO); call.dwFlags = PMCF_DEFAULT; call.pszDestAddress = number; PhoneMakeCall(&call); } ``` To support DTMF during a phone call you need to track the phone call using [SNAPI](http://msdn.microsoft.com/en-us/library/aa455748.aspx) (I believe there is a C# library to help you out there [SystemProperty](http://msdn.microsoft.com/en-us/library/microsoft.windowsmobile.status.systemproperty.aspx)). 
Setup after starting the call: ``` #include <snapi.h> RegistryNotifyWindow(SN_PHONEACTIVECALLCOUNT_ROOT, SN_PHONEACTIVECALLCOUNT_PATH, SN_PHONEACTIVECALLCOUNT_VALUE, window_hwnd, callback_window_msg_number /*e.g. WM_APP */, 0, NULL, &phone_call_notify_handle); ``` You will be called back with the window message you supply when the call count changes. You need to read the registry and check that the call count drops to zero. If it does you need to close the SNAPI handle: ``` RegistryCloseNotification(phone_call_notify_handle); ``` While in the call send a message to the cprog application with the key that was pressed by the user: ``` #define WM_CPROG_SEND_VKEY_DTMF (WM_APP+3) // Sends the DTMF tone(s) through to the current call (converting from VKEY to DTMF chars) BOOL PhoneSendDTMF(UINT uvKey) { BOOL bRet = FALSE; static HWND s_hwndCProg = NULL; TCHAR chDTMF = MapVKeyToChar(uvKey); // Attempt to find the cprog window (MSCprog). // Try to keep this window handle cached. if(NULL == s_hwndCProg || !IsWindow(s_hwndCProg)) { s_hwndCProg = FindWindow(TEXT("MSCprog"), NULL); } // Send WM_CPROG_SEND_VKEY_DTMF to the CProg window. if(NULL != s_hwndCProg) { bRet = BOOLIFY(PostMessage(s_hwndCProg, WM_CPROG_SEND_VKEY_DTMF, (WPARAM)chDTMF, 0)); } return bRet; } ```
You can catch all keys in the world (apart from CTRL+ALT+DEL on desktop) via a keyhook: [static extern IntPtr SetWindowsHookEx(HookType hook, HookProc callback, IntPtr hMod, uint dwThreadId);](http://www.pinvoke.net/default.aspx/user32/SetWindowsHookEx.html) You can use this (or one of the other overloads) in CE via coredll.dll (instead of user32). However, this is not in the .NET Compact Framework and requires a P/Invoke. The best joy with "free" key pressery is Form.KeyPreview = true; and that only gives you the keys that get hit when the form is focused and sometimes nothing when dealing with devices that have special keys (possibly such as your "send" key, as I had a similar situation with the "Application1" key on an Intermec unit a while back). There is, however, one fantastic alternative: I believe [OpenNETCF](http://www.opennetcf.com/) [either already does or will very soon roll with a keyhook](http://blog.opennetcf.com/ctacke/2008/09/03/KeyboardHookingInSDFVNext.aspx) (via the WinAPI call above) exposing a delightful managed wrapper. Joy! If that's not ready yet there is an abundance of articles and code kicking around that demonstrate how to use this via Google, so in the worst case scenario you should be able to find something if you perform a search like [this](http://www.google.co.uk/search?q=SetWindowsHookEx+compact+framework+c%23&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a). Perhaps something like [this](http://blog.opennetcf.com/ayakhnin/CategoryView,category,Compact%20Framework.aspx)?
Is there a way to capture the send key on Windows Mobile and have it dial a specific number?
[ "", "c#", "windows-mobile", "compact-framework", "key", "" ]
\*\* Dup: [What's the difference between X = X++; vs X++;?](https://stackoverflow.com/questions/226002/whats-the-difference-between-x-x-vs-x) \*\* So, even though I know you would never actually do this in code, I'm still curious: ``` public static void main(String[] args) { int index = 0; System.out.println(index); // 0 index++; System.out.println(index); // 1 index = index++; System.out.println(index); // 1 System.out.println(index++); // 1 System.out.println(index); // 2 } ``` Note that the 3rd `sysout` is still `1`. In my mind the line `index = index++;` means "set index to index, then increment index by 1" in the same way `System.out.println(index++);` means "pass index to the println method then increment index by 1". This is not the case however. Can anyone explain what's going on?
This is a duplicate question. **EDIT:** I can't seem to find the original :P oh well. `a = a++` uses the post-increment, which your compiler interprets as: ``` a = function() { var old_value = a; a++; return old_value; } ``` **EDIT 2:** [What's the difference between X = X++; vs X++;?](https://stackoverflow.com/questions/226002/whats-the-difference-between-x-x-vs-x)
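To see those semantics spelled out as plain Java, here is an illustrative desugaring (the method names are made up; the statement ordering mirrors Java's evaluation of a postfix `++` inside an assignment):

```java
// Illustrative desugaring only; names are hypothetical.
class PostIncrementDemo {

    // what `index = index++;` actually does
    static int assignThenPostIncrement(int index) {
        int oldValue = index;   // 1. postfix ++ saves the current value...
        index = index + 1;      // 2. ...then increments the variable
        index = oldValue;       // 3. the assignment writes the SAVED value back
        return index;           // net effect: index is unchanged
    }

    // what a plain `index++;` does
    static int postIncrementOnly(int index) {
        index++;
        return index;
    }
}
```

The assignment overwriting the freshly incremented variable with the saved old value is exactly why the third `println` in the question still shows 1.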
`value++;` is post-increment. ``` int firstValue = 9; int secondValue = firstValue++; ``` firstValue is now 10, but secondValue is 9, the value of firstValue before it was incremented. Now with pre-increment: ``` int firstValue = 9; int secondValue = ++firstValue; ``` firstValue and secondValue are now 10: firstValue is incremented and then its value is assigned to secondValue.
Why doesn't `index = index++` increment `index`?
[ "", "java", "" ]
I have a MySQL table LOGIN\_LOG with fields ID, PLAYER, TIMESTAMP and ACTION. ACTION can be either 'login' or 'logout'. Only around 20% of the logins have an accompanying logout row. For those that do, I want to calculate the average duration. I'm thinking of something like ``` select avg(LL2.TIMESTAMP - LL1.TIMESTAMP) from LOGIN_LOG LL1 inner join LOGIN_LOG LL2 on LL1.PLAYER = LL2.PLAYER and LL2.TIMESTAMP > LL1.TIMESTAMP left join LOGIN_LOG LL3 on LL3.PLAYER = LL1.PLAYER and LL3.TIMESTAMP between LL1.TIMESTAMP + 1 and LL2.TIMESTAMP - 1 and LL3.ACTION = 'login' where LL1.ACTION = 'login' and LL2.ACTION = 'logout' and isnull(LL3.ID) ``` is this the best way to do it, or is there one more efficient?
Given the data you have, there probably isn't anything much faster you can do because you have to look at a LOGIN and a LOGOUT record, and ensure there is no other LOGIN (or LOGOUT?) record for the same user between the two. Alternatively, find a way to ensure that a disconnect records a logout, so that the data is complete (instead of 20% complete). However, the query probably still has to ensure that the criteria are all met, so it won't help the query all that much. If you can get the data into a format where the LOGIN and corresponding LOGOUT times are both in the same record, then you can simplify the query immensely. I'm not clear if the SessionManager does that for you.
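As a sanity check of the self-join logic, here is the same shape of query run against an in-memory SQLite table with made-up rows (SQLite stands in for MySQL, so `isnull(LL3.ID)` becomes `l3.id IS NULL`; the strict `<`/`>` comparisons replicate the `between TIMESTAMP + 1 and TIMESTAMP - 1` trick for integer timestamps):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE login_log (id INTEGER PRIMARY KEY, player TEXT,
                            ts INTEGER, action TEXT);
    INSERT INTO login_log (player, ts, action) VALUES
        ('alice', 100, 'login'),
        ('alice', 160, 'logout'),  -- 60 s session
        ('bob',   200, 'login'),   -- never logs out: must be ignored
        ('alice', 300, 'login'),
        ('alice', 420, 'logout');  -- 120 s session
""")

avg_duration = conn.execute("""
    SELECT AVG(l2.ts - l1.ts)
    FROM login_log l1
    JOIN login_log l2
      ON l2.player = l1.player AND l2.ts > l1.ts
    LEFT JOIN login_log l3
      ON l3.player = l1.player
     AND l3.ts > l1.ts AND l3.ts < l2.ts
     AND l3.action = 'login'
    WHERE l1.action = 'login'
      AND l2.action = 'logout'
      AND l3.id IS NULL
""").fetchone()[0]
```

With these rows the login-without-logout is dropped and the average comes out to (60 + 120) / 2 = 90 seconds, which is the behaviour the original query intends.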
Do you have a SessionManager-type object that can time out sessions? If so, a timeout could be logged there, and you could get the last activity time from that plus the timeout period. Or you log all activity on the website/service, and then you can query website/service visit duration directly and see what activities they performed. For a website, Apache log analysers can probably generate the required stats.
SQL query to calculate visit duration from log table
[ "", "sql", "mysql", "duration", "" ]
I'm wondering where I find the source to show how the operator \*\* is implemented in Python. Can someone point me in the right direction?
The python grammar definition (from which the parser is generated using [pgen](http://www.python.org/dev/peps/pep-0269/)), look for 'power': [Grammar/Grammar](http://svn.python.org/view/python/trunk/Grammar/Grammar?rev=65872&view=markup) The python ast, look for 'ast\_for\_power': [Python/ast.c](http://svn.python.org/view/python/trunk/Python/ast.c?rev=67590&view=markup) The python eval loop, look for 'BINARY\_POWER': [Python/ceval.c](http://svn.python.org/view/python/trunk/Python/ceval.c?rev=67666&view=markup) Which calls PyNumber\_Power (implemented in [Objects/abstract.c](http://svn.python.org/view/python/trunk/Objects/abstract.c?rev=66043&view=markup)): ``` PyObject * PyNumber_Power(PyObject *v, PyObject *w, PyObject *z) { return ternary_op(v, w, z, NB_SLOT(nb_power), "** or pow()"); } ``` Essentially, this invokes the **pow** slot. For long objects (the only default integer type in 3.0) this is implemented in the long\_pow function [Objects/longobject.c](http://svn.python.org/view/python/trunk/Objects/longobject.c?rev=65518&view=markup), for int objects (in the 2.x branches) it is implemented in the int\_pow function [Objects/intobject.c](http://svn.python.org/view/python/trunk/Objects/intobject.c?rev=64753&view=markup) If you dig into long\_pow, you can see that after vetting the arguments and doing a bit of setup, the heart of the exponentiation can be seen here: ``` if (Py_SIZE(b) <= FIVEARY_CUTOFF) { /* Left-to-right binary exponentiation (HAC Algorithm 14.79) */ /* http://www.cacr.math.uwaterloo.ca/hac/about/chap14.pdf */ for (i = Py_SIZE(b) - 1; i >= 0; --i) { digit bi = b->ob_digit[i]; for (j = 1 << (PyLong_SHIFT-1); j != 0; j >>= 1) { MULT(z, z, z) if (bi & j) MULT(z, a, z) } } } else { /* Left-to-right 5-ary exponentiation (HAC Algorithm 14.82) */ Py_INCREF(z); /* still holds 1L */ table[0] = z; for (i = 1; i < 32; ++i) MULT(table[i-1], a, table[i]) for (i = Py_SIZE(b) - 1; i >= 0; --i) { const digit bi = b->ob_digit[i]; for (j = PyLong_SHIFT - 5; j >= 0; j -= 
5) { const int index = (bi >> j) & 0x1f; for (k = 0; k < 5; ++k) MULT(z, z, z) if (index) MULT(z, table[index], z) } } } ``` Which uses algorithms discussed in [Chapter 14.6](http://www.cacr.math.uwaterloo.ca/hac/about/chap14.pdf) of the [Handbook of Applied Cryptography](http://www.cacr.math.uwaterloo.ca/hac/) which describes efficient exponentiation algorithms for arbitrary precision arithmetic.
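The binary branch above is ordinary square-and-multiply. A loose Python transcription of it, purely for illustration (CPython actually works digit-by-digit on its internal multi-precision representation rather than on Python-level ints):

```python
def binary_pow(a, b):
    """Left-to-right binary exponentiation (HAC algorithm 14.79)."""
    assert b >= 0
    z = 1
    for bit in bin(b)[2:]:   # walk the exponent's bits, most significant first
        z = z * z            # MULT(z, z, z)
        if bit == '1':
            z = z * a        # MULT(z, a, z)
    return z
```

Each bit of the exponent costs one squaring, plus one extra multiply when the bit is set, which is why the loop runs in O(log b) multiplications instead of O(b).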
There are two different implementations: one for int (long in 3.0) objects and another for float objects. The float pow is the float\_pow(PyObject \*v, PyObject \*w, PyObject \*z) function defined in the Objects/floatobject.c file of the Python source code. This function calls pow() from the C stdlib's math.h. The int pow has its own implementation: the function int\_pow(PyIntObject \*v, PyIntObject \*w, PyIntObject \*z) defined in Objects/intobject.c (longobject.c for 3.0) of the Python source code.
How is ** implemented in Python?
[ "", "python", "" ]
Is there a simple C++ method to use pattern matching on strings? The code should look like this: ``` if (regexpcmp("l?nole*[0-9]", "linoleum1")) { //we have a match! } else { //no match } ```
Not in the core language. Use Boost.Regex or an external library like `pcre`. In a Unix environment you almost certainly have access to the POSIX regular expression functions (`regcomp`, `regerror`, `regexec`, `regfree`) which are C-like rather than C++-like but do work.
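Since C++11, which post-dates this answer, the standard library itself ships `<regex>`, so the asker's hypothetical `regexpcmp` can be sketched with no external dependency (note the pattern must be a genuine regex, not a shell glob, and `regex_match` requires the whole string to match):

```cpp
#include <cassert>  // used by the checks below
#include <regex>
#include <string>

bool regexpcmp(const std::string& pattern, const std::string& text) {
    // std::regex_match succeeds only if the WHOLE string matches;
    // use std::regex_search instead for substring matching.
    return std::regex_match(text, std::regex(pattern));
}
```

The default grammar is ECMAScript; `std::regex` throws `std::regex_error` on a malformed pattern, which a production wrapper would want to catch.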
Did you already look at [Boost.Regex](http://www.boost.org/doc/libs/release/libs/regex/)? ``` const boost::regex e("l?nole*[0-9]"); if (regex_match("linoleum1", e)) { //we have a match! } else { //no match } ```
Is there a function to use pattern matching (using regular expressions) in C++?
[ "", "c++", "regex", "string", "" ]
I am working on a background program that will be running for a long time, and I have a external logging program ([SmartInspect](http://www.gurock.com/products/smartinspect/)) that I want to feed with some values periodically, to monitor it in realtime when debugging. I know I can simply fire up multiple programs, like the Task Manager, or IARSN TaskInfo, but I'd like to keep everything in my own program for this, as I also wants to add some simple rules like if the program uses more than X% CPU, flag this in the log. I have a background thread that periodically feeds some statistics to SmartInspect, like memory consumption, working set, etc. Is it possible for this thread to get a reasonably accurate measure of how much of the computer's CPU resources it consumes? The main program is a single-threaded application (apart from the watchdog thread that logs statistics) so if a technique is limited to *how much does a single thread use* then that would be good too. I found some entries related to something called *rusage* for Linux and C. Is there something similar I can use for this? --- **Edit:** Ok, I tried the performance counter way, but it added quite a lot of GC-data each time called, so the graph for memory usage and garbage collection skyrocketed. I guess I'll just leave this part out for now.
You can also use [System.Diagnostics.**Process.TotalProcessorTime**](http://msdn.microsoft.com/en-us/library/system.diagnostics.process.totalprocessortime.aspx) and [System.Diagnostics.**ProcessThread.TotalProcessorTime**](http://msdn.microsoft.com/en-us/library/system.diagnostics.processthread.totalprocessortime.aspx) properties to calculate your processor usage as this [article](http://www.codeproject.com/KB/system/processescpuusage.aspx) describes.
Have a look at `System.Diagnostics.PerformanceCounter`. If you run up `perfmon.exe`, you'll see the range of performance counters available to you (set the 'performance object' to 'Process'), one of which is '% Processor Time'.
Can a C# program measure its own CPU usage somehow?
[ "", "c#", "monitoring", "cpu", "performancecounter", "" ]
When designing a lookup table (enum) in SqlServer 2005, if you know the number of entries will never get very high, should you use tinyint instead of int? I'm most concerned about performance, particularly efficiency of indexes. Let's say you have these representative tables: ``` Person ------ PersonId int (PK) PersonTypeId tinyint (FK to PersonTypes) ``` and ``` PersonTypes ----------- PersonTypeId tinyint PersonTypeName varchar(50) ``` The obvious factors are data size and coding hassle. When we get to 100 million rows in the person table, we're storing 300 million less bytes with tinyint as opposed to int, plus the space taken up by our indexes. Not a huge amount of data, but significant if the design decision is applied to dozens of big tables. The coding hassle, of course, comes from all those casting problems back in the ASP.NET C#/VB code. If we set aside those two issues, what else comes into play? Will queries be much more efficient due to the decreased size of the index pages? Or is there some sort of padding that happens that will just negate the benefits? Any other gotchas? I've always just used ints personally, but I'm considering tinyint for an upcoming redesign/migration effort on some huge tables, so I'd love to get some advice. [Edit] After experimenting with this, the coding hassles I anticipated turned out to be a non-issue. Changing from int to tinyint hasn't resulted in any casting problems at all.
The narrower a table (or index node entry) is, the more records (or index nodes) can fit on a single IO page, and the fewer physical (and logical) read IO operations are required for any query. Also, the more index nodes there are on a single page, the fewer levels there may be in the index, from root to leaf level, and if by making a table narrower you pass the threshold where the index can be one level smaller, this can have a dramatic effect on performance. If by switching to TinyInt you change your table from 200 bytes wide to 197 bytes wide, it probably won't make any difference... But if you change it from 20 bytes to 14 (say you have two ints in there), then it could be dramatic...
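A back-of-envelope sketch of that page-density argument (8,060 bytes is the documented in-row limit per SQL Server data page; the row widths below are made up for illustration):

```python
USABLE_PAGE_BYTES = 8060  # max in-row bytes on a SQL Server data page

def pages_needed(rows, row_bytes):
    rows_per_page = USABLE_PAGE_BYTES // row_bytes
    return -(-rows // rows_per_page)  # ceiling division

rows = 100 * 1000 * 1000                   # the 100M-row table from the question
with_int = pages_needed(rows, 23)          # hypothetical 23-byte row using int
with_tinyint = pages_needed(rows, 20)      # same row with tinyint (3 bytes saved)

saved_pages = with_int - with_tinyint
```

With these invented widths the saving works out to tens of thousands of 8 KB pages, i.e. a few hundred MB less I/O surface, which is where the real win over the raw byte count lives.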
Memory 101: Smaller stuff means holding more in RAM at once and thus fewer hard disk reads. If the DB is big enough and you're running certain kinds of queries, this could be a very serious factor. But it probably won't make a big difference.
Is it worth the trouble to use tinyint instead of int for SqlServer lookup tables?
[ "", "sql", "sql-server", "database-design", "" ]
Before I write my own I will ask all y'all. I'm looking for a C++ class that is almost exactly like a STL vector but stores data into an array on the stack. Some kind of STL allocator class would work also, but I am trying to avoid any kind of heap, even static allocated per-thread heaps (although one of those is my second choice). The stack is just more efficient. It needs to be almost a drop in replacement for current code that uses a vector. For what I was about to write myself I was thinking of something like this: ``` char buffer[4096]; stack_vector<match_item> matches(buffer, sizeof(buffer)); ``` Or the class could have buffer space allocated internally. Then it would look like: ``` stack_vector<match_item, 256> matches; ``` I was thinking it would throw std::bad\_alloc if it runs out of space, although that should not ever happen. **Update** Using Chromium's stack\_container.h works great! The reason I hadn't thought of doing it this way myself is that I have always overlooked the allocator object parameter to the STL collection constructors. I have used the template parameter a few times to do static pools but I'd never seen code or written any that actually used the object parameter. I learned something new. Very cool! The code is a bit messy and for some reason GCC forced me to declare the allocator as an actual item instead of constructing it into vector's allocator parameter. 
It went from something like this: ``` typedef std::pair< const char *, const char * > comp_list_item; typedef std::vector< comp_list_item > comp_list_type; comp_list_type match_list; match_list.reserve(32); ``` To this: ``` static const size_t comp_list_alloc_size = 128; typedef std::pair< const char *, const char * > comp_list_item; typedef StackAllocator< comp_list_item, comp_list_alloc_size > comp_list_alloc_type; typedef std::vector< comp_list_item, comp_list_alloc_type > comp_list_type; comp_list_alloc_type::Source match_list_buffer; comp_list_alloc_type match_list_alloc( &match_list_buffer ); comp_list_type match_list( match_list_alloc ); match_list.reserve( comp_list_alloc_size ); ``` And I have to repeat that whenever I declare a new one. But it works just like I wanted. I noticed that stack\_container.h has a StackVector defined and I tried using it. But it doesn't inherit from vector or define the same methods so it wasn't a drop-in replacement. I didn't want to rewrite all the code using the vector so I gave up on it.
You don't have to write a completely new container class. You can stick with your STL containers, but change the second template parameter of, for example, `std::vector` to give it your custom allocator which allocates from a stack buffer. The Chromium authors wrote an allocator just for this: <https://chromium.googlesource.com/chromium/chromium/+/master/base/stack_container.h> It works by allocating a buffer where you say how big it is. You create the container and call `container.reserve(buffer_size);`. If you overflow that size, the allocator will automatically get elements from the heap (since it is derived from `std::allocator`, it will in that case just use the facilities of the standard allocator). I haven't tried it, but it looks like it's from Google, so I think it's worth a try. Usage is like this: ``` StackVector<int, 128> s; s->push_back(42); // overloaded operator-> s->push_back(43); // to get the real std::vector. StackVector<int, 128>::ContainerType & v = s.container(); std::cout << v[0] << " " << v[1] << std::endl; ```
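For flavour, here is a toy allocator in the same spirit, written from scratch for illustration; it is not Chromium's code and cuts several corners (it never reuses freed space, always throws `std::bad_alloc` on overflow instead of falling back to the heap, and ignores container moves):

```cpp
#include <cassert>  // used by the checks below
#include <cstddef>
#include <new>
#include <vector>

// Toy illustration only: serve allocations from a fixed buffer owned by
// the allocator itself; throw std::bad_alloc when it is exhausted.
template <class T, std::size_t N>
struct FixedBufferAllocator {
    using value_type = T;
    template <class U> struct rebind { using other = FixedBufferAllocator<U, N>; };

    alignas(T) unsigned char buffer[N * sizeof(T)];
    std::size_t used = 0;  // bump pointer, in units of T

    FixedBufferAllocator() = default;
    template <class U>
    FixedBufferAllocator(const FixedBufferAllocator<U, N>&) {}

    T* allocate(std::size_t n) {
        if (used + n > N) throw std::bad_alloc();
        T* p = reinterpret_cast<T*>(buffer) + used;
        used += n;
        return p;
    }
    void deallocate(T*, std::size_t) {}  // stack-style: nothing to free

    // distinct instances own distinct buffers, so they never compare equal
    template <class U>
    bool operator==(const FixedBufferAllocator<U, N>&) const { return false; }
    template <class U>
    bool operator!=(const FixedBufferAllocator<U, N>&) const { return true; }
};
```

Usage mirrors the question's second form: `std::vector<match_item, FixedBufferAllocator<match_item, 256>>` with an up-front `reserve` so the vector never regrows past the buffer.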
It seems that [boost::static\_vector](http://www.boost.org/doc/libs/1_55_0/doc/html/container/non_standard_containers.html#container.non_standard_containers.static_vector) is what you are searching for. From the documentation: > static\_vector is an hybrid between vector and array: like vector, it's a sequence container with contiguous storage that can change in size, along with the static allocation, low overhead, and fixed capacity of array. static\_vector is based on Adam Wulkiewicz and Andrew Hundt's high-performance varray class. > > The number of elements in a static\_vector may vary dynamically up to a fixed capacity because elements are stored within the object itself similarly to an array.
Looking for C++ STL-like vector class but using stack storage
[ "", "c++", "data-structures", "stl", "vector", "" ]
I'm implementing unit tests for a family of functions that all share a number of invariants. For example, calling the function with two matrices produces a matrix of known shape. I would like to write unit tests to test the entire family of functions for this property, without having to write an individual test case for each function (particularly since more functions might be added later). One way to do this would be to iterate over a list of these functions: ``` import unittest import numpy from somewhere import the_functions from somewhere.else import TheClass class Test_the_functions(unittest.TestCase): def setUp(self): self.matrix1 = numpy.ones((5,10)) self.matrix2 = numpy.identity(5) def testOutputShape(self): """Output of functions must be of a certain shape""" for function in all_functions: output = function(self.matrix1, self.matrix2) fail_message = "%s produces output of the wrong shape" % str(function) self.assertEqual(self.matrix1.shape, output.shape, fail_message) if __name__ == "__main__": unittest.main() ``` I got the idea for this from [Dive Into Python](http://www.diveintopython.net/unit_testing/romantest.html). There, it's not a list of functions being tested but a list of known input-output pairs. The problem with this approach is that if any element of the list fails the test, the later elements don't get tested. I looked at subclassing unittest.TestCase and somehow providing the specific function to test as an argument, but as far as I can tell that prevents us from using unittest.main() because there would be no way to pass the argument to the testcase. I also looked at dynamically attaching "testSomething" functions to the testcase, by using setattr with a lambda, but the testcase did not recognize them. How can I rewrite this so it remains trivial to expand the list of tests, while still ensuring every test is run?
You could use a metaclass to dynamically insert the tests. This works fine for me: ``` import unittest class UnderTest(object): def f1(self, i): return i + 1 def f2(self, i): return i + 2 class TestMeta(type): def __new__(cls, name, bases, attrs): funcs = [t for t in dir(UnderTest) if t[0] == 'f'] def doTest(t): def f(slf): ut=UnderTest() getattr(ut, t)(3) return f for f in funcs: attrs['test_gen_' + f] = doTest(f) return type.__new__(cls, name, bases, attrs) class T(unittest.TestCase): __metaclass__ = TestMeta def testOne(self): self.assertTrue(True) if __name__ == '__main__': unittest.main() ```
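If Python 3.4+ is available (it post-dates this thread), `unittest` now covers this directly: keep one test method and wrap each function in `subTest`, so later functions still run and are reported separately after one fails. The two matrix functions below are made-up stand-ins for the real family:

```python
import unittest

# hypothetical stand-ins for the real family of functions under test
def add_one(m1, m2):
    return [[v + 1 for v in row] for row in m1]

def first_only(m1, m2):
    return [list(row) for row in m1]

ALL_FUNCTIONS = [add_one, first_only]

class ShapeInvariantTest(unittest.TestCase):
    def test_output_shape(self):
        matrix1 = [[1.0] * 10 for _ in range(5)]                       # 5x10
        matrix2 = [[float(i == j) for j in range(5)] for i in range(5)]  # 5x5 identity
        for function in ALL_FUNCTIONS:
            # each failure is reported against the function's name,
            # and the loop keeps going after a failing function
            with self.subTest(function=function.__name__):
                output = function(matrix1, matrix2)
                self.assertEqual(len(output), len(matrix1))
                self.assertEqual(len(output[0]), len(matrix1[0]))
```

This keeps "add a function" down to one list entry, with no metaclass machinery.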
Here's my favorite approach to the "family of related tests". I like explicit subclasses of a TestCase that expresses the common features. ``` class MyTestF1( unittest.TestCase ): theFunction= staticmethod( f1 ) def setUp(self): self.matrix1 = numpy.ones((5,10)) self.matrix2 = numpy.identity(5) def testOutputShape( self ): """Output of functions be of a certain shape""" output = self.theFunction(self.matrix1, self.matrix2) fail_message = "%s produces output of the wrong shape" % (self.theFunction.__name__,) self.assertEqual(self.matrix1.shape, output.shape, fail_message) class TestF2( MyTestF1 ): """Includes ALL of TestF1 tests, plus a new test.""" theFunction= staticmethod( f2 ) def testUniqueFeature( self ): # blah blah blah pass class TestF3( MyTestF1 ): """Includes ALL of TestF1 tests with no additional code.""" theFunction= staticmethod( f3 ) ``` Add a function, add a subclass of `MyTestF1`. Each subclass of MyTestF1 includes all of the tests in MyTestF1 with no duplicated code of any kind. Unique features are handled in an obvious way. New methods are added to the subclass. It's completely compatible with `unittest.main()`
How do I concisely implement multiple similar unit tests in the Python unittest framework?
[ "", "python", "unit-testing", "" ]
I want to use JavaScript to control an embedded Windows Media Player, as well as access any properties that the player exposes. I've found a few hacky examples online, but nothing concrete. I really need access to play, pause, stop, seek, fullscreen, etc. I'd also like to have access to any events the player happens to broadcast. Help would be wonderful (I already have a Flash equiv, just so you know), thanks!
There is an API in Microsoft's developer center, but it will only work if you embed Windows Media Player using ActiveX. To learn more about the API, check out MSDN: <http://msdn.microsoft.com/en-us/library/dd564034(VS.85).aspx>
The API requires ActiveX connectivity native to Internet Explorer, or can use a [plugin for Firefox](http://port25.technet.com/pages/windows-media-player-firefox-plugin-download.aspx). Here's a sample page that might get you started. ``` <html> <head> <title>so-wmp</title> <script> onload=function() { player = document.getElementById("wmp"); player.URL = "test.mp3"; }; function add(text) { document.body .appendChild(document.createElement("div")) .appendChild(document.createTextNode(text)); }; function handler(type) { var a = arguments; add(type +" = "+ PlayStates[a[1]]); }; // http://msdn.microsoft.com/en-us/library/bb249361(VS.85).aspx var PlayStates = { 0: "Undefined", // Windows Media Player is in an undefined state. 1: "Stopped", // Playback of the current media item is stopped. 2: "Paused", // Playback of the current media item is paused. When a media item is paused, resuming playback begins from the same location. 3: "Playing", // The current media item is playing. 4: "ScanForward", // The current media item is fast forwarding. 5: "ScanReverse", // The current media item is fast rewinding. 6: "Buffering", // The current media item is getting additional data from the server. 7: "Waiting", // Connection is established, but the server is not sending data. Waiting for session to begin. 8: "MediaEnded", // Media item has completed playback. 9: "Transitioning", // Preparing new media item. 10: "Ready", // Ready to begin playing. 11: "Reconnecting" // Reconnecting to stream. }; </script> <script for="wmp" event="PlayStateChange(newState)"> // http://msdn.microsoft.com/en-us/library/bb249362(VS.85).aspx handler.call(this, "playstatechange", newState); </script> </head> <body> <div id="page"> <object id="wmp" classid="clsid:6BF52A52-394A-11d3-B153-00C04F79FAA6" type="application/x-oleobject"> </object> </div> </body> </html> ```
Is there a documented JavaScript API for Windows Media Player?
[ "", "javascript", "windows-media-player", "" ]
What's the best way to load HTML markup for a custom jQuery UI widget? So far, I've seen elements simply created using strings (i.e. `$(...).wrap('<div></div>')`) which is fine for something simple. However, this makes it extremely difficult to modify later for more complex elements. This seems like a fairly common problem, but I also know that the jQuery UI library is new enough that there may not be a widely accepted solution for this. Any ideas?
Something which is quite neat to do is this: ``` var newDiv = $("<div></div>"); //Create a new element and save a reference newDiv.attr("id","someid").appendTo("body"); ``` That way, you are creating an element, and storing its reference in a variable, for later use.
If you have a large amount of complex HTML, and don't want to deal with hardcoded strings in your JavaScript code, you can put it in a separate HTML file and use the [Ajax load function](http://docs.jquery.com/Ajax/load). It's chainable, so you can load the HTML file (the whole thing, or a selected fragment), inject it into your current DOM, and keep right on going.
jQuery UI: Best way to load HTML markup?
[ "", "javascript", "jquery", "jquery-ui", "jquery-ui-plugins", "" ]
We have a number of MS SQL Server 2005 installations (both Standard and Enterprise editions) on our webservers. From time to time I need to administrate them from home. Right now I'm using Remote Desktop to connect to my work machine, but I'd prefer to install SQL Server Management Studio on my home machine so that I can work more efficiently. I don't have a spare license, so I was wondering... if I installed MS SQL Server 2005 Express on my home machine, could I use it to effectively administrate the higher-end installations on our servers?
See this question: [What's the difference between SQL Server Management Studio and the Express edition?](https://stackoverflow.com/questions/117347/whats-the-difference-between-sql-server-management-studio-and-the-express-versi) Also, if you have any trouble doing anything from Managment Studio Express, there are third-party tools out there that have no trouble connecting to SQL Server. Personally, I've noticed a few things missing out of the box: * Import/Export * Activity Monitor * Job monitor However, the question I linked to indicated you may be able to restore most of that for use when you connect to a full sql server edition.
If you do a client-tools-only install of the Standard edition (i.e. just SSMS) and don't install the database, agent, etc., then I'm pretty sure you don't need another license and won't be violating the EULA. If you want a full install, the SQL Server Developer edition is only $49 and comes with everything. <http://www.microsoft.com/sqlserver/2005/en/us/developer.aspx>
Can SQL Server Express be used to effectively administrate a SQL Server Standard/Enterprise installation?
[ "", "sql", "sql-server", "administration", "" ]
How can I match (PCRE) everything in between two tags? I tried something like this:

> `<!--\s*LoginStart\s*-->(.*)<!--\s*LoginEnd\s*-->`

But it didn't work out too well for me. I'm kind of new to regular expressions, so I was hoping someone would be kind enough to explain to me how I would accomplish this, if it's even possible with regular expressions. Thanks
```
$string = '<!-- LoginStart --><div id="stuff">text</div><!-- LoginEnds -->'; $regex = '#<!--\s*LoginStart\s*-->(.*?)<!--\s*LoginEnds\s*-->#s'; preg_match($regex, $string, $matches); print_r($matches); // $matches[1] = <div id="stuff">text</div> ``` explanations: ``` (.*?) = non-greedy match (it matches the first <!-- LoginEnds --> it finds) s = modifier in $regex (end of the variable) allows multiline matches such as '<!-- LoginStart -->stuff more stuff <!-- LoginEnds -->' ```
PHP and regex? Here's some suggestions: ``` '/<!--\s*LoginStart\s*-->(.*)<!--\s*LoginEnd\s*-->/Us' ``` Might be better - the `U` capitalized makes the regex non-greedy, which means it'll stop at the first `<!--` that may work. But the important one is the `s`, which tells the regex to match a newline with the `.` character. Depending on how certain you are on the capitalization, adding an `i` at the end will make the regex search case-insensitive.
Match everything in between two tags with Regular Expressions?
[ "", "php", "html", "regex", "tags", "" ]
What's the best way, using C# or other .NET language, to determine if a file path string is on the local machine or a remote server? It's possible to determine if a path string is UNC using the following: ``` new Uri(path).IsUnc ``` That works great for paths that start with C:\ or other drive letter, but what about paths like: ``` \\machinename\sharename\directory \\10.12.34.56\sharename\directory ``` ...where both refer to the local machine - these are UNC paths but are still local.
Don't know if there's a more efficient way of doing this, but it seems to work for me: ``` IPAddress[] host; IPAddress[] local; bool isLocal = false; host = Dns.GetHostAddresses(uri.Host); local = Dns.GetHostAddresses(Dns.GetHostName()); foreach (IPAddress hostAddress in host) { if (IPAddress.IsLoopback(hostAddress)) { isLocal = true; break; } else { foreach (IPAddress localAddress in local) { if (hostAddress.Equals(localAddress)) { isLocal = true; break; } } if (isLocal) { break; } } } ```
This is how I did it. ``` public static bool IsLocal(DirectoryInfo dir) { foreach (DriveInfo d in DriveInfo.GetDrives()) { if (string.Compare(dir.Root.FullName, d.Name, StringComparison.OrdinalIgnoreCase) == 0) //[drweb86] Fix for different case. { return (d.DriveType != DriveType.Network); } } throw new DriveNotFoundException(); } ```
Method to determine if path string is local or remote machine
[ "", "c#", ".net", "uri", "unc", "" ]
I have an XML file and an XML schema in another file and I'd like to validate that my XML file adheres to the schema. How do I do this in Python? I'd prefer something using the standard library, but I can install a third-party package if necessary.
I am assuming you mean using XSD files. Surprisingly there aren't many python XML libraries that support this. lxml does however. Check [Validation with lxml](http://lxml.de/validation.html). The page also lists how to use lxml to validate with other schema types.
## An example of a simple validator in Python3 using the popular library [lxml](http://lxml.de/) **Installation lxml** ``` pip install lxml ``` If you get an error like *"Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?"*, try to do this first: ``` # Debian/Ubuntu apt-get install python-dev python3-dev libxml2-dev libxslt-dev # Fedora 23+ dnf install python-devel python3-devel libxml2-devel libxslt-devel ``` --- **The simplest validator** Let's create simplest **validator.py** ``` from lxml import etree def validate(xml_path: str, xsd_path: str) -> bool: xmlschema_doc = etree.parse(xsd_path) xmlschema = etree.XMLSchema(xmlschema_doc) xml_doc = etree.parse(xml_path) result = xmlschema.validate(xml_doc) return result ``` then write and run **main.py** ``` from validator import validate if validate("path/to/file.xml", "path/to/scheme.xsd"): print('Valid! :)') else: print('Not valid! :(') ``` --- **A little bit of OOP** In order to validate more than one file, there is no need to create an *XMLSchema* object every time, therefore: **validator.py** ``` from lxml import etree class Validator: def __init__(self, xsd_path: str): xmlschema_doc = etree.parse(xsd_path) self.xmlschema = etree.XMLSchema(xmlschema_doc) def validate(self, xml_path: str) -> bool: xml_doc = etree.parse(xml_path) result = self.xmlschema.validate(xml_doc) return result ``` Now we can validate all files in the directory as follows: **main.py** ``` import os from validator import Validator validator = Validator("path/to/scheme.xsd") # The directory with XML files XML_DIR = "path/to/directory" for file_name in os.listdir(XML_DIR): print('{}: '.format(file_name), end='') file_path = '{}/{}'.format(XML_DIR, file_name) if validator.validate(file_path): print('Valid! :)') else: print('Not valid! :(') ``` --- For more options read here: [Validation with lxml](http://lxml.de/validation.html#xmlschema)
Validating with an XML schema in Python
[ "", "python", "xml", "validation", "xsd", "" ]
To implement the data access code in our application we need some framework to wrap around JDBC (ORM is not our choice, because of scalability). The coolest framework I have worked with is [Spring-JDBC](http://static.springframework.org/spring/docs/2.5.x/reference/jdbc.html). However, my company's policy is to avoid external dependencies, especially Spring, J2EE, etc. So we are thinking about writing our own hand-made JDBC framework, with functionality similar to Spring-JDBC: row mapping, error handling, support for Java 5 features, but without transaction support. Does anyone have experience writing such a JDBC wrapper framework? If anyone has experience using other JDBC wrapper frameworks, please share it. Thanks in advance.
We wrote our own wrapper. This topic is worthy of a paper but I doubt I'll ever have time to write it, so here are some key points: * we embraced sql and made no attempt to hide it. the only tweak was to add support for named parameters. parameters are important because we do not encourage the use of on-the-fly sql (for security reasons) and we always use PreparedStatements. * for connection management, we used Apache DBCP. This was convenient at the time but it's unclear how much of this is needed with modern JDBC implementations (the docs on this stuff are lacking). DBCP also pools PreparedStatements. * we didn't bother with row mapping. instead (for queries) we used something similar to Apache DbUtils' ResultSetHandler, which allows you to "feed" the result set into a method which can then dump the information wherever you'd like it. This is more flexible, and in fact it wouldn't be hard to implement a ResultSetHandler for row mapping. for inserts/updates we created a generic record class (basically a hashmap with some extra bells and whistles). the biggest problem with row mapping (for us) is that you're stuck as soon as you do an "interesting" query, because you may have fields that map to different classes; because you may have a hierarchical class structure but a flat result set; or because the mapping is complex and data dependent. * we built in error logging. for exception handling: on a query we trap and log, but for an update we trap, log, and rethrow as an unchecked exception. * we provided transaction support using a wrapper approach. the caller provides the code that performs the transaction, and we make sure that the transaction is properly managed, with no chance of forgetting to finish the transaction and with rollback and error handling built-in. * later on, we added a very simplistic relationship scheme that allows a single update/insert to apply to a record and all its dependencies. to keep things simple, we did not use this on queries, and we specifically decided not to support this with deletes, because it is more reliable to use cascaded deletes.

This wrapper has been successfully used in two projects to date. It is, of course, lightweight, but these days everyone says their code is lightweight. More importantly, it increases programmer productivity, decreases the number of bugs (and makes problems easier to track down), and it's relatively easy to trace through if need be, because we don't believe in adding lots of layers just to provide beautiful architecture.
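The named-parameter tweak described above can be sketched without any framework: rewrite `:name` placeholders into JDBC `?` markers before handing the SQL to a `PreparedStatement`, remembering the order in which names appeared. The class and method names below are invented for illustration, not the poster's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Rewrites ":name" placeholders into JDBC "?" markers and records the
// parameter order, so values can later be bound by name. Skips ':' that
// appears inside single-quoted SQL string literals.
class NamedParameterSql {
    private final String parsedSql;
    private final List<String> parameterNames = new ArrayList<>();

    NamedParameterSql(String sql) {
        StringBuilder out = new StringBuilder();
        boolean inQuote = false;
        for (int i = 0; i < sql.length(); i++) {
            char c = sql.charAt(i);
            if (c == '\'') {
                inQuote = !inQuote;
                out.append(c);
            } else if (!inQuote && c == ':' && i + 1 < sql.length()
                    && Character.isJavaIdentifierStart(sql.charAt(i + 1))) {
                int end = i + 1;
                while (end < sql.length() && Character.isJavaIdentifierPart(sql.charAt(end))) {
                    end++;
                }
                parameterNames.add(sql.substring(i + 1, end));
                out.append('?');
                i = end - 1;
            } else {
                out.append(c);
            }
        }
        parsedSql = out.toString();
    }

    String getParsedSql() { return parsedSql; }
    List<String> getParameterNames() { return parameterNames; }
}
```

A `PreparedStatement` would then be created from `getParsedSql()`, with each named value bound at the positions recorded by `getParameterNames()`.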
Spring-JDBC is fantastic. Consider that for an open-source project like Spring, the downside of an external dependency is minimized. You can adopt the most stable version of Spring that satisfies your JDBC abstraction requirements, and you know that you'll always be able to modify the source code yourselves if you ever run into an issue -- without depending on an external party. You can also examine the implementation for any security concerns that your organization might have with code written by an external party.
simple jdbc wrapper
[ "", "java", "jdbc", "data-access", "spring-jdbc", "" ]
1. Why is operator '&' defined for bool?, and operator '&&' is not? 2. How exactly does this 1) bool? & bool? and 2) bool? and bool work? Any other "interesting" operator semantics on Nullable? Any overloaded operators for generic T?
Operators on `Nullable<T>` are "lifted" operators. What this means is: if T has the operator, T? will have the "lifted" counterpart. && and || aren't really operators in the same sense as & and | - for example, they can't be overloaded - from the ECMA spec 14.2.2 Operator overloading: > The overloadable binary operators are: > + - \* / % & | ^ << >> == != > < >= <= Only the operators listed above can be > overloaded. In particular, it is not > possible to overload member access, > method invocation, or the =, &&, ||, > ??, ?:, checked, unchecked, new, > typeof, as, and is operators. Likewise, from the ECMA spec, 14.2.7 Lifted operators, the lifted operators are: > For the unary operators > + ++ - -- ! ~ > > For the binary operators > + - \* / % & | ^ << >> > > For the equality operators > == != > > For the relational operators < > <= >= So basically, the short-circuiting operators aren't defined as lifted operators. [edit: added crib sheet] * Lifted operator: a compiler provided operator on `Nullable<T>`, based on the operators of T - for example: the `int` "+" operator gets "lifted" onto `int?`, defined as: (int? x, int? y) => (x.HasValue && y.HasValue) ? (x.Value + y.Value) : (int?) null; * Operator overloading: the act of providing a custom operator implementation for a given type; for example `decimal` and `DateTime` provide various operator overloads * Short-circuiting: the normal behavior of `&&` and `||` (in many languages, including C++ and C#) - i.e. the second operand *might not be evaluated* - i.e. (expression1, expression2) => expression1() ? expression2() : false; Or perhaps a simpler example: ``` bool someFlag = Method1() && Method2(); ``` if `Method1()` returns false, then `Method2()` isn't executed (since the compiler already knows that the overall answer is false). This is important if `Method2()` has side-effects, since as saving to the database...
There are no short-circuiting operators (`&&`, `||`) defined for `bool?`. Only the logical AND and inclusive OR operators are defined, and they behave like this: ``` x y x & y x | y true true true true true false false true true null null true false true false true false false false false false null false null null true null true null false false null null null null null ```
Nullable<T>: and overloaded operators, bool? & bool
[ "", "c#", "nullable", "" ]
How would you append an integer to a `char*` in c++?
First convert the int to a `char*` using `sprintf()`: ``` char integer_string[32]; int integer = 1234; sprintf(integer_string, "%d", integer); ``` Then to append it to your other char\*, use `strcat()`: ``` char other_string[64] = "Integer: "; // make sure you allocate enough space to append the other string strcat(other_string, integer_string); // other_string now contains "Integer: 1234" ```
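If C++11 or later is available (an assumption — the answer above targets C-style strings), the same result needs no fixed-size buffer at all:

```cpp
#include <string>

// Build e.g. "Integer: 1234" with std::string + std::to_string (C++11):
// no buffer to size, no overflow risk. If some API still needs a char*,
// result.c_str() provides one (valid while the string is alive and
// unmodified).
std::string append_int(const std::string &prefix, int value) {
    return prefix + std::to_string(value);
}
```
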
You could also use stringstreams. ``` #include <sstream> const char *theString = "Some string"; int theInt = 5; std::stringstream ss; ss << theString << theInt; ``` The string can then be accessed using `ss.str();`
Append an int to char*
[ "", "c++", "integer", "char", "append", "" ]
List comprehensions can be useful in certain situations, but they can also be rather horrible to read.. As a slightly exaggerated example, how would you indent the following? ``` allUuids = [x.id for x in self.db.query(schema.allPostsUuid).execute(timeout = 20) if x.type == "post" and x.deleted is not False] ```
It depends on how long they are. I tend to structure them like so: ``` [x.id for x in self.db.query(schema.allPostsUuid).execute(timeout=20) if x.type == 'post' and x.deleted is not False and ... and ...] ``` That way every expression has its own line. If any line becomes too big I like to extract it out in a lambda or expression: ``` transform = lambda x: x.id results = self.db.query(schema.allPostsUuid).execute(timeout=20) condition = lambda x: x.deleted is not False and ... and ... [transform(x) for x in results if condition(x)] ``` And then if a lambda becomes too long it gets promoted to a function.
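Carrying the answer's last step through — once a lambda outgrows one line it becomes a named function — a runnable sketch (the `Post` records here are invented purely to make the example self-contained):

```python
from collections import namedtuple

# Stand-in for the query results in the original question.
Post = namedtuple("Post", "id type deleted")

def is_wanted(x):
    """The promoted condition: grown past lambda size, so it gets a name.

    (The slightly odd `is not False` test is taken verbatim from the
    original comprehension.)
    """
    return x.type == "post" and x.deleted is not False

results = [
    Post(id=1, type="post", deleted=None),
    Post(id=2, type="comment", deleted=None),
    Post(id=3, type="post", deleted=False),
]

all_uuids = [x.id for x in results if is_wanted(x)]
```

The comprehension itself shrinks to one readable line, and the condition gets a docstring and a place to grow.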
Where I work, our coding guidelines would have us do something like this: ``` all_posts_uuid_query = self.db.query(schema.allPostsUuid) all_posts_uuid_list = all_posts_uuid_query.execute(timeout=20) all_uuid_list = [ x.id for x in all_posts_uuid_list if ( x.type == "post" and not x.deleted # <-- if you don't care about NULLs / None ) ] ```
How to indent Python list-comprehensions?
[ "", "python", "coding-style", "" ]
Do you put unit tests in the same project for convenience or do you put them in a separate assembly? If you put them in a separate assembly like we do, we end up with a number of extra projects in the solution. It's great for unit testing while coding but how do you release the application without all of these extra assemblies?
In my opinion, unit tests should be placed in a separate assembly from production code. Here are just a few cons of placing unit tests in the same assembly or assemblies as production code are: 1. Unit tests get shipped with production code. The only thing shipped with product code is production code. 2. Assemblies will be unnecessarily bloated by unit tests. 3. Unit tests can affect build processes like automated or continuous build. I don't really know of any pros. Having an extra project (or 10) isn't a con. **Edit: More Info On Build and Shipping** I would further recommend that any automated build process place production and unit tests into different locations. Ideally, the unit test build process only runs if the production code builds, and copies the product files into the unit tests directory. Doing it this way results in the actual bits being separated for shipping, etc. Additionally, it is fairly trivial to run automated unit testing at this point on all tests in a particular directory. To summarize, here is the general idea for a daily build and testing and shipping of bits and other files: 1. Production build runs, placing production files into a specific "production" directory. 1. Build production projects only. 2. Copy compiled bits and other files into a "production" directory. 3. Copy bits and other files into a release candidate directory, aka a Christmas release directory would be "Release20081225". 2. If production build succeeds, unit test build runs. 1. Copy production code to "tests" directory. 2. Build unit tests to "tests" directory. 3. Run unit tests. 3. Send build notifications and unit tests results to developers. 4. When a release candidate (like Release20081225) is accepted, ship these bits.
Separate project, but in the same solution. (I've worked on products with separate solutions for test and production code - it's horrible. You're always switching between the two.) The reasons for separate projects are as stated by others. Note that if you're using data-driven tests, you might end up with quite a significant amount of bloat if you include the tests in the production assembly. If you need access to the internal members of the production code, use [InternalsVisibleTo](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx).
Do you put unit tests in same project or another project?
[ "", "c#", "unit-testing", "" ]
In C#, I'm creating an XML file from a DataTable using dataTable.WriteXml(filePath), and get the following: ``` <?xml version="1.0" encoding="utf-8" ?> <ExperienceProfiles> <ExperienceProfile> <Col1>blah</Col1> <Col2>4ed397bf-a4d5-4ace-9d44-8c1a5cdb0f34</Col2> </ExperienceProfile> </ExperienceProfiles> ``` How can I get it to write the XML in the following format?: ``` <?xml version="1.0" encoding="utf-8" ?> <ExperienceProfiles> <ExperienceProfile Col1="blah" Col2="blah" ></ExperienceProfile> </ExperienceProfiles> ```
What you want is some way to tell the DataSet the expected format of your data. You're in luck, the DataSet supports just this feature. You will need to create an XML schema for your data and load it into the DataSet before you write out the XML. In the schema define Col1 and Col2 as attributes of the ExperienceProfile element and the DataSet will know to format the output document to meet the requirements specified in the schema. If you are not comfortable with creating a schema you can create a sample of the XML file the way that you want it to be formatted, then take a look at the XmlSchemaInference class in the framework. This class can be used to automatically generate your schema, you may need to tweak the output a little but it can help if you are not familiar with XSD.
You can use the `ColumnMapping` property of the DataTable's columns: ``` column.ColumnMapping = MappingType.Attribute; ```
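A fuller sketch of the `ColumnMapping` approach (the file name is invented, and the root-element note reflects `WriteXml`'s default behavior):

```csharp
using System;
using System.Data;

DataTable table = new DataTable("ExperienceProfile"); // becomes the row element name
table.Columns.Add("Col1", typeof(string));
table.Columns.Add("Col2", typeof(Guid));

// Ask the writer to emit every column as an XML attribute
// instead of a nested element.
foreach (DataColumn column in table.Columns)
{
    column.ColumnMapping = MappingType.Attribute;
}

table.Rows.Add("blah", Guid.NewGuid());
table.WriteXml("profiles.xml");
// => <DocumentElement><ExperienceProfile Col1="blah" Col2="..." /></DocumentElement>
```

To get `<ExperienceProfiles>` as the root instead of the default `<DocumentElement>`, put the table in a `DataSet` whose `DataSetName` is "ExperienceProfiles" and call `WriteXml` on the DataSet.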
How to specify format of XML output when writing from a DataTable?
[ "", "c#", "ado.net", "xsd", "" ]
I'm opening a new browser window from my site for some of the members. However, some may later close it, or it might have initially failed to open. Is there a snippet of fairly plain Javascript that can be run on each page to confirm if another browser window is open, and if not, to provide a link to re-open it? **[clarification:]** The code to check is a window is open would be run on other pages - not just in the same window and URL that opened it. Imagine a user logging in, the window (tries to) open, and then they surf around in the same tab/window (or others) for some time before they close the 2nd window (or it never opened) - I want to be able to notice the window has been closed some time after the initial attempt at opening/after it's closed, so I'm not sure that checking the javascript's return from window.open() (with popup\_window\_handle.closed) is easily used, or indeed possible.
This [excellent, comprehensive article](http://www.irt.org/articles/js205/index.htm) *("Almost complete control of pop-up windows")* should answer all your questions about javascript popup windows. *"JavaScript 1.1 also introduced the window **closed** property. Using this property, it is possible to detect if a window has been opened and subsequently closed. We can use this to load a page directly into the opener window if it still open for JavaScript 1.1 enabled browsers:"* ``` <script language="JavaScript"><!-- function open_main(page) { window_handle = window.open(page,'main'); return false; } //--></script> <script language="JavaScript1.1"><!-- function open_main(page) { if (opener && !opener.closed) { opener.location.href = page; } else { window_handle = window.open(page,'main'); } return false; } //--></script> <a href="example.htm" onClick="return open_main('example.htm')">example.htm</a> ``` **Addition:** You can get the window handle back in another page by referring to the popup's name this way: ``` window_handle = window.open(page,'myPopupName'); ``` I guess in your case you should think about a consistent way to create all the names of the popup windows throughout the entire application.
``` var myWin = window.open(...); if (myWin.closed) { myWin = window.open(...); } ```
How do I confirm a browser window is open, with Javascript?
[ "", "javascript", "popup", "" ]
I am getting an 'access is denied' error when I attempt to delete a folder that is not empty. I used the following command in my attempt: `os.remove("/folder_name")`. What is the most effective way of removing/deleting a folder/directory that is not empty?
``` import shutil shutil.rmtree('/folder_name') ``` [Standard Library Reference: shutil.rmtree](http://docs.python.org/library/shutil.html#shutil.rmtree). By design, `rmtree` fails on folder trees containing read-only files. If you want the folder to be deleted regardless of whether it contains read-only files, then use ``` shutil.rmtree('/folder_name', ignore_errors=True) ```
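For the read-only case (common on Windows), an alternative to `ignore_errors` is an error handler that clears the read-only bit and retries — a sketch; note that `onerror` is deprecated in favour of `onexc` from Python 3.12, but still works:

```python
import os
import shutil
import stat

def _clear_readonly_and_retry(func, path, _excinfo):
    """rmtree error handler: clear the read-only bit, then retry the
    removal that just failed (func is os.unlink, os.rmdir, etc.)."""
    os.chmod(path, stat.S_IWRITE)
    func(path)

def force_rmtree(folder):
    """Delete a directory tree even if it contains read-only files."""
    shutil.rmtree(folder, onerror=_clear_readonly_and_retry)
```

Unlike `ignore_errors=True`, this actually deletes the read-only files instead of silently leaving them behind.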
From [the python docs](http://docs.python.org/library/os.html#os.walk) on `os.walk()`: ``` # Delete everything reachable from the directory named in 'top', # assuming there are no symbolic links. # CAUTION: This is dangerous! For example, if top == '/', it # could delete all your disk files. import os for root, dirs, files in os.walk(top, topdown=False): for name in files: os.remove(os.path.join(root, name)) for name in dirs: os.rmdir(os.path.join(root, name)) ```
How do I remove/delete a folder that is not empty?
[ "", "python", "file", "" ]
**Update** - for those of a facetious frame of mind, you can assume that Aggregate still produces the normal result whatever function is passed to it, including in the case being optimized. I wrote this program to build a long string of integers from 0 to 19999 separate by commas. ``` using System; using System.Linq; using System.Diagnostics; namespace ConsoleApplication5 { class Program { static void Main(string[] args) { const int size = 20000; Stopwatch stopwatch = new Stopwatch(); stopwatch.Start(); Enumerable.Range(0, size).Select(n => n.ToString()).Aggregate((a, b) => a + ", " + b); stopwatch.Stop(); Console.WriteLine(stopwatch.ElapsedMilliseconds + "ms"); } } } ``` When I run it, it says: ``` 5116ms ``` Over five seconds, terrible. Of course it's because the whole string is being copied each time around the loop. But what if make one very small change indicated by the comment? ``` using System; using System.Linq; using System.Diagnostics; namespace ConsoleApplication5 { using MakeAggregateGoFaster; // <---- inserted this class Program { static void Main(string[] args) { const int size = 20000; Stopwatch stopwatch = new Stopwatch(); stopwatch.Start(); Enumerable.Range(0, size).Select(n => n.ToString()).Aggregate((a, b) => a + ", " + b); stopwatch.Stop(); Console.WriteLine(stopwatch.ElapsedMilliseconds + "ms"); } } } ``` Now when I run it, it says: ``` 42ms ``` Over 100x faster. ### Question What's in the MakeAggregateGoFaster namespace? **Update 2:** [Wrote up my answer here](http://incrediblejourneysintotheknown.blogspot.com/2008/12/optimizing-aggregate-for-string.html).
You are 'overriding' System.Linq.Aggregate with your own extension method in namespace MakeAggregateGoFaster. Perhaps specialised on `IEnumerable<string>` and making use of a StringBuilder? Maybe taking an `Expression<Func<string, string, string>>` instead of a `Func<string, string, string>` so it can analyse the expression tree and compile some code that uses StringBuilder instead of calling the function directly? Just guessing.
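One concrete guess at what lives in that namespace (illustrative only — the actual code behind the article may differ): an `Aggregate` overload on `IEnumerable<string>` that takes an expression tree, pattern-matches `(a, b) => a + separator + b`, runs the recognized case through a `StringBuilder`, and falls back to the real `Aggregate` otherwise.

```csharp
namespace MakeAggregateGoFaster
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;
    using System.Text;

    public static class AggregateExtensions
    {
        // More specific than Enumerable.Aggregate<TSource>, so overload
        // resolution picks this for IEnumerable<string>; the lambda at the
        // call site is then captured as an expression tree, not a delegate.
        public static string Aggregate(this IEnumerable<string> source,
                                       Expression<Func<string, string, string>> func)
        {
            string separator = TryGetSeparator(func);
            if (separator == null)
            {
                // Unrecognized shape: defer to the standard implementation,
                // so the result is unchanged for arbitrary functions.
                return Enumerable.Aggregate(source, func.Compile());
            }
            StringBuilder sb = null;
            foreach (string item in source)
            {
                if (sb == null) sb = new StringBuilder(item);
                else sb.Append(separator).Append(item);
            }
            if (sb == null) throw new InvalidOperationException("Sequence contains no elements");
            return sb.ToString();
        }

        // Matches ((a + "sep") + b) where a and b are the lambda's
        // parameters and "sep" is a constant string.
        private static string TryGetSeparator(Expression<Func<string, string, string>> func)
        {
            if (func.Body is BinaryExpression outer
                && outer.NodeType == ExpressionType.Add
                && outer.Right == func.Parameters[1]
                && outer.Left is BinaryExpression inner
                && inner.NodeType == ExpressionType.Add
                && inner.Left == func.Parameters[0]
                && inner.Right is ConstantExpression c
                && c.Value is string sep)
            {
                return sep;
            }
            return null;
        }
    }
}
```

With `using MakeAggregateGoFaster;` in scope, the call site in the question compiles against this overload unchanged, which is why the one-line edit is enough.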
Why not use one of the other forms of Aggregate? ``` Enumerable.Range(0, size ).Aggregate(new StringBuilder(), (a, b) => a.Append(", " + b.ToString()), (a) => a.Remove(0,2).ToString()); ``` You can specify any type for your seed, perform whatever formatting or custom calls are needed in the first lambda function and then customize the output type in the second lambda function. The built in features already provide the flexibility you need. My runs went from 1444ms to 6ms.
Optimizing Aggregate for String Concatenation
[ "", "c#", "optimization", "linq-to-objects", "" ]
With JSR 311 and its implementations we have a powerful standard for exposing Java objects via REST. However on the client side there seems to be something missing that is comparable to Apache Axis for SOAP - something that hides the web service and marshals the data transparently back to Java objects. How do you create Java RESTful clients? Using HTTPConnection and manual parsing of the result? Or specialized clients for e.g. Jersey or Apache CXR?
This is an old question (2008) so there are many more options now than there were then:

* **Apache CXF** has three different [REST Client options](http://cxf.apache.org/docs/jax-rs-client-api.html)
* **[Jersey](https://jersey.java.net/)** (mentioned above).
* **[Spring RestTemplate](http://blog.springsource.com/2009/03/27/rest-in-spring-3-resttemplate/)** superseded by **Spring WebClient**
* **[Commons HTTP Client](http://hc.apache.org/httpclient-3.x/)** - build your own for older Java projects.

**UPDATES (projects still active in 2020):**

* **[Apache HTTP Components (4.2) Fluent adapter](http://hc.apache.org/httpcomponents-client-ga/tutorial/html/fluent.html)** - Basic replacement for JDK, used by several other candidates in this list. Better than old Commons HTTP Client 3 and easier to use for building your own REST client. You'll have to use something like [Jackson for JSON parsing](http://jackson.codehaus.org/) support and you can use [HTTP Components URIBuilder to construct resource URIs](http://hc.apache.org/httpcomponents-client-ga/tutorial/html/fundamentals.html#d5e49) similar to the Jersey/JAX-RS REST client. HTTP Components also supports NIO, but I doubt you will get better performance than BIO given the short-request nature of REST. **Apache HttpComponents 5** has HTTP/2 support.
* **[OkHttp](https://github.com/square/okhttp)** - Basic replacement for JDK, similar to HTTP Components, used by several other candidates in this list. Supports newer HTTP protocols (SPDY and HTTP2). Works on Android. Unfortunately it does not offer a true reactor-loop based async option (see Ning and HTTP Components above). However if you use the newer HTTP2 protocol this is less of a problem (assuming connection count is the problem).
* **[Ning Async-http-client](https://github.com/AsyncHttpClient/async-http-client)** - provides NIO support. Previously known as **[Async-http-client](https://github.com/sonatype/async-http-client)** by Sonatype.
* **[Feign](https://github.com/OpenFeign/feign)** wrapper for lower level http clients (okhttp, apache httpcomponents). Auto-creates clients based on interface stubs similar to some Jersey and CXF extensions. Strong spring integration.
* **[Retrofit](http://square.github.io/retrofit/)** - wrapper for lower level http clients (okhttp). Auto-creates clients based on interface stubs similar to some Jersey and CXF extensions.
* **[Volley](https://github.com/google/volley)** wrapper for jdk http client, by google
* **[google-http](https://github.com/googleapis/google-http-java-client)** wrapper for jdk http client, or apache httpcomponents, by google
* **[Unirest](https://github.com/Kong/unirest-java)** wrapper for jdk http client, by kong
* **[Resteasy](https://github.com/resteasy)** JakartaEE wrapper for jdk http client, by jboss, part of jboss framework
* **[jcabi-http](https://github.com/jcabi/jcabi-http)** wrapper for apache httpcomponents, part of jcabi collection
* **[restlet](https://github.com/restlet/restlet-framework-java)** wrapper for apache httpcomponents, part of restlet framework
* **[rest-assured](https://github.com/rest-assured/rest-assured)** wrapper with asserts for easy testing

A caveat on picking HTTP/REST clients: make sure to check what your framework stack is using for an HTTP client and how it does threading, and ideally use the same client if it offers one. That is, if you're using something like Vert.x or Play, you may want to try to use its backing client to participate in whatever bus or reactor loop the framework provides... otherwise be prepared for possibly interesting threading issues.
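Underneath every library in the list sits the same plumbing; a dependency-free sketch using only the JDK (the `/ping` endpoint and its JSON body are invented for the demo, and the tiny in-process server exists only to make the example self-contained):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class MiniRestDemo {

    // Minimal GET: roughly what every REST client wrapper does under the hood.
    static String get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }

    // Tiny in-process server; port 0 asks the OS for any free port.
    static HttpServer startServer() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            byte[] reply = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, reply.length);
            exchange.getResponseBody().write(reply);
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

What the libraries add on top of this is exactly what makes them worth using: connection pooling, marshalling, error mapping, and interface-stub generation.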
As I mentioned in [this thread](https://stackoverflow.com/questions/165720/how-to-debug-restful-services#166269) I tend to use [Jersey](http://jersey.java.net/) which implements JAX-RS and comes with a nice REST client. The nice thing is if you implement your RESTful resources using JAX-RS then the Jersey client can reuse the entity providers such as for JAXB/XML/JSON/Atom and so forth - so you can reuse the same objects on the server side as you use on the client side unit test. For example [here is a unit test case](http://svn.apache.org/viewvc/activemq/camel/trunk/components/camel-rest/src/test/java/org/apache/camel/rest/resources/EndpointsTest.java?revision=700513&view=markup) from the [Apache Camel project](http://activemq.apache.org/camel/) which looks up XML payloads from a RESTful resource (using the JAXB object Endpoints). The resource(uri) method is defined in [this base class](http://svn.apache.org/viewvc/activemq/camel/trunk/components/camel-rest/src/test/java/org/apache/camel/rest/resources/TestSupport.java?revision=700513&view=markup) which just uses the Jersey client API. e.g. ``` clientConfig = new DefaultClientConfig(); client = Client.create(clientConfig); resource = client.resource("http://localhost:8080"); // let's get the XML as a String String text = resource("foo").accept("application/xml").get(String.class); ``` BTW I hope that future versions of JAX-RS add a nice client-side API along the lines of the one in Jersey.
How do you create a REST client for Java?
[ "", "java", "rest", "client", "" ]
I have a DTD reference in an XML file and I can't remove it. When I try to parse the file in Java I get "Caused by: java.net.SocketException: Network is unreachable: connect", because the DTD is remote. Can I somehow disable DTD checking?
You should be able to specify your own EntityResolver, or use specific features of your parser? See [here](https://stackoverflow.com/questions/155101/make-documentbuilderparse-ignore-dtd-references) for some approaches. A more complete example: ``` <?xml version="1.0"?> <!DOCTYPE foo PUBLIC "//FOO//" "foo.dtd"> <foo> <bar>Value</bar> </foo> ``` And xpath usage: ``` import java.io.File; import java.io.IOException; import java.io.StringReader; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.xpath.XPath; import javax.xml.xpath.XPathFactory; import org.w3c.dom.Document; import org.xml.sax.EntityResolver; import org.xml.sax.InputSource; import org.xml.sax.SAXException; public class Main { public static void main(String[] args) throws Exception { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = factory.newDocumentBuilder(); builder.setEntityResolver(new EntityResolver() { @Override public InputSource resolveEntity(String publicId, String systemId) throws SAXException, IOException { System.out.println("Ignoring " + publicId + ", " + systemId); return new InputSource(new StringReader("")); } }); Document document = builder.parse(new File("src/foo.xml")); XPathFactory xpathFactory = XPathFactory.newInstance(); XPath xpath = xpathFactory.newXPath(); String content = xpath.evaluate("/foo/bar/text()", document .getDocumentElement()); System.out.println(content); } } ``` Hope this helps...
This worked for me: ``` SAXParserFactory saxfac = SAXParserFactory.newInstance(); saxfac.setValidating(false); try { saxfac.setFeature("http://xml.org/sax/features/validation", false); saxfac.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false); saxfac.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false); saxfac.setFeature("http://xml.org/sax/features/external-general-entities", false); saxfac.setFeature("http://xml.org/sax/features/external-parameter-entities", false); } catch (Exception e1) { e1.printStackTrace(); } ```
how to disable dtd at runtime in java's xpath?
[ "", "java", "xpath", "dtd", "doctype", "" ]
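As a cross-language footnote to the Java answers above: skipping the external DTD is the default behavior in some other XML stacks. For instance, Python's `ElementTree` (shown purely for illustration, not as a Java solution) parses a document that declares an external DTD without ever attempting to fetch it:

```python
import xml.etree.ElementTree as ET

# A document that references an external DTD, like the foo.xml example above.
DOC = """<?xml version="1.0"?>
<!DOCTYPE foo PUBLIC "//FOO//" "foo.dtd">
<foo>
    <bar>Value</bar>
</foo>"""

# expat (ElementTree's parser) does not load external DTDs by default,
# so no EntityResolver-style boilerplate is needed here.
root = ET.fromstring(DOC)
print(root.find("bar").text)
```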
Java requires that you catch checked exceptions or declare them as thrown in the method signature. This isn't the case with C#, but I still feel that it is a good practice to catch all exceptions. Does anybody know of a tool which can process a C# project and point out places where an exception is thrown but not caught?
Check out the ExceptionFinder plug-in by Jason Bock for the .NET Reflector. It does just what you are looking for. Check it out on [CodePlex](http://www.codeplex.com/ExFinderReflector)
There is a R# plug-in that analyses thrown exceptions. <http://exceptionalplugin.codeplex.com/>
Checked exception catching in C#
[ "", "c#", "exception", "" ]
I have some code which ignores a specific exception. ``` try { foreach (FileInfo fi in di.GetFiles()) { collection.Add(fi.Name); } foreach (DirectoryInfo d in di.GetDirectories()) { populateItems(collection, d); } } catch (UnauthorizedAccessException ex) { //ignore and move onto next directory } ``` of course this results in a compile time warning as ex is unused. Is there some standard accept noop which should be used to remove this warning?
Just rewrite it as ``` catch (UnauthorizedAccessException) {} ```
As Dave M. and tvanfosson said, you want to rewrite it as ``` catch (UnauthorizedAccessException) {} ``` The bigger question that should be asked, however, is why you are catching an exception and ignoring it (commonly called swallowing the exception). This is generally a bad idea as it can (and usually does) hide problems in the application at runtime that can lead to very strange results and a difficult time debugging them.
Ignoring exceptions
[ "", "c#", "exception", "" ]
In Java, you can qualify local variables and method parameters with the final keyword. ``` public static void foo(final int x) { final String qwerty = "bar"; } ``` Doing so results in not being able to reassign x and qwerty in the body of the method. This practice nudges your code in the direction of immutability which is generally considered a plus. But, it also tends to clutter up code with "final" showing up everywhere. What is your opinion of the final keyword for local variables and method parameters in Java?
You should try to do this, whenever it is appropriate. Besides serving to warn you when you "accidentally" try to modify a value, it provides information to the compiler that can lead to better optimization of the class file. This is one of the points in the book, "Hardcore Java" by Robert Simmons, Jr. In fact, the book spends all of its second chapter on the use of final to promote optimizations and prevent logic errors. Static analysis tools such as PMD and the built-in SA of Eclipse flag these sorts of cases for this reason.
My personal opinion is that it is a waste of time. I believe that the visual clutter and added verbosity is not worth it. I have never been in a situation where I have reassigned (remember, this does not make objects immutable, all it means is that you can't reassign another reference to a variable) a variable in error. But, of course, it's all personal preference ;-)
Why would one mark local variables and method parameters as "final" in Java?
[ "", "java", "final", "" ]
What is the most direct and/or efficient way to convert a `char[]` into a `CharSequence`?
Without the copy: ``` CharSequence seq = java.nio.CharBuffer.wrap(array); ``` However, the `new String(array)` approach is likely to be easier to write, easier to read and faster.
A `String` is a `CharSequence`. So you can just create a new `String` given your `char[]`. ``` CharSequence seq = new String(arr); ```
Java: convert a char[] to a CharSequence
[ "", "java", "string", "" ]
I want to send an array constructed in javascript with the selected values of a multiple select. Is there a way to send this array to a php script using ajax?
You can post back to your server with XML or JSON. Your JavaScript will have to construct the post, which in the case of XML would require you to build the XML in JavaScript. JSON is not only lighter weight but easier to create in JavaScript. Check out [JSON-PHP](http://mike.teczno.com/json.html) for parsing JSON. You might want to take a look at [Creating JSON Data in PHP](http://extjs.com/learn/Tutorial:Creating_JSON_Data_in_PHP)
You might do that with the $.post method of jQuery (for example): ``` var myJavascriptArray = new Array('jj', 'kk', 'oo'); $.post('urltocallinajax', {'myphpvariable[]': myJavascriptArray }, function(data){ // do something with received data! }); ``` PHP will receive an array named **myphpvariable** which will contain the myJavascriptArray values. Is that it?
How can I send an array to php through ajax?
[ "", "php", "ajax", "multiple-select", "" ]
I'd like to write a plugin that does something with the currently edited file in Eclipse. But I'm not sure how to properly get the file's full path. This is what I do now: ``` IFile file = (IFile) window.getActivePage().getActiveEditor().getEditorInput(). getAdapter(IFile.class); ``` Now I have an IFile object, and I can retrieve its path: ``` file.getFullPath().toOSString(); ``` However this still only gives me the path relative to the workspace. How can I get the absolute path from that?
Looks like you want [`IResource.getRawLocation()`](http://help.eclipse.org/helios/index.jsp?topic=%2Forg.eclipse.platform.doc.isv%2Freference%2Fapi%2Forg%2Feclipse%2Fcore%2Fresources%2FIResource.html). That returns an `IPath`, which also has a `makeAbsolute()` method if you want to be doubly sure you've got an absolute path.
I think a more Java friendly solution would be to do use the following: ``` IResource.getLocation().toFile() ``` This takes advantage of the IPath API (the getLocation() part) and will return a java.io.File instance. Of course the other answers will probably get you to where you want to be too. On a tangential note, I find the IDE class `(org.eclipse.ui.ide.IDE)` a useful utility resource when it comes to editors.
Get the absolute path of the currently edited file in Eclipse
[ "", "java", "eclipse", "eclipse-plugin", "eclipse-api", "" ]
I had been steering away from C# for a while, because it was "just a Windows thing", and it fell out of my current needs. However It's been gaining popularity and now with Mono catching up, it's becoming more attractive but I was wondering what IDE are people using to Code C#(with Mono) on \*nix platforms.
I think [MonoDevelop](http://monodevelop.com/Main_Page) is the most popular.
I use [Vim](http://www.vim.org) for all my \*nix stuff. There's even a Vim plugin for VS ([ViEmu](http://www.viemu.com)) so you can use your Vim tricks from within the IDE as well.
What is a decent Mono Editor?
[ "", "c#", "editor", "mono", "" ]
What's the best way to pass data from one Windows Forms app (an office plugin) to another (exe written in C#) in C#?
I'll take a wild stab at this and say you probably want the office app to *phone home* to your exe? In this context, the "exe" is the server and the office app is the client. If you're using .NET 3.0, WCF is likely your best bet. I would structure the solution into three parts: 1. "Shared Contracts". These are interfaces that describe your services. If you have custom data objects that will be passed between the applications, they should be defined in this assembly as well. This assembly is shared between the client and the server. See "[Designing Service Contracts](https://learn.microsoft.com/en-us/dotnet/framework/wcf/designing-service-contracts)" for more info. 2. "Service". This assembly is your "exe" and it will reference the contracts and define the classes based on your service contracts. Your app will also host a ServiceClient for your service. The configuration file for this app will define how your ServiceClient will be exposed to the client (available as a web service, tcp, etc). See "[Implementing Service Contracts](https://learn.microsoft.com/en-us/dotnet/framework/wcf/implementing-service-contracts)" for more info. 3. "Client". Your plugin will reference the "Shared Contracts" assembly and will contain service-clients based on the contracts. The client can be auto-generated using the svcutil.exe tool. Both the "exe" and the "plugin" will require configuration files that define the bindings. When you want to pass data between client and server, your client will create an object from the "Shared Contracts" assembly and pass it to the service-client. The client's configuration file will figure out where to send the data. For a step-by-step tutorial on how to create a basic WCF service, [check out this Tutorial](https://learn.microsoft.com/en-us/dotnet/framework/wcf/getting-started-tutorial).
Yup. WCF is the way to go. I recommend checking out iDesign.net to use the InProcFactory class. You can shim up your office class into a service and call into your other application which is hosting a service. The other service can then call back into the office based service. You can use an IPC endpoint which will make the communication snappy. -Scott
How to do intra-application communication in .NET
[ "", "c#", ".net", "windows", "" ]
The default seems to be upper case, but is there really any reason to use upper case for keywords? I started using upper case, because I was just trying to match what [SQL Server](https://en.wikipedia.org/wiki/Microsoft_SQL_Server) gives me whenever I tried to create something, like a new [stored procedure](https://en.wikipedia.org/wiki/Stored_procedure). But then, I felt terrible for my baby (5th) finger, that always needs to hold down the `Shift` button, so I stopped using upper case. Is there a reason why I should go back to upper case?
It's just a matter of style, probably originating in the days when editors didn't do code colouring. I used to prefer all upper case, but I'm now leaning towards all lower. Either way, be consistent.
PERSONALLY, I DON'T LIKE MY SQL YELLING AT ME. IT REMINDS ME OF BASIC OR COBOL. So I prefer my T-SQL lowercase with database object names MixedCase. It is much easier to read, and literals and comments stand out.
Is there a good reason to use upper case for SQL keywords?
[ "", "sql", "coding-style", "capitalization", "" ]
I'm trying to get the name of the executable of a window that is outside my C# 2.0 application. My app currently gets a window handle (hWnd) using the GetForegroundWindow() call from "user32.dll". From the digging that I've been able to do, I think I want to use the GetModuleFileNameEx() function (from PSAPI) to obtain the name, but GetModuleFileNameEx() requires a handle to a Process, not a Window. Is it possible to get a process handle from a window handle? (Do I need to get the thread handle of the window first?) EDITED the first sentence to make it clearer what I'm trying to do. **UPDATE!** Here's the C# code that I found worked for me. The only caveat is *occasionally* it returns a file/path where the drive letter is a "?" instead of the actual drive letter (like "C"). -- Haven't figured out why yet. ``` [DllImport("user32.dll")] static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId); [DllImport("kernel32.dll")] static extern IntPtr OpenProcess(UInt32 dwDesiredAccess, Int32 bInheritHandle, UInt32 dwProcessId); [DllImport("psapi.dll")] static extern uint GetModuleFileNameEx(IntPtr hProcess, IntPtr hModule, [Out] StringBuilder lpBaseName, [In] [MarshalAs(UnmanagedType.U4)] int nSize); [DllImport("kernel32.dll")] [return: MarshalAs(UnmanagedType.Bool)] static extern bool CloseHandle(IntPtr hObject); private string GetWindowModuleFileName(IntPtr hWnd) { uint processId = 0; const int nChars = 1024; StringBuilder filename = new StringBuilder(nChars); GetWindowThreadProcessId(hWnd, out processId); IntPtr hProcess = OpenProcess(1040, 0, processId); GetModuleFileNameEx(hProcess,IntPtr.Zero,filename,nChars); CloseHandle(hProcess); return (filename.ToString()); } ```
You can call [GetWindowThreadProcessId](http://msdn.microsoft.com/en-us/library/ms633522(VS.85).aspx) and that will return you the process associated with the window. From that, you can call [OpenProcess](http://msdn.microsoft.com/en-us/library/ms684320.aspx) to open the process and get the handle to the process.
Been struggling with the same problem for an hour now, also got the first letter replaced by a **?** by using GetModuleFileNameEx. Finally came up with this solution using the **System.Diagnostics.Process** class. ``` [DllImport("user32.dll")] public static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint ProcessId); string GetProcessPathFromWindowHandle(IntPtr hwnd) { uint pid = 0; Win32.GetWindowThreadProcessId(hwnd, out pid); Process p = Process.GetProcessById((int)pid); return p.MainModule.FileName; } ```
How do I GetModuleFileName() if I only have a window handle (hWnd)?
[ "", "c#", "winapi", "hwnd", "getmodulefilename", "" ]
With multiple developers working on the same Tomcat application, I'd like to tell the application to install to a different path, based on the current user and revision control client/view. So, if Bob is building, the app should be installed in Bob's test environment, maybe /bob1 or something like that. Bob might have several revision control clients/views/workspaces he works with so he could have /bob1, /bob2, /bob3, etc. The install location is specified in the build.properties file. Is there a way to avoid checking that file out and changing it for each specific user and revision control view? Can "ant install" take arguments or be configured to consider environment variables for the install target?
I typically use a variation on the default properties answer already given: ``` <property file="local.properties" /> <property file="default.properties" /> ``` I read the local properties file first and the default one second. Users don't edit the default one (then accidentally check it in), they just define the properties they want to override in the local.properties.
You can override ant properties from the command line. ``` ant -Dinstall.location=/bob1 install ``` See [Running Ant](http://ant.apache.org/manual/running.html) for more information.
Building with ant : dynamic build options?
[ "", "java", "ant", "build-process", "" ]
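The local-then-default ordering in the accepted answer works because Ant properties are immutable: whichever file defines a property first wins, so `local.properties` overrides without editing the checked-in defaults. A sketch of that precedence rule (Python used only as illustration; the file contents below are hypothetical):

```python
def load_properties(*sources):
    """Merge property maps Ant-style: the FIRST source to define a key wins."""
    props = {}
    for source in sources:
        for key, value in source.items():
            props.setdefault(key, value)  # later definitions never overwrite
    return props

# Hypothetical contents of local.properties and default.properties:
local = {"install.location": "/bob1"}
default = {"install.location": "/opt/app", "app.name": "demo"}
print(load_properties(local, default))
```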
I've never actually used greasemonkey, but I was considering using it. Considering that GreaseMonkey allows you to let random people on the Internet change the behavior of your favorite websites, how safe can it be? Can they steal my passwords? Look at my private data? Do things I didn't want to do? How safe is Greasemonkey? Thanks
*Considering that GreaseMonkey allows you to let random people on the Internet change the behavior of your favorite websites, how safe can it be?* It's as safe as you allow it to be - but you aren't very clear, so let's look at it from a few perspectives: ## Web Developer Greasemonkey can't do anything to your website that a person with telnet can't already do to your website. It automates things a bit, but other than that if greasemonkey is a security hole, then your website design is flawed - not greasemonkey. ## Internet user with Greasemonkey loaded Like anything else you load on your system, greasemonkey can be used against you. Don't load scripts onto your system unless you trust the source (in both meanings of the term 'source'). It's fairly limited and sandboxed, but that doesn't mean it's safe, merely that it's harder for someone to do something nefarious. ## Internet user without Greasemonkey If you do not load greasemonkey or any of its scripts, it cannot affect you in any way. Greasemonkey does not alter the websites you visit unless you've loaded it on your system. ## Greasemonkey developer There's not much you can do beyond what can already be done with XUL and javascript, but it is possible to trash your mozilla and/or firefox profile, and possibly other parts of your system. Unlikely, difficult to do on purpose or maliciously, but it's not a bulletproof utility. *Develop responsibly.* -Adam
> Considering that GreaseMonkey allows you to let random people on the Internet change the behavior of your favorite websites Random people whose UserScript you have installed. No one can force you to install a UserScript. > Can they steal my passwords? Yes, a UserScript could modify a login page so it sent your password to an attacker. No, it cannot look at your current passwords, or for websites the UserScript isn't enabled for > Look at my private data? Yes, if your private data is viewable on a website that you've given a UserScript access too > Do things I didn't want to do? Yes, a UserScript could do things to a webpage (you've given it access to) that are unwanted > How safe is GreaseMonkey? As safe as the individual UserScripts you have installed
How safe is Greasemonkey?
[ "", "javascript", "security", "greasemonkey", "" ]
I'm looking into tightening up our ad code by moving it to an external jQuery script, but I obviously still need some HTML to target the ad to. So I was wondering if I can target a noscript element (or within a noscript element) since I'm going to have to leave that on the page anyway, or if I need to have some other element for the JavaScript to target? ``` <noscript> <div class="ad"><a href="foo"><img src="bar" alt="ad" /></a></div> </noscript> ``` My intention would be to change or strip the noscript element.
`<noscript>` content is not only not displayed when JS is active, it apparently is also not in the DOM. I tried accessing content inside a `<noscript>` area (hoping you could `clone()` it with jQuery and insert it somewhere else) but got back nothing.
You can target noscript elemements. ``` <noscript>asdfasFSD</noscript> <script> alert(document.getElementsByTagName("noscript")[0].innerHTML); </script> ``` This works in FF3, IE6 and Google Chrome. It will alert asdfasFSD for me.
Does noscript get acknowledged by javascript enabled browsers?
[ "", "javascript", "jquery", "html", "" ]
I have a file upload form that is being posted back to a servlet (using multipart/form-data encoding). In the servlet, I am trying to use Apache Commons to handle the upload. However, I also have some other fields in the form that are just plain fields. How can I read those parameters from the request? For example, in my servlet, I have code like this to read in the uploaded file: ``` // Create a factory for disk-based file items FileItemFactory factory = new DiskFileItemFactory(); // Create a new file upload handler ServletFileUpload upload = new ServletFileUpload(factory); // Parse the request Iterator /* FileItem */ items = upload.parseRequest(request).iterator(); while (items.hasNext()) { FileItem thisItem = (FileItem) items.next(); ... do stuff ... } ```
You could try something like this: ``` while (items.hasNext()) { FileItem thisItem = (FileItem) items.next(); if (thisItem.isFormField()) { if (thisItem.getFieldName().equals("somefieldname") { String value = thisItem.getString(); // Do something with the value } } } ```
Took me a few days to figure this out, but here it is and it works: you can read multipart data, files, and params. Here is the code: ``` try { ServletFileUpload upload = new ServletFileUpload(); FileItemIterator iterator = upload.getItemIterator(req); while(iterator.hasNext()){ FileItemStream item = iterator.next(); InputStream stream = item.openStream(); if(item.isFormField()){ if(item.getFieldName().equals("vFormName")){ byte[] str = new byte[stream.available()]; stream.read(str); full = new String(str,"UTF8"); } }else{ byte[] data = new byte[stream.available()]; stream.read(data); base64 = Base64Utils.toBase64(data); } } } catch (FileUploadException e) { e.printStackTrace(); } ```
How can I read other parameters in a multipart form with Apache Commons
[ "", "java", "apache-commons", "multipartform-data", "" ]
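To see why Commons FileUpload can hand back both kinds of items from one request: a multipart/form-data body interleaves plain fields and file parts, and a part is a "form field" exactly when its Content-Disposition carries no filename. A rough sketch of that split using Python's standard email parser on a hand-built body (illustrative only; this is not the FileUpload API):

```python
from email.parser import BytesParser
from email.policy import default

# A tiny hand-built multipart/form-data body: one plain field, one file part.
BODY = (
    b'Content-Type: multipart/form-data; boundary="BOUND"\r\n\r\n'
    b"--BOUND\r\n"
    b'Content-Disposition: form-data; name="description"\r\n\r\n'
    b"hello\r\n"
    b"--BOUND\r\n"
    b'Content-Disposition: form-data; name="upload"; filename="a.txt"\r\n'
    b"Content-Type: text/plain\r\n\r\n"
    b"file bytes\r\n"
    b"--BOUND--\r\n"
)

msg = BytesParser(policy=default).parsebytes(BODY)
fields, files = {}, {}
for part in msg.iter_parts():
    name = part.get_param("name", header="content-disposition")
    if part.get_filename():  # a filename marks a file part, not a plain field
        files[name] = part.get_payload(decode=True)
    else:
        fields[name] = part.get_content().strip()
print(fields, list(files))
```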
If I have a Django form such as: ``` class ContactForm(forms.Form): subject = forms.CharField(max_length=100) message = forms.CharField() sender = forms.EmailField() ``` And I call the `as_table()` method of an instance of this form, Django will render the fields in the same order as specified above. My question is: how does Django know the order in which the class variables were defined? (Also, how do I override this order, for example when I want to add a field from the class's `__init__` method?)
I went ahead and answered my own question. Here's the answer for future reference: In Django, `form.py` does some dark magic using the `__new__` method to load your class variables ultimately into `self.fields` in the order defined in the class. `self.fields` is a Django `SortedDict` instance (defined in `datastructures.py`). So to override this, say in my example you wanted sender to come first but needed to add it in an `__init__` method, you would do: ``` class ContactForm(forms.Form): subject = forms.CharField(max_length=100) message = forms.CharField() def __init__(self,*args,**kwargs): forms.Form.__init__(self,*args,**kwargs) #first argument, index is the position of the field you want it to come before self.fields.insert(0,'sender',forms.EmailField(initial=str(time.time()))) ```
New to Django 1.9 is **[Form.field\_order](https://docs.djangoproject.com/en/stable/ref/forms/api/#django.forms.Form.field_order)** and **[Form.order\_fields()](https://docs.djangoproject.com/en/stable/ref/forms/api/#django.forms.Form.order_fields)**. ``` # forms.Form example class SignupForm(forms.Form): password = ... email = ... username = ... field_order = ['username', 'email', 'password'] # forms.ModelForm example class UserAccount(forms.ModelForm): custom_field = models.CharField(max_length=254) def Meta: model = User fields = ('username', 'email') field_order = ['username', 'custom_field', 'password'] ```
How does Django Know the Order to Render Form Fields?
[ "", "python", "django", "django-forms", "" ]
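Stripped of Django's internals, the trick in the answer above is just "insert a key at a chosen position in an ordered mapping". A Django-free sketch of that reordering (the field values here are placeholder strings, not real form fields):

```python
def reorder(fields, order):
    """Return a new mapping whose keys start in `order`; the rest keep their relative order."""
    ordered = {name: fields[name] for name in order if name in fields}
    ordered.update((name, value) for name, value in fields.items() if name not in ordered)
    return ordered

form_fields = {"subject": "CharField", "message": "CharField", "sender": "EmailField"}
print(list(reorder(form_fields, ["sender"])))
```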
I need some help regarding an algorithm for randomness. The problem: there are 50 events going to happen in an 8-hour duration. Events can happen at random times. That means in each second the chance of an event happening is 50/(8\*60\*60) = .001736. How can I do this with a random generation algorithm? I can get a random number ``` int r = rand(); double chance = r/RAND_MAX; if(chance < 0.001736) then event happens else no event ``` But most of the time rand() returns 0 and 0 < 0.001736, so I am getting more events than required. Any suggestions? --- Sorry, I forgot to mention I calculated chance as `double chance = static_cast<double>(r) / static_cast<double>(RAND_MAX);`, or equivalently `double chance = (double)r/(double)(RAND_MAX);`
* Create a list of 50 numbers. * Fill them with a random number between 1 and 8 \* 60 \* 60. * Sort them And you have the 50 seconds. Note that you can have duplicates.
Both `r` and `RAND_MAX` are integers, so the expression ``` double chance = r / RAND_MAX; ``` is computed with integer arithmetic. Try: ``` double chance = 1.0 * r / RAND_MAX; ``` which will cause the division to be a floating point division. However, a better solution would be to use a random function that returns a floating point value in the first place. If you use an integer random number generator, you will get some bias errors in your probability calculations.
randomness algorithm
[ "", "c++", "random", "events", "" ]
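The accepted recipe (draw 50 random second-offsets in the 8-hour window and sort them) takes only a few lines; here is a sketch, in Python rather than the question's C++ purely for brevity. The seed is fixed only so the example is reproducible:

```python
import random

def schedule_events(n_events=50, duration_s=8 * 60 * 60, seed=0):
    """Pick n_events uniform second-offsets in [0, duration_s) and sort them."""
    rng = random.Random(seed)
    # Duplicates are possible, as the answer notes.
    return sorted(rng.randrange(duration_s) for _ in range(n_events))

times = schedule_events()
print(times[:5])
```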
Suppose I have a table with a numeric column (lets call it "score"). I'd like to generate a table of counts, that shows how many times scores appeared in each range. For example: ``` score range | number of occurrences ------------------------------------- 0-9 | 11 10-19 | 14 20-29 | 3 ... | ... ``` In this example there were 11 rows with scores in the range of 0 to 9, 14 rows with scores in the range of 10 to 19, and 3 rows with scores in the range 20-29. Is there an easy way to set this up? What do you recommend?
Neither of the highest voted answers are correct on SQL Server 2000. Perhaps they were using a different version. Here are the correct versions of both of them on SQL Server 2000. ``` select t.range as [score range], count(*) as [number of occurences] from ( select case when score between 0 and 9 then ' 0- 9' when score between 10 and 19 then '10-19' else '20-99' end as range from scores) t group by t.range ``` or ``` select t.range as [score range], count(*) as [number of occurrences] from ( select user_id, case when score >= 0 and score< 10 then '0-9' when score >= 10 and score< 20 then '10-19' else '20-99' end as range from scores) t group by t.range ```
An alternative approach would involve storing the ranges in a table, instead of embedding them in the query. You would end up with a table, call it Ranges, that looks like this: ``` LowerLimit UpperLimit Range 0 9 '0-9' 10 19 '10-19' 20 29 '20-29' 30 39 '30-39' ``` And a query that looks like this: ``` Select Range as [Score Range], Count(*) as [Number of Occurences] from Ranges r inner join Scores s on s.Score between r.LowerLimit and r.UpperLimit group by Range ``` This does mean setting up a table, but it would be easy to maintain when the desired ranges change. No code changes necessary!
In SQL, how can you "group by" in ranges?
[ "", "sql", "sql-server", "t-sql", "" ]
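The fixed-width CASE buckets above boil down to integer division by the bucket width. For illustration, the same grouping outside SQL (Python, with made-up scores):

```python
from collections import Counter

def bucket_counts(scores, width=10):
    """Count scores per [k*width, (k+1)*width - 1] bucket, like the CASE query."""
    counts = Counter(score // width for score in scores)
    return {f"{k * width}-{k * width + width - 1}": n for k, n in sorted(counts.items())}

print(bucket_counts([3, 7, 12, 15, 18, 25]))
```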
What is the best way to close a browser window of an AJAX ASP.NET application after the server-side code has been executed? I found this [solution](https://stackoverflow.com/questions/250450/aspnet-ajax-close-window-after-ajax-call), but it seems a little complex for what I want to accomplish. Or is this the best way to accomplish my task? **UPDATE:** I have to close the window after the button is pressed. **UPDATE 1:** I tried the solution from the other SO question, and it did not work for me. ``` <asp:Button ID="btnMyButton" runat="server" onClick="btnMyButton_Click" /> protected void btnMyButton_Click(object sender, EventArgs e) { } ``` --- I used the following code in my page, but the "The webpage you are viewing is trying to close the window" modal dialog pops up. ``` if (ScriptManager.GetCurrent(this).IsInAsyncPostBack) ScriptManager.RegisterStartupScript(upApproveRequest, typeof(string), "closeWindow", "window.close();", true); ``` Any way to prevent this?
No, there is no way to close a browser window without the user's consent. You can log them out of their application, but you can't forcibly close the browser window.
Actually you can do this by placing the following code in your button click event. ``` protected void btnMyButton_Click(object sender, ImageClickEventArgs e) { // Update database bool success = Presenter.DoDatabaseStuff(); if (success) { // Close window after success const string javaScript = "<script language=javascript>window.top.close();</script>"; if (!ClientScript.IsStartupScriptRegistered("CloseMyWindow")) { ClientScript.RegisterStartupScript(GetType(),"CloseMyWindow", javaScript); } } else { // Display failure result result_msg_area.Visible = true; lblError.Text = "An error occurred!"; } } ```
Close a browser window of an ASP.NET and AJAX application
[ "", "asp.net", "javascript", "asp.net-ajax", "" ]
I have a two-dimensional array (of Strings) which make up my data table (of rows and columns). I want to sort this array by any column. I tried to find an algorithm for doing this in C#, but have not been successful. Any help is appreciated.
Load your two-dimensional string array into an actual DataTable (System.Data.DataTable), and then use the DataTable object's Select() method to generate a sorted array of DataRow objects (or use a DataView for a similar effect). ``` // assumes stringdata[row, col] is your 2D string array DataTable dt = new DataTable(); // assumes first row contains column names: for (int col = 0; col < stringdata.GetLength(1); col++) { dt.Columns.Add(stringdata[0, col]); } // load data from string array to data table: for (int rowindex = 1; rowindex < stringdata.GetLength(0); rowindex++) { DataRow row = dt.NewRow(); for (int col = 0; col < stringdata.GetLength(1); col++) { row[col] = stringdata[rowindex, col]; } dt.Rows.Add(row); } // sort by third column: DataRow[] sortedrows = dt.Select("", "3"); // sort by column name, descending: sortedrows = dt.Select("", "COLUMN3 DESC"); ``` You could also write your own method to sort a two-dimensional array. Both approaches would be useful learning experiences, but the DataTable approach would get you started on learning a better way of handling tables of data in a C# application.
Can I check - do you mean a rectangular array (`[,]`)or a jagged array (`[][]`)? It is quite easy to sort a jagged array; I have a discussion on that [here](http://groups.google.co.uk/group/microsoft.public.dotnet.languages.csharp/browse_thread/thread/1e1a4bd58144bf73/364f22121b566a3c#23b3e4487e91cd02). Obviously in this case the `Comparison<T>` would involve a column instead of sorting by ordinal - but very similar. Sorting a rectangular array is trickier... I'd probably be tempted to copy the data out into either a rectangular array or a `List<T[]>`, and sort *there*, then copy back. Here's an example using a jagged array: ``` static void Main() { // could just as easily be string... int[][] data = new int[][] { new int[] {1,2,3}, new int[] {2,3,4}, new int[] {2,4,1} }; Sort<int>(data, 2); } private static void Sort<T>(T[][] data, int col) { Comparer<T> comparer = Comparer<T>.Default; Array.Sort<T[]>(data, (x,y) => comparer.Compare(x[col],y[col])); } ``` For working with a rectangular array... well, here is some code to swap between the two on the fly... ``` static T[][] ToJagged<T>(this T[,] array) { int height = array.GetLength(0), width = array.GetLength(1); T[][] jagged = new T[height][]; for (int i = 0; i < height; i++) { T[] row = new T[width]; for (int j = 0; j < width; j++) { row[j] = array[i, j]; } jagged[i] = row; } return jagged; } static T[,] ToRectangular<T>(this T[][] array) { int height = array.Length, width = array[0].Length; T[,] rect = new T[height, width]; for (int i = 0; i < height; i++) { T[] row = array[i]; for (int j = 0; j < width; j++) { rect[i, j] = row[j]; } } return rect; } // fill an existing rectangular array from a jagged array static void WriteRows<T>(this T[,] array, params T[][] rows) { for (int i = 0; i < rows.Length; i++) { T[] row = rows[i]; for (int j = 0; j < row.Length; j++) { array[i, j] = row[j]; } } } ```
How do I sort a two-dimensional (rectangular) array in C#?
[ "", "c#", "arrays", "sorting", "" ]
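Whatever the language, "sort a table by column N" reduces to "sort the rows keyed on element N". A minimal sketch of that idea (Python here, not C#; the C# versions are in the answers above):

```python
def sort_rows_by_column(rows, col, descending=False):
    """Sort a list of equal-length rows by the value in one column (stable sort)."""
    return sorted(rows, key=lambda row: row[col], reverse=descending)

data = [[1, 2, 3], [2, 3, 4], [2, 4, 1]]
print(sort_rows_by_column(data, 2))  # keyed on the third column
```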
I like generics a lot and use them wherever I can. Every now and then I need to use one of my classes in another project which has to run on an old JVM (before 5.0), needs to run on JavaME (where generics are not allowed either) or in Microsoft J# (which has VERY poor support for generics). At the moment, I remove all generics manually, which means inserting many casts as well. Since generics are said to be compile-time-only, and every piece of generic code could possibly be converted to non-generic code automatically, I wonder if there is any tool which can do this for me. If there is no such tool, how else could I solve the problem? Should I completely stop using generics? There already are answers related to *bytecode compatibility*. What if I need *source code compatibility* for some reason?
You need to use something like [Retroweaver](http://retroweaver.sourceforge.net/) in order to achieve this sort of thing. The other answers on this question are slightly misleading. Generics are sort-of bytecode compatible with previous versions, but not entirely (see `java.lang.reflect.Type` if you don't believe me). Also, there is an issue of the bytecode version attribute, which will prevent a class compiled against 1.5 from running on a previous version. Retroweaver works around both problems while also enabling other Java 5 features like annotations and enums.
In Netbeans (I'm not sure about what IDE you are using) you can set the source-code compatibility to a set java version - just set it to one that supports generics. As already posted, generics are bytecode compatable with old JVM / JRE versions and so it should hopefully work out of the box.
How to use generics in a world of mixed Java versions?
[ "", "java", "generics", "jvm", "portability", "" ]
I'm working in a windows environment (my laptop!) and I need a couple of scripts that run other programs, pretty much like a windows batch file.

how can I run a command from python such that the program when run, will replace the script? The program is interactive (for instance, unison) and keeps printing lines and asking for user input all the time.

So, just running a program and printing the output won't suffice. The program has to take over the script's input/output, pretty much like running the command from a .bat file.

I tried os.execl but it keeps telling me "invalid arguments", also, it doesn't find the program name (doesn't search the PATH variable); I have to give it the full path ..?!

basically, in a batch script I can write:

unison profile

how can I achieve the same effect in python?

EDIT: I found out it can be done with `os.system( ... )` and since I cannot accept my own answer, I'm closing the question.

---

EDIT: this was supposed to be a comment, but when I posted it I didn't have much points.

Thanks Claudiu, that's pretty much what I want, except for a little thing: I want the function to end when the program exits, but when I try it on unison, it doesn't return control to the python script, but to the windows command line environment

```
>>> os.execlp("unison")

C:\>Usage: unison [options]
    or unison root1 root2 [options]
    or unison profilename [options]

For a list of options, type "unison -help".
For a tutorial on basic usage, type "unison -doc tutorial".
For other documentation, type "unison -doc topics".

C:\>
C:\>
C:\>
```

how to get around this?
I found out that os.system does what I want. Thanks to all who tried to help.

```
os.system("dir")
```

runs the command just as if it was run from a batch file
You should create a new process using the [subprocess module](http://www.python.org/doc/2.5.2/lib/module-subprocess.html). I'm not fluent in windows processes but its Popen function is cross-platform, and should be preferred to OS-specific solutions.

EDIT: I maintain that you should prefer the subprocess module to the OS-specific `os.*` functions; it is cross-platform and more pythonic (just google it). You can wait for the result easily, and *cleanly*:

```
import os
import subprocess

unison = os.path.join(os.path.curdir, "unison")
p = subprocess.Popen(unison)
p.wait()
```
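For readers on a modern interpreter: the capture-versus-inherit distinction the answers above touch on can be sketched with the newer `subprocess.run` API (Python 3.7+, so not available on the 2.5-era interpreter the question targets); the child command here is just an illustrative stand-in:

```python
import subprocess
import sys

# Capture the child's output instead of letting it inherit the console
# (inheriting the console is what os.system does, and is what the asker
# wants for interactive programs like unison).
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    capture_output=True,
    text=True,
)
print(result.returncode)      # 0 on success
print(result.stdout.strip())  # hello from child
```

When the child should take over the console as the asker describes, leave `capture_output` off so it inherits the parent's stdin/stdout; `run()` still blocks until the child exits.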
python as a "batch" script (i.e. run commands from python)
[ "", "python", "scripting", "batch-file", "" ]
I'm trying to complete a practice question from a book on generics but the question doesn't make sense to me. Here it goes.

Create two classes with identical functionality. Use generics for the first class, and cast the second class to Object types. Create a for loop that uses the class and the Object based class to determine which performs better.

I'm not sure what it means by casting to Object types. Here is my code so far

```
// Generic
class Person<T>
{
    T var1;

    public Person(T yer)
    {
        var1 = yer;
    }

    public T Value
    {
        get { return var1; }
    }
}

// Normal class
class Human
{
    int var1;

    public Human(int yer)
    {
        var1 = yer;
    }

    public int Value
    {
        get { return var1; }
    }
}
```

My main program running the loops

```
for (int i = 0; i < 1000000; i++)
{
    Person<int> me = new Person<int>(1);
    int hey = me.Value;
}

for (int i = 0; i < 1000000; i++)
{
    Human per = new Human(1);
    object her = (object)per.Value;
}
```

I don't know if I'm doing this right. Help please :-)
I think that the question is asking you to create a collection class, and insert instances of your class into that. E.g.,

Generics version:

```
List<Human> myList = new List<Human>();
Human h = new Human();
myList.Add(h);
```

Object version:

```
ArrayList myObjectList = new ArrayList();
Human h = new Human();
myObjectList.Add((object)h);
```

I haven't checked whether that compiles, and have to run now.
I think the question is for looping over a collection of your classes.

**Generic**

```
List<Person<int>> pList = new List<Person<int>>();
for (int i = 0; i < 1000; ++i)
    pList.Add(new Person<int>(30));

Stopwatch sw = new Stopwatch();
sw.Start();
int sum = 0;
foreach (Person<int> p in pList)
    sum += p.Value;
sw.Stop();
```

**Object**

```
ArrayList hList = new ArrayList();
for (int i = 0; i < 1000; ++i)
    hList.Add(new Human(30));

Stopwatch sw = new Stopwatch();
sw.Start();
int sum = 0;
foreach (object h in hList)
    sum += ((Human)h).Value;
sw.Stop();
```
C# generics question
[ "", "c#", "generics", "" ]
I'm writing a service that has five different methods that can take between 5 seconds and 5 minutes to run. The service will schedule these different methods to run at different intervals. I don't want any of the methods to run concurrently, so how do I have the methods check to see if another method is running and queue itself to run when it finishes? Anthony
If you want *simple*, and all the methods are in the same class, you can just use `[MethodImpl]`:

```
[MethodImpl(MethodImplOptions.Synchronized)]
public void Foo() {...}

[MethodImpl(MethodImplOptions.Synchronized)]
public void Bar() {...}
```

For instance methods, this locks on `this`; for static methods, this locks on `typeof(TheClass)`. As such, these lock objects are public - so there is a remote (but genuine) chance that another bit of code might be locking on them. It is generally considered better practice to create your own lock object:

```
private readonly object syncLock = new object(); // or static if needed
...
public void Foo()
{
    lock (syncLock)
    {
        ...
    }
}
```

etc

---

Aside: a curious fact; the ECMA spec doesn't define a specific pattern for [MethodImpl], even including an example of a private lock, as "valid". The MS spec, however, insists on this/typeof.
There's the [MethodImplOptions.Synchronized attribute](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.methodimploptions.aspx), as noted in the article [Synchronized method access in C#](http://bartdesmet.net/blogs/bart/archive/2006/10/02/4490.aspx), but that can lead to deadlocks as noted at MSDN. It sounds like, for your usage, this won't be a big concern.

Otherwise, the simplest approach would be to use the [lock statement](http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx) to make sure that only one method is executing at a time:

```
class ServiceClass
{
    private object thisLock = new object();

    public void Method1()
    {
        lock (thisLock)
        {
            ...
        }
    }

    public void Method2()
    {
        lock (thisLock)
        {
            ...
        }
    }

    ...
}
```
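The private-lock pattern in both answers is language-neutral; here is a minimal Python sketch of the same shape - one lock shared by every method of the service, so no two methods run concurrently (the class and method names are invented for illustration):

```python
import threading

# One private lock shared by all methods, mirroring the C# `lock(thisLock)`
# pattern above: while one method holds the lock, the others wait.
class Service:
    def __init__(self):
        self._lock = threading.Lock()
        self.calls = []

    def method_a(self):
        with self._lock:
            self.calls.append("a")

    def method_b(self):
        with self._lock:
            self.calls.append("b")

svc = Service()
threads = [threading.Thread(target=f) for f in (svc.method_a, svc.method_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(svc.calls))  # ['a', 'b']
```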
Forcing threads in a service to wait for another thread to finish
[ "", "c#", ".net", "multithreading", "" ]
I have a two way foreign relation similar to the following

```
class Parent(models.Model):
    name = models.CharField(max_length=255)
    favoritechild = models.ForeignKey("Child", blank=True, null=True)

class Child(models.Model):
    name = models.CharField(max_length=255)
    myparent = models.ForeignKey(Parent)
```

How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried

```
class Parent(models.Model):
    name = models.CharField(max_length=255)
    favoritechild = models.ForeignKey("Child", blank=True, null=True,
                                      limit_choices_to={"myparent": "self"})
```

but that causes the admin interface to not list any children.
I just came across [ForeignKey.limit\_choices\_to](http://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.ForeignKey.limit_choices_to) in the Django docs. Not sure yet how it works, but it might be the right thing here.

**Update:** `ForeignKey.limit_choices_to` allows one to specify either a constant, a callable or a Q object to restrict the allowable choices for the key. A constant obviously is of no use here, since it knows nothing about the objects involved.

Using a callable (function or class method or any callable object) seems more promising. However, the problem of how to access the necessary information from the HttpRequest object remains. Using [thread local storage](https://stackoverflow.com/questions/160009/django-model-limitchoicestouser-user) may be a solution.

**2. Update:** Here is what has worked for me: I created a middleware as described in the link above. It extracts one or more arguments from the request's GET part, such as "product=1", and stores this information in the thread locals. Next there is a class method in the model that reads the thread local variable and returns a list of ids to limit the choice of a foreign key field.

```
@classmethod
def _product_list(cls):
    """
    Return a list containing the one product_id contained in the request
    URL, or a query containing all valid product_ids if no id is present
    in the URL.

    Used to limit the choice of foreign key objects to those related to
    the current product.
    """
    id = threadlocals.get_current_product()
    if id is not None:
        return [id]
    else:
        return Product.objects.all().values('pk').query
```

It is important to return a query containing all possible ids if none was selected so that the normal admin pages work ok.

The foreign key field is then declared as:

```
product = models.ForeignKey(
    Product,
    limit_choices_to={
        'id__in': BaseModel._product_list,
    },
)
```

The catch is that you have to provide the information to restrict the choices via the request. I don't see a way to access "self" here.
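The `threadlocals` module referenced above is project-specific, but the underlying mechanism is Python's `threading.local`; a standalone sketch of the same idea, with hypothetical function names mirroring the middleware:

```python
import threading

# Hypothetical stand-in for the "threadlocals" middleware module above:
# each thread sees its own value of current_product.
_locals = threading.local()

def set_current_product(product_id):
    _locals.current_product = product_id

def get_current_product():
    return getattr(_locals, "current_product", None)

# The "request" thread sets the value...
set_current_product(42)

# ...and another thread does not see it.
results = {}

def worker():
    results["other"] = get_current_product()

t = threading.Thread(target=worker)
t.start()
t.join()

print(get_current_product())  # 42
print(results["other"])       # None
```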
The 'right' way to do it is to use a custom form. From there, you can access self.instance, which is the current object.

Example --

```
from django import forms
from django.contrib import admin

from models import *

class SupplierAdminForm(forms.ModelForm):
    class Meta:
        model = Supplier
        fields = "__all__"  # for Django 1.8+

    def __init__(self, *args, **kwargs):
        super(SupplierAdminForm, self).__init__(*args, **kwargs)
        if self.instance:
            self.fields['cat'].queryset = Cat.objects.filter(supplier=self.instance)

class SupplierAdmin(admin.ModelAdmin):
    form = SupplierAdminForm
```
How do I restrict foreign keys choices to related objects only in django
[ "", "python", "django", "django-models", "" ]
I wonder if there is a way to set the value of a #define at run time.

Assume that there is an Oracle-specific query and a SQL Server-specific query in the code below.

```
#define oracle

// ...

#if oracle
// some code
#else
// some different code.
#endif
```
Absolutely not, #defines are compiled out by the preprocessor before the compiler even sees them - so the token 'oracle' isn't even in your code, just '1' or '0'.

Change the #define to a global variable or (better) a function that returns the correct value.
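The answer's advice - a runtime lookup in place of the compile-time switch - can be sketched in language-neutral terms (Python used for brevity; the provider names and queries are invented):

```python
# Hypothetical sketch: the provider is chosen at run time from
# configuration, so no preprocessor switch is needed.
QUERIES = {
    "oracle":    "SELECT * FROM dual",
    "sqlserver": "SELECT GETDATE()",
}

def get_query(provider):
    try:
        return QUERIES[provider]
    except KeyError:
        raise ValueError("unknown provider: %s" % provider)

print(get_query("oracle"))  # SELECT * FROM dual
```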
`#if` is compile-time. You could specify this in your build process (via switches to msbuild/csc), but not really at runtime. The excluded code *doesn't exist*.

You might be better advised to (1 of):

* Have separate DAL stacks for each back-end, using Dependency Injection / IoC
* Use an ORM tool that supports either
* Branch the code based on the provider (in a single DAL)
Is there a way to set the value of #define on runtime?
[ "", "c#", "c-preprocessor", "" ]
In LinqToSql, it is lovely easy to load a row, change a column, and submit the changes to the database:

```
using (MyDataContext wdc = new MyDataContext())
{
    Article article = wdc.Article.First(p => p.ID == id);
    article.ItemsInStock = itemsinstock;
    wdc.SubmitChanges();
}
```

The only drawback: Article is huge. **To load the entire article, just to update one column is way overkill** and slows down my app significantly.

Is there a way to update a single column using LINQ, without having to load the entire row? Right now I revert to using ExecuteCommand where speed is of essence, but this is ugly and error prone:

```
wdc.ExecuteCommand("UPDATE Article SET ItemsInStock = @1 WHERE ID = @2", itemsinstock, id);
```
ligget78 gave me another idea of how to update a single column: create a new DataContext just for this kind of update, and only include the needed columns in this DataContext. This way the unneeded columns will not even be loaded, and of course not sent back to the database.
You need to set UpdateCheck to Never on all properties of the Article class except the primary key (click on the class property in the LINQ2SQL designer and switch to the Properties tool window). Not sure about WhenChanged - maybe that works too; go ahead and experiment with it! This will force LINQ2SQL to use

```
UPDATE ... SET ... WHERE ID = @2
```

instead of the long version with all columns in the WHERE-clause:

```
UPDATE ... SET ... WHERE ID = @2 AND ItemsInStock = @1 AND SomeOtherColumn = @3 AND...
```

Now you can use code like

```
context.Articles.Attach(
    article /* article with updated values */,
    new Article { ID = articleID, ItemsInStock = -1 } /* pretend that this is the original article */);
context.SubmitChanges();
```

Basically you indicate that only the ItemsInStock property has changed - other props should have the same default value, articleID of course being the same.

NOTE: you don't need to fetch the article prior to that.
How to update a single column in LINQ without loading the entire row?
[ "", "c#", "linq", "performance", "linq-to-sql", "" ]
I'm wanting to write a method that I can use to initialise a Map. First cut:

```
Map map(Object... o) {
    for (int i = 0; i < o.length; i += 2) {
        result.put(o[i], o[i + 1]);
    }
}
```

Simple, but not type-safe. Using generics, maybe something like:

```
<TKey, TValue> HashMap<TKey, TValue> map(TKey ... keys, TValue ... values)
```

but that syntax isn't supported. So eventually I come to this:

```
public static <TKey, TValue, TMap extends Map<? super TKey, ? super TValue>> TMap map(
        TMap map, Pair<? extends TKey, ? extends TValue>... pairs)
{
    for (Pair<? extends TKey, ? extends TValue> pair : pairs)
    {
        map.put(pair.getKey(), pair.getValue());
    }
    return map;
}

public static <TKey, TValue> HashMap<? super TKey, ? super TValue> map(
        Pair<? extends TKey, ? extends TValue>... pairs)
{
    return map(new HashMap<TKey, TValue>(), pairs);
}

public static <TKey, TValue> Pair<TKey, TValue> pair(TKey key, TValue value)
{
    return new Pair<TKey, TValue>(key, value);
}

public static final class Pair<TKey, TValue>
{
    private final TKey key;
    private final TValue value;

    Pair(TKey key, TValue value) { this.key = key; this.value = value; }

    public TKey getKey() { return key; }
    public TValue getValue() { return value; }
}
```

But when I try it out, I need to cast it:

```
private static final Map<? extends Class<? extends Serializable>, ? super TypeHandler<? extends Serializable>> validCodeTypes =
    /* (Map<? extends Class<? extends Serializable>, ? super TypeHandler<? extends Serializable>>) */
    map(
        pair(Integer.class, new IntHandler()),
        pair(Integer.TYPE, new IntHandler()),
        pair(Character.class, new CharHandler()),
        pair(Character.TYPE, new CharHandler()),
        pair(String.class, new StringHandler())
    );

private interface TypeHandler<TType extends Serializable> {}

private static class CharHandler implements TypeHandler<Character> {}
private static class IntHandler implements TypeHandler<Integer> {}
private static class StringHandler implements TypeHandler<String> {}
```

Can anyone tell me how to code my map() methods so that it is entirely general yet doesn't need to be cast?
To make life easier for yourself, never use a return type that contains wildcards. Wildcard types, in general, are for method parameters only. So, try this:

```
public static <TKey, TValue, TMap extends Map<TKey, TValue>> TMap map(
        TMap map, Pair<? extends TKey, ? extends TValue>... pairs)
{
    for (Pair<? extends TKey, ? extends TValue> pair : pairs)
    {
        map.put(pair.getKey(), pair.getValue());
    }
    return map;
}

public static <TKey, TValue> HashMap<TKey, TValue> map(
        Pair<? extends TKey, ? extends TValue>... pairs)
{
    return map(new HashMap<TKey, TValue>(), pairs);
}
```

I haven't tested it, but give it a go and see how you fare.

P.S., rather than using a made-up `Pair` type, you may find it easier to use `Map.Entry`.
Why not this? Did I misunderstand something?

```
import java.util.HashMap;
import java.util.Map;

public class ToHash {

    public static <K, V> Map<K, V> toHash(Object... objects) {
        Map<K, V> map = new HashMap<K, V>(objects.length / 2);
        if (objects.length % 2 != 0) {
            throw new IllegalArgumentException("Odd number of elements: " + objects.length);
        }
        for (int i = 0; i < objects.length; i += 2) {
            map.put((K) objects[i], (V) objects[i + 1]);
        }
        return map;
    }
}
```
Type-safe varargs method that initialises a Map
[ "", "java", "generics", "variadic-functions", "" ]
I am wondering how you would approach this problem.

I have two Taxrates that can apply to my products. I specifically want to avoid persisting the Taxrates into the database while still being able to change them in a central place (like changing a Taxrate from 20% to 19% etc), so I decided it would be great to have them just compiled into my application (it's internal). The problem is that I want to know not only the Rate but also the Name of the Taxrate.

I could go with an Enum that maps to the value. But then I'd have to create some method that retrieves the German Name of that Taxrate for the English enum-value (I write my code in english, the App is in german).

I thought about just using hardcoded objects to reflect this,

```
public interface Taxrate
{
    string Name { get; }
    decimal Rate { get; }
}

public class NormalTaxRate : Taxrate
{
    public string Name
    { get { return "Regelsteuersatz"; } }

    public decimal Rate
    { get { return 20m; } }
}
```

But then I'd have to create some sort of list that holds two instances of those two objects. Doing it static may work, but still I'd have to keep some sort of list. Also I'd have to find a way to map my POCO domain object to this, because I doubt NHibernate can instantiate the right object depending on a value in a field.

It doesn't really feel right, and I think I'm missing something here. Hope somebody has a better solution, I can't think of one.

greetings, Daniel

*Ps: also please retag this question if you find something fitting, I can't think of more meaningful tags right now.*
EDIT: Note that the code here could easily be abbreviated by having a private constructor taking the tax rate and the name. I'm assuming that in real life there might be actual behavioral differences between the tax rates.

It sounds like you want something like Java's enums. C# makes that fairly tricky, but you can do it to some extent using private constructors and nested classes:

```
public abstract class TaxRate
{
    public static readonly TaxRate Normal = new NormalTaxRate();
    public static readonly TaxRate Whatever = new OtherTaxRate();

    // Only allow nested classes to derive from this - and we trust those!
    private TaxRate() {}

    public abstract string Name { get; }
    public abstract decimal Rate { get; }

    private class NormalTaxRate : TaxRate
    {
        public override string Name { get { return "Regelsteuersatz"; } }
        public override decimal Rate { get { return 20m; } }
    }

    private class OtherTaxRate : TaxRate
    {
        public override string Name { get { return "Something else"; } }
        public override decimal Rate { get { return 120m; } }
    }
}
```

You'd probably want some sort of static method in TaxRate to return the right instance based on name or whatever. I don't know how easily this fits in with NHibernate, but hopefully it will help to some extent...

As noted in the comments, it's pretty ugly - or at least can get pretty ugly when you've got lots of different values. Partial classes can help here:

```
// TaxRate.cs
public abstract partial class TaxRate
{
    // All the stuff apart from the nested classes
}

// TaxRate.Normal.cs
public abstract partial class TaxRate
{
    private class NormalTaxRate : TaxRate
    {
        public override string Name { get { return "Regelsteuersatz"; } }
        public override decimal Rate { get { return 20m; } }
    }
}

// TaxRate.Other.cs
public abstract partial class TaxRate
{
    private class OtherTaxRate : TaxRate
    {
        public override string Name { get { return "Something else"; } }
        public override decimal Rate { get { return 120m; } }
    }
}
```

You can then munge the project file to show the nested classes as children of the outer class, as shown in [this SO question](https://stackoverflow.com/questions/223249/).
I'd do it like this:

```
public class TaxRate
{
    public readonly string Name;
    public readonly decimal Rate;

    private TaxRate(string name, decimal rate)
    {
        this.Name = name;
        this.Rate = rate;
    }

    public static readonly TaxRate NormalRate = new TaxRate("Normal rate", 20);
    public static readonly TaxRate HighRate = new TaxRate("High rate", 80);
}
```

This way it would be easy to use it - just access `TaxRate` static members like enum values.

To use it with NHibernate you will have to create your own custom NHibernate type class (see the documentation for it), but it's not so hard. I've done it once already.
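For comparison, the same "enum with payload" idea in Python, where `Enum` members can carry extra fields - each member is defined once in a central place with both its display name and rate (the second, reduced rate is invented for illustration):

```python
from decimal import Decimal
from enum import Enum

# Each member's tuple value is unpacked into __init__, so every member
# carries a label and a rate alongside its enum identity.
class TaxRate(Enum):
    NORMAL = ("Regelsteuersatz", Decimal("20"))
    REDUCED = ("Ermaessigter Steuersatz", Decimal("10"))  # hypothetical

    def __init__(self, label, rate):
        self.label = label
        self.rate = rate

print(TaxRate.NORMAL.label)  # Regelsteuersatz
print(TaxRate.NORMAL.rate)   # 20
```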
Enum struct? A Value object that behaves like a Enum
[ "", "c#", "enums", "" ]
I am reading in numbers from a file and then trying to add them to an array. My problem is how do you increase the size of the array? For example I thought I might be able to just do:

```
#include <iostream>
using namespace std;

int main()
{
    double *x;
    x = new double[1];
    x[0] = 5;
    x = new double[1];
    x[1] = 6;
    cout << x[0] << "," << x[1] << endl;
    return 0;
}
```

But this obviously just overwrites the value, 5, that I initially set to x[0] and so outputs 0,6. How would I make it so that it would output 5,6?

Please realize that for the example I've included I didn't want to clutter it up with the code reading from a file or code to get numbers from a user. In the actual application I won't know how big of an array I need at compile time so please don't tell me to just make an array with two elements and set them equal to 5 and 6 respectively.
You don't want to work with arrays directly. Consider using a `vector`, instead. Then, you can call the `push_back` function to add things to the end, and it will automatically resize the vector for you.

```
#include <iostream>
#include <vector>

int main()
{
    double value;
    std::vector<double> values;

    // Read in values
    while (std::cin >> value) {
        values.push_back(value);
    }

    // Print them back out
    for (std::size_t i(0), len(values.size()); i != len; ++i) {
        std::cout << values[i];
    }
}
```
You should use a collection class to do this for you rather than managing it yourself. Have a look at the "vector" class. It's essentially a dynamic array that resizes automatically as required. In your situation you would use "vector" with the "double" type. You may also need to read up on templates in C++. <http://www.cplusplus.com/reference/stl/vector/>
Dynamic Arrays - How to increase the size of an array?
[ "", "c++", "arrays", "" ]
Is there an equivalent to the Java File method **isDirectory()** in MFC? I tried using this:

```
static bool isDirectory(CString &path)
{
    return GetFileAttributes(path) & FILE_ATTRIBUTE_DIRECTORY;
}
```

but it doesn't seem to work.
Sorry for the possible "inconsistency" of this answer with the question, but maybe you'll find it useful: any time I need something like this on Windows I do NOT use MFC but the regular Windows API:

```
// not completely tested, but after some debugging I'm sure it'll work
bool IsDirectory(LPCTSTR sDirName)
{
    // first define the special structure defined in Windows
    WIN32_FIND_DATA findFileData;
    ZeroMemory(&findFileData, sizeof(WIN32_FIND_DATA));

    // after that, call the WinAPI function that finds the file\directory
    // (don't forget to close the handle afterwards!)
    HANDLE hf = ::FindFirstFile(sDirName, &findFileData);
    if (hf == INVALID_HANDLE_VALUE) // also a predefined value - 0xFFFFFFFF
        return false;

    // closing handle!
    ::FindClose(hf);

    // true if the directory flag is on
    return (findFileData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0;
}
```
# CFileFind::IsDirectory()

<http://msdn.microsoft.com/en-us/library/scx99850(VS.80).aspx>

EDIT:

```
#include <afxwin.h>
#include <iostream>

using namespace std;

CFileFind finder;
CString fileName;
fileName += _T("c:\\aDirName");

if (finder.FindFile(fileName))
{
    if (finder.FindNextFile())
    {
        if (finder.IsDirectory())
        {
            // Do directory stuff...
        }
    }
}
```

If you change fileName to have wildcards, you can do a

```
while (finder.FindNextFile()) {...
```

to get all matching files.
MFC Equivalent to Java File#isDirectory()
[ "", "c++", "mfc", "path", "directory", "" ]
Is it possible to use "paging" functionality in Linq queries? Let's say I have some XML like this:

```
<Root>
    <BetaSection>
        <Choices>
            <SetA>
                <Choice id="choice1">Choice One</Choice>
                <Choice id="choice2">Choice Two</Choice>
                <Choice id="choice3">Choice Three</Choice>
                . . .
                <Choice id="choice48">Choice Forty-Eight</Choice>
                <Choice id="choice49">Choice Forty-Nine</Choice>
                <Choice id="choice50">Choice Fifty</Choice>
            </SetA>
        </Choices>
    </BetaSection>
</Root>
```

If I wanted to implement paging functionality, would I be able to provide an offset to a LINQ query such that I could start at the 11th element and end on the 20th element? If so, would the query be any different if the data was a list of objects instead of XML?
```
var q = from X in Choices.Skip((page-1)*pageSize).Take(pageSize)
        select X;
```

Now, if you need a where clause in it, it gets a bit trickier:

```
var q = (from X in Choices
         where X.SomeField == SomeValue
         select X).Skip((page-1)*pageSize).Take(pageSize);
```
Absolutely - `Skip()` and `Take()` achieve paging, and are supported by pretty-much all LINQ providers. In this case it looks like you are using LINQ-to-XML, so feel free to ignore the following bit - but for general info: note that if the data is coming from a database via a stored procedure, it is hard to page at the server. You can, however, compose (i.e. page) "UDF"s. LINQ-to-SQL supports UDFs (via `[FunctionAttribute]`), but not Entity Framework. If you are using auto-generated database queries this isn't an issue.

Note that with xml, you could also do a lot with xpath - here using `XmlDocument`:

```
foreach (XmlElement el in doc.SelectNodes(
    "/Root/BetaSection/Choices/SetA/Choice[position() >= 11 and position() <= 20]"))
{
    Console.WriteLine(el.GetAttribute("id"));
}
```

or with `Skip()`/`Take()` (still with `XmlDocument`):

```
foreach (var el in doc.SelectNodes(
    "/Root/BetaSection/Choices/SetA/Choice").Cast<XmlElement>()
    .Skip(10).Take(10))
{
    Console.WriteLine(el.GetAttribute("id"));
}
```
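Language aside, `Skip((page-1)*pageSize).Take(pageSize)` is just offset arithmetic; a tiny Python sketch of the same 1-based paging over the fifty choices:

```python
def page(items, page_number, page_size):
    # 1-based page numbering: page 2 of size 10 is items 11..20,
    # the same arithmetic as Skip((page-1)*size).Take(size).
    start = (page_number - 1) * page_size
    return items[start:start + page_size]

choices = ["choice%d" % i for i in range(1, 51)]
print(page(choices, 2, 10)[0])   # choice11
print(page(choices, 2, 10)[-1])  # choice20
```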
Newbie LINQ Question: Is Paging in LINQ Queries Possible?
[ "", "c#", "linq-to-xml", "" ]
I'm hoping that it isn't to pass each as a parameter to the controller post method.

I have a grid of 52 text boxes representing weeks in a year, each containing a decimal value, and I need ALL of these passed back to the controller for mass updating of the database.

edit: Adding more details.

My action Index() returns a view with a list of 52 objects in it, so

return View(dates);

Inside the view I do a foreach for all the objects of that type in the model, and put their values into textboxes. I then want to change them and hit submit.

**Is there any way to get the updated list into my POST controller without having to give it 52 parameters with the ids of the textboxes?**

Thanks a lot in advance!

edit: Maybe there is a way I can post the form to the controller and iterate through the textbox controls and get out id/value, any idea how?
I would suggestion using a Model Binder: <http://www.singingeels.com/Articles/Model_Binders_in_ASPNET_MVC.aspx> <http://www.singingeels.com/Articles/Model_Binders_in_ASPNET_MVC__Part_2.aspx>
Check out this question. It should provide you with a better answer than using the FormCollection. [Complex model binding to a list](https://stackoverflow.com/questions/231878/complex-model-binding-to-a-list)
MVC ASP.NET Best way to get 52 textbox key/values from View to Controller
[ "", "c#", "asp.net-mvc", "" ]
Unit testing with C/C++: What do you teach people who either did not do unit testing before or come from Java/Junit? What is the single most important lesson / thing to remember/ practice from your point of view that saves a lot of time or stress (especially regarding C/C++)?
1. Unit tests have to run automatically on every checkin (or, unit tests that are written then forgotten are not unit tests).
2. Before fixing a bug, write a unit test to expose it (it should fail). Then fix the bug and rejoice as the test turns green.
3. It's OK to sacrifice a bit of "beauty" of a class for easier testing (like providing public methods that should not really be public, but help your testing/mocking).
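Point 2 in miniature, sketched in Python rather than C/C++ for brevity: a regression test written to pin down a (hypothetical) even-length bug in a `median` function, kept green after the fix:

```python
# Suppose median() used to mishandle even-length input; the test below
# was written first to expose the bug, and now guards the fixed code.
# (Both the function and the bug are invented for illustration.)
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2.0

def test_median_even_length():
    # This assertion failed before the fix.
    assert median([1, 2, 3, 4]) == 2.5

test_median_even_length()
print("ok")
```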
Read this... you will anyway.. ![alt text](https://i.stack.imgur.com/iaN6v.jpg)
Unit testing with C/C++: Lessons, what to remember?
[ "", "c++", "unit-testing", "tdd", "" ]
I'm working with a database schema that is running into scalability issues. One of the tables in the schema has grown to around 10 million rows, and I am exploring sharding and partitioning options to allow this schema to scale to much larger datasets (say, 1 billion to 100 billion rows). Our application must also be deployable onto several database products, including but not limited to Oracle, MS SQL Server, and MySQL. This is a large problem in general, and I'd like to read up on what options are available. What resources are out there (books, whitepapers, web sites) for database sharding and partitioning strategies?
I agree with the other answers that you should look at your schema and indexes before resorting to sharding. 10 million rows is well within the capabilities of any of the major database engines. However if you want some resources for learning about the subject of sharding then try these:

* [Scalability Best Practices: Lessons from eBay](http://www.infoq.com/articles/ebay-scalability-best-practices)
* [Randy Shoup on eBay's Architectural Principles - Video and Presentation](http://www.infoq.com/presentations/shoup-ebay-architectural-principles)
* [High Scalability Site](http://highscalability.com/)
* [Mr. Moore gets to punt on sharding (when not to do it)](http://www.37signals.com/svn/posts/1509-mr-moore-gets-to-punt-on-sharding)
I agree with Mike Woodhouse's observation that the current size should not be an issue - and the questioner agrees.

Most of the commercial DBMS provide support for fragmented tables in some form or another, under one name or several others. One of the key questions is whether there is a sensible way of splitting the data into fragments. One common way is to do so based on a date, so all the values for, say, November 2008 go in one fragment, those for October 2008 into another, and so on. This has advantages when it comes time to remove old data. You can probably drop the fragment containing data from October 2001 (seven years' data retention) without affecting the other fragments.

This sort of fragmentation can also help with 'fragment elimination'; if the query clearly cannot need to read the data from a given fragment, then it will be left unread, which can give you a magnificent performance benefit. (For example, if the optimizer knows that the query is for a date in October 2008, it will ignore all fragments except the one that contains the data from October 2008.)

There are other fragmentation techniques - round robin distributes the load across multiple disks, but means you cannot benefit from fragment elimination.
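A toy sketch of the date-based fragmentation and "fragment elimination" described above (Python, invented data): rows are routed to a per-month fragment, and a query scoped to one month reads only that fragment:

```python
import datetime

# Route each row to a fragment named for its month.
def fragment_for(day):
    return day.strftime("%Y-%m")

rows = [
    (datetime.date(2008, 10, 5), "a"),
    (datetime.date(2008, 10, 21), "b"),
    (datetime.date(2008, 11, 2), "c"),
]

fragments = {}
for day, payload in rows:
    fragments.setdefault(fragment_for(day), []).append(payload)

# "Fragment elimination": a query for October 2008 touches only the
# 2008-10 fragment and leaves the others unread.
print(fragments["2008-10"])  # ['a', 'b']
```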
Resources for Database Sharding and Partitioning
[ "", "sql", "database", "scalability", "sharding", "database-cluster", "" ]
I am trying to process files one at a time that are stored over a network. Reading the files is fast due to buffering, so that is not the issue. The problem I have is just listing the directories in a folder. I have at least 10k files per folder over many folders.

Performance is super slow since File.list() returns an array instead of an iterable. Java goes off and collects all the names in a folder and packs it into an array before returning.

The bug entry for this is <https://bugs.java.com/bugdatabase/view_bug;jsessionid=db7fcf25bcce13541c4289edeb4?bug_id=4285834> and doesn't have a workaround. They just say this has been fixed for JDK7.

A few questions:

1. Does anybody have a workaround to this performance bottleneck?
2. Am I trying to achieve the impossible? Is performance still going to be poor even if it just iterates over the directories?
3. Could I use the beta JDK7 builds that have this functionality without having to build my entire project on it?
Although it's not pretty, I solved this kind of problem once by piping the output of dir/ls to a file before starting my app, and passing in the filename. If you needed to do it within the app, you could just use system.exec(), but it would create some nastiness.

You asked. The first form is going to be blazingly fast, the second should be pretty fast as well. Be sure to do the one item per line (bare, no decoration, no graphics), full path and recurse options of your selected command.

EDIT:

30 minutes just to get a directory listing, wow. It just struck me that if you use exec(), you can get its stdout redirected into a pipe instead of writing it to a file. If you did that, you should start getting the files immediately and be able to begin processing before the command has completed.

The interaction may actually slow things down, but maybe not--you might give it a try.

Wow, I just went to find the syntax of the .exec command for you and came across this, possibly exactly what you want (it lists a directory using exec and "ls" and pipes the result into your program for processing): [good link in wayback](http://web.archive.org/web/20080912133319/http://java.sun.com/developer/JDCTechTips/2003/tt0304.html) (Jörg provided in a comment to replace [this one](http://java.sun.com/developer/JDCTechTips/2003/tt0304.html "this broken link here") from sun that Oracle broke)

Anyway, the idea is straightforward but getting the code right is annoying. I'll go steal some codes from the internets and hack them up--brb

```
/**
 * Note: Only use this as a last resort! It's specific to windows and even
 * at that it's not a good solution, but it should be fast.
 *
 * To use it, extend FileProcessor and call processFiles("...") with a list
 * of options if you want them like /s... I highly recommend /b
 *
 * Override processFile and it will be called once for each line of output.
 */
import java.io.*;

public abstract class FileProcessor {

   public void processFiles(String dirOptions) {
      Process theProcess = null;
      BufferedReader inStream = null;

      // start "dir" via the command shell
      try {
         theProcess = Runtime.getRuntime().exec("cmd /c dir " + dirOptions);
      } catch (IOException e) {
         System.err.println("Error on exec() method");
         e.printStackTrace();
      }

      // read from the called program's standard output stream,
      // one line (one file name) at a time
      try {
         inStream = new BufferedReader(
               new InputStreamReader(theProcess.getInputStream()));
         String line;
         while ((line = inStream.readLine()) != null) {
            processFile(line);
         }
      } catch (IOException e) {
         System.err.println("Error on inStream.readLine()");
         e.printStackTrace();
      }
   } // end method

   /** Override this method--it will be called once for each file */
   public abstract void processFile(String filename);
} // end class
```

And thank you code donor at [IBM](http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=/rzaha/jvlngex1.htm "IBM")
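The pipe idea above is not specific to Java. As a hedged side sketch, here is the same pattern in Python: the listing command's stdout is consumed one line at a time as it arrives, so processing can begin before the command finishes. To keep the sketch portable and testable, the child process here is just Python itself printing three fake file names, standing in for `dir /b` or `ls`:

```python
import subprocess
import sys

def stream_lines(cmd):
    """Run cmd and yield its stdout one line at a time,
    so processing can start before the command completes."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    for line in proc.stdout:
        yield line.decode().rstrip("\n")
    proc.wait()

# Stand-in for "dir /b" or "ls": a child process printing three names.
cmd = [sys.executable, "-c", "print('a.txt'); print('b.txt'); print('c.txt')"]
names = list(stream_lines(cmd))
```

In real use you would replace `cmd` with the platform's listing command and feed each yielded name to your per-file processing.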
How about using File.list(FilenameFilter filter) method and implementing FilenameFilter.accept(File dir, String name) to process each file and return false. I ran this on Linux vm for directory with 10K+ files and it took <10 seconds.

```
import java.io.File;
import java.io.FilenameFilter;

public class Temp {
    private static void processFile(File dir, String name) {
        File file = new File(dir, name);
        System.out.println("processing file " + file.getName());
    }

    private static void forEachFile(File dir) {
        String[] ignore = dir.list(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                processFile(dir, name);
                return false;
            }
        });
    }

    public static void main(String[] args) {
        long before, after;
        File dot = new File(".");
        before = System.currentTimeMillis();
        forEachFile(dot);
        after = System.currentTimeMillis();
        System.out.println("after call, delta is " + (after - before));
    }
}
```
Is there a workaround for Java's poor performance on walking huge directories?
[ "", "java", "performance", "directory-walk", "" ]
I'm creating some videos from a collection of images, I subsequently wish to play this video back with java. I found JMF but I haven't been able to find an encoding which is actually playable by it. Does anybody have an ffmpeg or mencoder formulation which produces JMF playable output? I would also take alternatives to JMF if there is something better.
According to the [JMF 2.1.1 - Supported Formats](http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/formats.html) page, Quicktime and various codecs are supported for decoding directly.
Two things: you will need to use the formats listed here: <http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/formats.html>

Secondly, JMF still seems to have problems under certain resolutions, framerates and bitrates. I've found that it can successfully decode when the video to read is not only in an accepted format, but also longer than 2 minutes. I don't know why; this happens to be my experience.

Hope it helps.
Encoding for JMF
[ "", "java", "ffmpeg", "jmf", "" ]
How do you detect which form input has focus using JavaScript or jQuery? From within a function I want to be able to determine which form input has focus. I'd like to be able to do this in straight JavaScript and/or jQuery.
I am not sure if this is the most efficient way, but you could try:

```
var selectedInput = null;

$(function() {
    $('input, textarea, select').focus(function() {
        selectedInput = this;
    }).blur(function() {
        selectedInput = null;
    });
});
```
[`document.activeElement`](https://developer.mozilla.org/en-US/docs/Web/API/Document/activeElement), it's been supported in IE for a long time and the latest versions of FF and chrome support it also. If nothing has focus, it returns the `document.body` object.
Detect which form input has focus using JavaScript or jQuery
[ "", "javascript", "jquery", "focus", "forms", "" ]
**Problem:** a table of coordinate lat/lngs. Two rows can potentially have the same coordinate. We want a query that returns a set of rows with unique coordinates (within the returned set). Note that `distinct` is not usable because I need to return the id column which is, by definition, distinct.

This sort of works (`@maxcount` is the number of rows we need, `intid` is a unique int id column):

```
select top (@maxcount) max(intid)
from Documents d
group by d.geoLng, d.geoLat
```

It will always return the same row for a given coordinate unfortunately, which is a bit of a shame for my use. If only we had a `rand()` aggregate we could use instead of `max()`... Note that you can't use `max()` with guids created by `newid()`.

Any ideas?

(there's some more background here, if you're interested: <http://www.itu.dk/~friism/blog/?p=121>)

UPDATE: Full solution [here](http://www.itu.dk/~friism/blog/?p=213)
You might be able to use a CTE for this with the ROW\_NUMBER function across lat and long and then use rand() against that. Something like:

```
WITH cte AS
(
    SELECT
        intID,
        ROW_NUMBER() OVER
        (
            PARTITION BY geoLat, geoLng
            ORDER BY NEWID()
        ) AS row_num,
        COUNT(intID) OVER (PARTITION BY geoLat, geoLng) AS TotalCount
    FROM dbo.Documents
)
SELECT TOP (@maxcount)
    intID, RAND(intID)
FROM cte
WHERE row_num = 1 + FLOOR(RAND() * TotalCount)
```

This will always return the first sets of lat and lngs and I haven't been able to make the order random. Maybe someone can continue on with this approach. It will give you a random row within the matching lat and lng combinations though.

If I have more time later I'll try to get around that last obstacle.
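The intent of the SQL above - one randomly chosen row per (lat, lng) group, capped at `@maxcount` - can be stated more plainly outside SQL. A small Python sketch of the same selection logic follows; the row layout `(intid, lat, lng)` is invented for the example:

```python
import random

def random_row_per_coordinate(rows, maxcount, rng=random):
    """rows: iterable of (intid, lat, lng) tuples. Returns up to maxcount
    rows, at most one per distinct (lat, lng) pair, the survivor of each
    group chosen at random (the analogue of ORDER BY NEWID() ... row 1)."""
    groups = {}
    for intid, lat, lng in rows:
        groups.setdefault((lat, lng), []).append(intid)
    picked = [(rng.choice(ids), lat, lng) for (lat, lng), ids in groups.items()]
    return picked[:maxcount]
```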
this doesn't work for you?

```
select top (@maxcount) *
from (
    select max(intid) as id
    from Documents d
    group by d.geoLng, d.geoLat
) t
order by newid()
```
SQL Server rand() aggregate
[ "", "sql", "sql-server", "geo", "" ]
I use Eclipse (3.4) and my class compiles without warning or errors. My project uses an external jar file. Where do I need to put this external jar file in order not to get a `java.lang.NoClassDefFoundError` when using this class from another project (not in Eclipse)? I could just extract the jar into the project folder, but that does not feel right.

Edit: this question is not about importing jars in Eclipse, but using them outside of Eclipse.
If you're wanting to include a JAR file to your Eclipse project, you would generally create a 'lib' folder inside the project folder, and put the file in there. You then need to tell Eclipse to include it in your class path so your code will compile and run inside Eclipse. To do that:

- Go into the properties of your project
- Select 'Java Build Path' in the left hand column
- Select the 'Libraries' tab in the centre part of the window
- Click the Add JARs button - this will give you a list of your projects in Eclipse - expand your project and go into the lib folder - your jar will be there.
- Select the JAR, click OK, and OK again out of the properties window.

Your code will now compile and run.
Put it in your `jre/lib/ext` folder.

Everything said about the classpath is true, but this is a consistent and sensible place for it to live.

You can find out your JRE folder by looking at the JAVA\_HOME environment variable on Windows.
Where to put the external jars?
[ "", "java", "eclipse", "jar", "eclipse-3.4", "ganymede", "" ]
I would like to convert an array of IDs into a string of comma separated values, to use in a MySQL UPDATE query. How would I do this?
Remember to escape values: ``` '"' . implode('","', array_map('mysql_real_escape_string', $data)) . '"' ```
``` implode(',', $array); ```
How do you output contents of an array as a comma separated string?
[ "", "php", "arrays", "" ]
How can I make my Java Swing GUI components [Right To Left] for the Arabic language, from a NetBeans Desktop Application?
Don't you just have to use: ``` Component.setComponentOrientation( ComponentOrientation.RIGHT_TO_LEFT ) ``` I believe that the swing components all already have support for RTL, don't they? Not sure how/where you'd do that in regards to netbeans, though.
The call of ``` Component.setComponentOrientation( ComponentOrientation.RIGHT_TO_LEFT ) ``` should do the trick. But be sure to use the SwingConstants LEADING and TRAILING instead of LEFT and RIGHT in your layouts. The same goes for GridBagConstraints.LINE\_START or LINE\_END instead of WEST or EAST, and probably some similar cases which I forgot to mention.
JAVA Swing GUI Components howto RTL view?
[ "", "java", "swing", "right-to-left", "" ]
I would like to learn the best practices to employ when creating a database driven web-application. I prefer to learn from examples. What is a good sample application that I can download and run to learn this: I am looking for: 1. Should be written in C# (preferably) 2. Should contain a complex database design (parent child relations, etc.) 3. Should implement the best practices for an ASP.net website as well as for database design. 4. Preferably uses Oracle.
If you don't want to worry about writing your [DAL](http://en.wikipedia.org/wiki/Data_access_layer) (Data Access Layer), then I suggest looking at [Nhibernate](http://www.nhibernate.org). There are samples with it and ASP.NET [here](http://www.beansoftware.com/asp.net-tutorials/nhibernate-log4net.aspx), [here](http://www.hibernate.org/363.html), and [here](https://stackoverflow.com/questions/259409/database-and-orm-choice-for-an-small-medium-size-net-application#259551). Here's a [sample with just Oracle](http://www.oracle.com/technology/tech/windows/odpnet/index.html) and ASP.NET. If you are looking for just sample ASP.NET Applications without an Object Relational Mapper like NHibernate, try [this](http://www.asp.net/Learn/data-access/). Also, on a higher level, you're looking for an [N-Tier ASP.NET application design](http://www.developerfusion.com/article/3058/boosting-your-net-application-performance/2/). There are good tutorials on that [here](http://www.c-sharpcorner.com/UploadFile/tavanover/NTierDevelopmentWithNetP311282005073236AM/NTierDevelopmentWithNetP3.aspx), [here](http://www.developerfusion.com/article/3058/boosting-your-net-application-performance/2/), and [here](http://msdn.microsoft.com/en-us/library/ms978384.aspx).
There are lots of examples in [ASP.NET Starter](http://www.asp.net/community/projects/) Kits. I really like the [BlogEngine.NET](http://www.asp.net/downloads/starter-kits/blog-engine/) application. For higher-end basics (hmmm..."higher-end basics"...oxymoron?) the [BeerHouse E-Commerce application](http://www.asp.net/downloads/starter-kits/the-beer-house/) is great.
Where can I find a Good Sample ASP.NET Database Driven Web Application?
[ "", "c#", "asp.net", "database", "" ]
I'm writing a simple program to browse the local network and pass on filenames to mplayer using "system". However, sometimes filenames contain spaces or quotes. Obviously I could write my own function to escape those, but I'm not sure exactly what characters do or do not need escaping. Is there a function available in the CRT or somewhere in the linux headers to safely escape a string to pass to the command line ?
There isn't a single solution that works everywhere because different shells have different ideas of what special characters are and how they are interpreted. For bash, you could probably get away with surrounding the entire filename in single quotes after replacing every single quote in the file name with `'"'"'` (the first single quote stops the sequence, the `"'"` appends the literal single quote to the string, the final single quote starts the quoted sequence again). A better solution would be to find a way to call the program without using system, such as by using fork with one of the exec functions so there is no shell interpolation.
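The single-quote trick described above is mechanical enough to sketch. A hedged Python illustration follows; Python's own `shlex` is used only to check that the result parses back correctly under POSIX shell quoting rules:

```python
import shlex

def sh_single_quote(s):
    # Wrap the whole string in single quotes; each embedded single quote
    # becomes '"'"' (close the quoted run, emit a double-quoted ',
    # then reopen the single-quoted run).
    return "'" + s.replace("'", "'\"'\"'") + "'"

quoted = sh_single_quote("it's a file name; with $pecial & chars")
```

The round trip through `shlex.split` confirms the shell would see exactly the original string as one argument - though, as the answer says, fork/exec without any shell at all remains the safer route.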
Other answers include this fork and exec solution, but I claim that this is the only right way to do it. Escaping shell arguments is prone to bugs and a waste of time, just as trying to escape SQL parameters is a silly idea when safer and more efficient parameter binding APIs exist. Here is a sample function:

```
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

void play(const char *path)
{
    /* Fork, then exec */
    pid_t pid = fork();
    if( pid < 0 )
    {
        /* This is an error! */
        return;
    }
    if( pid == 0 )
    {
        /* This is the child */
        freopen( "/dev/null", "r", stdin );
        freopen( "/dev/null", "w", stdout );
        freopen( "/dev/null", "w", stderr );
        execlp( "mplayer", "mplayer", path, (char *)0 );

        /* This is also an error! */
        return;
    }
}
```
How to safely escape a string from C++
[ "", "c++", "c", "linux", "shell", "" ]
If a packed Javascript saves 20kb off the download size, but takes a perfomance hit to unpack, then are there any benefits, besides obfuscating the code? I've made this a community wiki because it may be open to some discussion.
Packed javascript code generally does not take longer to execute vs regular code. Most code packers will shorten variables, function names and use a host of tricks to make the source smaller. The resulting source is fully executable! You can verify this by taking a look at it during runtime with firebug. You'll see the compressed code executing in its minimized form.
Short answer: yes, because the client machine's unpacking time is faster than the transmission you save. The internet is overloaded as it is, so any contribution to making things better is appreciated. Also remember that most clients will be caching this stuff, and the larger the file, the more chances it will get itself or other stuff evicted from the client's cache, especially on mobile devices.
Packing Javascripts - Is it worth it?
[ "", "javascript", "" ]
I have seen a lot of `ob_get_clean()` the last while. Typically I have done `$test .= 'test'`. I'm wondering if one is faster and/or better than the other.

Here is the code using `ob_get_clean()`:

```
ob_start();
foreach($items as $item) {
    echo '<div>' . $item . '</div>';
}
$test = ob_get_clean();
```

Here is the code using `$test .= 'test'`:

```
$test = '';
foreach($items as $item) {
    $test .= '<div>' . $item . '</div>';
}
```

Which is better?
Output buffers have all the pitfalls of global variables. You have to be aware of all execution paths from the `ob_start()` to the `ob_get_clean()`. Are you sure it will get there, and that any buffers opened in between will have been closed? Keep in mind that code can throw exceptions. That can be a really fun bug for the next guy to track down. On the other hand--and I hate to even mention it--at one time output buffering was somewhat faster at concatenating large strings, for reasons internal to PHP. I'm not sure if that is still true.
The results are the same, and I'd imagine the performance differences are negligible if any. Basically, a matter of personal style preference. I would go with concatenation myself - I use output buffering only when concatenation is not an option. Also, instead of running both `ob_get_contents()` and `ob_clean()` simply run `ob_get_clean()` which performs both at once.
Is it better to use ob_get_contents() or $text .= 'test';
[ "", "php", "optimization", "concatenation", "" ]
Why one would use one of the following packages instead of the other? * Java Logging * Commons Logging * Log4j * SLF4j * Logback
In chronological order of API appearance (as far as I know):

* Log4j because most everybody uses it (in my experience)
* Commons Logging because open source projects use it (so they can integrate with whatever logging framework is used in the integrated solution); especially valid if you're an API/Framework/OSS and you rely on other packages that use Commons Logging.
* Commons Logging because you don't want to "lock down" to a particular logging framework (so instead you lock down to what Commons Logging gives you instead) - I don't think it is sensible to decide using this point as the reason.
* Java logging because you don't want to add in an extra jar.
* SLF4j because it's newer than Commons Logging and provides parameterized logging:

```
logger.debug("The entry is {}.", entry);

// which expands effectively to

if (logger.isDebugEnabled()) {
    // Note that it's actually *more* efficient than this - see Huxi's comment below...
    logger.debug("The entry is " + entry + ".");
}
```

* Logback because it's newer than log4j and again, supports parameterized logging, as it implements SLF4j directly
* SLF4j/Logback because it's written by the same guy who did log4j, so he's made it better (according to [Ken G](https://stackoverflow.com/questions/354837/whats-up-with-logging-in-java#356038) - thanks. It seems to fit when looking at [their earlier news posts](http://www.slf4j.org/news.html))
* SLF4j because they also publish a log4j adapter so you don't have to "switch out" log4j in older code - just make log4j.properties use SLF4j and its configuration
I find logging in Java to be confusing, inconsistent, poorly documented, and especially haphazard. Moreover, there is a huge amount of similarity between these logging frameworks, resulting in duplication of effort and confusion as to what logging environment you are actually in.

In particular, if you are working in a serious Java web application stack, you are often in *multiple* logging environments at one time (e.g. Hibernate may use log4j, and Tomcat java.util.logging). Apache Commons is meant to bridge different logging frameworks, but really just adds more complexity. If you do not know this ahead of time, it is utterly bewildering. Why are my log messages not printing out to the console, etc.? Oh, because I am looking at the Tomcat logs, and not log4j. Adding yet another layer of complexity, the application server may have global logging configurations that may not recognize local configurations for a particular web application.

Lastly, all these logging frameworks are WAY TOO COMPLICATED. Logging in Java has been a disorganized mess, leaving developers like me frustrated and confused. Early versions of Java did not have a built-in logging framework, leading to this scenario.
What's Up with Logging in Java?
[ "", "java", "logging", "log4j", "logback", "slf4j", "" ]
Here is a little test program:

```
#include <cstdlib>
#include <iostream>

class Test
{
public:
    static void DoCrash() { std::cout << "TEST IT!" << std::endl; }
};

int main()
{
    Test k;
    k.DoCrash(); // calling a static method like a member method...

    std::system("pause");
    return 0;
}
```

On VS2008 + SP1 (vc9) it compiles fine: the console just displays "TEST IT!".

As far as I know, static member methods shouldn't be called on instanced objects.

1. Am I wrong? Is this code correct from the standard point of view?
2. If it's correct, why is that? I can't find why it would be allowed, or maybe it's to help using "static or not" methods in templates?
The standard states that it is not necessary to call the method through an instance, but that does not mean that you cannot do it. There is even an example where it is used:

C++03, 9.4 static members

> A static member s of class X may be referred to using the qualified-id expression X::s; it is not necessary to use the class member access syntax (5.2.5) to refer to a static member. A static member may be referred to using the class member access syntax, in which case the object-expression is evaluated.

```
class process {
public:
    static void reschedule();
};

process& g();

void f() {
    process::reschedule(); // OK: no object necessary
    g().reschedule();      // g() is called
}
```
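C++ is not alone in allowing this. Purely as an analogy, Python permits calling a static method through an instance as well - the instance is only used to look the method up, it contributes nothing to the call:

```python
class Test:
    @staticmethod
    def do_crash():
        return "TEST IT!"

k = Test()
# Both spellings reach the same underlying function.
via_instance = k.do_crash()
via_class = Test.do_crash()
```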
Static functions don't need an instantiated object to be called, so

```
k.DoCrash();
```

behaves exactly the same as

```
Test::DoCrash();
```

using the scope resolution operator (::) to name the static function inside the class.

*Notice that in both cases the compiler doesn't pass the `this` pointer, since the static function doesn't need it.*
C++ Static member method call on class instance
[ "", "c++", "visual-c++", "standards", "" ]
I'm working on a project, in Visual Studio 2008, whose DLL currently surpasses 20 MB. Is there a way to profile that DLL, searching for the main contributors to its size?

I suspect that breaking it into smaller projects inside the solution would help, but I'm looking for a faster solution (although not a better one, I'm afraid).

As a sidenote, we are getting an 'unexpected error writing metadata' error that we think is related to the DLL size. However, when restarting Visual Studio 2008, the error vanishes. Until it appears later again, after a few rebuilds. The disk space is not a problem.
Yowser! Have you perchance got some huge resx files (or other embedded content) that are getting embedded in the dll? Perhaps treat those as external content? I'd start by looking at the files in the project tree... that 20Mb has to come from somewhere obvious - large graphics, etc.
[PE Explorer](http://www.heaventools.com/overview.htm) will show you the contents. [This](http://wiki.answers.com/Q/How_do_you_open_.dll_files_to_see_what_is_written_inside) may also help? Update: [Dependency Walker](http://www.dependencywalker.com/) may also help here. You can run it in "Profile Mode" which works by executing your application and watching it to see what it loads.
How to minimize a ASP.NET C# project DLL size?
[ "", "c#", "asp.net", "visual-studio-2008", "" ]
I have a file and I don't know how big it's going to be (it could be quite large, but the size will vary greatly). I want to search the last 10 lines or so to see if any of them match a string. I need to do this as quickly and efficiently as possible and was wondering if there's anything better than:

```
s = "foo"
last_bit = fileObj.readlines()[-10:]
for line in last_bit:
    if line == s:
        print "FOUND"
```
```
# Tail
from __future__ import with_statement

find_str = "FIREFOX"                    # String to find
fname = "g:/autoIt/ActiveWin.log_2"     # File to check

with open(fname, "r") as f:
    f.seek(0, 2)                        # Seek @ EOF
    fsize = f.tell()                    # Get Size
    f.seek(max(fsize - 1024, 0), 0)     # Set pos @ last n chars
    lines = f.readlines()               # Read to end

lines = lines[-10:]                     # Get last 10 lines

# This returns True if any line is exactly find_str + "\n"
print find_str + "\n" in lines

# If you're searching for a substring
for line in lines:
    if find_str in line:
        print True
        break
```
Here's an answer like MizardX's, but without its apparent problem of taking quadratic time in the worst case from rescanning the working string repeatedly for newlines as chunks are added.

Compared to the Active State solution (which also seems to be quadratic), this doesn't blow up given an empty file and does one seek per block read instead of two.

Compared to spawning 'tail', this is self-contained. (But 'tail' is best if you have it.)

Compared to grabbing a few kB off the end and hoping it's enough, this works for any line length.

```
import os

def reversed_lines(file):
    "Generate the lines of file in reverse order."
    part = ''
    for block in reversed_blocks(file):
        for c in reversed(block):
            if c == '\n' and part:
                yield part[::-1]
                part = ''
            part += c
    if part:
        yield part[::-1]

def reversed_blocks(file, blocksize=4096):
    "Generate blocks of file's contents in reverse order."
    file.seek(0, os.SEEK_END)
    here = file.tell()
    while 0 < here:
        delta = min(blocksize, here)
        here -= delta
        file.seek(here, os.SEEK_SET)
        yield file.read(delta)
```

To use it as requested:

```
from itertools import islice

def check_last_10_lines(file, key):
    for line in islice(reversed_lines(file), 10):
        if line.rstrip('\n') == key:
            print 'FOUND'
            break
```

**Edit:** changed map() to itertools.imap() in head().

**Edit 2:** simplified reversed_blocks().

**Edit 3:** avoid rescanning tail for newlines.

**Edit 4:** rewrote reversed_lines() because str.splitlines() ignores a final '\n', as BrianB noticed (thanks).

Note that in very old Python versions the string concatenation in a loop here will take quadratic time. CPython from at least the last few years avoids this problem automatically.
Most efficient way to search the last X lines of a file?
[ "", "python", "file", "search", "" ]
I have very little experience building software for Windows, and zero experience using the Windows API, but I'm reasonably familiar with Python. How should I go about learning to use the Windows API with Python?
Honestly, no. The Windows API is an 800 pound monster covered with hair. [Charlie Petzold's 15 pound book](http://www.charlespetzold.com/faq.html) was the canonical reference once upon a time. That said, the [Python for Windows](http://python.net/crew/mhammond/win32/) folks have some good material. Microsoft has the [whole API online](http://msdn.microsoft.com/en-us/library/aa383749(VS.85).aspx), including some sample code and such. And the [Wikipedia article](http://en.wikipedia.org/wiki/Win32) is a good overview.
About 4 years ago I set out to truly understand the Windows API. I was coding in C# at the time, but I felt like the framework was abstracting me too much from the API (which it was). So I switched to Delphi (C++ or C would have also been good choices). In my opinion, it is important that you start working in a language that creates native code and talks directly to the Windows API and makes you care about buffers, pointers, structures, and real constructs that Windows uses directly. C# is a great language, but not the best choice for learning the Windows API. Next, buy Mark Russinovich's book "Windows Internals" [Amazon link](http://amzn.to/xdu4Br). This is the 5th edition. The 6th edition is coming out April 2012 and adds info about Server 2008 R2 and Windows 7. ## And now, for the most important (and best) resource for learning Win32 API: Mark Russinovich's [Windows Operating Systems Internals Curriculum](http://www.microsoft.com/resources/sharedsource/windowsacademic/curriculumresourcekit.mspx) which is offered for free. It is designed to be used by an instructor to teach students. I went through it and it is awesome. Full of examples, history, and detailed explanations. In my opinion, this is an ideal way to learn the Windows API. Mark Russinovich is a Microsoft Technical Fellow (there are only 14 at MS including the creator of C#). He used to own Winternals until he sold it to MS, he has a PhD in Computer Engineering from Carnegie Mellon, he has been a frequent presenter at Microsoft conferences (even before he worked for them), and he is crazy smart. His presentations are one of the primary reasons I attend Microsoft TechEd every year.
How should I learn to use the Windows API with Python?
[ "", "python", "winapi", "" ]
**Does anyone know of an algorithm (or search terms / descriptions) to locate a known image within a larger image?** e.g. I have an image of a single desktop window containing various buttons and areas (target). I also have code to capture a screen shot of the current desktop. I would like an algorithm that will help me find the target image within the larger desktop image (what exact x and y coordinates the window is located at). The target image may be located anywhere in the larger image and may not be 100% exactly the same (very similar but not exact possibly b/c of OS display differences) Does anyone know of such an algorithm or class of algorithms? I have found various image segmentation and computer vision algorithms but they seem geared to "fuzzy" classification of regions and not locating a specific image within another. \*\* *My goal is to create a framework that, given some seed target images, can find "look" at the desktop, find the target area and "watch" it for changes.* \*\*
You said your image may not be exactly the same, but then say you don't want "fuzzy" algorithms. I'm not sure those are compatible. In general, though, I think you want to look at [image registration](http://en.wikipedia.org/wiki/Image_registration) algorithms. There's an open source C++ package called [ITK](http://itk.org/) that might provide some hints. Also [ImageJ](http://rsbweb.nih.gov/ij/) is a popular open source Java package. Both of these have at least some registration capabilities available if you poke around.
Have a look at the paper I wrote: <http://werner.yellowcouch.org/Papers/subimg/index.html>. It's highly detailed and appears to be the only article discussing how to apply fourier transformation to the problem of subimage finding.

In short, if you want to use the fourier transform one could apply the following formula: the correlation between image A and image B when image A is shifted over dx,dy is given in the following matrix: `C = ifft(fft(A) x conjugate(fft(B)))`. So, the position in image C that has the highest value has the highest correlation, and that position reflects dx,dy.

This result works well for subimages that are relatively large. For smaller images, some more work is necessary as explained in the article. Nevertheless, such fourier transforms are quite fast. It results in around `3*sx*sy*log2(sx*sy) + 3*sx*sy` operations.
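For small inputs, the same best-match search can be sketched directly in the spatial domain. This brute-force Python illustration locates the shift with the best agreement by sum of squared differences - it is not the paper's fourier method (which is what makes large images tractable), just the naive equivalent of the correlation search:

```python
def find_subimage(big, small):
    """big, small: 2-D lists of grayscale values. Returns (dy, dx) of the
    placement minimizing the sum of squared differences, i.e. the best
    spatial match of small inside big."""
    bh, bw = len(big), len(big[0])
    sh, sw = len(small), len(small[0])
    best, best_pos = None, None
    for dy in range(bh - sh + 1):
        for dx in range(bw - sw + 1):
            cost = sum((big[dy + y][dx + x] - small[y][x]) ** 2
                       for y in range(sh) for x in range(sw))
            if best is None or cost < best:
                best, best_pos = cost, (dy, dx)
    return best_pos
```

Replacing exact squared difference with a tolerance-based score is one way to cope with the "very similar but not exact" rendering differences the question mentions.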
Find known sub image in larger image
[ "", "java", "algorithm", "image-processing", "image-manipulation", "" ]
I have a class with an internal property:

```
internal virtual StateEnum EnrolmentState
{
    get { ..getter logic }
    set { ..setter logic }
}
```

However, I want to be able to access this property outside of the assembly, so I created a method that simply returns this property:

```
public StateEnum GetCurrentState()
{
    return EnrolmentState;
}
```

But when I call this method from a class outside of this assembly I get an exception (`System.TypeLoadException: Method 'get_EnrolmentState' on type 'EnrolmentAopProxy' from assembly '44fe776f-458e-4c5d-aa35-08c55501dd43, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' is overriding a method that is not visible from that assembly.`)

So is it possible to access an internal member outside of the assembly in any way, or should I consider a different approach?

Just to mention that this class is used as an O/R mapper entity (NPersist) and it is overridden by the O/R mapper to inject persistence code.
Why is the property internal in the first place? If you want to have public access to it, make it public. I assume you have some control over this, as otherwise you wouldn't be able to add a public method to access it in the first place. If you only want selected other assemblies to be able to access it, `InternalsVisibleTo` is your friend (pun not intended) - but as Erik says, you should think about the design carefully at that point. As to why you're getting that particular error - it looks like your AOP proxy is still trying to override the internal property, rather than using your public method. It's hard to know whether or not you can change that without knowing more about your particular setup - but making the property public is likely to be a simpler fix.
This sounds like you should reconsider your choice of design. Internal is used to avoid what you are trying to do, so consider using some kind of public access to the properties instead. It's possible to use the InternalsVisibleTo Attribute to make a specific assembly able to reach internal properties but from my point of view that's a poor design.
Accessing internal property out of the assembly scope
[ "c#", "oop", "language-features" ]
There seems to be three common approaches for mapping an application end user to a database user. 1. **One to One Mapping:** Each Application user (bob, nancy and fred) also get a corresponding database user account (bob nancy and fred). 2. **N to M mapping:** Each application user is mapped to a database user that represents their role. bob and nancy are mapped to the 'clerk' database user while fred is mapped to the 'manager' database user. 3. **N to 1 mapping:** Each application user is mapped to a single database user (app\_user) and identity is only managed at the application tier. It seems that #3 is the most common in web application development. **Why is there not a greater emphasis on the other two options?** Oracle encourages techniques like #2 using its proxy authentication features for the following reason: **Limited trust model**-controlling the users on whose behalf middle tiers can connect, and the roles the middle tiers can assume for the user **Scalability**-by supporting lightweight user sessions and eliminating the overhead of re-authenticating clients **Accountability**, by preserving the identity of the real user through to the database, and enabling auditing of actions taken on behalf of the real user [Oracle's Proxy Authentication documentation](http://download.oracle.com/docs/cd/B28359_01/java.111/b31224/proxya.htm)
In addition to the simpler administration, there are performance advantages of option 3 on web-servers; this allows pooling of connections - i.e. a low number of physical database connections can be re-used continuously to service a large number of app users. This is known as the "[trusted subsystem](http://msdn.microsoft.com/en-us/library/aa480587.aspx)" model - i.e. your app-server validates external callers, but then the app-server itself is used as the identity for calling downwards. The biggest issue here is that for audit etc. you need to keep telling the db who made the current change (things like `USER_NAME()`, `SUSER_SNAME()` cease to be useful) - and of course, this is relatively easy to spoof. If the web-server used security per user, this wouldn't be possible - and so you'd essentially have to disable connection pooling. The act of establishing a connection is (relatively) expensive, so this would have a significant impact on performance. You wouldn't want to keep the (per-user) connection around between requests, as this would lead to a huge pool and a lot of open connections (also expensive). The "per role" option sits between them - but it is rare that roles are truly mutually exclusive, which makes this hard to implement. With client apps that talk directly to the database, option 1 is the simplest to maintain, since you don't need to distribute any special account details to the client. The pooling likewise isn't an issue, since the client's machine is only acting as 1 user.
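The accountability trade-off described in the answer above - under option 3 the database no longer knows the end user, so the application must record it on every audited write - can be sketched as follows. This is a hedged illustration using Python's stdlib sqlite3 as a stand-in for a real database; the table, column, and function names are all invented here.

```python
# Minimal sketch of the "trusted subsystem" model (option 3): one
# shared database identity, with the real user recorded by the app.
# sqlite3 stands in for the real database; all names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")  # the single shared "app_user" connection
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        amount   REAL NOT NULL,
        acted_by TEXT NOT NULL   -- audit column replaces USER_NAME()
    )
""")

def place_order(app_user, amount):
    # app_user was authenticated at the application tier, not by the db
    conn.execute(
        "INSERT INTO orders (amount, acted_by) VALUES (?, ?)",
        (amount, app_user),
    )

place_order("bob", 12.50)
place_order("nancy", 99.00)

rows = conn.execute("SELECT acted_by, amount FROM orders ORDER BY id").fetchall()
print(rows)  # -> [('bob', 12.5), ('nancy', 99.0)]
```

As the answer notes, the weakness is that nothing stops a buggy or malicious caller from passing someone else's name in `acted_by` - the spoofing risk that per-user database accounts would prevent.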
Re 2), application security/permission needs are usually of a much finer granularity than can be provided by database security layers, unless you put most of your application logic into the database. A simple example is that while two users may need to update the orders table, one may be creating their own order and the other may be an admin user editing someone else's order. They both need insert/update privileges on the table. You could implement this restriction via stored procedures, but that is really a workaround - both users still need to update the table so will need an account with those privileges. I prefer to use the same db account for all end users of an application, and implement application roles and permissions outside of the db. Obviously for a web app where users can register themselves, 1) is not practical. I have a site with 500,000 or more users - would I want that many db accounts? I don't think so! I take a minimalist approach (which I am sure many would argue with), where the db user an application runs under has the minimum permissions needed to function, but no more. That means the db user cannot make schema changes, drop tables, restore databases, etc. A separate account is used during development for schema modification, that has the requisite privileges.
What are the advantages to each approach for mapping application end users to database users?
[ "sql", "database", "oracle", "security" ]
I don't understand, why does the following regular expression: ``` ^*$ ``` Match the string "127.0.0.1"? Using `Regex.IsMatch("127.0.0.1", "^*$");` Using Expresso, it does not match, which is also what I would expect. Using the expression `^.*$` does match the string, which I would also expect. Technically, `^*$` should match the beginning of a string/line any number of times, followed by the ending of the string/line. It seems \* is implicitly treated as a `.*` What am I missing? EDIT: Run the following to see an example of the problem. ``` using System; using System.Text.RegularExpressions; namespace RegexFubar { class Program { static void Main(string[] args) { Console.WriteLine(Regex.IsMatch("127.0.0.1", "^*$")); Console.Read(); } } } ``` I do not wish to have ^\*$ match my string, I am wondering why it **does** match it. I would think that the expression should result in an exception being thrown, or at least a non-match. EDIT2: To clear up any confusion. I did not write this regex with the intention of having it match "127.0.0.1". A user of our application entered the expression and wondered why it matched the string when it should not. After looking at it, I could not come up with an explanation for why it matched - especially not since Expresso and .NET seems to handle it differently. I guess the question is answered by it being due to the .NET implementation avoiding throwing an exception, even thought it's technically an incorrect expression. But is this really what we want?
Well, theoretically you are right, it should not match. But this depends on how the implementation works internally. Most regex implementations will take your regex and strip ^ from the front (taking note that it must match from the start of the string) and strip $ from the end (noting that it must match to the end of the string); what is left over is just "\*", and "\*" on its own is a valid regex. The implementation you are using is just wrong regarding how to handle it. You could try what happens if you replace "^\*$" just with "\*"; I guess it will also match everything. It seems like the implementation treats a single asterisk like a ".\*". According to the ISO/IEC 9945-2:1993 standard, which is also described in the [POSIX standard](http://www.opengroup.org/onlinepubs/007908799/xbd/re.html), it is broken. It is broken because the standard says that after a ^ character, an asterisk has no special meaning at all. That means "^\*$" should actually only match a single string and this string is **"\*"**! To quote the standard: > The asterisk is special except when used: > > * in a bracket expression > * as the first character of an entire BRE (after an initial ^, if any) > * as the first character of a subexpression (after an initial ^, if any); see BREs Matching Multiple Characters. So if it is the first character (and ^ doesn't count as first character if present) it has no special meaning. That means in this case an asterisk should only match one character and that is an asterisk. --- ## Update Microsoft says > Microsoft .NET Framework regular > expressions incorporate the most > popular features of other regular > expression implementations such as > those in Perl and awk. Designed to be > compatible with Perl 5 regular > expressions, .NET Framework regular > expressions include features not yet > seen in other implementations, such as > right-to-left matching and on-the-fly > compilation. Source: <http://msdn.microsoft.com/en-us/library/hs600312.aspx> Okay, let's test this: ``` # echo -n 127.0.0.1 | perl -n -e 'print (($_ =~ m/(^.*$)/)[0]),"\n";' -> 127.0.0.1 # echo -n 127.0.0.1 | perl -n -e 'print (($_ =~ m/(^*$)/)[0]),"\n";' -> ``` Nope, it does not. Perl works correctly. ^.\*$ matches the string, ^\*$ doesn't => .NET's regex implementation is broken and it does not work like Perl 5 as MS claims.
Asterisk (\*) matches the preceding element **ZERO OR MORE** times. If you want one or more, use the + operator instead of the \*. You are asking it to match an optional start of string marker and the end of string marker. I.e. if we omit the start of string marker, you're only looking for the end of string marker... which will match any string! I don't really understand what you are trying to do. If you could give us more information then maybe I could tell you what you should have done :)
Why ^*$ matches "127.0.0.1"
[ "c#", "regex" ]
I have used extension methods to extend html helpers to make an RSS repeater: ``` public static string RSSRepeater(this HtmlHelper html, IEnumerable<IRSSable> rss) { string result=""; foreach (IRSSable item in rss) { result += "<item>" + item.GetRSSItem().InnerXml + "</item>"; } return result; } ``` So I make one of my business objects implement IRSSable, and try to pass this to the HTML helper. But I just cannot seem to make it work, I have tried: ``` <%=Html.RSSRepeater(ViewData.Model.GetIssues(null, null, "") as IEnumerable<IRSSable>) %> ``` Compiles fine, but null is passed ``` <%=Html.RSSRepeater(ViewData.Model.GetIssues(null, null, "")) %> ``` IntelliSense moans about not being able to pass IEnumerable<Issue> to IEnumerable<IRSSable>. So how do you do it? That method I am calling definitely returns `IEnumerable<Issue>` and Issue definitely implements IRSSable.
Ahh... try: ``` public static string RSSRepeater<T>(this HtmlHelper html, IEnumerable<T> rss) where T : IRSSable { ... } ``` This then should allow you to pass any sequence of things that implement `IRSSable` - and the generic type inference should mean you don't need to specify the `T` (as `Issue`) yourself - the compiler will handle it. By the way - [avoid concatenation here](http://www.yoda.arachsys.com/csharp/stringbuilder.html); `StringBuilder` is preferable: ``` StringBuilder result = new StringBuilder(); foreach (IRSSable item in rss) { result.Append("<item>").Append(item.GetRSSItem().InnerXml).Append("</item>"); } return result.ToString(); ```
You're running into [generic variance issues](https://stackoverflow.com/questions/229656). Just because something implements `IEnumerable<Issue>` doesn't mean it implements `IEnumerable<IRssable>`. (It will in C# 4, but I'm assuming you're not using that :) You could make your extension method take just `IEnumerable` and call `IEnumerable.Cast<IRssable>` on it though - that's probably the simplest approach. EDIT: [Marc's suggestion](https://stackoverflow.com/questions/325561/passing-interface-as-a-parameter-to-an-extension-method#325568) is probably the better one, but I'll leave this answer here as it explains what's going on rather than just the fix :)
Passing interface as a parameter to an extension method
[ "c#", "asp.net-mvc", "extension-methods", "interface" ]
I want to create a C# program to provision Windows Mobile devices. I have found MSDN documentation on a function called [DMProcessConfigXML](http://msdn.microsoft.com/en-us/library/ms852998.aspx), but no instructions on how to use this function. How can I use this function in my Windows Mobile app? I suspect it has something to do with using pinvoke. Thanks, Paul
From managed code, you can call ConfigurationManager.ProcessConfiguration found in the Microsoft.WindowsMobile.Configuration namespace. [msdn](http://msdn.microsoft.com/en-us/library/microsoft.windowsmobile.configuration.configurationmanager.processconfiguration.aspx) Here is sample code: ``` XmlDocument configDoc = new XmlDocument(); configDoc.LoadXml( "<wap-provisioningdoc>"+ "<characteristic type=\"BrowserFavorite\">"+ "<characteristic type=\"Microsoft\">"+ "<parm name=\"URL\" value=\"http://www.microsoft.com\"/>"+ "</characteristic>"+ "</characteristic>"+ "</wap-provisioningdoc>" ); ConfigurationManager.ProcessConfiguration(configDoc, false); ``` No need to P/Invoke.
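As a side note on the provisioning document itself: since it is plain XML, it can also be assembled with an XML API instead of hand-concatenated strings. The sketch below uses Python's stdlib ElementTree purely to show the document's shape; on the device you would build the equivalent XmlDocument in C# as shown above.

```python
# Builds the same wap-provisioningdoc shown in the C# sample.
# Illustrative only: a Windows Mobile app would construct this with
# XmlDocument in C# and hand it to ConfigurationManager.
import xml.etree.ElementTree as ET

doc = ET.Element("wap-provisioningdoc")
fav = ET.SubElement(doc, "characteristic", type="BrowserFavorite")
ms = ET.SubElement(fav, "characteristic", type="Microsoft")
ET.SubElement(ms, "parm", name="URL", value="http://www.microsoft.com")

xml_text = ET.tostring(doc, encoding="unicode")
print(xml_text)
```

Building the document this way avoids escaping mistakes when parameter values (favorite names, URLs) come from user input.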
I looked at the MSDN and indeed very little information is available. I did some google searching and I found this [example](http://cowo.supersized.org/archives/14-Example-usage-of-DMProcessConfigXML-on-WinCE-5.0.html). Also this [blog entry](http://blogs.msdn.com/andrewarnottms/archive/tags/provisioning/default.aspx) about a CF open source provisioning application. Most of the available examples are in C++. If you want to call it from C#, you need to use pinvoke. One great resource is [pinvoke.net](http://pinvoke.net/). [Here](http://pinvoke.net/default.aspx/aygshell.DMProcessConfigXML) you can find the signatures you need.
How do I use DMProcessConfigXML to provision my Windows Mobile device?
[ "c#", ".net", "windows-mobile", "pinvoke" ]
I have a table where I'm recording if a user has viewed an object at least once, hence: ``` HasViewed ObjectID number (FK to Object table) UserId number (FK to Users table) ``` Both fields are NOT NULL and together form the Primary Key. My question is, since I don't care how many times someone has viewed an object (after the first), I have two options for handling inserts. * Do a SELECT count(\*) ... and if no records are found, insert a new record. * Always just insert a record, and if it throws a DUP\_VAL\_ON\_INDEX exception (indicating that there already was such a record), just ignore it. What's the downside of choosing the second option? UPDATE: I guess the best way to put it is: "Is the overhead caused by the exception worse than the overhead caused by the initial select?"
I would normally just insert and trap the DUP\_VAL\_ON\_INDEX exception, as this is the simplest to code. This is more efficient than checking for existence before inserting. I don't consider doing this a "bad smell" (horrible phrase!) because the exception we handle is raised by Oracle - it's not like raising your own exceptions as a flow-control mechanism. Thanks to Igor's comment I have now run two different benchmarks on this: (1) where all insert attempts except the first are duplicates, (2) where all inserts are not duplicates. Reality will lie somewhere between the two cases. Note: tests performed on Oracle 10.2.0.3.0. **Case 1: Mostly duplicates** It seems that the most efficient approach (by a significant factor) is to check for existence WHILE inserting: ``` prompt 1) Check DUP_VAL_ON_INDEX begin for i in 1..1000 loop begin insert into hasviewed values(7782,20); exception when dup_val_on_index then null; end; end loop; rollback; end; / prompt 2) Test if row exists before inserting declare dummy integer; begin for i in 1..1000 loop select count(*) into dummy from hasviewed where objectid=7782 and userid=20; if dummy = 0 then insert into hasviewed values(7782,20); end if; end loop; rollback; end; / prompt 3) Test if row exists while inserting begin for i in 1..1000 loop insert into hasviewed select 7782,20 from dual where not exists (select null from hasviewed where objectid=7782 and userid=20); end loop; rollback; end; / ``` Results (after running once to avoid parsing overheads): ``` 1) Check DUP_VAL_ON_INDEX PL/SQL procedure successfully completed. Elapsed: 00:00:00.54 2) Test if row exists before inserting PL/SQL procedure successfully completed. Elapsed: 00:00:00.59 3) Test if row exists while inserting PL/SQL procedure successfully completed. Elapsed: 00:00:00.20 ``` **Case 2: no duplicates** ``` prompt 1) Check DUP_VAL_ON_INDEX begin for i in 1..1000 loop begin insert into hasviewed values(7782,i); exception when dup_val_on_index then null; end; end loop; rollback; end; / prompt 2) Test if row exists before inserting declare dummy integer; begin for i in 1..1000 loop select count(*) into dummy from hasviewed where objectid=7782 and userid=i; if dummy = 0 then insert into hasviewed values(7782,i); end if; end loop; rollback; end; / prompt 3) Test if row exists while inserting begin for i in 1..1000 loop insert into hasviewed select 7782,i from dual where not exists (select null from hasviewed where objectid=7782 and userid=i); end loop; rollback; end; / ``` Results: ``` 1) Check DUP_VAL_ON_INDEX PL/SQL procedure successfully completed. Elapsed: 00:00:00.15 2) Test if row exists before inserting PL/SQL procedure successfully completed. Elapsed: 00:00:00.76 3) Test if row exists while inserting PL/SQL procedure successfully completed. Elapsed: 00:00:00.71 ``` In this case DUP\_VAL\_ON\_INDEX wins by a mile. Note the "select before insert" is the slowest in both cases. So it appears that you should choose option 1 or 3 according to the relative likelihood of inserts being or not being duplicates.
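The insert-and-trap pattern benchmarked in the answer above translates to most databases. Here is a hedged sketch using Python's stdlib sqlite3, where the duplicate-key error surfaces as IntegrityError rather than DUP\_VAL\_ON\_INDEX; the function name is invented, the table matches the question.

```python
# "Just insert and trap the duplicate-key error", sketched with sqlite3
# (SQLite raises IntegrityError where Oracle raises DUP_VAL_ON_INDEX).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE hasviewed (
        objectid INTEGER NOT NULL,
        userid   INTEGER NOT NULL,
        PRIMARY KEY (objectid, userid)
    )
""")

def mark_viewed(objectid, userid):
    """Record a first view; return False if it was already recorded."""
    try:
        conn.execute(
            "INSERT INTO hasviewed (objectid, userid) VALUES (?, ?)",
            (objectid, userid),
        )
        return True
    except sqlite3.IntegrityError:  # duplicate primary key: already viewed
        return False

print(mark_viewed(7782, 20))  # -> True  (first view)
print(mark_viewed(7782, 20))  # -> False (duplicate ignored)
```

SQLite also offers `INSERT OR IGNORE`, the analogue of the answer's option 3 (checking while inserting), which avoids raising the error at all.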
I don't think there is a downside to your second option. I think it's a perfectly valid use of the named exception, plus it avoids the lookup overhead.
How bad is ignoring Oracle DUP_VAL_ON_INDEX exception?
[ "sql", "oracle", "exception", "plsql" ]