I'm trying to understand the API inconsistency in the Java standard collections library. There is no method in `List` or in `AbstractList` to get the last item, although one can simulate that with `size()` and `get(index)`. However, `LinkedList` supports that function. Any idea why it was decided not to support this method in the interface?
The `java.util.List` interface doesn't support `getLast()` because the designers went for a 'minimal interface'. With a minimal number of methods defined, it is both easier to understand and quicker to learn. This is in contrast with a 'humane interface' (such as used in the [Ruby array class](http://www.ruby-doc.org/core/classes/Array.html)), which attempts to provide methods for doing common operations (e.g. `getLast()`). As there are many uses to which such a fundamental concept as a list can be put, this tends to lead to much larger interfaces. For further information see Martin Fowler's [Minimal Interface](http://martinfowler.com/bliki/MinimalInterface.html) and [Humane Interface](http://martinfowler.com/bliki/HumaneInterface.html) descriptions.

As to why `LinkedList` supports `getLast()` etc., to quote the javadoc:

> ... the LinkedList class provides uniformly named methods to get, remove and insert an element at the beginning and end of the list. These operations allow linked lists to be used as a stack, queue, or double-ended queue (deque).

Presumably it was felt that a general `List` would not be adequate for these specific use cases.

As an insight into the mind of the principal designer of the Java Collections API (Joshua Bloch), he provides [this list of API design maxims](http://www.infoq.com/articles/API-Design-Joshua-Bloch) by which he works. The most pertinent to this question are:

> Early drafts of APIs should be short, typically one page with class and method signatures and one-line descriptions. This makes it easy to restructure the API when you don't get it right the first time.
>
> When in doubt, leave it out. If there is a fundamental theorem of API design, this is it. It applies equally to functionality, classes, methods, and parameters. Every facet of an API should be as small as possible, but no smaller. You can always add things later, but you can't take them away. Minimizing conceptual weight is more important than class- or method-count.
>
> Keep APIs free of implementation details. They confuse users and inhibit the flexibility to evolve. It isn't always obvious what's an implementation detail: be wary of overspecification.
>
> Minimize accessibility; when in doubt, make it private. This simplifies APIs and reduces coupling.
>
> Consider the performance consequences of API design decisions, but don't warp an API to achieve performance gains. Luckily, good APIs typically lend themselves to fast implementations.

However, he also states:

> Don't make the client do anything the library could do. Violating this rule leads to boilerplate code in the client, which is annoying and error-prone.

Which just shows that design guidelines often conflict, and the hardest part of an API designer's job is to balance these conflicts.
The usual reason is that they wanted to specify each method with a Big-O requirement and felt that `getLast()` could not be implemented efficiently on all lists, so they introduced it at each level with its Big-O promise. Or it could just have been an oversight, or they felt it wasn't common enough and that if you needed it you could get it with `size()`/`get(index)`.
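The `size()`/`get(index)` workaround both answers allude to can be sketched as a small generic helper (the helper name is illustrative, not a standard API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.NoSuchElementException;

public class Main {
    // Generic helper simulating the missing List.getLast()
    static <T> T getLast(List<T> list) {
        if (list.isEmpty()) {
            throw new NoSuchElementException("empty list");
        }
        return list.get(list.size() - 1);
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("a", "b", "c");
        System.out.println(getLast(names)); // prints "c"
    }
}
```

Note this is O(1) on an `ArrayList` but O(n) on a `LinkedList`, which is exactly the Big-O concern the answer raises.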
Why does the Java List interface not support getLast()?
[ "", "java", "collections", "" ]
How can I convert a BYTE buffer (from 0 to 255) to a float buffer (from 0.0 to 1.0)? Of course there should be a relation between the two values, e.g.: 0 in the byte buffer will be 0.f in the float buffer, 128 in the byte buffer will be .5f in the float buffer, 255 in the byte buffer will be 1.f in the float buffer. Actually this is the code that I have: ``` for (int y=0;y<height;y++) { for (int x=0;x<width;x++) { float* floatpixel = floatbuffer + (y * width + x) * 4; BYTE* bytepixel = (bytebuffer + (y * width + x) * 4); floatpixel[0] = bytepixel[0]/255.f; floatpixel[1] = bytepixel[1]/255.f; floatpixel[2] = bytepixel[2]/255.f; floatpixel[3] = 1.0f; // A } } ``` This runs very slowly. A friend of mine suggested using a conversion table, but I wanted to know if someone else can give me another approach.
Whether you choose to use a lookup table or not, your code is doing a lot of work each loop iteration that it really does not need to - likely enough to overshadow the cost of the convert and multiply. Declare your pointers restrict, and pointers you only read from const. Multiply by 1/255th instead of dividing by 255. Don't calculate the pointers in each iteration of the inner loop, just calculate initial values and increment them. Unroll the inner loop a few times. Use vector SIMD operations if your target supports it. Don't increment and compare with maximum, decrement and compare with zero instead. Something like ``` float* restrict floatpixel = floatbuffer; BYTE const* restrict bytepixel = bytebuffer; for( int size = width*height; size > 0; --size ) { floatpixel[0] = bytepixel[0]*(1.f/255.f); floatpixel[1] = bytepixel[1]*(1.f/255.f); floatpixel[2] = bytepixel[2]*(1.f/255.f); floatpixel[3] = 1.0f; // A floatpixel += 4; bytepixel += 4; } ``` would be a start.
I know this is an old question, but since no one gave a solution using the IEEE float representation, here is one. ``` // Use three unions instead of one to avoid pipeline stalls union { float f; uint32_t i; } t, u, v, w; t.f = 32768.0f; float const b = 256.f / 255.f; for(int size = width * height; size > 0; --size) { u.i = t.i | bytepixel[0]; floatpixel[0] = (u.f - t.f) * b; v.i = t.i | bytepixel[1]; floatpixel[1] = (v.f - t.f) * b; w.i = t.i | bytepixel[2]; floatpixel[2] = (w.f - t.f) * b; floatpixel[3] = 1.0f; // A floatpixel += 4; bytepixel += 4; } ``` This is more than **twice as fast** as an `int` to `float` conversion on my computer (Core 2 Duo CPU). Here is an SSE3 version of the above code that does 16 floats at a time. It requires `bytepixel` and `floatpixel` to be 128-bit aligned, and the total size to be a multiple of 4. Note that the SSE3 built-in int to float conversions will not help much here, as they will require an additional multiplication anyway. I believe this is the shortest way to go instruction-wise, but if your compiler isn't clever enough you may wish to unroll and schedule things by hand. ``` /* Magic values */ __m128i zero = _mm_set_epi32(0, 0, 0, 0); __m128i magic1 = _mm_set_epi32(0xff000000, 0xff000000, 0xff000000, 0xff000000); __m128i magic2 = _mm_set_epi32(0x47004700, 0x47004700, 0x47004700, 0x47004700); __m128 magic3 = _mm_set_ps(32768.0f, 32768.0f, 32768.0f, 32768.0f); __m128 magic4 = _mm_set_ps(256.0f / 255.0f, 256.0f / 255.0f, 256.0f / 255.0f, 256.0f / 255.0f); for(int size = width * height / 4; size > 0; --size) { /* Load bytes in vector and force alpha value to 255 so that * the output will be 1.0f as expected. */ __m128i in = _mm_load_si128((__m128i *)bytepixel); in = _mm_or_si128(in, magic1); /* Shuffle bytes into four ints ORed with 32768.0f and cast * to float (the cast is free). 
*/ __m128i tmplo = _mm_unpacklo_epi8(in, zero); __m128i tmphi = _mm_unpackhi_epi8(in, zero); __m128 in1 = _mm_castsi128_ps(_mm_unpacklo_epi16(tmplo, magic2)); __m128 in2 = _mm_castsi128_ps(_mm_unpackhi_epi16(tmplo, magic2)); __m128 in3 = _mm_castsi128_ps(_mm_unpacklo_epi16(tmphi, magic2)); __m128 in4 = _mm_castsi128_ps(_mm_unpackhi_epi16(tmphi, magic2)); /* Subtract 32768.0f and multiply by 256.0f/255.0f */ __m128 out1 = _mm_mul_ps(_mm_sub_ps(in1, magic3), magic4); __m128 out2 = _mm_mul_ps(_mm_sub_ps(in2, magic3), magic4); __m128 out3 = _mm_mul_ps(_mm_sub_ps(in3, magic3), magic4); __m128 out4 = _mm_mul_ps(_mm_sub_ps(in4, magic3), magic4); /* Store 16 floats */ _mm_store_ps(floatpixel, out1); _mm_store_ps(floatpixel + 4, out2); _mm_store_ps(floatpixel + 8, out3); _mm_store_ps(floatpixel + 12, out4); floatpixel += 16; bytepixel += 16; } ``` **Edit**: improve accuracy by using `(f + c/b) * b` instead of `f * b + c`. **Edit**: add SSE3 version.
Convert BYTE buffer (0-255) to float buffer (0.0-1.0)
[ "", "c++", "floating-point", "arrays", "byte", "" ]
I need to create something similar to the following within my CodeIgniter project: * `my_config.php` * `config_production.php` * `config_development.php` Now, `my_config.php` will be autoloaded. From there, if it is a production server, `config_production.php` will be loaded; else `config_development.php` will be loaded. How should I go about executing this? I've tried doing the following in `my_config.php`: ``` <?php if(gethostbyaddr ("127.0.0.1") == 'hello.sabya'){ $this->config->load('config_production'); } else { $this->config->load('config_development'); } ?> ``` It is not working as `$this->config` is not initialized. How can I achieve this?
Two options: You can try referencing the object with `$CI` instead of `$this`: ``` $CI =& get_instance(); //do this only once in this file $CI->config->load(); $CI->whatever; ``` ...which is the correct way to reference the CI object from the outside. Or secondly, you could switch configs from within your config.php file: ``` <?php if(gethostbyaddr ("127.0.0.1") == 'hello.sabya'){ $config['base_url'] = "http://dev.example.com/"; } else { $config['base_url'] = "http://prod.example.com/"; } ?> ``` ...etc. Load all the differences between the two `if`/`else` blocks.
As Eric mentioned use [environments](http://codeigniter.com/user_guide/libraries/config.html#environments). > You may load different configuration files depending on the current > environment. > > To create an environment-specific configuration file, create or copy > a configuration file in > application/config/{ENVIRONMENT}/{FILENAME}.php > > Note: CodeIgniter always tries to load the configuration files for > the current environment first. If the file does not exist, the global > config file (i.e., the one in application/config/) is loaded. This > means you are not obligated to place all of your configuration files > in an environment folder − only the files that change per > environment. Why implement logic, when it's already there? ;)
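For reference, the environment mechanism boils down to a constant set in `index.php` plus per-environment config folders; a sketch (the hostname check mirrors the question's own and is only an assumption about how you detect production):

```php
<?php
// index.php - pick the environment once, before CodeIgniter boots
define('ENVIRONMENT', gethostbyaddr('127.0.0.1') === 'hello.sabya'
    ? 'production'
    : 'development');

// Then place overrides in:
//   application/config/production/config.php
//   application/config/development/config.php
// CodeIgniter looks in application/config/{ENVIRONMENT}/ first and
// falls back to application/config/ for anything not overridden.
```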
How to Conditionally Load Configuration Files Within CodeIgniter?
[ "", "php", "codeigniter", "" ]
I've been warned by several people not to store temporary files in the Temporary Internet Files directory. Where would be the best place to store any temporary or log files that my application generates?
Your best bet is to use [isolated storage](http://msdn.microsoft.com/en-us/library/3ak841sy(VS.80).aspx) for temp files, but you can also use [GetTempFileName](http://msdn.microsoft.com/en-us/library/system.io.path.gettempfilename(VS.80).aspx).
Is there a reason you can't use the default temporary directory? Essentially use the name returned from `Path.GetTempFileName`?
Where should I generate temporary files for my application?
[ "", "c#", ".net", "windows", "temporary-files", "" ]
Is there a standard way of maintaining a weak pointer to a parent (which is created using a shared pointer) in a child object in C++? Essentially, I need to implement something along the lines of the following: ``` class A; class B { .... public: void SetParentPtr(const boost::shared_ptr<A>& a) { m_parentPtr = a; } private: boost::weak_ptr<A> m_parentPtr; }; class A { ... private: B m_b; }; ``` In the above, all instances of class B need to hold a weak pointer to their parent (i.e. an object of class A). Class A objects are instantiated using a `shared_ptr`. I can think of a solution that uses a null deleter. But is that a standard way of doing something like this?
There is an implicit conversion to `weak_ptr`, so you can use ``` void SetParentPtr(boost::weak_ptr<A> a) { } ``` directly. Check also `boost::shared_from_this`, so the parent can give a pointer to itself without storing a `weak_ptr` explicitly. Otherwise, this seems like a normal way to have a back-pointer. Just check whether there is real added value in using back-pointers.
What you are doing above is explicitly supported by `weak_ptr` and `shared_ptr`, what happens when you try it? To be more precise, do what you are doing, without the null deleter, and then you use the standard behaviour on the `weak_ptr` to convert it to a `shared_ptr` as needed: ``` boost::shared_ptr<X> it=myWeakPtr.lock(); if (it) { // if the conversion succeeded, then the parent instance is still alive } ```
How to maintain a weak pointer to a parent in C++?
[ "", "c++", "smart-pointers", "weak-references", "" ]
I need to determine which locks are the most contended for in my application code. What free tools can I use to determine this?
You can use jconsole or jstack, both found in the bin directory of your JDK. jconsole in particular allows you to connect to your process and monitor the threads, including which resources they have locked, and it can help you identify a deadlock state.
You can also view this in eclipse's debugger. In the Debug view, use the little down-triangle menu on the view toolbar to turn on "Java->Show Monitors". When you suspect a deadlock, pause the application (select the application in the debug view and press the pause button on the debug view toolbar) and if there's a deadlock, the blocking threads will turn red. If you expand those threads you can see the lock contention.
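Beyond the tools above, the JDK also exposes lock statistics programmatically through `java.lang.management.ThreadMXBean`; a minimal sketch (per-thread blocked counts are only a rough proxy for contention, but they point at the hot threads):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class Main {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // Optional: enables per-thread blocked/waited times on supporting JVMs.
        if (bean.isThreadContentionMonitoringSupported()) {
            bean.setThreadContentionMonitoringEnabled(true);
        }
        for (ThreadInfo info : bean.dumpAllThreads(false, false)) {
            // blockedCount: how often the thread blocked entering a monitor
            System.out.println(info.getThreadName()
                    + " blocked " + info.getBlockedCount() + " times");
        }
    }
}
```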
What free tools are available to analyze lock contention in java?
[ "", "java", "multithreading", "locking", "" ]
I have two (2GB each) files on my hard disk and want to compare them with each other: * Copying the original files with Windows Explorer takes approx. 2-4 minutes (that is, reading and writing - on the same physical and logical disk). * Reading with `java.io.FileInputStream` twice and comparing the byte arrays on a byte-per-byte basis takes 20+ minutes. * The `java.io.BufferedInputStream` buffer is 64kb; the files are read in chunks and then compared. * Comparison is done in a tight loop like ``` int numRead = Math.min(numRead[0], numRead[1]); for (int k = 0; k < numRead; k++) { if (buffer[1][k] != buffer[0][k]) { return buffer[0][k] - buffer[1][k]; } } ``` What can I do to speed this up? Is NIO supposed to be faster than plain streams? Is Java unable to use DMA/SATA technologies and does it make slow OS-API calls instead? **EDIT:** Thanks for the answers. I did some experiments based on them. As Andreas showed > streams or `nio` approaches do not differ much. > More important is the correct buffer size. This is confirmed by my own experiments. As the files are read in big chunks, even additional buffers (`BufferedInputStream`) do not give anything. Optimising the comparison is possible and I got the best results with 32-fold unrolling, but the time spent in comparison is small compared to disk read, so the speedup is small. Looks like there is nothing I can do ;-(
I tried out three different methods of comparing two identical 3.8 GB files with buffer sizes between 8 kB and 1 MB. The first method uses just two buffered input streams. The second approach uses a thread pool that reads in two different threads and compares in a third one. This got slightly higher throughput at the expense of high CPU utilisation; managing the thread pool takes a lot of overhead with those short-running tasks. The third approach uses NIO, as posted by laginimaineb. As you can see, the general approach does not differ much; more important is the correct buffer size. What is strange is that I read 1 byte less using threads. I could not spot the error though. ``` comparing just with two streams I was equal, even after 3684070360 bytes and reading for 704813 ms (4,98MB/sec * 2) with a buffer size of 8 kB I was equal, even after 3684070360 bytes and reading for 578563 ms (6,07MB/sec * 2) with a buffer size of 16 kB I was equal, even after 3684070360 bytes and reading for 515422 ms (6,82MB/sec * 2) with a buffer size of 32 kB I was equal, even after 3684070360 bytes and reading for 534532 ms (6,57MB/sec * 2) with a buffer size of 64 kB I was equal, even after 3684070360 bytes and reading for 422953 ms (8,31MB/sec * 2) with a buffer size of 128 kB I was equal, even after 3684070360 bytes and reading for 793359 ms (4,43MB/sec * 2) with a buffer size of 256 kB I was equal, even after 3684070360 bytes and reading for 746344 ms (4,71MB/sec * 2) with a buffer size of 512 kB I was equal, even after 3684070360 bytes and reading for 669969 ms (5,24MB/sec * 2) with a buffer size of 1024 kB comparing with threads I was equal, even after 3684070359 bytes and reading for 602391 ms (5,83MB/sec * 2) with a buffer size of 8 kB I was equal, even after 3684070359 bytes and reading for 523156 ms (6,72MB/sec * 2) with a buffer size of 16 kB I was equal, even after 3684070359 bytes and reading for 527547 ms (6,66MB/sec * 2) with a buffer size of 32 kB I was equal, 
even after 3684070359 bytes and reading for 276750 ms (12,69MB/sec * 2) with a buffer size of 64 kB I was equal, even after 3684070359 bytes and reading for 493172 ms (7,12MB/sec * 2) with a buffer size of 128 kB I was equal, even after 3684070359 bytes and reading for 696781 ms (5,04MB/sec * 2) with a buffer size of 256 kB I was equal, even after 3684070359 bytes and reading for 727953 ms (4,83MB/sec * 2) with a buffer size of 512 kB I was equal, even after 3684070359 bytes and reading for 741000 ms (4,74MB/sec * 2) with a buffer size of 1024 kB comparing with nio I was equal, even after 3684070360 bytes and reading for 661313 ms (5,31MB/sec * 2) with a buffer size of 8 kB I was equal, even after 3684070360 bytes and reading for 656156 ms (5,35MB/sec * 2) with a buffer size of 16 kB I was equal, even after 3684070360 bytes and reading for 491781 ms (7,14MB/sec * 2) with a buffer size of 32 kB I was equal, even after 3684070360 bytes and reading for 317360 ms (11,07MB/sec * 2) with a buffer size of 64 kB I was equal, even after 3684070360 bytes and reading for 643078 ms (5,46MB/sec * 2) with a buffer size of 128 kB I was equal, even after 3684070360 bytes and reading for 865016 ms (4,06MB/sec * 2) with a buffer size of 256 kB I was equal, even after 3684070360 bytes and reading for 716796 ms (4,90MB/sec * 2) with a buffer size of 512 kB I was equal, even after 3684070360 bytes and reading for 652016 ms (5,39MB/sec * 2) with a buffer size of 1024 kB ``` the code used: ``` import junit.framework.Assert; import org.junit.Before; import org.junit.Test; import java.io.BufferedInputStream; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; import java.text.DecimalFormat; import java.text.NumberFormat; import java.util.Arrays; import java.util.concurrent.*; public class FileCompare { private static final int MIN_BUFFER_SIZE = 1024 * 8; private static final int MAX_BUFFER_SIZE = 
1024 * 1024; private String fileName1; private String fileName2; private long start; private long totalbytes; @Before public void createInputStream() { fileName1 = "bigFile.1"; fileName2 = "bigFile.2"; } @Test public void compareTwoFiles() throws IOException { System.out.println("comparing just with two streams"); int currentBufferSize = MIN_BUFFER_SIZE; while (currentBufferSize <= MAX_BUFFER_SIZE) { compareWithBufferSize(currentBufferSize); currentBufferSize *= 2; } } @Test public void compareTwoFilesFutures() throws IOException, ExecutionException, InterruptedException { System.out.println("comparing with threads"); int myBufferSize = MIN_BUFFER_SIZE; while (myBufferSize <= MAX_BUFFER_SIZE) { start = System.currentTimeMillis(); totalbytes = 0; compareWithBufferSizeFutures(myBufferSize); myBufferSize *= 2; } } @Test public void compareTwoFilesNio() throws IOException { System.out.println("comparing with nio"); int myBufferSize = MIN_BUFFER_SIZE; while (myBufferSize <= MAX_BUFFER_SIZE) { start = System.currentTimeMillis(); totalbytes = 0; boolean wasEqual = isEqualsNio(myBufferSize); if (wasEqual) { printAfterEquals(myBufferSize); } else { Assert.fail("files were not equal"); } myBufferSize *= 2; } } private void compareWithBufferSize(int myBufferSize) throws IOException { final BufferedInputStream inputStream1 = new BufferedInputStream( new FileInputStream(new File(fileName1)), myBufferSize); byte[] buff1 = new byte[myBufferSize]; final BufferedInputStream inputStream2 = new BufferedInputStream( new FileInputStream(new File(fileName2)), myBufferSize); byte[] buff2 = new byte[myBufferSize]; int read1; start = System.currentTimeMillis(); totalbytes = 0; while ((read1 = inputStream1.read(buff1)) != -1) { totalbytes += read1; int read2 = inputStream2.read(buff2); if (read1 != read2) { break; } if (!Arrays.equals(buff1, buff2)) { break; } } if (read1 == -1) { printAfterEquals(myBufferSize); } else { Assert.fail("files were not equal"); } inputStream1.close(); 
inputStream2.close(); } private void compareWithBufferSizeFutures(int myBufferSize) throws ExecutionException, InterruptedException, IOException { final BufferedInputStream inputStream1 = new BufferedInputStream( new FileInputStream( new File(fileName1)), myBufferSize); final BufferedInputStream inputStream2 = new BufferedInputStream( new FileInputStream( new File(fileName2)), myBufferSize); final boolean wasEqual = isEqualsParallel(myBufferSize, inputStream1, inputStream2); if (wasEqual) { printAfterEquals(myBufferSize); } else { Assert.fail("files were not equal"); } inputStream1.close(); inputStream2.close(); } private boolean isEqualsParallel(int myBufferSize , final BufferedInputStream inputStream1 , final BufferedInputStream inputStream2) throws InterruptedException, ExecutionException { final byte[] buff1Even = new byte[myBufferSize]; final byte[] buff1Odd = new byte[myBufferSize]; final byte[] buff2Even = new byte[myBufferSize]; final byte[] buff2Odd = new byte[myBufferSize]; final Callable<Integer> read1Even = new Callable<Integer>() { public Integer call() throws Exception { return inputStream1.read(buff1Even); } }; final Callable<Integer> read2Even = new Callable<Integer>() { public Integer call() throws Exception { return inputStream2.read(buff2Even); } }; final Callable<Integer> read1Odd = new Callable<Integer>() { public Integer call() throws Exception { return inputStream1.read(buff1Odd); } }; final Callable<Integer> read2Odd = new Callable<Integer>() { public Integer call() throws Exception { return inputStream2.read(buff2Odd); } }; final Callable<Boolean> oddEqualsArray = new Callable<Boolean>() { public Boolean call() throws Exception { return Arrays.equals(buff1Odd, buff2Odd); } }; final Callable<Boolean> evenEqualsArray = new Callable<Boolean>() { public Boolean call() throws Exception { return Arrays.equals(buff1Even, buff2Even); } }; ExecutorService executor = Executors.newCachedThreadPool(); boolean isEven = true; Future<Integer> read1 = 
null; Future<Integer> read2 = null; Future<Boolean> isEqual = null; int lastSize = 0; while (true) { if (isEqual != null) { if (!isEqual.get()) { return false; } else if (lastSize == -1) { return true; } } if (read1 != null) { lastSize = read1.get(); totalbytes += lastSize; final int size2 = read2.get(); if (lastSize != size2) { return false; } } isEven = !isEven; if (isEven) { if (read1 != null) { isEqual = executor.submit(oddEqualsArray); } read1 = executor.submit(read1Even); read2 = executor.submit(read2Even); } else { if (read1 != null) { isEqual = executor.submit(evenEqualsArray); } read1 = executor.submit(read1Odd); read2 = executor.submit(read2Odd); } } } private boolean isEqualsNio(int myBufferSize) throws IOException { FileChannel first = null, seconde = null; try { first = new FileInputStream(fileName1).getChannel(); seconde = new FileInputStream(fileName2).getChannel(); if (first.size() != seconde.size()) { return false; } ByteBuffer firstBuffer = ByteBuffer.allocateDirect(myBufferSize); ByteBuffer secondBuffer = ByteBuffer.allocateDirect(myBufferSize); int firstRead, secondRead; while (first.position() < first.size()) { firstRead = first.read(firstBuffer); totalbytes += firstRead; secondRead = seconde.read(secondBuffer); if (firstRead != secondRead) { return false; } if (!nioBuffersEqual(firstBuffer, secondBuffer, firstRead)) { return false; } } return true; } finally { if (first != null) { first.close(); } if (seconde != null) { seconde.close(); } } } private static boolean nioBuffersEqual(ByteBuffer first, ByteBuffer second, final int length) { if (first.limit() != second.limit() || length > first.limit()) { return false; } first.rewind(); second.rewind(); for (int i = 0; i < length; i++) { if (first.get() != second.get()) { return false; } } return true; } private void printAfterEquals(int myBufferSize) { NumberFormat nf = new DecimalFormat("#.00"); final long dur = System.currentTimeMillis() - start; double seconds = dur / 1000d; double megabytes = 
totalbytes / 1024 / 1024; double rate = (megabytes) / seconds; System.out.println("I was equal, even after " + totalbytes + " bytes and reading for " + dur + " ms (" + nf.format(rate) + "MB/sec * 2)" + " with a buffer size of " + myBufferSize / 1024 + " kB"); } } ```
With such large files, **you are going to get MUCH better performance with [java.nio.](https://docs.oracle.com/javase/7/docs/api/java/nio/package-summary.html)** Additionally, reading single bytes with java streams can be very slow. Using a byte array (2-6K elements from my own experiences, ymmv as it seems platform/application specific) will dramatically improve your read performance with streams.
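A rough sketch of the java.nio approach suggested above; it compares two small temp files here just to keep it self-contained (for 2 GB files you would keep the same channel/buffer structure and tune the buffer size; the equal-chunk-size assumption is noted in the comments):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class Main {
    // Compare two files chunk by chunk through NIO channels.
    static boolean sameContent(File a, File b, int bufferSize) throws IOException {
        try (FileChannel ca = new FileInputStream(a).getChannel();
             FileChannel cb = new FileInputStream(b).getChannel()) {
            if (ca.size() != cb.size()) return false;
            ByteBuffer ba = ByteBuffer.allocateDirect(bufferSize);
            ByteBuffer bb = ByteBuffer.allocateDirect(bufferSize);
            while (ca.read(ba) != -1) {
                cb.read(bb);
                ba.flip(); bb.flip();
                // equals() compares the remaining bytes; for same-length local
                // files we assume both reads return equally sized chunks.
                if (!ba.equals(bb)) return false;
                ba.clear(); bb.clear();
            }
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        File f1 = File.createTempFile("cmp", ".bin");
        File f2 = File.createTempFile("cmp", ".bin");
        f1.deleteOnExit(); f2.deleteOnExit();
        byte[] data = new byte[300_000];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        try (FileOutputStream o1 = new FileOutputStream(f1);
             FileOutputStream o2 = new FileOutputStream(f2)) {
            o1.write(data); o2.write(data);
        }
        System.out.println(sameContent(f1, f2, 64 * 1024));
    }
}
```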
Java Large Files Disk IO Performance
[ "", "java", "performance", "comparison", "stream", "nio", "" ]
I need to dynamically set values on a bunch of properties on an object; call it a transmission object. There will be a fair number of these transmission objects that will be created and have their properties set in a short space of time. I want to avoid the use of reflection. Are there alternatives? If so, are there sample implementations I could look at?
Use [`Delegate.CreateDelegate`](http://msdn.microsoft.com/en-us/library/system.delegate.createdelegate.aspx) to turn a `MethodInfo` into a strongly-typed delegate. This can improve performance *massively*. I have a [blog post about this](https://codeblog.jonskeet.uk/2008/08/09/making-reflection-fly-and-exploring-delegates/) with sample code. Note that this is only going to help if you need to set the same properties multiple times - basically it means that a lot of the type checking is done *once* when you create the delegate, rather than on every invocation. Marc Gravell has a [HyperPropertyDescriptor](http://www.codeproject.com/KB/cs/HyperPropertyDescriptor.aspx) project which achieves even better performance, but introduces an extra dependency. This project became the jumping off point for the more modern [Fast Member](http://blog.marcgravell.com/2012/01/playing-with-your-member.html) ([github](https://github.com/mgravell/fast-member)). In general you would use Fast Member over HyperProperty.
In .NET 4.0 (beta), you can do this with the updated expression trees, using `Expression.Block` and `Expression.Assign` - then compile that to a typed delegate; job done. In .NET 2.0 and above (as Jon mentioned) HyperDescriptor is a reasonable option - it works as a custom `PropertyDescriptor` implementation, so you just do code like: ``` // store this collection for optimum performance PropertyDescriptorCollection props = TypeDescriptor.GetProperties( typeof(SomeType)); props["Name"].SetValue(obj, newName); props["DateOfBirth"].SetValue(obj, newDoB); ``` This still has a little boxing, but that isn't actually a bottleneck.
Improving performance reflection - what alternatives should I consider?
[ "", "c#", ".net", "performance", "reflection", "" ]
I'm trying to build my first facebook app, and it seems that the python facebook ([pyfacebook](http://code.google.com/p/pyfacebook/)) wrapper is really out of date, and the most relevant functions, like stream functions, are not implemented. Are there any mature python frontends for facebook? If not, what's the best language for facebook development?
The updated location of pyfacebook is [on github](http://github.com/sciyoshi/pyfacebook/tree/master). Plus, as [arstechnica](http://arstechnica.com/open-source/news/2009/04/how-to-using-the-new-facebook-stream-api-in-a-desktop-app.ars) well explains: > PyFacebook is also very easy to extend > when new Facebook API methods are > introduced. Each Facebook API method > is described in the PyFacebook library > using a simple data structure that > specifies the method's name and > parameter types. so, even should you be using a pyfacebook version that doesn't yet implement some brand-new thing you need, it's easy to add said thing, as Ryan Paul shows [here](http://bazaar.launchpad.net/~segphault/gwibber/template-facebook-stream/revision/289#gwibber/microblog/support/facelib.py) regarding some of the stream functions (back in April right after they were launched).
Facebook's own Python-SDK covers the newer Graph API now: <http://github.com/facebook/python-sdk/>
Is python a stable platform for facebook development?
[ "", "python", "api", "facebook", "" ]
How does STL algorithm work independent of Iterator type?
Really, they just work. They use some pretty basic properties of templates, sometimes called static polymorphism. If you're familiar with the term, it is essentially a form of duck typing. (If it looks like a duck, and it quacks like a duck, it must be a duck.) The trick is simple. Here's a very simple example: ``` template <typename T> void say_hello(const T& t) { t.hello(); } ``` The `say_hello` function doesn't care which type its argument is. It doesn't have to derive from an interface or make any other kind of "promises" about what it is. All that matters is that the type *works* in this context. All we do with the type is call its `hello` function. Which means that this code will compile for any type that has a `hello` member function. The STL algorithms work similarly. Here's a simple implementation of `std::for_each`: ``` template <typename iter_type, typename func_type> void for_each(iter_type first, iter_type last, func_type f){ for (iter_type cur = first; cur != last; ++cur) { f(*cur); } } ``` This code will compile whenever the template types live up to the requirements placed on them; `iter_type` must have the pre-increment ++-operator. It must have a copy constructor, and it must have the != operator, and it must have the \*-dereference-operator. `func_type` must implement the function-call operator, taking an argument of the same type as you get by dereferencing an object of type `iter_type`. If I call `for_each` with types that satisfy these requirements, the code will compile. `iter_type` can be any type that satisfies these requirements. There is nothing in the code that says "this shall work with vector iterators and list iterators and map iterators". But as long as vector, list or map iterators implement the operators we use, it'll work.
STL algorithms are template functions, which means they can be called with any type. When calling the function with a specific type, the compiler will try to compile an instance of the function for this specific type and report any compilation errors (missing methods, type check errors, etc.). For STL algorithms, as long as the type used behaves like an iterator (supports ++, dereferencing), it will work. That's why those algorithms work with native pointers too, because they support the same type of operations as iterators (that is how they were designed in the first place).
How does STL algorithm work independent of Iterator type?
[ "", "c++", "algorithm", "stl", "iterator", "" ]
I'm no PHP expert (a mere beginner) but need some help! After hours searching Google and trying out about 100 different scripts, I finally found one that does what I need - almost. Basically, my site has a button marked 'Export to Excel'. A visitor to the site clicks the button and a download begins containing all data from a specified table. I found this on here - [PHP code to convert a MySQL query to CSV](https://stackoverflow.com/questions/125113/php-code-to-convert-a-mysql-query-to-csv/125125#125125) which does exactly what I want, except the user sees the following error when trying to open the file: > Error - 'The file you are trying to open, 'export.xls', is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?' The user clicks 'Yes' and the file opens with all data! Brilliant! Except users will not open the file with this error. I would be very grateful if someone knows a way to fix this. Many thanks TT
Or, you could just change the script in the above solution to return a file with the .csv extension. .csv files are associated with Excel, so they should open directly.
Ok, this results from a feature introduced in Excel 2007 called Extension Hardening. You can turn it off, but that can only be done client-side. If you click "OK" or "Yes" the file should open anyway. Check [this blog post](http://blogs.msdn.com/vsofficedeveloper/pages/Excel-2007-Extension-Warning.aspx) for more info. EDIT: What this means is that Excel is finding that the file is of a different type (say HTML or CSV) than what is specified by the file extension. Therefore Excel wants to warn you that this file is not what it says it is. Unless you are going to create native Excel files on the server and then prompt the user to download them, there is no getting around this error except for each user to turn off Extension Hardening on their own computer.
Export mySQL to excel or csv
[ "php", "mysql", "excel" ]
I just fixed a memory leak caused by someone forgetting to call the superclass's OnUnload in their override of it. The superclass version frees some resources (as does its superclass). Are there external static analysis tools, or at least some kind of runtime trick I can do to detect this? With the ability to make an exception obviously (though these cases are exceedingly rare). UPDATE: Based on the answers below, I need to add constraints that are specific to our setup, which is a game for Wii/360/PS3. Very specific engine for a very specific application. * We have a deep hierarchy of game object classes (a design I never agreed with, but it's the design we are shipping). Deep and wide, actually. I am going to redo this for the next game to use a Dungeon Siege-style component-based system but in our current codebase, deep hierarchies make the DispatchVirtual() -> onVirtual() pattern difficult to apply. * Destructors do not get called for our game objects because delete doesn't get called. Game objects go into an add-only (stack) allocator-based pool during a world load. At the end of a level I just set the stack pointer back to the low water mark to free everything at once. In advance, we iterate all objects and call OnUnload on them so they can free any external resources they use. You might call it "garbage collection: the nuclear option". So no destructors. * Even if we could use a destructor-based approach it would only solve the narrow problem of an OnUnload or OnFree, but not OnUpdate, OnWorldMessage, OnLoaded, etc. Runtime solutions are interesting but I hate relying on testing to catch this. Optimal would be either a compile-time template trick or an external static analysis tool I can use.
A runtime "trick" you could use is to assert in the destructor of the base class if the constraint you are looking for has failed. Assuming the instance is actually destroyed and not leaked, this will tell you at the time the object is destroyed if the contract was correctly followed.
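A minimal sketch of that idea, under two assumptions not in the question: class names here are illustrative, and objects are actually destroyed (the question's engine skips destructors, so this only applies where they run). `Base` records whether its `OnUnload` ran and asserts in the destructor if a derived override forgot to chain up:

```cpp
#include <cassert>

// Base records whether its OnUnload ran; the destructor asserts if a
// derived override never chained up. Assumes every object is unloaded
// before destruction (otherwise the assert fires spuriously).
class Base {
public:
    virtual ~Base() {
        assert(unloaded_ && "derived override skipped Base::OnUnload");
    }
    virtual void OnUnload() { unloaded_ = true; }

private:
    bool unloaded_ = false;
};

class Derived : public Base {
public:
    void OnUnload() override {
        // ... free derived resources ...
        Base::OnUnload();  // deleting this line trips the assert
    }
};
```

Note the check is only active in debug builds (it compiles away under NDEBUG), which matches the "runtime trick" framing: it relies on test coverage to catch the omission.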
Don't trust the derived classes to do it; use the [template method design pattern](http://en.wikipedia.org/wiki/Template_method_pattern "template method design pattern") to ensure your base class behavior will always happen: ``` class Base { public: void OnUnload() { // Do stuff that must always be done. this->doOnUnload(); } private: // Virtual method for derived classes to override. // Can be pure virtual if it will always be overridden. virtual void doOnUnload() { // Empty default implementation } }; ``` The only problem is that this only buys you one level of inheritance, and your problem says you need two. In which case, this pattern can be repeated. But in general, it's usually more stable to have base classes call down to derived classes for specific behavior than to require derived classes to call up to base classes.
Is there a way to scan for when people forget to call the base class version of a virtual?
[ "c++", "memory" ]
I have an FTP client class which returns an InputStream pointing to the file. I would like to read the file row by row with a BufferedReader. The issue is that the client returns the file in binary mode, and the file has ISO-8859-15 encoding.
If the file/stream/whatever really contains ISO-8859-15 encoded text, you just need to specify that when you create the InputStreamReader: ``` BufferedReader br = new BufferedReader( new InputStreamReader(ftp.getInputStream(), "ISO-8859-15")); ``` Then `readLine()` will create valid Strings in Java's native encoding (which is UTF-16, not UTF-8).
Try this: ``` BufferedReader br = new BufferedReader( new InputStreamReader( ftp.getInputStream(), Charset.forName("ISO-8859-15") ) ); String row = br.readLine(); ```
BufferedReader returns ISO-8859-15 String - how to convert to UTF16 String?
[ "java", "character-encoding" ]
I have an image. I want to crop 10 px from the left and 10 px from the right of the image. I used the code below to do so ``` string oldImagePath="D:\\RD\\dotnet\\Images\\photo1.jpg"; Bitmap myOriginalImage = (Bitmap)Bitmap.FromFile(oldImagePath); int newWidth = myOriginalImage.Width; int newHeight = myOriginalImage.Height; Rectangle cropArea = new Rectangle(10,0, newWidth-10, newHeight); Bitmap target = new Bitmap(cropArea.Width, cropArea.Height); using (Graphics g = Graphics.FromImage(target)) { g.DrawImage(myOriginalImage,cropArea); } target.Save("D:\\RD\\dotnet\\Images\\test.jpg"); ``` But this is not giving me the results I expect. It outputs an image which has 10 px cropped from the right, and the image is resized. Instead of cropping, I think it is resizing the width, so the image is shrunk (in width). Can anyone correct me? Thanks in advance
Your new width should be reduced by twice the crop margin, since you'll be chopping off that amount from both sides. Next, when drawing the image into the new one, draw it at a negative offset. This causes the area that you aren't interested in to be clipped off. ``` int cropX = 10; Bitmap target = new Bitmap(myOriginalImage.Width - 2*cropX, myOriginalImage.Height); using (Graphics g = Graphics.FromImage(target)) { g.DrawImage(myOriginalImage, -cropX, 0); } ```
Okay, I totally fail at explaining this, but hang on: The DrawImage function requires the location of the image, as well as its position. You need a second position for cropping, describing how the old image relates to the new one, not vice versa. That was entirely incomprehensible, but here is the code. ``` g.DrawImage(myOriginalImage, -cropArea.X, -cropArea.Y); ``` I hope that explains it more than I did.
C# : GDI+ Image cropping
[ "c#", "gdi+" ]
I'm designing a database for an organization that has multiple "types" of users. At first, I created only one user table. However, while all users share some common information (first name, last name, username, password, etc.), each user type requires one or two additional fields that are not applicable to all users. While I could create these additional fields and set them to NULL, I do not wish to do this, as the fields are foreign keys and it's been causing problems for me. How is this situation normally handled? Thanks!
Your instincts not to create a big table with lots of NULLs are right on. That's a bad idea, from a storage/retrieval/maintenance point of view, as well as a data validation point of view (more on that later). The two most common approaches: 1) Have a user table with all the common fields in it, including a "userType" field. Then have a separate table for each user type containing the extra fields. All users have a row in the users table and one or more of the specific user type tables. This is the most normalized and the most efficient for storage and quick logins. This also lets you use constraints and foreign keys to ensure that all required information for each user type is available. 2) Have a user table with all the common fields in it. Have another table called something like **UserAttributes** that has fields for userid, key, and value. Any extra metadata for a particular user can be stored in here. This has the advantage of not requiring any database administration to add new user types or metadata to be stored for each user type. However, it doesn't let you do any data validation at the DB level.
The relational model, as such, does not support "inheritance", which might help solve this problem (though a few DB engines, such as PostgreSQL, do support inheritance). So, I would first ask myself -- do the different types of users need to be able to appear in the same context, at least in some cases? If so, then you can't just copy and paste the "columns in common" to multiple tables (at least not without compromising the integrity checks that you could get in those cases via foreign keys onto a single table). Second question -- is it **ever** possible for a user to hold more than one role? In many cases it would be **unusual** but not utterly impossible, e.g. an employee might also be a supplier or a customer. If I could get no sharp answers to such questions directing me otherwise, I'd set up a users table with only the common fields; and separate tables for suppliers, employees, beta-testers, customers, and whatever other kinds and roles I might have for users, each with just its own specialized columns plus a foreign key onto the users table to pick up the rest. I realize that normalized schemas are out of fashion now, but they've served me faithfully for decades and I have a deep fondness for them -- I only denormalize when I need specific optimizations, and it happens more rarely than one might think!-). One slight denormalization that may well be useful here is an enumeration column in the users table indicating the "main" or "sole" role of each particular user (it could be nullable and maybe uniformly null at the start, if I was pushy enough to have it in from the beginning...;-)... but I'd likely wait to add it if and when the performance of some specific queries needed it as a specific optimization, rather than design the schema that way from the start (note that this is a key reason to never use `SELECT * FROM` in your queries -- if you `ALTER TABLE` later to add a column, that `SELECT *` is the one bit that would break!-).
MySQL Question - How to handle multiple types of users - one table or multiple?
[ "sql", "mysql", "database" ]
I want to convert a `DOMNode` object from a call to `getElementsByTagName()` to a `DOMElement` in order to access methods like `getElementsByTagName()` on the child element. In any other language, I would cast and it would be easy, but after some quick looking, PHP does not have object casting. So what I need to know is how to get a `DOMElement` object from a `DOMNode` object.
You don't need to cast anything, just call the method: ``` $links = $dom->getElementsByTagName('a'); foreach ($links as $link) { $spans = $link->getElementsByTagName('span'); } ``` And by the way, `DOMElement` is a subclass of `DOMNode`. If you were talking about a `DOMNodeList`, then accessing the elements in such a list can be done either by the method presented above, with a `foreach()` loop, or by using the `item()` method of `DOMNodeList`: ``` $link_0 = $dom->getElementsByTagName('a')->item(0); ```
This is what I use in my project to minimize IDE warnings. Note the nullable return type `?DOMElement` (PHP 7.1+); declaring a non-nullable `DOMElement` return type while returning null would throw a TypeError. ``` /** * Cast a DOMNode into a DOMElement */ function cast_e(DOMNode $node) : ?DOMElement { if ($node->nodeType === XML_ELEMENT_NODE) { return $node; } return null; } ```
DOMNode to DOMElement in php
[ "php", "xml", "dom" ]
I am using Python on Mac OSX Leopard. I am trying to run the program 'dot' (part of Graphviz) from Python: ``` # -*- coding: utf-8 -*- import os print os.environ['PATH'] print os.system("ls >> a.txt") print os.system("dot -o9.png -Tpng ./6.dot") ``` The command "ls" is there just to make sure that python is in the correct directory. It is. The result that I get is: > /usr/bin:/bin:/usr/sbin:/sbin > 0 > 32512 My understanding is that 32512 error means that python could not find the file, and since the file 6.dot is there (If I run "dot -o9.png -Tpng ./6.dot" from the terminal I receive no error, and 9.png gets produced), I assume Python can't find the dot file. I probably need to add the dot file to the path. But I don't know where is it. If I run: ``` whereis dot ``` I receive no answer. How can I find the dot executable? Alternatively, can I run the dot program as a command from inside Python?
You should change the PATH line so it includes the directory which contains `dot`. That directory is `/usr/local/bin`, without `/dot`.
`whereis` locates man pages, `which` locates binaries. So try `which dot`.
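Putting the two answers together, here is a small Python sketch (the helper name is my own, not stdlib API) that locates an executable with `shutil.which` - the programmatic equivalent of `which` - and returns None when, as in the question, PATH doesn't include the tool's directory. Note `shutil.which` requires Python 3.3+; on the question's Python 2 you could instead call the absolute path `/usr/local/bin/dot` directly.

```python
import shutil
import subprocess

def run_tool(tool_name, args):
    """Locate tool_name on PATH and run it, returning its exit code.

    Returns None if the executable cannot be found (e.g. 'dot' is
    installed in a directory that is not on PATH).
    """
    path = shutil.which(tool_name)
    if path is None:
        return None
    return subprocess.call([path] + list(args))

# Hypothetical usage for the question's Graphviz call (assumes dot is installed):
# run_tool("dot", ["-o9.png", "-Tpng", "./6.dot"])
```

This also avoids shelling out through `os.system`, so the exit code comes back directly instead of the raw wait status (the 32512 the asker saw is the shell's "command not found" status 127 shifted into the high byte).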
How do I run "dot" as a command from Python?
[ "python", "macos", "path", "graphviz", "dot" ]
I'm not so good with JS and for some reason when I try to add two fields together it joins them rather than adding the sum together.. this is the code I'm trying to use.. ``` function calculateTotal() { var postageVal = document.getElementById('postage').value; //$68.50 var subtotalVal = document.getElementById('subtotal').value; //$378.00 var postage = postageVal.substr(1); //68.50 var subtotal = subtotalVal.substr(1); //378.00 var totalVal = postage+subtotal; alert(postage); alert(subtotal); alert(totalVal); }; ``` The totalVal is echoing/alerting out 68.50378.00 rather than adding them together.. could someone please tell me where I've gone wrong? :( The idea is to update the "total" textfield with totalVal, but I haven't gotten that far yet!
You need to convert your values to a float before adding them: ``` var totalVal = parseFloat(postage) + parseFloat(subtotal); ``` EDIT: Here's a complete example that includes a check for NaN: ``` function calculateTotal() { var postageVal = document.getElementById('postage').value; //$68.50 var subtotalVal = document.getElementById('subtotal').value; //$378.00 var postage = parseFloat(postageVal.substr(1)); //68.50 var subtotal = parseFloat(subtotalVal.substr(1)); //378.00 var postageAsFloat = isNaN(postage) ? 0.0 : postage; var subtotalAsFloat = isNaN(subtotal) ? 0.0 : subtotal; var totalVal = postageAsFloat + subtotalAsFloat; alert(postage); alert(subtotal); alert(totalVal); }; ```
Try converting the numbers to floats: ``` function calculateTotal() { var postageVal = document.getElementById('postage').value; //$68.50 var subtotalVal = document.getElementById('subtotal').value; //$378.00 var postage = parseFloat(postageVal.substr(1)); //68.50 var subtotal = parseFloat(subtotalVal.substr(1)); //378.00 var totalVal = postage+subtotal; alert(postage); alert(subtotal); alert(totalVal); }; ```
Simple JavaScript addition issues
[ "javascript", "math" ]
I'm running the same php script on many domains on an apache2 server. Only some of the files change between domains, and the rest are always the same. Right now, every time I set up a new domain, I copy all the files, but I'd really like to have these common files in one place so any changes would affect all domains. I've thought of using a bunch of symlinks to point at common files. Is this an ok approach, or are there some simple edits I can make to the php scripts or apache configuration files to make this more efficient? Thanks!
I'd suggest abstracting the common code into a set of 'library' scripts. Placing these in a common directory, and making that available by modifying PHP's **include\_path** variable. This means you most likely won't have to modify your current scripts, while still removing the need to have more than one copy. This path could (and probably should) be outside of your public directories. This enhances the security of your websites by not making them directly available to outside users.
The way I do this kind of thing is to create a "common" directory, where I place all the files that can be shared between the sites. Then I simply include them wherever they are needed. This is pretty good because it allows you to add features across multiple sites.
What is the best way to set up shared php script files on a webserver?
[ "php", "apache" ]
In C#, ``` var x = new {}; ``` declares an anonymous type with no properties. Is this any different from ``` var x = new object(); ``` ?
Yes, the types used are different. You can tell this at compile-time: ``` var x = new {}; // Won't compile - no implicit conversion from object to the anonymous type x = new object(); ``` If you're asking whether `new{}` is ever *useful* - well, that's a different matter... I can't immediately think of any sensible uses for it.
Well, for starters, object is an actual, non-anonymous type...if you do x.GetType() on the 2nd example, you'll get back System.Object.
Is there any difference between `new object()` and `new {}` in c#?
[ "c#", "syntax", "anonymous-types" ]
I have a username and password for a particular user in Linux. I need to verify whether the user is valid or not, using Java. Abdul Khaliq
The Java way to do this would be [JAAS](http://java.sun.com/javase/6/docs/technotes/guides/security/jaas/JAASRefGuide.html), but you'll still need a LoginModule that works with Linux. [Here's](http://jaas-pam.sourceforge.net/index.html) a beta implementation that claims to work.
You could use Java Runtime-object to run the command line commands that suit your needs. [Runtime API](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Runtime.html)
Authenticate linux based user in java
[ "java", "linux", "authentication" ]
When I annotate a class with @Entity and try to resolve the dependencies, I get to choose between two different packages: javax.persistence.Entity and org.hibernate.annotations.Entity. The javax package is JPA's entity annotation, but why is there a Hibernate entity annotation, and what difference does it have from JPA's annotation? Is it just an extension to allow more attributes to be defined?
`org.hibernate.annotations.Entity` has some extra attributes that `javax.persistence.Entity` has not standardized. The extra features will only work if using hibernate's `AnnotationConfiguration` directly or if hibernate is the JPA provider. from [the FAQ](https://www.hibernate.org/329.html): **edit:** new link [the specific question](https://community.jboss.org/wiki/HibernateFAQ-HibernateAnnotationsFAQ#I_use_orghibernateannotationsEntity_and_get_an_Unknown_entity_exception): **edit:** new link [the answer](https://developer.jboss.org/wiki/HibernateFAQ-HibernateAnnotationsFAQ#jive_content_id_I_use_orghibernateannotationsEntity_and_get_an_Unknown_entity_exception): > **I use @org.hibernate.annotations.Entity and get an Unknown entity exception** > > Always import @javax.persistence.Entity > > @org.hibernate.annotations.Entity completes @javax.persistence.Entity but is not a replacement For instance, there is an attribute called `optimisticLock`, which tells hibernate whether to use the standard *version column* or to compare all columns when updating. This behavior is not in the JPA spec, so in order to configure it, you must use the hibernate-specific extension found in their own annotation. Like this: ``` @Entity @org.hibernate.annotations.Entity(optimisticLock=OptimisticLockType.ALL) public class MyEntity implements Serializable { ... } ```
If @org.hibernate.annotations is used in your project and you later want to switch to JDBC templates or iBATIS, you need to change the code. If you use javax.persistence, there is no need to change the code. This is the main difference between org.hibernate.annotations and javax.persistence.
Difference between JPA Entity and Hibernate Entity
[ "java", "hibernate", "jpa" ]
I am testing a jQuery web application. I have a jQuery popup whose submit button I simply cannot get to fire in Watir. The same pattern is used throughout the application. I have verified the button exists and have tried click, fireEvent, you name it, and am out of methods to call. Has anyone solved this?
In my case the solution was to use an index value of 1. jQuery creates a copy of the form, and all the popup items have an index of 1. The 0-index controls are on the original form.
Which do you want? A: control the cursor and click "submit", or B: simulate clicking "submit". For A, you need to use AutoIt to control the cursor and click, but that works only on Windows. For B, execute the JavaScript that runs when "submit" is clicked. For case B, there are two ways I always use. 1. Execute the code in the URL bar, e.g. @ie.link(:URL, 'javascript:<-CODE->;').click, or 2. make a module like the one below, include it in the test case, and call @ie.execute\_script(<-CODE->). ``` module Watir class IE def execute_script(scriptCode) WIN32OLE.codepage = WIN32OLE::CP_UTF8 window.execScript(scriptCode) end def window WIN32OLE.codepage = WIN32OLE::CP_UTF8 ie.Document.parentWindow end end end ``` I hope it helps. Sorry for my poor English. tknv/
Watir does not trigger Jquery event handler
[ "javascript", "jquery", "ajax", "watir" ]
I want to use the same ant script to do a build in either my local windows environment or on our redhat build server. I have a variable 'buildDirectory' in two files (build\_unix.properties & build\_windows). I want to set variables depending on the environment. ``` <osfamily property="os.family"/> <property file="./build_${os.family}.properties" /> <property name="tmp-base.folder" value="${buildDirectory}/tmp/"/> ``` I also tried ``` <if> <os family="unix"/> <then> <property file="./build_unix.properties" /> </then> <else> <property file="./build_windows.properties" /> </else> </if> ``` Any ideas?
I would expect your if...then...else version to work. As it apparently isn't I would add some extra echo's to make sure your build is doing what you think it is doing. 1. An echo inside the then and the else would let you know for certain what path is being executed. 2. Add a prefix to the properties (e.g. <property file="..." prefix="test" />) and then add an <echoproperties prefix="test" /> to ensure the properties you think are being loaded are.
Are you asking how you can automatically set the os.family property in the first place? If so, here's one approach: ``` <available file="C:\\" property="os.family" value="windows" /> <available file="/usr/bin/" property="os.family" value="unix" /> ``` Ah! Edited question makes it more clear what you are asking (not this), and I see that you're using the "osfamily" task from ant-contrib to determine the OS family. But I'll leave this answer up for anyone who is just using ant without ant-contrib.
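If you'd rather stay with core Ant (no ant-contrib `<osfamily>` or `<if>`), the built-in `<condition>` task can set os.family directly. A sketch, assuming property files named build\_windows.properties and build\_unix.properties as in the question (`<condition>`'s `else` attribute requires Ant 1.6.3+):

```xml
<condition property="os.family" value="windows" else="unix">
    <os family="windows"/>
</condition>
<property file="./build_${os.family}.properties"/>
<echo message="Loaded build_${os.family}.properties"/>
```

Because Ant properties are immutable once set, the conditional property load keeps the rest of the script identical on both platforms.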
Ant: Referencing dynamic properties
[ "java", "ant", "build-process" ]
What is the best de-facto standard cross-browser method to determine if a variable in JavaScript is an array or not? Searching the web there are a number of different suggestions, some good and quite a few invalid. For example, the following is a basic approach: ``` function isArray(obj) { return (obj && obj.length); } ``` However, note what happens if the array is empty, or obj actually is not an array but implements a length property, etc. So which implementation is the best in terms of actually working, being cross-browser and still perform efficiently?
Type checking of objects in JS is done via `instanceof`, ie ``` obj instanceof Array ``` This won't work if the object is passed across frame boundaries as each frame has its own `Array` object. You can work around this by checking the internal *[[Class]]* property of the object. To get it, use `Object.prototype.toString()` (this is guaranteed to work by ECMA-262): ``` Object.prototype.toString.call(obj) === '[object Array]' ``` Both methods will only work for actual arrays and not array-like objects like the `arguments` object or node lists. As all array-like objects must have a numeric `length` property, I'd check for these like this: ``` typeof obj !== 'undefined' && obj !== null && typeof obj.length === 'number' ``` Please note that strings will pass this check, which might lead to problems as IE doesn't allow access to a string's characters by index. Therefore, you might want to change `typeof obj !== 'undefined'` to `typeof obj === 'object'` to exclude primitives and host objects with types distinct from `'object'` alltogether. This will still let string objects pass, which would have to be excluded manually. In most cases, what you actually want to know is whether you can iterate over the object via numeric indices. Therefore, it might be a good idea to check if the object has a property named `0` instead, which can be done via one of these checks: ``` typeof obj[0] !== 'undefined' // false negative for `obj[0] = undefined` obj.hasOwnProperty('0') // exclude array-likes with inherited entries '0' in Object(obj) // include array-likes with inherited entries ``` The cast to object is necessary to work correctly for array-like primitives (ie strings). 
Here's the code for robust checks for JS arrays: ``` function isArray(obj) { return Object.prototype.toString.call(obj) === '[object Array]'; } ``` and iterable (ie non-empty) array-like objects: ``` function isNonEmptyArrayLike(obj) { try { // don't bother with `typeof` - just access `length` and `catch` return obj.length > 0 && '0' in Object(obj); } catch(e) { return false; } } ```
The arrival of ECMAScript 5th Edition gives us the most sure-fire method of testing if a variable is an array, [*Array.isArray()*](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray): ``` Array.isArray([]); // true ``` While the accepted answer here will work across frames and windows for most browsers, **it doesn't for Internet Explorer 7 and lower**, because `Object.prototype.toString` called on an array from a different window will return `[object Object]`, not `[object Array]`. IE 9 appears to have regressed to this behaviour also (see updated fix below). If you want a solution that works across all browsers, you can use: ``` (function () { var toString = Object.prototype.toString, strArray = Array.toString(), jscript = /*@cc_on @_jscript_version @*/ +0; // jscript will be 0 for browsers other than IE if (!jscript) { Array.isArray = Array.isArray || function (obj) { return toString.call(obj) == "[object Array]"; } } else { Array.isArray = function (obj) { return "constructor" in obj && String(obj.constructor) == strArray; } } })(); ``` It's not entirely unbreakable, but it would only be broken by someone trying hard to break it. It works around the problems in IE7 and lower and IE9. [The bug still exists in IE 10 PP2](https://connect.microsoft.com/IE/feedback/details/678969/arrays-from-other-documents-are-not-recognized-as-arrays-by-native-js-functions#tabs), but it might be fixed before release. PS, if you're unsure about the solution then I recommend you test it to your hearts content and/or read the blog post. There are other potential solutions there if you're uncomfortable using conditional compilation.
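To see the behavioral difference between real arrays and array-like objects, a quick sketch using the toString-based helper (runnable in any engine, no browser needed):

```javascript
// toString-based check (works cross-frame in most browsers)
function isArray(obj) {
  return Object.prototype.toString.call(obj) === '[object Array]';
}

console.log(isArray([1, 2, 3]));      // true
console.log(isArray('abc'));          // false - strings are only array-like
console.log(isArray({ length: 2 }));  // false - array-like object

(function () {
  console.log(isArray(arguments));    // false - arguments is not an Array
})();
```

This is exactly why the naive `obj && obj.length` check from the question misfires: strings, `arguments`, and plain objects with a `length` property all slip through it.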
How to detect if a variable is an array
[ "javascript", "arrays" ]
I'm working with a large matrix (250x250x30 = 1,875,000 cells), and I'd like a way to set an arbitrary number of flags for each cell in this matrix, in some manner that's easy to use and reasonably space efficient. My original plan was a 250x250x30 list array, where each element was something like: `["FLAG1","FLAG8","FLAG12"]`. I then changed it to storing just integers instead: `[1,8,12]`. These integers are mapped internally by getter/setter functions to the original flag strings. This only uses 250mb with 8 flags per point, which is fine in terms of memory. My question is: am I missing another obvious way to structure this sort of data? Thanks all for your suggestions. I ended up rolling a few suggestions into one, sadly I can only pick one answer and have to live with upvoting the others: EDIT: erm the initial code I had here (using sets as the base element of a 3d numpy array) used A LOT of memory. This new version uses around 500mb when filled with `randint(0,2**1000)`. ``` import numpy FLAG1=2**0 FLAG2=2**1 FLAG3=2**2 FLAG4=2**3 (x,y,z) = (250,250,30) array = numpy.zeros((x,y,z), dtype=object) def setFlag(location,flag): array[location] |= flag def unsetFlag(location,flag): array[location] &= ~flag ```
I would generally use a [numpy](https://numpy.org/) array (presumably of short ints, 2 bytes each, since you may need more than 256 distinct values) -- that would take less than 4MB for the <2 million cells. If for some reason I couldn't afford the numpy dependency (e.g on App Engine, which doesn't support numpy), I'd use the standard library [array](https://docs.python.org/3/library/array.html) module - it only supports 1-dimensional arrays, but it's just as space-efficient as numpy for large homogeneous arrays, and the getter/setter routines you mention can perfectly well "linearize" a 3-items tuple that's your natural index into the single integer index into the 1-D array. In general, consider numpy (or array) any time you have large homogeneous, dense vectors or matrices of numbers -- Python built-in lists are highly wasteful of space in this use case (due to their generality which you're not using and don't need here!-), and saving memory indirectly translates to saving time too (better caching, fewer levels of indirection, etc, etc).
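A sketch of the array-module route (the helper names here are mine, not stdlib API): one flat array of 2-byte unsigned ints, with a getter/setter pair that linearizes the (x, y, z) tuple into a single index, as described above. At 2 bytes per cell the whole 250x250x30 grid is under 4 MB.

```python
from array import array

# Flat array of 16-bit unsigned ints, one per cell (~3.75 MB total).
X, Y, Z = 250, 250, 30
flags = array('H', [0]) * (X * Y * Z)

def idx(x, y, z):
    # Linearize the 3-D index into the 1-D array.
    return (x * Y + y) * Z + z

def set_flag(x, y, z, flag):
    flags[idx(x, y, z)] |= flag

def has_flag(x, y, z, flag):
    return bool(flags[idx(x, y, z)] & flag)

# Flags are bit positions, so up to 16 fit per cell with 'H'.
FLAG1, FLAG8 = 1 << 0, 1 << 7
set_flag(10, 20, 5, FLAG1 | FLAG8)
```

Switching the typecode to 'L' (or to a numpy dtype) buys more flags per cell at the cost of proportionally more memory; either way this avoids the per-element Python-object overhead of the original list-of-lists design.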
Your solution is fine if every single cell is going to have a flag. However, if you are working with a sparse dataset where only a small subsection of your cells will have flags, what you really want is a dictionary. You would set up the dictionary so the key is a tuple for the location of the cell and the value is a list of flags, like you have in your solution. ``` allFlags = {(1,1,1):[1,2,3], (250,250,30):[4,5,6]} ``` Here the (1,1,1) cell has the flags 1, 2, and 3, and the cell (250,250,30) has the flags 4, 5, and 6. edit - fixed key tuples, thanks Andre, and dictionary syntax.
Flags in Python
[ "python", "matrix", "numpy", "flags" ]
I am using the AES methods here: <http://msdn.microsoft.com/en-us/library/system.security.cryptography.rijndaelmanaged.aspx> I want to have a string value that I will convert to byte array and pass it to the AES encrypt method. How many characters should the string be to produce the correct byte array size that the method expects? ``` static byte[] encryptStringToBytes_AES(string plainText, byte[] Key, byte[] IV) { // Check arguments. if (plainText == null || plainText.Length <= 0) throw new ArgumentNullException("plainText"); if (Key == null || Key.Length <= 0) throw new ArgumentNullException("Key"); if (IV == null || IV.Length <= 0) throw new ArgumentNullException("Key"); // Declare the stream used to encrypt to an in memory // array of bytes. MemoryStream msEncrypt = null; // Declare the RijndaelManaged object // used to encrypt the data. RijndaelManaged aesAlg = null; try { // Create a RijndaelManaged object // with the specified key and IV. aesAlg = new RijndaelManaged(); aesAlg.Key = Key; aesAlg.IV = IV; // Create a decrytor to perform the stream transform. ICryptoTransform encryptor = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV); // Create the streams used for encryption. msEncrypt = new MemoryStream(); using (CryptoStream csEncrypt = new CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)) { using (StreamWriter swEncrypt = new StreamWriter(csEncrypt)) { //Write all data to the stream. swEncrypt.Write(plainText); } } } finally { // Clear the RijndaelManaged object. if (aesAlg != null) aesAlg.Clear(); } // Return the encrypted bytes from the memory stream. return msEncrypt.ToArray(); } ```
The size of the plain text does not matter. Just make sure you use the exact same IV and key, along with the encrypted bytes, in the decryptStringFromBytes\_AES(byte[] cipherText, byte[] Key, byte[] IV) method. That will return the entered plain text back to you. For example: ``` string plain_text = "Cool this works"; byte[] iv = new byte[] { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F}; byte[] key = new byte[] { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF }; byte[] encrypted_text = encryptStringToBytes_AES(plain_text, key, iv); string plain_text_again = decryptStringFromBytes_AES(encrypted_text, key, iv); ``` Here you should see that plain_text and plain_text_again are the same. Now go ahead and change plain_text to anything you want and see that this works fine. The default values for RijndaelManaged are: BlockSize: 128 KeySize: 256 Mode: CipherMode.CBC Padding: PaddingMode.PKCS7 The valid IV sizes are: 128, 192, 256 bits (this is the BlockSize; make sure to set it to the size of the IV you are using) The valid key sizes are: 128, 192, 256 bits (this is the KeySize; make sure to set it to the size of the key you are using) This means that the byte[] iv can be 16, 24, or 32 bytes (in my above example it's 16 bytes) and the byte[] key can also be 16, 24, or 32 bytes (in my above example it's 16 bytes). Hope that helps.
You need padding for that. Actually, the page you linked has an example of padding (in C++). With padding, you can encrypt data whose length is not a multiple of the block size.
How many characters to create a byte array for my AES method?
[ "", "c#", "asp.net", "encryption", "aes", "rijndaelmanaged", "" ]
What are the differences between `numeric`, `float` and `decimal` datatypes and which should be used in which situations? For any kind of financial transaction (e.g. for salary field), which one is preferred and why?
> use the *float* or *real* data types **only if** the precision provided by *decimal* (up to 38 digits) is insufficient * Approximate numeric data types (see table 3.3) do not store the exact values specified **for many** numbers; they store **an extremely close** approximation of the value. ([Technet](https://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx)) * Avoid using float or real columns in WHERE clause search conditions, especially with the = and <> operators. It is best to limit float and real columns to > or < comparisons. ([Technet](https://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx)) So, generally, choosing decimal as your data type is the best bet if * your number can fit in it: *decimal precision* is 10E38 (~38 digits) * the smaller storage space (and possibly faster calculation) of float is not important to you * exact numeric behavior is required, such as in financial applications, in operations involving rounding, or in equality checks. ([Technet](https://technet.microsoft.com/en-us/library/ms187912(v=sql.105).aspx)) --- 1. **Exact** Numeric Data Types [decimal and numeric - MSDN](https://learn.microsoft.com/sql/t-sql/data-types/decimal-and-numeric-transact-sql) * numeric = decimal (5 to 17 bytes) + maps to Decimal in .NET + both have (18, 0) as the default (precision, scale) parameters in SQL Server + scale = the maximum number of decimal digits that can be stored to the right of the decimal point + money (8 bytes) and smallmoney (4 bytes) are also exact data types; they map to Decimal in .NET and have 4 decimal places ([MSDN](https://learn.microsoft.com/sql/t-sql/data-types/money-and-smallmoney-transact-sql)) 2. 
**Approximate** Numeric Data Types [float and real - MSDN](https://learn.microsoft.com/sql/t-sql/data-types/float-and-real-transact-sql) * real (4 bytes) + maps to Single in .NET + the ISO synonym for real is float(24) * float (8 bytes) + maps to Double in .NET ![Exact Numeric Data Types](https://i.stack.imgur.com/HGYoK.png) ![Approximate Numeric Data Types](https://i.stack.imgur.com/O6ahT.png) * All **exact** numeric types always produce the same result, regardless of which kind of processor architecture is being used **or the magnitude of the numbers** * The parameter supplied to the float data type defines the number of bits that are used to store the **[mantissa](https://en.wikipedia.org/wiki/Significand) of the floating point number** * Approximate numeric data types usually use less storage and have better speed (up to 20x), and you should also consider how they are converted in .NET * [What is the difference between Decimal, Float and Double in C#](https://stackoverflow.com/a/618543/184572) * [Decimal vs Double Speed](https://stackoverflow.com/a/329618/184572) * [SQL Server - .NET Data Type Mappings (From MSDN)](https://msdn.microsoft.com/en-us/library/cc716729.aspx) **main source** : [MCTS Self-Paced Training Kit (Exam 70-433): Microsoft® SQL Server® 2008 Database Development](https://oreilly.com/catalog/9780735626393?cmp=ex-orm-books-msp-9780735626393) - Chapter 3 - Tables, Data Types, and Declarative Data Integrity Lesson 1 - Choosing Data Types (Guidelines) - Page 93
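The "approximate" behavior described above is easy to demonstrate directly. A small T-SQL sketch (variable names made up for illustration):

```sql
DECLARE @f float = 2.00000000000000000000000001,
        @d decimal(38, 26) = 2.00000000000000000000000001;

-- float silently rounds the value away; decimal stores it exactly
SELECT @f AS float_value,    -- 2
       @d AS decimal_value;  -- 2.00000000000000000000000001

-- which is why equality checks on float columns are unreliable:
IF @f = 2 PRINT 'float compares equal to 2';    -- printed
IF @d = 2 PRINT 'decimal compares equal to 2';  -- not printed
```

This is exactly the pitfall behind the "avoid = and <> on float columns" guideline quoted above.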
Guidelines from MSDN: [Using decimal, float, and real Data](http://msdn.microsoft.com/en-us/library/ms187912(v=sql.105).aspx) > The default maximum precision of numeric and decimal data types is 38. > In Transact-SQL, numeric is functionally equivalent to the decimal > data type. **Use the decimal data type to store numbers with decimals > when the data values must be stored exactly as specified.** > > The behavior of float and real follows the > IEEE 754 specification on approximate numeric data types. Because of the approximate nature of the float and real data types, do not use these data types when exact > numeric behavior is required, such as in financial applications, in > operations involving rounding, or in equality checks. Instead, use the > integer, decimal, money, or smallmoney data types. Avoid using float > or real columns in WHERE clause search conditions, especially the = > and <> operators. **It is best to limit float and real columns to > or < > comparisons.**
Difference between numeric, float and decimal in SQL Server
[ "", "sql", "sql-server", "types", "" ]
We usually catch exceptions at the upper levels of the code, like the GUI (forms). But I usually have this kind of code ``` try { } catch(Exception ex) { Console.WriteLine(ex.Message); MessageBox.Show("Application has encountered error...."); } ``` I could just catch(Exception) without the identifier because I do not need the message at runtime, but for a debugging build it sure is convenient to break at the catch statement. So I usually write a Console.WriteLine to prevent a lot of warnings about the unused ex variable. I have many instances of Console.WriteLine(ex.Message) in my code. Does this cause a performance decrease? Note: Changed title from "Does Console.WriteLine(ex.Message) have performance cost?" to "Calling Console.WriteLine(ex.Message) to prevent warning message"
This is really several questions in one, so I will try to unroll it: **Firstly** ``` try{ ... } catch(Exception) { } ``` is perfectly valid syntax. Adding a Console.WriteLine(ex.Message) just to get the code to compile without a warning is not the right thing to do. **Secondly** Console.WriteLine is not the proper way to do diagnostics; look at Trace.WriteLine or, better still, a [Logging framework](http://www.nlog-project.org/). Of course Console.WriteLine has a cost; it is not too serious, but nonetheless a call is made, and it has a cost. **Thirdly** Sometimes it's better to crash: it forces you to fix the root problem. At least do a [Debug.Assert](http://msdn.microsoft.com/en-us/library/system.diagnostics.debug.assert.aspx) if something really bad happens.
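A minimal sketch combining the first two points (the `DoWork` method is just a placeholder):

```csharp
using System;
using System.Diagnostics;

class CatchSketch
{
    static void Main()
    {
        try
        {
            DoWork();
        }
        catch (Exception ex)
        {
            // Trace.WriteLine goes to whatever TraceListeners are configured,
            // and all Trace.* calls are removed by the compiler in builds
            // that do not define the TRACE symbol.
            Trace.WriteLine(ex.ToString());
        }
    }

    static void DoWork() { throw new InvalidOperationException("demo"); }
}
```

The `ex` identifier is declared only because it is actually used, so no unused-variable warning is produced in the first place.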
You can create an extension method whose calls are compiled away outside of debug builds. ``` public static class ExceptionExtensions { [Conditional("DEBUG")] public static void Dump( this Exception ex ) { Console.WriteLine( ex.ToString() ); } } ``` Or even better... ``` public static class ExceptionExtensions { public static void Log( this Exception ex ) { #if DEBUG Console.WriteLine( ex.ToString() ); #endif Logger.WriteLine( ex.ToString() ); } } ``` Then in your code replace `Console.WriteLine( ex.ToString() )` with `ex.Log();` However, in general the exception itself will be more of a performance issue than dumping to the console.
Calling Console.WriteLine(ex.Message) to prevent warning message
[ "", "c#", "winforms", "exception", "console.writeline", "" ]
I'm used to including and using JS like so: ``` <script type='text/javascript' src='/path/to/script.js'></script> .... <script type='text/javascript'> alert('Do some stuff here, using resources defined in script.js.'); </script> ``` Just out of curiosity, does anyone know the rule for having an inline script with a src attrib, like: ``` <script type='text/javascript' src='/path/to/script.js'> alert('Do some stuff here, using resources defined in script.js.'); </script> ``` I can test what DOES happen in various browsers, but would like to know what the official behavior is.
It's either one or the other, not both. The `src` attribute of the `<script>` tag has precedence over the body of the tag. [HTML 4.01 Specification](http://www.w3.org/TR/html4/interact/scripts.html#h-18.2.1): > The script may be defined within the > contents of the SCRIPT element or in > an external file. If the `src` attribute > is not set, user agents must interpret > the contents of the element as the > script. If the `src` has a URI value, > user agents must ignore the element's > contents and retrieve the script via > the URI.
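A small illustration of the spec behavior quoted above (file names are placeholders):

```html
<!-- The body of this element is ignored because src is set: -->
<script type="text/javascript" src="/path/to/script.js">
  alert('never runs');
</script>

<!-- A second, src-less element is needed for the inline code: -->
<script type="text/javascript">
  alert('runs after script.js has loaded and executed');
</script>
```

Since scripts execute in document order, splitting them this way still guarantees the inline code can use anything defined in script.js.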
The HTML specification states > If the src has a URI value, user > agents must ignore the element's > contents and retrieve the script via > the URI.
JavaScript: Inline Script with SRC Attribute?
[ "", "javascript", "include", "" ]
Which would be faster, and why? 1: ``` Point point = new Point(25,25); //any numbers.. Point point2 = new Point(20,95); //any numbers.. Graphics g = CreateGraphics(); g.DrawLine(point,point2); ``` OR 2: ``` Graphics g = CreateGraphics(); g.DrawLine(new Point(25,25),new Point(20,95)); ```
Neither. There is no real difference between them (especially after JIT compilation), except for the loss in readability in the second form.
Neither of them, since both snippets compile to the same MSIL representation. Besides, this is a micro-optimization, which you should avoid until you actually know that it **is the bottleneck.**
C# Code optimization
[ "", "c#", ".net", "optimization", "" ]
I'm serving up an image from a database using an IHttpHandler. The relevant code is here: ``` public void ProcessRequest(HttpContext context) { context.Response.ContentType = "image/jpeg"; int imageID; if (int.TryParse(context.Request.QueryString["id"], out imageID)) { var photo = new CoasterPhoto(imageID); if (photo.CoasterPhotoID == 0) context.Response.StatusCode = 404; else { byte[] imageData = GetImageData(photo); context.Response.OutputStream.Write(imageData, 0, imageData.Length); context.Response.Cache.SetCacheability(HttpCacheability.Public); context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(5)); context.Response.Cache.SetLastModified(photo.SubmitDate); } } else context.Response.StatusCode = 404; } ``` The problem is that the browser won't cache the image, presumably because I'm not indicating the right thing in the response headers. The part calling methods on the HttpCachePolicy property is what I thought would force the browser to hold on to the image, but it doesn't. I think the "right" thing is for the handler to return a 304 status code without an image, right? How do I achieve that using IHttpHandler? EDIT: Per the best answer, I got this code running and it completely solves the problem. Yes, it needs some refactoring, but it generally demonstrates what I was after. The relevant parts: ``` if (!String.IsNullOrEmpty(context.Request.Headers["If-Modified-Since"])) { CultureInfo provider = CultureInfo.InvariantCulture; var lastMod = DateTime.ParseExact(context.Request.Headers["If-Modified-Since"], "r", provider).ToLocalTime(); if (lastMod == photo.SubmitDate) { context.Response.StatusCode = 304; context.Response.StatusDescription = "Not Modified"; return; } } byte[] imageData = GetImageData(photo); context.Response.OutputStream.Write(imageData, 0, imageData.Length); context.Response.Cache.SetCacheability(HttpCacheability.Public); context.Response.Cache.SetLastModified(photo.SubmitDate); ```
AFAIK, **you** are responsible for sending the 304 Not Modified; I am not aware of anything in the .NET framework that does it for you when you are serving "dynamic" image data. What you will have to do (in pseudo code): * Check for the If-Modified-Since header in the request and parse out the date (if it exists). * Compare it to the last modification date of your original (dynamically generated) image. Tracking this is probably the most complex part of the solution to this problem. In your current situation, you are re-creating the image on every request; you **don't** want to do that unless you absolutely have to. * If the date of the file the browser has is newer than or equal to what you have for the image, send a 304 Not Modified. * Otherwise, continue with your current implementation. A simple way to track last modified times on your end is to cache newly generated images on the file system and keep an in-memory dictionary around that maps the image ID to a struct containing the file name on disk and the last modification date. Use Response.WriteFile to send the data from disk. Of course, every time you restart your worker process, the dictionary would be empty, but you're getting at least some caching benefit without having to deal with persisting caching information somewhere. You can support this approach by separating the concerns of "Image Generation" and "Sending Images over HTTP" into different classes. Right now you're doing two very different things in the same place. I know this may sound a little complex, but it's worth it. I just recently implemented this approach and the savings in processing time and bandwidth usage were incredible.
If you have source file on disk you can use this code: ``` context.Response.AddFileDependency(pathImageSource); context.Response.Cache.SetETagFromFileDependencies(); context.Response.Cache.SetLastModifiedFromFileDependencies(); context.Response.Cache.SetCacheability(HttpCacheability.Public); ``` Also, make sure that you test using IIS, not from Visual Studio. ASP.NET Development Server (aka Cassini) always sets Cache-Control to private. See also: [Caching Tutorial for Web Authors and Webmasters](http://www.mnot.net/cache_docs/)
Image from HttpHandler won't cache in browser
[ "", "c#", ".net", "asp.net", "ihttphandler", "" ]
I am a newbie to asp.net. I have an asp.net page which uses a user control. On Page\_Load event of this control, I want to change the title of the parent aspx page. I need help on this please.
You can create a property in the user control and then read that property through the user-control instance in the page, like this: In UserControl ``` protected void Page_Load(object sender, EventArgs e) { this.MyProperty = "This is a test"; } public string MyProperty { get; set; } ``` In Page ``` protected void Page_Load(object sender, EventArgs e) { WebUserControl11.PreRender += new EventHandler(WebUserControl11_PreRender); } void WebUserControl11_PreRender(object sender, EventArgs e) { string str = WebUserControl11.MyProperty; this.Header.Title = str; } ```
``` protected void Page_Load(object sender, EventArgs e) { Page.Title = "New Title"; } ```
Changing parent page's title from user control
[ "", "c#", "asp.net", "" ]
I have a table in which I have, among other things, two columns, one for the start date and another for the end date. I need to write a query that will return a column for each month of the year; the value of that column is 1 if the month is between the start and end dates, and 0 otherwise. The PIVOT statement seems to be what I am looking for here, but from the best I can tell the PIVOT clause is looking to match values, not check whether a value is between two others. Is the PIVOT clause the right construct here, or do I need to break down and write 12 CASE statements and then aggregate those?
I think I've got the solution here. There are 3 basic steps: 1. Get a date in each of the 12 months 2. Check if this date is between the start and end date 3. PIVOT the result To get the 12 dates, one for each month, I used a little recursive WITH statement (a CTE) to create a temporary table with 1 column of 12 dates: ``` WITH months (date) AS ( SELECT GETDATE() AS date UNION ALL SELECT DATEADD(MONTH,1,date) FROM months WHERE DATEDIFF(MONTH,GETDATE(),date) < 12) ``` From here I can CROSS JOIN this temporary table with the table holding the information I really care about. Then I use WHERE date BETWEEN start AND end to filter out any entries which don't belong to that month. So something like this: ``` SELECT other.Title, MONTH(months.date) AS date FROM months CROSS JOIN other WHERE months.date BETWEEN other.start AND other.end ``` In this step we must be careful to SELECT only the columns we explicitly want in the result, or those which will be aggregated using the PIVOT statement. Lastly, we must pivot the result: ``` PIVOT (MAX(PID) FOR date IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) ``` So the resultant query might look something like: ``` WITH months (date) AS ( SELECT GETDATE() AS date UNION ALL SELECT DATEADD(MONTH,1,date) FROM months WHERE DATEDIFF(MONTH,GETDATE(),date) < 12) SELECT Title, [1] AS January, [2] AS February, [3] AS March, [4] AS April, [5] AS May, [6] AS June, [7] AS July, [8] AS August, [9] AS September, [10] AS October, [11] AS November, [12] AS December FROM ( SELECT other.Title, other.PID, MONTH(months.date) AS date FROM months CROSS JOIN other WHERE months.date BETWEEN other.startDate AND other.endDate ) AS subquery PIVOT (MAX(PID) FOR date IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) AS p ``` I've stripped out all of the other complexity I went through to join in other information I needed, so this isn't actually the query I wrote, but it should encapsulate the basic query structure that I used to get the result as I needed it.
I'd go with the 12 CASE statements. That's actually the solution I sketched out before I went back and saw you mentioned it in your question.
SQL Pivot based on value between two columns
[ "", "sql", "sql-server-2005", "t-sql", "" ]
Here is the situation: each page will show 30 topics, so I have to execute at least 1 SQL statement. Besides that, I also want to show how many replies each topic has and who the author is, so I have to use 30 statements to count the replies and another 30 statements to find the authors. That adds up to 61 statements, and I am really worried about the efficiency. My tables look like this: ``` Topic Reply User ------- ---------- ------------ id id id title topic_id username ... ... author_id ```
You should look into joining tables during a query. * Joins in SQLServer <http://msdn.microsoft.com/en-us/library/ms191517.aspx> * Joins in MySQL <http://dev.mysql.com/doc/refman/5.0/en/join.html> As an example, I could do the following: ``` SELECT reply.id, reply.authorid, reply.text, reply.topicid, topic.title, user.username FROM reply LEFT JOIN topic ON (topic.id = reply.topicid) LEFT JOIN user ON (user.id = reply.authorid) WHERE (reply.isactive = 1) ORDER BY reply.postdate DESC LIMIT 10 ```
If I read your requirements correctly, you want the result of the following query: ``` SELECT Topic.title, User.username, COUNT(Reply.topic_id) Replies FROM Topic, User, Reply WHERE Topic.id = Reply.topic_id AND Topic.author_id = User.id GROUP BY Topic.title, User.username ```
What is the best way to reduce sql queries in my situation
[ "", "sql", "database", "web-applications", "join", "" ]
I've got a Django project that I'm using IPython to interact with. I'm trying to have modules automatically loaded when I start a shell: python manage.py shell I've copied .ipython/ipythonrc to the root directory of the project and added to the file: import\_some module\_name model1 model2 However, when I start the shell, these names are not being loaded. What am I doing wrong?
I don't know about ipythonrc, but if you only need the models, you could use [`django-extensions`](http://code.google.com/p/django-command-extensions/). After you install it, you've got a plethora of new management commands, including `shell_plus`, which will open an IPython session and autoload all your models: ``` python manage.py shell_plus ```
BryanWheelock: Your solution won't work because the shell is spawned as a subprocess, not interacted with directly. What you want to do is this (or at least, this is what I do). Within your workspace (the place where you type `python manage.py shell`) create an ipythonrc file. In it, put the following: ``` include ~/.ipython/ipythonrc execute from django.contrib.auth.models import User # . # . # . execute import_some module_name model1 model2 ``` For example, I also add the following lines in mine: ``` # Setup Logging execute import sys execute import logging execute loglevel = logging.DEBUG execute logging.basicConfig(format="%(levelname)-8s %(asctime)s %(name)s %(message)s", datefmt='%m/%d/%y %H:%M:%S', stream=sys.stdout ) execute log = logging.getLogger("") execute log.setLevel(loglevel) execute log.debug("Logging has been initialized from ipythonrc") execute log.debug("Root Logger has been established - use \"log.LEVEL(MSG)\"") execute log.setLevel(loglevel) execute log.debug("log.setlevel(logging.DEBUG)") execute print "" ``` This allows you to use logging in your modules and keep it DRY. Hope this helps.
using profiles in iPython and Django
[ "", "python", "django", "ipython", "" ]
I toggle a div using something like this - ``` <input type="radio" name="myRadio" value="myDiv_1" />MyDiv <input type="radio" name="myRadio" value="myDiv_2" />MyDiv2 <input type="radio" name="myRadio" value="myDiv_3" />MyDiv3 <div id="myDiv_1"> 1 Some input fields, text </div> <div id="myDiv_2"> 2 More input fields, text </div> <div id="myDiv_3"> 3 More input fields, text </div> JAVASCRIPT $('#myDiv_1').hide(); $('#myDiv_2').hide(); $('#myDiv_3').hide(); $('input[name="myRadio"]').change(function() { var selected_type = $(this).val(); switch(selected_type) { case "myDiv_1": $('#myDiv_1').slideDown(); //if others are visible just slideup $('#myDiv_2').slideUp(); $('#myDiv_3').slideUp(); break; case "myDiv_2": $('#myDiv_2').slideDown(); //if others are visible just slideup $('#myDiv_1').slideUp(); $('#myDiv_3').slideUp(); break; case "myDiv_3": $('#myDiv_3').slideDown(); //if others are visible just slideup $('#myDiv_2').slideUp(); $('#myDiv_1').slideUp(); break; } } ); ``` This works fine. My question is how I can improve it and make it more flexible, since if I add more divs I have to modify every case of the switch. Also, should I enclose the switch functionality in a function and bind that function to events such as click and change (just to ensure that the toggling works)? Thanks for your help.
This works, I just tested it. ``` <script type="text/javascript"> $(document).ready(function(){ $('.MyDiv').hide(); $('input[name="myRadio"]').change(function(){ var selected = $(this).val(); $('.MyDiv').slideUp(); $('#'+selected).slideDown(); }); }); </script> ``` The radio buttons should look like this, where the `value` is the `id` of the element that should be shown. ``` <form action="example.com" method="post"> <input type="radio" name="myRadio" value="myDiv_1" />MyDiv<br /> <input type="radio" name="myRadio" value="myDiv_2" />MyDiv2<br /> <input type="radio" name="myRadio" value="myDiv_3" />MyDiv3<br /> <input type="radio" name="myRadio" value="myDiv_4" />MyDiv4 </form> ``` And finally, the `div`s should look like this, all having the class `MyDiv`: ``` <div id="myDiv_1" class="MyDiv">Div number 1!</div> <div id="myDiv_2" class="MyDiv">Div number 2!</div> <div id="myDiv_3" class="MyDiv">Div number 3!</div> <div id="myDiv_4" class="MyDiv">Div number 4!</div> ```
The following is based on the code you pasted here - before using, read below: ``` $("div").hide(); $("input[name='myRadio']").change(function(){ $("div:not(#"+$(this).val()+")").slideUp(); $("div#"+$(this).val()).slideDown(); }); ``` ## Before Using... I would suggest you add a class to each of the collapsable panels, maybe .panel. And then update the selectors to modify only div.panel instead of every div on the page.
How to toggle a div based upon the radio type selection?
[ "", "php", "jquery", "radio-button", "toggle", "" ]
I have an 8 byte array and I want to convert it to its corresponding numeric value. e.g. ``` byte[] by = new byte[8]; // the byte array is stored in 'by' // CONVERSION OPERATION // return the numeric value ``` I want a method that will perform the above conversion operation.
Assuming the first byte is the least significant byte: ``` long value = 0; for (int i = 0; i < by.length; i++) { value += ((long) by[i] & 0xffL) << (8 * i); } ``` If the first byte is the most significant, then it is a little bit different: ``` long value = 0; for (int i = 0; i < by.length; i++) { value = (value << 8) + (by[i] & 0xff); } ``` Replace long with [BigInteger](http://java.sun.com/javase/6/docs/api/java/math/BigInteger.html) if you have more than 8 bytes. Thanks to Aaron Digulla for the correction of my errors.
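A quick self-contained check of the big-endian variant (the sample values are chosen arbitrarily):

```java
public class ByteArrayToLong {
    // Big-endian: the first byte is the most significant.
    static long toLongBigEndian(byte[] by) {
        long value = 0;
        for (int i = 0; i < by.length; i++) {
            // by[i] & 0xff keeps negative bytes (e.g. -1) as 0..255
            value = (value << 8) + (by[i] & 0xff);
        }
        return value;
    }

    public static void main(String[] args) {
        byte[] by = {0, 0, 0, 0, 0, 0, 1, 4}; // 0x0104 = 260
        System.out.println(toLongBigEndian(by)); // prints 260
    }
}
```

Note the `& 0xff` masking: without it, bytes above 0x7f would sign-extend and corrupt the result.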
One could use the [`Buffer`](http://java.sun.com/javase/6/docs/api/java/nio/Buffer.html)s that are provided as part of the [`java.nio`](http://java.sun.com/javase/6/docs/api/java/nio/package-summary.html) package to perform the conversion. Here, the source `byte[]` array has a length of 8, which is the size that corresponds with a `long` value. First, the `byte[]` array is wrapped in a [`ByteBuffer`](http://java.sun.com/javase/6/docs/api/java/nio/ByteBuffer.html), and then the [`ByteBuffer.getLong`](http://java.sun.com/javase/6/docs/api/java/nio/ByteBuffer.html#getLong()) method is called to obtain the `long` value: ``` ByteBuffer bb = ByteBuffer.wrap(new byte[] {0, 0, 0, 0, 0, 0, 0, 4}); long l = bb.getLong(); System.out.println(l); ``` Result ``` 4 ``` I'd like to thank dfa for pointing out the `ByteBuffer.getLong` method in the comments. --- Although it may not be applicable in this situation, the beauty of the `Buffer`s comes when looking at an array with multiple values. For example, if we had an 8 byte array, and we wanted to view it as two `int` values, we could wrap the `byte[]` array in a `ByteBuffer`, which is viewed as an [`IntBuffer`](http://java.sun.com/javase/6/docs/api/java/nio/IntBuffer.html), and obtain the values by [`IntBuffer.get`](http://java.sun.com/javase/6/docs/api/java/nio/IntBuffer.html#get(int)): ``` ByteBuffer bb = ByteBuffer.wrap(new byte[] {0, 0, 0, 1, 0, 0, 0, 4}); IntBuffer ib = bb.asIntBuffer(); int i0 = ib.get(0); int i1 = ib.get(1); System.out.println(i0); System.out.println(i1); ``` Result: ``` 1 4 ```
How to convert a byte array to its numeric value (Java)?
[ "", "java", "algorithm", "arrays", "numbers", "" ]
I have an assignment, and it has (amongst others) two demands: * make an installer * make it so that if it is installed on one computer, anyone trying to run the same files on another computer will fail. I'm using VS9 (2008) Express. I think I can do the installer part, but I'm not sure how to do the "security" part. I don't need any hard-to-crack security, just a dumb condition that will stop most users from copying the files to another computer (like checking the MAC address). Any ideas? **EDIT:** I would like to check the MAC address, but I want the program tied to the machine during installation, meaning that after I install it I can't move the program to another machine. It also does not have to be a very smart or difficult condition, just the bare minimum. I just don't know how to do it in the installation. **EDIT:** It's a pity I don't have the complete VS; then I would be able to do it easily.
If you're looking for some way to mark the first computer as the "authorized" computer, then you need some external service you can ask for permission to launch. The first person to ask permission would be allowed, the rest would be prevented. You'll also need to come up with some way of identifying a particular instance of your application that's different for every install. If your app needs to be authorized for the machine, then you will need to calculate some fingerprint for the machine it can use each time (eg across installs). **[Edit]** *This approach is useful when you're worried about copies of the installer being distributed as well. You did specify that its ok to install on multiple machines, so in that case [MasterMind](https://stackoverflow.com/questions/1069964/how-to-make-my-program-not-run-on-other-computers-c/1069976#1069976)'s approach is superior. It will work, and does not requires a 3rd party server* **[Edit 2]** If you're looking for info on how to build a custom installer, try [here](http://www.devcity.net/Articles/339/1/article.aspx)
First of all, come up with some function to generate a unique PC signature, like Windows does for activation. Your installer will be creating this signature and writing it to a local file (better encrypted). You can create a simple console executable to generate this file and include that executable into your installer package, setting it up to run silently after the successful installation. Your program when starting will be creating the signature again using the same algorithm and comparing it to the one created during installation. If the new signature is different from the original one or the signature file is missing, then exit without loading the UI. ADDED: If you don't need it very complex, you can just choose a few unique values like the MAC address you suggested, maybe the hard drive serial number, mainboard serial number, concatenate them into a single string and generate the hash out of it. This approach will allow for an unlimited number of copies to run (but each installation will only be workable on one single machine where it was installed). If you stick to the identification by hardware (or OS product key as well), then the application can run on various OS installations on the same machine. This strategy, however, implies that you control all installations (or perform them yourself) or absolutely trust your client not to install additional copies elsewhere or distribute your installer. If you need that kind of protection as well, then you should consider product activation. It can be quite complicated if you do it yourself. There are however third party products to help you. Some offer product activation services: [Google: activation service](http://www.google.com/search?btnG=1&pws=0&q=activation+service)
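A rough sketch of the "few unique values, hashed" idea. The `System.Management` types and the `Win32_BaseBoard` WMI class are real, but treat the overall scheme as illustrative, not tamper-proof:

```csharp
using System;
using System.Management;               // reference System.Management.dll
using System.Security.Cryptography;
using System.Text;

static class MachineFingerprint
{
    public static string Compute()
    {
        var sb = new StringBuilder();

        // Collect a (reasonably) stable hardware identifier via WMI.
        // More values (CPU id, volume serial) can be appended the same way.
        using (var search = new ManagementObjectSearcher(
                   "SELECT SerialNumber FROM Win32_BaseBoard"))
        {
            foreach (ManagementObject mo in search.Get())
                sb.Append(mo["SerialNumber"]);
        }

        // Hash the concatenated values so the signature file written by
        // the installer does not expose the raw identifiers.
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
            return Convert.ToBase64String(hash);
        }
    }
}
```

The installer writes `MachineFingerprint.Compute()` to a file; at startup the program recomputes it and exits if the values differ.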
How to make my program not run on other computers?
[ "", "c#", "installation", "" ]
If I have a table like this: ``` +------------+ | Id | Value | +------------+ | 1 | 'A' | |------------| | 1 | 'B' | |------------| | 2 | 'C' | +------------+ ``` How can I get a resultset like this: ``` +------------+ | Id | Value | +------------+ | 1 | 'AB' | |------------| | 2 | 'C' | +------------+ ``` I know this is really easy to do in MySQL using GROUP\_CONCAT, but I need to be able to do it in MSSQL 2005 Thanks (Duplicate of [How to use GROUP BY to concatenate strings in SQL Server?](https://stackoverflow.com/questions/273238/how-to-use-group-by-to-concatenate-strings-in-mssql))
For a clean and efficient solution you can create [an user defined aggregate function](http://msdn.microsoft.com/en-us/library/ms190678.aspx), there is even [an example](http://msdn.microsoft.com/en-us/library/ms131056.aspx) that does just what you need. You can then use it like any other aggregate function (with a standard query plan): ![query plan](https://i.stack.imgur.com/y8gcJ.png)
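Once such an aggregate is deployed (the linked MSDN sample registers it as `dbo.Concatenate`; adjust the name to whatever you deploy), the query for the table in the question reduces to:

```sql
SELECT Id, dbo.Concatenate(Value) AS Value
FROM MyTable
GROUP BY Id;
```

This reads exactly like MySQL's GROUP_CONCAT, which is the main appeal of the CLR-aggregate approach.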
This will do: ``` SELECT mt.ID, SUBSTRING((SELECT mt2.Value FROM MyTable AS mt2 WHERE mt2.ID = mt.ID ORDER BY mt2.VALUE FOR XML PATH('')), 3, 2000) AS JoinedValue FROM MyTable AS mt ```
Concat groups in SQL Server
[ "", "sql", "sql-server", "group-by", "string-concatenation", "" ]
How can I generate a random hexadecimal number with a length of my choice using C#?
``` static Random random = new Random(); public static string GetRandomHexNumber(int digits) { byte[] buffer = new byte[digits / 2]; random.NextBytes(buffer); string result = String.Concat(buffer.Select(x => x.ToString("X2")).ToArray()); if (digits % 2 == 0) return result; return result + random.Next(16).ToString("X"); } ```
``` Random random = new Random(); int num = random.Next(); string hexString = num.ToString("X"); ``` `random.Next()` takes arguments that let you specify a min and a max value, so that's how you would control the length.
Randomly generated hexadecimal number in C#
[ "", "c#", "random", "hex", "" ]
What is the analog in Scala of doing this in Java: ``` public class Outer { private Inner inner; public static class Inner { } public Inner getInner() { return inner; } } ``` I specifically want my inner class to **not** have to have a fully qualified name - i.e. I want `Trade.Type`, not `TradeType`. So in Scala I imagined it might be something like: ``` class Outer(val inner: Inner) { object Inner } ``` But this doesn't seem to work: my scala `Inner` just doesn't seem to be visible from outside the `Outer` class. One solution would of course be: ``` class Inner class Outer(val inner: Inner) ``` Which is OK - but because of the names of my classes, `Inner` is really the "type" of the `Outer` and `Outer` actually has a long name. So: ``` class SomeHorriblyLongNameType class SomeHorriblyLongName(myType: SomeHorriblyLongNameType) ``` Which is verbose and horrible. I could replace `SomeHorriblyLongNameType` with just `Type` but there would then be no obvious connection between it and the class it was related to. Phew
You can do something like this if you don't need access to the outer class in the inner class (which you wouldn't have in Java given that your inner class was declared `static`): ``` object A { class B { val x = 3 } } class A { // implementation of class here } println(new A.B().x) ```
As others have pointed out, "static" classes should be placed inside the companion object. In Scala, classes, traits, and objects which are members of a class are path-dependent. For example: ``` class Button { class Click } val ok = new Button val cancel = new Button val c1 = new ok.Click val c2 = new cancel.Click ``` Now `c1` and `c2` are instances of *different* classes. One class is `ok.Click`, and the other is `cancel.Click`. If you wanted to refer to the type of all Click classes, you could say `Button#Click`.
Static inner classes in scala
[ "java", "scala", "static", "inner-classes" ]
My goal is to build an engine that takes the latest HL7 3.0 CDA documents and makes them backward compatible with HL7 2.5, which is a radically different beast. The CDA document is an XML file which, when paired with its matching XSL file, renders an HTML document fit for display to the end user. In HL7 2.5 I need to get the rendered text, devoid of any markup, and fold it into a text stream (or similar) that I can write out in 80-character lines to populate the HL7 2.5 message. So far, I'm taking an approach of using XslCompiledTransform to transform my XML document using XSLT and produce a resultant HTML document. My next step is to take that document (or perhaps at a step before this) and render the HTML as text. I have searched for a while, but can't figure out how to accomplish this. I'm hoping it's something easy that I'm just overlooking, or I just can't find the magical search terms. Can anyone offer some help? FWIW, I've read the 5 or 10 other questions on SO which embrace or admonish using RegEx for this, and don't think that I want to go down that road. I need the rendered text. ``` using System; using System.IO; using System.Xml; using System.Xml.Xsl; using System.Xml.XPath; public class TransformXML { public static void Main(string[] args) { try { string sourceDoc = "C:\\CDA_Doc.xml"; string resultDoc = "C:\\Result.html"; string xsltDoc = "C:\\CDA.xsl"; XPathDocument myXPathDocument = new XPathDocument(sourceDoc); XslCompiledTransform myXslTransform = new XslCompiledTransform(); XmlTextWriter writer = new XmlTextWriter(resultDoc, null); myXslTransform.Load(xsltDoc); myXslTransform.Transform(myXPathDocument, null, writer); writer.Close(); StreamReader stream = new StreamReader(resultDoc); } catch (Exception e) { Console.WriteLine("Exception: {0}", e.ToString()); } } } ```
Since you have the XML source, consider writing an XSL that will give you the output you want without the intermediate HTML step. It would be far more reliable than trying to transform the HTML.
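(A cross-language aside, not part of the original answer.) Whatever produces the markup, extracting the plain text is just a walk over the document's text nodes; a minimal sketch of that step in Python, using a made-up fragment:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment standing in for a rendered document.
doc = "<html><body><p>this is a paragraph</p><p>another paragraph</p></body></html>"

# itertext() yields every text node in document order, skipping all markup.
text = " ".join(t.strip() for t in ET.fromstring(doc).itertext() if t.strip())
print(text)  # this is a paragraph another paragraph
```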
This will leave you with just the text: ``` using System; using System.Text; class Program { static void Main(string[] args) { var stringReader = new System.IO.StringReader(sourceDoc); var reader = System.Xml.XmlReader.Create(stringReader); StringBuilder result = new StringBuilder(); while (reader.Read()) { result.Append(reader.Value); } Console.WriteLine(result); } static string sourceDoc = "<html><body><p>this is a paragraph</p><p>another paragraph</p></body></html>"; } ```
Convert XML to Plain Text
[ "c#", "xml", "xslt", "hl7" ]
We are moving our old Access VBA / SQL 2005 application to new technologies and have 2 options: do we build a Windows Forms application with Microsoft One Click (ClickOnce) deploy, or an ASP.NET website? The application is a workforce management system for booking jobs, with dynamic scheduling and Gantt chart components. It will be used by 400+ users over 20 locations. Can you give me some feedback on these options?
If you want the application to be maintained in one central location and accessed by the users over the web...go with a web application. If your team is more familiar with Windows Forms applications, there's going to be a learning curve involved with this option (maintaining state, dynamic controls, page events, the DOM, etc.) so the time investment is going to be greater. If you want the clients to have a rich interface installed on the desktops that you can manage easily from a central location...go with Windows Forms (or WPF to provide a nicer user experience). **Using ClickOnce for deployment will make sure that the client applications stay up to date.** It sounds like this would be the more familiar option for your team and produce the best product in the shortest amount of time. If your users are all inside your corporate network, either option can use a central SQL Server database (or shared Access/SQL Server Compact Edition/SQLite database). Don't get caught up in the hype. Web applications aren't always the best option.
I would create a web application. This will provide the benefits of a central location to manage. With 400+ users and 20 locations a windows app will take more time and headaches for updates and maintenance.
Windows App (One Click) Vs Web App
[ "c#", ".net", "asp.net" ]
I have a web page that consists of a checkbox (parent), and on this same web page I also have an iframe that is sourced from another page that displays a number of records, which also has a checkbox (child) against each record. If I tick the parent checkbox, I would like to cascade this through to all child checkboxes within the iframe as well as disable these child checkboxes. How can I access the checkboxes within the iframe? My iframe definition on the main page is similar to the following: ``` <iframe id="iframe1" src="'+URL+'" style="border:none;width:799px;height:200px;" frameborder="1" framespacing="0" marginheight="0" marginwidth="0"></iframe> ```
Okay here is an example... First here is the source for the iFrame contents (I called it FramePage.htm)... ``` <body> <input id="Checkbox1" type="checkbox" name="checkbox" /> <input id="Checkbox2" type="checkbox" name="checkbox"/> <input id="Checkbox3" type="checkbox" name="checkbox" /> </body> ``` Here is the source on the page that houses the iFrame... ``` <iframe id="frame" src="FramePage.htm"></iframe> <input id="Button1" type="button" value="button" onclick="setData()" /> <script type="text/javascript"> function setData() { var frame = document.getElementById('frame'); var checkboxes = frame.contentWindow.document.getElementsByName('checkbox'); for (var i = 0; i < checkboxes.length; i++) { checkboxes[i].checked = true; } } </script> ``` Clicking the button on the parent page will select all the checkboxes with the name 'checkboxes' on the frame page. Hope this helps **:)**
This might be helpful: a jquery plugin to [easily access the iframe document](http://brandonaaron.net/blog/2009/05/14/jquery-edge-better-support-for-other-windows-and-documents). That is, if its not cross domain like peter said.
How to access iframe checkbox elements
[ "javascript" ]
Relative paths in C# are acting screwy for me. In one case I'm handing a set of Texture2D objects to my app; it's taking the filename and using this to locate the files and load the textures into Image objects. I then load an image from a relative path stored in the class file and use a relative path that needs to be relative to Content/gfx. But if I don't load these textures, these relative paths will fail. How can I guarantee that my relative path won't fail? In web work all relative paths are relative to the folder of the file we're working from; can I set it up this way and make all relative paths relative to the root folder where my app is located?
I recommend not using relative paths in the first place. Use [Path.Combine](http://msdn.microsoft.com/en-us/library/system.io.path.combine.aspx) to turn your relative paths into absolute paths. For example, you can use this to get the full path to your startup EXE: ``` string exeFile = (new System.Uri(Assembly.GetEntryAssembly().CodeBase)).AbsolutePath; ``` Once you have that, you can get it's directory: ``` string exeDir = Path.GetDirectoryName(exeFile); ``` and turn your relative path to an absolute path: ``` string fullPath = Path.Combine(exeDir, "..\\..\\Images\\Texture.dds"); ``` This will be much more reliable than trying to use relative paths.
If you are expecting a resource to be in the same directory as the executable file or in a sub directory of that directory, it's best to always use ``` string fullPath = System.IO.Path.Combine(System.IO.Directory.GetCurrentDirectory(), subPath); ``` or if you are worried that the working directory might be wrong you can do this: ``` string fullPath = System.IO.Path.Combine(System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetEntryAssembly().Location), subPath); ```
Relative Paths in Winforms
[ "c#", "winforms", "relative-path" ]
I have a google map integrated on part of my page. I would like to create a toggle button to toggle the map between full screen and normal size. So when you click on it - the map extends to fill the whole browser screen, and click on it again, it is restored to its original size on the page. How would I do it?
Here's a jQuery implementation. ``` $("#map_toggler").click(function() { $("#map").toggleClass("fullscreen"); }); ``` In the CSS: ``` #map { width: 400px; height: 200px; } #map.fullscreen { position: fixed; top: 0; left: 0; width: 100%; height: 100%; } ``` (The `top: 0; left: 0;` matters — a fixed-position element otherwise stays at its original offset.) Untested, but something along the lines of that should work.
If you have a map on your page all you need to do is write some javascript to resize the DIV that holds the map. I haven't implemented an example that resizes the DIV to fill the browser, but [here](http://www.cannonade.net/geo.php?test=geo5) is one that toggles the size of a map div from javascript (I use mooTools to set the style.width on the element, but you can use whatever you prefer to manipulate the DOM).
How do I create a show full screen button to toggle my google maps page to be full screen?
[ "javascript", "google-maps" ]
Here is my shortened abstract class: ``` abstract class Report { protected internal abstract string[] Headers { get; protected set; } } ``` Here is a derived class: ``` class OnlineStatusReport : Report { static string[] headers = new string[] { "Time", "Message" }; protected internal override string[] Headers { get { return headers; } protected set { headers = value; } } internal OnlineStatusReport() { Headers = headers; } } ``` The idea is, I want to be able to call `Report.Headers` from anywhere in the assembly, but only allow it to be set by derived classes. I tried making `Headers` just internal, but protected does not count as more restrictive than internal. Is there a way to make `Headers` internal and its set accessor protected AND internal? I feel like I'm grossly misusing access modifiers, so any design help would be greatly appreciated.
What's wrong with making the getter public? If you declare the property as ``` public string[] Headers { get; protected set; } ``` it meets all of the criteria you want: all members of the assembly can get the property, and only derived classes can set it. Sure, classes outside the assembly can get the property too. So? If you genuinely need to expose the property within your assembly but not publicly, another way to do it is to create a different property: ``` protected string[] Headers { get; set; } internal string[] I_Headers { get { return Headers; } } ``` Sure, it's ugly decorating the name with that `I_` prefix. But it's kind of a weird design. Doing some kind of name mangling on the internal property is a way of reminding yourself (or other developers) that the property they're using is unorthodox. Also, if you later decide that mixing accessibility like this is not really the right solution to your problem, you'll know which properties to fix.
It's not possible in C#. Just for the sake of completeness, this is supported in IL (family and assembly access modifier).
How to make a property protected AND internal in C#?
[ "c#", "access-modifiers" ]
In a project at work we have a certain value type class in our domain model that includes a very large number of attributes... ``` public class BigValueType { private Foo foo; private Bar bar; private Baz baz; //... } ``` We've realized that we would like to "focus" this into a number of different, somewhat more specialized classes that only have some subset of this class's attributes. I think we would like to have something like different "views" of this data. ``` public class SpecializationA { private Foo foo; private Baz baz; //... } public class SpecializationB { private Bar bar; private Baz baz; //... } public class SpecializationC { private Foo foo; private Bar bar; //... } ``` However, this domain model is intended to be rather general, and not specific to this project. It will have project-specific extensions added to it during future projects, but the common domain model will be kept separate from these extensions. If we simply define a bunch of classes now, it's likely that other projects using the domain model will just have to write their own slightly different ones later. (We can't easily predict what views of this data will be useful.) I think what we should do is write project-specific Adapters for this big class that present the different views of the data. This way, future users of the domain don't have to touch anything in the "common" domain model to define new views of this information. ``` public class AdapterA { private BigValueType wrapped; //... public AdapterA(BigValueType wrapped) { //... } public Foo getFoo() { return wrapped.getFoo(); } //... } ``` This makes more sense to me than normal inheritance because our top-level class/interface would have almost nothing in it. Any feedback on this approach?
Firstly, it is important to understand the problem you would want to solve. Having a class with a large number of attributes is not necessarily bad, and if you want to do the refactoring only to conform to 'good' design principles, I would reconsider the decision. Having said that, this is a fairly common design in the world of SOA. You have a large service which takes a complex message with a fairly large number of attributes as the request. This service is then 'adapted' to cater to clients with different needs. So, your design should work well. Of course, it presupposes that you already know all the possible 'views' or you would have to write new adapters for new clients. This 'adaptation' can be performed at two levels - the user interface level (which is essentially your design - to adapt the class) or at a lower level, such as at the database level, and this, in turn, modifies your main class too. This would depend on your application, the users of your framework and so on. An alternate approach to consider could also be the REST way of doing this - expose the data (Foo, Baz, etc.), albeit from the main class, and let the clients do their own processing of the data with the main class essentially providing CRUD features on this data. Then your class behaves more like a structure with no real business logic. Any of these three approaches should be okay, IMHO.
Well... seems fine to me. I think your solution using composition is far better than using inheritance, which would affect your model big time and generate lots of low-cohesion classes.
Looking for feedback on a use of the Adapter pattern
[ "java", "design-patterns", "inheritance", "domain-driven-design", "adapter" ]
I have been trying to detect the browser language preference using JavaScript. If I set the browser language in IE in `Tools>Internet Options>General>Languages`, how do I read this value using JavaScript? Same problem for Firefox. I'm not able to detect the setting for `tools>options>content>languages` using `navigator.language`. Using `navigator.userLanguage`, it detects the setting done thru the `Start>ControlPanel>RegionalandLanguageOptions>Regional Options` tab. I have tested with `navigator.browserLanguage` and `navigator.systemLanguage` but neither returns the value for the first setting (`Tools>InternetOptions>General>Languages`). I found a [link](http://www.velocityreviews.com/forums/t99655-save-way-to-detect-browser-language.html) which discusses this in detail, but the question remains unanswered :(
I think the main problem here is that the browser settings don't actually affect the `navigator.language` property that is obtained via javascript. What they do affect is the HTTP 'Accept-Language' header, but it appears this value is not available through javascript at all. (Probably why @anddoutoi states he can't find a reference for it that doesn't involve server side.) I have coded a workaround: I've knocked up a google app engine script at <http://ajaxhttpheaders.appspot.com> that will return you the HTTP request headers via JSONP. (Note: this is a hack only to be used if you do not have a back end available that can do this for you. In general you should not be making calls to third party hosted javascript files in your pages unless you have a very high level of trust in the host.) I intend to leave it there in perpetuity so feel free to use it in your code. Here's some example code (in jQuery) for how you might use it ``` $.ajax({ url: "http://ajaxhttpheaders.appspot.com", dataType: 'jsonp', success: function(headers) { language = headers['Accept-Language']; nowDoSomethingWithIt(language); } }); ``` Hope someone finds this useful. 
Edit: I have written a small jQuery plugin on github that wraps this functionality: <https://github.com/dansingerman/jQuery-Browser-Language> Edit 2: As requested here is the code that is running on AppEngine (super trivial really): ``` class MainPage(webapp.RequestHandler): def get(self): headers = self.request.headers callback = self.request.get('callback') if callback: self.response.headers['Content-Type'] = 'application/javascript' self.response.out.write(callback + "(") self.response.out.write(headers) self.response.out.write(")") else: self.response.headers['Content-Type'] = 'text/plain' self.response.out.write("I need a callback=") application = webapp.WSGIApplication( [('/', MainPage)], debug=False) def main(): run_wsgi_app(application) if __name__ == "__main__": main() ``` Edit3: Have open sourced the app engine code here: <https://github.com/dansingerman/app-engine-headers>
``` var language = window.navigator.userLanguage || window.navigator.language; alert(language); //works IE/SAFARI/CHROME/FF ``` `window.navigator.userLanguage` is IE only and it's the language set in **Windows Control Panel - Regional Options** and NOT browser language, but you could suppose that a user using a machine with Window Regional settings set to France is probably a French user. `navigator.language` is FireFox and all other browser. Some language code: `'it'` = italy, `'en-US'` = english US, etc. --- As pointed out by *rcoup* and *The WebMacheter* in comments below, this workaround won't let you discriminate among English dialects when users are viewing website in browsers other than IE. *`window.navigator.language` (Chrome/FF/Safari) returns always browser language and not browser's preferred language, but: "it's pretty common for English speakers (gb, au, nz, etc) to have an en-us version of Firefox/Chrome/Safari." Hence `window.navigator.language` will still return `en-US` even if the user preferred language is `en-GB`.*
JavaScript for detecting browser language preference
[ "javascript", "localization", "internationalization" ]
If I enforce a licensing restriction on my application, say the first time the application runs, then should my application check the restrictions (could be an encrypted license file) each time? How do you design this part? What is the best way? Basically I would like some way to save CPU cycles (encryption means decryption, blah blah). I hope you understand my concern.
This really depends on how you want to handle it in your application, and what type of action you want to take for the user. If you only check on the first load, you are going to need to store the success/failure of the validation somewhere, so that your application knows if it is legit or not. This then becomes a concern, as you need to validate that your system is the ONLY thing that is actually storing/updating that information source.
Depends on what kind of application it is. If it's a desktop application, once the license key is verified...you can set a bit somewhere to let your software know that it is running in licensed mode. Make sure the bit is somewhere outside of the application folder (that way if somebody straight copies your app to another machine, your software will know it's no longer licensed). If you're trying to do a web app, an encrypted license file somewhere in your site directory seem to be a popular solution. The file contains the license type and domain name...then your web app verifies the license upon each request (a little bit of overhead, but you can make that code lightning fast).
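(Illustrative aside, not from either answer.) One cheap way to make a stored "licensed" bit tamper-evident — without re-decrypting a full license file on every run, which addresses the CPU-cycles concern — is to store it alongside a message authentication code. A toy Python sketch of the idea, with a hypothetical secret baked into the application:

```python
import hashlib
import hmac

SECRET = b"app-secret"  # hypothetical key embedded in the application


def sign(payload: bytes) -> bytes:
    # Tag the payload so casual edits to the stored flag are detectable.
    return hmac.new(SECRET, payload, hashlib.sha256).digest()


def is_valid(payload: bytes, tag: bytes) -> bool:
    # Constant-time comparison of the recomputed tag against the stored one.
    return hmac.compare_digest(sign(payload), tag)


token = sign(b"licensed:yes")
assert is_valid(b"licensed:yes", token)
assert not is_valid(b"licensed:no", token)
```

A determined attacker can still extract the embedded key, so this only raises the bar — the same caveat that applies to any client-side license check.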
Application verify license enforement?
[ "c#", "licensing" ]
In fact, this is the same question as this post: [How can I make sure my LINQ queries execute when called in my DAL, not in a delayed fashion?](https://stackoverflow.com/q/1013201/75642) But since he didn't explain *why* he wanted it, the question seems to have been passed over a bit. Here's my similar-but-better-explained problem: I have a handful of threads in two types (ignoring UI threads for a moment). There's a "data-gathering" thread type, and a "computation" thread type. The data gathering threads are slow. There's quite a bit of data to be sifted through from a variety of places. The computation threads are comparatively fast. The design model up to this point is to send data-gathering threads off to find data, and when they're complete pass the data up for computation. When I coded my data gathering in Linq I wound up hoisting some of that slowness *back into my computation threads*. There are now data elements that aren't getting resolved completely until they're used during computation -- and that's a problem. I'd like to force Linq to finish its work at a given time (end of statement? end of method? "please finish up, dammit" method call) so that I know I'm not paying for it later on. Adding ".ToList()" to the end of the Linq is 1. awkward, and 2. feels like boxing something that's about to be unboxed in another thread momentarily anyway.
You wouldn't be boxing anything - you'd be buffering the results. Using `ToList()` is basically the way to go if you actually want the data. Unless you're ready to use the data immediately, it's got to be buffered *somewhere*, hasn't it? A list is just a convenient way to do that. The alternative is to do the processing then and there as well - use the data as you produce it, eagerly. I didn't quite follow the different threads side of thing, so it's not clear to me whether that would help you, but those are basically the choices available to you as far as I can see. This is actually *somewhat* explicit in your description: > The design model up to this point is > to send data-gathering threads off to > find data, and when they're complete > pass the data up for computation. Calling `ToList()` basically changes what you return from "a query which can fetch the data when asked to" to "the data itself, buffered in a list".
Can you explain more why .ToList is not acceptable? You mentioned boxing and unboxing, but those are completely unrelated topics. Part of forcing a LINQ query to complete on demand necessitates storing the results. Otherwise, in order to see the results again, you'd have to reprocess the query. .ToList efficiently achieves this by storing the elements in a `List<T>`. It's possible to store the elements in virtually any other collection-style data structure, with various trade-offs that may suit your needs better.
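(Illustrative aside in Python, not from either answer.) The deferred-versus-buffered distinction LINQ makes is the same one Python makes between a generator and a list, which may make the trade-off easier to see:

```python
log = []


def produce():
    # Lazy: nothing here runs until the sequence is consumed.
    for i in range(3):
        log.append(i)
        yield i * 2


lazy = produce()
assert log == []       # no work done yet (deferred execution)

buffered = list(lazy)  # like .ToList(): forces the work and stores the results
assert log == [0, 1, 2]
assert buffered == [0, 2, 4]
```

Forcing the query on the data-gathering thread and handing the computation threads the buffered list is exactly the `list(...)` step here.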
Force Linq to not delay execution
[ "c#", "linq", "delayed-execution" ]
Can anyone advise on the best way to check (using .NET 3.5) whether a remote server is available? I was thinking of using the following code but would like to know if a better way exists if the community has another option. ``` TcpClient client = new TcpClient("MyServer", 80); if (!client.Connected) { throw new Exception("Unable to connect to MyServer on Port 80"); } client.Close(); ```
You could [ping](http://msdn.microsoft.com/en-us/library/system.net.networkinformation.ping.aspx) it You could [download the default page](http://msdn.microsoft.com/en-us/library/system.net.webclient.downloadstring.aspx) from it You could do a [HEAD](http://www.eggheadcafe.com/tutorials/aspnet/2c13cafc-be1c-4dd8-9129-f82f59991517/the-lowly-http-head-reque.aspx) request If it's a local IIS6 server on your network, and you have some admin details, you could [connect to IIS](http://www.codeproject.com/KB/cs/Start_Stop_IIS_Website.aspx) using some DirectoryEntry code Some of the answers on [136615](https://stackoverflow.com/questions/136615/how-can-i-test-a-connection-to-a-server-with-c-given-the-servers-ip-address) might help too, specifically the accepted answer that talks about [sockets](http://msdn.microsoft.com/en-us/library/4xzx2d41.aspx?ppud=4) For the print servers (or, specifically, the printers), the code by K Scott [here](http://www.odetocode.com/Humor/68.aspx) might help. It's fun code to play with anyway :-) That code mentions dns.resolve, which is obsoleted and replaced by [Dns.GetHostEntry](http://msdn.microsoft.com/en-us/library/ms143998.aspx) I'm about out of ideas :-)
If you just want to see whether a given server is online, then a simple ping should do the job in most cases. ``` PingReply pingReply; using (var ping = new Ping()) pingReply = ping.Send("www.stackoverflow.com"); var available = pingReply.Status == IPStatus.Success; ``` Note that `Ping.Send` takes a host name or IP address, not a URL. Using this method you're also not abusing the HTTP server in any way. Otherwise (if you want to check whether a connection is possible on a specific port), that basically looks fine.
C# Check Remote Server
[ "c#", ".net" ]
At <http://blogs.msdn.com/ericgu/archive/2004/01/29/64717.aspx>, we learn that C# will not inline methods with structs as formal parameters. Is this due to potential dependence on the stack, e.g. for recursion? If so, could I potentially benefit by turning struct parameters into ref parameters like this? ``` public int Sum(int i) { return array1[i] + array2[i]; } ``` turns into: ``` public int Sum(ref int i) { return array1[i] + array2[i]; } ``` Edit: I went to attempt a test, but I can't get anything to inline. Here is what I tried: ``` class Program { private static string result; static void Main(string[] args) { Console.WriteLine(MethodBase.GetCurrentMethod().Name); Console.WriteLine(); m1(); Console.WriteLine(result); } private static void m1() { result = MethodBase.GetCurrentMethod().Name; } } ``` It prints "m1" as the second line, which indicates that it did not get inlined. I built a Release build and ran it with Ctrl-F5 (to not attach the debugger). Any ideas?
As Jon said, it's a very old post. I can confirm that in the following code: ``` using System; using System.Runtime.CompilerServices; struct MyStruct { public MyStruct(int p) { X = p; } public int X; // prevents optimization of the whole thing to a constant. [MethodImpl(MethodImplOptions.NoInlining)] static int GetSomeNumber() { return new Random().Next(); } static void Main(string[] args) { MyStruct x = new MyStruct(GetSomeNumber()); // the following line is to prevent further optimization: for (int i = inlinetest(x); i != 100 ; i /= 2) ; } static int inlinetest(MyStruct x) { return x.X + 1; } } ``` `inlinetest` method is inlined. Main method disassembly: ``` ; set up the stack frame: 00000000 push ebp 00000001 mov ebp,esp ; calls GetSomeNumber: 00000003 call dword ptr ds:[005132D8h] ; inlined function: 00000009 inc eax ; the dummy for loop: 0000000a cmp eax,64h 0000000d je 0000001B 0000000f sar eax,1 00000011 jns 00000016 00000013 adc eax,0 00000016 cmp eax,64h 00000019 jne 0000000F 0000001b pop ebp 0000001c ret ``` I've tested this on x86 .NET Framework 3.5 SP1 on Windows 7 x64 RC. As I believed there's nothing inherently wrong with inlining methods with `struct` parameters. Probably, JIT has not been smart enough at that time.
[Here](http://blogs.msdn.com/vancem/archive/2008/08/19/to-inline-or-not-to-inline-that-is-the-question.aspx)'s a better article describing why some methods will not be inlined. And [here](https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=93858) is a MS connect feedback entry with comments that include benchmark results (FWIW)
Why won't C# inline functions with struct parameters?
[ "c#", ".net", "optimization", "inline" ]
I am trying to output a sigma character (σ) in a label in a FusionChart graph. How can I specify that character in a PHP string? I have tried the htmlentity `&sigma;`, but it is not interpreted correctly by the graph. Is there any way to specify the character in PHP using some sort of character code?
You need to make sure you're sending the correct headers when outputting. ``` <?php header('Content-Type: text/html; charset=utf-8'); $char = utf8_encode(html_entity_decode('&sigma;')); echo $char; ``` This will output the character. **Edit**: If passing the character into the graph doesn't work, then the software doesn't support UTF-8.
`"\x1F"` will work for regular ASCII characters, but I think sigma is a unicode character, so you're going to have to use something like [utf8\_encode](http://us.php.net/manual/en/function.utf8-encode.php). PHP has poor Unicode support.
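(Side note, not from the original answers.) For reference, σ is Unicode code point U+03C3, and its UTF-8 encoding is the two bytes `0xCF 0x83` — which is what the answers above ultimately have to produce. A quick Python check:

```python
# Greek small letter sigma by code point; its UTF-8 form is two bytes.
sigma = "\u03c3"
encoded = sigma.encode("utf-8")
print(sigma, encoded)  # σ b'\xcf\x83'
```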
How can I specify a special character in PHP
[ "php", "unicode", "utf-8", "character-encoding", "fusioncharts" ]
I'm trying to improve my TDD/OO skills and each time I try and use TDD to influence design I come up against a wall of where to start. Here is my use case/story: > Identify a subset of clients that are due for a review. Start a review for them and send out a couple of letters. Now my muscle memory has already opened up a query window written the query, designed a UI and then I have to write some code to glue the bits together. I want the domain code to be the focus and I want it under test. So what is the simplest thing to do in this case? I guess I want my list of Clients. I already have a Client Object (CSLA-style) although this has a load of dependencies that are tough to break. I guess that I could have a ClientReviewClients object and test that I get the right number of reviews. There are a number of factors that I need to consider so it does not seem simple. And anyway how do I mock the fact that I have 10 reviews out of 20 clients? Can anyone help me on my way?
Here - I'll start you off with a couple of tests: ``` class IdentifyClientsDueForReview { public void CanStartSearch() { var s = new ClientSearcher(); } public void CanSearchClients() { var s = new ClientSearcher(); var r = s.Find(c => c.Id == 1); Assert.IsNotNull(r); } public void Finds10Clients() { var db = new MockDB(); // Clients that need review (today and earlier) for (int i = 0; i < 10; i++) { db.Add(new Client() { NextReview = DateTime.Today.AddDays(-i) }); } // Clients that don't need review (strictly in the future) for (int i = 0; i < 10; i++) { db.Add(new Client() { NextReview = DateTime.Today.AddDays(i + 1) }); } var s = new ClientSearcher(db); var r = s.Find(c => c.NextReview <= DateTime.Today); Assert.AreEqual(10, r.Count); } } ``` That's built with a Linq To Sql or similar backend ORM in mind - otherwise, you'd probably ditch the `Find` method and have some hardcoded `FindBy<Criteria>` methods. This should give you the `ClientSearcher` class, which uses an interface to hit the database. Both `MockDB` and your `RealDB` class would implement that interface.
Unit tests must be fast. If a test touches a database, it's an integration test (which are valuable too), not a unit test. As for the number of clients who need a review, I wouldn't be particularly interested in knowing that I got 10 out of 20 needing a review but that for a particular client, am I correctly deciding whether that client needs a review based on my business rules? You may find the two-part series "TDD/Using Mock objects with CSLA.Net" helpful: * [Part 1](http://www.nermins.net/post/TDDUsing-Mock-objects-with-CSLANet.aspx) * [Part 2](<http://www.nermins.net/post/2007/05/09/TDDUsing-Mock-objects-with-CSLANet-(Round-II).aspx>) You mentioned difficult dependencies, and I highly recommend *Working Effectively with Legacy Code* by Michael Feathers. The book is full of [conservative dependency-breaking techniques](https://stackoverflow.com/questions/1017917/steps-to-take-to-slowly-integrate-unit-testing-into-a-project/1018033#1018033) useful for bringing code under test.
Thinking OO with TDD - Where to start?
[ "", "c#", "oop", "tdd", "" ]
OK, I have an ASP.NET 3.5 website in IIS6 on Windows Server 2003 (32-bit), with an ASP.NET 1.1 web application in a sub virtual directory. (This is set to use the older 1.1 .NET runtime and is configured with its own app pool, so for all intents and purposes it is completely separate.) Except it keeps on inheriting the root website's .NET 3.5 web.config. I've tried adding ``` <location path="." inheritInChildApplications="false"> ``` to the root website's web.config but it doesn't seem to work. Strangely the error is actually ``` <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> ``` which is being reported from the .NET 1.1 runtime with the website path WEBSITE/OLD\_WEBAPP, but it's complaining about the website's (as in the .NET 3.5 one's) web.config and how it doesn't understand the type attribute on the sectionGroup tag. What am I doing wrong? Please tell me it's something obvious. Thanks
Just to let everyone know, I've found an answer that serves my purposes for now. I ended up putting all the configuration in the .NET 2.0 global web.config ``` C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG ``` I got this suggestion from this comment on [connect.microsoft.com](https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=329367) > The configuration systems for both ASP.NET 1.1 and 2.0 are hierarchical. As a result if a different framework version is a child of the root website (which is usually inetpub\wwwroot), then the configuration system in the child will attempt to merge the configuration from the root website. This behavior is by design since both the 1.1 and 2.0 configuration systems walk up the physical directory structure looking for parent web.configs. > > A simple workaround is to move the common 2.0 configuration information to the root web.config file for the 2.0 framework. The root web.config is located at: C:\windows\Microsoft.NET\Framework\v2.0.50727\CONFIG. With this approach all of your 2.0 applications will pick up the common sections while 1.1 applications won't see the sections. > > Thank you for submitting this issue. Hope the above information is useful for you. > > Posted by Microsoft on 22/02/2008 at 17:35 Seems to work for me, though it's not quite as elegant as if it had been a .NET 2.0 application. Hope this helps someone else. Dan
``` <location path="." inheritInChildApplications="false"> <system.web> ... </system.web> </location> ```
How do I stop web.config inheritance
[ "", "c#", "asp.net", "web-config", "" ]
I have an enum type with 5 members. Is it possible to tell IntelliSense to order them the way I want? ``` public enum numbers { zero, one, two, three, four } ``` IntelliSense shows them in alphabetical order: ``` four one three two zero ```
No, you can't. But the Visual Studio team has considered the suggestion, even though not exactly the way you would want it. But hey, it's a start. You can vote on the link @sepster provided: <https://developercommunity2.visualstudio.com/t/Intellisense-display-enums-by-numeric-or/871057?space=8&entry=suggestion> --- Edit: Microsoft Connect has been retired. Leaving this here for future reference. See the Microsoft Connect response here <http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=419602> I have added your requirement as a comment on the site.
I think the only way to do that would be to do something like ``` public enum numbers { num_00, num_01, num_02, num_03, num_04 } ```
Can I order the enum values in intellisense?
[ "", "c#", ".net", "visual-studio-2005", "intellisense", "" ]
My page view currently has a dropdownlist on it that is getting bound to a collection from the controller. This is working fine; however, I want to insert an item at the top of the dropdownlist that is not in my collection, e.g. the list currently gets the following items ``` Open Closed ``` I want to add a third option of "All" but I don't want to add this option to my database. In WebForms I would have just bound the control and then inserted another item, but it seems this is not possible with MVC. Am I right in thinking I will need to add some JavaScript to the view to add this new item once the dropdownlist has been bound? Thanks
No. Construct your data as a list of SelectListItems and prepend in the controller. Note that SelectListItem exposes Value and Text properties: ``` var list = db.Table .Select( t => new SelectListItem { Value = t.ID.ToString(), Text = t.Name } ) .ToList(); list.Insert( 0, new SelectListItem { Value = "-1", Text = "All" } ); ViewData["TableSelect"] = list; ``` On the view side: ``` <%= Html.DropDownList( "TableID", (IEnumerable<SelectListItem>)ViewData["TableSelect"] ) %> ```
Simple: you can build the list however you wish in the controller and pass it in ViewData. The other option is to do it directly in the view page: ``` <%= Html.DropDownList("DropDown","all" )%> ``` But you can add only one option this way. Make sure you are not storing the added option in your db, right? So before you save it, check the option value in the action and apply your logic.
ASP.Net MVC Add Items To Bound Dropdownlist
[ "", "c#", "asp.net-mvc", "drop-down-menu", "" ]
I have a web page that is intended to be loaded on a person's iPhone. When the page is loaded, I want to hide the status and address bar that is at the top. I have seen other sites do this. In an attempt to accomplish this, I have placed the following code in the head section of my web page: ``` <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; user-scalable=yes;" /> <script type="text/javascript"> function page_Load() { setTimeout(function() { window.scrollTo(0, 1); }, 100); } </script> ``` The "page\_Load" function is triggered through the onload event of the page's body element. Oddly, when the page loads, the status/title bar is hidden, however, not the address bar. How do I hide both the status/title bar and the address bar when a web page loads? Thank you!
Figured it out. It turns out my page needed to be "longer". From an absolute perspective, the sizing was correct, but I needed to add a couple of pixels at the bottom. This hid the address bar as desired. Thank you.
For those of you using jQuery here's an even simpler version: ``` $('body').scrollTop(1); ```
iPhone - Hide Address Bar on Page Load
[ "", "javascript", "iphone", "" ]
I want to print the bit representation of numbers onto the console, so that I can see all operations that are being done on the bits themselves. How can I do this in Python?
This kind of thing? ``` >>> ord('a') 97 >>> hex(ord('a')) '0x61' >>> bin(ord('a')) '0b1100001' ```
**From Python 2.6** - with the [string.format method](http://docs.python.org/2/library/string.html#string-formatting): ``` "{0:b}".format(0x1234) ``` in particular, you might like to use padding, so that multiple prints of different numbers still line up: ``` "{0:16b}".format(0x1234) ``` and to have left padding with leading 0s rather than spaces: ``` "{0:016b}".format(0x1234) ``` **From Python 3.6** - with [f-strings](https://docs.python.org/3/reference/lexical_analysis.html#f-strings): The same three examples, with f-strings, would be: ``` f"{0x1234:b}" f"{0x1234:16b}" f"{0x1234:016b}" ```
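Neither formatting approach above shows how negative numbers come out: `bin(-5)` gives `'-0b101'` rather than a two's-complement bit pattern. A small helper of my own (the function name is made up, not from either answer) masks the value to a fixed width first:

```python
def bits(n, width=8):
    """Two's-complement bit string of n, `width` bits wide."""
    mask = (1 << width) - 1          # e.g. 0xFF for width=8
    return format(n & mask, '0{}b'.format(width))

print(bits(5))          # 00000101
print(bits(-5))         # 11111011 (two's complement of 5 in 8 bits)
print(bits(0x1234, 16))
```

Since every value prints at the same width, results of `~`, `&`, `|`, and shifts line up visually when printed one above the other.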
printing bit representation of numbers in python
[ "", "python", "" ]
I am using the PHP in\_array() function in order to authenticate (with sessions) if a user can access a particular page. For some reason, it is not working... ## PHP PAGE ``` session_start(); require_once('../scripts/functions.php'); $role_auth = @$_SESSION['role_auth']; access($role_auth, array(0,1,2,3,4)); ``` ## access FUNCTION ``` function access($role_auth, $array){ if(!(in_array($role_auth, $array))){ header("Location: ../index.html"); } } ``` If I insert print statements in the function, I can see that all of the correct values are being passed into the function. The problem is, if the function is called without a session variable set, for some reason it is considered as being in the array, and it authenticates. Any ideas?
You may want to enable strict type checks by using: ``` in_array($role_auth, $array, true) ``` as what is likely happening is that $role\_auth is being evaluated as false, which loosely matches 0 in your in\_array statement. What you SHOULD be doing is this: ``` session_start(); require_once('../scripts/functions.php'); $role_auth = (isset($_SESSION['role_auth']))?$_SESSION['role_auth']:-1; access($role_auth, array(0,1,2,3,4)); ``` or something similar. Nothing good ever comes of using the @ operator.
I would check to see if $\_SESSION['role\_auth'] is actually set (with `isset`) instead of using @ to suppress warnings (which is bad practice IMHO) I think what's happening is that false == 0 ... so `in_array` returns true when nothing is in $role\_auth because it sees it as 0 and 0 is in your array
PHP in_array not working
[ "", "php", "arrays", "session", "" ]
I am trying to convert the following Perl regex I found in the [Video::Filename](http://search.cpan.org/dist/Video-Filename) Perl module to a Python 2.5.4 regex to parse a filename ``` # Perl > v5.10 re => '^(?:(?<name>.*?)[\/\s._-]*)?(?<openb>\[)?(?<season>\d{1,2})[x\/](?<episode>\d{1,2})(?:-(?:\k<season>x)?(?<endep>\d{1,2}))?(?(<openb>)\])(?:[\s._-]*(?<epname>[^\/]+?))?$', ``` I would like to use named groups too, and I know in Python the regex extension for named groups is different, but I am not 100% sure on the syntax. This is what I tried: ``` # Python (not working) r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))?(?(P<openb>)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$') ``` The error I get: ``` raise error, v # invalid expression sre_constants.error: bad character in group name ``` For example, this one I managed to convert and it works. But the one above I can't seem to get right. I get a compilation error in Python. ``` # Perl: re => '^(?:(?<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?<part>\d+))?(?<subep>[a-z])?(?:[\/\s._-]*(?<epname>[^\/]+?))?$', # Python (working): r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]+)?(?:s|se|season|series)[\s._-]?(?P<season>\d{1,2})[x\/\s._-]*(?:e|ep|episode|[\/\s._-]+)[\s._-]?(?P<episode>\d{1,2})(?:-?(?:(?:e|ep)[\s._]*)?(?P<endep>\d{1,2}))?(?:[\s._]?(?:p|part)[\s._]?(?P<part>\d+))?(?P<subep>[a-z])?(?:[\/\s._-]*(?P<epname>[^\/]+?))?$') ``` I am not sure where to start looking.
There are 2 problems with your translation. First of all, the second mention of `openb` has extra parentheses around it making it a [conditional expression](http://perldoc.perl.org/perlre.html#%27%28?%28condition%29yes-pattern|no-pattern%29%27 "Perl RE conditional expression documentation"), not a named expression. Next is that you didn't translate the `\k<season>` [backreference](http://perldoc.perl.org/perlre.html#%27%5Ck%3CNAME%3E%27 "Perl RE backreference documentation"); Python uses `(?P=season)` to match the same. The following compiles for me: ``` r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:(?P=season)x)?(?P<endep>\d{1,2}))?(?(openb)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$') ``` If I were you, I'd use re.VERBOSE to split this expression over multiple lines and add copious documentation so you can keep understanding the expression in the future if this is something that needs to remain maintainable though. *(edited after realising the second `openb` reference was a conditional expression, and to properly translate the backreference).*
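A quick sanity check of the translated pattern against a sample filename (the filename here is made up for illustration):

```python
import re

# The corrected translation from above, split for readability.
pattern = re.compile(
    r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})'
    r'[x\/](?P<episode>\d{1,2})(?:-(?:(?P=season)x)?(?P<endep>\d{1,2}))?'
    r'(?(openb)\])(?:[\s._-]*(?P<epname>[^\/]+?))?$')

m = pattern.match('Heroes.3x09.Dying of the Light')
print(m.group('name'), m.group('season'), m.group('episode'), m.group('epname'))
```

The bracketed form (e.g. `[3x09]`) also matches, with the conditional `(?(openb)\])` demanding the closing bracket only when the opening one was seen.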
I found the offending part but can't figure out what exactly is wrong without wrapping my mind around the whole thing. ``` r = re.compile(r'^(?:(?P<name>.*?)[\/\s._-]*)?(?P<openb>\[)?(?P<season>\d{1,2})[x\/](?P<episode>\d{1,2})(?:-(?:\kP<season>x)?(?P<endep>\d{1,2}))? (?(P<openb>)\]) // this part here causes the error message (?:[\s._-]*(?P<epname>[^\/]+?))?$') ``` The problem seems to be with the fact that group names in python must be valid python identifiers (check [documentation](http://docs.python.org/library/re.html)). The parentheses seem to be the problem. Removing them gives ``` (?(P<openb>)\]) //with parentheses (?P<openb>\]) //without parentheses redefinition of group name 'openb' as group 6; was group 2 ```
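The distinction both answers circle around — a valid conditional `(?(openb)...)` versus the invalid `(?(P<openb>)...)` — can be reproduced in isolation (a minimal repro of my own, not from the original post):

```python
import re

# Valid: the conditional tests whether named group 'openb' matched.
balanced = re.compile(r'^(?P<openb>\[)?\d+(?(openb)\])$')
print(bool(balanced.match('[7]')))  # True
print(bool(balanced.match('7')))    # True
print(bool(balanced.match('[7')))   # False: '[' without ']'

# Invalid: a conditional references the group by bare name, so
# 'P<openb>' is treated as a (malformed) group name.
try:
    re.compile(r'^(?P<openb>\[)?\d+(?(P<openb>)\])$')
except re.error as exc:
    print('re.error:', exc)
```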
How can I convert a Perl regex with named groups to Python?
[ "", "python", "regex", "perl", "" ]
I don't get what jQuery(document) is here. I thought you always used $(document); see his examples here: <http://sorgalla.com/projects/jcarousel/>
There is no difference. $ is shorthand for jQuery. Sometimes $ is disabled as it may conflict with other Javascript libraries you are using. It is otherwise identical.
jQuery uses $(). Prototype uses $(). AFAIK Mootools uses $(). If that was the end of it, it would be impossible to use any combination of the three. So responsible frameworks provide a way to disable $() to avoid conflicts and use something else instead. In jQuery's case, that's jQuery(). jQuery() is recommended over $() for use in jQuery plugins, so they keep functioning if you do disable $().
What is jQuery(document) vs. $(document)
[ "", "javascript", "jquery", "" ]
I want to raise an "event" that shows up in the system event viewer (eventvwr.exe) from .NET. Unfortunately, Google is just giving me lots of other stuff about "events" which aren't the sort of events I want to raise. What is the correct API call? **Update** Thanks for the answers so far. Interestingly, I have found that calls to "LogEvent" work with a new source even if I have not created the source. i.e. ``` // The following works even though I haven't created "SomeNewSource" EventLog.WriteEntry("SomeNewSource", message); ``` Can anyone explain why that is?
You call CreateEventSource if the source does not already exist and then use WriteEntry to write to the log. There are a couple of things to keep in mind, though. CreateEventSource will need admin access the first time the program is run. I always just keep a short command-line program around that takes the Event Source as an argument. Run it as admin during installation just once, and then your program can write events without problem under an appropriate level of access. WriteEntry also takes an entry type and an error number, like: ``` myEventLog.WriteEntry("Health Check. Program started normally",EventLogEntryType.Info,1011); ``` or ``` myEventLog.WriteEntry("Error. Could not open database connection!",EventLogEntryType.Error,1012); ``` These can be helpful, since monitoring systems like Microsoft Operations Manager can be set to watch these and notify you or the persons on call. I usually create a unique set of error numbers so that the sysadmins know who to call; me, a dba, a vendor's help line to report their web service down, and so on. Saves you a lot of 2AM calls. Here's a sample: ``` using System.Data.SqlClient; using System.Diagnostics; using System.IO; class TestClass { private EventLog log; public TestClass() { if(!EventLog.SourceExists("TestApplication")) { EventLog.CreateEventSource("TestApplication","Application"); } log=new EventLog("Application",".","TestApplication"); } public int ParseFile(StreamReader sr) { string[] lines=sr.ReadToEnd().Trim().Split('\n'); int linecount=lines.Length; string connString=System.Configuration.ConfigurationSettings.AppSettings["ConnectString"]; SqlConnection conn=new SqlConnection(connString); try { conn.Open(); } catch (Exception e) { log.WriteEntry("Cannot connect to database for file import: "+e.Message, EventLogEntryType.Error,1171); return linecount; } // write to database, etc. } } ```
``` using System; using System.Diagnostics; namespace Test { class TestEventLog { static void Main(string[] args) { string source = "MyApplication"; if (!EventLog.SourceExists(source)) { EventLog.CreateEventSource(source,"Application"); } EventLog.WriteEntry(source, "Here is an event-log message"); } } } ```
How does one raise an event viewer "Event" in .NET?
[ "", "c#", ".net", "" ]
``` int[] arr = new int[5]; Console.WriteLine(arr.Count.ToString());//Compiler Error Console.WriteLine(((ICollection)arr).Count.ToString());//works print 5 Console.WriteLine(arr.Length.ToString());//print 5 ``` Do you have an explanation for that?
Arrays have .Length, not .Count. But this is available (as an **explicit interface implementation**) on ICollection etc. Essentially, the same as: ``` interface IFoo { int Foo { get; } } class Bar : IFoo { public int Value { get { return 12; } } int IFoo.Foo { get { return Value; } } // explicit interface implementation } ``` `Bar` doesn't have a public `Foo` property - but it is available if you cast to `IFoo`: ``` Bar bar = new Bar(); Console.WriteLine(bar.Value); // but no Foo IFoo foo = bar; Console.WriteLine(foo.Foo); // but no Value ```
While `System.Array` implement the `ICollection` interface it doesn't directly expose the `Count` property. You can see the **explicit** implementation of `ICollection.Count` in the MSDN documentation [here](http://msdn.microsoft.com/en-us/library/system.array_properties.aspx). The same applies to `IList.Item`. Take look at this Blog entry for more details on explicit and implicit interface implementation: [Implicit and Explicit Interface Implementations](http://blogs.msdn.com/mhop/archive/2006/12/12/implicit-and-explicit-interface-implementations.aspx)
I can not access Count property of the array but through casting to ICollection !
[ "", "c#", "arrays", "icollection", "" ]
I want the origin to be at the center of my window. ``` ______________ | ^ | | | | | o----->| | | |____________| ``` .NET wants it to be in the top left hand corner. ``` _____________> | | | | | | | | V____________| ``` .NET and I are trying to get along. Does anyone know how to do this in C# just using the Graphics object? Graphics.TranslateTransform doesn't do it by itself, since it leaves the coordinates flipped upside down. Combining it with Graphics.ScaleTransform(1,-1) isn't satisfactory either, since that makes text appear upside down.
One solution would be to use the TranslateTransform method. Then, instead of using the Point/PointF structs, you could create FlippedPoint/FlippedPointF structs of your own that have implicit casts to Point/PointF (but by casting them the coords get flipped): ``` public struct FlippedPoint { public int X { get; set; } public int Y { get; set; } public FlippedPoint(int x, int y) : this() { X = x; Y = y; } public static implicit operator Point(FlippedPoint point) { return new Point(-point.X, -point.Y); } public static implicit operator FlippedPoint(Point point) { return new FlippedPoint(-point.X, -point.Y); } } ```
You can continue using `ScaleTransform(1, -1)` and reset the current transformation temporarily while drawing your text: ``` // Convert the text alignment point (x, y) to pixel coordinates PointF[] pt = new PointF[] { new PointF(x, y) }; graphics.TransformPoints(CoordinateSpace.Device, CoordinateSpace.World, pt); // Revert transformation to identity while drawing text Matrix oldMatrix = graphics.Transform; graphics.ResetTransform(); // Draw in pixel coordinates graphics.DrawString(text, font, brush, pt[0]); // Restore old transformation graphics.Transform = oldMatrix; ```
Make +y UP, Move Origin C# System.Drawing.Graphics
[ "", "c#", "drawing", "system.drawing", "" ]
I'm having trouble understanding the following bit of code that I was hoping would create an array on the heap and fill it with the values 9 down to 0 (I know I could just index the array like a normal stack array with [] notation to do this, but I'm doing it this way to try to understand pointers in more depth): ``` int *ptrHeapArray = new int[10]; for(int f=9; f>=0 ;f--) { *ptrHeapArray = f; ptrHeapArray++; } for(int f=0; f<10; f++) cout << ptrHeapArray[f] << "\n"; ``` It prints out completely unexpected values. As I understand the above, the 'new' command creates an array on the heap and sends me back a pointer to the address where the array is. Since the pointer I assign (ptrHeapArray) points to int-sized elements, I assumed I could use pointer post-incrementing to navigate through the array. However the results indicate that my assumptions are wrong. This got me to thinking that perhaps the pointer passed back by the 'new' keyword is just a pointer to the whole array and can't be used to step through the array for some reason. So I tried creating another pointer to the pointer returned by the 'new' keyword and used that to do my array population: ``` int *ptrHeapArray = new int[10]; //array to hold FRANK data in 32 bit chunks int *ptrToHeapArrayPointer = ptrHeapArray; for(int f=9; f>=0 ;f--) { *ptrToHeapArrayPointer = f; ptrToHeapArrayPointer++; } for(int f=0; f<10; f++) cout << ptrHeapArray[f] << "\n"; ``` This worked fine. Can anyone explain to me why I had to do this and couldn't just have used the pointer passed back to me by the 'new' keyword? Thanks
The line ``` ptrHeapArray++; ``` in your first for loop increments the pointer, such that it doesn't point to the beginning of the array anymore. The line ``` int *ptrHeapArray = new int[10]; ``` allocates the memory for 10 integers and points ptrHeapArray to the beginning of that memory. In your for loop you then move this pointer. When ptrHeapArray points to the third of the integers: ``` [0] [1] [2] [3] [4] ^ ^ ^ orig. | | | +-- ptrHeapArray[2] | +-- ptrHeapArray now points here ``` then ptrHeapArray[2] would give you the integer at the position orignally numbered with 4.
You are modifying the pointer in code. After the first loop in the first snippet, the pointer will point to the end of the array rather than the beginning. To make things clearer, this would work too (not suggested but demonstrates the behavior): ``` int *ptrHeapArray = new int[10]; for(int f=9; f>=0 ;f--) { *ptrHeapArray = f; ptrHeapArray++; } ptrHeapArray -= 10; // reset the pointer to its original location for(int f=0; f<10; f++) cout << ptrHeapArray[f] << "\n"; ```
Creating arrays on the heap and addressing them with pointers
[ "", "c++", "pointers", "heap-memory", "new-operator", "" ]
I'm implementing a cross-tabulation library in Python as a programming exercise for my new job, and I've got an implementation of the requirements that *works* but is inelegant and redundant. I'd like a better model for it, something that allows a nice, clean movement of data between the base model, stored as tabular data in flat files, and all of the statistical analysis results that might be asked of this. Right now, I have a progression from a set of tuples for each row in the table, to a histogram counting the frequencies of the appearances of the tuples of interest, to a serializer that -- somewhat clumsily -- compiles the output into a set of table cells for display. However, I end up having to go back up to the table or to the histogram more often than I want to because there's never enough information in place. So, any ideas? Edit: Here's an example of some data, and what I want to be able to build from it. Note that "." denotes a bit of 'missing' data, that is only conditionally counted. ``` 1 . 1 1 0 3 1 0 3 1 2 3 2 . 1 2 0 . 2 2 2 2 2 4 2 2 . ``` If I were looking at the correlation between columns 0 and 2 above, this is the table I'd have: ``` . 1 2 3 4 1 0 1 0 3 0 2 2 1 1 0 1 ``` In addition, I'd want to be able to calculate ratio of frequency/total, frequency/subtotal, &c.
You could use an in-memory `sqlite` database as a data structure, and define the desired operations as SQL queries. ``` import sqlite3 c = sqlite3.Connection(':memory:') c.execute('CREATE TABLE data (a, b, c)') c.executemany('INSERT INTO data VALUES (?, ?, ?)', [ (1, None, 1), (1, 0, 3), (1, 0, 3), (1, 2, 3), (2, None, 1), (2, 0, None), (2, 2, 2), (2, 2, 4), (2, 2, None), ]) # queries # ... ```
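The elided queries could then be ordinary SQL aggregates; for instance, the cross-tab of column `a` against column `c` is just a `GROUP BY` (a sketch of one possible query, repeating the setup so it runs standalone):

```python
import sqlite3

# Repeating the setup above so this snippet runs on its own.
c = sqlite3.Connection(':memory:')
c.execute('CREATE TABLE data (a, b, c)')
c.executemany('INSERT INTO data VALUES (?, ?, ?)', [
    (1, None, 1), (1, 0, 3), (1, 0, 3), (1, 2, 3),
    (2, None, 1), (2, 0, None), (2, 2, 2), (2, 2, 4), (2, 2, None),
])

# Frequency of each (a, c) pair; NULL plays the role of the "." missing marker.
pairs = {(a, v): n
         for a, v, n in c.execute('SELECT a, c, COUNT(*) FROM data GROUP BY a, c')}
print(pairs)
```

Ratios such as frequency/subtotal can be expressed the same way with a subquery or computed in Python from the returned counts.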
S W has posted [a good basic recipe for this on activestate.com](http://code.activestate.com/recipes/334695-pivotcrosstabdenormalization-of-a-normalized-list/). The essence seems to be... 1. Define xsort=[] and ysort=[] as arrays of your axes. Populate them by iterating through your data, or some other way. 2. Define rs={} as a dict of dicts of your tabulated data, by iterating through your data and incrementing rs[yvalue][xvalue]. Create missing keys if/when needed. Then for example the total for row y would be `sum([rs[y][x] for x in xsort])`
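Tying that recipe to the question's example data — a sketch only, with `None` standing in for the "." missing marker and the `xsort`/`ysort`/`rs` names taken from the recipe above:

```python
data = [
    (1, None, 1), (1, 0, 3), (1, 0, 3), (1, 2, 3),
    (2, None, 1), (2, 0, None), (2, 2, 2), (2, 2, 4), (2, 2, None),
]

xsort, ysort, rs = [], [], {}
for row in data:
    y, x = row[0], row[2]               # correlate column 0 with column 2
    if y not in ysort:
        ysort.append(y)
    if x not in xsort:
        xsort.append(x)
    cell = rs.setdefault(y, {})         # create missing keys as needed
    cell[x] = cell.get(x, 0) + 1

# Row subtotals (and hence frequency/subtotal ratios) fall out directly:
for y in ysort:
    subtotal = sum(rs[y].get(x, 0) for x in xsort)
    print(y, [rs[y].get(x, 0) for x in xsort], subtotal)
```

This reproduces the table in the question: row 1 holds counts 1 and 3 (for values 1 and 3), row 2 holds 2 missing entries plus one each of 1, 2, and 4.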
What's a good data model for cross-tabulation?
[ "", "python", "algorithm", "data-structures", "statistics", "crosstab", "" ]
I am going to be making a series of about 5 sites. They need to all run on the same system. I am planning on making them all point to the same server; then, depending on which domain is used to access it, different content/styles will be served. So it's essentially going to be 5 or so different retail sites specializing in a particular product type, but all using the same back-end code so that it is much easier to maintain. I have never set anything like this up before and want to know if there is any information I should consider, or if anyone knows of a good place that explains how to do this well? Also, we are not hosting ourselves but going through a hosting company (if that matters). Thanks!
**Sounds pretty straightforward** to me: Just have a directory: ``` /var/www/siteLibraries/foo.php ``` Or other suitable directory. Then have ``` /var/www/site1/foo.php /var/www/site2/foo.php ``` and set up the **default class loading** in PHP (<http://uk.php.net/manual/en/language.oop5.autoload.php>) so that when the specialist site instantiates a class ... it will search the site's local libraries (for specialist overrides) and then default to the shared libraries. You can also arrange content this way. A brief explanation, but a rough overview of how I would tackle it if you don't need to worry about both sites sharing permissions (I assume clients won't edit code). **Remember:** your per-site code should specialize from the default code base (OOP is great for this). What is not specific is shared. Of course, the shared code base should be aware it is shared, so things like **logs** should specify which specific site was using the library when an error occurred. **Good Luck**
I'm currently in the process of writing a content management system that addresses this exact issue. A few things that I found helpful to think of. When developing your sites, reduce them each to the lowest common denominator. That is, what elements are common to all of your websites? For example, each website will have a series of pages, and probably some form of shopping cart system. For my system, I have a separate template file for each site. This way, I keep my PHP and HTML apart, which makes things a lot easier. Once I had the lowest common denominator, the hardest challenge for me was telling sites apart. I used my htaccess file to redirect example.com to www.example.com - this way, I can then reliably pick up www.example.com from the header, and use that to select the information for a given site from the database. Aiden also has some good points on code specific. For me, I found that code wasn't particularly the issue, but rather how I implemented that code for a multi site environment. Hope this helps.
Multiple sites on same server sharing code
[ "", "php", "architecture", "lamp", "" ]
According to [MSDN](http://msdn.microsoft.com/en-us/library/ms187819(SQL.90).aspx) the range for the datetime datatype is January 1, 1753, through December 31, 9999. In SQL 2005 the following is interesting: ``` declare @date as datetime SET @date = '1/1/1901 12:00:00 AM' SELECT @date if (isnull(@date,0) = 0) SELECT 'date is null' ELSE SELECT 'date not null' ``` change the date to `'1/1/1900 12:00:00 AM'` and suddenly `isnull(@date,0)` is 0. What am I missing? (Edit) fixed date range for SQL 2005, link to SQL 2005 BOL
Try this: ``` declare @date as datetime SET @date = '1/1/1901 12:00:00 AM' SELECT @date, CASE WHEN @date IS NULL THEN 'date is null' ELSE 'date not null' END ```
``` declare @date as datetime SET @date = '1/1/1901 12:00:00 AM' SELECT @date if @date is null SELECT 'date is null' ELSE SELECT 'date not null' ``` `ISNULL()` is a conversion function. It is not a check for whether a value is `NULL`. A date has a numeric representation. The numeric representation of `'1/1/1900 12:00:00 AM'` (the datetime zero point) happens to be `0` on SQL Server. You can't compare the result of `ISNULL(@date,0)` to `0` without losing the ability to distinguish between a date that is actually represented as `0` and a date that is `NULL`.
SQL date type range not according to spec?
[ "", "sql", "datetime", "date-range", "isnull", "" ]
I've recently become a fan of the [VSBuildStatus](http://visualstudiogallery.msdn.microsoft.com/en-us/2a2293b4-1808-44aa-b030-661f6803d8a1) Visual Studio add-in which shows the status of your build in a nice graphical way. Its much more informative than the current progress bar or "Output" window. However, I can't seem to find an easy way to make it pop up during the build, then hide itself when done, which is similar to the current option in visual studio "Show Output window when build starts". I've seen a few references to creating visual studio macros that can look for build events, but my current experience with VS macros are that they are slow and only in VB. I don't mind going that route if its the only way, but I figured I'd ask the hive mind first. Thanks! Jason
It's not that difficult to hook up to an Environment Event: 1) Open the Macro IDE (Tools Menu --> Macros) 2) Open the 'EnvironmentEvents' module in the 'MyMacros' project 3) Add the following code after the auto-generated code: ``` Private Sub BuildEvents_OnBuildBegin(ByVal Scope As EnvDTE.vsBuildScope, ByVal Action As EnvDTE.vsBuildAction) Handles BuildEvents.OnBuildBegin DTE.Windows.Item("{43CD29AA-0CA4-4F1C-8265-219788EF4908}").Activate() 'Build Status End Sub ``` You can dismiss the window by clicking anywhere outside of it. You'll also see that it is very fast... BTW, thanks for the addin, I didn't know about it and it is very useful indeed ;-)
How about a keyboard macro? I think it still generates a vb macro in the background, but I've used plenty that aren't slow. <http://www.madprops.org/blog/quick-visual-studio-keyboard-macros/> You could start recording, open the window you want, Ctrl-shift-b, stop recording, etc.
In Visual Studio, how do you make a specific window open during a build?
[ "", "c#", "visual-studio", "visual-studio-2008", "" ]
What is the best way to find if a DOM object is visible? Various cases when object is considered not visible: 1. display: none; 2. visibility: hidden; 3. one of the parents has display: none or visibility: hidden 4. Another DOM element is obscuring the queried element (Nice to have, but I can manage without it). 5. Item outside of screen boundaries.
Since it's MooTools, and this got dealt with on the MooTools mailing list, it is now going to be a part of Element.shortcuts... ``` /* * Inspired from http://github.com/jeresig/sizzle/commit/7631f9c3f85e5fa72ac51532399cb593c2cdc71f * and this http://github.com/jeresig/sizzle/commit/5716360040a440041da19823964f96d025ca734b * and then http://dev.jquery.com/ticket/4512 */ Element.implement({ isHidden: function(){ var w = this.offsetWidth, h = this.offsetHeight, force = (this.tagName === 'TR'); return (w===0 && h===0 && !force) ? true : (w!==0 && h!==0 && !force) ? false : this.getStyle('display') === 'none'; }, isVisible: function(){ return !this.isHidden(); } }); ``` <http://gist.github.com/137880>
Stolen from <http://snippets.dzone.com/posts/show/5757>: ``` function isVisible(obj) { if (obj == document) return true if (!obj) return false if (!obj.parentNode) return false if (obj.style) { if (obj.style.display == 'none') return false if (obj.style.visibility == 'hidden') return false } //Try the computed style in a standard way if (window.getComputedStyle) { var style = window.getComputedStyle(obj, "") if (style.display == 'none') return false if (style.visibility == 'hidden') return false } //Or get the computed style using IE's silly proprietary way var style = obj.currentStyle if (style) { if (style['display'] == 'none') return false if (style['visibility'] == 'hidden') return false } return isVisible(obj.parentNode) } ```
Best way to find if a DOM object is visible or not, using mootools
[ "", "javascript", "mootools", "" ]
Have you ever attempted using Swing only to end up changing course because it just couldn't do what you wanted? I'm pretty new to Swing, having only used it for school projects over 5 years ago, but it seems Swing has come a long way in providing a more native look and feel, so much so that I'm considering using it to develop the GUI for an app on Mac OS X. Before I do, though, I wanted to see if anyone has run into any showstopper issues that prevented them from using Swing. Just off the top of my head, some possibilities: * Problems developing custom components that looked "right" * Bad interactions with native applications and widgets * Performance issues (unresponsiveness, repaint problems) * Inability to mimic native behaviors (like Dock interaction)
Swing isn't going to give you perfect fidelity with the hosting OS. Sun simply can't devote the considerable resources necessary to do so. My impression is that Swing has gotten much better, but is still going to look out of place by default. The minimum required to even hope to pass as a Mac app: * package your .jar in a .app * set the L&F to system default * set apple.laf.useScreenMenuBar property to true + must do this before any UI code Dock interaction is non-existent in standard Java. You'll have to use Apple's Cocoa-Java bridge, which is no longer supported. I have no idea how tractable JNI is on OS X, which is the only real alternative. Performance shouldn't be a problem. Drag & Drop is probably as hairy on OS X as it is everywhere else. Basically, if you're explicitly targeting OS X you're best off using Objective-C. It's far from impossible to build an app on OS X using Java & Swing, but it's a lot of work to make it look "native".
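The two code-level bullet points above (system L&F, menu-bar property before any UI code) can be sketched as follows. This is a minimal, hedged example: the class and method names are illustrative, and the `apple.laf.useScreenMenuBar` property is simply ignored on non-Mac platforms.

```java
import javax.swing.UIManager;

public class MacSwingSetup {

    // Applies the Mac-friendly settings listed above.
    // Must run before any Swing/AWT code creates UI state.
    public static void applyMacSettings() {
        // Put JMenuBars in the top-of-screen menu bar, Mac-style
        // (a no-op on Windows/Linux).
        System.setProperty("apple.laf.useScreenMenuBar", "true");
        try {
            // System look and feel: Aqua on Mac OS X.
            UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        } catch (Exception e) {
            // Fall back silently to the default cross-platform (Metal) L&F.
        }
    }

    public static void main(String[] args) {
        applyMacSettings();
        System.out.println(UIManager.getLookAndFeel().getName());
        // ...only now construct JFrames, menus, etc...
    }
}
```

Packaging the .jar inside a .app bundle (where the property can also be set via Info.plist) is the other half, but that lives outside the Java code.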
As Kevin and John said, you should try Objective-C, Cocoa and XCode if you are only targeting Mac users. The [developer tools](http://developer.apple.com/leopard/overview/tools.html) for Mac are freely available. If you want to (or have to) use Java and Swing you can use some libraries to create a GUI that looks good on Macs: * [Quaqua look and feel](http://www.randelshofer.ch/quaqua/) * [MacWidgets](http://code.google.com/p/macwidgets/) For deploying your application you should read the [JarBundler](http://developer.apple.com/documentation/Java/Conceptual/Jar_Bundler/Introduction/Introduction.html) docs. However, in this case interaction with the Dock and native applications is very limited. Some other good links are: * [Making Java/Swing Applications Look (More) Like Native Mac OS X Applications](http://www.devdaily.com/apple/mac/java-mac-native-look/java-on-mac.shtml) * [Java: How to handle drop events to the Mac OS X Dock icon](http://www.devdaily.com/blog/post/jfc-swing/java-handle-drag-drop-events-mac-osx-dock-application-icon/)
Java Swing GUIs on Mac OS X
[ "", "java", "user-interface", "swing", "macos", "" ]
Is it possible to convert a string to an operator for use in a logical condition? For example ``` if(x Convert.ToOperator(">") y) {} ``` or ``` if(x ">" as Operator y){} ``` I appreciate that this might not be a standard-practice question, therefore I'm not interested in answers that ask me why the hell I would want to do something like this. Thanks in advance EDIT: OK I agree, only fair to give some context. We have a system built around reflection and XML. I would like to be able to say something like, for ease. ``` <Value = "Object1.Value" Operator = ">" Condition = "0"/> ``` EDIT: Thanks for your comments, I can't properly explain this on here. I guess my question is answered by "You can't", which is absolutely fine (and what I thought). Thanks for your comments. EDIT: Sod it I'm going to have a go. Imagine the following ``` <Namespace.LogicRule Value= "Object1.Value" Operator=">" Condition="0"> ``` This will get reflected into a class, so now I want to test the condition, by calling ``` bool LogicRule.Test() ``` That's the bit where it would all need to come together. EDIT: OK, so having never looked at Lambdas or Expressions I thought I would have a look after @jrista's suggestions. My system allows Enums to be parsed, so Expressions are attractive because of the ExpressionType Enum.
So I created the following class to test the idea: ``` public class Operation { private object _Right; private object _Left; private ExpressionType _ExpressionType; private string _Type; public object Left { get { return _Left; } set { _Left = value; } } public object Right { get { return _Right; } set { _Right = value; } } public string Type { get { return _Type; } set { _Type = value; } } public ExpressionType ExpressionType { get { return _ExpressionType; } set { _ExpressionType = value; } } public bool Evaluate() { var param = Expression.Parameter(typeof(int), "left"); var param2 = Expression.Parameter(typeof(int), "right"); Expression<Func<int, int, bool>> expression = Expression.Lambda<Func<int, int, bool>>( Expression.MakeBinary(ExpressionType, param, param2), param, param2); Func<int, int, bool> del = expression.Compile(); return del(Convert.ToInt32(Left), Convert.ToInt32(Right)); } } ``` Obviously this will only work for Int32 right now and the basic ExpressionTypes; I'm not sure I can make it generic. I've never used Expressions before; however, this seems to work. This can then be declared in our XML as ``` <Operation Left="1" Right="2" ExpressionType="LessThan" Type="System.Int32"/> ```
You could do something like this: ``` public static bool Compare<T>(string op, T x, T y) where T:IComparable { switch(op) { case "==" : return x.CompareTo(y)==0; case "!=" : return x.CompareTo(y)!=0; case ">" : return x.CompareTo(y)>0; case ">=" : return x.CompareTo(y)>=0; case "<" : return x.CompareTo(y)<0; case "<=" : return x.CompareTo(y)<=0; default: throw new ArgumentException("Unknown operator: " + op); } } ```
**EDIT** As JaredPar pointed out, my suggestion below won't work as you can't apply the operators to generics... So you'd need to have specific implementations for each type you wanted to compare/compute... ``` public int Compute (int param1, int param2, string op) { switch(op) { case "+": return param1 + param2; default: throw new NotImplementedException(); } } public double Compute (double param1, double param2, string op) { switch(op) { case "+": return param1 + param2; default: throw new NotImplementedException(); } } ``` **ORIG** You could do something like this. You'd also need to try/catch all this to ensure that whatever T is supports the particular operations. Mind if I ask why you would possibly need to do this? Are you writing some sort of mathematical parser? ``` public T Compute<T> (T param1, T param2, string op) where T : struct { switch(op) { case "+": return param1 + param2; default: throw new NotImplementedException(); } } public bool Compare<T> (T param1, T param2, string op) where T : struct { switch (op) { case "==": return param1 == param2; default: throw new NotImplementedException(); } } ```
C# convert a string for use in a logical condition
[ "", "c#", "logic", "" ]
Good morning everyone, I am using an update command in PHP to update data in MySQL. This is my code: ``` $sql=mysql_query("UPDATE blpublication SET JournalName = '$_POST[journal]', AcceptanceDate = '$_POST[acceptancedate]', PublishedDate = '$_POST[publisheddate]', Comment = '$_POST[comment]' WHERE JobNo = '$_POST[jobno]'"); if (!mysql_query($sql,$con)) { die('Error: ' . mysql_error()); } echo "record Updated"; ``` It does update the field, but it gives me the following error, and I cannot figure out why I am getting it: "Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '1' at line 1" Can you help me with this? Best, Zeeshan
Can you tell us what exactly the output of `$sql` is? By the way, **BIG** security hole there. You should **always** escape query inputs, namely: ``` $journal = mysql_real_escape_string($_POST['journal']); $acceptance_date = mysql_real_escape_string($_POST['acceptancedate']); $publish_date = mysql_real_escape_string($_POST['publisheddate']); $comment = mysql_real_escape_string($_POST['comment']); $job_no = intval($_POST['jobno']); // assuming jobNo is a number $sql = <<<END UPDATE blpublication SET JournalName = '$journal', AcceptanceDate = '$acceptance_date', PublishedDate = '$publish_date', Comment = '$comment' WHERE JobNo = $job_no END; mysql_query($sql); if (mysql_error()) { die("Error executing query '$sql': " . mysql_error()); } echo "record Updated"; ```
I would sanitize your input first. Unsanitized input can lead to some very nasty errors, such as the one you are experiencing, as well as malicious attacks. Look up [SQL Injection](http://en.wikipedia.org/wiki/SQL_injection).
Error in MySQL update command. (in php)
[ "", "php", "mysql", "" ]
My particular scenario involves doing some text transformation using regular expressions within a private method. The private method calls preg\_replace\_callback, but it seems that callbacks need to be public on objects, so I'm stuck breaking out of the private world and exposing implementation details when I'd rather not. So, in a nutshell: Can I use an instance method as a callback without losing encapsulation? Thanks.
Yes, it seems you can: ``` <?php //this works class a { private function replaceCallback($m) { return 'replaced'; } public function test() { $str = " test test "; $result = preg_replace_callback('/test/', array($this, 'replaceCallback'), $str); echo $result; } } $a = new a(); $a->test(); //this doesn't work $result = preg_replace_callback('/test/', array(new a(), 'replaceCallback'), ' test test '); echo $result; ``` So it seems that `preg_replace_callback()`, or PHP's callback mechanism, is aware of the scope in which it was called. Tested on 5.2.8
Can you create a separate helper class for text processing, and create the callback method within that class (it can be public there)? You can pass in a reference to the original class(object) if you need access to specific instance data. The public interface to your main class remains consistent and clean, and you pull specific string processing out of a class it probably doesn't belong in anyway...
Can I use private instance methods as callbacks?
[ "", "php", "callback", "encapsulation", "" ]
Does anyone know if it's possible to remove a CSS link at run time? Basically, I have partial HTM files in a directory, so if I add a CSS link (so I can use my design-time editor) then this link isn't valid at runtime, and I don't need it as my CSS is in another file. I just want to be able to strip it out when I load it in. Anyone know of something that can do this? Same goes for the paths on images etc. While in design time I need them like ....\content\images etc so I can design with ease, but at runtime I need them to be content\images etc. Anyone know of a good addon for jQuery, framework or similar? Thanks
If I understand correctly: you want your project to use an 'offline' stylesheet and when you upload the project you want to use the original stylesheets? **Firstly:** Make sure the file structure is the same offline as it will be online, then you won't have URL problems. At least you'll be able to sort out the problems on your localhost before you make your project live. **Secondly:** Why complicate things with javascript? Just comment out your 'design-time' stylesheet(s) before you upload your project. ``` <link href="/styles/live.css" rel="stylesheet" /> <!-- <link href="/styles/design.css" rel="stylesheet" /> --> ``` If this is not what you were looking for, please clarify your question.
You can access a `link` tag by an `id`. So try this (using jQuery): ``` $('#testLink').attr('href','') ``` That should remove any styling associated with that stylesheet. I just tried it with FF3, and it worked.
remove CSS Link / update paths with jquery/javsscript? (Design time CSS)
[ "", "javascript", "jquery", "css", "visual-studio-2008", "" ]
I am currently in the process of removing html tags from fields within an internal database. Everything has gone smoothly except for turning `<br>` tags into plain-text new line characters. I would like to convert this: ``` The victory halted Spain&rsquo;s 35-game unbeaten streak, handing the Spanish their first defeat since November 2006. The Americans now advance to the final Sunday to face the winner of Thursday&rsquo;s semifinal between South Africa and Brazil, the five-time World Cup winner. Brazil defeated the Americans, 3-0, in their earlier meeting in this tournament.<br> <br> In the final, though, the United States will be without midfielder Michael Bradley, who received a red card for a harsh tackle in the 87th minute, the third such ejection for the Americans in this tournament. It was the only careless blemish on an otherwise nearly perfect evening. ``` into this: ``` The victory halted Spain's 35-game unbeaten streak, handing the Spanish their first defeat since November 2006. The Americans now advance to the final Sunday to face the winner of Thursday's semifinal between South Africa and Brazil, the five-time World Cup winner. Brazil defeated the Americans, 3-0, in their earlier meeting in this tournament. In the final, though, the United States will be without midfielder Michael Bradley, who received a red card for a harsh tackle in the 87th minute, the third such ejection for the Americans in this tournament. It was the only careless blemish on an otherwise nearly perfect evening. ``` I am using the following line of code to change the `<br>` to a new line character: ``` value = value.Replace("<br>", Environment.NewLine).Trim(); ``` After running that code this is what is saved in my database: ``` The victory halted Spain's 35-game unbeaten streak, handing the Spanish their first defeat since November 2006. The Americans now advance to the final Sunday to face the winner of Thursday's semifinal between South Africa and Brazil, the five-time World Cup winner. 
Brazil defeated the Americans, 3-0, in their earlier meeting in this tournament. In the final, though, the United States will be without midfielder Michael Bradley, who received a red card for a harsh tackle in the 87th minute, the third such ejection for the Americans in this tournament. It was the only careless blemish on an otherwise nearly perfect evening. ``` If I take the parsed text saved to my database and paste it into notepad or Word I get only one paragraph instead of two. Is this the correct way to handle this? The database I am using is SQL Server 2005.
Based on your follow up comment (when you debugged it), it sounds like the correct value is at least being sent to the database correctly. It's probably not this simple, but worth checking... When you say you "ran a query to pull the value ... and pasted it to Word", what are you using to do the query? Because I know if you query something using SQL Server 2005 Management Studio in the default "Results to Grid" view, it doesn't render new lines properly (I think it just replaces them with spaces)... If you switch it to "Results to Text" (or you get the value from the database in your code & debug the value returned), you'll get a more accurate representation of the actual value, complete with new lines showing...
Your method of using **Environment.Newline** is correct. I believe the issue is with how some queries are returned directly in SQL Server, assuming you're copy/pasting directly out of SQL Server Management Studio (or similar). I'm about 99% positive that if you pull the data out with a SqlConnection and then output it to a winform, text file, etc... then you'll get the line breaks you're looking for. Sorry, but I can't recall why this happens when you copy/paste directly out of the grid of results in SQL Server.
Parse <br> to plain text new paragraph
[ "", "c#", "html", "string", "" ]
I'm using the PHP Doctrine ORM to build my queries. However, I can't quite seem to figure how to write the following WHERE clause using DQL (Doctrine Query Language): ``` WHERE name='ABC' AND (category1 = 'X' OR category2 = 'X' OR category3 = 'X') AND price > 10 ``` How can I specify where the parentheses go? What I currently have in my PHP code is this: ``` ->where('name = ?', 'ABC') ->andWhere('category1 = ?', 'X') ->orWhere('category2 = ?', 'X') ->orWhere('category3 = ?', 'X') ->andWhere('price > ?', 10) ``` But this produces something like ``` WHERE name='ABC' AND category1 = 'X' OR category2 = 'X' OR category3 = 'X' AND price > 10 ``` which, due to order of operations, doesn't return the intended results. Also, is there a difference between the "where", "andWhere", and "addWhere" methods? **UPDATE** Ok, it seems like you can't do complex queries using DQL, so I've been trying to write the SQL manually and use the andWhere() method to add it. However, I'm using WHERE..IN and Doctrine seems to be stripping out my enclosing parentheses: ``` $q->andWhere("(category1 IN $subcategory_in_clause OR category2 IN $subcategory_in_clause OR category3 IN $subcategory_in_clause)"); ```
From my experience, each complex `where` function is grouped within parenthesis (I'm using Doctrine 1.2.1). ``` $q->where('name = ?', 'ABC') ->andWhere('category1 = ? OR category2 = ? OR category3 = ?', array('X', 'X', 'X')) ->andWhere('price < ?', 10) ``` produces the following SQL: ``` WHERE name = 'ABC' AND (category1 = 'X' OR category2 = 'X' OR category3 = 'X') AND price < 10 ```
The correct way of doing this can be found at [doctrine 2 - query builder conditional queries... If statements?](https://stackoverflow.com/questions/7716624/doctrine-2-query-builder-conditional-queries-if-statements/7720723#7720723) as noted by @Jekis. Here is how to use the expression builder to solve this like in @anushr's example. ``` $qb->where($qb->expr()->eq('name', ':name')) ->andWhere( $qb->expr()->orX( $qb->expr()->eq('category1', ':category1'), $qb->expr()->eq('category2', ':category2'), $qb->expr()->eq('category3', ':category3') ) ) ->andWhere($qb->expr()->lt('price', ':price')) ->setParameter('name', 'ABC') ->setParameter('category1', 'X') ->setParameter('category2', 'X') ->setParameter('category3', 'X') ->setParameter('price', 10); ```
Complex WHERE clauses using the PHP Doctrine ORM
[ "", "php", "mysql", "doctrine", "dql", "" ]
I have the following MySQL timestamp: 2009-06-23 16:21:48 How can I convert it to a format like mktime()?
There is a MySQL function [`unix_timestamp`](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_unix-timestamp). In your SQL query, instead of selecting the Datetime or Timestamp column directly, do this: ``` SELECT unix_timestamp(MyDatetimeColumn) FROM MyTable ``` Alternatively, if you have the string already, you could use the PHP function [`strtotime()`](https://www.php.net/manual/en/function.strtotime.php).
OK, I was wrestling with this for a week (longer, but I took a break from it). I have two specific fields in tables ``` creationDate > timestamp > current_timestamp editDate > timestamp > current_timestamp ``` they were pulling out either Dec 31 1969, or just nothing... annoying... very annoying. In the MySQL query I did: ``` unix_timestamp(creationDate) AS creationDate, unix_timestamp(editDate) AS editDate ``` in the PHP conversion I did: ``` $timestamp = $result_ar['creationDate']; $creationDate = date("Y-M-d (g:i:s a)", $timestamp); echo($creationDate); $editstamp = $result_ar['editDate']; $editDate = date("Y-M-d (g:i:s a)", $editstamp); echo($editDate); ``` this solved my problem for me, returning ``` 2010-Jun-28 (5:33:39 pm) 2010-Jun-28 (12:09:46 pm) ``` respectively. I hope this helps someone out...
convert mysql timestamp to mktime
[ "", "php", "mysql", "timestamp", "mktime", "" ]
I have five tables: ``` models: id, name, specification models_networks: id, model_id, network_id networks: id, name, description countries_networks: id, country_id, network_id countries: id, countryName, etc, etc ``` * the `models` table is connected to the `networks` table via `models_networks` with a many to many relation. * the `networks` table is connected to the `countries` table via `countries_networks` with a many to many relation I need to do the following query, but I'm stuck: > Select all the models that will work in a specific country. e.g.: say France has two networks. PigNetwork and CowNetwork. I want to get all the models that work on PigNetwork or CowNetwork, basically any that work in that country one way or the other. If I've made myself clear, can someone help with the JOIN query please? I've only ever gone as far as joining two tables before. Thanks.
``` SELECT m.name AS model_name, c.countryName, COUNT(*) AS network_count FROM models AS m INNER JOIN models_networks AS mn ON mn.model_id = m.id INNER JOIN networks AS n ON n.id = mn.network_id INNER JOIN countries_networks AS cn ON cn.network_id = n.id INNER JOIN countries AS c ON c.id = cn.country_id WHERE c.countryName = 'France' GROUP BY m.name, c.countryName ```
Something along the lines of this should work... ``` SELECT M.Name As ModelName FROM Countries C INNER JOIN Countries_Networks CN ON C.CountryId = CN.CountryId INNER JOIN Networks N ON CN.NetworkId = N.NetworkId INNER JOIN ModelNetworks MN ON MN.NetworkId = N.NetworkId INNER JOIN Model M ON M.ModelId = MN.ModelId WHERE C.CountryName = 'FRANCE' ```
SQL join over five tables
[ "", "sql", "mysql", "join", "" ]
Can anybody tell me how to store and return a list of strings? I am asking this because I have written a function which returns a collection of strings, and I want to prepare a COM component for that and need to consume that COM component (to get the returned list) in VC++, where I can extend some functionality using that list of strings. I hope this is clear.
[List<string>](http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx) or string[] are the best options. Here is a sample method that returns list of strings : ``` public static List<string> GetCities() { List<string> cities = new List<string>(); cities.Add("Istanbul"); cities.Add("Athens"); cities.Add("Sofia"); return cities; } ```
In C# you can simply return `List<string>`, but you may want to return `IEnumerable<string>` instead as it allows for lazy evaluation.
How to return a list of strings in C#
[ "", "c#", "visual-c++", "com", "" ]
Let's assume I have a list with objects of type `Value`. `Value` has a `Name` property: ``` private List<Value> values = new List<Value> { new Value { Id = 0, Name = "Hello" }, new Value { Id = 1, Name = "World" }, new Value { Id = 2, Name = "World" }, new Value { Id = 3, Name = "Hello" }, new Value { Id = 4, Name = "a" }, new Value { Id = 5, Name = "a" }, }; ``` Now I want to get a list of all "repeating" values (elements where the name property was identical with the name property of the previous element). In this example I want a list with the two elements "world" and "a" (id = 2 and 5) to be returned. Is this even possible with LINQ? Of course I could do something like this: ``` List<Value> tempValues = new List<Value>(); String lastName = String.Empty; foreach (var v in values) { if (v.Name == lastName) tempValues.Add(v); lastName = v.Name; } ``` but since I want to use this query in a more complex context, maybe there is a "linqish" solution.
There won't be anything built in along those lines, but if you need this frequently you could roll something bespoke but fairly generic: ``` static IEnumerable<TSource> WhereRepeated<TSource>( this IEnumerable<TSource> source) { return WhereRepeated<TSource,TSource>(source, x => x); } static IEnumerable<TSource> WhereRepeated<TSource, TValue>( this IEnumerable<TSource> source, Func<TSource, TValue> selector) { using (var iter = source.GetEnumerator()) { if (iter.MoveNext()) { var comparer = EqualityComparer<TValue>.Default; TValue lastValue = selector(iter.Current); while (iter.MoveNext()) { TValue currentValue = selector(iter.Current); if (comparer.Equals(lastValue, currentValue)) { yield return iter.Current; } lastValue = currentValue; } } } } ``` Usage: ``` foreach (Value value in values.WhereRepeated(x => x.Name)) { Console.WriteLine(value.Name); } ``` You might want to think about what to do with triplets etc - currently everything except the first will be yielded (which matches your description), but that might not be quite right.
You could implement a [Zip extension](http://blogs.msdn.com/ericlippert/archive/2009/05/07/zip-me-up.aspx), then Zip your list with .Skip(1) and then Select the rows that match. This should work and be fairly easy to maintain: ``` values .Skip(1) .Zip(values, (first,second) => first.Name==second.Name?first:null) .Where(i => i != null); ``` The slight disadvantage of this method is that you iterate through the list twice.
Use Linq to find consecutively repeating elements
[ "", "c#", "linq", "" ]
We have a program with a main() that parses certain CLPs but does not save them anywhere. I then have my own plug-in code that needs access to the original CLPs (so I can transmit more parameters) for it. However, I cannot change main() [I saw that there is apparently a way to do this in C#](https://stackoverflow.com/questions/742630/how-to-access-command-line-parameters-outside-of-main-in-c), I'm looking for an equivalent Java solution on Linux. UPDATE: Obviously, I'm aware of how main() works. Unfortunately, I cannot change the existing application or the way it is invoked (except for CLPs). I can only access via a sandboxed plugin code. My question is whether there is a way to get the command line (rather then the environment variables with -D) that the JVM was invoked with.
Apart from doing it in main in some way I think the only other option that you have would be to drop to the operating system level and execute some commands to get the arguments. On linux the cmd line arguments for a running process are stored at **/proc/*pid*/cmdline** So to get them you would have to find the process id. See here: [How can a Java program get its own process ID?](https://stackoverflow.com/questions/35842/process-id-in-java) Then using this open **/proc/*pid*/cmdline** and parse it. The format of this file and an example in c is here: <http://www.unix.com/unix-advanced-expert-users/86740-retrieving-command-line-arguments-particular-pid.html> It might be best to wrap these two calls in one shell script that you call from java. Please note that this will be extremely non portable and is a bit hacky. But if needs must...
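The /proc approach described above can also be done from Java itself without dropping to a shell script, at least for reading your own process's command line. A hedged sketch, assuming Linux and Java 8+ (`java.nio.file` postdates this answer); the class and method names are illustrative:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class CmdLine {

    // Reads this JVM's full command line from the proc filesystem.
    // Linux-only: /proc/self/cmdline holds the arguments separated by NUL
    // bytes, and "self" avoids having to look up your own pid first.
    public static List<String> readOwnCmdLine() {
        try {
            byte[] raw = Files.readAllBytes(Paths.get("/proc/self/cmdline"));
            List<String> args = new ArrayList<>();
            int start = 0;
            for (int i = 0; i < raw.length; i++) {
                if (raw[i] == 0) {                 // NUL terminates each argument
                    args.add(new String(raw, start, i - start));
                    start = i + 1;
                }
            }
            if (start < raw.length) {              // defensive: no trailing NUL
                args.add(new String(raw, start, raw.length - start));
            }
            return args;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // First entry is the executable (e.g. ".../bin/java"); the rest are the
        // JVM flags and program arguments the process was started with.
        System.out.println(readOwnCmdLine());
    }
}
```

For another process you would substitute its pid for `self`, subject to permissions, and it stays exactly as non-portable as the answer warns.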
The solution is easy once you realize that Java's main method is just another static method that takes a String array as argument. Create a new class that stores the CLPs, then calls the old class. Later on, you can access your CLPs using the new class: ``` public class MyMainClass { // assumes NotToBeChangedMainClass is in the same package or imported // (not final: a blank final field cannot be assigned from main) public static String[] ARGUMENTS; public static void main(String... args) { ARGUMENTS = args; NotToBeChangedMainClass.main(args); } } ``` Finally, change whatever external caller (e.g. any Batch files) to use MyMainClass instead of NotToBeChangedMainClass. If you are using runnable jars or something similar, this requires changing the appropriate configuration file.
In Java, is there a way to obtain the command line parameters even if main() didn't save them?
[ "", "java", "command-line-arguments", "" ]
Long story short: I'm in a situation where I'd like a PHP-style getter, but in JavaScript. My JavaScript is running in Firefox only, so Mozilla specific JS is OK by me. The only way I can find to make a JS getter requires specifying its name, but I'd like to define a getter for *all* possible names. I'm not sure if this is possible, but I'd very much like to know.
`Proxy` can do it! I'm so happy this exists!! An answer is given here: [Is there a javascript equivalent of python's \_\_getattr\_\_ method?](https://stackoverflow.com/questions/1529496/is-there-a-javascript-equivalent-of-pythons-getattr-method) . To rephrase in my own words: ``` var x = new Proxy({}, { get(target, name) { return "Its hilarious you think I have " + name } }) console.log(x.hair) // logs: "Its hilarious you think I have hair" ``` Proxy for the win! Check out the MDN docs: <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy> Works in chrome, firefox, and node.js. Downsides: doesn't work in IE - freakin IE. Soon.
You can combine proxy and class to have a **nice looking code like php**: ``` class Magic { constructor () { return new Proxy(this, this); } get (target, prop) { return this[prop] || 'MAGIC'; } } ``` this binds to the handler, so you can use this instead of target. **Note: unlike PHP, proxy handles all prop access.** ``` let magic = new Magic(); magic.foo = 'NOT MAGIC'; console.log(magic.foo); // NOT MAGIC console.log(magic.bar); // MAGIC ``` You can check which browsers support proxy <http://caniuse.com/#feat=proxy>.
JavaScript getter for all properties
[ "", "javascript", "firefox", "getter", "" ]
I am trying to get a better handle on decoupling my code, code reuse, etc. I'm tired of typing the below every time I want to read some rows: ``` using(SqlConnection conn = new SqlConnection(myConnString)) { using(SqlCommand cmd = new SqlCommand(cmdTxt, conn)) { conn.Open(); using(SqlDataReader rdr = cmd.ExecuteReader()) { while(rdr.Read()) { /* do something with rows */ } } } } ``` I understand there is LINQ to SQL (I don't like it), and the Entity Framework (still a baby). I have no problems having to type my queries out, I just don't want to have to type the command construction, row iterator, etc each time. I looked around and found something that I thought would work for me, and tried to implement it to make things easier for me. As you can see in the comment, I get an error that the SqlDataReader is closed. I'm guessing it's probably because of the using statement in the DataFactory.ExecuteReader() method. When the reader is returned, the dispose method is called on my SqlConnection and SqlCommand variables. Am I right there? **Edit:** I updated my code example to better reflect what I am doing. 
``` public class DataFactory { public DataFactory() {} public DataFactory(string connectionString) { _connectionString = connectionString; } protected string _connectionString = "Data Source=Localhost, etc, etc"; private string ConnectionString { get{return _connectionString;} } public SqlConnection GetSqlConnection() { return new SqlConnection(ConnectionString); } public SqlDataReader ExecuteReader(string cmdTxt) { using(SqlConnection conn = new SqlConnection(ConnectionString)) { using(SqlCommand cmd = new SqlCommand(cmdTxt, conn)) { conn.Open(); return cmd.ExecuteReader(); } } } } public interface IRepository<T> { T GetById(int id); } public class MyTypeRepository : IRepository<MyType> { private static DataFactory _df = new DataFactory(); public MyType GetById(int id) { string cmdTxt = String.Format("SELECT Id, Name FROM MyTable WHERE ID = {0}", id); using(SqlDataReader rdr = _df.ExecuteReader(cmdTxt)) { if(rdr.Read()) /* I get an error that the reader is already closed here */ { return new MyType( Convert.ToInt32(rdr["Id"]), (string)rdr["Name"]); } else { return null; } } } } public class MyType { public MyType(int id, string name) { _id = id; _name = name; } private string _name; public string Name { get{return _name;} } private int _id; public int Id { get{return _id;} } public override string ToString() { return string.Format("Name: {0}, Id: {1}", Name, Id); } } public class Program { private static MyTypeRepository _mtRepo = new MyTypeRepository(); static void Main() { MyType myType = _mtRepo.GetById(1); Console.WriteLine(myType.ToString()); } } ``` I also would like to know if what I'm doing makes any sense, or, if not, how to achieve something similar so that I don't have to type the connection creation, etc so often.
Your method ExecuteReader will close the connection before returning the Reader. Instead it should be implemented something like: ``` public IDataReader ExecuteReader(string cmdTxt) { SqlConnection conn = new SqlConnection(...); try { SqlCommand cmd = new SqlCommand(cmdTxt, conn); conn.Open(); return cmd.ExecuteReader(CommandBehavior.CloseConnection); } catch { conn.Close(); throw; } } ``` Callers of the ExecuteReader method will need to dispose the IDataReader: ``` using(IDataReader reader = ExecuteReader(commandText)) { ... } // reader will be disposed here and will close the connection. ``` Note that the above does not call Dispose on the SqlCommand object. In my experience and from looking at SqlCommand with Reflector it's not necessary as long as the SqlConnection is disposed. But I believe the following will work if you do want to dispose it: ``` public IDataReader ExecuteReader(string cmdTxt) { SqlConnection conn = new SqlConnection(...); SqlCommand cmd = null; try { cmd = new SqlCommand(cmdTxt, conn); conn.Open(); IDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection); cmd.Dispose(); return reader; } catch { if (cmd != null) cmd.Dispose(); conn.Close(); throw; } } ```
It's very important that you close and/or dispose your data reader after using it, and everyone who wants to use your DataFactory has to remember to do that. I think it's a good idea to return a DataTable instead of a SqlDataReader so that your DataFactory is not dependent on SqlDataReader. I mean: ``` public DataTable ExecuteReader(string cmdTxt) { using(SqlConnection conn = new SqlConnection(ConnectionString)) { using(SqlCommand cmd = new SqlCommand(cmdTxt, conn)) { conn.Open(); using(SqlDataReader reader=cmd.ExecuteReader()) { DataTable dt=new DataTable(); dt.Load(reader); return dt; } } } } ``` **EDIT:** Good point. I don't like data tables either (we use NHibernate, so I actually don't use data tables in our applications). So if you'd like to map a data reader to your own objects, you could have a data mapper that maps the reader to your own objects. I mean: ``` public T[] ExecuteReader<T>(string cmdTxt) { using(SqlConnection conn = new SqlConnection(ConnectionString)) { using(SqlCommand cmd = new SqlCommand(cmdTxt, conn)) { conn.Open(); using(SqlDataReader reader=cmd.ExecuteReader()) { var result=new List<T>(); while(reader.Read()) result.Add(ObjectMapper.MapReader<T>(reader)); return result.ToArray(); } } } } ```
How to manage SqlDataReaders in a data access layer?
[ "c#", "data-access-layer" ]
I have a destination path and a file name as strings and I want to concatenate them in C++. Is there a way to do this and let the program/compiler choose between / and \ for Windows or Unix systems?
If you wanted to do it at compile time you could certainly do something like ``` #ifdef WIN32 #define OS_SEP '\\' #else #define OS_SEP '/' #endif ``` Or you could just use '/' and things will work just fine on windows (except for older programs that parse the string and only work with '\'). It only looks funny if displayed to the user that way.
As is so often the case, Boost has a library that does what you want. [Here](http://www.ibm.com/developerworks/aix/library/au-boostfs/index.html)'s a tutorial.
How to generate an OS-independent path in C++
[ "c++", "cross-platform", "filesystems" ]
I've used HashSet and Dictionary a lot in C#, and found them very fast... I've tried using std::map and std::hash\_map and am finding them very slow in comparison. Does this sound like expected behaviour? Is there something I might be doing wrong in my use of std::hash\_map? Or, is there a better C++ Hash container out there? I'm hashing int32s, usually around 100,000 of them. Update: I created a repro in C# and C++. It runs two trials; they take 19ms and 13ms in C#, and about 11,000ms in C++. There must be something really wrong with my C++ code :) (Both were run as Release builds, both are Console apps) C# Output: ``` Found 511 values in the intersection, in 19 ms Found 508 values in the intersection, in 13 ms ``` C++ Output: ``` Found 308 values in the intersection, in 11764.7ms Found 316 values in the intersection, in 11742.8ms ``` C++ Output (using stdext::hash\_map instead of std::map) ``` Found 300 values in the intersection, in 383.552ms Found 306 values in the intersection, in 2277.02ms ``` C++ Output (using stdext::hash\_map, a release x64 build) ``` Found 292 values in the intersection, in 1037.67ms Found 302 values in the intersection, in 3663.71ms ``` Notes: * Set2 is not getting populated quite as I wanted in C++, I was expecting it to have a 50% intersection with Set1 (as it does in C#), but I had to multiply my random number by 10 for some reason to even get them to partially not intersect C#: ``` static void Main(string[] args) { int start = DateTime.Now.Millisecond; int intersectionSize = runIntersectionTest(); int duration = DateTime.Now.Millisecond - start; Console.WriteLine(String.Format("Found {0} values in the intersection, in {1} ms", intersectionSize, duration)); start = DateTime.Now.Millisecond; intersectionSize = runIntersectionTest(); duration = DateTime.Now.Millisecond - start; Console.WriteLine(String.Format("Found {0} values in the intersection, in {1} ms", intersectionSize, duration)); Console.ReadKey(); } static int
runIntersectionTest() { Random random = new Random(DateTime.Now.Millisecond); Dictionary<int,int> theMap = new Dictionary<int,int>(); List<int> set1 = new List<int>(); List<int> set2 = new List<int>(); // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set1.Add(value); } // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ ) { int value = 1000000000 + (random.Next() % 200000 + 1); set2.Add(value); } // Now intersect the two sets by populating the map foreach( int value in set1 ) { theMap[value] = 1; } int intersectionSize = 0; foreach ( int value in set2 ) { int count; if ( theMap.TryGetValue(value, out count ) ) { intersectionSize++; theMap[value] = 2; } } return intersectionSize; } ``` C++: ``` int runIntersectionTest() { std::map<int,int> theMap; vector<int> set1; vector<int> set2; // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set1.push_back(value); } // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ ) { int random = rand() % 200000 + 1; random *= 10; int value = 1000000000 + random; set2.push_back(value); } // Now intersect the two sets by populating the map for ( vector<int>::iterator iterator = set1.begin(); iterator != set1.end(); iterator++ ) { int value = *iterator; theMap[value] = 1; } int intersectionSize = 0; for ( vector<int>::iterator iterator = set2.begin(); iterator != set2.end(); iterator++ ) { int value = *iterator; map<int,int>::iterator foundValue = theMap.find(value); if ( foundValue != theMap.end() ) { theMap[value] = 2; intersectionSize++; } } return intersectionSize; } int _tmain(int argc, _TCHAR* argv[]) { srand ( time(NULL) ); Timer timer; int intersectionSize = runIntersectionTest(); timer.Stop(); cout << "Found " << intersectionSize << " values in the intersection, in " << timer.GetMilliseconds() << "ms" << endl; timer.Reset(); intersectionSize = runIntersectionTest(); timer.Stop(); cout << "Found " << 
intersectionSize << " values in the intersection, in " << timer.GetMilliseconds() << "ms" << endl; getchar(); return 0; } ```
Hash\_map and hash\_set are non-standard, [unordered\_map](http://msdn.microsoft.com/en-us/library/bb983026.aspx) and [unordered\_set](http://msdn.microsoft.com/en-us/library/bb982739.aspx) are the most likely soon to be standard versions. Without having a reproducer, I don't think this is going to get far though. Under the hood, they are the same data structures, so they should have similar performance. --- I compiled the provided sample under MS Visual Studio 2008 v9.0.30729.1, as Visual C++ -> Win32 -> Console Application (though I rolled my own Timer class because I wasn't sure what you were using). Under debug, I got times of 1000 ms, but compiling under release was 50 ms. ``` #include <vector> #include <iostream> #include <map> #include <stdio.h> #include <stdlib.h> #include <time.h> #include <windows.h> typedef struct { LARGE_INTEGER start; LARGE_INTEGER stop; } stopWatch; class CStopWatch { private: stopWatch timer; LARGE_INTEGER frequency; double LIToSecs( LARGE_INTEGER & L); public: CStopWatch(); void startTimer( ); void stopTimer( ); double getElapsedTime(); }; double CStopWatch::LIToSecs( LARGE_INTEGER & L) { return ((double)L.QuadPart /(double)frequency.QuadPart) ; } CStopWatch::CStopWatch(){ timer.start.QuadPart=0; timer.stop.QuadPart=0; QueryPerformanceFrequency( &frequency ) ; } void CStopWatch::startTimer( ) { QueryPerformanceCounter(&timer.start) ; } void CStopWatch::stopTimer( ) { QueryPerformanceCounter(&timer.stop) ; } double CStopWatch::getElapsedTime() { LARGE_INTEGER time; time.QuadPart = timer.stop.QuadPart - timer.start.QuadPart; return LIToSecs( time) ; } using namespace std; int runIntersectionTest() { std::map<int,int> theMap; vector<int> set1; vector<int> set2; // Create 100,000 values for set1 for ( int i = 0; i < 100000; i++ ) { int value = 1000000000 + i; set1.push_back(value); } // Create 1,000 values for set2 for ( int i = 0; i < 1000; i++ ) { int random = rand() % 200000 + 1; random *= 10; int value = 1000000000 + random; 
set2.push_back(value); } // Now intersect the two sets by populating the map for ( vector<int>::iterator iterator = set1.begin(); iterator != set1.end(); iterator++ ) { int value = *iterator; theMap[value] = 1; } int intersectionSize = 0; for ( vector<int>::iterator iterator = set2.begin(); iterator != set2.end(); iterator++ ) { int value = *iterator; map<int,int>::iterator foundValue = theMap.find(value); if ( foundValue != theMap.end() ) { theMap[value] = 2; intersectionSize++; } } return intersectionSize; } int main(int argc, char* argv[]) { srand ( time(NULL) ); int tests = 2; while(tests--){ CStopWatch timer; timer.startTimer(); int intersectionSize = runIntersectionTest(); timer.stopTimer(); cout << "Found " << intersectionSize << " values in the intersection, in " << timer.getElapsedTime() << "s\r\n"; } getchar(); return 0; } ``` (I would try with unordered\_map but my version doesn't have it). I suspect there is some problem in your setup for C++.
We managed to get to the bottom of this, see: [Why does my STL code run so slowly when I have the debugger/IDE attached?](https://stackoverflow.com/questions/1060337/why-does-my-stl-code-run-so-slowly-when-i-have-the-debugger-ide-attached) What happens is when you attach the debugger a different (DEBUG) memory heap is used - you can turn it off if you want.
Fast C++ container like the C# HashSet<T> and Dictionary<K,V>?
[ "c++", "hashtable", "hashmap" ]
I'm confused by the code block below, because it downloads the contents of the webpage rather than the file itself. I created a dummy file with some text in it and then downloaded it, but when I open the downloaded file, I don't see the text that I wrote; instead it contains strange web markup tags. ``` private bool DownloadCSVfile() { bool downloadOk = false; WebClient client = null; try { client = new WebClient(); client.Credentials = CredentialCache.DefaultCredentials; client.DownloadFile(myURL, CSVfile); if (File.Exists(CSVfile)) downloadOk = true; else downloadOk = false; } catch (Exception error) { downloadOk = false; string err = error.Message; } //release resource if (client != null) { client.Dispose(); client = null; } // if (downloadOk == true) { return true; } else { return false; } } ```
I'm guessing `myURL` ends in "/" and not ".csv". Am I right? Are you thinking that `myURL + CSVFile` will contain the full path to the file? It won't. ([See doc](http://msdn.microsoft.com/en-us/library/ms144194.aspx)) The URL has to be the path to the file itself. You are allowed to have something like this: ``` client.DownloadFile(@"./file.csv", "filename.txt"); ``` If I'm not on the right track, please tell us what's in the URL and what some of the first several tags are in the downloaded file.
Does the site require a login and it's really redirecting you? I've had one before that required me faking my browsing movements (and storing cookies and such) before it would allow me to download a file I needed.
DownloadFile from the Internet?
[ "c#", "webclient" ]
I need to write a small log analyzer application to process some log files generated by a 3rd-party closed-source library (with a custom logger inside) used in my project. In case of an exception entry in the log I need to collect aggregated information about the methods involved along the stack trace, from the top to the actual place of the exception. Unfortunately, by default Java's printStackTrace() does not print every method in the call stack, but only up to a certain number; the rest is just referenced as `16 more...`. If I could catch the exception myself I would use getStackTrace() and print it myself, but the root cause is never included in the exception this library throws. *Is there a way to ask Java to print the entire call stack in the stack trace?* *Apart from my situation, do common logging frameworks have an option for this?* **Edit:** The program runs on Sun's JVM with JDK 1.5.0\_09. No option to change that.
[Here is an explanation](https://stackoverflow.com/questions/437756/how-do-i-stop-stacktraces-truncating-in-logs) of the 'caused by' and '... *n* more' lines in the printed trace. See also the [JavaDoc for printStackTrace](http://docs.oracle.com/javase/7/docs/api/java/lang/Throwable.html#printStackTrace%28%29). You might not have any work to do. > Note the presence of lines containing the characters "...". These lines indicate that the remainder of the stack trace for this exception matches the indicated number of frames from the bottom of the stack trace of the exception that was caused by this exception (the "enclosing" exception). This shorthand can greatly reduce the length of the output in the common case where a wrapped exception is thrown from same method as the "causative exception" is caught.
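If you do want every frame spelled out rather than the abbreviated form, you can walk the cause chain yourself. A minimal sketch (`printFullTrace` is my own helper, not a JDK method):

```java
public class Main {

    // Print every frame of a throwable and of each of its causes,
    // without the "... n more" abbreviation that printStackTrace() uses.
    static void printFullTrace(Throwable t) {
        String prefix = "";
        while (t != null) {
            System.out.println(prefix + t);
            for (StackTraceElement frame : t.getStackTrace()) {
                System.out.println("\tat " + frame);
            }
            t = t.getCause();
            prefix = "Caused by: ";
        }
    }

    public static void main(String[] args) {
        try {
            try {
                throw new IllegalStateException("root cause");
            } catch (IllegalStateException e) {
                throw new RuntimeException("wrapper", e);
            }
        } catch (RuntimeException e) {
            printFullTrace(e);
        }
    }
}
```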
Can't you do something with `Thread.currentThread().getStackTrace()`? Here's a real simple example which calls a method recursively 20 times and then dumps out the stack of the current thread. ``` public class Test { public static void main(String[] args) { method(); } static int x = 0; private static void method() { if(x>20) { StackTraceElement[] elements = Thread.currentThread().getStackTrace(); for(int i=0; i<elements.length; i++) { System.out.println(elements[i]); } } else { x++; method(); } } } ```
Print full call stack on printStackTrace()?
[ "java", "logging", "stack-trace" ]
Here is the query ``` (from a in this._addresses where a.Street.Contains(street) || a.StreetAdditional.Contains(streetAdditional) select a).ToList<Address>() ``` If both properties in the where clause have values this works fine, but if, for example, a.StreetAdditional is null (most of the time), I will get a null reference exception. Is there a workaround for this?
The most obvious one: ``` (from a in this._addresses where (a.Street != null && a.Street.Contains(street)) || (a.StreetAdditional != null && a.StreetAdditional.Contains(streetAdditional)) select a).ToList<Address>() ``` Alternatively you could write an extension method for Contains that accepts a null argument without error. Some might say that it is not so pretty to have such a method, because it looks like a normal method call, but is allowed for null values (thereby setting aside normal null-checking practices).
I'd use the null-coalescing operator... ``` (from a in this._addresses where (a.Street ?? "").Contains(street) || (a.StreetAdditional ?? "").Contains(streetAdditional) select a).ToList<Address>() ```
LINQ to SQL and Null strings, how do I use Contains?
[ "c#", ".net", "linq-to-sql" ]
I have two tables: a schedule table that contains information about how an employee is scheduled and a numbers table in which each number corresponds to a date. The tables look like: ``` [Employee Schedule] ID Employee ID Project ID Day ID ----------- ----------- ----------- ----------- 1 64 2 168 2 64 2 169 3 64 2 170 4 64 2 171 5 64 1 169 6 64 1 170 7 64 1 171 8 64 1 172 9 64 2 182 10 64 2 183 11 64 2 184 ``` and ``` [Day Numbers] ID Day ----------- ---------- 168 2009-06-18 169 2009-06-19 170 2009-06-20 171 2009-06-21 172 2009-06-22 173 2009-06-23 174 2009-06-24 175 2009-06-25 176 2009-06-26 177 2009-06-27 178 2009-06-28 179 2009-06-29 180 2009-06-30 181 2009-07-01 182 2009-07-02 183 2009-07-03 184 2009-07-04 ``` As you can see, Employee 64 is scheduled on project 1 from 2009-06-19 to 2009-06-22 and project 2 from 2009-06-18 to 2009-06-21 and again from 2009-07-02 to 2009-07-04. My question is: what algorithm can I use to quickly determine the spans of the employee's schedule in a fashion such that I can display it as follows? ``` Employee ID Project ID Duration ----------- ---------- ------------ 64 1 2009-06-19 to 2009-06-22 64 2 2009-06-18 to 2009-06-21 64 2 2009-07-02 to 2009-07-04 ``` I can do this on the SQL side or the code side. I have Linq at my disposal if I need it. The table doesn't need to be compiled by SQL. This will happen dynamically on a website and should be as efficient as possible. I don't want to have to iterate through each and look for breaks in contiguous days if I don't have to.
Assuming the Day IDs are always sequential for a partial solution... ``` select * from employee_schedule a where not exists( select * from employee_schedule b where a.employeeid = b.employeeid and a.projectid = b.projectid and (a.dayid - 1) = b.dayid ) ``` lists the start day IDs: ``` ID EMPLOYEEID PROJECTID DAYID 1 64 2 168 5 64 1 169 9 64 2 182 ``` ``` select * from employee_schedule a where not exists( select * from employee_schedule b where a.employeeid = b.employeeid and a.projectid = b.projectid and (a.dayid + 1) = b.dayid ) ``` lists the end day IDs: ``` ID EMPLOYEEID PROJECTID DAYID 4 64 2 171 8 64 1 172 11 64 2 184 ```
Let's make a view to make things easier: ``` create view EmployeeProjectDates as select e.[Employee ID], e.[Project ID], d.Day from [Employee Schedule] e join [Day Numbers] d on e.[Day ID] = d.ID ``` You can do a query like this to get all the start dates: ``` select one.[Employee ID], one.[Project ID], one.Day as StartDate from EmployeeProjectDates one left join EmployeeProjectDates two on one.[Employee ID] = two.[Employee ID] and one.[Project ID] = two.[Project ID] and one.Day = DATEADD(DAY, 1, two.Day) where two.Day is null ``` And then do a similar query to get the end dates and match them up. I think that something like this would get you both. ``` select one.[Employee ID], one.[Project ID], one.Day as StartDate, (select min(one_end.Day) from EmployeeProjectDates one_end left join EmployeeProjectDates two_end on one_end.[Employee ID] = two_end.[Employee ID] and one_end.[Project ID] = two_end.[Project ID] and two_end.Day = DATEADD(DAY, 1, one_end.Day) where two_end.Day is null and one_end.[Employee ID] = one.[Employee ID] and one_end.[Project ID] = one.[Project ID] and one_end.Day >= one.Day) as EndDate from EmployeeProjectDates one left join EmployeeProjectDates two on one.[Employee ID] = two.[Employee ID] and one.[Project ID] = two.[Project ID] and one.Day = DATEADD(DAY, 1, two.Day) where two.Day is null ``` I haven't tested any of these queries, but something similar should work. I had to use a similar query before we implemented something in our application code to find the start and end dates.
Finding Start and End Dates from Date Numbers Table (Date Durations)
[ "sql", "vb.net", "linq", "algorithm", "clr" ]
Hey guys, I am very new to AJAX and am working on a rating script, but I want to be able to pass multiple values back into my AJAX function. Right now it sends a request to a PHP script called rating.php, which takes the total value of votes / votes and multiplies it by the width of each star to get an accurate current rating. What I'd like to do is also pass back the amount of votes so I can display them. I know how I could do this by making another function, but that seems redundant and not very efficient. My question is: is it possible to pass back not only the width (value / votes \* 22) for my rating box, but also the total # of votes, in one query? If not, would it be better to pass back a string that already has the votes & value, and do the width calculation with JavaScript? ``` $(document).ready(function() { getRating(); function getRating(){ $.ajax({ type: "GET", url: "../includes/rating.php", data: "action=get&bookid="+$("#current").attr("value"), cache: false, async: false, success: function($rating) { $("#current").css({ width: "" + $rating }); }, error: function(result) { alert("Error"); } }); } }); ``` Thanks!
Yes, you can pass back both values. Just send JSON using [json\_encode](https://www.php.net/json_encode) instead of text. ``` $(document).ready(function() { getRating(); function getRating(){ $.ajax({ type: "GET", dataType: 'json', url: "../includes/rating.php", data: "action=get&bookid="+$("#current").attr("value"), cache: false, async: false, success: function(data) { $("#current").css({ width: "" + data.rating }); $("#votes").html(data.votes); }, error: function(result) { alert("Error"); } }); } }); ```
If you're struggling to pass back multiple values in one callback, you could try inserting a delimiter (like `##`, something you're not going to use in the data) between the values, then split it on the server side. For example: **Client Side...** ``` var callback = value1 + "##" + value2 + "##" + value3; ``` **Server Side...** ``` $values = explode("##", $callback); $value1 = $values[0]; $value2 = $values[1]; ``` Hope that helps...
Get multiple values through an AJAX query
[ "php", "jquery", "ajax" ]
I have a table that has a forced auto-increment column, and this column is a very valuable ID that is retained throughout the entire app. Sorry to say it was poor development on my part to have this be the auto-incrementing column. So, here is the problem. I have to insert into this table an ID for the column that has already been created and removed from the table. Kind of like resurrecting this ID and putting it back into the table. So how can I do this programmatically without turning the column's increment off? Correct me if I am wrong: if I turn it off programmatically, it will restart at 0 or 1, and I don't want that to happen...
If you are in Microsoft SQL Server, you can "turn off" the autoIncrementing feature by issuing the statement `Set Identity_Insert [TableName] On`, as in: ``` Set Identity_Insert [TableName] On -- -------------------------------------------- Insert TableName (pkCol, [OtherColumns]) Values(pkValue, [OtherValues]) -- ---- Don't forget to turn it back off ------ Set Identity_Insert [TableName] Off ```
In addition to Charles' answer (which is now 100% correct :-) and which preserves the current value of the IDENTITY on the table), you might also want to check the current value of an IDENTITY on a table - you can do this with this command here: ``` DBCC CHECKIDENT('YourTableName') ``` If you ever need to actually **change** it, you can do so by using this command here: ``` DBCC CHECKIDENT ('YourTableName', RESEED, (new value for IDENTITY) ) ```
SQL - How to Insert a Row Without Auto-incrementing an ID Column?
[ "sql", "sql-server", "auto-increment" ]
How can I use PHP to show five rows from a MySQL database, then create a new line and show another five, etc?
Use the [LIMIT clause](http://php.about.com/od/mysqlcommands/g/Limit_sql.htm) if you want to limit the amount of results returned from the query. If you want to print an `<hr/>` after every fifth record you can check it via the [modulus operator](https://www.php.net/manual/en/language.operators.arithmetic.php): ``` $counter = 1; while ($row = mysql_fetch_assoc($rst)) { // print $row stuff if ($counter % 5 == 0) print "<hr />"; $counter++; } ``` Basically, we have a variable used to count how many records we've printed. Once that counter is divisible by five with no remainder, we print our horizontal rule.
Something like this may be helpful: ``` $result = mysql_query($query); if($result) { $i = 0; while($row = mysql_fetch_assoc($result)) { if(++$i % 5 == 0) { /* do the extra thing... new line some styles */ } } } ```
How to display five rows?
[ "php", "mysql", "rows" ]
I am moving from VB to C#. I am trying to loop through a collection class which is a collection of data classes, but I can't seem to get the actual values out of the data class properties (i.e., find the correct code to do so). I have a method that loops through the collection class (Contacts) and saves each record (Contact). I am using reflection because my method will not know if it is a Contacts class or a Customer class and so forth. Here is my code in VB (watered down) ``` Public Function SaveCollection(ByVal objCollection as Object, ByVal TableName as string, ByVal spSave as string) Dim objClass as Object Dim propInfo as PropertyInfo For Each objClass in objCollection propInfo = objClass.GetType.GetProperty("TableFieldName") Next End Function ``` I am having problems in C# with the objClass.GetType.GetProperty("TableFieldName") line Here is my C# code ``` public void SaveCollection(DictionaryBase objCollection, string TableName, string spSave) { PropertyInfo propInfo; foreach (DictionaryEntry objClass in objCollection) { propInfo = objClass.GetType().GetProperty("TableFieldName"); } } ``` The C# code keeps returning null. In my locals window I can see the properties of the class on objClass and the value of the property, but I can't seem to figure out how to access it through code. I used the DictionaryBase because that seems to closely match what I need to do. My data class (Contact) has a bunch of properties that match the field names in the Contact table of the database. After I get the propInfo variable set, I then set up my SqlParameter with the field name, data type, etc., and then set the value to the property's value. Thanks for the help.
You need to get the value afterwards. Also note that `GetValue` returns an `object`; you can then cast it to string or int or whatever type you expect the value to be. ``` public void SaveCollection(DictionaryBase objCollection, string TableName, string spSave) { PropertyInfo propInfo; foreach (DictionaryEntry objClass in objCollection) { object tempObj = objClass.Value; propInfo = tempObj.GetType().GetProperty("TableFieldName"); object[] obRetVal = new Object[0]; object value = propInfo.GetValue(tempObj,obRetVal); } } ``` If you know that TableFieldName will be a string, then change this line: ``` string value = propInfo.GetValue(tempObj,obRetVal) as string; ```
It looks like you are passing a different collection to the VB code and the C# code. My guess is that in the VB code you are passing the values of a dictionary and in C# you are passing the dictionary itself. Try changing the C# line to the following ``` propInfo = objClass.Value.GetType().GetProperty("TableFieldName"); ```
C# accessing properties of an object from a collection class
[ "c#", "vb.net" ]
The MSDN documentation mentions that `double` type includes negative zero. However, both `-1.0 / double.PositiveInfinity` and `-double.Epsilon / 2` appear to return normal 0 (and compare equal to it). How can I get -0?
Here is a practical example of differentiating between the two without examining the bits. MSDN links [here](http://msdn.microsoft.com/en-us/library/aa691146(VS.71).aspx) and [here](http://msdn.microsoft.com/en-us/library/aa691373(VS.71).aspx) assisted me in constructing this example. ``` static void Main(string[] args) { float a = 5 / float.NegativeInfinity; float b = 5 / float.PositiveInfinity; float c = 1 / a; float d = 1 / b; Console.WriteLine(a); Console.WriteLine(b); Console.WriteLine(c); Console.WriteLine(d); } ``` Output: ``` 0 0 -Infinity Infinity ``` Take note that -0 and 0 both look the same for comparisons, output, etc. But if you divide 1 by them, you get a -Infinity or Infinity, depending on which zero you have.
Negative zero has to do with the way the number is stored in binary, not any real achievable result from a mathematical calculation. In IEEE floating point storage, the topmost bit denotes the sign. This leaves 31 bits for the magnitude (in a 32-bit floating point value), so there are actually two representations for zero. 00000000 00000000 00000000 00000000 Or 10000000 00000000 00000000 00000000 Both represent zero, but the second one has the sign bit set to negative. Unlike integer overflow, this does not come from incrementing past the largest value; it arises from operations whose result is zero but carries a negative sign, such as dividing a negative number by infinity or multiplying zero by a negative number. Also, -0 should always compare equal to +0. There is more about it on [Wikipedia](http://en.wikipedia.org/wiki/%E2%88%920_(number))
How to get a -0 result in floating point calculations and distinguish it from +0 in C#?
[ "c#", "floating-point", "zero" ]
Given these two commands A: ``` $ java -Xms10G -Xmx10G myjavacode input.txt ``` B: ``` $ java -Xms5G -Xmx5G myjavacode input.txt ``` I have two questions: 1. Since command A reserves more memory with its parameters, will A run faster than B? 2. How do `-Xmx` and `-Xms` affect the running process and the output of my program?
It depends on the GC your Java is using. Parallel GCs might work better with larger memory settings - I'm no expert on that though. In general, the larger the memory, the less frequently it needs to be GC-ed - there is lots of room for garbage. However, when a GC does run, it has to work over more memory - which in turn might be slower.
The `-Xmx` argument defines the max memory size that the heap can reach for the JVM. You must know your program well and see how it performs under load, and set this parameter accordingly. A low value can cause *OutOfMemoryExceptions* or very poor performance if your program's heap memory is reaching the maximum heap size. If your program is running on a dedicated server you can set this parameter higher because it won't affect other programs. The `-Xms` argument sets the initial heap memory size for the JVM. This means that when you start your program the JVM will allocate this amount of memory instantly. This is useful if your program will consume a large amount of heap memory right from the start; it avoids the JVM constantly growing the heap, which can gain some performance. If you don't know whether this parameter is going to help you, **don't use it**. In summary, this is a compromise that you have to decide based only on the memory behavior of your program.
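One way to see what these flags actually did is to ask the JVM from inside the program; a small sketch:

```java
public class Main {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the -Xmx ceiling; totalMemory() is the
        // heap currently committed, which starts near -Xms and may grow
        // toward -Xmx as the program allocates.
        System.out.println("max heap (MB):       " + rt.maxMemory() / (1024 * 1024));
        System.out.println("committed heap (MB): " + rt.totalMemory() / (1024 * 1024));
        System.out.println("free in heap (MB):   " + rt.freeMemory() / (1024 * 1024));
    }
}
```

Run it once with `-Xms5G -Xmx5G` and once with `-Xms10G -Xmx10G` and the reported numbers will differ accordingly.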
Speed tradeoff of Java's -Xms and -Xmx options
[ "java", "performance", "jvm-arguments" ]
Is it possible to change the name (the one that appears under 'Processes' in Task Manager) of a process at runtime in Win32? I want the program to be able to change its own name, not other programs'. Help would be appreciated, preferably in C++. And to dispel any thoughts of viruses: no, this isn't a virus; yes, I know what I'm doing; it's for my own use.
I know you're asking for Win32, but under most \*nixes, this can be accomplished by just changing argv[0]
I would like to submit what I believe IS a valid reason for changing the process name at runtime: I have an exe that runs continuously on a server -- though it is not a service. Several instances of this process can run on the server. The process is a scheduling system. An instance of the process runs for each line that is being scheduled, monitored and controlled. Imagine a factory with 7 lines to be scheduled: a main assembly line, 3 sub-assembly lines, and 3 machining lines. Rather than see sched.exe 7 times in Task Manager, it would be more helpful to see: sched-main sched-sub1 sched-sub2 sched-sub3 sched-mach1 sched-mach2 sched-mach3 This would be much more helpful to the administrator (the user in this situation should never see Task Manager). If one process is hung, the administrator can easily know which one to kill and restart.
Process name change at runtime (C++)
[ "c++", "winapi" ]