I have some information in a Google Spreadsheet, as a single sheet. Is there any way I can read this information from .NET by providing the Google credentials and the spreadsheet address? Is it possible using the Google Data APIs? Ultimately I need to get the information from the Google spreadsheet into a DataTable. How can I do it? If anyone has attempted it, please share some information.
According to the [.NET user guide](https://developers.google.com/google-apps/spreadsheets/): Download the [.NET client library](http://code.google.com/p/google-gdata/): Add these using statements: ``` using Google.GData.Client; using Google.GData.Extensions; using Google.GData.Spreadsheets; ``` Authenticate: ``` SpreadsheetsService myService = new SpreadsheetsService("exampleCo-exampleApp-1"); myService.setUserCredentials("jo@gmail.com", "mypassword"); ``` Get a list of spreadsheets: ``` SpreadsheetQuery query = new SpreadsheetQuery(); SpreadsheetFeed feed = myService.Query(query); Console.WriteLine("Your spreadsheets: "); foreach (SpreadsheetEntry entry in feed.Entries) { Console.WriteLine(entry.Title.Text); } ``` Given a SpreadsheetEntry you've already retrieved, you can get a list of all worksheets in this spreadsheet as follows: ``` AtomLink link = entry.Links.FindService(GDataSpreadsheetsNameTable.WorksheetRel, null); WorksheetQuery query = new WorksheetQuery(link.HRef.ToString()); WorksheetFeed feed = myService.Query(query); foreach (WorksheetEntry worksheet in feed.Entries) { Console.WriteLine(worksheet.Title.Text); } ``` And get a cell-based feed: ``` AtomLink cellFeedLink = worksheet.Links.FindService(GDataSpreadsheetsNameTable.CellRel, null); CellQuery query = new CellQuery(cellFeedLink.HRef.ToString()); CellFeed feed = myService.Query(query); Console.WriteLine("Cells in this worksheet:"); foreach (CellEntry curCell in feed.Entries) { Console.WriteLine("Row {0}, column {1}: {2}", curCell.Cell.Row, curCell.Cell.Column, curCell.Cell.Value); } ```
[I wrote a simple wrapper](http://bugsquash.blogspot.com/2008/10/crud-api-for-google-spreadsheets.html) around [Google's .Net client library](http://code.google.com/p/google-gdata/), it exposes a simpler database-like interface, with strongly-typed record types. Here's some sample code: ``` public class Entity { public int IntProp { get; set; } public string StringProp { get; set; } } var e1 = new Entity { IntProp = 2 }; var e2 = new Entity { StringProp = "hello" }; var client = new DatabaseClient("you@gmail.com", "password"); const string dbName = "IntegrationTests"; Console.WriteLine("Opening or creating database"); var db = client.GetDatabase(dbName) ?? client.CreateDatabase(dbName); // databases are spreadsheets const string tableName = "IntegrationTests"; Console.WriteLine("Opening or creating table"); var table = db.GetTable<Entity>(tableName) ?? db.CreateTable<Entity>(tableName); // tables are worksheets table.DeleteAll(); table.Add(e1); table.Add(e2); var r1 = table.Get(1); ``` There's also a LINQ provider that translates to google's [structured query operators](http://code.google.com/apis/spreadsheets/docs/2.0/reference.html#ListParameters): ``` var q = from r in table.AsQueryable() where r.IntProp > -1000 && r.StringProp == "hello" orderby r.IntProp select r; ```
Accessing Google Spreadsheets with C# using Google Data API
[ "c#", "google-sheets", "google-api-dotnet-client", "google-data-api", "google-sheets-api" ]
If you have an enum in your application and you only have a few items, should you force the underlying type to be the smallest possible type? ``` enum smaller : byte { one, two, three }; ```
No. Don't prematurely optimize **unless** you've proved with a profiler that it's actually a problem.
**Relating to best practice:** When you don't have a particular reason for making the enum's underlying type byte, you should leave it as the default. Any time you use an enum in a switch statement you should have a "default" clause for invalid enum values, so it doesn't matter whether you are guarding against 256-NumRealEnumValues or 2^32-NumRealEnumValues invalid cases: both switches need a "default" clause that handles all of them. One reason for explicitly setting the type of the enum is when you want your enum to correspond to another type in your program and you need to explicitly cast between them. Changing the type to the smallest fit will not help you with versioning problems either, unless the enum has exactly filled its underlying type's range. By versioning problems I mean: when you have a compiled DLL using the enum and you then add a new enum value, code compiled against the old enum may send the new value into the "default" clause of a switch statement even though it is now a valid value. **Relating to efficiency:** No, there is no benefit in terms of efficiency to making it a byte. int is more efficient to use because the CPU on x86 has 32-bit registers; copying into a register is done 32 bits at a time, and when you use a smaller type you have to zero out part of the register and copy into the rest of the register's lower-order bits.
When declaring an enum, should you force the type to byte for under 256 entities?
[ "c#", "enums", "types" ]
Hi, I'd like to make my own 'parser', e.g. for computing (4+(3-4^2))\*2 or for parsing Java, JSF, or HTML code. In fact I did something like this already, but I feel it's not good. Is there anything good for me? I've tried to read more, but I'm a bit confused: LL, LR, AST, BNF, javacc, yacc, etc. :) I'm not sure which way to go, whether I'd like to compute 4+... or parse Java/JSF code and produce something from it (other Java code). Is there anything generally good enough, like an AST? Or something I can use for both? Thank you for your help.
Before anything else, you have to understand that everything about parsing is based on **grammars**. Grammars describe the language you want to implement in terms of how to decompose the text into basic units and how to stack those units in some meaningful way. You may also want to look up the **token**, **non-terminal**, and **terminal** concepts. Differences between LL and LR can be of two kinds: implementation differences, and grammar writing differences. If you use a standard tool you only need to understand the second part. I usually use LL (top-down) grammars. They are simpler to write and to implement, even using custom code. LR grammars theoretically cover more kinds of languages, but in a normal situation they are just a hindrance when you need some correct error detection. Some random pointers: * **javacc** (java, LL), * **antlr** (java, LL), * **yepp** (smarteiffel, LL), * **bison** (C, LR, GNU version of the venerable **yacc**)
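To make the LL (top-down) approach concrete, here is a minimal hand-written recursive-descent evaluator in Java for arithmetic expressions like the one in the question. Each grammar rule becomes one method; this is only an illustrative sketch (no error reporting), not what a generator like javacc would emit:

```java
// Recursive-descent (LL) evaluator for +, -, *, /, ^ and parentheses.
// Grammar:  expr    -> term (('+'|'-') term)*
//           term    -> factor (('*'|'/') factor)*
//           factor  -> primary ('^' factor)?      // right-associative
//           primary -> NUMBER | '(' expr ')' | '-' primary
public class TinyCalc {
    private final String src;
    private int pos;

    public TinyCalc(String src) { this.src = src.replaceAll("\\s+", ""); }

    public static double eval(String s) { return new TinyCalc(s).expr(); }

    private double expr() {
        double v = term();
        while (peek() == '+' || peek() == '-') {
            char op = next();
            v = (op == '+') ? v + term() : v - term();
        }
        return v;
    }

    private double term() {
        double v = factor();
        while (peek() == '*' || peek() == '/') {
            char op = next();
            v = (op == '*') ? v * factor() : v / factor();
        }
        return v;
    }

    private double factor() {
        double v = primary();
        if (peek() == '^') { next(); return Math.pow(v, factor()); }
        return v;
    }

    private double primary() {
        if (peek() == '(') {
            next();                 // consume '('
            double v = expr();
            next();                 // consume ')'
            return v;
        }
        if (peek() == '-') { next(); return -primary(); }
        int start = pos;
        while (pos < src.length()
               && (Character.isDigit(src.charAt(pos)) || src.charAt(pos) == '.')) pos++;
        return Double.parseDouble(src.substring(start, pos));
    }

    private char peek() { return pos < src.length() ? src.charAt(pos) : '\0'; }
    private char next() { return src.charAt(pos++); }
}
```

`TinyCalc.eval("(4+(3-4^2))*2")` returns -18.0; exponentiation is treated as right-associative with higher precedence than `*`. For parsing a real language like Java or JSF you would not write this by hand, which is exactly where javacc or antlr come in.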
Parsers can be pretty intense to write. The standard tools are bison or yacc for the grammar, and flex for the lexical analysis (tokenizing). These all output code in C or C++.
How do I make my own parser for java/jsf code?
[ "java", "parsing" ]
How can I generate random Int64 and UInt64 values using the `Random` class in C#?
This should do the trick. (It's an extension method so that you can call it just as you call the normal `Next` or `NextDouble` methods on a `Random` object). ``` public static Int64 NextInt64(this Random rnd) { var buffer = new byte[sizeof(Int64)]; rnd.NextBytes(buffer); return BitConverter.ToInt64(buffer, 0); } ``` Just replace `Int64` with `UInt64` everywhere if you want unsigned integers instead and all should work fine. **Note:** Since no context was provided regarding security or the desired randomness of the generated numbers (in fact the OP specifically mentioned the `Random` class), my example simply deals with the `Random` class, which is the preferred solution when randomness (often quantified as [information entropy](http://en.wikipedia.org/wiki/Information_entropy)) is not an issue. As a matter of interest, see the other answers that mention `RNGCryptoServiceProvider` (the RNG provided in the `System.Security.Cryptography` namespace), which can be used almost identically.
Use [`Random.NextBytes()`](http://msdn.microsoft.com/en-us/library/system.random.nextbytes.aspx) and [`BitConverter.ToInt64`](http://msdn.microsoft.com/en-us/library/system.bitconverter.toint64.aspx) / [`BitConverter.ToUInt64`](http://msdn.microsoft.com/en-us/library/system.bitconverter.touint64.aspx). ``` // Assume rng refers to an instance of System.Random byte[] bytes = new byte[8]; rng.NextBytes(bytes); long int64 = BitConverter.ToInt64(bytes, 0); ulong uint64 = BitConverter.ToUInt64(bytes, 0); ``` Note that using [`Random.Next()`](http://msdn.microsoft.com/en-us/library/9b3ta19y.aspx) twice, shifting one value and then ORing/adding doesn't work. `Random.Next()` only produces non-negative integers, i.e. it generates 31 bits, not 32, so the result of two calls only produces 62 random bits instead of the 64 bits required to cover the complete range of `Int64`/`UInt64`. ([Guffa's answer](https://stackoverflow.com/questions/677373/generate-random-values-in-c/677602#677602) shows how to do it with *three* calls to `Random.Next()` though.)
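The fill-a-byte-buffer trick is not specific to .NET. As a purely illustrative sketch of the same idea in Java (the helper name here is made up for the example; `BitConverter` has no direct Java equivalent, so the bytes are assembled with shifts):

```java
import java.util.Random;

public class RandomInt64 {
    // Build a full-range 64-bit value from 8 random bytes, mirroring the
    // Random.NextBytes + BitConverter.ToInt64 approach shown above.
    public static long nextInt64(Random rnd) {
        byte[] bytes = new byte[8];
        rnd.nextBytes(bytes);
        long value = 0;
        for (int i = 0; i < 8; i++) {
            // mask with 0xFFL so negative bytes aren't sign-extended
            value = (value << 8) | (bytes[i] & 0xFFL);
        }
        return value;
    }
}
```

In practice Java callers would just use `rnd.nextLong()`; the loop is shown only to make the byte-assembly explicit, and the `& 0xFFL` mask is the detail that keeps all 64 bits intact.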
Generate random values in C#
[ "c#", "random", "long-integer", "int64" ]
This is my SQL statement that works using datediff: ``` SELECT SUM(b134_nettpay) AS Total, b134_rmcid, b134_recdate FROM B134HREC WHERE datediff (dd, '2006-05-05', getdate()) > 90 GROUP BY b134_rmcid, b134_recdate ORDER BY b134_recdate DESC, b134_rmcid ``` I need to replace the hardcoded date '2006-05-05' with MAX(b134\_recdate), like so: ``` SELECT SUM(b134_nettpay) AS Total, b134_rmcid, b134_recdate FROM B134HREC where datediff (dd, MAX(b134_recdate), getdate()) > 90 GROUP BY b134_rmcid, b134_recdate ORDER BY b134_recdate DESC, b134_rmcid ``` But I get this error message: > An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference. Any idea how to fix my SQL statement?
Try ``` SELECT SUM(b134_nettpay) AS Total, b134_rmcid, b134_recdate FROM B134HREC where datediff (dd, (select MAX(b134_recdate) from B134HREC), getdate()) > 90 GROUP BY b134_rmcid, b134_recdate ORDER BY b134_recdate DESC, b134_rmcid ```
Have you tried moving this: `datediff (dd, MAX(b134_recdate), getdate()) > 90` to a HAVING clause? (Note that HAVING must come after GROUP BY and before ORDER BY.) The SQL statement would be: `SELECT SUM(b134_nettpay) AS Total, b134_rmcid, b134_recdate FROM B134HREC GROUP BY b134_rmcid, b134_recdate HAVING datediff (dd, MAX(b134_recdate), getdate()) > 90 ORDER BY b134_recdate DESC, b134_rmcid`
Using datediff in MySQL statement
[ "sql", "mysql", "sql-server", "datediff" ]
I am trying to learn PHP5 and am having a couple of problems with it. I am working with prepared statements and am trying to run the following code: ``` <?php require_once 'includes/config.php'; $conn = new mysqli(DB_SERVER, DB_USER, DB_PASSWORD, DB_NAME) or die('There was a problem connecting to the database.'); $query = "SELECT * FROM user_table"; if($stmt = $conn->prepare($query)) { $stmt->execute(); while ($row = $stmt->fetch()) { print_r ($row); } } ?> ``` I have 2 rows it should return each containing an id, login\_name, login\_password and a login\_level. When the statement runs it only prints the following: 11 Any help would be greatly appreciated.
The [fetch()](https://www.php.net/manual/en/mysqli-stmt.fetch.php) method returns `TRUE`, `FALSE`, or `NULL` depending on whether it succeeded in fetching the data. It doesn't return the data in an array. Instead it places the results in variables bound by the [bind\_result()](http://ca.php.net/manual/en/mysqli-stmt.bind-result.php) method.
It returns 1 because `fetch()` returns `TRUE`, and `print_r(TRUE)` outputs 1; with your two rows, `fetch()` succeeds twice, which is why you see `11`. What you should do is bind a variable with the [`bind_result()`](http://is.php.net/manual/en/mysqli-stmt.bind-result.php) method and then do: ``` $stmt->bind_result($variable); while ($stmt->fetch()) { printf ("%s\n", $variable); } ``` A great example is on the [bind result documentation page.](http://is.php.net/manual/en/mysqli-stmt.bind-result.php)
Why would print_r ($row); only be returning a number 1?
[ "php", "mysql", "mysqli" ]
What is the simplest way to get the directory that a file is in? I'm using this to set a working directory. ``` string filename = @"C:\MyDirectory\MyFile.bat"; ``` In this example, I should get "C:\MyDirectory".
If you definitely have an absolute path, use [`Path.GetDirectoryName(path)`](http://msdn.microsoft.com/en-us/library/system.io.path.getdirectoryname.aspx). If you might only have a relative name, use `new FileInfo(path).Directory.FullName`. Note that `Path` and `FileInfo` are both found in the namespace `System.IO`.
``` System.IO.Path.GetDirectoryName(filename) ```
How do I get the directory from a file's full path?
[ "c#", ".net", "file", "file-io", "directory" ]
What functionality does the `stackalloc` keyword provide? When and why would I want to use it?
From [MSDN](http://msdn.microsoft.com/en-us/library/cx9s2sy4.aspx): > Used in an unsafe code context to allocate a block of memory on the > stack. One of the main features of C# is that you do not normally need to access memory directly, as you would do in C/C++ using `malloc` or `new`. However, if you really want to explicitly allocate some memory you can, but C# considers this "unsafe", so you can only do it if you compile with the `unsafe` setting. `stackalloc` allows you to allocate such memory. You almost certainly don't need to use it for writing managed code. It is feasible that in some cases you could write faster code if you access memory directly - it basically allows you to use pointer manipulation which suits some problems. Unless you have a specific problem and unsafe code is the only solution then you will *probably* never need this.
Stackalloc will allocate data on the stack, which can be used to avoid the garbage that would be generated by repeatedly creating and destroying arrays of value types within a method. ``` public unsafe void DoSomeStuff() { byte* unmanaged = stackalloc byte[100]; byte[] managed = new byte[100]; //Do stuff with the arrays //When this method exits, the unmanaged array gets immediately destroyed. //The managed array no longer has any handles to it, so it will get //cleaned up the next time the garbage collector runs. //In the mean-time, it is still consuming memory and adding to the list of crap //the garbage collector needs to keep track of. If you're doing XNA dev on the //Xbox 360, this can be especially bad. } ```
When would I need to use the stackalloc keyword in C#?
[ "c#", "keyword", "stackalloc" ]
Sometimes I need to call WCF service in Silverlight and block UI until it returns. Sure I can do it in three steps: 1. Setup handlers and block UI 2. Call service 3. Unblock UI when everything is done. However, I'd like to add DoSomethingSync method to service client class and just call it whenever I need. Is it possible? Has anyone really implemented such a method? **UPDATE:** Looks like the answer is not to use sync calls at all. Will look for some easy to use pattern for async calls. Take a look at [this](http://petesbloggerama.blogspot.com/2008/07/omg-silverlight-asynchronous-is-evil.html) post (taken from comments) for more.
Here's the point; you **shouldn't** do sync IO in Silverlight. Stop fighting it! Instead: * disable any critical parts of the UI * start async IO with callback * (...) * in the callback, process the data and update/re-enable the UI As it happens, I'm actively working on ways to make the async pattern more approachable (in particular with Silverlight in mind). [Here's](http://marcgravell.blogspot.com/2009/02/async-without-pain.html) a first stab, but I have something better up my sleeve ;-p
I'd disagree with Marc: there are genuine cases where you need to do synchronous web service calls. However, what you probably should avoid is blocking on the UI thread, as that creates a very bad user experience. A very simple way to implement a service call synchronously is to use a ManualResetEvent. ``` ManualResetEvent m_svcMRE = new ManualResetEvent(false); MyServiceClient m_svcProxy = new MyServiceClient(binding, address); m_svcProxy.DoSomethingCompleted += (sender, args) => { m_svcMRE.Set(); }; public void DoSomething() { m_svcMRE.Reset(); m_svcProxy.DoSomething(); m_svcMRE.WaitOne(); } ```
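The block-until-callback shape above is not specific to .NET's ManualResetEvent; as a hedged, purely illustrative sketch, the same pattern in Java uses a CountDownLatch (the asynchronous service call is simulated here with a background thread, and the same caveat applies: never block a UI thread this way):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SyncOverAsync {
    // Block the calling thread until the async "service" fires its callback.
    // countDown() plays the role of ManualResetEvent.Set(),
    // await() plays the role of WaitOne().
    public static String callSynchronously(long timeoutMs) {
        CountDownLatch done = new CountDownLatch(1);
        String[] result = new String[1];

        // Stand-in for the async service call: completes on a background thread.
        new Thread(() -> {
            result[0] = "response";
            done.countDown();
        }).start();

        try {
            if (!done.await(timeoutMs, TimeUnit.MILLISECONDS)) {
                throw new IllegalStateException("service call timed out");
            }
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
        return result[0];
    }
}
```

The timeout is important: a synchronous wrapper without one can hang the caller forever if the service never answers.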
How can I implement sync calls to WCF services in Silverlight?
[ "c#", "silverlight", "wcf", "synchronous" ]
My question is about writing a video file to the hard drive that is being downloaded from the network and playing it at the same time using Windows Media Player. The file is pretty big and will take awhile to download. It is necessary to download it rather than just stream it directly to Windows Media Player. What happens is that, while I can write to the video file and read it at the same time from my own test code, it cannot be done using Windows Media Player (at least I haven't figured it out). I know it is possible to do because Amazon Unbox downloads does it. Unbox lets you play WMVs while it is downloading them. And Unbox is written in .NET so... I've read the "[C# file read/write fileshare doesn’t appear to work](https://stackoverflow.com/questions/124946/c-file-read-write-fileshare-doesnt-appear-to-work)" question and answers for opening a file with the FileShare flags. But it's not working for me. Process Monitor says that Media Player is opening the file with Fileshare flags, but it errors out. In the buffering thread I have this code for reading the file from the network and writing it to a file (no error handling or other stuff to make it more readable): ``` // the download thread void StartStreaming(Stream webStream, int bufferFullByteCount) { int bytesRead; var buffer = new byte[4096]; var fileStream = new FileStream(MediaFile.FullName, FileMode.Create, FileAccess.Write, FileShare.ReadWrite); var writer = new BinaryWriter(fileStream); var totalBytesRead = 0; do { bytesRead = webStream.Read(buffer, 0, 4096); if (bytesRead != 0) { writer.Write(buffer, 0, (int)bytesRead); writer.Flush(); totalBytesRead += bytesRead; } if (totalBytesRead >= bufferFullByteCount) { // fire an event to a different thread to tell // Windows Media Player to start playing OnBufferingComplete(this, new BufferingCompleteEventArgs(this, MediaFile)); } } while (bytesRead != 0); } ``` This seems to work fine. The file writes to the disk and has the correct permissions. 
But then here's the event handler in the other thread for playing back the video ``` // the playback thread private void OnBufferingComplete(object sender, BufferingCompleteEventArgs e) { axWindowsMediaPlayer1.URL = e.MediaFile.FullName; } ``` Windows Media Player indicates that it's opening the file and then just stops with an error that the file can't be opened "already opened in another process." I have tried everything I can think of. What am I missing? If the Amazon guys can do this then so can I, right? edit: this code works with mplayer, VLC, and Media Player Classic; just not Windows Media Player or Windows Media Center player. IOW, the only players I need them to work with. ugh! edit2: I went so far as to use MiniHttp to stream the video to Windows Media Player to see if that would "fool" WMP into playing a video that is being downloaded. Nothing doing. While WMP did open the file it waited until the mpeg file was completely copied before it started playing. How does it know? edit3: After some digging I discovered the problem. I am working with MPEG2 files. The problem is not necessarily with Windows Media Player, but with the Microsoft MPEG2 DirectShow Splitter that WMP uses to open the MPEG2 files that I am trying to play and download at the same time. The Splitter opens the files in non-Shared mode. Not so with WMV files. WMP opens them in shared mode and everything works as expected.
I've decided to answer my own question in case someone else runs into this rare situation. The short answer is this: Windows Media Player (as of this writing) will allow a file to be downloaded and play at the same time as long as that functionality is supported by CODECS involved in rendering the file. To quote from the last edit to the question: After some digging I discovered the problem. I am working with MPEG2 files. The problem is not necessarily with Windows Media Player, but with the Microsoft MPEG2 DirectShow Splitter that WMP uses to open the MPEG2 files that I am trying to play and download at the same time. The Splitter opens the files in non-Shared mode. Not so with WMV files. WMP opens them in shared mode and everything works as expected.
Updated 16 March after comment by @darin: You're specifying `FileShare.ReadWrite` when you're writing the file, which theoretically allows another process to open it for writing too. Try altering your code to only request `FileShare.Read`: ``` var fileStream = new FileStream( MediaFile.FullName, FileMode.Create, FileAccess.Write, FileShare.Read); // Instead of ReadWrite ``` To quote [MSDN](http://msdn.microsoft.com/en-us/library/system.io.fileshare.aspx): > FileShare.Read: Allows subsequent > opening of the file for reading. > > FileShare.ReadWrite: Allows subsequent > opening of the file for reading or > writing.
Playback large files on Windows Media Player during download
[ "c#", "fileshare", "media-player" ]
I am so annoyed. Typically I like the way Replace works in C#, but is there a C++-style replace where it only replaces one letter at a time, or the X amount I specify?
No, there is not a Replace method in the BCL which will replace only a single instance of the character. The two main Replace methods will replace all occurrences. However, it's not terribly difficult to write a version that does a single character replacement. ``` public static string ReplaceSingle(this string source, char toReplace, char newChar) { var index = source.IndexOf(toReplace); if ( index < 0 ) { return source; } var builder = new StringBuilder(); for( var i = 0; i < source.Length; i++ ) { if ( i == index ) { builder.Append(newChar); } else { builder.Append(source[i]); } } return builder.ToString(); } ```
Just use IndexOf and Substring if you only want to replace one occurrence.
.NET String.Replace
[ "c#", ".net", "string", "replace" ]
I've got an enum with possible values: ``` public enum Language { English = 1, French = 2, German = 3 } ``` Now I want my class to be dynamic in the sense that it can cater for multiple values based on the enum list. So if the enum list grew, I could capture all possible values. Here's how my initial design looks: ``` public class Locale { public string EnglishValue { get; set; } public string FrenchValue { get; set; } public string GermanValue { get; set; } } ``` But I want something that doesn't need to recompile if the enum list (or any list) grows. Is it possible to express something like: ``` public class Locale { public string StringValue<T> Where T : Language } ``` I am open to any suggestions or design patterns that can nicely deal with this problem. Thanks
Just use a Dictionary. ``` public class Locale : Dictionary<Language, string> { } ``` Whatever you do, if you change the enum, you have to recompile, but what you probably mean is "maintain" the Locale class.
What you're proposing won't work unfortunately. This would be a better design IMHO: ``` public class Locale { public string GetStringValue(Language language) { // return value as appropriate } } ```
Is generics the solution to this class design?
[ "c#", "generics" ]
I have the displeasure of generating table creation scripts for Microsoft Access. I have not yet found any documentation describing the syntax for the various types. I have [found the documentation](http://msdn.microsoft.com/en-us/library/bb177893.aspx) for the CREATE TABLE statement in Access, but there is little mention of the types that can be used. For example: ``` CREATE TABLE Foo (MyIdField *FIELDTYPE*) ``` Where FIELDTYPE is one of...? Through trial and error I've found a few, like INTEGER, BYTE, TEXT, and SINGLE, but I would really like to find a page that documents them all, to make sure I'm using the right ones.
I've found the table in the link below pretty useful: <http://allenbrowne.com/ser-49.html> It lists what Access's Gui calls each data type, the DDL name, DAO name and ADO name (they are all different...).
Some of the best documentation from Microsoft on the topic of SQL Data Definition Language (SQL DDL) for ACE/Jet can be found here: [Intermediate Microsoft Jet SQL for Access 2000](http://msdn.microsoft.com/en-us/library/aa140015(office.10).aspx) Of particular interest are the synonyms, which are important for writing portable SQL code. One thing to note is that the Jet 4.0 version of the SQL DDL syntax requires the interface to be in ANSI-92 Query Mode; the article refers to ADO because ADO always uses ANSI-92 Query Mode. The default option for the MS Access interface is ANSI-89 Query Mode, however from Access2003 onwards the UI can be put into ANSI-92 Query Mode. All versions of DAO use ANSI-89 Query Mode. I'm not sure whether SQL DDL syntax was extended for ACE for Access2007. For more details about query modes, see [About ANSI SQL query mode (MDB)](http://office.microsoft.com/en-gb/access/HP030704831033.aspx)
Field types available for use with "CREATE TABLE" in Microsoft Access
[ "sql", "database", "ms-access" ]
I have a Java program running on Windows (a Citrix machine), that dispatches a request to Java application servers on Linux; this dispatching mechanism is all custom. The Windows Java program (let's call it `W`) opens a listen socket to a port given by the OS, say 1234 to receive results. Then it invokes a "dispatch" service on the server with a "business request". This service splits the request and sends it to other servers (let's call them `S1 ... Sn`), and returns the number of jobs to the client synchronously. In my tests, there are 13 jobs, dispatched to a number of servers and within 2 seconds, all servers have finished processing their jobs and try to send the results back to the `W`'s socket. I can see in the logs that 9 jobs are received by `W` (this number varies from test to test). So, I try to look for the 4 remaining jobs. If I do a `netstat` on this Windows box, I see that 4 sockets are open: ``` TCP W:4373 S5:48197 ESTABLISHED TCP W:4373 S5:48198 ESTABLISHED TCP W:4373 S6:57642 ESTABLISHED TCP W:4373 S7:48295 ESTABLISHED ``` If I do a thread dump of `W`, I see 4 threads trying to read from these sockets, and apparently stuck in `java.net.SocketInputStream.socketRead0(Native Method)`. If I go on each of the `S` boxes and do a `netstat`, I see that some bytes are still in the Send Queue. This number of bytes does not move for 15 minutes. (The following is the aggregation of `netstat`s on the different machines): ``` Proto Recv-Q Send-Q Local Address Foreign Addr State tcp 0 6385 S1:48197 W:4373 ESTABLISHED tcp 0 6005 S1:48198 W:4373 ESTABLISHED tcp 0 6868 S6:57642 W:4373 ESTABLISHED tcp 0 6787 S7:48295 W:4373 ESTABLISHED ``` If I do a thread dump of the servers, I see the threads are also stuck in `java.net.SocketInputStream.socketRead0(Native Method)`. I would expect a write, but maybe they're waiting for an ACK? (Not sure here; would it show in Java? Shouldn't it be handled by the TCP protocol directly?) 
Now, the very strange thing is: after 15 minutes (and it's always 15 minutes), the results are received, sockets are closed, and everything continues as normal. This used to always work before. The `S` servers moved to a different data center, so `W` and `S` are no longer in the same data center. Also, `S` is behind a firewall. All ports should be authorized between `S` and `W` (I'm told). The mystery is really the 15 minute delay. I thought that it could be some protection against DDOS? I'm no network expert so I asked for help, but nobody's available to help me. I spent 30 minutes with a guy capturing packets with Wireshark (formerly Ethereal), but for "security reasons," I cannot look at the result. He has to analyze this and get back to me. I asked for the firewall logs; same story. I'm not root or administrator on these boxes, now I don't know what to do... I'm not expecting a solution from you guys, but some ideas on how to progress would be great!
If it worked ok in your local network, then I don't envisage this being a programming issue (re. the `flush()` comments). Is network connectivity between the 2 machines normal otherwise? Can you transfer similar quantities of data via (say) FTP with no problem? Can you replicate this issue by knocking together a client/server script just to send appropriately sized chunks of data, i.e. is the network connectivity good between W and S? Another question. You now have a firewall in between. Could this be a possible bottleneck that wasn't there before? (not sure how that would explain the consistent 15m delay though). Final question. What are your TCP configuration parameters set up to be (on both W and S - I'm thinking about the OS-level parameters)? Is there anything there that would suggest or lead to a 15m figure? Not sure if that's any help.
Right. If you're using a BufferedOutputStream you need to call flush() unless you reach the max buffer size.
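Since this is a Java program, the flush() point is easy to demonstrate in isolation: bytes written to a `BufferedOutputStream` sit in its in-memory buffer and never reach the underlying stream until `flush()` is called (or the buffer fills). A small self-contained sketch:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FlushDemo {
    // Returns {bytes visible in the sink before flush, bytes visible after flush}.
    public static int[] writeThenFlush(byte[] payload) {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(sink); // 8 KB buffer by default
        try {
            out.write(payload);
            int beforeFlush = sink.size(); // payload smaller than the buffer: nothing written yet
            out.flush();
            int afterFlush = sink.size();  // everything has now reached the sink
            return new int[] { beforeFlush, afterFlush };
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Note this only covers the application-side buffer; bytes stuck in the kernel's TCP Send-Q (as shown by `netstat` in the question) are past any `flush()` and are a transport/network matter.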
(network sockets) bytes stuck in Send Queue for 15 minutes; why?
[ "java", "networking", "routes", "firewall" ]
This does not work ``` int blueInt = Color.Blue.ToArgb(); Color fred = Color.FromArgb(blueInt); Assert.AreEqual(Color.Blue,fred); ``` Any suggestions? [Edit] I'm using NUnit and the output is failed: Expected: Color [Blue] But was: Color [A=255, R=0, G=0, B=255] [Edit] This works! ``` int blueInt = Color.Blue.ToArgb(); Color fred = Color.FromArgb(blueInt); Assert.AreEqual(Color.Blue.ToArgb(),fred.ToArgb()); ```
From the [MSDN documentation on `Color.operator ==`](http://msdn.microsoft.com/en-us/library/system.drawing.color.op_equality.aspx): > This method compares more than the > ARGB values of the Color structures. > It also does a comparison of some > state flags. If you want to compare > just the ARGB values of two Color > structures, compare them using the > ToArgb method. I'm guessing the state flags are different.
They won't compare equal, as Color.Blue doesn't equal your colour object; it equals something stored internally, a "new Color(KnownColor.Blue);" to be exact.
Using ToArgb() followed by FromArgb() does not result in the original color
[ "c#", ".net", "colors", "system.drawing.color" ]
How do I take a DataTable and convert it to a List? I've included some code below in both C# and VB.NET, the issue with both of these is that we create a new object to return the data, which is very costly. I need to return a reference to the object. The DataSetNoteProcsTableAdapters.up\_GetNoteRow object does implement the INote interface. I am using ADO.NET, along with .NET 3.5 c# code ``` public static IList<INote> GetNotes() { DataSetNoteProcsTableAdapters.up_GetNoteTableAdapter adapter = new DataSetNoteProcsTableAdapters.up_GetNoteTableAdapter(); DataSetNoteProcs.up_GetNoteDataTable table = new DataSetNoteProcs.up_GetNoteDataTable(); IList<INote> notes = new List<INote>(); adapter.Connection = DataAccess.ConnectionSettings.Connection; adapter.Fill(table); foreach (DataSetNoteProcs.up_GetNoteRow t in table) { notes.Add((INote)t); } return notes; } ``` VB.NET Code ``` Public Shared Function GetNotes() As IList(Of INote) Dim adapter As New DataSetNoteProcsTableAdapters.up_GetNoteTableAdapter Dim table As New DataSetNoteProcs.up_GetNoteDataTable Dim notes As IList(Of INote) = New List(Of INote) adapter.Connection = DataAccess.ConnectionSettings.Connection adapter.Fill(table) For Each t As DataSetNoteProcs.up_GetNoteRow In table notes.Add(CType(t, INote)) Next Return notes End Function ```
No, creating a list is not costly. Compared to creating the data table and fetching the data from the database, it's very cheap. You can make it even cheaper by creating the list after populating the table, so that you can set the initial capacity to the number of rows that you will put in it: ``` IList<INote> notes = new List<INote>(table.Rows.Count); ```
I have another approach that might be worth taking a look at. It's a helper method. Create a custom class file named CollectionHelper:

```
public static IList<T> ConvertTo<T>(DataTable table)
{
    if (table == null)
        return null;

    List<DataRow> rows = new List<DataRow>();

    foreach (DataRow row in table.Rows)
        rows.Add(row);

    // the ConvertTo<T>(IList<DataRow>) overload, which maps each row's
    // columns onto T's properties, is defined in the full sample linked below
    return ConvertTo<T>(rows);
}
```

Imagine you want to get a list of customers. Now you'll have the following caller:

```
List<Customer> myList = (List<Customer>)CollectionHelper.ConvertTo<Customer>(table);
```

The attributes you have in your DataTable must match your Customer class (fields like Name, Address, Telephone).

I hope it helps!

For those who want to know why to use lists instead of DataTables: [link text](https://stackoverflow.com/questions/275269/does-a-datatable-consume-more-memory-than-a-listt)

The full sample: <http://lozanotek.com/blog/archive/2007/05/09/Converting_Custom_Collections_To_and_From_DataTable.aspx>
DataTable to List<object>
[ "", "c#", "dataset", "" ]
I would like to encrypt the passwords on my site using a 2-way encryption within PHP. I have come across the mcrypt library, but it seems so cumbersome. Anyone know of any other methods that are easier, but yet secure? I do have access to the Zend Framework, so a solution using it would do as well. I actually need the 2-way encryption because my client wants to go into the db and change the password or retrieve it.
You should store passwords hashed (and **[properly salted](https://stackoverflow.com/questions/1645161/salt-generation-and-open-source-software/1645190#1645190)**). **There is no excuse in the world that is good enough to break this rule.**

Currently, using [crypt](http://www.php.net/crypt) with CRYPT_BLOWFISH is the best practice. CRYPT_BLOWFISH in PHP is an implementation of the Bcrypt hash. Bcrypt is based on the Blowfish block cipher.

* If your client tries to log in, you hash the entered password and compare it to the hash stored in the DB. If they match, access is granted.
* If your client wants to change the password, they will need to do it through some little script that properly hashes the new password and stores it in the DB.
* If your client wants to recover a password, a new random password should be generated and sent to your client. The hash of the new password is stored in the DB.
* If your clients want to look up the current password, *they are out of luck*. And that is exactly the point of hashing passwords: the system does not know the password, so it can never be 'looked up'/stolen.

[Jeff](https://stackoverflow.com/users/1/jeff-atwood) blogged about it: [You're Probably Storing Passwords Incorrectly](http://www.codinghorror.com/blog/archives/000953.html)

If you want to use a standard library, you could take a look at the [Portable PHP password hashing framework](http://www.openwall.com/phpass/) and make sure you use the CRYPT_BLOWFISH algorithm.

(Generally speaking, messing around with the records in your database directly is asking for trouble. Many people - including very experienced DB administrators - have found that out the hard way.)
Don't encrypt passwords. You never really need to decrypt them, you only need to verify them. Using mcrypt is not much better than doing nothing at all, since if a hacker broke into your site and stole the encrypted passwords, they would probably also be able to steal the key used to encrypt them. Create a single "password" function for your php application where you take the user's password, concatenate it with a salt and run the resulting string through the sha-256 hashing function and return the result. Whenever you need to verify a password, you only need to verify that the hash of the password matches the hash in the database. [<http://phpsec.org/articles/2005/password-hashing.html>](http://phpsec.org/articles/2005/password-hashing.html)
What is the best way to implement 2-way encryption with PHP?
[ "", "php", "security", "encryption", "passwords", "" ]
I want to find out my Python installation path on Windows. For example: ``` C:\Python25 ``` How can I find where Python is installed?
In your Python interpreter, type the following commands:

```
>>> import os
>>> import sys
>>> os.path.dirname(sys.executable)
'C:\\Python25'
```

Also, you can combine these into a single one-line command. Open cmd and enter the following:

```
python -c "import os, sys; print(os.path.dirname(sys.executable))"
```
If you have Python in your environment variables, then you can use the following command in cmd or PowerShell:

```
where python
```

or, in a Unix environment:

```
which python
```

Command line image: [![enter image description here](https://i.stack.imgur.com/sL31x.png)](https://i.stack.imgur.com/sL31x.png)
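On Python 3.3+ the standard library can report the same information from inside the interpreter. A small sketch (the helper name is made up for illustration; `on_path` mirrors what `where`/`which` would find and may be `None`):

```python
import os
import shutil
import sys

def python_locations():
    """Collect plausible answers to 'where is Python installed?'."""
    return {
        "executable": sys.executable,                  # path to the running interpreter
        "install_dir": os.path.dirname(sys.executable),
        "prefix": sys.prefix,                          # root of the installation
        "on_path": shutil.which("python"),             # what `where python` would report
    }
```

Calling `python_locations()` from the interpreter you care about avoids guessing which of several installed Pythons is actually running.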
How can I find where Python is installed on Windows?
[ "", "python", "windows", "path", "" ]
**complaining {** I always end up incredibly frustrated when I go to profile my code using Visual Studio 2008's profiler (from the Analyze menu option). It is one of the most poorly designed features of VS, in my opinion, and I cringe every time I need to use it. **}**

I have a few questions concerning its use; I'm hoping you guys can give me some pointers :-)

1. Do you need to have your projects built in Debug or Release to profile them? One dialog (that I have no idea how to get back to) mentioned I should profile under Release. Okay - I do that - when I go to run the code, it tells me I'm missing PDB files. Awesome. So I go back to Debug mode, and I try to run it, and it tells me that half of my projects need to be recompiled with the /PROFILE switch on.
2. Is it possible to profile C++/CLI projects? With the /PROFILE switch on, half of the time I get absolutely no output from my C++/CLI projects.
3. Why, when attempting to profile C# projects, do they show up in the Report under Modules view Name list as 0x0000001, 0x0000002, 0x0000003, etc? Really, VS? Really? You can't take a guess at the names of my modules?
4. For that matter, why are function names reported as 0x0A000011, 0x06000009, 0xA0000068, etc?
5. Why, oh why, does VS rebuild EVERYTHING in the dependency tree of the executable being profiled? It might not be a problem if you have 5-6 projects in your solution, but when you have 70, **it's almost quicker to not even bother with the profiler**.
6. Can you recommend any good guides on using the VS2008 profiler?
> 1. Do you need to have your projects built in Debug or Release to profile them?

Normally you use Release mode.

> 6. Can you recommend any good guides on using the VS2008 profiler?

Step 1, download [ANTS Profiler](http://www.red-gate.com/Products/ants_profiler/index.htm). Step 2, follow the easy to use on screen instructions. Step 3, look at the easy to read reports.

Seriously, the Microsoft profiler is garbage compared to ANTS.
If you're finding it difficult to use, there's a really great .NET profiler called [nprof](http://nprof.sourceforge.net/Site/Description.html), and if you're debugging non-CLR projects, AMD has a really spectacular statistical profiler called [Code Analyst](http://www.amd.com/codeanalyst/). Both are free(!), and exceedingly easy to use. A much nicer alternative, and I expect from your post above you're about ready to ditch the VS builtin profiler anyway :)
Do I just completely misunderstand how to use Visual Studio's 2008 profiler?
[ "", "c#", "visual-studio-2008", "profiling", "c++-cli", "" ]
In the following code: ``` Expression<Func<int, bool>> isOdd = i => (i & 1) == 1; ``` ...what is the meaning of `(i & 1) == 1`?
[Bitwise AND](http://msdn.microsoft.com/en-us/library/z0zec0b2(VS.71).aspx). In this case, checking whether the last bit in `i` is set. If it is, it must be an odd number since the last bit represents 1 and all other bits represent even numbers.
& is a bitwise AND operator, AND being one of the fundamental operations in a binary system. AND means 'if both A and B are on'. The real-world example is two switches in series: current will only pass through if both are allowing current through.

In a computer, these aren't physical switches but semiconductors, and their equivalents are called [logic gates](http://en.wikipedia.org/wiki/Logic_gate). They do the same sorts of things as the switches - react to current or no current.

When applied to integers, each bit in one number is combined with the corresponding bit in the other number. So to understand the bitwise operator AND, you need to convert the numbers to binary, then do the AND operation on every pair of matching bits. That is why:

```
    00011011 (odd number)
AND 00000001 (& 1)
 == 00000001 (results in 1)
```

Whereas

```
    00011010 (even number)
AND 00000001 (& 1)
 == 00000000 (results in 0)
```

The (& 1) operation therefore compares the right-most bit to 1 using AND logic. All the other bits are effectively ignored because anything AND nothing is nothing. This is equivalent to checking if the number is an odd number (all odd numbers have a right-most bit equal to 1).

*The above is adapted from a similar answer I wrote to [this question](https://stackoverflow.com/questions/600202/understanding-phps-operator/601349#601349).*
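For illustration, the same odd-number check written out in Python (the expression is identical in C#):

```python
def is_odd(i):
    # a binary number is odd exactly when its least-significant bit is 1,
    # so AND-ing with 1 isolates that bit
    return (i & 1) == 1
```

`is_odd(n)` agrees with `n % 2 == 1` for non-negative integers, which is why `(i & 1) == 1` is a common odd test.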
What is the meaning of the & operator?
[ "", "c#", ".net", "linq", "operators", "" ]
I have a complete separation of my Entity Framework objects and my POCO objects; I just translate them back and forth, i.e.:

```
// poco
public class Author
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
}
```

and then I have an EF object "Authors" with the same properties. So I have my business object:

```
var author = new Author { UserName="foo", Id="Guid thats in the db" };
```

and I want to save this object, so I do the following:

```
var dbAuthor = new dbAuthor { Id=author.Id, UserName=author.UserName };
entities.Attach(dbAuthor);
entities.SaveChanges();
```

but this gives me the following error:

> An object with a null EntityKey value cannot be attached to an object context.

**EDIT:** It looks like I have to use `entities.AttachTo("Authors", dbAuthor);` to attach without an EntityKey, but then I have hard-coded magic strings, which will break if I change my entity set names at all, and I won't have any compile-time checking... Is there a way I can attach that keeps compile-time checking? I would hope I'd be able to do this, as hard-coded strings killing off compile-time validation would suck =)
Have you tried using [AttachTo](http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext.attachto.aspx) and specifying the entity set?

```
entities.AttachTo("Authors", dbAuthor);
```

where `"Authors"` would be your actual entity set name.

Edit: Yes, there is a better way (well, there should be). The designer should have generated ["Add" methods](http://msdn.microsoft.com/en-us/library/system.data.objects.objectcontext.addobject.aspx) on the ObjectContext for you which translate out to the call above. So you should be able to do:

```
entities.AddToAuthors(dbAuthor);
```

which should literally be:

```
public void AddToAuthors(Authors authors)
{
    base.AddObject("Authors", authors);
}
```

defined in the whateverobjectcontext.designer.cs file.
Just seeing this now. If you want to Attach() to the ObjectContext, i.e. convince the entity framework that an entity exists in the database already, and you want to avoid using magic strings i.e. ``` ctx.AttachTo("EntitySet", entity); ``` You can try two possibilities based on extension methods, both of which definitely make life more bearable. The first option allows you to write: ``` ctx.AttachToDefault(entity); ``` and is covered in here: [Tip 13 - How to attach an entity the easy way](http://blogs.msdn.com/alexj/archive/2009/04/15/tip-13-how-to-attach-an-entity-the-easy-way.aspx) The second option allows you to write: ``` ctx.EntitySet.Attach(entity); ``` and is covered here: [Tip 16 - How to mimic .NET 4.0's ObjectSet today](http://blogs.msdn.com/alexj/archive/2009/05/01/tip-16-how-to-mimic-net-4-0-s-objectset-t-today.aspx) As you can see both are really easy to use and avoid strings altogether. Hope this helps Alex
How can I attach an Entity Framework object that isn't from the database?
[ "", "c#", "entity-framework", ".net-3.5", "linq-to-entities", "" ]
While testing some functions to convert strings between wchar_t and utf8, I met the following weird result with Visual C++ Express 2008:

```
std::wcout << L"élève" << std::endl;
```

prints out "ÚlÞve", which is obviously not what is expected. How can that be? How am I supposed to deal with such a "feature"?
The C++ compiler does not support Unicode in code files. You have to replace those characters with their escaped versions instead. Try this: ``` std::wcout << L"\x00E9l\x00E8ve" << std::endl; ``` Also, your console must support Unicode as well. UPDATE: It's not going to produce the desired output in your console, because the console does not support Unicode.
I found these related questions with useful answers [Is there a Windows command shell that will display Unicode characters?](https://stackoverflow.com/questions/379240/is-there-a-windows-command-shell-that-will-display-unicode-characters) [How can I embed unicode string constants in a source file?](https://stackoverflow.com/questions/442735/how-can-i-embed-unicode-string-constants-in-a-source-file)
Unexpected output of std::wcout << L"élève"; in Windows Shell
[ "", "c++", "unicode", "wchar-t", "mojibake", "" ]
I have a simple application that shows a list of many items, where the user can display a detail for each item, which is obtained by Ajax. However, if the user closes the detail and opens it again, the application makes another Ajax request to get the same content again.

Is there any simple solution to prevent this by caching requests on the client, so that when the user displays the same detail again, the content is loaded from the cache? Preferably using **jQuery**.

I think this could be solved with a proxy object, which would store the request when it's made for the first time; when the request is made again, the proxy would just return the previous result without making another Ajax request. But I'm looking for some simpler solution, where I won't have to implement all this myself.
Take a look at these jQuery plugins:

* <http://plugins.jquery.com/project/cache>
* <http://plugins.jquery.com/project/Tache>
* <http://plugins.jquery.com/project/jCache>

## jCache sample:

```
// [OPTIONAL] Set the max cached item number, for example 20
$.jCache.maxSize = 20;

// Start playing around with it:
// Put an item into cache:
$.jCache.setItem(theKey, theValue);

// Retrieve an item from cache:
var theValue = $.jCache.getItem(theKey);

// Clear the cache (well, I think most of us don't need this case):
$.jCache.clear();
```
IMHO the simplest way is to create a global array:

```
var desc_cache = [];
```

and then create a function like this:

```
function getDesc(item) {
    if (desc_cache[item]) {
        return desc_cache[item];
    } else {
        // in the $.ajax success callback, save the response to
        // desc_cache[item] before using it
        $.ajax(...);
    }
}
```

After getting the Ajax data, save the results to desc_cache.
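The same cache-on-miss idea, shown as a language-neutral Python sketch (hypothetical names; `fetch` stands in for the Ajax call):

```python
_cache = {}

def get_detail(item_id, fetch):
    """Return the detail for item_id, calling fetch(item_id) only on a cache miss."""
    if item_id not in _cache:
        _cache[item_id] = fetch(item_id)   # first request: fetch and remember
    return _cache[item_id]                 # later requests: served from the cache
```

The point is the same as in the jQuery version: the expensive call runs once per key, and repeats are answered locally.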
How to implement cache for Ajax requests
[ "", "javascript", "ajax", "caching", "jquery", "" ]
Just started working on a basic grid analysis algorithm in JavaScript, but I have come up against an error that is perplexing me.

```
var max = 9;
var testArray = new Array(
    ['7', '3', '9', '6', '4', '1', '5', '2', '8'],
    ['1', '8', '2', '7', '5', '3', '4', '6', '9'],
    ['9', '5', '7', '3', '8', '2', '1', '4', '6'],
    ['3', '1', '4', '9', '6', '7', '2', '8', '5'],
    ['6', '2', '8', '5', '1', '4', '9', '3', '7'],
    ['5', '4', '6', '2', '9', '8', '3', '7', '1'],
    ['8', '7', '1', '4', '3', '5', '6', '9', '2'],
    ['2', '9', '3', '1', '7', '6', '8', '5', '4']
);

function checkYoSelf(myGrid) {
    var i;
    var j;
    var horizLine = new String;

    for( i = 0; i <= (max - 1); i++ ) {
        for( j = 0; j <= (max - 1); j++) {
            document.write(i+"<br />");
            horizLine += myGrid[i][j];
        }
        var test = RegExp(i, "ig");
        var result = new Array(horizLine.match(test));
        if( result.length > 1 ) {
            alert("fail");
        }
    }
}
```

The HTML file has:

```
<a href='#' onclick="checkYoSelf(testArray);">check</a>
```

According to Firebug, `myGrid[i]` is undefined, but I'm not sure why this should be. What am I doing wrong?
Well, this is working for me... I've just replaced your "max" variable with something more dynamic:

```
<script type="text/javascript">
    var testArray = new Array(
        ['7', '3', '9', '6', '4', '1', '5', '2', '8'],
        ['1', '8', '2', '7', '5', '3', '4', '6', '9'],
        ['9', '5', '7', '3', '8', '2', '1', '4', '6'],
        ['3', '1', '4', '9', '6', '7', '2', '8', '5'],
        ['6', '2', '8', '5', '1', '4', '9', '3', '7'],
        ['5', '4', '6', '2', '9', '8', '3', '7', '1'],
        ['8', '7', '1', '4', '3', '5', '6', '9', '2'],
        ['2', '9', '3', '1', '7', '6', '8', '5', '4']
    );

    function checkYoSelf(myGrid) {
        var i;
        var j;
        var horizLine = new String;
        var maxRows = myGrid.length;

        for( i = 0; i < maxRows; i++ ) {
            var maxColumns = myGrid[i].length;
            for( j = 0; j < maxColumns; j++) {
                document.write(i+"<br />");
                horizLine += myGrid[i][j];
            }
            var test = RegExp(i, "ig");
            var result = new Array(horizLine.match(test));
            if( result.length > 1 ) {
                alert("fail");
            }
        }
    }
</script>

<a href='#' onclick="checkYoSelf(testArray);">check</a>
```

Not sure what you're trying to do with this, but at least it doesn't give any error.
Where is `max` coming from?

> max is defined globally (in this case max = 9)

You only have 8 rows.
Why is firebug telling me my JavaScript array is undefined?
[ "", "javascript", "" ]
Let's say you have a function that modifies a variable. Should you write it like this: `void myfunc(int *a)` or like this `void myfunc(int &a)`? The former forces you to call the function with `myfunc(&b)` so the caller is aware that `b` will be modified, but the latter is shorter and can be called simply with `myfunc(b)`. So which is better to use? Is there something else I'm missing?
Pointers (i.e. the '*') should be used where passing NULL is meaningful. For example, you might use a NULL to represent that a particular object needs to be created, or that a particular action doesn't need to be taken. Or if it ever needs to be called from non-C++ code (e.g. for use in shared libraries).

E.g. the libc function `time_t time (time_t *result);`

If `result` is not NULL, the current time will be stored. But if `result` is NULL, then no action is taken.

If the function that you're writing doesn't need to use NULL as a meaningful value, then using references (i.e. the '&') will probably be less confusing - assuming that is the convention that your project uses.
Whenever possible I use references over pointers. The reason for this is that it's a lot harder to screw up a reference than a pointer. People can always pass NULL to a pointer parameter, but there is no such equivalent for a reference.

The only real downside is that reference parameters in C++ lack call-site documentation. Some people believe that makes it harder to understand code (and I agree to an extent). I usually define the following in my code and use it for fake call-site documentation:

```
#define byref

...
someFunc(byref x);
```

This of course doesn't enforce call-site documentation. It just provides a very lame way of documenting it.

I did some experimentation with a template which enforces call-site documentation. This is more for fun than for actual production code though.

<http://blogs.msdn.com/jaredpar/archive/2008/04/03/reference-values-in-c.aspx>
C++ functions: ampersand vs asterisk
[ "", "c++", "function", "pointers", "" ]
I am using this code to detect whether modifier keys are being held down in the KeyDown event of a text box:

```
private void txtShortcut_KeyDown(object sender, KeyEventArgs e)
{
    if (e.Shift || e.Control || e.Alt)
    {
        txtShortcut.Text = (e.Shift.ToString() + e.Control.ToString() + e.Alt.ToString() + e.KeyCode.ToString());
    }
}
```

How would I display the actual modifier key name instead of the bool result, and also append the name of any non-modifier key (like the letter A) being pressed at the same time? Is there a way to do it all in the same `txtShortcut.Text = ...;` line?
You can check the `Control.ModifierKeys` - because that is an enum it should be more human friendly. Alternatively, just ``` string s = (e.Shift ? "[Shift]+" : "") + (e.Control ? "[Ctrl]+" : "") + (e.Alt ? "[Alt]+" : "") + e.KeyCode; ```
Use the ?: operator:

```
txtShortcut.Text = (e.Shift ? "Shift " : "")
    + (e.Control ? "Control " : "")
    + (e.Alt ? "Alt " : "")
    + e.KeyCode.ToString();
```
C#: How do you display the modifier key name + non-modifier key name in this keydown event?
[ "", "c#", "" ]
I have a **Question** class that has a public List property that can contain several **Answers**. I have a question repository which is responsible for reading the questions and their answers from an XML file. So I have a collection of Questions (a List), with each Question object having a collection of Answers, and I'd like to query this collection of Questions for an Answer (i.e. by its Name) using LINQ. I don't know how to do this properly. I could do it with a foreach, but I'd like to know whether there is a pure LINQ way, since I'm learning it.
To find an answer. ``` questions.SelectMany(q => q.Answers).Where(a => a.Name == "SomeName") ``` To find the question of an answer. ``` questions.Where(q => q.Answers.Any(a => a.Name == "SomeName")) ``` In fact you will get collections of answers or questions and you will have to use `First()`, `FirstOrDefault()`, `Single()`, or `SingleOrDefault()` depending on your needs to get one specific answer or question.
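For intuition, here is the same pair of queries sketched in Python comprehensions (hypothetical `Question`/`Answer` shapes, purely illustrative): the first mirrors `SelectMany` + `Where`, the second mirrors `Where` + `Any`.

```python
from collections import namedtuple

Answer = namedtuple("Answer", "name")
Question = namedtuple("Question", "text answers")

def find_answers(questions, name):
    # SelectMany(...).Where(...): flatten every question's answers, keep matches
    return [a for q in questions for a in q.answers if a.name == name]

def find_questions(questions, name):
    # Where(... Any ...): questions that contain at least one matching answer
    return [q for q in questions if any(a.name == name for a in q.answers)]
```

Both return collections, matching the note above about then picking one element with `First()`/`Single()` (here, by indexing).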
``` from question in Questions from answer in question.Answers where answer.Name == something select question // or select answer ```
linq question: querying nested collections
[ "", "c#", ".net", "linq", "collections", "linq-to-xml", "" ]
I'm writing a simple messaging program, where there is a table of messages which can be claimed by users and have stuff done to them by that user. It isn't predestined which user will claim a given message, so I want a query to select the first of all the available messages, which I have, and then one to mark that message as taken, which I also have.

The problem is that I don't want two users using it at the same time to claim the same message, so I want to run the two statements consecutively, without having to go back to the program to find out what to run next in between statements. I believe I can run two consecutive statements by separating them with semi-colons, but I want to use data returned by the first query as part of the second.

Variables would be perfect, but as far as I'm aware they don't exist in SQL. Is there any way I can preserve state between queries?
Transactions are a good way to go, like le dorfier says, but there are alternatives.

You could do the update first, i.e. tagging a message with the user id or similar. You don't mention which SQL flavour you're using, but in MySQL, I think it'd look something like this:

```
UPDATE message
SET user_id = ...
WHERE user_id = 0 -- Ensures no two users get the same message
LIMIT 1
```

In MS SQL, it'd be something along the lines of:

```
WITH q AS (
    SELECT TOP 1 *
    FROM message m
    WHERE user_id = 0
)
UPDATE q
SET user_id = 1
```

/B
This is what BEGIN TRAN and COMMIT TRAN are for. Place the statements you want to protect within a transaction.
Consecutive SQL statements with state
[ "", "sql", "database", "variables", "state", "" ]
I'm looking for a way to loop through the columns of a table to generate an output as described below. The table looks like that: ``` ID Name OPTION1 OPTION2 OPTION3 OPTION4 OPTION5 1 MyName1 1 0 1 1 0 2 MyName2 0 0 1 0 0 ``` And the output looks like that: ``` MyName1 -> OPTION1, OPTION3, OPTION4 MyName2 -> OPTION3 ``` Any directions of doing this simply would be greatly appreciated. Otherwise, I suppose I'll have to use a cursor or a temporary table... The database engine is MSSQL. The reason I'm doing formatting at the database level is to feed its output into a limited programmable environment. **Update**: the ouput can by in any form, a string or rows of strings. **Update**: Would the be a way to accomplish that by building a string using @str = @str + ... ? **Update**: I changed the output... this should be easier. Thanks!
Well, in the case of a known number of columns, you can do:

```
SELECT MyName + ' ->'
    + case OPTION1 when 1 then ' OPTION1' else '' end
    + case OPTION2 when 1 then ' OPTION2' else '' end
    + ...
FROM Table
```

If the columns are unknown when you create the query, I'd probably still go that way with some dynamically created SQL. The advantage is that the code probably does what you want and is very simple.
You might want to have a look at PIVOT Tables.
Loop through columns SQL
[ "", "sql", "sql-server", "csv", "loops", "" ]
I'm using Python and PIL. I have images in RGB and I would like to know those who contain only one color (say #FF0000 for example) or a few very close colors (#FF0000 and #FF0001). I was thinking about using the histogram but it is very hard to figure out something with the 3 color bands, so I'm looking for a more clever algorithm. Any ideas? > ImageStat module is THE answer! Thanks Aaron. > I use ImageStat.var to get the variance and it works perfectly. Here is my piece of code: ``` from PIL import Image, ImageStat MONOCHROMATIC_MAX_VARIANCE = 0.005 def is_monochromatic_image(src): v = ImageStat.Stat(Image.open(src)).var return reduce(lambda x, y: x and y < MONOCHROMATIC_MAX_VARIANCE, v, True) ```
Try the [ImageStat module](http://www.pythonware.com/library/pil/handbook/imagestat.htm). If the values returned by `extrema` are the same, you have only a single color in the image.
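To make that concrete, here is the shape of the check as a library-free sketch (illustration only: `extrema` stands for the per-band `(min, max)` list that `ImageStat.Stat(img).extrema` returns):

```python
def is_single_color(extrema):
    """extrema: one (min, max) pair per band, as ImageStat.Stat(img).extrema returns."""
    # exactly one colour in the image <=> min == max in every band
    return all(lo == hi for lo, hi in extrema)
```

For "a few very close colors", the same idea loosens to `all(hi - lo <= tolerance for lo, hi in extrema)`, or the variance-based check from the question.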
First, you should define a distance between two colors. Then you just have to verify, for each pixel, that its distance to your color is small enough.
How to check if an RGB image contains only one color?
[ "", "python", "image", "colors", "python-imaging-library", "" ]
I'm using JavaScript to dynamically generate a dialogue box (it's a div element), containing a textbox and a submit button. I plan on submitting the value in the textbox to another page using AJAX. My problem is that I can generate my textbox just fine, but I can't get the value from it. innerHTML comes back blank every time. I'm not sure what I'm doing wrong. ``` // Generate dialogue box using div function create_div_dynamic() { //Create the div element dv = document.createElement('div'); //unique tags var unique_div_id = 'mydiv' + Math.random() * .3245; var unique_textbox_id = 'mytext' + Math.random() * .3245; //Set div id dv.setAttribute('id',unique_div_id); //Set div style dv.style.position = 'absolute'; dv.style.left = '100 px'; dv.style.top = '100 px'; dv.style.width = '500px'; dv.style.height = '100px'; dv.style.padding = '7px'; dv.style.backgroundColor = '#fdfdf1'; dv.style.border = '1px solid #CCCCCC'; dv.style.fontFamily = 'Trebuchet MS'; dv.style.fontSize = '13px'; //Create textbox element txt = document.createElement('input'); txt.setAttribute('id',unique_textbox_id); txt.setAttribute('type','text'); txt.style.marginRight = '10px'; //Add textbox element to div dv.appendChild(txt) //Add the div to the document document.body.appendChild(dv); dv.innerHTML += '<input type="button" id="mysubmit" value="Read Textbox" onclick="javascript:alert(\'' + document.getElementById(unique_textbox_id).innerHTML + '\');" />'; } ```
Input elements don't have an **innerHTML** property. Just read the **value** property, as you would with any other form element.

```
document.getElementById(unique_textbox_id).value
```
The input type="text" fields have no innerHTML, they are usually represented as self-closing tags. Use the `value` attribute instead: ``` document.getElementById(unique_textbox_id).value ```
Get innerHTML value from dynamically generated textbox (in javascript)
[ "", "javascript", "textbox", "" ]
I have a large number of name - value pairs (approx 100k) that I need to store in some sort of cache (say a hash map) where the value is a string with an average of about 30k bytes in size. Now I know for a fact that a large number of the values have exactly the same string data. In order to avoid having to allocate the identical string data several times, I would like to somehow reuse a previously allocated string thus consuming less memory. In addition this needs to be reasonably fast. i.e. scanning through all the previously allocated values one-by-one is not an option. Any recommendations on how I could solve this problem?
Do *not* use String.intern (there have been various memory issues related to it through the years). Instead, create your own cache, similar to String.intern. Basically, you want a Map where each key maps to itself. Then, before caching any string, you "intern" it:

```
private Map<String, WeakReference<String>> myInternMap = new WeakHashMap<String, WeakReference<String>>();

public String intern(String value) {
  synchronized(myInternMap) {
    WeakReference<String> curRef = myInternMap.get(value);
    String curValue = ((curRef != null) ? curRef.get() : null);
    if(curValue != null) {
      return curValue;
    }

    myInternMap.put(value, new WeakReference<String>(value));
    return value;
  }
}
```

Note, you use WeakReferences for the keys and values so that you don't keep references to strings which you are no longer using.
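For comparison, the same intern-cache idea sketched in Python, with a plain dict standing in for the WeakHashMap (so, unlike the Java version, entries here are never garbage-collected; illustration only):

```python
_intern_cache = {}

def intern_string(value):
    # return the previously cached equal string if there is one,
    # otherwise store this instance and hand it back
    return _intern_cache.setdefault(value, value)
```

Running every string through `intern_string` before putting it in the map means equal values share one object instead of duplicating 30k-byte payloads.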
[String.intern()](http://java.sun.com/javase/6/docs/api/java/lang/String.html#intern%28%29) will help you here (most likely). It will resolve multiple instances of the *same* string down to one copy.

EDIT: I suggested this would 'most likely' help. In what scenarios will it not?

Interning strings will have the effect of storing those interned string representations *permanently*. If the problem domain is a one-shot process, this may not be an issue. If it's a long-running process (such as a web app), then you may well have a problem.

I would hesitate to say *never* use interning (I would hesitate to say *never* do anything). However, there are scenarios where it's not ideal.
Optimize memory usage of a collection of Strings in Java
[ "", "java", "string", "memory-management", "" ]
The authentication system for an application we're using right now uses a two-way hash that's basically little more than a glorified caesar cypher. Without going into too much detail about what's going on with it, I'd like to replace it with a more secure encryption algorithm (and it needs to be done server-side). Unfortunately, it needs to be two-way and the algorithms in hashlib are all one-way. What are some good encryption libraries that will include algorithms for this kind of thing?
If it's two-way, it's not really a "hash". It's encryption (and from the sounds of things this is really more of a 'salt' or 'cypher', *not* real encryption.) A hash is one-way *by definition*. So rather than something like MD5 or SHA1 you need to look for something more like PGP. Secondly, can you explain the reasoning behind the 2-way requirement? That's not generally considered good practice for authentication systems any more.
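To make the "verify, don't decrypt" flow concrete, here is a minimal standard-library sketch (the function names are made up for illustration, and real systems should prefer a slow KDF such as bcrypt or PBKDF2 over bare salted SHA-256):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for a password. Illustration only: use a real KDF in production."""
    if salt is None:
        salt = os.urandom(16)          # fresh random salt per password
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify_password(password, salt, expected_digest):
    # recompute with the stored salt and compare in constant time
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)
```

The database stores only `salt` and `digest`; login recomputes and compares, and the plaintext never needs to be recoverable.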
I assume you want an encryption algorithm, not a hash. The [PyCrypto](https://pypi.python.org/pypi/pycrypto) library offers a pretty wide range of options. It's in the middle of moving over to a [new maintainer](http://www.dlitz.net/software/pycrypto/), so the docs are a little disorganized, but [this](http://www.dlitz.net/software/pycrypto/doc/#crypto-cipher-encryption-algorithms) is roughly where you want to start looking. I usually use AES for stuff like this.
What's a good two-way encryption library implemented in Python?
[ "", "python", "encryption", "" ]
I have a need to include `*/` in my JavaDoc comment. The problem is that this is also the same sequence for closing a comment. What the proper way to quote/escape this? Example: ``` /** * Returns true if the specified string contains "*/". */ public boolean containsSpecialSequence(String str) ``` **Follow up**: It appears I can use `&#47;` for the slash. The only downside is that this isn't all that readable when viewing the code directly in a text editor. ``` /** * Returns true if the specified string contains "*&#47;". */ ```
Use HTML escaping. So in your example:

```
/**
 * Returns true if the specified string contains "*&#47;".
 */
public boolean containsSpecialSequence(String str)
```

`&#47;` escapes as a "/" character. Javadoc should insert the escaped sequence unmolested into the HTML it generates, and that should render as "\*/" in your browser. If you want to be very careful, you could escape both characters: `&#42;&#47;` translates to `*/`

**Edit:**

> *Follow up: It appears I can use &#47; for the slash. The only downside is that this isn't all that readable when viewing the code directly.*

So? The point isn't for your code to be readable, the point is for your code **documentation** to be readable. Most Javadoc comments embed complex HTML for explanation. Hell, C#'s equivalent offers a complete XML tag library. I've seen some pretty intricate structures in there, let me tell you.

**Edit 2:** If it bothers you too much, you might embed a non-Javadoc inline comment that explains the encoding:

```
/**
 * Returns true if the specified string contains "*&#47;".
 */
// returns true if the specified string contains "*/"
public boolean containsSpecialSequence(String str)
```
Nobody mentioned [{@literal}](http://docs.oracle.com/javase/7/docs/technotes/guides/javadoc/whatsnew-1.5.0.html). This is another way to go:

```
/**
 * Returns true if the specified string contains "*{@literal /}".
 */
```

Unfortunately you cannot wrap the whole `*/` sequence in a single `{@literal}`, because the `*/` inside it would still terminate the comment; that is why only the `/` is escaped above. With some drawbacks, this also fixes:

> The only downside is that this isn't all that readable when viewing the code directly in a text editor.
How to quote "*/" in JavaDocs
[ "", "java", "comments", "javadoc", "" ]
I'm developing on my Mac notebook, I use MAMP. I'm trying to set a cookie with PHP, and I can't. I've left off the domain, I've tried using "\" for the domain. No luck. ``` setcookie("username", "George", false, "/", false); setcookie("name","Joe"); ``` I must be missing something obvious. I need a quick and simple solution to this. Is there one? I'm not doing anything fancy, simply loading (via MAMP) the page, <http://localhost:8888/MAMP/lynn/setcookie.php> That script has the setcookie code at the top, prior to even writing the HTML tags. (although I tried it in the BODY as well). I load the page in various browsers, then open the cookie listing. I know the browsers accept cookies, because I see current ones in the list. Just not my new one.
From the docs:

> setcookie() defines a cookie to be sent along with the rest of the HTTP headers. Like other headers, cookies must be sent before any output from your script (this is a protocol restriction). This requires that you place calls to this function prior to any output, including `<html>` and `<head>` tags as well as any whitespace.

Is that it?

**edit:** Can you see the cookie being sent by the server, e.g. by using the Firefox extension *Tamper Data*, or telnet? Can you see it being sent back by the browser on the next request? What's the return value of setcookie()? Is it not working in all browsers, or just in some?
``` <?php ob_start(); if (isset($_COOKIE['test'])) { echo 'cookie is fine<br>'; print_r($_COOKIE); } else { setcookie('test', 'cookie test content', time()+3600); /* expire in 1 hour */ echo 'Trying to set cookie. Reload page plz'; } ``` Try this.
How do I set a cookie on localhost with MAMP + MacOSx + PHP?
[ "", "php", "cookies", "mamp", "" ]
I am not sure if this is possible but I want to iterate through a class and set a field member property without referring to the field object explicitly:

```
public class Employee
{
    public Person _person = new Person();

    public void DynamicallySetPersonProperty()
    {
        MemberInfo[] members = this.GetType().GetMembers();
        foreach (MemberInfo member in members.Where(a => a.Name == "_person")) //get the _person field
        {
            Type type = member.GetType();
            PropertyInfo prop = type.GetProperty("Name");
            //good, this works, now to set a value for it

            //this line does not work - the error is "property set method not found"
            prop.SetValue(member, "new name", null);
        }
    }
}

public class Person
{
    public string Name { get; set; }
}
```

In the answer that I marked as the answer you need to add:

```
public static bool IsNullOrEmpty(this string source)
{
    return source == null || source.Length == 0;
}
```
``` public class Person { public string Name { get; set; } } public class Employee { public Person person = new Person(); public void DynamicallySetPersonProperty() { var p = GetType().GetField("person").GetValue(this); p.GetType().GetProperty("Name").SetValue(p, "new name", null); } } ```
Here's a complete working example:

```
using System;
using System.Reflection;

public class Person
{
    public string Name { get; set; }
}

class Program
{
    static void PropertySet(object p, string propName, object value)
    {
        Type t = p.GetType();
        PropertyInfo info = t.GetProperty(propName);
        if (info == null)
            return;
        if (!info.CanWrite)
            return;
        info.SetValue(p, value, null);
    }

    static void PropertySetLooping(object p, string propName, object value)
    {
        Type t = p.GetType();
        foreach (PropertyInfo info in t.GetProperties())
        {
            if (info.Name == propName && info.CanWrite)
            {
                info.SetValue(p, value, null);
            }
        }
    }

    static void Main(string[] args)
    {
        Person p = new Person();
        PropertySet(p, "Name", "Michael Ellis");
        Console.WriteLine(p.Name);
        PropertySetLooping(p, "Name", "Nigel Mellish");
        Console.WriteLine(p.Name);
    }
}
```

EDIT: added a looping variant so you could see how to loop through property info objects.
c# - How to iterate through classes fields and set properties
[ "", "c#", "class", "properties", "set", "field", "" ]
Our build script creates an HTML log with some embedded JavaScript. When I open that in Internet Explorer, I get the yellow warning bar saying that IE has blocked running "scripts or activex controls". Since it is a local file, I cannot add it to trusted sites (IE expects a domain here). I do not want to change security settings for the default zone. Any idea how to permanently unblock it? IE version is 7.0.5730.13 on XP Pro.
Embed the [Mark of the Web](http://msdn.microsoft.com/en-us/library/ms537628.aspx): ``` <!-- saved from url=(0016)http://localhost --> ```
You could add [The Mark of the Web](http://msdn.microsoft.com/en-us/library/ms537628(VS.85).aspx) to the document so that IE will act as if it's from a certain security zone.
How to tell IE a HTML file on my disk is not a security risk?
[ "", "javascript", "security", "internet-explorer", "" ]
What is the best way to handle exceptions occurring in catch statements? Currently we are writing the exception message to the response object's Write method. But I want a solution by which the user gets only a general error message that something has gone wrong, while we still capture a detailed description of the error. I would like to know the different practices employed for exception handling in C#.
For the web project, and to guard against any exceptions getting pushed down to the browser, you could enable Health Monitoring and also the use of Custom Error Pages. If you are expecting the possibility of an exception inside the catch statement, simply nest another try/catch in there so that it fails gracefully. Also, in global.asax you can subscribe to the Application_Error event, which will be called for any unhandled exception.

Andrew

Health Monitoring in ASP.NET: <https://web.archive.org/web/20211020102851/https://www.4guysfromrolla.com/articles/031407-1.aspx>
Good for you for wanting to fix this. Writing exception messages directly back to the user can pose a significant security risk -- as you've figured out already, exception messages can contain lots of information that could help a malicious user gain access to your site. I'd take a look at [ELMAH](http://msdn.microsoft.com/en-us/library/aa479332.aspx) (Error Logging Modules and Handlers); it's an easy way to add logging of detailed errors to your web app.
Methods to handle exceptions in a web project using C#
[ "", "c#", "exception", "" ]
Here's the problem....I have three components: a Page that contains a User Control, and a Server-side control which is in the user control. The user control has a handful of events that the page listens to which change what the server-side control does. The server control essentially creates a TV-Guide looking list filled with many many smaller ASP.NET controls that need to have events. All of these controls are created dynamically (server side obviously). Populating this server side control requires operations that are intensive in nature and should therefore be only done once per postback. Event1 and Event2 fire and basically will change things about the serverside controls rendering (sorting, filtering etc), so when they fire I need to call PopulateControls for sure. Here's the problem: I cannot find the best place to put the call to PopulateControls. If I put it in the PageLoad, it fires and draws the list before any events. Then my events have to call PopulateControls themselves to respond to the event. If I put the call in PreRender, the server-side control's events do not fire because from what I read, they need to be created before or during PageLoad. So where do I put this call so that I only have to call it once and so that it can respond to the proper events? Here's some pseudo code:

```
public class MyPage : System.Web.UI.Page
{
    protected UserControl MyUserControl;

    public Page_Load()
    {
        // Wire up event handlers
        MyUserControl.Event1 += OnEvent1;
        MyUserControl.Event2 += OnEvent2;
    }

    public PreRender()
    {
        PopulateControls();
    }

    private PopulateControls()
    {
        // Do intensive operation to populate controls
    }

    protected OnEvent1() { }

    protected OnEvent2() { }
}
```
I might be missing something here, but I will take a crack at this anyway. You have to create the control hierarchy the same as it was on the initial rendering for your IDs to match and allow your events to fire properly. It sounds like you need to do some reordering after an event fires, so you will need to create the controls twice.

**Edit:** You need to be able to recreate your controls and the control hierarchy on postback. You should send enough information to the client so that you can recreate the controls correctly on postback. This can be done by saving information in ViewState if necessary. You should only recompute the control information when it is required — in your case, because of some postback event. The basic flow should be like this. In your Load event, if you don't have the ViewState needed to create your controls, call your populate method; otherwise create them from ViewState. When a postback occurs, the Load event will create them, but they are being reconstructed from ViewState, so this should be cheap. When the postback event fires, call your populate method to recreate the controls in the new order or with the new information, depending on what event happened. Whenever I am building a control I like to think about how the control will behave if a postback occurs from some other control on the page. Your control should continue to look the same unless affected by that postback, and it should be able to be recreated without doing any data access, but from information it has collected on previous calls. I edited my answer and hopefully this is a little clearer now.
Child controls ought to be created in CreateChildControls, which is called in the "Init" phase of control life-cycle.
.NET Control Events and Page Life Cycle
[ "", "c#", ".net", "user-controls", "controls", "" ]
I am trying to set up some unit tests for an existing C++ project. Here's the setup: I have chosen Google Mock, which includes Google Test. I have added another project (called Tests) to the Visual Studio solution. The units to test are in another project called Main. The plan is to add each cpp file that I want to test to the Tests project. The Tests project has access to all header files from Main. I have added one cpp file to the Tests project, and it compiles, but comes up with linker errors. Most are because of a class derived from COleDateTime, called CTimeValue. The unit under test has methods with pass-by-value CTimeValue parameters and also declares some CTimeValue attributes. I want to test the UUT in isolation, and use mocks and fakes for all dependencies. I don't see how to do it with CTimeValue. It is used as a value, contains no virtual methods, but is still quite complex and would deserve a separate unit test. CTimeValue is only one of many classes that are like this in the project. How can I isolate the testing of classes that use these user-defined types? Cheers, Felix
Sometimes one cannot simply mock things. In that case, what you can do is write a comprehensive test suite for the class in question (CTimeValue) and make sure you run the tests for that class as a sub-suite of your other test.
Using a mock object, you only need to add the method signatures that your UUT uses, so maybe you can create a mock using Google Mock. Of course you will need a separate test suite for the CTimeValue class, and if you have that then it is probably better to link in the actual object. You need to decide if it is worth the effort to create an interface class to create the mock from.
Unit testing and mocking small, value-like classes in C++
[ "", "c++", "visual-studio", "unit-testing", "mocking", "googletest", "" ]
How can I rollback an UPDATE query in SQL server 2005? I need to do this in SQL, not through code.
```
begin transaction
-- execute SQL code here
rollback transaction
```

If you've already executed the query and want to roll it back, unfortunately your only real option is to restore a database backup. If the database is in the full recovery model and you have transaction log backups, then you should be able to restore the database to a specific point in time.
You need this tool and you can find the transaction and reverse it. [ApexSQL Log](http://www.apexsql.com/sql_tools_log.asp?gclid=COWEvOex3JkCFQMQswod4RgXXA)
How can I rollback an UPDATE query in SQL server 2005?
[ "", "sql", "sql-server-2005", "sql-update", "rollback", "" ]
I am trying to reduce the amount of memory my application uses. In my application, I use System.Reflection.Emit.EnumBuilder to create enumerations on the fly from information I receive at runtime. I only use the EnumBuilder to create an individual enumeration, and after that I have no further use for it. While using CLRProfiler, I noticed that the EnumBuilders are never destroyed. My application can make thousands of enumerations while running, so the amount of memory used by these EnumBuilders can add up. EnumBuilder does not define a Dispose method, so I cannot destroy it that way. Is it possible to somehow remove leftover EnumBuilders from memory?
I have noticed the same. It's a 'leak' in ModuleBuilder that hangs on to the bytes and strings IIRC. You can have a look in WinDbg with SOS loaded for better memory info.
`EnumBuilder` creates the enumeration in a dynamic assembly. According to [How to: Load and Unload Assemblies](http://msdn.microsoft.com/en-us/library/ms173101(VS.80).aspx) (C# Programming Guide), there's no way to unload an assembly without unloading all app domains that use it. In particular: "Even if the assembly goes out of scope, the actual assembly file will remain loaded until all application domains that contain it are unloaded."
Destroy the EnumBuilder After Creating an Enum?
[ "", "c#", ".net", "enums", "reflection.emit", "" ]
First, I'm aware of [this question](https://stackoverflow.com/questions/479308/sql-server-globally-unique-identifiers), and the suggestion (using GUID) doesn't apply in my situation. I want simple UIDs so that my users can easily communicate this information over the phone:

> Hello, I've got a problem with order 1584

as opposed to

> hello, I've got a problem with order 4daz33-d4gerz384867-8234878-14

I want those to be unique (database-wide) because I have a few different kinds of 'objects' ... there are order IDs, and delivery IDs, and billing IDs, and since there's no one-to-one relationship between those, I have no way to guess what kind of object an ID is referring to. With database-wide unique IDs, I can immediately tell what object my customer is referring to. My user can just input an ID in a search tool, and I save him the extra click to further refine what he is looking for. My current idea is to use identity columns with different seeds 1, 2, 3, etc., and an increment value of 100. This raises a few questions though:

* What if I eventually get more than 100 object types? Granted, I could use 1000 or 10000, but something that doesn't scale well "smells"
* Is there a possibility the seed is "lost" (during a replication, a database problem, etc.)?
* More generally, are there other issues I should be aware of?
* Is it possible to use a non-integer (I currently use bigints) as an identity column, so that I can prefix the ID with something representing the object type? (for example a varchar column)
* Would it be a good idea to use a "master table" containing only an identity column, and maybe the object type, so that I can just insert a row in it whenever I need a new ID? I feel like it might be a bit overkill, and I'm afraid it would complicate all my insert statements. Plus the fact that I won't be able to determine an object type without looking at the database
* Are there other clever ways to address my problem?
Why not use identities on all the tables, but any time you present it to the user, simply tack on a single char for the type? e.g. O1234 is an order, D123213 is a delivery, etc.? That way you don't have to engineer some crazy scheme...
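The prefix-letter scheme is easy to sketch at the application layer (the prefix letters and helper names below are made up for illustration; the database itself keeps plain per-table identities):

```python
# Hypothetical mapping of display prefixes to object types.
PREFIXES = {"O": "order", "D": "delivery", "B": "bill"}

def format_id(kind, numeric_id):
    # Invert the mapping to find the letter for this object type.
    prefix = {v: k for k, v in PREFIXES.items()}[kind]
    return "%s%d" % (prefix, numeric_id)

def parse_id(display_id):
    # First character names the type; the rest is the table identity.
    kind = PREFIXES[display_id[0].upper()]
    return kind, int(display_id[1:])

print(format_id("order", 1234))  # O1234
print(parse_id("D123213"))       # ('delivery', 123213)
```

Only the displayed string carries the type; a search tool can route "D123213" to the deliveries table without any cross-table uniqueness scheme in the database.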
Handle it at the user interface--add a prefix letter (or letters) onto the ID number when reporting it to the users. So o472 would be an order, b531 would be a bill, and so on. People are quite comfortable mixing letters and digits when giving "numbers" over the phone, and are more accurate than with straight digits.
Database-wide unique-yet-simple identifiers in SQL Server
[ "", "sql", "sql-server", "database", "database-design", "identity", "" ]
Example code: ``` int a[] = new int[]{0, 1, 2, 3}; int result = 0; for (int i : a) result += i; ``` Is the loop guaranteed to iterate across `a[0]`, `a[1]`, `a[2]`, `a[3]` in that order? I strongly believe the answer is yes, but [this page](http://java.sun.com/j2se/1.5.0/docs/guide/language/foreach.html) seems to not unambiguously state order. Got a solid reference?
According to [the JLS, The enhanced `for` statement](http://java.sun.com/docs/books/jls/third_edition/html/statements.html#259170), your for-loop is equivalent to ``` int[] array = a; for (int index = 0; index < a.length; index++) { int i = array[index]; result += i; } ``` "where `array` and `index` are compiler-generated identifiers that are distinct from any other identifiers (compiler-generated or otherwise) that are in scope at the point where the enhanced `for` statement occurs." (slightly paraphrasing the variable names here). So yes: the order is absolutely guaranteed.
See [section 14.14.2 of the Java Language Specification, 3rd edition](http://java.sun.com/docs/books/jls/third_edition/html/statements.html#24588). > If the type of Expression is a subtype > of Iterable, then let I be the type of > the expression Expression.iterator(). > The enhanced for statement is > equivalent to a basic for statement of > the form: > > ``` > for (I #i = Expression.iterator(); #i.hasNext(); ) { > VariableModifiersopt Type Identifier = #i.next(); > Statement > } > ``` > > Where #i is a compiler-generated > identifier that is distinct from any > other identifiers (compiler-generated > or otherwise) that are in scope (§6.3) > at the point where the enhanced for > statement occurs.
Is Java foreach iteration order over primitives precisely defined?
[ "", "java", "foreach", "iteration", "" ]
Is there some hard and fast rule about how big is too big for a SQL table? We are storing SCORM tracking data in a name/value pair format, and there could be anywhere from 4-12 rows per user per course. Down the road, is this going to be a bad thing, since there are hundreds of courses and thousands of users?
I personally have had tables in production with 50 million rows, and this is small compared with what I have heard. You might need to optimize your structure with partitioning, but until you test your system in your environment you shouldn't waste time doing that. What you described is pretty small, IMHO. I should add I was using SQL Server 2000 & 2005; each DBMS has its own sizing limitations.
The magic number is billions. Until you get to billions of rows of data, you're not talking about very much data at all. Do the math. 4-12 rows per user per course,... hundreds of courses and thousands of users? 400,000 to 1,200,000 rows. Let's assume 1000 bytes per row. That's 400MB to 1.2GB of data. You can buy 100GB drives for $299 at the Apple store. You can easily spend more than $299 of billable time sweating over details that don't much matter any more. Until you get to 1TB of data (1,000GB), you're not talking about much data at all.
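The arithmetic above, spelled out (using the answer's assumed figures of 100 courses, 1,000 users, and roughly 1,000 bytes per row):

```python
bytes_per_row = 1000  # assumed average row size

# 4-12 rows per user per course * 100 courses * 1000 users
rows_low = 4 * 100 * 1000
rows_high = 12 * 100 * 1000

print(rows_low, rows_high)                # 400000 1200000
print(rows_low * bytes_per_row / 10**9)   # 0.4 (GB)
print(rows_high * bytes_per_row / 10**9)  # 1.2 (GB)
```

Even at the high end, the whole table fits comfortably in memory on commodity hardware, which is the answer's point.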
How many rows of data is too many rows of data?
[ "", "sql", "size", "" ]
In Python the interface of an iterable is a subset of the [iterator interface](http://docs.python.org/library/stdtypes.html#iterator-types). This has the advantage that in many cases they can be treated in the same way. However, there is an important semantic difference between the two, since for an iterable `__iter__` returns a new iterator object and not just `self`. How can I test that an iterable is really an iterable and not an iterator? Conceptually I understand iterables to be collections, while an iterator only manages the iteration (i.e. keeps track of the position) but is not a collection itself. The difference is for example important when one wants to loop multiple times. If an iterator is given then the second loop will not work, since the iterator was already used up and directly raises `StopIteration`. It is tempting to test for a `next` method, but this seems dangerous and somehow wrong. Should I just check that the second loop was empty? Is there any way to do such a test in a more pythonic way? I know that this sounds like a classic case of LBYL against EAFP, so maybe I should just give up? Or am I missing something? **Edit:** S.Lott says in his answer below that this is primarily a problem of wanting to do multiple passes over the iterator, and that one should not do this in the first place. However, in my case the data is very large and depending on the situation has to be passed over multiple times for data processing (there is absolutely no way around this). The iterable is also provided by the user, and for situations where a single pass is enough it will work with an iterator (e.g. created by a generator for simplicity). But it would be nice to safeguard against the case where a user provides only an iterator when multiple passes are needed. **Edit 2:** Actually this is a very nice example for [Abstract Base Classes](http://www.python.org/dev/peps/pep-3119/#abcs-for-containers-and-iterators).
The `__iter__` methods in an iterator and an iterable have the same name but are semantically different! So `hasattr` is useless, but `isinstance` provides a clean solution.
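A sketch of the `isinstance` approach (in modern Python the ABCs live in `collections.abc`; in 2.6+ they were under `collections`):

```python
from collections.abc import Iterable, Iterator

def kind(obj):
    # Every iterator is also iterable (__iter__ returns self),
    # so the Iterator check must come first.
    if isinstance(obj, Iterator):
        return 'iterator'
    if isinstance(obj, Iterable):
        return 'iterable'
    raise TypeError('not iterable at all')

print(kind([1, 2, 3]))        # iterable -> safe to loop over twice
print(kind(iter([1, 2, 3])))  # iterator -> single pass only
```

Generators also register as iterators, so `kind(x for x in data)` returns `'iterator'`, which is exactly the case the question wants to guard against.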
``` 'iterator' if obj is iter(obj) else 'iterable' ```
> However, there is an important semantic difference between the two...

Not really semantic or important. They're both iterable -- they both work with a for statement.

> The difference is for example important when one wants to loop multiple times.

When does this ever come up? You'll have to be more specific. In the rare cases when you need to make two passes through an iterable collection, there are often better algorithms. For example, let's say you're processing a list. You can iterate through a list all you want. Why did you get tangled up with an iterator instead of the iterable? Okay, that didn't work. Okay, here's one. You're reading a file in two passes, and you need to know how to reset the iterable. In this case, it's a file, and `seek` is required; or a close and a reopen. That feels icky. You can `readlines` to get a list which allows two passes with no complexity. So that's not necessary. Wait, what if we have a file so big we can't read it all into memory? And, for obscure reasons, we can't seek, either. What then? Now, we're down to the nitty-gritty of two passes. On the first pass, we accumulated something. An index or a summary or something. An index has all the file's data. A summary, often, is a restructuring of the data. With a small change from "summary" to "restructure", we've preserved the file's data in the new structure. In both cases, we don't need the file -- we can use the index or the summary. All "two-pass" algorithms can be changed to one pass of the original iterator or iterable and a second pass of a different data structure. This is neither LBYL nor EAFP. This is algorithm design. You don't need to reset an iterator -- YAGNI.

---

**Edit**

Here's an example of an iterator/iterable issue. It's simply a poorly-designed algorithm.

```
it = iter(xrange(3))
for i in it: print i,  # prints 0 1 2
for i in it: print i,  # prints nothing
```

This is trivially fixed.

```
it = range(3)
for i in it: print i,  # prints 0 1 2
for i in it: print i,  # prints 0 1 2 again -- a list can be iterated repeatedly
```

The "multiple times in parallel" is trivially fixed. Write an API that *requires* an iterable. And when someone refuses to read the API documentation, or refuses to follow it after having read it, their stuff breaks. As it should. The "nice to safeguard against the case where a user provides only an iterator when multiple passes are needed" is an example of insane people writing code that breaks our simple API. If someone is insane enough to read most (but not all) of the API doc and provide an iterator when an iterable was *required*, you need to find this person and teach them (1) how to read all the API documentation and (2) how to follow the API documentation. The "safeguard" issue isn't very realistic. These crazy programmers are remarkably rare. And in the few cases when it does arise, *you know who they are* and can help them.

---

**Edit 2**

The "we have to read the same structure multiple times" algorithms are a fundamental problem. Do not do this.

```
for element in someBigIterable:
    function1( element )
for element in someBigIterable:
    function2( element )
...
```

Do this, instead.

```
for element in someBigIterable:
    function1( element )
    function2( element )
    ...
```

Or, consider something like this.

```
for element in someBigIterable:
    for f in ( function1, function2, function3, ... ):
        f( element )
```

In most cases, this kind of "pivot" of your algorithms results in a program that might be easier to optimize and might be a net improvement in performance.
How to tell the difference between an iterator and an iterable?
[ "", "python", "iterator", "" ]
Having at least one virtual method in a C++ class (or any of its parent classes) means that the class will have a virtual table, and every instance will have a virtual pointer. So the memory cost is quite clear. The most important is the memory cost on the instances (especially if the instances are small, for example if they are just meant to contain an integer: in this case having a virtual pointer in every instance might double the size of the instances. As for the memory space used up by the virtual tables, I guess it is usually negligible compared to the space used up by the actual method code. This brings me to my question: is there a measurable performance cost (i.e. speed impact) for making a method virtual? There will be a lookup in the virtual table at runtime, upon every method call, so if there are very frequent calls to this method, and if this method is very short, then there might be a measurable performance hit? I guess it depends on the platform, but has anyone run some benchmarks? The reason I am asking is that I came across a bug that happened to be due to a programmer forgetting to define a method virtual. This is not the first time I see this kind of mistake. And I thought: why do we *add* the virtual keyword when needed instead of *removing* the virtual keyword when we are absolutely sure that it is *not* needed? If the performance cost is low, I think I will simply recommend the following in my team: simply make *every* method virtual by default, including the destructor, in every class, and only remove it when you need to. Does that sound crazy to you?
I [ran some timings](https://web.archive.org/web/20210225014950/http://assemblyrequired.crashworks.org/how-slow-are-virtual-functions-really/) on a 3GHz in-order PowerPC processor. On that architecture, a virtual function call costs 7 nanoseconds longer than a direct (non-virtual) function call. So, not really worth worrying about the cost unless the function is something like a trivial Get()/Set() accessor, in which case anything other than inline is kind of wasteful. A 7ns overhead on a function that inlines to 0.5ns is severe; a 7ns overhead on a function that takes 500ms to execute is meaningless. The big cost of virtual functions isn't really the lookup of a function pointer in the vtable (that's usually just a single cycle), but that the indirect jump usually cannot be branch-predicted. This can cause a large pipeline bubble as the processor cannot fetch any instructions until the indirect jump (the call through the function pointer) has retired and a new instruction pointer computed. So, the cost of a virtual function call is much bigger than it might seem from looking at the assembly... but still only 7 nanoseconds.

**Edit:** Andrew, Not Sure, and others also raise the very good point that a virtual function call may cause an instruction cache miss: if you jump to a code address that is not in cache then the whole program comes to a dead halt while the instructions are fetched from main memory. This is *always* a significant stall: on Xenon, about 650 cycles (by my tests). However this isn't a problem specific to virtual functions, because even a direct function call will cause a miss if you jump to instructions that aren't in cache. What matters is whether the function has been run before recently (making it more likely to be in cache), and whether your architecture can predict static (not virtual) branches and fetch those instructions into cache ahead of time. My PPC does not, but maybe Intel's most recent hardware does. My timings control for the influence of icache misses on execution (deliberately, since I was trying to examine the CPU pipeline in isolation), so they discount that cost.
There is definitely measurable overhead when calling a virtual function - the call must use the vtable to resolve the address of the function for that type of object. The extra instructions are the least of your worries. Not only do vtables prevent many potential compiler optimizations (since the type is polymorphic, the compiler cannot resolve the call target at compile time), they can also thrash your I-Cache. Of course whether these penalties are significant or not depends on your application, how often those code paths are executed, and your inheritance patterns. In my opinion though, having everything as virtual by default is a blanket solution to a problem you could solve in other ways. Perhaps you could look at how classes are designed/documented/written. Generally the header for a class should make quite clear which functions can be overridden by derived classes and how they are called. Having programmers write this documentation is helpful in ensuring they are marked correctly as virtual. I would also say that declaring every function as virtual could lead to more bugs than just forgetting to mark something as virtual. If all functions are virtual, everything can be replaced by subclasses - public, protected, private - everything becomes fair game. By accident or intention, subclasses could then change the behavior of functions that then cause problems when used in the base implementation.
What is the performance cost of having a virtual method in a C++ class?
[ "", "c++", "performance", "virtual-functions", "" ]
I'm trying to serialize a very large `IEnumerable<MyObject>` using an `XmlSerializer` without keeping all the objects in memory. The `IEnumerable<MyObject>` is actually lazy. I'm looking for a streaming solution that will:

1. Take an object from the `IEnumerable<MyObject>`
2. Serialize it to the underlying stream using the standard serialization (*I don't want to handcraft the XML here!*)
3. Discard the in-memory data and move to the next

I'm trying with this code:

```
using (var writer = new StreamWriter(filePath))
{
    var xmlSerializer = new XmlSerializer(typeof(MyObject));
    foreach (var myObject in myObjectsIEnumerable)
    {
        xmlSerializer.Serialize(writer, myObject);
    }
}
```

but I'm getting multiple XML headers and I cannot specify a root tag `<MyObjects>`, so my XML is invalid. Any idea? Thanks
The `XmlWriter` class is a fast streaming API for XML generation. It is rather low-level, MSDN has an [article](https://msdn.microsoft.com/en-us/library/system.xml.xmlwriter(v=vs.110).aspx) on instantiating a validating XmlWriter using `XmlWriter.Create()`. Edit: link fixed. Here is sample code from the article: ``` async Task TestWriter(Stream stream) { XmlWriterSettings settings = new XmlWriterSettings(); settings.Async = true; using (XmlWriter writer = XmlWriter.Create(stream, settings)) { await writer.WriteStartElementAsync("pf", "root", "http://ns"); await writer.WriteStartElementAsync(null, "sub", null); await writer.WriteAttributeStringAsync(null, "att", null, "val"); await writer.WriteStringAsync("text"); await writer.WriteEndElementAsync(); await writer.WriteCommentAsync("cValue"); await writer.WriteCDataAsync("cdata value"); await writer.WriteEndElementAsync(); await writer.FlushAsync(); } } ```
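One way to combine `XmlWriter` with the question's `XmlSerializer` — so each item is still serialized by the standard machinery while the writer owns the single root element — is sketched below. This is untested here; `MyObject`, `filePath` and `myObjectsIEnumerable` are the question's names, and the empty `XmlSerializerNamespaces` entry is only there to suppress the per-item `xsi`/`xsd` namespace attributes:

```csharp
using (var xmlWriter = XmlWriter.Create(filePath, new XmlWriterSettings { Indent = true }))
{
    var serializer = new XmlSerializer(typeof(MyObject));
    var ns = new XmlSerializerNamespaces();
    ns.Add("", "");                               // suppress default xsi/xsd namespaces

    xmlWriter.WriteStartElement("MyObjects");     // the hand-written root tag
    foreach (var myObject in myObjectsIEnumerable)
    {
        // When the writer is already inside a document, XmlSerializer does not
        // emit another XML declaration, which avoids the repeated headers.
        serializer.Serialize(xmlWriter, myObject, ns);
    }
    xmlWriter.WriteEndElement();                  // </MyObjects>
}
```

Each item is written and flushed as it is produced, so the lazy enumerable is never materialized in full.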
Here's what I use: ``` using System; using System.Collections.Generic; using System.Xml; using System.Xml.Serialization; using System.Text; using System.IO; namespace Utils { public class XMLSerializer { public static Byte[] StringToUTF8ByteArray(String xmlString) { return new UTF8Encoding().GetBytes(xmlString); } public static String SerializeToXML<T>(T objectToSerialize) { StringBuilder sb = new StringBuilder(); XmlWriterSettings settings = new XmlWriterSettings {Encoding = Encoding.UTF8, Indent = true}; using (XmlWriter xmlWriter = XmlWriter.Create(sb, settings)) { if (xmlWriter != null) { new XmlSerializer(typeof(T)).Serialize(xmlWriter, objectToSerialize); } } return sb.ToString(); } public static void DeserializeFromXML<T>(string xmlString, out T deserializedObject) where T : class { XmlSerializer xs = new XmlSerializer(typeof (T)); using (MemoryStream memoryStream = new MemoryStream(StringToUTF8ByteArray(xmlString))) { deserializedObject = xs.Deserialize(memoryStream) as T; } } } } ``` Then just call: ``` string xml = Utils.SerializeToXML(myObjectsIEnumerable); ``` I haven't tried it with, for example, an `IEnumerable` that fetches objects one at a time remotely, or any other weird use cases, but it works perfectly for `List<T>` and other collections that are in memory. **EDIT**: Based on your comments in response to this, you could use `XmlDocument.LoadXml` to load the resulting XML string into an `XmlDocument`, save the first one to a file, and use that as your master XML file. For each item in the `IEnumerable`, use `LoadXml` again to create a new in-memory `XmlDocument`, grab the nodes you want, append them to the master document, and save it again, getting rid of the new one. After you're finished, there may be a way to wrap all of the nodes in your root tag. You could also use XSL and `XslCompiledTransform` to write another XML file with the objects properly wrapped in the root tag.
streaming XML serialization in .net
[ "", "c#", ".net", "xml", "serialization", "streaming", "" ]
I've got an assembly somewhere on the file system, e.g. "C:\temp\test.dll". In that assembly there's a ResourceDictionary, e.g. "abc.xaml". How can I get that ResourceDictionary? Maybe there is a way using Reflection? I didn't find a solution so far. Thanks in advance! Edit: Just wanted to add that I want to access the Resources in the Dictionary, e.g. a Style.
**Edit:** I found an even better solution which works with ResourceDictionaries: ``` Assembly.LoadFrom(@"C:\temp\test.dll"); ResourceDictionary rd = new ResourceDictionary(); rd.Source = new Uri("/test;component/myresource.xaml"); ``` --- Well, I couldn't get it to work with ResourceDictionaries, so I'm using good old Resource Files instead ;) For anyone interested, here is how I did it: ``` Assembly a = Assembly.LoadFile(@"C:\temp\test.dll"); ResourceManager rm = new ResourceManager("NameOfResource", a); object o = rm.GetObject("xyz"); ``` You can get "NameOfResource" with Reflector, as Ian suggested.
You actually need to write the Uri like this: ``` Assembly.LoadFrom(@"C:\temp\test.dll"); ResourceDictionary rd = new ResourceDictionary(); rd.Source = new Uri(@"pack://application:,,,/test;component/myresource.xaml"); ```
Load a ResourceDictionary from an assembly
[ "", "c#", "wpf", "xaml", "resourcedictionary", "" ]
I'm trying to get some results from UniProt, which is a protein database (details are not important). I'm trying to use some script that translates from one kind of ID to another. I was able to do this manually on the browser, but could not do it in Python. In <http://www.uniprot.org/faq/28> there are some sample scripts. I tried the Perl one and it seems to work, so the problem is my Python attempts. The (working) script is: ``` ## tool_example.pl ## use strict; use warnings; use LWP::UserAgent; my $base = 'http://www.uniprot.org'; my $tool = 'mapping'; my $params = { from => 'ACC', to => 'P_REFSEQ_AC', format => 'tab', query => 'P13368 P20806 Q9UM73 P97793 Q17192' }; my $agent = LWP::UserAgent->new; push @{$agent->requests_redirectable}, 'POST'; print STDERR "Submitting...\n"; my $response = $agent->post("$base/$tool/", $params); while (my $wait = $response->header('Retry-After')) { print STDERR "Waiting ($wait)...\n"; sleep $wait; print STDERR "Checking...\n"; $response = $agent->get($response->base); } $response->is_success ? print $response->content : die 'Failed, got ' . $response->status_line . ' for ' . $response->request->uri . "\n"; ``` My questions are: 1) How would you do that in Python? 2) Will I be able to massively "scale" that (i.e., use a lot of entries in the query field)?
**question #1:** This can be done using python's urllibs:

```
import urllib, urllib2
import time
import sys

query = ' '.join(sys.argv[1:])  # sys.argv[0] is the script name, so skip it

# encode params as a list of 2-tuples
params = ( ('from','ACC'), ('to', 'P_REFSEQ_AC'), ('format','tab'), ('query', query))
# url encode them
data = urllib.urlencode(params)
url = 'http://www.uniprot.org/mapping/'

# fetch the data
try:
    foo = urllib2.urlopen(url, data)
except urllib2.HTTPError, e:
    if e.code == 503:
        # blah blah get the value of the header...
        wait_time = int(e.hdrs.get('Retry-after', 0))
        print 'Sleeping %i seconds...' % (wait_time,)
        time.sleep(wait_time)
        foo = urllib2.urlopen(url, data)

# foo is a file-like object, do with it what you will.
foo.read()
```
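Since `urllib2` is Python 2 only, a Python 3 version of the same request might look like the sketch below. The URL and parameter names are taken from the answer above; the helper that builds the form body is pure, so it can be checked without touching the network:

```python
import time
import urllib.error
import urllib.parse
import urllib.request

def encode_mapping_request(ids, frm="ACC", to="P_REFSEQ_AC", fmt="tab"):
    """Build the POST body for UniProt's mapping tool from a list of IDs."""
    params = {"from": frm, "to": to, "format": fmt, "query": " ".join(ids)}
    return urllib.parse.urlencode(params).encode("ascii")

def run_mapping(ids):
    """POST the mapping request, honouring a 503 Retry-After once."""
    data = encode_mapping_request(ids)
    req = urllib.request.Request("http://www.uniprot.org/mapping/", data)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")
    except urllib.error.HTTPError as e:
        if e.code == 503:  # server asked us to wait
            time.sleep(int(e.headers.get("Retry-After", 0)))
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode("utf-8")
        raise

# Example (network required):
#     print(run_mapping(["P13368", "P20806"]))
```

For the scaling part of the question, the same shape works with a long space-separated ID list in `query`; batching very large lists into several requests is the usual approach.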
Let's assume that you are using Python 2.5. We can use [httplib](http://docs.python.org/library/httplib.html?highlight=httplib#module-httplib) to directly call the web site:

```
import httplib, urllib
querystring = {}
#Build the query string here from the following keys (query, format, columns, compress, limit, offset)
querystring["query"] = ""
querystring["format"] = "" # one of html | tab | fasta | gff | txt | xml | rdf | rss | list
querystring["columns"] = "" # the columns you want comma separated
querystring["compress"] = "" # yes or no
## These may be optional
querystring["limit"] = "" # I guess if you only want a few rows
querystring["offset"] = "" # bring on paging

##From the examples - query=organism:9606+AND+antigen&format=xml&compress=no
##Delete the following and replace with your query
querystring = {}
querystring["query"] = "organism:9606 AND antigen"
querystring["format"] = "xml" #make it human readable
querystring["compress"] = "no" #I don't want to have to unzip

conn = httplib.HTTPConnection("www.uniprot.org")
conn.request("GET", "/uniprot/?"+ urllib.urlencode(querystring))
r1 = conn.getresponse()
if r1.status == 200:
   data1 = r1.read()
   print data1 #or do something with it
```

You could then make a function around creating the query string and you should be away.
How can I talk to UniProt over HTTP in Python?
[ "", "python", "http", "user-agent", "bioinformatics", "" ]
I have a link that uses javascript to submit a form, like so: ``` <a href="javascript:document.some_form.submit()">some link</a> ``` However, this will point to a vendor-supplied application that of course only works in IE. Is there any way to make this link open up in IE if the user is using a different browser?
Without putting compiled code on the user's machine, more than likely, no. It would require you to specify to the OS a specific program to run, and that's going to violate the security restrictions that are (and should be) on most browsers in the market today. You might be able to create plug-ins for each of the major browser vendors which will intercept the link and open it in IE, but that becomes tedious, as there are different models for each browser. On top of that, you have to get your plugin installed on each machine, which users might not be interested in doing. The best option here is to inform the user the site must be browsed in IE, and if possible, work with the vendor to make it run in other browsers. Another possibility is that you might host both your app and the vendor app in a client side program which uses an embedded WebBrowser control, which is essentially IE.
No, there's not. It would be a severe security risk - a site exploiting an Internet Explorer bug would be able to infect a user on Mozilla Firefox in this manner. Just give a warning that it's an external site that will require IE (perhaps using user-agent sniffing to avoid displaying it if they're already in IE).
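If you go the user-agent sniffing route mentioned above, a rough sketch follows. The token checks are deliberately simplified (sniffing is famously brittle): `"MSIE "` covers IE 10 and earlier, `"Trident/"` covers IE 11:

```javascript
// Returns true for classic Internet Explorer user-agent strings.
function isInternetExplorer(ua) {
    return ua.indexOf("MSIE ") !== -1 ||   // IE 10 and earlier
           ua.indexOf("Trident/") !== -1;  // IE 11
}

// In the page, warn non-IE users before they follow the vendor link, e.g.:
// if (!isInternetExplorer(navigator.userAgent)) {
//     alert("This link opens a vendor application that requires Internet Explorer.");
// }
```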
Is there a way to make a javascript link open up in IE?
[ "", "javascript", "html", "internet-explorer", "" ]
I am going to start a game in about 3 weeks and I would really like the game to run at least on another platform (linux, MacOS) but my team thinks that's a lot of work. I am up for it but wanted to know what are the things I should watch out for that won't port to linux (apart from Windows specific APIs like DirectXsound)? I've been reading online and Windows "\_s" functions like `sprintf_s` appear to exist only on Windows; is this correct or are they implemented on linux also?
No, the \_s functions are NOT implemented in the standard gcc library. (At least, grepping the include files for 'sprintf\_s' turns up nothing at all.) It might be worth looking at cross platform libraries like [boost](http://www.boost.org/) and [apr](http://apr.apache.org/) to do some of the heavy lifting work. A sample of specific things to look for: * Input/Output (DirectX / SDL / OpenGL) * Win32/windows.h functionality (CreateThread, etc) * Using windows controls on the UI * Synchronization primitives (critical sections, events) * Filepaths (directory separators, root names) * Wide char implementations (16 bit on windows, 32bit on linux) * No MFC support on linux (CString, etc)
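On the `_s` point: a portable replacement for `sprintf_s` is `snprintf`, which is standard C99 and available with gcc on Linux (MSVC versions before 2015 spell it `_snprintf`, with slightly different truncation semantics). A minimal sketch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a score line into buf, truncating safely if buf is too small.
   Returns the number of characters that would have been written
   (excluding the terminating NUL), like snprintf itself. */
int format_score(char *buf, size_t bufsize, const char *name, int score)
{
    return snprintf(buf, bufsize, "%s: %d", name, score);
}
```

Checking the return value against the buffer size tells you whether truncation occurred, which is the main safety property `sprintf_s` was giving you.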
If I were you I would use some of the available Frameworks out there, that handle Platform independence. I wrote a 3D-Game as a hobby project with a friend of mine, the server being in Java and the clients running on Windows and Linux. We ended up using Ogre as 3D-Engine and OpenAL as Sound-Engine, both platform independent and available under LGPL. The only things I really had to write separately were the whole Socket-handling, reading the config from file system and the initialization of the System. Compared to the rest of the Program, that was almost nothing. The most time consuming will be to set up the entire project to compile under Windows and Linux (or Mac), especially if you're concentrating on one and only occasionally check the other for problems. If you have one in your team who checks regularly for these problems while they're being produced you won't have that much overhead from that as well. All in all compared to the programming of the game itself, adapting it to different platforms is almost no effort, if all frameworks used are well written, platform independent systems.
cross platform game development what to look for?
[ "", "c++", "linux", "macos", "" ]
I need to write a SQL query that adds one column from one database (DB1) to another column, and the sum is saved in that column in the second database (DB2), where the userIds are the same.

```
DB1
 TableA
  UserId
  People

DB2
 TableB
  Amount
  UserId
```

It would be something like this: DB2.TableB.Amount = DB2.TableB.Amount + DB1.TableA.People
Do you mean: ``` UPDATE b SET Amount = b.Amount + a.People FROM DB2.dbo.TableB b INNER JOIN DB1.dbo.TableA a ON a.UserId = b.UserId ``` dbo = owner of table, it can also be unspecified: DB1..TableA
``` INSERT INTO DB2.dbo.TableB SELECT COUNT(*), UserID FROM DB1.dbo.TableA GROUP BY UserID ```
how to write sql query? for ms sql server
[ "", "sql", "sql-server", "" ]
I've got a dictionary like: ``` { 'a': 6, 'b': 1, 'c': 2 } ``` I'd like to iterate over it *by value*, not by key. In other words: ``` (b, 1) (c, 2) (a, 6) ``` What's the most straightforward way?
```
sorted(dictionary.items(), key=lambda x: x[1])
```

for those of you who hate lambda :-)

```
import operator
sorted(dictionary.items(), key=operator.itemgetter(1))
```

However, the `operator` version requires CPython 2.5+
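Applied to the dictionary from the question (Python 3 syntax, where `items()` already returns a view):

```python
import operator

d = {'a': 6, 'b': 1, 'c': 2}

# Sort the (key, value) pairs by value, smallest first.
pairs = sorted(d.items(), key=operator.itemgetter(1))
for key, value in pairs:
    print((key, value))
# prints ('b', 1), then ('c', 2), then ('a', 6) — ordered by value, as in the question
```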
For non-Python 3 programs, you'll want to use iteritems to get the performance boost of generators, which yield values one at a time instead of returning all of them at once. ``` sorted(d.iteritems(), key=lambda x: x[1]) ``` For even larger dictionaries, we can go a step further and have the key function [be in C](http://svn.python.org/projects/python/trunk/Modules/operator.c) instead of Python as it is right now with the lambda. ``` import operator sorted(d.iteritems(), key=operator.itemgetter(1)) ``` Hooray!
How do I iterate over a Python dictionary, ordered by values?
[ "", "python", "" ]
I utilize the TcpClient class to implement a telnet connection. What I don't know is how to determine the end of the response. It seems that once the DataAvailable property is true, I read the data from the Stream; DataAvailable is then temporarily set to false while the buffer is being filled with another chunk of data, but my code assumes everything has been read and moves on. How do I ensure that the whole response was read? You might advise me how to change the code snippet below to make this work correctly. It works correctly when I step through the code because there is plenty of time for the whole response to be retrieved. Appreciate your help.

```
if (this.Stream.DataAvailable)
{
    readBuffer = new Byte[this.Client.ReceiveBufferSize];
    do
    {
        // read data
        bytes = this.Stream.Read(readBuffer, 0, readBuffer.Length);
        Thread.Sleep(10);
        responseData = String.Concat(responseData, System.Text.Encoding.ASCII.GetString(readBuffer, 0, bytes));
    } while (this.Stream.DataAvailable);
    allDataRead = true;
    break;
}
```
With TCP connections for applications like Telnet there is no such concept as the end of the transmission. You'll often find telnet clients that send each key press one at a time as the user enters them and it's up to the server to decide when to take action, normally when it gets a line feed. Other clients may buffer up a few client key presses and send them all at once. By the looks of your app you should be looking for line feeds to indicate the end of a transmission. ``` do { bytes = this.Stream.Read(readBuffer, 0, readBuffer.Length); responseData = String.Concat(responseData, System.Text.Encoding.ASCII.GetString (readBuffer, 0, bytes)); } while (this.Stream.DataAvailable && !responseData.Contains("\n")); ```
I don't think it's a good idea to use DataAvailable for this purpose. The end of your response is whatever your protocol says it is. It's better to use blocking calls like Read to read your stuff, and use whatever protocol you've established to determine to read more, and how much. It might be that not all data you want is returned, because Read doesn't return when all your data is read, but when some data is read, so you need to check on how much was read. If your protocol is Telnet, you'll likely want to respond to every single byte received, as with a terminal you want to echo it or process commands or what not. DataAvailable is perhaps suited to check if a Read would block when called (it will return with whatever is in the buffer) but it makes your loop less straightforward.
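Following that advice, a sketch of a blocking read loop that treats a line feed as the end of a response (member names are the question's; no `DataAvailable` polling involved — this is an illustration, not a drop-in fix):

```csharp
var buffer = new byte[this.Client.ReceiveBufferSize];
var response = new StringBuilder();

while (!response.ToString().Contains("\n"))
{
    // Read blocks until at least one byte arrives; 0 means the peer closed.
    int bytes = this.Stream.Read(buffer, 0, buffer.Length);
    if (bytes == 0)
        break;
    response.Append(Encoding.ASCII.GetString(buffer, 0, bytes));
}
responseData = response.ToString();
```

The loop keeps reading until the protocol's terminator appears, rather than stopping whenever the buffer momentarily drains.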
How to deal with delays on telnet connection programmatically?
[ "", "c#", "telnet", "latency", "tcpclient", "lag", "" ]
When using the SQL MIN() function, along with GROUP BY, will any additional columns (not the MIN column, or one of the GROUP BY columns) match the data in the matching MIN row? For example, given a table with department names, employee names, and salary: ``` SELECT MIN(e.salary), e.* FROM employee e GROUP BY department ``` Obviously I'll get two good columns, the minimum salary and the department. Will the employee name (and any other employee fields) be from the same row? Namely the row with the MIN(salary)? I know there could very possibly be two employees with the same (and lowest) salary, but all I'm concerned with (now) is getting all the information on the (or *a single*) cheapest employee. Would this select the cheapest salesman? ``` SELECT min(salary), e.* FROM employee e WHERE department = 'sales' ``` Essentially, can I be sure that the data returned along with the MIN() function will matches the (or *a single*) record with that minimum value? If the database matters, I'm working with MySql.
If you wanted to get the "cheapest" employee in each department you would have two choices off the top of my head: ``` SELECT E.* -- Don't actually use *, list out all of your columns FROM Employees E INNER JOIN ( SELECT department, MIN(salary) AS min_salary FROM Employees GROUP BY department ) AS SQ ON SQ.department = E.department AND SQ.min_salary = E.salary ``` Or you can use: ``` SELECT E.* FROM Employees E1 LEFT OUTER JOIN Employees E2 ON E2.department = E1.department AND E2.salary < E1.salary WHERE E2.employee_id IS NULL -- You can use any NOT NULL column here ``` The second statement works by effectively saying, show me all employees where you can't find another employee in the same department with a lower salary. In both cases, if two or more employees have equal salaries that are the minimum you will get them both (all).
```
SELECT e.*
FROM employee e
WHERE e.id = (
    SELECT id
    FROM employee ei
    WHERE ei.department = 'sales'
    ORDER BY ei.salary
    LIMIT 1
)
```

To get values for each department, use:

```
SELECT e.*
FROM department d
LEFT JOIN employee e
ON e.id = (
    SELECT id
    FROM employee ei
    WHERE ei.department = d.id
    ORDER BY ei.salary
    LIMIT 1
)
```

To get values only for those departments that have employees, use:

```
SELECT e.*
FROM (
    SELECT DISTINCT eo.department
    FROM employee eo
) d
JOIN employee e
ON e.id = (
    SELECT id
    FROM employee ei
    WHERE ei.department = d.department
    ORDER BY ei.salary
    LIMIT 1
)
```

Of course, having an index on `(department, salary)` will greatly improve all three queries.
Extra Fields with SQL MIN() & GROUP BY
[ "", "sql", "mysql", "aggregate-functions", "" ]
[HTML can be used in Java swing GUI components](http://java.sun.com/docs/books/tutorial/uiswing/components/html.html), like `JTextBox`, `JButton`, ... etc. Is there an equivalent to that in .NET (C#) WinForms? Or at least some simple way of making a single word **bold** inside a textbox without using another GUI component?
No, I don't think you can easily do that with a normal TextBox or Button. There are third party controls that might do that. WinForms is not a very extensible GUI framework at heart. If you are looking for that kind of capability, WPF is a great thing. WPF supports control composition and styling almost everywhere.
For .Net you'll need to either use a RichTextbox control (and rtf formatting), embed a WebBrowser control, or inherit your own OwnerDrawn control from textbox. I would be surprised if there aren't third-party controls out there that do that last option for you, but there's nothing "official" for this and you'll have to evaluate them yourself.
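For the "single bold word" case specifically, a WinForms `RichTextBox` can do it with `Select` and `SelectionFont` — a sketch (the control name and text are hypothetical):

```csharp
// Make the word "bold" bold inside a RichTextBox.
richTextBox1.Text = "Make this word bold please";
int start = richTextBox1.Text.IndexOf("bold");
richTextBox1.Select(start, "bold".Length);                          // select the word
richTextBox1.SelectionFont = new Font(richTextBox1.Font, FontStyle.Bold);
richTextBox1.Select(0, 0);                                          // clear the selection
```

This avoids hand-writing RTF for simple formatting, though for anything elaborate the RTF route or an embedded WebBrowser is still the usual answer.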
Do .NET GUI components support HTML (like Java swing does)?
[ "", "java", ".net", "html", "winforms", "swing", "" ]
I thought this might be a fast way to remove the contents of a very large table (3000 rows):

```
$jq("tbody", myTable).remove();
```

But it's taking around five seconds to complete in Firefox. Am I doing something dumb (aside from trying to load 3000 rows into a browser)? Is there a faster way to do it?
``` $("#your-table-id").empty(); ``` That's as fast as you get.
It's better to avoid any kind of loop; just remove all elements directly like this:

```
$("#mytable > tbody").html("");
```
jquery - fastest way to remove all rows from a very large table
[ "", "javascript", "jquery", "dom", "" ]
I have a web app (ASP.NET 2.0 C#) where I get data from an Oracle database into a gridview. I had asked in a previous question how I could turn that data into links that would, for example, take me to a details page. This was the answer I got:

```
<asp:HyperLinkField DataNavigateUrlFields="ID" DataNavigateUrlFormatString="DetailPage.aspx?id={0}" DataTextField="CategoryName" NavigateUrl="DetailPage.aspx" />
```

Doing this makes the IDs links, and once I click one, the ID is in the URL of the next page. My question is, how am I supposed to use this data from the URL to actually display the information about that ID? Do I have to do something to my code behind? Thank you
On the page load of the DetailPage use the QueryString to get the id

```
if(!IsPostBack)
{
    int intMyId;
    if (int.TryParse(Request.QueryString["id"], out intMyId))
    {
        //Do something with intMyId
    }
    else
    {
        //Record and show error message
    }
}
```

NOTE: You will want to verify that the id is a valid id because an attacker could put whatever they want in there. Also verify that the user can view the id.
Yes in your code behind you'll need to grab the id from the query string, send it to the database, and wire the return value up to the display elements. You might be able to achieve this using the DataSource if they can pull parameters from a query string.
Using HyperLinkFields to display details
[ "", "c#", "asp.net", "url", "gridview", "hyperlink", "" ]
I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?
It depends on what you mean by "dropped". For TCP sockets, if the other end closes the connection either through close() or the process terminating, you'll find out by reading an end of file, or getting a read error, usually the errno being set to whatever 'connection reset by peer' is by your operating system. For python, you'll read a zero length string, or a socket.error will be thrown when you try to read or write from the socket.
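A small self-contained illustration using `socket.socketpair()` — the zero-length string from `recv()` is the end-of-file signal described above:

```python
import socket

def read_until_closed(sock):
    """Read everything until the peer closes; b'' from recv() means EOF."""
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:          # zero-length read: connection closed by peer
            break
        chunks.append(chunk)
    return b"".join(chunks)

if __name__ == "__main__":
    a, b = socket.socketpair()
    b.sendall(b"hello")
    b.close()                  # the peer "drops" the connection
    print(read_until_closed(a))
```

A reset connection raises `socket.error` from `recv()`/`send()` instead, so real code wraps the loop in a try/except as well.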
Short answer:

> use a non-blocking recv(), or a blocking recv() / select() with a very
> short timeout.

Long answer:

The way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors. TCP distinguishes between 3 forms of "dropping" a connection: timeout, reset, close. Of these, the timeout cannot really be detected, TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after. Also remember that using shutdown() either you or your peer (the other end of the connection) may close only the incoming byte stream, and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running. So strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed. Even if the connection was "dropped", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv(). Checking if the connection was dropped is like asking "what will I receive after reading all data that is currently buffered?" To find that out, you just have to read all data that is currently buffered. I can see how "reading all buffered data", to get to the end of it, might be a problem for some people who still think of recv() as a blocking function. With a blocking recv(), "checking" for a read when the buffer is already empty will block, which defeats the purpose of "checking". In my opinion any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.
What you can do is:

* set the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full
* stick to blocking mode but set a very short socket timeout. This will allow you to "ping" or "check" the socket with recv(), pretty much what you want to do
* use the select() call or the asyncore module with a very short timeout. Error reporting is still system-specific.

For the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection "dropped" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel. I guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:

* receive a proper response on the same socket for the exact message that you sent. Basically you are using the higher level protocol to provide confirmation.
* perform a successful shutdown() and close() on the socket

The python socket howto says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send() and if it returns 0 you can no longer send data on that socket. But if it returns non-zero, you have already sent something, good luck with that :) Also, I have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB was not what you meant.
How to tell if a connection is dead in python
[ "", "python", "sockets", "" ]
Overloading the comparison operator, how do I compare whether the two variables point to the same object (i.e. not value)?

```
public static bool operator ==(Landscape a, Landscape b)
{
    return a.Width == b.Width && a.Height == b.Height;
}

public static bool operator !=(Landscape a, Landscape b)
{
    return !(a.Width == b.Width && a.Height == b.Height);
}
```
Use the Object.ReferenceEquals static method. Of course, in order for the == and != method to retain their full functionality, you should also be overriding Equals and GetHashCode so that they return a consistent set of responses to callers.
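Putting that together with the question's class, a sketch of a reference-aware `==` (the null checks use `ReferenceEquals` precisely because writing `a == null` inside the overload would recurse into it):

```csharp
public static bool operator ==(Landscape a, Landscape b)
{
    if (Object.ReferenceEquals(a, b))
        return true;                       // same instance, or both null
    if (Object.ReferenceEquals(a, null) || Object.ReferenceEquals(b, null))
        return false;                      // exactly one side is null
    return a.Width == b.Width && a.Height == b.Height;
}

public static bool operator !=(Landscape a, Landscape b)
{
    return !(a == b);
}
```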
Try [`Object.ReferenceEquals(a, b);`](http://msdn.microsoft.com/en-us/library/system.object.referenceequals.aspx) — `ReferenceEquals` is a static method, so it is called on `Object`, not on an instance.
After overloading the operator==, how to compare if two variables point at the same object?
[ "", "c#", "operators", "operator-overloading", "equals-operator", "" ]
How do I sort a List<List<string>> according to the number of fields? The list has a structure like

```
a|b|c|d|eeee|rere|ewew|
ewqewq|ewew|
ewew|ewewewewew|
```

By sort I'd like to order them according to the number of blocks (asc/desc).

**EDIT** The `List<List<string>>` is "items" and I access each list by items[0]; each of these items[xx] is a list, which means that I want to sort the array to

```
a|b|c|d|eeee|rere|ewew|
a|b|c|d|eeee|rere
a|b|c|d|eeee
```
If you don't need to sort "in place" you can use LINQ to give you a new, sorted list.

```
var sorted = oldList.OrderBy( l => l.Count );
```

Otherwise you'll need to write your own Comparer that takes two lists and returns the ordering by their size.

```
public class CountComparer : IComparer<List<string>>
{
    #region IComparer<List<string>> Members

    public int Compare( List<string> x, List<string> y )
    {
        return x.Count.CompareTo( y.Count );
    }

    #endregion
}

oldList.Sort( new CountComparer() );
```

Or, as @Josh points out, you can do this with a lambda expression.

```
oldList.Sort( (a,b) => a.Count.CompareTo( b.Count ) );
```

IMO the latter works well if the comparison is relatively simple or used once, but the actual class may be preferable as the comparison gets more complex or if you need to repeat it in multiple places.
```
List<List<string>> list; // filled elsewhere
list.Sort((x, y) => x.Count.CompareTo(y.Count));
```
How to sort List<List<string>> according to List<string> number of fields?
[ "", "c#", "list", "sorting", "" ]
I found this in some production login code I was looking at recently...

```
HttpContext.Current.Trace.Write(query + ": " + username + ", " + password);
```

...where query is a short SQL query to grab matching users. Does this have any sort of performance impact? I assume it's very small. Also, what is the purpose of this exact type of trace, using the HTTP Context? Where does this data get traced to? Thanks in advance!
Yes, it will have a performance impact whenever the TRACE conditional compilation constant is defined during the build. Doing anything has some type of impact :) As to whether or not this has a significant impact on an application: it's highly unlikely, as Trace is designed to be run and is run in many production applications. Only an abuse of the feature should lead to a noticeable performance difference. But as always, don't trust me, trust the profiler.
Trace messages can go to a lot of different places. You can add (or remove) TraceListeners for the Console, VisualStudio debug window, Files, a database table, or the Event Log to name a few. You can even build your own. Also, you can configure Trace to not do anything when compiled for Release. Thus, the performance impact of using Trace can vary wildly, all the way from effectively zero to completely bogging down your app, depending on the performance of the various active listeners. Most listeners, though, have about the impact you'd expect; it takes about so much work to write to a file, or database, or the console, and Trace doesn't add that much overhead relative to those I/O-bound activities. --- Performance impact aside, I'm staring in **absolute horror** at the idea of tracing password values. That's something you definitely must NOT do.
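As for where `HttpContext.Current.Trace` output specifically goes: that is ASP.NET page tracing, which is toggled in web.config rather than through TraceListeners — a sketch of the relevant section:

```xml
<configuration>
  <system.web>
    <!-- enabled="false" turns page tracing off entirely;
         pageOutput="false" keeps results in trace.axd instead of on the page -->
    <trace enabled="false" pageOutput="false" requestLimit="10" localOnly="true" />
  </system.web>
</configuration>
```

With tracing enabled, those `Trace.Write` calls (query, username, password and all) show up in trace.axd or at the bottom of the rendered page, which is one more reason never to trace credentials.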
What is the performance impact of tracing in C# and ASP.NET?
[ "", "c#", "asp.net", "performance", "httpcontext", "trace", "" ]
I've a shell window constructed of a Header, Main Content and Footer. The main content is a tab control. According to the design, one of the tab items' content (a user control) needs to expand a little bit over the footer. At first I thought about implementing this with a negative Margin, but the footer, no matter what I did, always got drawn on top of my content. For example consider the following xaml:

```
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="33*"/>
        <RowDefinition Height="33*"/>
        <RowDefinition Height="33*"/>
    </Grid.RowDefinitions>
    <StackPanel Background="Blue" Grid.Row="0"/> <!-- Header -->
    <StackPanel Background="Red" Grid.Row="2"/> <!-- Footer -->
    <TabControl Grid.Row="1" > <!-- Content -->
        <TabItem>
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="33*"/>
                    <ColumnDefinition Width="33*"/>
                    <ColumnDefinition Width="33*"/>
                </Grid.ColumnDefinitions>
                <StackPanel Grid.Column="1" Background="Yellow" >
                    <Button Width="100" Height="50" Content="Text" />
                </StackPanel>
            </Grid>
        </TabItem>
    </TabControl>
</Grid>
```

What I would want to achieve is making the Yellow StackPanel reach the bottom of the screen somehow, overlapping on top of the red footer. Hope it is understandable. Thanks Ariel
The problem is that you want the stackpanel contained within the tab control, but you want it to extend beyond the bottom of the tab control. This isn't supported by the tab control.
Try this code sample: ``` <Grid> <Grid.RowDefinitions> <RowDefinition Height="33*"/> <RowDefinition Height="33*"/> <RowDefinition Height="33*"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="33*"/> <ColumnDefinition Width="33*"/> <ColumnDefinition Width="33*"/> </Grid.ColumnDefinitions> <StackPanel Grid.ColumnSpan="3" Background="Blue" Grid.Row="0"/> <!-- Header --> <StackPanel Grid.ColumnSpan="3" Background="Red" Grid.Row="2"/> <!-- Footer --> <TabControl Grid.ColumnSpan="3" Grid.Row="1"> <!-- Content --> <TabItem> <Grid> <Button Width="100" Grid.Column="1" Height="50" Content="Text" /> </Grid> </TabItem> </TabControl> <StackPanel Grid.Column="1" Grid.Row="1" Grid.RowSpan="2" Background="Yellow" /> </Grid> ```
WPF problem: Need to draw a control on top of another
[ "", "c#", "wpf", "xaml", "layout", "" ]
I have a need to be able to have a super class execute callbacks defined by a class that inherits from it. I am relatively new to C++ and from what I can tell the subject of member function pointers is a very murky area. I have seen answers to questions and random blog posts that discuss all sorts of things, but I am not sure if any of them specifically deal with my question here. Here is a simple chunk of code that illustrates what I am trying to do. The example might not make a lot of sense, but it accurately resembles the code I am trying to write. ``` #include <iostream> using namespace std; class A { protected: void doSomething(void (A::*someCallback)(int a)) { (*this.*someCallback)(1234); } }; class B : public A { public: void runDoIt() { doSomething(&B::doIt); } void runDoSomethingElse() { doSomething(&B::doSomethingElse); } protected: void doIt(int foo) { cout << "Do It! [" << foo << "]\n"; } void doSomethingElse(int foo) { cout << "Do Something Else! [" << foo << "]\n"; } }; int main(int argc, char *argv[]) { B b; b.runDoIt(); b.runDoSomethingElse(); } ```
If you can use the Boost libraries, I would suggest you use boost::function for the task at hand. ``` class A { public: void doSomething( boost::function< void ( int ) > callback ) { callback( 5 ); } }; ``` Then any inheriting (or external) class can use boost::bind to make a call: ``` class B { public: void my_method( int a ); }; void test() { B b; A a; a.doSomething( boost::bind( &B::my_method, &b, _1 ) ); }; ``` I have not checked the exact syntax and I have typed it from the top of my head, but that is at least close to the proper code.
The problem is that a member function of `B` isn't a member function of `A`, even if `B` derives from `A`. If you have a `void (A::*)()`, you can invoke it on any `A *`, regardless of the actual derived type of the object pointed at. (The same principle would apply to an `A &` too, of course.) Assume `B` derives from `A`, and `C` derives from `A`; were it possible to consider a `void (B::*)()` (say) as a `void (A::*)()`, one would be able to do something like this: ``` A *c=new C; A *b=new B; void (A::*f)()=&B::fn;//fn is not defined in A (c->*f)(); ``` And the member function of `B` would be called on an object of type `C`. The results would be unpredictable at best. Based on the example code, and assuming non-use of something like boost, I'd be inclined to structure the callback as an object: ``` class Callback { public: virtual ~Callback() { } virtual void Do(int a)=0; }; ``` Then the function that calls the callback takes one of these objects rather than a plain function pointer: ``` class A { protected: void doSomething(Callback *c) { c->Do(1234); } }; ``` You could then have one callback per derived function you are interested in calling. For doIt, for example: ``` class B:public A { public: void runDoIt() { DoItCallback cb(this); this->doSomething(&cb); } protected: void doIt(int foo) { // whatever } private: class DoItCallback:public Callback { public: DoItCallback(B *b):b_(b) {} void Do(int a) { b_->doIt(a); } private: B *b_; }; }; ``` An obvious way of cutting back on the boilerplate would be to put the member function pointer into the callback, since the derived callback is free to deal with objects of a specific type. This would make the callback a bit more generic, in that when called back it would invoke an arbitrary member function on an object of type B: ``` class BCallback:public Callback { public: BCallback(B *obj,void (B::*fn)(int)):obj_(obj),fn_(fn) {} void Do(int a) { (obj_->*fn_)(a); } private: B *obj_; void (B::*fn_)(int); }; ``` This would make doIt like this: ``` void B::runDoIt() { BCallback cb(this,&B::doIt); this->doSomething(&cb); } ``` This could potentially be "improved", though not all readers may see it quite in that way, by templating it: ``` template<class T> class GenericCallback:public Callback { public: GenericCallback(T *obj,void (T::*fn)(int)):obj_(obj),fn_(fn) {} void Do(int a) { (obj_->*fn_)(a); } private: T *obj_; void (T::*fn_)(int); }; ``` Using this, the runDoIt function above could become: ``` void B::runDoIt() { GenericCallback<B> cb(this,&B::doIt); this->doSomething(&cb); } ``` (The generic callback could also be templated on the member function pointer itself, though this is unlikely to provide any practical advantage in most cases. It's just more typing.) I've found structuring things this way to turn out well, as it doesn't require inheritance in the code being called back. It therefore imposes few restrictions on that code, which is to my mind always a good thing, and I've found that to outweigh the verbosity that is difficult to eliminate entirely. It's impossible to tell from the example whether this approach would actually suit, though...
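The callback-object pattern above is not C++-specific; in a language with first-class bound methods the same wiring collapses to almost nothing, which can make the intent clearer. A sketch of the A/B example in Python (the names mirror the question, not any real API):

```python
class A:
    def do_something(self, callback):
        # any callable taking one int will do; a bound method carries its object along
        callback(1234)

class B(A):
    def run_do_it(self):
        self.do_something(self.do_it)   # self.do_it is object + function in one value

    def do_it(self, foo):
        print(f"Do It! [{foo}]")

B().run_do_it()  # prints: Do It! [1234]
```

A bound method here plays the role of the (object pointer, member function pointer) pair that `GenericCallback` packages by hand.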
C++ Pointers to Member Functions Inheritance
[ "", "c++", "function", "pointers", "callback", "" ]
I have a development version of PHP on Apache. I moved my site to production and got these weird notices on the website that I don't get on the development version. How can I enable these notices on my development version of the website so that I can fix them?
In your php.ini file, change the line that says error\_reporting to E\_ALL, i.e.: ``` error_reporting = E_ALL ```
If you have access to your php.ini, then Björn's answer is the way to go. However, if you don't, or if you want to change the error level for a particular script / project, do this at the beginning of your code: ``` ini_set('display_errors', 1); // Enable error reporting for NOTICES error_reporting(E_NOTICE); ``` You can see which levels are available for error\_reporting here: <https://www.php.net/manual/en/function.error-reporting.php>. It's always good practice not to show any errors on production environments, but to log any weird behavior and send it by mail to the administrator. NOTICES should only be enabled on development environments.
How to enable notices on my development server
[ "", "php", "error-handling", "" ]
When checking if an event handler is null, is this done on a per-thread basis? Ensuring someone is listening to the event is done like this: ``` EventSeven += new DivBySevenHandler(dbsl.ShowOnScreen); ``` If I add code following this pattern above where I check for null, then why would I need a null check ([code taken from this site](http://www.codeproject.com/KB/cs/csevents01.aspx)). What am I missing? Also, what's the rule with events and GC?
It's really not clear what you mean I'm afraid, but if there's the possibility of the delegate being null, you need to check that separately on each thread. Typically you'd do: ``` public void OnSeven() { DivBySevenHandler handler = EventSeven; if (handler != null) { handler(...); } } ``` This ensures that even if `EventSeven` changes during the course of `OnSeven()` you won't get a `NullReferenceException`. But you're right that you don't need the null check if you've definitely got a subscribed handler. This can easily be done in C# 2 with a "no-op" handler: ``` public event DivBySevenHandler EventSeven = delegate {}; ``` On the other hand, you *might* want some sort of locking just to make sure that you've got the "latest" set of handlers, if you might get subscriptions from various threads. I have an [example in my threading tutorial](http://jonskeet.uk/csharp/threads/lockchoice.shtml) which can help - although usually I'd recommend trying to avoid requiring it. In terms of garbage collection, the event *publisher* ends up with a reference to the event *subscriber* (i.e. the target of the handler). This is only a problem if the publisher is meant to live longer than the subscriber.
The problem is that if nobody subscribes to the event, it is null. And you can't invoke against a null. Three approaches leap to mind: * check for null (see below) * add a "do nothing" handler: `public event EventHandler MyEvent = delegate {};` * use an extension method (see below) When checking for null, to be thread-safe, you must *in theory* capture the delegate reference first (in case it changes between the check and the invoke): ``` protected virtual void OnMyEvent() { EventHandler handler = MyEvent; if(handler != null) handler(this, EventArgs.Empty); } ``` Extension methods have the unusual property that they are callable on null instances... ``` public static void SafeInvoke(this EventHandler handler, object sender) { if (handler != null) handler(sender, EventArgs.Empty); } public static void SafeInvoke<T>(this EventHandler<T> handler, object sender, T args) where T : EventArgs { if (handler != null) handler(sender, args); } ``` then you can call: ``` MyEvent.SafeInvoke(this); ``` and it is both null-safe (via the check) and thread-safe (by reading the reference once only).
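The capture-then-test-then-invoke dance applies in any language where another thread can clear the handler between the check and the call. A minimal Python rendering of the pattern (class and field names are made up for illustration):

```python
class Publisher:
    def __init__(self):
        self.handler = None   # may be set or cleared from another thread

    def on_event(self):
        h = self.handler      # 1. capture the reference once
        if h is not None:     # 2. test the captured copy, not the field
            h("payload")      # 3. invoke; safe even if self.handler changed meanwhile

p = Publisher()
received = []
p.handler = received.append   # "subscribe"
p.on_event()
print(received)  # ['payload']
```

Testing `self.handler` directly and then calling it would leave the same window that the C# `NullReferenceException` falls through.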
Use of null check in event handler
[ "", "c#", "" ]
If I have a dictionary like: ``` {'a': 1, 'b': 2, 'c': 3} ``` How can I convert it to this? ``` [('a', 1), ('b', 2), ('c', 3)] ``` And how can I convert it to this? ``` [(1, 'a'), (2, 'b'), (3, 'c')] ```
``` >>> d = { 'a': 1, 'b': 2, 'c': 3 } >>> list(d.items()) [('a', 1), ('c', 3), ('b', 2)] ``` [For Python 3.6 and later](https://stackoverflow.com/a/39980744/353337), the order of the list is what you would expect. In Python 2, you don't need `list`.
since no one else did, I'll add py3k versions: ``` >>> d = { 'a': 1, 'b': 2, 'c': 3 } >>> list(d.items()) [('a', 1), ('c', 3), ('b', 2)] >>> [(v, k) for k, v in d.items()] [(1, 'a'), (3, 'c'), (2, 'b')] ```
How can I convert a dictionary into a list of tuples?
[ "", "python", "list", "dictionary", "" ]
I have a class with numerous members of various types. I want to iterate over all the type-A members and run a specific function (A.doSomething()) on each. This doesn't even compile; the error is "The conversion from Field to XPathDataElement is illegal": ``` Field[] fields = this.getClass().getDeclaredFields(); StringBuilder sb = new StringBuilder(); XPathDataElement tmp; for (Field field : fields) { if (field.getType().getName().equals(XPathDataElement.class.getName())) { tmp = (XPathDataElement) field; // Doesn't compile sb.append(field.getName() + ":"); tmp.update(); } } ``` Thanks!
It's hard to debug your code when you don't say what's wrong with it. Two things I can see: 1. There's no need to compare strings to decide if the field's type is the right class. ``` if (field.getType().equals(XPathDataElement.class)) ``` should work. **Edit:** Steve Reed [points out](https://stackoverflow.com/questions/721594/use-reflection-to-iterate-over-class-members/721657#721657) that you don't necessarily need it to be exactly `XPathDataElement`; a subclass will work just as well. To check if the field can be treated as an `XPathDataElement`, you should use [`Class.isAssignableFrom(Class)`](http://java.sun.com/javase/6/docs/api/java/lang/Class.html#isAssignableFrom(java.lang.Class)). ``` if (XPathDataElement.class.isAssignableFrom(field.getType())) ``` would be the code. 2. I guess your real question is how to get the value of a field reflectively? If so, then [`Field.get(Object)`](http://java.sun.com/javase/6/docs/api/java/lang/reflect/Field.html#get(java.lang.Object)) is what you want. The object that you pass to `get()` is the one whose field you want to retrieve; if you're operating on `this` (which is a *strong* code smell), then your code would be ``` XPathDataElement tmp = (XPathDataElement) field.get(this); ```
I strongly suggest avoiding reflection unless you really need it. Just write the code out: ``` this.x.doSomething(); this.y.doSomething(); this.z.doSomething(); ``` Or if you like: ``` for (A a : new A[] { this.x, this.y, this.z }) { a.doSomething(); } ```
Use reflection to iterate over class members
[ "", "java", "reflection", "" ]
1) Can you recommend a PHP accelerator for PHP 5.2.6? 2) Do you know of any recent comparison/review of these modules (Alternative PHP Cache, eAccelerator, XCache, Zend Optimizer, Zend Platform, ionCube PHP Accelerator, Turck MMCache, NuSphere PhpExpress)?
[**APC**](http://www.php.net/manual/en/book.apc.php) — standard choice, included in PECL, comes prepackaged in most Linux distros, to be bundled in by default in PHP6. As a bonus it can serve as [data cache](http://www.php.net/manual/en/function.apc-store.php) (something like local memcache). eAccelerator was popular, for some time it was [the fastest](http://2bits.com/articles/benchmarking-drupal-with-php-op-code-caches-apc-eaccelerator-and-xcache-compared.html) bytecode cache. But the difference in speed is not enough to justify choosing it over APC. Turck MMcache — dead. eAccelerator was forked from it. ionCube — dead.
[APC](http://www.php.net/apc) is pretty much the standard choice. It's scheduled to be included in PHP 6 core. Unlike most of the other candidates, it's stable and it's free.
php accelerator review
[ "", "php", "caching", "performance", "accelerator", "" ]
Consider the following C# code: ``` double result1 = 1.0 + 1.1 + 1.2; double result2 = 1.2 + 1.0 + 1.1; if (result1 == result2) { ... } ``` result1 should always equal result2, right? The thing is, it doesn't. result1 is 3.3 and result2 is 3.3000000000000003. The only difference is the order of the constants. I know that doubles are implemented in such a way that rounding issues can occur. I'm aware that I can use decimals instead if I need absolute precision, or that I can use Math.Round() in my if statement. I'm just a nerd who wants to understand what the C# compiler is doing. Can anyone tell me? **Edit:** Thanks to everyone who's so far suggested reading up on floating point arithmetic and/or talked about the inherent inaccuracy of how the CPU handles doubles. But I feel the main thrust of my question is still unanswered, which is my fault for not phrasing it correctly. Let me put it like this: Breaking down the above code, I would expect the following operations to be happening: ``` double r1 = 1.1 + 1.2; double r2 = 1.0 + r1; double r3 = 1.0 + 1.1; double r4 = 1.2 + r3; ``` Let's assume that each of the above additions had a rounding error (numbered e1..e4). So r1 contains rounding error e1, r2 includes rounding errors e1 + e2, r3 contains e3 and r4 contains e3 + e4. Now, I don't know exactly how the rounding errors happen, but I would have expected e1+e2 to equal e3+e4. Clearly it doesn't, but that seems somehow wrong to me. Another thing is that when I run the above code, I don't get any rounding errors. That's what makes me think it's the C# compiler that's doing something weird rather than the CPU. I know I'm asking a lot and maybe the best answer anyone can give is to go and do a PhD in CPU design, but I just thought I'd ask. **Edit 2:** Looking at the IL from my original code sample, it's clear that it's the compiler, not the CPU, that's doing this: ``` .method private hidebysig static void Main(string[] args) cil managed { .entrypoint .maxstack 1 .locals init ( [0] float64 result1, [1] float64 result2) L_0000: nop L_0001: ldc.r8 3.3 L_000a: stloc.0 L_000b: ldc.r8 3.3000000000000003 L_0014: stloc.1 L_0015: ret } ``` The compiler is adding up the numbers for me!
> I would have expected e1+e2 to equal e3+e4. That's not entirely unlike expecting ``` floor( 5/3 ) + floor( 2/3 + 1 ) ``` to equal ``` floor( 5/3 + 2/3 ) + floor( 1 ) ``` except you're multiplying by 2^53 before taking the floor. Using 12-bit precision floating point and truncation with your values: ``` 1.0 = 1.00000000000 1.1 = 1.00011001100 1.2 = 1.00110011001 1.0 + 1.1 = 10.00011001100 // extended during sum r1 = 1.0 + 1.1 = 10.0001100110 // truncated to 12 bit r1 + 1.2 = 11.01001100101 // extended during sum r2 = r1 + 1.2 = 11.0100110010 // truncated to 12 bit 1.1 + 1.2 = 10.01001100110 // extended during sum r3 = 1.1 + 1.2 = 10.0100110011 // truncated to 12 bit r3 + 1.0 = 11.01001100110 // extended during sum r4 = r3 + 1.0 = 11.0100110011 // truncated to 12 bit ``` So changing the order of operations/truncations causes the error to change, and r4 != r2. If you add 1.1 and 1.2 in this system, the last bit carries, so it is not lost on truncation. If you add 1.0 to 1.1, the last bit of 1.1 is lost and so the result is not the same. In one ordering, the rounding (by truncation) removes a trailing `1`. In the other ordering, the rounding removes a trailing `0` both times. One does not equal zero; so the errors are not the same. Doubles have many more bits of precision, and C# probably uses rounding rather than truncation, but hopefully this simple model shows you different errors can happen with different orderings of the same values. The difference between fp and maths is that + is shorthand for 'add then round' rather than just add.
The C# compiler isn't doing anything; the CPU is. If you have A in a CPU register and you then add B, the result stored in that register is A+B, rounded to the floating-point precision used. If you then add C, the error adds up. This accumulation of error is not associative, thus the final difference.
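The effect in the question is easy to reproduce outside C#, since IEEE-754 doubles behave the same way everywhere. In Python, for example:

```python
result1 = (1.0 + 1.1) + 1.2   # left-to-right, like the first C# line
result2 = (1.2 + 1.0) + 1.1   # same constants, different order

print(result1)             # 3.3
print(result2)             # 3.3000000000000003
print(result1 == result2)  # False: float addition is not associative
```

The intermediate sums round to different doubles (1.0 + 1.1 happens to be exactly representable, while 1.2 + 1.0 rounds up by half an ulp), so the final results differ by one ulp.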
Why does the order affect the rounding when adding multiple doubles in C#
[ "", "c#", ".net", "compiler-construction", "rounding", "precision", "" ]
Is there a quick way to prevent insertion of repeated data into a table? I mean, the key will always be different, but the rest of the entry could be repeated, so there would be 2+ different keys identifying the same data. I could search the whole table, but I am afraid of the performance loss when doing this. Note: I'm just starting to learn SQL, please bear with me if this is a dumb question to you.
You want a [UNIQUE constraint](http://msdn.microsoft.com/en-us/library/ms191166.aspx) on the table.
As Joel said, you can use a UNIQUE constraint on the fields where you would not want repeated data. Also, if you can make sure your primary key always identifies a unique record, you should never run into this trouble. Ex: my personal record would always be different from yours if we use my ID card number as the primary key.
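To see a UNIQUE constraint doing its job without setting up a server, here is a small SQLite sketch (the table and column names are invented; SQL Server syntax differs slightly, but the behaviour is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person ("
    " id      INTEGER PRIMARY KEY,"      # surrogate key, always different
    " id_card TEXT NOT NULL UNIQUE)"     # the data that must never repeat
)
conn.execute("INSERT INTO person (id_card) VALUES ('A123')")
try:
    conn.execute("INSERT INTO person (id_card) VALUES ('A123')")  # duplicate
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

The database enforces the rule itself, so no pre-insert table scan is needed; the index behind the constraint makes the duplicate check fast.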
Prevent insertion of repeated information in SQL
[ "", "sql", "constraints", "" ]
I need to read a string, detect a {VAR}, and then do a file\_get\_contents('VAR.php') in place of {VAR}. The "VAR" can be named anything, like TEST, or CONTACT-FORM, etc. I don't want to know what VAR is -- not to do a hard-coded condition, but to just see an uppercase alphanumeric tag surrounded by curly braces and just do a file\_get\_contents() to load it. I know I need to use preg\_match and preg\_replace, but I'm stumbling through the RegExps on this. How is this useful? It's useful in hooking WordPress.
Orion above has the right solution, but it's not really necessary to use a callback function in your simple case. Assuming that the filenames are A-Z plus hyphens, you can do it in one line using PHP's /e flag in the regex: ``` $str = preg_replace('/{([-A-Z]+)}/e', 'file_get_contents(\'$1.html\')', $str); ``` This'll replace any instance of {VAR} with the contents of VAR.html. You could prefix a path into the second term if you need to specify a particular directory. There are the same vague security worries as outlined above, but I can't think of anything specific.
You'll need to do a number of things. I'm assuming you can do the legwork to get the page data you want to preprocess into a string. 1. First, you'll need the regular expression to match correctly. That should be fairly easy with something like `/{\w+}/`. 2. Next you'll need to use all of the flags to preg\_match to get the offset location in the page data. This offset will let you divide the string into the before, matching, and after parts of the match. 3. Once you have the 3 parts, you'll need to run your include, and stick them back together. 4. Lather, rinse, repeat. 5. Stop when you find no more variables. This isn't terribly efficient, and there are probably better ways. You may wish to consider doing a preg\_split instead, splitting on `/[{}]/`. No matter how you slice it you're assuming that you can trust your incoming data, and this will simplify the whole process a lot. To do this, I'd lay out the code like so: 1. Take your content and split it like so: `$parts = preg_split('/[{}]/', $page_string);` 2. Write a recursive function over the parts with the following criteria: * Halt when length of arg is < 3 * Else, return a new array composed of * $arg[0] . load\_data($arg[1]) . $arg[2] * plus whatever is left in $argv[3...] 3. Run your function over $parts.
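Both answers boil down to the same match-and-substitute loop. For comparison, here it is written with a substitution callback in Python (the `snippets` dict is a stand-in for reading `VAR.php` files from disk):

```python
import re

# Stand-in for file_get_contents("<NAME>.php") in the real code.
snippets = {"TEST": "hello", "CONTACT-FORM": "<form>...</form>"}

def load(match):
    name = match.group(1)                       # e.g. "CONTACT-FORM"
    return snippets.get(name, match.group(0))   # leave unknown tags untouched

page = "Header {TEST} middle {CONTACT-FORM} footer {UNKNOWN-TAG}"
print(re.sub(r"\{([A-Z][A-Z0-9-]*)\}", load, page))
# Header hello middle <form>...</form> footer {UNKNOWN-TAG}
```

The callback form avoids evaluating replacement strings as code, which is the main safety worry with approaches that splice file contents into an eval'd expression.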
Replacing Tags with Includes in PHP with RegExps
[ "", "php", "regex", "preg-replace", "preg-match", "" ]
I have a CSV output on one of my applications. This produces a file from web form data. In some cases I am getting a carriage return character in my notes field, which causes an error when importing the file. I would like to remove this character. The issue appears to happen when users paste information into the form from Word documents, or hold down the Shift key and press Enter. The field is ntext and is populated from a multi-line text box control. I have been trying to remove the character with a replace function, but some carriage return characters seem to be getting through. SQL ``` REPLACE(Fieldname, CHAR(13) + CHAR(10), ' ') AS new_Fieldname ```
It may be best to replace the characters separately, as they do not always occur together or in that order: ``` REPLACE(REPLACE(Fieldname, CHAR(13),' '), CHAR(10), ' ') AS new_Fieldname ```
Note that you may have a carriage return + line feed, or just a carriage return (depending on the source platform, the source of the data etc.). So you will probably need to handle both cases.
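The separate-replacement advice is easy to sanity-check. The same two-pass cleanup in Python handles CRLF, bare CR, and bare LF alike (the function name is illustrative):

```python
def flatten_newlines(field):
    # Replace CR and LF independently: they can appear as CRLF, bare CR, or bare LF.
    return field.replace("\r", " ").replace("\n", " ")

print(flatten_newlines("line1\r\nline2\rline3\nline4"))
# line1  line2 line3 line4   (a CRLF pair leaves two spaces)
```

A single replacement of the CR+LF pair would miss the bare CR and bare LF cases entirely, which is exactly why some carriage returns "get through".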
CSV Carriage Return Character
[ "", "sql", "csv", "" ]
How do I find out the fully qualified name of my assembly, such as: ``` MyNamespace.MyAssembly, version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 ``` I've managed to get my PublicKeyToken using the `sn.exe` tool in the SDK, but I'd like an easy way to get the fully qualified name.
If you can load the assembly into a .NET application, you can do: ``` typeof(SomeTypeInTheAssembly).Assembly.FullName ``` If you cannot then you can use ildasm.exe and it will be in there somewhere: ``` ildasm.exe MyAssembly.dll /text ```
This is a shameless copy-paste from [I Note It Down](http://inoteitdown.blogspot.co.nz/2011/07/get-assembly-fully-qualified-name.html) and is a simple way to get the FQN for the project output: ``` Open Visual Studio Go to Tools –> External Tools –> Add Title: Get Qualified Assembly Name Command: Powershell.exe Arguments: -command "[System.Reflection.AssemblyName]::GetAssemblyName(\"$(TargetPath)\").FullName" Check "Use Output Window". ``` The new tool appears under `Tools –> Get Qualified Assembly Name`. When the menu item is selected, the assembly name is given in the output window.
How do I find the fully qualified name of an assembly?
[ "", "c#", "deployment", "assemblies", "strongname", "" ]
I want to count the number of visits (or visitors, not page requests or hits) made to my whole site. I want to store each visit in a database, to show some graphs of this data in the future. Also, I want it to be reliable and fast, and not to poll the database on each page request. So, I think I should put some code in some entry point, but I don't know where, and not in the Global.asax AppEndRequest event. I'm guessing I should handle the SessionStart or SessionEnd events. How can I make this visit counter?
Note that Syed's suggestion will track the number of *visitors*, while Jhonny's will track the *page hits*. You might want to decide which one you're after. Assuming you want visitors, I agree with Syed, catch Session\_Start in global and increment a number in the database. You could get more useful information if you store extra info in the database - for example a table like this: ``` tblVisitor - SessionID nvarchar(32) - SessionStart datetime ``` This would probably be better than just incrementing a counter, because you can check the number of visitors *last month* or *this year*, etc. and also get information about time-of-day usage. However you did ask for the simplest possible way, and a simple counter would accomplish that for sure.
Use google analytics or something similar. It's the simplest solution.
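If you do roll your own, the per-session table suggested above can be prototyped with SQLite standing in for the real database (all names here are illustrative, not any real schema):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tblVisitor ("
    " SessionID    TEXT PRIMARY KEY,"   # one row per visit, not per page hit
    " SessionStart TEXT NOT NULL)"
)

def on_session_start(session_id):
    # INSERT OR IGNORE: page reloads within the same session don't count twice
    conn.execute(
        "INSERT OR IGNORE INTO tblVisitor VALUES (?, ?)",
        (session_id, datetime.now(timezone.utc).isoformat()),
    )

for sid in ("abc", "def", "abc"):   # third call is the same visitor again
    on_session_start(sid)

print(conn.execute("SELECT COUNT(*) FROM tblVisitor").fetchone()[0])  # 2
```

Keeping the timestamp per row is what makes "visitors last month" and time-of-day queries possible later, instead of a single opaque counter.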
What is the most simple visit counter that could be made?
[ "", "c#", "asp.net", "" ]
I am using this class in PHP for autoloading: <http://pastebin.com/m75f95c3b> But when I have somewhere ``` class Bar extends Foo ``` and I have a file called foo.class.php, it won't find the class. But when I change the filename to Foo.class.php, it will find the class. I am trying to add some functionality to my class to always find the file if it's there, no matter whether the filename starts with a capital or not. But so far I haven't succeeded. Anyone?
Gumbo's solution is the best one and that's what almost everyone uses; I do not understand why you do not like it. Of course you can first check whether a file with the class name capitalized exists, and if not, then check whether the lowercased version exists. That's definitely not better than Gumbo's solution. After all, "your" programmers have to follow some conventions/rules.
If all of your files have lowercase names, apply `strtolower` to the `$className` variable value in each method of your class: ``` $className = strtolower($className); ``` But I suggest that you draft some coding guidelines that every developer has to stick to. Otherwise you will have to test each of the 2^(length of file name) possibilities of writing a file name using uppercase and lowercase letters.
autoloader with upper and lowercase classname
[ "", "php", "autoload", "" ]
In Python I have a dictionary that maps tuples to a list of tuples, e.g. `{(1,2): [(2,3),(1,7)]}` I want to be able to encode this data and use it with JavaScript, so I looked into JSON, but it appears keys must be strings, so my tuple does not work as a key. Is the best way to handle this to encode it as "1,2" and then parse it into something I want on the JavaScript side? Or is there a more clever way to handle this?
You might consider saying ``` {"[1,2]": [(2,3),(1,7)]} ``` and then when you need to get the value out, you can just parse the keys themselves as JSON objects, which all modern browsers can do with the built-in `JSON.parse` method (I'm using `jQuery.each` to iterate here but you could use anything): ``` var myjson = JSON.parse('{"[1,2]": [[2,3],[1,7]]}'); $.each(myjson, function(keystr,val){ var key = JSON.parse(keystr); // do something with key and val }); ``` On the other hand, you might want to just structure your object differently, e.g. ``` {1: {2: [(2,3),(1,7)]}} ``` so that instead of saying ``` myjson[1,2] // doesn't work ``` which is invalid Javascript syntax, you could say ``` myjson[1][2] // returns [[2,3],[1,7]] ```
If your key tuples are truly integer pairs, then the easiest and probably most straightforward approach would be as you suggest.... encode them to a string. You can do this in a one-liner: ``` >>> simplejson.dumps(dict([("%d,%d" % k, v) for k, v in d.items()])) '{"1,2": [[2, 3], [1, 7]]}' ``` This would get you a javascript data structure whose keys you could then split to get the points back again: ``` '1,2'.split(',') ```
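Putting the two directions together, the full round trip with the stdlib `json` module and the "1,2"-style keys suggested above looks like this:

```python
import json

data = {(1, 2): [(2, 3), (1, 7)]}

# Encode: tuple keys become "1,2"-style strings; tuple values serialize as lists.
encoded = json.dumps({"%d,%d" % k: v for k, v in data.items()})
print(encoded)  # {"1,2": [[2, 3], [1, 7]]}

# Decode: split the string keys back into integer tuples.
decoded = {
    tuple(int(part) for part in key.split(",")): [tuple(v) for v in vals]
    for key, vals in json.loads(encoded).items()
}
print(decoded == data)  # True
```

Note that JSON has no tuple type at all, so the inner tuples come back as lists unless you rebuild them, as the decode step does here.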
Best way to encode tuples with json
[ "", "python", "json", "" ]
How do I add an addListener to Google Maps to get the MapType?
Try: ``` GEvent.addListener(map, "maptypechanged", function() { var newMapType = map.getCurrentMapType(); // ... }) ```
OK! I found it myself: `map.getCurrentMapType().getName();`
How do I add an addListener to Google Maps to get the MapType
[ "", "javascript", "google-maps", "" ]
With `java.sql.ResultSet` is there a way to get a column's name as a `String` by using the column's index? I had a look through the API doc but I can't find anything.
You can get this info from the `ResultSet` metadata. See [ResultSetMetaData](https://docs.oracle.com/javase/8/docs/api/java/sql/ResultSetMetaData.html) e.g. ``` ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM TABLE2"); ResultSetMetaData rsmd = rs.getMetaData(); String name = rsmd.getColumnName(1); ``` and you can get the column name from there. If you do ``` select x as y from table ``` then `rsmd.getColumnLabel()` will get you the retrieved label name too.
In addition to the above answers, if you're working with a dynamic query and you want the column names but do not know how many columns there are, you can use the ResultSetMetaData object to get the number of columns first and then cycle through them. Amending Brian's code: ``` ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM TABLE2"); ResultSetMetaData rsmd = rs.getMetaData(); int columnCount = rsmd.getColumnCount(); // The column count starts from 1 for (int i = 1; i <= columnCount; i++ ) { String name = rsmd.getColumnName(i); // Do stuff with name } ```
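For what it's worth, Python's DB-API exposes the same metadata through `cursor.description`: one 7-tuple per column, with the name in the first slot (shown here with sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (a INTEGER, b TEXT, c REAL)")
cur = conn.execute("SELECT a, b AS y, c FROM table2")

# cursor.description works even before any rows are fetched
names = [col[0] for col in cur.description]
print(names)  # ['a', 'y', 'c'] -- the alias comes back, like getColumnLabel()
```

As with JDBC, the metadata is available as soon as the query executes, so the names can drive dynamic output without fetching a single row.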
Retrieve column names from java.sql.ResultSet
[ "", "java", "jdbc", "" ]
While analyzing some ASP.NET MVC projects I got to see anonymous types scattered all over. HTML helpers have them: ``` <%=Html.TextBox("view.Address", "address", new { Class = "text_field" })%> ``` A lot of the return types for actions have them: ``` JsonNetResult jsonNetResult = new JsonNetResult { Formatting = Formatting.Indented, Data = new {Something= “”} } ``` I know this came from LINQ: ``` from p in context.Data select new { p.Name, p.Age }; ``` Are these really the correct way to accomplish things now outside of LINQ? Do they hurt code reusability and readability?
IMHO, the biggest problems with anonymous types stem from the inability to name their type. That is, it's not possible to explicitly specify the type of an anonymous type as an expression in code. This really makes it awkward to do things like create a generic List. ``` var list = new List<no way to specify an anonymous type>(); ``` Usually you have to resort to a helper method. ``` public List<T> CreateList<T>(T notUsed) { return new List<T>(); } var list = CreateList(new { Class = "foo" }); ``` This also has a larger impact in that you can't use an anonymous type as a return type, which makes casting extremely awkward (you need a helper method), etc. But these are not the operations that anonymous types were designed for. In many ways they are designed to be used within a particular defined function and its subsequently created lambda expressions. Not as a data communication type between two full-fledged functions. This is certainly a limitation in the design and at times drives me batty. But overall I find them to be a very useful construct in the language. Many parts of LINQ would not be possible without them in some form.
When the object being created is a transitory object, i.e., it's immediately consumed or converted into something else, I like the idea of anonymous types. It prevents you from littering your code with single-purpose classes whose only use is as a short-lived container. Your examples are typical of the types of uses where it comes in handy, i.e., in the helper extensions it's almost always immediately converted into a parameter dictionary and with the json result it gets serialized. If the class has domain significance or needs to be used as a first-class object, then by all means create a domain class for it.
Are anonymous types a good thing to use outside of LINQ?
[ "", "c#", ".net", "linq", "" ]
Is it possible to use CXF with Tomcat and without Spring? If so, how?
I suppose you mean to create a Web Service with CXF that would run in Tomcat? This is totally possible, and Spring is optional. You don't have to use it if you don't want to.
You can configure CXF programmatically without Spring. See the code examples [here](http://cxf.apache.org/docs/servlet-transport.html#ServletTransport-UsingtheservlettransportwithoutSpring). Putting the web application context path together with the end point extension--Greeter in the code example--will display a summary page in the browser with a link to the WSDL. No Spring necessary, giving CXF a very small footprint. I found the only jars necessary to be (for CXF 2.2.1): * XmlSchema-1.4.5.jar * cxf-2.2.3.jar * wsdl4j-1.6.2.jar I believe neethi-2.0.4.jar may also be necessary for some configurations, but I haven't needed it. Have fun!
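For reference, the servlet-only setup the CXF docs describe boils down to registering `CXFNonSpringServlet` in web.xml (a sketch; in practice you usually subclass `CXFNonSpringServlet`, override `loadBus()`, and publish your endpoints there with `Endpoint.publish()`):

```xml
<web-app>
  <servlet>
    <servlet-name>cxf</servlet-name>
    <!-- or your own subclass of CXFNonSpringServlet that publishes endpoints -->
    <servlet-class>org.apache.cxf.transport.servlet.CXFNonSpringServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>cxf</servlet-name>
    <url-pattern>/services/*</url-pattern>
  </servlet-mapping>
</web-app>
```

Requests under `/services/*` are then routed to whatever endpoints you published, with no Spring context anywhere.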
CXF without Spring
[ "", "java", "web-services", "tomcat", "cxf", "" ]
How can i retrieve the current CPU usage in c# using WMI? I've seen plenty of posts using performance counters, but I need a solution that can work with remote machines. I've also found a VB solution [here](http://www.microsoft.com/technet/scriptcenter/resources/qanda/apr05/hey0421.mspx), but I'd prefer to accomplish this in C# if possible.
Got it working. Used pretty much the same code as found here: <http://www.csharphelp.com/archives2/archive334.html> Turns out I had a bad path, which I finally got sorted out: `new ManagementPath(string.Format(@"\\{0}\root\cimv2", machineName));`
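A hedged sketch of the full remote query that answer is describing. `REMOTE-PC` is a placeholder for your machine name, the credentials handling via `ConnectionOptions` is left out for brevity, and the project needs a reference to System.Management.dll:

```csharp
using System;
using System.Management;

class RemoteCpuLoad
{
    static void Main()
    {
        // Point the scope at the remote machine's cimv2 namespace; pass
        // a ConnectionOptions instance here if you need explicit credentials.
        var scope = new ManagementScope(@"\\REMOTE-PC\root\cimv2");
        scope.Connect();

        var query = new ObjectQuery("SELECT LoadPercentage FROM Win32_Processor");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject cpu in searcher.Get())
            {
                Console.WriteLine("CPU load: {0}%", cpu["LoadPercentage"]);
            }
        }
    }
}
```

`Win32_Processor.LoadPercentage` is the load averaged over the last second, so poll it on an interval if you want a trend rather than a snapshot.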
Performance with WMI is messy, to say the least. Performance counters work OK with remote machines. Use the `System.Diagnostics.PerformanceCounterXxx` classes, the constructors have overloads which take a `machineName` argument.
CPU Usage using WMI & C#
[ "", "c#", "wmi", "cpu", "cpu-usage", "" ]
I am having a C++ code which have lot of recursion involved . I am thinking of using register class for my variables . Do you think by doing so I will be saving stack memory and will improve the performance Thanks Sameer
I could bet that the compiler is NOT going to honor your request. Say you have a local variable, and you recursively call the function 100 times. If the compiler were to honor the 'register' keyword on all your auto variables, it would need 100 hardware registers just for that one variable (all 100 instances are alive at the 100th call). Performance is a difficult problem. Analyze where the program really spends its time and try to optimize there, but be cautious: some decisions end in no gain, and some end up hurting performance. As has been mentioned before, compilers are really good at what they do. Forcing a variable into a register means one less register for the rest of the variables to use.
No, I think it will probably have no effect at all. Modern compilers are typically much better at scheduling register use than humans, and will probably ignore the "register" keyword. Having said that, the only real way to find out is to write some code and measure its performance with the register keyword and without it - the code change is trivial.
Register Variables
[ "", "c++", "recursion", "" ]
I have to create variable number of Labels and next to them TextBox controls - arranging the whole thing into a column, each line a Label and a TextBox. If the my Main window is smaller than the total height of all the TextBox controls, somehow I need a scrollbar which can scroll the list of TextBoxes. Pressing the enter key would have to take the focus to the next TextBox and also scroll in case of too many TextBoxes. This is a rather generic problem, I guess there are already some pre-baked solutions for this. Any advice?
You can use TableLayoutPanel as container for controls (Labels and TextBoxes) and create them dynamicaly in code. Example: ``` void Form1_Load( object sender, EventArgs e ) { const int COUNT = 10; TableLayoutPanel pnlContent = new TableLayoutPanel(); pnlContent.Dock = DockStyle.Fill; pnlContent.AutoScroll = true; pnlContent.AutoScrollMargin = new Size( 1, 1 ); pnlContent.AutoScrollMinSize = new Size( 1, 1 ); pnlContent.RowCount = COUNT; pnlContent.ColumnCount = 3; for ( int i = 0; i < pnlContent.ColumnCount; i++ ) { pnlContent.ColumnStyles.Add( new ColumnStyle() ); } pnlContent.ColumnStyles[0].Width = 100; pnlContent.ColumnStyles[1].Width = 5; pnlContent.ColumnStyles[2].SizeType = SizeType.Percent; pnlContent.ColumnStyles[2].Width = 100; this.Controls.Add( pnlContent ); for ( int i = 0; i < COUNT; i++ ) { pnlContent.RowStyles.Add( new RowStyle( SizeType.Absolute, 20 ) ); Label lblTitle = new Label(); lblTitle.Text = string.Format( "Row {0}:", i + 1 ); lblTitle.TabIndex = (i * 2); lblTitle.Margin = new Padding( 0 ); lblTitle.Dock = DockStyle.Fill; pnlContent.Controls.Add( lblTitle, 0, i ); TextBox txtValue = new TextBox(); txtValue.TabIndex = (i * 2) + 1; txtValue.Margin = new Padding( 0 ); txtValue.Dock = DockStyle.Fill; txtValue.KeyDown += new KeyEventHandler( txtValue_KeyDown ); pnlContent.Controls.Add( txtValue, 2, i ); } } void txtValue_KeyDown( object sender, KeyEventArgs e ) { if ( e.KeyCode == Keys.Enter ) { SendKeys.Send( "{TAB}" ); } } ```
Use a [TableLayoutPanel](http://msdn.microsoft.com/en-us/library/system.windows.forms.tablelayoutpanel.aspx). You can dynamically add controls, [specify their row/column](http://msdn.microsoft.com/en-us/library/he3sxc2a.aspx), and it will maintain a scrollbar for you (with the appropriate settings). It has its quirks, but should suit for this case. If you use the WinForms designer to place the TableLayoutPanel, then you can use it to also define the style of the columns. You can also vary the style of each row as suggested by Tcks. To add the control with a specified row/column: ``` int column = 42; int row = 7; myTableLayoutPanel.Controls.Add(new TextBox(), column, row); ```
WinForms: variable number of dynamic TextBox controls
[ "", "c#", ".net", "winforms", "" ]
I have two independant threads F1 and F2 (to be precise, two instances of java.util.concurrent.FutureTask) that are running in parallel. F1 do some processing, and then copy the result in a XML file. Then, it repeats these steps until it has nothing to do (many XML files are created). F2 looks in the F1 output directory, and take one file, parse it, and execute some processing on it. This works quite pretty well, except that sometimes, F2 gets truncated XML data from the file. I mean by that an incomplete XML, where some XML node are not present. The problem is that it is not always reproductible, and the files that are truncated are not always the same. Because of that, I am thinking that while F1 is writing one file on the disk, F2 is trying to read the same file. That's why sometimes I get this kind of error. **My question**: I am wondering if there is some mechanism that locks (even for reading) the file F1 is currently writing until it has completely finished to write it on the disk, so F2 will not be able to read it until the file is unlocked. Or any other way to solve my issue will be welcome ! F1 is writing the file this way: ``` try { file = new File("some-file.xml"); FileUtils.writeStringToFile(file, xmlDataAsString); } catch (IOException ioe) { LOGGER.error("Error occurred while storing the XML in a file.", ioe); } ``` F2 is reading the file this way: ``` private File getNextFileToMap() { File path = getPath(); // Returns the directory where F1 stores the results... File[] files = path.listFiles(new FilenameFilter() { public boolean accept(File file, String name) { return name.toLowerCase().endsWith(".xml"); } }); if (files.length > 0) { return files[0]; } return null; } // Somewhere in my main method of F2 ... 
f = getNextFileToMap(); Node xmlNode = null; try { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); DocumentBuilder builder = factory.newDocumentBuilder(); Document doc = builder.parse(f); if (doc != null) { xmlNode = doc.getDocumentElement(); } } catch (Exception e) { LOGGER.error("Error while getting the XML from the file " + f.getAbsolutePath(), e); } ```
Since you're already filtering for `.xml` files in F2, have F1 output to a `.temp` file, then rename it to `.xml` as a final step. That way, F2 will ignore the file F1 is making until F1 is completely done with it.
Have you looked at the [`java.nio.channels.FileLock`](http://java.sun.com/javase/6/docs/api/java/nio/channels/FileLock.html) API? A simpler solution may well be to write to a different filename (e.g. foo.tmp) and then rename it (e.g. to foo.xml) when it's ready - the rename is atomic on most operating systems (within a directory), so when the other process sees the XML file it should be complete. This is likely to be a lot simpler than locking.
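Both answers point at the same write-then-rename pattern; here is a minimal self-contained sketch of it with `java.nio.file` (the file names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicPublish {
    // F1 writes the complete XML under a .tmp name, then renames it.
    // The rename is atomic within one directory, so F2 (which only picks
    // up .xml files) never sees a partially written file.
    public static Path publish(Path tmp, Path target) throws IOException {
        return Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("drop");
        Path tmp = dir.resolve("report.tmp");
        Files.write(tmp, "<root>complete</root>".getBytes()); // F1's output
        Path xml = publish(tmp, dir.resolve("report.xml"));   // now visible to F2
        System.out.println(Files.exists(xml) && Files.notExists(tmp)); // true
    }
}
```

If the filesystem can't do the move atomically, `Files.move` throws `AtomicMoveNotSupportedException` instead of silently falling back, so you find out at once rather than via corrupted reads.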
Lock a file while writing it on the disk
[ "", "java", "file-io", "file-locking", "java.util.concurrent", "" ]
Lets say I have projects x and y in brother directories: projects/x and projects/y. There are some utility funcs common to both projects in myutils.py and some db stuff in mydbstuff.py, etc. Those are minor common goodies, so I don't want to create a single package for them. Questions arise about the whereabouts of such files, possible changes to PYTHONPATH, proper way to import, etc. What is the 'pythonic way' to use such files?
The pythonic way is to create a single extra package for them. Why don't you want to create a package? You can distribute this package with both projects, and the effect would be the same. You'll never get it right for all installation scenarios and platforms if you do it by mangling PYTHONPATH and using custom imports. Just create another package and be done in no time.
You can add path to shared files to [**`sys.path`**](http://docs.python.org/library/sys.html#sys.path) either directly by `sys.path.append(pathToShared)` or by defining `.pth` files and add them to with [**`site.addsitedir`**](http://docs.python.org/library/site.html#site.addsitedir). Path files (`.pth`) are simple text files with a path in each line.
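A self-contained sketch of the `sys.path` approach (the shared directory and `myutils` module are created on the fly here just so the example runs; in your layout the directory would be something like a `projects/common` folder):

```python
import os
import sys
import tempfile

# Stand-in for the shared directory holding myutils.py, mydbstuff.py, etc.
shared_dir = tempfile.mkdtemp()
with open(os.path.join(shared_dir, "myutils.py"), "w") as f:
    f.write("def slugify(s):\n    return s.strip().lower().replace(' ', '-')\n")

# Each project adds the shared directory to the import path at startup.
# A .pth file registered via site.addsitedir() achieves the same thing
# without touching the code.
sys.path.append(shared_dir)

import myutils
print(myutils.slugify("  Hello World "))  # hello-world
```

The `.pth` variant is nicer for deployment because the path lives in a one-line text file instead of in every entry script.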
What is the pythonic way to share common files in multiple projects?
[ "", "python", "" ]
I have seen `SQL` that uses both `!=` and `<>` for ***not equal***. What is the preferred syntax and why? I like `!=`, because `<>` reminds me of `Visual Basic`.
Technically they function the same if you’re using SQL Server AKA T-SQL. If you're using it in stored procedures there is no performance reason to use one over the other. It then comes down to personal preference. I prefer to use <> as it is ANSI compliant. You can find links to the various ANSI standards at... <http://en.wikipedia.org/wiki/SQL>
Most databases support `!=` (popular programming languages) and `<>` (ANSI). Databases that support both `!=` and `<>`: * Apache Derby 10.16: [`!=` and `<>`](https://db.apache.org/derby/docs/10.16/ref/rrefsqlj23075.html) * IBM Informix Dynamic Server 14.10: [`!=` and `<>`](https://www.ibm.com/docs/en/informix-servers/14.10?topic=statements-create-comparison-condition) * InterBase/Firebird: [`!=` and `<>`](https://firebirdsql.org/file/documentation/chunk/en/refdocs/fblangref40/fblangref40-commons.html#fblangref40-commons-compar) * Microsoft SQL Server 2000/2005/2008/2012/2016: [`!=`](https://learn.microsoft.com/sql/t-sql/language-elements/not-equal-to-transact-sql-exclamation) and [`<>`](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/comparison-operators-transact-sql) * Mimer SQL 11.0: [`!=` and `<>`](https://docs.mimer.com/MimerSqlManual/v110/html/Manuals/App_deprecated/App_deprecated.htm) * MySQL 8.0.33: [`!=` and `<>`](https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_not-equal) * Oracle 23c: [`!=` and `<>`](https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/Comparison-Conditions.html) * PostgreSQL 15: [`!=` and `<>`](https://www.postgresql.org/docs/current/functions-comparison.html#FUNCTIONS-COMPARISON-TABLE) * Sybase Adaptive Server Enterprise 16.0: [`!=` and `<>`](https://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc36271.1600/doc/html/san1393050522540.html) Databases that support the ANSI standard operator, **exclusively**: * IBM DB2 UDB 9.5: `<>` * Microsoft Access 2010: `<>`
Should I use != or <> for not equal in T-SQL?
[ "", "sql", "sql-server", "t-sql", "" ]
I have two subsequent mysqli statements, and the second returns: > Fatal error: Call to a member function bind\_param() on a non-object in ... Why this happens? Does this means that I need to open two different connection? Is there any way to avoid this (I love keeping the SQL connection details in one file)? Here the code: ``` $db = new mysqli("localhost", "root", "", "database"); $stmt = $db->prepare("UPDATE posts SET vote_".$_POST['vote']." = vote_".$_POST['vote']." + 1 WHERE id=?"); $stmt->bind_param('s', $_POST['id_post']); $stmt->execute(); $stmt->close(); $stmt = $db->prepare("INSERT INTO votes (kind, users_id, posts_id) VALUES (?, ?, ?)"); $stmt->bind_param('sss',$_POST['vote'],$_POST['id_user'],$_POST['id_post']); $stmt->execute(); $stmt->close(); ```
Check the return value of [mysqli::prepare](http://php.net/manual/de/mysqli.prepare.php). If it is FALSE, you should get the details for the occured error with [mysqli::error](http://php.net/manual/de/mysqli.error.php).
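A sketch of that check applied to the second statement from the question (connection details as in the question; `die()` is used only for brevity here):

```php
<?php
$db = new mysqli("localhost", "root", "", "database");

$stmt = $db->prepare("INSERT INTO votes (kind, users_id, posts_id) VALUES (?, ?, ?)");
if ($stmt === false) {
    // prepare() returns FALSE on failure; the reason is in $db->error.
    // A failed prepare is exactly what produces "Call to a member
    // function bind_param() on a non-object" on the next line.
    die("prepare() failed: " . $db->error);
}
$stmt->bind_param('sss', $_POST['vote'], $_POST['id_user'], $_POST['id_post']);
$stmt->execute();
$stmt->close();
```

Note that the first statement in the question concatenates `$_POST['vote']` into a column name, so a bad POST value will also make that `prepare()` fail (and is an injection risk besides).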
I think your `$stmt` variable is not a valid statement object when you call `bind_param` on it (`prepare` returns FALSE on failure). Maybe your `$_POST['vote']` is empty? You can check it before you prepare and bind the params on the statement.
mysqli why does this happens?
[ "", "php", "mysqli", "" ]
I have the following data : ``` SectorKey Sector foo 1 A null 2 B null ... ... ... 1 null a 2 null b 2 null c 1 null d 2 null e ... ... ... ``` I want to update column Sector when it's null based on the value of sectorKey, ie I want Sector to be 'A' when SectorKey is 1 and 'B' when SectorKey is 2 I've tried this query : ``` update tbFoo set Sector=A.sector from tbFoo A INNER JOIN tbFoo B ON A.SectorKey=B.SectorKey and A.Sector is not null and B.Sector is null ``` and got this error message : > The table 'tbFoo' is ambiguous. I've tried to alias the first tbFoo, but it doesn't seem to be a valid syntax. I don't understand why SQLServer complains about an ambiguous naming since I've got all my tables aliased. I've found [this thread](https://stackoverflow.com/questions/440702/how-can-i-update-a-field-in-one-table-with-a-field-from-another-table-sql), and I feel like I'm doing exactly the same thing as in the upvoted answer. I've also tried the query suggested in the accepted answer : ``` update tbFoo A set Sector = (select Sector from tbFoo where A.SectorKey=SectorKey and Sector is not null) ``` and then SQLServer complains about an incorrect syntax near 'A' Any ideas on what may be happening, and to fix this? I'm using SQLServer 2008. **EDIT** I've not shown the total data of my table. I don't have only two cases (A and B), but rather a few thousands of cases. So an explicit case is not an option
Use the alias in the first part of your update query: ``` update B set Sector=A.sector from tbFoo A INNER JOIN tbFoo B ON A.SectorKey=B.SectorKey and A.Sector is not null and B.Sector is null ``` Otherwise it doesn't know which instance of the table to update.
Try to use a CTE and change the name of field for alias: ``` WITH CTE_TBFOO(SETOR) AS ( SELECT Sector FROM tbFoo T1 ) update tbFoo set Sector= A.SETOR from CTE_TBFOO A WHERE A.SETOR = SectorKey and A.SETOR is not null and B.Sector is null ```
How can I update a table with data retrieved by a join on itself?
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Consider the following code: ``` string[] s = new[] { "Rob", "Jane", "Freddy" }; string joined = string.Join(", ", s); // joined equals "Rob, Jane, Freddy" ``` For UI reasons I might well want to display the string `"Rob, Jane or Freddy"`. Any suggestions about the most concise way to do this? ## Edit I am looking for something that is concise to type. Since I am only concatenating small numbers (<10) of strings I am not worried about run-time performance here.
What about: ``` if (s.Length > 1) { uiText = string.Format("{0} or {1}", string.Join(", ", s, 0, s.Length - 1), s[s.Length - 1]); } else { uiText = s.Length > 0 ? s[0] : ""; } ```
Concise meaning to type? or to run? The fastest to run will be hand-cranked with `StringBuilder`. But to type, probably (**edit** handle 0/1 etc): ``` string joined; switch (s.Length) { case 0: joined = ""; break; case 1: joined = s[0]; break; default: joined = string.Join(", ", s, 0, s.Length - 1) + " or " + s[s.Length - 1]; break; } ``` --- The `StringBuilder` approach might look something like: ``` static string JoinOr(string[] values) { switch (values.Length) { case 0: return ""; case 1: return values[0]; } StringBuilder sb = new StringBuilder(); for (int i = 0; i < values.Length - 2; i++) { sb.Append(values[i]).Append(", "); } return sb.Append(values[values.Length-2]).Append(" or ") .Append(values[values.Length-1]).ToString(); } ```
Concatenating an array of strings to "string1, string2 or string3"
[ "", "c#", ".net", "string", "" ]
The following snippet draws a gray square. ``` glColor3b(50, 50, 50); glBegin(GL_QUADS); glVertex3f(-1.0, +1.0, 0.0); // top left glVertex3f(-1.0, -1.0, 0.0); // bottom left glVertex3f(+1.0, -1.0, 0.0); // bottom right glVertex3f(+1.0, +1.0, 0.0); // top right glEnd(); ``` In my application, behind this single square exists a colored cube. What function should I use to make square (and only this square) opaque?
In the init function, use these two lines: ``` glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); ``` And in your render function, ensure that `glColor4f` is used instead of `glColor3f`, and set the 4th argument to the level of opacity required. ``` glColor4f(1.0, 1.0, 1.0, 0.5); glBegin(GL_QUADS); glVertex3f(-1.0, +1.0, 0.0); // top left glVertex3f(-1.0, -1.0, 0.0); // bottom left glVertex3f(+1.0, -1.0, 0.0); // bottom right glVertex3f(+1.0, +1.0, 0.0); // top right glEnd(); ```
glColor4f(float r, float g, float b, float alpha); (in your case maybe glColor4b). Also make sure that blending is enabled. (You have to reset the color to non-alpha afterwards, which might involve a glGet* call to save the old vertex color.)
How do I set the opacity of a vertex in OpenGL?
[ "", "c++", "opengl", "transparency", "opacity", "" ]
Is there some mechanism by which I can be notified (in C#) when a file is modified on the disc?
That would be [System.IO.FileSystemWatcher](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.aspx).
You can use the [`FileSystemWatcher`](https://learn.microsoft.com/en-us/dotnet/api/system.io.filesystemwatcher) class. ``` public void CreateFileWatcher(string path) { // Create a new FileSystemWatcher and set its properties. FileSystemWatcher watcher = new FileSystemWatcher(); watcher.Path = path; /* Watch for changes in LastAccess and LastWrite times, and the renaming of files or directories. */ watcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.FileName | NotifyFilters.DirectoryName; // Only watch text files. watcher.Filter = "*.txt"; // Add event handlers. watcher.Changed += new FileSystemEventHandler(OnChanged); watcher.Created += new FileSystemEventHandler(OnChanged); watcher.Deleted += new FileSystemEventHandler(OnChanged); watcher.Renamed += new RenamedEventHandler(OnRenamed); // Begin watching. watcher.EnableRaisingEvents = true; } // Define the event handlers. private static void OnChanged(object source, FileSystemEventArgs e) { // Specify what is done when a file is changed, created, or deleted. Console.WriteLine("File: " + e.FullPath + " " + e.ChangeType); } private static void OnRenamed(object source, RenamedEventArgs e) { // Specify what is done when a file is renamed. Console.WriteLine("File: {0} renamed to {1}", e.OldFullPath, e.FullPath); } ```
Notification when a file changes?
[ "", "c#", "file", "filesystemwatcher", "" ]
What solution would you recommend for including files in a PHP project? 1. There aren't manual calls of require/include functions - everything loads through autoload functions 2. Package importing, when needed. Here is the package importing API: ``` import('util.html.HTMLParser'); import('template.arras.*'); ``` In this function declaration you can explode the string with dots (package hierarchy delimeter), looping through files in particular package (folder) to include just one of them or all of them if the asterisk symbol is found at the end of the string, e.g. ('template.arras.\*'). One of the benefits I can see in package importing method, is that it can force you to use better object decomposition and class grouping. One of the drawbacks I can see in autoload method - is that autoload function can become very big and not very obvious/readable. What do you think about it? * What benefits/drawbacks can you name in each of this methods? * How can I find the best solution for the project? * How can I know if there will be any performance problems if package management is used?
I use `__autoload()` extensively. The autoload function that we use in our application has a few tweaks for backwards compatibility with older classes, but we generally follow a convention when creating new classes that allows the autoload to work fairly seamlessly: * **Consistent Class Naming**: each class in its own file, each class named in camel case with segments separated by underscores. This maps to the class path. For example, Some_CoolClass maps to our class directory and then 'Some/CoolClass.class.php'. I think some frameworks use this convention. * **Explicitly Require External Classes**: since we don't have control over the naming of any external libraries that we use, we load them using PHP's require_once() function.
The import method is an improvement but still loads more than needed, either via the asterisk or by loading packages at the beginning of the script (because importing before every "new Classname" would become cumbersome). I'm a fan of `__autoload()` or the even better [`spl_autoload_register()`](http://php.net/manual/en/function.spl-autoload-register.php), because it will include only the classes you're actually using, with the extra benefit of not caring where the class is located. If your colleagues move a file to another directory, you are not affected. The downside is that it needs additional logic to make it work properly with directories.
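A self-contained sketch of the `spl_autoload_register()` approach, mapping underscores in class names to directories. The class name, directory layout, and `hello()` method are all illustrative; the file is generated on the fly here only so the example runs:

```php
<?php
// Stand-in for a class tree: Some_CoolClass lives in Some/CoolClass.class.php.
$base = sys_get_temp_dir() . '/autoload_demo_' . getmypid();
mkdir($base . '/Some', 0777, true);
file_put_contents(
    $base . '/Some/CoolClass.class.php',
    '<?php class Some_CoolClass { public function hello() { return "hi"; } }'
);

// Register an autoloader that maps Foo_Bar -> Foo/Bar.class.php under $base.
spl_autoload_register(function ($class) use ($base) {
    $path = $base . '/' . str_replace('_', '/', $class) . '.class.php';
    if (is_file($path)) {
        require_once $path;
    }
});

$obj = new Some_CoolClass();   // autoloader fires here, on first use
echo $obj->hello(), "\n";      // hi
```

Unlike an `import('some.cool.*')` call, nothing is read from disk until a class is actually instantiated.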
Import package or autoloading for PHP?
[ "", "php", "autoload", "package-managers", "" ]
``` typedef struct temp { int a,b; char *c; temp(){ c = (char*)malloc(10);}; ~temp(){free(c);}; }temp; int main() { temp a; list<temp> l1; l1.push_back(a); l1.clear(); return 0; } ``` giving segmentation fault.
You don't have a copy constructor. When you push `a` into the list, it gets copied. Because you don't have a copy constructor (to allocate memory for `c` and copy from the old `c` to the new `c`), `c` is the same pointer in `a` and in the copy of `a` in the list. The destructor for both `a`s gets called: the first will succeed, the second will fail because the memory `c` points to has already been freed. You need a copy constructor. To see what's happening, put some couts in the constructors and destructors and step through the code.
You need a deep-copy constructor to avoid the double free(). You have a variable of the *temp* class (*a*), then you add it to the list and the variable is copied. Then you clear the list; the element inside is destroyed and free() is called. Then the *a* variable is destroyed and free() is called again on the same address, which leads to the segmentation fault. You need a copy constructor that deep-copies *temp* variables, i.e. malloc()s another buffer and copies the data.
Seg fault after is item pushed onto STL container
[ "", "c++", "memory", "memory-management", "stl", "pointers", "" ]
One of the method signatures for the DataRow Add Method is: ``` DataRow.Add(params object[] values) ``` When using the above, if I am passing in some strings for example, do I have to do it like the following: ``` DataRow.Add(new object[]{"a","b","c"}); ``` or can I just do it like the following: ``` DataRow.Add("a","b","c"); ``` Would both ways work? The same question applies to the a collection of DataColumns when passing adding columns to a DataTable using the AddRange method. Do I have to use DataTable.Columns.AddRange(new DataColumn[]{}) or can I just pass the columns without instantiating a new array (meaning does it do it indirectly)
Yes, both will work fine, though the second syntax is preferable.
Yes, both ways would work. The `params` keyword is magic like that.
How to pass parameters to a the Add Method of a DataRow?
[ "", "c#", "datarow", "" ]
C# has a syntax feature where you can concatenate many data types together on 1 line. ``` string s = new String(); s += "Hello world, " + myInt + niceToSeeYouString; s += someChar1 + interestingDecimal + someChar2; ``` What would be the equivalent in C++? As far as I can see, you'd have to do it all on separate lines as it doesn't support multiple strings/variables with the + operator. This is OK, but doesn't look as neat. ``` string s; s += "Hello world, " + "nice to see you, " + "or not."; ``` The above code produces an error.
``` #include <sstream> #include <string> std::stringstream ss; ss << "Hello, world, " << myInt << niceToSeeYouString; std::string s = ss.str(); ``` Take a look at this Guru Of The Week article from Herb Sutter: [The String Formatters of Manor Farm](http://www.gotw.ca/publications/mill19.htm)
In 5 years nobody has mentioned `.append`? ``` #include <string> std::string s; s.append("Hello world, "); s.append("nice to see you, "); s.append("or not."); ``` Or on one line: ``` s.append("Hello world, ").append("nice to see you, ").append("or not."); ```
How do I concatenate multiple C++ strings on one line?
[ "", "c++", "string", "compiler-errors", "concatenation", "one-liner", "" ]
What is the purpose of adding the batch="false" in the compilation tag in ASP.NET 1.1?
MSDN says the purpose of the batch flag > eliminates the delay caused by the compilation required when you > access a file for the first time. When this attribute is set to True, > ASP.NET precompiles all the uncompiled files in a batch mode, which > causes an even longer delay the first time the files are compiled. > However, after this initial delay, the compilation delay is eliminated > on subsequent access of the file. Having it set to false will probably make it compile faster the first time, but slower subsequent times, and I believe this applies to 1.1 as well. [MSDN Link](http://msdn.microsoft.com/en-us/library/s10awwz0.aspx)
I know this question is closed (and about v1.1) but the batch attribute is actually defaulted to True in .Net 2.0 onwards. <http://msdn.microsoft.com/en-us/library/s10awwz0%28VS.80%29.aspx>
web.config batch="false"
[ "", "c#", "asp.net", "vb.net", "web-config", ".net-1.1", "" ]
I'm looking for a quick way to create a list of values in C#. In Java I frequently use the snippet below: ``` List<String> l = Arrays.asList("test1","test2","test3"); ``` Is there any equivalent in C# apart from the obvious one below? ``` IList<string> l = new List<string>(new string[] {"test1","test2","test3"}); ```
Check out C# 3.0's [Collection Initializers](http://www.developer.com/net/csharp/article.php/3607421). ``` var list = new List<string> { "test1", "test2", "test3" }; ```
If you're looking to reduce clutter, consider ``` var lst = new List<string> { "foo", "bar" }; ``` This uses two features of C# 3.0: type inference (the `var` keyword) and the collection initializer for lists. Alternatively, if you can make do with an array, this is even shorter (by a small amount): ``` var arr = new [] { "foo", "bar" }; ```
Quick way to create a list of values in C#?
[ "", "c#", "list", "" ]
I need one liner (or close to it) that verifies that given array of 9 elements doesn't contain repeating numbers 1,2,3,...,9. Repeating zeroes do not count (they represent empty cells). The best I have came out so far is: ``` var a = new int[9] {1,2,3,4,5,6,7,8,9}; var itIsOk = a.Join(a, i => i, j => j, (x, y) => x) .GroupBy(y => y).Where(g => g.Key > 0 && g.Count() > 1).Count() == 0; ``` If you don't want to solve my problems :), could you at least tell if the above algorithm works correctly? And, yes, a have read [this one](https://stackoverflow.com/questions/289537/a-cool-algorithm-to-check-a-sudoku-field).
Lucky for you I built a sudoku solver myself not too long ago :) The whole thing was about 200 lines of C#, and it would solve the toughest puzzles I could find online in 4 seconds or less. Performance probably isn't that great due to the use of .Count(), but it should work: ``` !a.Any(i => i != 0 && a.Where(j => j != 0 && i == j).Count() > 1) ``` Also, the `j != 0` part isn't really needed, but it should help things run a bit faster. [edit:] kvb's answer gave me another idea: ``` !a.Where(i => i != 0).GroupBy(i => i).Any(gp => gp.Count() > 1) ``` Filter the 0's *before* grouping. Though based on how IEnumerable works it may not matter any. Either way, for best performance replace `.Count() > 1` in either of those with a new IEnumerable extension method (declared in a static class) that looks like this: ``` static bool MoreThanOne<T>(this IEnumerable<T> enumerable, Predicate<T> pred) { bool flag = false; foreach (T item in enumerable) { if (pred(item)) { if (flag) return true; flag = true; } } return false; } ``` It probably won't matter too much since arrays are limited to 9 items, but if you call it a lot it might add up.
This is about 50-250 times faster than a LINQ solution (depending on how early the duplicate is found): ``` public static bool IsValid(int[] values) { int flag = 0; foreach (int value in values) { if (value != 0) { int bit = 1 << value; if ((flag & bit) != 0) return false; flag |= bit; } } return true; } ```
Sudoku algorithm in C#
[ "", "c#", "linq", "sudoku", "" ]