In the Microsoft.Office.Interop.Visio library each entity (e.g. Shape) is described both as a class (ShapeClass) and an interface (Shape). So for each element you have:

```
interface Shape { ... }
class ShapeClass : Shape { ... }

interface Page { ... }
class PageClass : Page { ... }
...
```

Why is it designed like this?
The "Interop" part of the namespace hints that this is actually a COM-based API. COM was Microsoft's first attempt at a language-neutral component model for developers, and one of its core tenets was interface-based design. So, in your example, `ShapeClass` is called a "co-class", which is a named implementation of the `Shape` interface. Co-classes were registered globally (in the Win32 registry) and could be created based on their friendly name ("prog-ID") or a GUID, called "CLSID".
I'd guess it's because they are all implemented as COM objects, and the interface is there to define the contract for the class implementation; the interfaces would be defined in IDL. [wiki COM](http://en.wikipedia.org/wiki/Component_Object_Model)
Why is there interface / class (IFoo, FooClass) pattern in Office Interop API?
[ "c#", "com", "com-interop" ]
How do I sort columns of integers in a ListView? (C#, .NET 2.0, WinForms, System.Windows.Forms.ListView)
This is how I accomplished being able to sort on multiple columns, and being able to sort each column as a number or as text. First use this class:

```
class Sorter : System.Collections.IComparer
{
    public int Column = 0;
    public System.Windows.Forms.SortOrder Order = SortOrder.Ascending;

    public int Compare(object x, object y) // IComparer Member
    {
        if (!(x is ListViewItem)) return (0);
        if (!(y is ListViewItem)) return (0);

        ListViewItem l1 = (ListViewItem)x;
        ListViewItem l2 = (ListViewItem)y;

        if (l1.ListView.Columns[Column].Tag == null)
        {
            l1.ListView.Columns[Column].Tag = "Text";
        }

        if (l1.ListView.Columns[Column].Tag.ToString() == "Numeric")
        {
            float fl1 = float.Parse(l1.SubItems[Column].Text);
            float fl2 = float.Parse(l2.SubItems[Column].Text);

            if (Order == SortOrder.Ascending)
            {
                return fl1.CompareTo(fl2);
            }
            else
            {
                return fl2.CompareTo(fl1);
            }
        }
        else
        {
            string str1 = l1.SubItems[Column].Text;
            string str2 = l2.SubItems[Column].Text;

            if (Order == SortOrder.Ascending)
            {
                return str1.CompareTo(str2);
            }
            else
            {
                return str2.CompareTo(str1);
            }
        }
    }
}
```

In your form's constructor, set the sorter like this:

```
lvSeries.ListViewItemSorter = new Sorter();
```

Then handle the ColumnClick event of your listview control like this:

```
private void lvSeries_ColumnClick(object sender, ColumnClickEventArgs e)
{
    Sorter s = (Sorter)lvSeries.ListViewItemSorter;
    s.Column = e.Column;

    if (s.Order == System.Windows.Forms.SortOrder.Ascending)
    {
        s.Order = System.Windows.Forms.SortOrder.Descending;
    }
    else
    {
        s.Order = System.Windows.Forms.SortOrder.Ascending;
    }

    lvSeries.Sort();
}
```

This all depends on the Tag property of each column being set to "Numeric" or not, so the sorter knows how to sort. In the above example I parse the values as floats when numeric; you may want to change that to int.
If you are getting started with a ListView, your life will be much much easier if you use an [ObjectListView](http://www.codeproject.com/KB/list/objectlistview.aspx) instead. ObjectListView is an open source wrapper around .NET WinForms ListView, and it solves all these annoying little problems that normally make working with a ListView so frustrating. For example, it automatically sorts ints so that '100' comes after '3' (DateTimes, bools, and everything else sorts correctly too). Seriously, you will never want to go back to a plain ListView after using an ObjectListView. Yes, I am the author -- but that doesn't mean I'm biased... OK, well maybe it does :) Look [here](http://objectlistview.sourceforge.net/cs/index.html#what-people-have-said-about-objectlistview) for some other people's opinions.
How do I sort integers in a ListView?
[ "c#", "winforms", "listview" ]
All of my C++ programs so far have been using the command line interface and the only other language I have experience with is PHP which doesn't support GUIs. Where do I start with graphical user interface programming in C++? How do I create one?
Essentially, an operating system's windowing system exposes some API calls that you can perform to do jobs like create a window, or put a button on the window. Basically, you get a suite of header files and you can call functions in those imported libraries, just like you'd do with stdlib and `printf`. Each operating system comes with its own GUI toolkit, suite of header files, and API calls, and its own way of doing things. There are also cross-platform toolkits like [GTK](https://en.wikipedia.org/wiki/GTK), [Qt](https://en.wikipedia.org/wiki/Qt_%28software%29), and [wxWidgets](http://en.wikipedia.org/wiki/WxWidgets) that help you build programs that work anywhere. They achieve this by having the same API calls on each platform, but a different implementation for those API functions that call down to the native OS API calls. One thing they'll all have in common, which will be different from a CLI program, is something called an event loop. The basic idea there is somewhat complicated, and difficult to compress, but in essence it means that not a lot is going on in your main class/main function, except:

* check the event queue for any new events
* if there are any, dispatch those events to the appropriate handlers
* when you're done, yield control back to the operating system (usually with some kind of special "sleep" or "select" or "yield" function call)
* the yield function will then return when the operating system is done, and you get another go around the loop.

There are plenty of resources about event-based programming. If you have any experience with JavaScript, it's the same basic idea, except that you, the scripter, have no access to or control over the event loop itself, or what events there are; your only job is to write and register handlers. You should keep in mind that GUI programming is incredibly complicated and difficult, in general.
If you have the option, it's actually much easier to just integrate an embedded webserver into your program and have an HTML/web based interface. The one exception that I've encountered is Apple's [Cocoa](http://en.wikipedia.org/wiki/Cocoa_%28API%29) + [Xcode](http://en.wikipedia.org/wiki/Xcode) + interface builder + tutorials that make it easily the most approachable environment for people new to GUI programming that I've seen.
There are plenty of **free portable GUI libraries**, each with its own strengths and weaknesses:

* [Qt](https://www.qt.io/)
* [Dear ImGui](https://github.com/ocornut/imgui)
* [GTKmm](http://www.gtkmm.org/) (based on [GTK+](http://www.gtk.org/))
* [wxWidgets](http://www.wxwidgets.org/)
* [FLTK](http://www.fltk.org/)
* [Ultimate++](http://www.ultimatepp.org/)
* [JUCE](https://www.juce.com/)
* ...

Especially [Qt has nice tutorials](http://doc.qt.io/qt-5/qtexamplesandtutorials.html) and tools which help you get started. Enjoy! Note, however, that you should **avoid platform-specific** functionality such as the Win32 API or MFC. That ties you unnecessarily to a specific platform with almost no benefits.
How do I build a graphical user interface in C++?
[ "c++", "user-interface" ]
In Windows, is there a way to check for the existence of an environment variable for another process? Just need to check existence, not necessarily get value. I need to do this from code.
If you know the virtual address at which the environment is stored, you can use [`OpenProcess`](http://msdn.microsoft.com/en-us/library/ms684320(VS.85).aspx) and [`ReadProcessMemory`](http://msdn.microsoft.com/en-us/library/ms680553(VS.85).aspx) to read the environment out of the other process. However, to find the virtual address, you'll need to poke around in the [Thread Information Block](http://en.wikipedia.org/wiki/Win32_Thread_Information_Block) of one of the process' threads. To get that, you'll need to call [`GetThreadContext()`](http://msdn.microsoft.com/en-us/library/ms679362(VS.85).aspx) after calling [`SuspendThread()`](http://msdn.microsoft.com/en-us/library/ms686345(VS.85).aspx). But in order to call those, you need a thread handle, which you can get by calling [`CreateToolhelp32Snapshot`](http://msdn.microsoft.com/en-us/library/ms682489(VS.85).aspx) with the `TH32CS_SNAPTHREAD` flag to create a snapshot of the process, [`Thread32First`](http://msdn.microsoft.com/en-us/library/ms686728(VS.85).aspx) to get the thread ID of the first thread in the process, and [`OpenThread`](http://msdn.microsoft.com/en-us/library/ms684335(VS.85).aspx) to get a handle to the thread.
Here is a working example which the printed output can be used to check existence as well as read the value, (build it as the same architecture as the executable's process identifier you must target): getenv.cpp ``` #include <string> #include <vector> #include <cwchar> #include <windows.h> #include <winternl.h> using std::string; using std::wstring; using std::vector; using std::size_t; // define process_t type typedef DWORD process_t; // #define instead of typedef to override #define RTL_DRIVE_LETTER_CURDIR struct {\ WORD Flags;\ WORD Length;\ ULONG TimeStamp;\ STRING DosPath;\ }\ // #define instead of typedef to override #define RTL_USER_PROCESS_PARAMETERS struct {\ ULONG MaximumLength;\ ULONG Length;\ ULONG Flags;\ ULONG DebugFlags;\ PVOID ConsoleHandle;\ ULONG ConsoleFlags;\ PVOID StdInputHandle;\ PVOID StdOutputHandle;\ PVOID StdErrorHandle;\ UNICODE_STRING CurrentDirectoryPath;\ PVOID CurrentDirectoryHandle;\ UNICODE_STRING DllPath;\ UNICODE_STRING ImagePathName;\ UNICODE_STRING CommandLine;\ PVOID Environment;\ ULONG StartingPositionLeft;\ ULONG StartingPositionTop;\ ULONG Width;\ ULONG Height;\ ULONG CharWidth;\ ULONG CharHeight;\ ULONG ConsoleTextAttributes;\ ULONG WindowFlags;\ ULONG ShowWindowFlags;\ UNICODE_STRING WindowTitle;\ UNICODE_STRING DesktopName;\ UNICODE_STRING ShellInfo;\ UNICODE_STRING RuntimeData;\ RTL_DRIVE_LETTER_CURDIR DLCurrentDirectory[32];\ ULONG EnvironmentSize;\ }\ // shortens a wide string to a narrow string static inline string shorten(wstring wstr) { int nbytes = WideCharToMultiByte(CP_UTF8, 0, wstr.c_str(), (int)wstr.length(), NULL, 0, NULL, NULL); vector<char> buf(nbytes); return string { buf.data(), (size_t)WideCharToMultiByte(CP_UTF8, 0, wstr.c_str(), (int)wstr.length(), buf.data(), nbytes, NULL, NULL) }; } // replace all occurrences of substring found in string with specified new string static inline string string_replace_all(string str, string substr, string nstr) { size_t pos = 0; while ((pos = str.find(substr, pos)) != 
string::npos) { str.replace(pos, substr.length(), nstr); pos += nstr.length(); } return str; } // func that splits string by first occurrence of equals sign vector<string> string_split_by_first_equalssign(string str) { size_t pos = 0; vector<string> vec; if ((pos = str.find_first_of("=")) != string::npos) { vec.push_back(str.substr(0, pos)); vec.push_back(str.substr(pos + 1)); } return vec; } // checks whether process handle is 32-bit or not static inline bool IsX86Process(HANDLE process) { BOOL isWow = true; SYSTEM_INFO systemInfo = { 0 }; GetNativeSystemInfo(&systemInfo); if (systemInfo.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_INTEL) return isWow; IsWow64Process(process, &isWow); return isWow; } // helper to open processes based on pid with full debug privileges static inline HANDLE OpenProcessWithDebugPrivilege(process_t pid) { HANDLE hToken; LUID luid; TOKEN_PRIVILEGES tkp; OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken); LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &luid); tkp.PrivilegeCount = 1; tkp.Privileges[0].Luid = luid; tkp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED; AdjustTokenPrivileges(hToken, false, &tkp, sizeof(tkp), NULL, NULL); CloseHandle(hToken); return OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid); } // get wide character string of pids environ based on handle static inline wchar_t *GetEnvironmentStringsW(HANDLE proc) { PEB peb; SIZE_T nRead; ULONG res_len = 0; PROCESS_BASIC_INFORMATION pbi; RTL_USER_PROCESS_PARAMETERS upp; HMODULE p_ntdll = GetModuleHandleW(L"ntdll.dll"); typedef NTSTATUS (__stdcall *tfn_qip)(HANDLE, PROCESSINFOCLASS, PVOID, ULONG, PULONG); tfn_qip pfn_qip = tfn_qip(GetProcAddress(p_ntdll, "NtQueryInformationProcess")); NTSTATUS status = pfn_qip(proc, ProcessBasicInformation, &pbi, sizeof(pbi), &res_len); if (status) { return NULL; } ReadProcessMemory(proc, pbi.PebBaseAddress, &peb, sizeof(peb), &nRead); if (!nRead) { return NULL; } ReadProcessMemory(proc, 
peb.ProcessParameters, &upp, sizeof(upp), &nRead); if (!nRead) { return NULL; } PVOID buffer = upp.Environment; ULONG length = upp.EnvironmentSize; wchar_t *res = new wchar_t[length / 2 + 1]; ReadProcessMemory(proc, buffer, res, length, &nRead); if (!nRead) { return NULL; } res[length / 2] = 0; return res; } // get env of pid as a narrow string string env_from_pid(process_t pid) { string envs; HANDLE proc = OpenProcessWithDebugPrivilege(pid); wchar_t *wenvs = NULL; if (IsX86Process(GetCurrentProcess())) { if (IsX86Process(proc)) { wenvs = GetEnvironmentStringsW(proc); } } else { if (!IsX86Process(proc)) { wenvs = GetEnvironmentStringsW(proc); } } string arg; if (wenvs == NULL) { return ""; } else { arg = shorten(wenvs); } size_t i = 0; do { size_t j = 0; vector<string> envVec = string_split_by_first_equalssign(arg); for (const string &env : envVec) { if (j == 0) { if (env.find_first_of("%<>^&|:") != string::npos) { continue; } if (env.empty()) { continue; } envs += env; } else { envs += "=\"" + string_replace_all(env, "\"", "\\\"") + "\"\n"; } j++; } i += wcslen(wenvs + i) + 1; arg = shorten(wenvs + i); } while (wenvs[i] != L'\0'); if (envs.back() == '\n') { envs.pop_back(); } if (wenvs != NULL) { delete[] wenvs; } CloseHandle(proc); return envs; } // test function (can be omitted) int main(int argc, char **argv) { if (argc == 2) { printf("%s", env_from_pid(stoul(string(argv[1]), nullptr, 10)).c_str()); printf("%s", "\r\n"); } else { printf("%s", env_from_pid(GetCurrentProcessId()).c_str()); printf("%s", "\r\n"); } return 0; } ``` buildx86.sh ``` g++ getenv.cpp -o getenv.exe -std=c++17 -static-libgcc -static-libstdc++ -static -m32 ``` buildx64.sh ``` g++ getenv.cpp -o getenv.exe -std=c++17 -static-libgcc -static-libstdc++ -static -m64 ``` Quotes are added around the printed value for clarity, and escaping is applied to inner quotes.
Check for environment variable in another process?
[ "c++", "c", "windows-services", "permissions" ]
Since there is the double-checked locking issue, we have to use synchronization to guarantee safe concurrent access to the following method (org.apache.struts.util.MessageResources class):

LAZY INSTANTIATION

```
public synchronized static MessageResources getMessageResources(String config) {
    if (defaultFactory == null) {
        defaultFactory = MessageResourcesFactory.createFactory();
    }
    return defaultFactory.createResources(config);
}
```

Why not use:

EAGER INSTANTIATION

```
static {
    // Construct a new instance of the specified factory class
    try {
        if (clazz == null)
            clazz = RequestUtils.applicationClass(factoryClass);
        MessageResourcesFactory defaultFactory =
            (MessageResourcesFactory) clazz.newInstance();
    } catch (Exception e) {
        LOG.error("MessageResourcesFactory.createFactory", e);
    }
}
```

And then:

```
public static MessageResources getMessageResources(String config) {
    return defaultFactory.createResources(config);
}
```

It would allow concurrent access to the method getMessageResources, which at least in my case may be called quite a few times. The implications of not using synchronized are described here: <http://en.wikipedia.org/wiki/Double-checked_locking>
I think it is a way for Struts to make sure that it works fine in multi-threaded mode, no matter whether the person overriding org.apache.struts.util.MessageResources defines createResources(String configuration) as synchronized or not.
Is `MessageResourcesFactory` thread-safe? The `synchronized` method protects both the setting of the field and the `createResources` method call. If it is thread-safe the locking could be moved to cover just setting the field and leave the method call outside the critical section.
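For reference, a standard way to keep lazy initialization without synchronizing every call is the initialization-on-demand holder idiom. The sketch below uses a hypothetical `Factory` class rather than Struts' actual classes; the JLS guarantees that the nested holder class is initialized exactly once, on first use, and that this initialization is thread-safe:

```java
// Hypothetical factory, for illustration only (not the Struts classes).
class Factory {
    String createResources(String config) {
        return "resources for " + config;
    }
}

class MessageResourcesHolder {
    // The nested class is not initialized until getFactory() first touches
    // it; class initialization is thread-safe per the Java Language Spec.
    private static class Holder {
        static final Factory INSTANCE = new Factory();
    }

    // Lazy AND lock-free on every call after the first.
    static Factory getFactory() {
        return Holder.INSTANCE;
    }
}

public class Main {
    public static void main(String[] args) {
        System.out.println(MessageResourcesHolder.getFactory().createResources("app"));
    }
}
```

This gets the laziness of the synchronized version without its per-call locking cost, though as noted above it only protects the field assignment, not the `createResources` call itself.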
Why lazy instantiation of the MessageResourcesFactory in Struts 1.2.7?
[ "java", "struts", "lazy-loading", "eager-loading" ]
Sorry, I couldn't provide a better title for my problem as I am quite new to SQL. I am looking for a SQL query string that solves the below problem. Let's assume the following table:

```
DOCUMENT_ID | TAG
----------------------------
1           | tag1
1           | tag2
1           | tag3
2           | tag2
3           | tag1
3           | tag2
4           | tag1
5           | tag3
```

Now I want to select all distinct document id's that match a given set of tags (a document must have all of the specified tags to qualify). For example: selecting all document\_id's with tag1 and tag2 would return 1 and 3 (but not 4, for example, as it doesn't have tag2). What would be the best way to do that? Regards, Kai
```
SELECT document_id
FROM table
WHERE tag = 'tag1' OR tag = 'tag2'
GROUP BY document_id
HAVING COUNT(DISTINCT tag) = 2
```

### Edit: Updated for lack of constraints...
This assumes DocumentID and Tag are the Primary Key. *Edit*: Changed HAVING clause to count DISTINCT tags. That way it doesn't matter what the primary key is.

**Test Data**

```
-- Populate Test Data
CREATE TABLE #table
(
    DocumentID varchar(8) NOT NULL,
    Tag varchar(8) NOT NULL
)

INSERT INTO #table VALUES ('1','tag1')
INSERT INTO #table VALUES ('1','tag2')
INSERT INTO #table VALUES ('1','tag3')
INSERT INTO #table VALUES ('2','tag2')
INSERT INTO #table VALUES ('3','tag1')
INSERT INTO #table VALUES ('3','tag2')
INSERT INTO #table VALUES ('4','tag1')
INSERT INTO #table VALUES ('5','tag3')
INSERT INTO #table VALUES ('3','tag2') -- Edit: test duplicate tags
```

**Query**

```
-- Return Results
SELECT DocumentID
FROM #table
WHERE Tag IN ('tag1','tag2')
GROUP BY DocumentID
HAVING COUNT(DISTINCT Tag) = 2
```

**Results**

```
DocumentID
----------
1
3
```
SQL query that gives distinct results that match multiple columns
[ "sql", "database", "h2" ]
Attempting to deserialize JSON data and update each object's prototype so the objects inherit a common function. However, the following script throws the error "people[0].getFullName is not a function". The prototype for the deserialized objects appears to be undefined after assignment.

```
<html>
<head>
<script>
var json = '[ {"firstName": "John", "lastName": "Smith"}, {"firstName": "Nancy", "lastName": "Jones"} ]';
var people;
eval('people = ' + json);

function Person() { }

Person.prototype.getFullName = function() {
    return this.firstName + ' ' + this.lastName;
}

// assign prototype
for (var i = 0; i < people.length; i++) {
    people[i].prototype = new Person();
}

if (people[0].getFullName() !== 'John Smith')
    alert('Expected fullname to be John Smith but was ' + people[0].getFullName());
</script>
</head>
</html>
```
The `prototype` property is a property of *constructors*, not of *instances*. What you are looking for is the property **`__proto__`**:

```
people[i].__proto__ = new Person();
```

The bad news is that **it does not work in all browsers**. It does work in Firefox and Safari; it does not work in IE. An alternative is to use constructors to instantiate your array of people. Unfortunately you'll have to copy all properties:

```
function Person(obj) {
    for (var property in obj) {
        this[property] = obj[property];
    }
    return this;
}

Person.prototype.getFullName = function() {
    return this.firstName + ' ' + this.lastName;
}

var people;
eval('people = ' + json);

for (var i = 0; i < people.length; i++) {
    people[i] = new Person(people[i]);
}
```
An object x that is created by x = *new Person()* is linked to/inherits from Person.prototype, but as far as the ECMA standard is concerned you cannot change x.prototype in order to change that link/inheritance afterwards; that's the "magic power" only the *new* keyword possesses. Mozilla seems to offer a way to change the object link after an object has been created, through the non-standard property [\_\_proto\_\_](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Object/Proto). Mozilla-only:

```
// assign prototype
for (var i = 0; i < people.length; i++) {
    people[i].__proto__ = Person.prototype;
}
```

This should work anywhere:

```
function Person(data) {
    this.data = data;
}

Person.prototype.getFullName = function() {
    return this.data.firstName + ' ' + this.data.lastName;
}

eval('people = ' + json);

// assign prototype
for (var i = 0; i < people.length; i++) {
    people[i] = new Person(people[i]);
}
```
Javascript prototype undefined after eval deserialization
[ "javascript", "json", "prototype", "eval" ]
I've set `MaskedTextBox`'s Mask to: "Fl\air H\al ###.## , something here: ####.##" When the user inputs the values, the final text looks something like this: Flair Hal 987.67 , something here: 1234.12 What would be the best way to extract 987.67 and 1234.12 from the MaskedTextBox's Text? I am looking for a List which will have all the values of the mask (987.67, 1234.12). There can be any number of masks in the mask string and the mask can be any valid mask. I am thinking of first removing '\' from the Mask and then, in a for loop, comparing the Mask with the Text, detecting the changes, and adding them to the List. But this doesn't sound good to me and I think there is probably a better way of doing it.
Well, I found out that there is no good way of doing it. As adrianbanks said, I have to write code myself to get this information. I have written my own user control which uses a combination of labels and masked textboxes to get the input. I use curly braces to indicate where I want the masked textboxes, and the user control puts one masked textbox per pair of curly braces. "Flair Hal {###.##} , something here: {####.##}" Then I can use the values collection, which has the values for the masks.
There are four values of the mask in your example: 987, 67, 1234, 12. The fact that blocks separated by a . are treated as one is your own logic, so I think you will just have to write code to get the information yourself. Have a look at the [MaskedTextProvider](http://msdn.microsoft.com/en-us/library/system.windows.forms.maskedtextbox.maskedtextprovider.aspx) property of the `MaskedTextBox`, and its [EditPositions](http://msdn.microsoft.com/en-us/library/system.componentmodel.maskedtextprovider.editpositions.aspx) property. The `EditPositions` give you the positions within the `Text` that the user could enter.
Getting values from Masked text box
[ "c#", ".net" ]
Is there a way to delete all files & sub-directories of a specified directory without iterating over them? The inelegant solution:

```
public static void EmptyDirectory(string path)
{
    if (Directory.Exists(path))
    {
        // Delete all files
        foreach (var file in Directory.GetFiles(path))
        {
            File.Delete(file);
        }

        // Delete all folders
        foreach (var directory in Directory.GetDirectories(path))
        {
            Directory.Delete(directory, true);
        }
    }
}
```
How about System.IO.Directory.Delete? It has a recursion option; you're even using it. Reviewing your code, it looks like you're trying to do something slightly different: empty the directory without deleting it, right? Well, you could delete it and re-create it :)

---

In any case, you (or some method you use) must iterate over all of the files and subdirectories. However, you can iterate over both files and directories at the same time, using [GetFileSystemInfos](https://msdn.microsoft.com/en-us/library/system.io.directoryinfo.getfilesysteminfos(v=vs.110).aspx):

```
foreach (System.IO.FileSystemInfo fsi in new System.IO.DirectoryInfo(path).GetFileSystemInfos())
{
    if (fsi is System.IO.DirectoryInfo)
        ((System.IO.DirectoryInfo)fsi).Delete(true);
    else
        fsi.Delete();
}
```
Why is that not elegant? It's clean, very readable and does the job.
What is the best way to empty a directory?
[ "c#", ".net", "filesystems" ]
`DataSets` can be 10+ times slower than a `DataReader` at retrieving data from the DB. I assume this is due to the overhead of `DataSets` having to deal with relations etc. But is the speed difference between `DataSets` and `DataReader` due to `DataSets` having to retrieve more data (information about relations, ...) from the DB, due to the application having to do more processing, or both? I assume `DataAdapter` uses a `DataReader` under the hood, and thus the number of commands the application needs to execute in order to retrieve 100 rows with a `DataAdapter` is equal to or greater than the number of commands the app needs to execute if these 100 rows are retrieved directly by a `DataReader`. Does a `DataReader` retrieve one row at a time, or one field (of a particular row) at a time?
There are some different types of overhead that can occur when using a DataSet over a DataReader:

A DataSet contains DataTable objects, which contain DataRow objects, which contain the data. There is a small overhead in creating all the objects. Each DataRow treats all its values as objects, so any value types are boxed, which adds a bit of overhead for each field.

When you use a DataAdapter to populate a DataSet, it's easy to get a lot of data that you won't use. If you don't specify what fields you want, you get all the fields even if you won't use them all. If you don't filter the query, you get all the rows from the table. Even if you filter them later with a DataView on the DataTable, you have still fetched them from the database. With a DataReader you are closer to the query that gets the data, so the connection to what you get in the result is more obvious.

If you fetch data into several DataTable objects in a DataSet and use relations to let the DataSet combine the data, you make the DataSet do work that you could have let the database do, which is more optimised for it.

If you use a DataSet well, the overhead is not that bad; more like 30% than 1000%. You are correct to assume that a DataAdapter uses a DataReader. If you are careful how you use the DataAdapter, the database operations themselves are the same as if you used the DataReader yourself.

A DataReader will fetch a record at a time from the underlying database driver, which in turn will fetch a buffer full of records at a time from the database. If the records are very large, only one at a time might fit in the buffer, but usually there are tens of records in the buffer, or even hundreds if they are really small.
A few pointers on MSDN: * [Benchmarks](http://msdn.microsoft.com/en-us/library/ms978388.aspx) * [DataSet vs. DataReader](http://msdn.microsoft.com/en-us/library/ms998569.aspx#scalenetchapt12_topic13) * [Working with DataReaders, DataSets, DataAdapters, and DataViews](http://msdn.microsoft.com/en-us/library/ms971481.aspx#adonetbest_topic3)
Is DataSet slower than DataReader due to...?
[ "c#", "ado.net", "dataset" ]
I followed [the Maven tutorial](http://maven.apache.org/download.html#Installation) to the letter but I still can't get Maven installed on Windows. When I run the following in command prompt: ``` E:\Documents and Settings\zach>mvn --version ``` I get: ``` 'mvn' is not recognized as an internal or external command, operable program or batch file. ``` I navigated to the maven install folder and ran `mvn --version` and got: ``` E:\java resources\apache-maven-2.2.0\bin>mvn --version ERROR: JAVA_HOME is set to an invalid directory. JAVA_HOME = "E:\Sun\SDK\jdk\bin" Please set the JAVA_HOME variable in your environment to match the location of your Java installation ``` but when I run `java -version` I get: ``` java version "1.6.0_14" Java(TM) SE Runtime Environment (build 1.6.0_14-b08) Java HotSpot(TM) Client VM (build 14.0-b16, mixed mode) ``` So I do have Java installed. Anyone know what the problem is?
The problems are to do with your paths.

1. Make sure that the directory "E:\java resources\apache-maven-2.2.0\bin" is on your command search path.
2. Make sure that the JAVA\_HOME variable refers to the *home directory* for your Java installation. If you are executing Java from "E:\Sun\SDK\jdk\bin", then the JAVA\_HOME variable needs to point to "E:\Sun\SDK\jdk". NB: JAVA\_HOME should NOT end with "\bin"[1].
3. Make sure that you haven't put a semicolon in the JAVA\_HOME variable[2]. NB: JAVA\_HOME should be a single directory name, not a "PATH-like" list of directory names separated by semicolons.

Also note that you could run into problems if you have ignored this advice in the [Maven on Windows](http://maven.apache.org/guides/getting-started/windows-prerequisites.html) instructions about spaces in key pathnames.

> "Maven, like many cross-platform tools, can encounter problems *when there are space characters in important pathnames*."
>
> "You need to install the Java SDK (e.g. from Oracle's download site), and you should install it *to a pathname without spaces*, such as c:\j2se1.6."
>
> "You need to unpack the Maven distribution. Don't unpack it in the middle of your source code; pick some location (*with no spaces in the path!*) and unpack it there."

The simple remedy for this would be to reinstall Java or Maven in a different location so that there *isn't* a space in the path.

---

[1] - .... unless you have made an *insane* choice for the name for your installation location.

[2] - Apparently a common "voodoo" solution to Windows path problems is to whack a semicolon on the end. It is not recommended in general, and absolutely does not work here.
> ERROR: JAVA\_HOME is set to an invalid directory. JAVA\_HOME = "E:\Sun\SDK\jdk\bin" Please set the JAVA\_HOME variable in your environment to match the location of your Java installation `JAVA_HOME` should be set to `E:\Sun\SDK\jdk`. `PATH` should be set to include `%JAVA_HOME%\bin`.
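Concretely, one way to apply the fix from a cmd prompt is shown below (the path follows the question's install location; note that `setx` only affects *new* console windows, and editing the variables through the System Properties dialog works just as well):

```
:: JAVA_HOME must point at the JDK root, not its bin subdirectory
setx JAVA_HOME "E:\Sun\SDK\jdk"
:: then open a NEW command prompt and re-run: mvn --version
```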
Unable to install Maven on Windows: "JAVA_HOME is set to an invalid directory"
[ "java", "maven-2" ]
I'd like to catch my exceptions and log them in the Windows log file. How do I go about opening and writing to the Windows log?
You can use the [System.Diagnostics.EventLog.WriteEntry](http://msdn.microsoft.com/en-us/library/system.diagnostics.eventlog.writeentry.aspx) function to write entries to the event log.

```
System.Diagnostics.EventLog.WriteEntry("MyEventSource",
    exception.StackTrace,
    System.Diagnostics.EventLogEntryType.Warning);
```

To read event logs you can use the [System.Diagnostics.EventLog.GetEventLogs](http://msdn.microsoft.com/en-us/library/74e2ybbs.aspx) function.

```
// here's how you get the event logs
var eventLogs = System.Diagnostics.EventLog.GetEventLogs();

foreach (var eventLog in eventLogs)
{
    // here's how you get the event log entries
    foreach (var logEntry in eventLog.Entries)
    {
        // do something with the entry
    }
}
```
You can also consider using the [Enterprise Library](https://github.com/MicrosoftArchive/enterprise-library). It looks complicated to start with, but an hour or two of playing will pay off. Config is stored in app.config so you can change it without recompiling; this can be really handy when you've got the same code sitting on test and live servers with different config. You can do quite a lot without loads of code. One nice thing is that you can define Exception policies so that exceptions are automatically logged. Here's some code you might use (I'm using EntLib 4.1):

```
try
{
    // This would be where your exception might be thrown. I'm doing it on
    // purpose so you can see it work
    throw new ArgumentNullException("param1");
}
catch (Exception ex)
{
    if (ExceptionPolicy.HandleException(ex, "ExPol1"))
        throw;
}
```

The line in the catch block will rethrow the exception IF ExPol1 defines it. If ExPol1 is configured to rethrow, then ExceptionPolicy.HandleException will return true. If not, it returns false. You define the rest in config. The XML looks pretty horrible (doesn't it always), but you create this using the Enterprise Library Configuration editor. I'm just supplying it for completeness. In the loggingConfiguration section, this file defines:

* the log: a rolling text log file (you can use the built-in Windows event logs, SQL tables, email, MSMQ and others), with
* a text formatter that governs how the parameters are written to the log (sometimes I configure this to write everything to one line, other times spread across many),
* a single category "General",
* a Special Source which traps any errors in the config/EntLib and reports them as well. I strongly advise you to do this.

In the exceptionHandling section, it defines:

* a single policy: "ExPol1", which handles type ArgumentNullException and specifies the postHandlingAction of None (i.e. don't rethrow).
* a handler which logs to the General category (defined above) I don't do it in this example, but you can also replace an exception with a different type using a policy. ``` <?xml version="1.0" encoding="utf-8"?> <configuration> <configSections> <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <section name="exceptionHandling" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Configuration.ExceptionHandlingSettings, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </configSections> <loggingConfiguration name="Logging Application Block" tracingEnabled="true" defaultCategory="General" logWarningsWhenNoCategoriesMatch="true"> <listeners> <add fileName="rolling.log" footer="" formatter="Text Formatter" header="" rollFileExistsBehavior="Overwrite" rollInterval="None" rollSizeKB="500" timeStampPattern="yyyy-MM-dd" listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.RollingFlatFileTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" traceOutputOptions="None" filter="All" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.RollingFlatFileTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Rolling Flat File Trace Listener" /> </listeners> <formatters> <add template="Timestamp: {timestamp}; Message: {message}; Category: {category}; Priority: {priority}; EventId: {eventid}; Severity: {severity}; Title:{title}; Machine: {machine}; Application Domain: {appDomain}; Process Id: {processId}; Process Name: {processName}; Win32 Thread Id: {win32ThreadId}; Thread Name: {threadName}; &#xD;&#xA; Extended 
Properties: {dictionary({key} - {value})}" type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Text Formatter" /> </formatters> <categorySources> <add switchValue="All" name="General"> <listeners> <add name="Rolling Flat File Trace Listener" /> </listeners> </add> </categorySources> <specialSources> <allEvents switchValue="All" name="All Events" /> <notProcessed switchValue="All" name="Unprocessed Category" /> <errors switchValue="All" name="Logging Errors &amp; Warnings"> <listeners> <add name="Rolling Flat File Trace Listener" /> </listeners> </errors> </specialSources> </loggingConfiguration> <exceptionHandling> <exceptionPolicies> <add name="ExPol1"> <exceptionTypes> <add type="System.ArgumentNullException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="None" name="ArgumentNullException"> <exceptionHandlers> <add logCategory="General" eventId="100" severity="Error" title="Enterprise Library Exception Handling" formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" priority="0" useDefaultLogger="false" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="Logging Handler" /> </exceptionHandlers> </add> </exceptionTypes> </add> </exceptionPolicies> </exceptionHandling> </configuration> ```
Writing Exceptions to the Windows Log File
[ "", "c#", "windows", "logging", "event-log", "" ]
I was thinking about this when I ran into a problem using std::ofstream. My thinking was that since std::ifstream is a stream, it wouldn't support random access. Rather, it would just start at the beginning and stream through until you get to the part you want. Is this just quick so we don't notice? And I'm pretty sure FILE\* supports random access, so this would be fast as well?
ifstream supports random access with seekg. FILE\* might be faster but you should measure it.
Since both of them imply system calls and that is going to be some orders of magnitude more time consuming than the rest of the operation, the performance of both should be very similar.
Will reading a file be faster with a FILE* or an std::ifstream?
[ "", "c++", "performance", "" ]
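As the chosen answer above notes, buffered file streams generally do support random access without scanning from the beginning. The record is about C++ (`seekg` on an `ifstream`), but the same idea can be illustrated with Python's file objects, used here purely as an analogous sketch:

```python
import os
import tempfile

# Create a small scratch file (the contents are arbitrary).
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"0123456789")

with open(path, "rb") as f:
    f.seek(5)          # jump straight to byte offset 5; no scan from the start
    chunk = f.read(3)  # read three bytes from that position
    pos = f.tell()     # file position after the read

os.remove(path)
print(chunk, pos)  # b'567' 8
```

As with `seekg`, the seek repositions the stream directly; any difference against `FILE*`-style I/O would come down to buffering details, which is why the answer suggests measuring.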
In Program.cs I have the below method that is checking and the Syncing 5 SQL db's with the central server. Each one is separate from the other so I thought to speed up my program's load time by having them all run at the same time. Unfortunately it is very flaky working one time then not the next. The local DB is SQLExpress 2005 and the central DB is SQL Server Standard 2005. Is there a limit on how many connections either of those can have? How about Background Workers, can I only have so many running at once? I am sure there is a MUCH more eloquent way of doing this, I'd love to hear(see) them. This is how I call this in Main() in Program.cs --> `if(IsSqlAvailable()) SyncNow();` --- ``` internal static void SyncNow() { #region ConnectDB Merge Sync Background Thread BackgroundWorker connectBW = new BackgroundWorker { WorkerReportsProgress = false, WorkerSupportsCancellation = true }; connectBW.DoWork += new DoWorkEventHandler(connectBW_DoWork); if (connectBW.IsBusy != true) connectBW.RunWorkerAsync(); #endregion #region aspnetDB Merge Sync Background Thread BackgroundWorker aspBW = new BackgroundWorker { WorkerReportsProgress = false, WorkerSupportsCancellation = true }; aspBW.DoWork += new DoWorkEventHandler(aspBW_DoWork); if (aspBW.IsBusy != true) aspBW.RunWorkerAsync(); #endregion #region MatrixDB Merge Sync Background Thread BackgroundWorker matrixBW = new BackgroundWorker { WorkerReportsProgress = false, WorkerSupportsCancellation = true }; matrixBW.DoWork += new DoWorkEventHandler(matrixBW_DoWork); if (matrixBW.IsBusy != true) matrixBW.RunWorkerAsync(); #endregion #region CMODB Merge Sync Background Thread BackgroundWorker cmoBW = new BackgroundWorker { WorkerReportsProgress = false, WorkerSupportsCancellation = true }; cmoBW.DoWork += new DoWorkEventHandler(cmoBW_DoWork); if (cmoBW.IsBusy != true) cmoBW.RunWorkerAsync(); #endregion #region MemberCenteredPlanDB Merge Sync Background Thread BackgroundWorker mcpBW = new BackgroundWorker { WorkerReportsProgress 
= false, WorkerSupportsCancellation = true }; mcpBW.DoWork += new DoWorkEventHandler(mcpBW_DoWork); if (mcpBW.IsBusy != true) mcpBW.RunWorkerAsync(); #endregion } static void mcpBW_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker worker = sender as BackgroundWorker; try { MergeRepl mcpMergeRepl = new MergeRepl(SystemInformation.ComputerName + "\\SQLEXPRESS", "WWCSTAGE", "MemberCenteredPlan", "MemberCenteredPlan", "MemberCenteredPlan"); mcpMergeRepl.RunDataSync(); areAllInSync += 1; } catch (Exception) { if (worker != null) worker.CancelAsync(); } } static void cmoBW_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker worker = sender as BackgroundWorker; try { MergeRepl cmoMergeRepl = new MergeRepl(SystemInformation.ComputerName + "\\SQLEXPRESS", "WWCSTAGE", "CMO", "CMO", "CMO"); cmoMergeRepl.RunDataSync(); areAllInSync += 1; } catch (Exception) { if (worker != null) worker.CancelAsync(); } } static void connectBW_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker worker = sender as BackgroundWorker; try { MergeRepl connectMergeRepl = new MergeRepl(SystemInformation.ComputerName + "\\SQLEXPRESS", "WWCSTAGE", "CONNECT", "Connect", "Connect"); connectMergeRepl.RunDataSync(); areAllInSync += 1; } catch (Exception) { if (worker != null) worker.CancelAsync(); } } static void matrixBW_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker worker = sender as BackgroundWorker; try { MergeRepl matrixMergeRepl = new MergeRepl(SystemInformation.ComputerName + "\\SQLEXPRESS", "WWCSTAGE", "MATRIX", "MATRIX", "MATRIX"); matrixMergeRepl.RunDataSync(); areAllInSync += 1; } catch (Exception) { if (worker != null) worker.CancelAsync(); } } static void aspBW_DoWork(object sender, DoWorkEventArgs e) { BackgroundWorker worker = sender as BackgroundWorker; try { MergeRepl aspnetdbMergeRepl = new MergeRepl(SystemInformation.ComputerName + "\\SQLEXPRESS", "WWCSTAGE", "aspnetdb", "aspnetdb", "aspnetdb"); aspnetdbMergeRepl.RunDataSync(); areAllInSync += 1; } 
catch (Exception) { if (worker != null) worker.CancelAsync(); } } ```
You need to be more specific about what the 'flakiness' is and how it manifests. If I understand this correctly, you are trying to drive a merge replication manually with RMO classes, in the absence of Agent support. One thing to note is that SQL Express supports only one scheduler so adding multiple workers (pending requests) to it won't make much of a difference, they'll just pile up in the runnable queue and fight for the one CPU to run them. Second, I'm not sure the RMO replication classes (which I assume you use) support doing sync in multiple, parallel instances, so there's probably no point in doing more than just one BackgroundWorker per db (I may be wrong on this one, I'm not an RMO expert by any stretch).
I only use one. I figure the BackgroundWorker is there to allow me to execute a long-running task and keep the UI responsive. If I want multiple threads I use the ThreadPool.
Is there a limit on Background Workers? Technical or Common Sense
[ "", "c#", "sql-server", "multithreading", "replication", "backgroundworker", "" ]
Can I rely on the fact that the underlying field for a property named Foo is called "k\_\_BackingField"?
No, in short. And for this very reason, auto-properties are a pain when used with field-based serializers like `BinaryFormatter` (I'm sure I might have mentioned that here before ;-p). See: [Obfuscation, serialization and automatically implemented properties](http://marcgravell.blogspot.com/2009/03/obfuscation-serialization-and.html) for some thoughts on this (and a solution).
The spec does not specify the backing field's name (with good reason - you shouldn't take a dependency on it). You should investigate to see whether the Mono compiler does the same thing, for example. Sanity checking that the field has the `CompilerGeneratedAttribute` applied to it would be a good idea. Note that explicit interface implementations would come out (currently) as: ``` <Full.Namespace.To.Interface.IBlah.Foo>k__BackingField ``` So code which is trying to serialize a specific aspect of a class in this fashion could be misled. If you wanted to be really belt and braces about your code you could use a library like Mono's Cecil to inspect the get function of the property and determine the field used.
Is the implementation of Auto Properties in the spec?
[ "", "c#", ".net", "automatic-properties", "specifications", "" ]
I want a list full of the same thing, where the thing will either be a string or a number. Is there a difference in the way these two list are created? Is there anything hidden that I should probably know about? ``` list_1 = [0] * 10 list_2 = [0 for i in range(10)] ``` Are there any better ways to do this same task? Thanks in advance.
It depends on whether your list elements are mutable, if they are, there'll be a difference: ``` >>> l = [[]] * 10 >>> l [[], [], [], [], [], [], [], [], [], []] >>> l[0].append(1) >>> l [[1], [1], [1], [1], [1], [1], [1], [1], [1], [1]] >>> l = [[] for i in range(10)] >>> l[0].append(1) >>> l [[1], [], [], [], [], [], [], [], [], []] ``` For immutable elements, the behavior of the two is the same. There might be a performance difference between them, but I'm not sure which one would perform faster.
I personally would advise using the first method, since it is most likely the best performing one: the system knows in advance the size of the list and its contents. In the second form, it must first evaluate the generator and collect all the values, most likely by building up the list incrementally, which is costly because of resizing. The first method is probably the best way overall.
Python: List initialization differences
[ "", "python", "list", "" ]
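The aliasing difference described in the chosen answer above is easy to verify directly; this is a quick sketch with arbitrary values:

```python
# [x] * n repeats the same reference n times, so mutable elements are shared.
shared = [[]] * 3
shared[0].append(1)
print(shared)  # [[1], [1], [1]]

# A comprehension evaluates [] once per iteration, giving independent lists.
independent = [[] for _ in range(3)]
independent[0].append(1)
print(independent)  # [[1], [], []]

# With immutable elements such as 0, both spellings produce equal lists.
assert [0] * 10 == [0 for _ in range(10)]
```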
I have two types of strings as the IDs of elements in my HTML markup: Dates: ``` "april-23" "march-20" ``` and season names: ``` "springtime" "winter" ``` The dates have a dash separating the month and the day. The seasons are a single word with no other tokens. I want to assign the month or the season to a new variable called: ``` time_of_year ``` If I do this: ``` var time_of_year = $(this).attr("id").split('-')[0]; ``` It will work on the months but if I call it on a season name which does not contain the token, will it generate an error? What's the safe way to do this?
It doesn't return an error but it does return an array with a length of one. You could do something like this: ``` var splitty = $(this).attr('id').split('-'); if (splitty.length > 1) { time_of_year = splitty[0]; } else { // do something else magical here } ``` Here are the docs on [split](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/String/split). But, if you always want the first value and didn't care about the others, you could just use your original code w/o a problem: ``` var time_of_year = $(this).attr('id').split('-')[0] ```
You could check to see if the hyphen character exists using `mystring.indexOf('-')` (which will return -1 if it doesn't exist). But I'm pretty sure that if you split a string on a character it doesn't contain, it will just return an array containing the original string.
How to call split(token) on a string that does not contain the token without causing an error?
[ "", "javascript", "jquery", "stringtokenizer", "string-parsing", "" ]
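The same safety property discussed in the answers above holds in Python's `str.split`, which returns a one-element list when the separator is absent; shown here as an analogous sketch rather than the original jQuery code:

```python
def time_of_year(element_id):
    # split() on a missing separator returns [element_id],
    # so taking index 0 never raises.
    return element_id.split("-")[0]

print(time_of_year("april-23"))    # april
print(time_of_year("springtime"))  # springtime
```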
I'm trying to run this query but I get "ORA-00904: "Z1"."LONGITUDE": invalid identifier" Is there a way to rewrite this so I have access to that column in the exists subquery? Or is there generally a better way to achieve what I'm trying to do? Thanks ``` select zip, count(UNIQUE address_id) LOCATIONS from records inner join addresses a using(address_id) inner join zip_coords z1 using(zip) where exists ( select 1 from ( select distance(z1.latitude, z1.longitude, z2.latitude, z2.longitude) d from zip_coords z2 where z2.zip in ( select zip from available_zips ) ) where d <= 50 ) GROUP BY ZIP ```
Your problem is that you can't descend that many levels into your subquery. I might have missed something from skimming over your query but could: ``` select 1 from ( select distance(z1.latitude, z1.longitude, z2.latitude, z2.longitude) d from zip_coords z2 where z2.zip in ( select zip from available_zips ) ) where d <= 50 ``` not be rewritten as: ``` SELECT 1 FROM zip_coords z2 WHERE z2.zip IN ( SELECT zip FROM available_zips ) AND distance(z1.latitude, z1.longitude, z2.latitude, z2.longitude) <= 50 ```
``` select zip, count(UNIQUE address_id) LOCATIONS from records inner join addresses a using(address_id) inner join zip_coords z1 using(zip) where ( select min(distance(z1.latitude, z1.longitude, z2.latitude, z2.longitude)) d from zip_coords z2 inner join available_zips using(zip) ) <= 50 GROUP BY ZIP ``` I have to warn you, I do not know how this will affect performance of the query.
Oracle SQL Invalid Identifier
[ "", "sql", "oracle", "ora-00904", "" ]
I'm having trouble getting a decent query time out of a large MySQL table, currently it's taking over 20 seconds. The problem lies in the GROUP BY as MySQL needs to run a filesort but I don't see how I can get around this **QUERY**: ``` SELECT play_date, COUNT(DISTINCT(email)) AS count FROM log WHERE type = 'play' AND play_date BETWEEN '2009-02-23' AND '2009-02-24' GROUP BY play_date ORDER BY play_date desc ``` **EXPLAIN**: ``` id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE log ALL type,type_2 NULL NULL NULL 530892 Using where; Using filesort ``` **TABLE STRUCTURE** ``` CREATE TABLE IF NOT EXISTS `log` ( `id` int(11) NOT NULL auto_increment, `email` varchar(255) NOT NULL, `type` enum('played','reg','friend') NOT NULL, `timestamp` timestamp NOT NULL default CURRENT_TIMESTAMP, `play_date` date NOT NULL, `email_refer` varchar(255) NOT NULL, `remote_addr` varchar(15) NOT NULL, PRIMARY KEY (`id`), KEY `email` (`email`), KEY `type` (`type`), KEY `email_refer` (`email_refer`), KEY `type_2` (`type`,`timestamp`,`play_date`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=707859 ; ``` If anyone knows how I could improve the speed I would be very grateful Tom ***EDIT*** I've added the new index with just play\_date and type but MySQL refuses to use it ``` id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE log ALL play_date NULL NULL NULL 801647 Using where; Using filesort ``` This index was created using ALTER TABLE `log` ADD INDEX (`type`, `play_date`);
You need to create index on fields `type` AND `play_date`. Like this: ``` ALTER TABLE `log` ADD INDEX (`type`, `play_date`); ``` Or, alternately, you can rearrange your last key like this: ``` KEY `type_2` (`type`,`play_date`,`timestamp`) ``` so MySQL can use its left part as a key.
You should add an index on the fields that you base your search on. In your case those are play\_date and type.
MySQL 1 millon row query speed
[ "", "sql", "mysql", "performance", "" ]
I've got a string: ``` $string = "Hello World!"; ``` I want to turn it into a URL friendly tag, and I've developed a function to do it: ``` function stripJunk($string){ $string = str_replace(" ", "-", $string); $string = preg_replace("/[^a-zA-Z]\s/", "", $string); $string = strtolower($string); return $string; } ``` However, when I run my `$string` through it above, I get the following: ``` $string = "hello-world!"; ``` It seems that there are characters slipping through my preg\_replace, even though from what I understand, they shouldn't be. It should read like this: ``` $string = "hello-world"; ``` What's going on here? (This should be easy peasy lemon squeasy!) **Edit 1**: I wasn't aware that regular expressions were beginners stuff, but whatever. Additionally, removing the \s in my string does not produce the desired result. The desired result is: 1. All spaces are converted to dashes. 2. All remaining characters that are not A-Z or 0-9 are removed. 3. The string is then converted to lower case. **Edit 2+**: Cleaned up my code just a little.
The following works just fine for me: ``` function stripJunk($string){ $string = str_replace(" ", "-", trim($string)); $string = preg_replace("/[^a-zA-Z0-9-]/", "", $string); $string = strtolower($string); return $string; } ```
The \s at the end of your pattern means that you will only replace non-alphabetical characters which are immediately followed by a whitespace character. You probably want the \s within the square brackets so that whitespace is also preserved and can later be replaced with a dash. You will need to add 0-9 inside the square brackets if you want to also allow numbers. For example: ``` <?php $string = "Hello World!"; function stripJunk($string){ $string = preg_replace("/[^a-zA-Z0-9\s]/", "", $string); $string = str_replace(" ", "-", $string); $string = strtolower($string); return $string; } echo stripJunk($string); ```
preg_replace - leaving in unwanted characters
[ "", "php", "preg-replace", "" ]
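The corrected order of operations from the chosen answer above (replace spaces first, then strip everything else, then lower-case) ports directly to Python with `re.sub`; a sketch:

```python
import re

def strip_junk(s):
    s = s.strip().replace(" ", "-")      # spaces become dashes first
    s = re.sub(r"[^a-zA-Z0-9-]", "", s)  # drop any character that remains junk
    return s.lower()

print(strip_junk("Hello World!"))  # hello-world
```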
In my project I get back the name of a person through a PHP response, and I store that name in a variable. The name could be like James Smith or Sakhu Ali Khan or anything else. I want to replace the spaces between the names with "." Suppose I get James Smith and I save it in `$userName`. Now I want to parse `$userName` and then replace the spaces with "." so my ``` $parsedUserName == James.Smith ``` Can anyone tell me how to do this in PHP? I am not very familiar with text parsing. Best Zeeshan
You can use the [`str_replace()`](http://php.net/str_replace) function to do this: ``` $parsedUserName = str_replace(' ', '.', $userName); ``` Note that `str_replace()` operates on bytes, but this is safe for UTF-8 here: single-byte ASCII characters such as the space never occur inside a multibyte sequence, so no `mb_*` variant is needed for this replacement.
`$parsedUserName = str_replace(" ", ".", $userName);` Make sure to sanitize the data you get first, however. Use things like `trim()` and `filter_var()` to make sure the data is what you expect it to be. So do something like this: ``` $userName = trim($userName); $userName = filter_var($userName, FILTER_SANITIZE_STRING); $parsedUserName = str_replace(" ", ".", $userName); ```
text parsing in php
[ "", "php", "" ]
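For comparison with the PHP one-liner above, the equivalent replacement in Python (a trivial sketch of the same idea):

```python
user_name = "James Smith"
parsed_user_name = user_name.replace(" ", ".")  # every space becomes a dot
print(parsed_user_name)  # James.Smith
```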
The below query results in zero rows updated, but I am sure that there is a record to update: ``` DoCmd.RunSQL (" Update tbltesting set IsDiff ='Yes' " & _ "where empid= " & Me.txtEmpId.Value & _ " and testid= " & Me.txtAutoNumber.Value & ";") ``` Please help!!
Run this as a check to make sure your fields have the data that you think they have: ``` DoCmd.RunSQL (" SELECT * FROM tbltesting " & _ "WHERE empid= " & Me.txtEmpId.Value & _ " and testid= " & Me.txtAutoNumber.Value & ";") ``` Incidentally, you can leave off the .Value portion.
Maybe you need single quotes around the WHERE parameters: ``` DoCmd.RunSQL (" Update tbltesting set IsDiff ='Yes' where empid= '" & Me.txtEmpId.Value & "' and testid= '" & Me.txtAutoNumber.Value & "';") ```
VBA update query
[ "", "sql", "ms-access", "vba", "" ]
In JavaScript/JQuery I want to get all the text that is seen between some other text. For example, if the HTML document had: ``` <b class="blah">Blah: Some Text 1</b> <div id="foo"><b class="blah">Blah: Some Text 2</b> ``` I'd like to get an array that has 'Some Text 1' and 'Some Text 2' since they are both in between '`<b class="blah">Blah:`' followed by a '`</b>`'
Since you mention jQuery, just select all the right nodes and check their text. You can put a regex in here if you want, but it's not needed. ``` <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html lang="en"> <head> <title>test</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <script type="text/javascript" src="http://jqueryjs.googlecode.com/files/jquery-1.3.2.min.js"></script> <script type="text/javascript"> $( function(){ var texts = []; $('b.blah').each( function() { var txt = $(this).text(); if ( 0 == txt.indexOf( 'Blah:' ) ) { texts.push( txt.substr( 6 ) ); } } ); alert( texts ); }); </script> </head> <body> <b class="blah">Blah: Some Text 1</b> <div id="foo"><b class="blah">Blah: Some Text 2</b> <div id="foo"><b class="blah">Some Text 3</b> </body> </html> ``` Or with a string of HTML ``` $( function(){ var htmlChunk = '<b class="blah">Blah: Some Text 1</b>\n' + '<div id="foo"><b class="blah">Blah: Some Text 2</b></div>\n' + '<div id="foo2"><b class="blah">Some Text 3</b></div>'; var texts = []; $('b.blah', '<div>' + htmlChunk + '</div>').each( function() { var txt = $(this).text(); if ( 0 == txt.indexOf( 'Blah:' ) ) { texts.push( txt.substr( 6 ) ); } } ); alert( texts ); }); ```
This is kind of hard in JS, because there's no handy way to retrieve a global set of paren captures. A hack like this might work: ``` var chunked = text.replace(/.*?<b class="blah">(.*?)<\/b>/g, '$1|ARBITRARY_SEPARATOR|'); var chunks = chunked.split('|ARBITRARY_SEPARATOR|'); chunks.pop(); ```
Javascript RegEx: Get all text matches surrounded by some other text?
[ "", "javascript", "regex", "" ]
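Python's `re.findall` returns every paren capture at once, which is exactly the capability the rejected answer above works around; a sketch against the sample markup from the question:

```python
import re

html = ('<b class="blah">Blah: Some Text 1</b>\n'
        '<div id="foo"><b class="blah">Blah: Some Text 2</b>')

# findall with one capture group returns a list of the captured substrings.
texts = re.findall(r'<b class="blah">Blah:\s*(.*?)</b>', html)
print(texts)  # ['Some Text 1', 'Some Text 2']
```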
I have another problem with ListView :( Now I need to move items in a group (up, down, to the beginning, to the end), but ListView is always displaying moved items at the end. Here is sample code for moving an item to the beginning: ``` if (1 == listView1.SelectedItems.Count) { ListViewItem item = listView1.SelectedItems[0]; ListViewGroup gp = item.Group; int index; index = item.Index; if (index < listView1.Items.Count) { index = 0; listView1.Items.Remove(item); item.Group = gp; listView1.Items.Insert(index, item); } } ``` I tried Google to find some solution, and I found someone else (<http://social.msdn.microsoft.com/Forums/en-US/winforms/thread/838f90cd-33d8-4c81-9ed9-85220b511afe>) who had the same problem as me, but his solution is not working :( I considered using ObjectListView but I have a modified ListView which now supports drag & drop with WinAmp effect, onScroll events, scrolling synchronization etc.. and I don't want to lose this stuff :(
Try this: ``` /// <summary> /// Move the given item to the given index in the given group /// </summary> /// <remarks>The item and group must belong to the same ListView</remarks> public void MoveToGroup(ListViewItem lvi, ListViewGroup group, int indexInGroup) { group.ListView.BeginUpdate(); ListViewItem[] items = new ListViewItem[group.Items.Count + 1]; group.Items.CopyTo(items, 0); Array.Copy(items, indexInGroup, items, indexInGroup + 1, group.Items.Count - indexInGroup); items[indexInGroup] = lvi; for (int i = 0; i < items.Length; i++) items[i].Group = null; for (int i = 0; i < items.Length; i++) group.Items.Add(items[i]); group.ListView.EndUpdate(); } ```
My automatic response to this question would be to check and see if it works if you say `listView1.Items.Count - 1` (because the list is zero-indexed, so the count is 1 greater than the last index).
ListView moving items
[ "", "c#", "winforms", "listview", "" ]
> **Possible Duplicate:** > [Any way to access array directly after method call?](https://stackoverflow.com/questions/1182452/any-way-to-access-array-directly-after-method-call) In C# and other languages, I can do something like this ``` $value = $obj->getArray()[0]; ``` But not in PHP. Any workarounds or am I doomed to do this all the time? ``` $array = $obj->getArray(); $value = $array[0]; ```
No, you can't do it without using array\_shift (which only gets the first element). If you want to access the third or fourth element, most likely you'd want a function like this: ``` function element($array, $element) { return $array[$element]; } $value = element($obj->getArray(), 4); ``` Also, see this question, as it is an exact duplicate: [Any way to access array directly after method call?](https://stackoverflow.com/questions/1182452/any-way-to-access-array-directly-after-method-call/1182461#1182461)
If this is a one-off or occasional thing where the situation in your example holds true, and you're retrieving the first element of the return array, you can use: ``` $value = array_shift($obj->getArray()); ``` If this is a pervasive need and you often need to retrieve elements other than the first (or last, for which you can use `array_pop()`), then I'd arrange to have a utility function available like so: ``` function elementOf($array, $index = 0) { return $array[$index]; } $value = elementOf($obj->getArray()); $otherValue = elementOf($obj->getArray(), 2); ```
PHP Is there anyway I can simulate an array operator on the return value of a function
[ "", "php", "arrays", "" ]
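(As an aside, PHP 5.4 later added "function array dereferencing", so `$obj->getArray()[0]` is valid in modern PHP.) The same expression has always worked in Python; a trivial sketch with a made-up getter standing in for `$obj->getArray()`:

```python
def get_array():
    # Hypothetical stand-in for $obj->getArray()
    return [10, 20, 30]

value = get_array()[0]  # index the return value directly, no temporary needed
print(value)  # 10
```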
I'm using pygtk with PIL. I've already figured out a way to convert PIL `Image`s to `gtk.gdk.Pixbuf`s. What I do to display the pixbuf is I create a `gtk.gdk.Image`, and then use `img.set_from_pixbuf`. I now want to draw a few lines onto this image. Apparently I need a `Drawable` to do that. I've started looking through the docs but already I have 8-10 windows open and it is not being straightforward at all. So - what magic do I need to type to get a Drawable representing my picture, draw some things on to it, and then turn it into a `gdk.Image` so I can display it on my app?
I was doing something similar (drawing to a gdk.Drawable), and I found that set\_foreground doesn't work. To actually draw using the color I wanted, I used the following: ``` # Red! gc.set_rgb_fg_color(gtk.gdk.Color(0xff, 0x0, 0x0)) ```
Oh my god. So painful. Here it is: ``` w,h = pixbuf.get_width(), pixbuf.get_height() drawable = gtk.gdk.Pixmap(None, w, h, 24) gc = drawable.new_gc() drawable.draw_pixbuf(gc, pixbuf, 0, 0, 0, 0, -1, -1) #---ACTUAL DRAWING CODE--- gc.set_foreground(gtk.gdk.Color(65535, 0, 0)) drawable.draw_line(gc, 0, 0, w, h) #------------------------- cmap = gtk.gdk.Colormap(gtk.gdk.visual_get_best(), False) pixbuf.get_from_drawable(drawable, cmap, 0, 0, 0, 0, w, h) ``` It actually draws a black line ATM, not a red one, so I have some work left to do...
pygtk: Draw lines onto a gtk.gdk.Pixbuf
[ "", "python", "gtk", "drawing", "pygtk", "" ]
I'm generating a table of content in an ASP.NET page and I want to add client side `OnClick` actions to each cell (the action should be to set the value of a named TextBox control to the text in the clicked cell). If I could directly add stuff to the HTML output, I think I could do this with a single line of JavaScript in each cell. Given that I'me **not** now using JQuery (and never have before), what is the simplest solution to adding this (including the time spent learning stuff)? --- Edit: [this](https://stackoverflow.com/questions/821341/adding-onclick-attribute-to-asp-net-dropdownlist-item) seems to work
If you are creating your table programmatically, using HtmlControls, you can set the onclick client attribute of the HtmlTableCell: ``` HtmlTableCell cell = new HtmlTableCell(); cell.Attributes.Add("onclick", "someFunction();"); ``` However, if you're using jQuery you can do something like this: ``` $('td').click(function(){ // Select all the table cells on the document. $(this).html($('#textboxId').val()); // When clicked, change the innerHTML }); // with a textbox value. ```
You should give each cell a CSS class... `class="contentsCell"`. You can then add the following script to your head tag (along with a jQuery include): ``` $(document).ready(function(){ $(".contentsCell").click(function(){ CallYourFunction(); }); }); ``` Where `CallYourFunction()` is your own function... This will basically attach an `onclick="CallYourFunction();"` to each td cell (or any other element) with the class "contentsCell". Also, if you want to grab the text inside the cell, something like: ``` $(document).ready(function(){ $(".contentsCell").click(function(){ var mytext = $(this).text(); CallYourFunction(mytext); }); }); ``` Check out visualjquery.com for a great reference for jQuery.
How to add client side OnClick actions to text in ASP.NET
[ "", "asp.net", "javascript", "onclick", "" ]
I have been trying to use a simple jQuery operation to dynamically match and store all anchor tags and their texts on the page. But I have found a weird behavior. When you are using match() or exec(), if you designate the needle as a separate RegExp object or a pattern variable, then your query matches only one instance among dozens in the haystack. And if you designate the pattern like this ``` match(/needle/gi) ``` then it matches every instance of the needle. Here is my code. You can even fire up Firebug and try this code right here on this page. ``` var a = {'text':'','parent':[]}; $("a").each(function(i,n) { var module = $.trim($(n).text()); a.text += module.toLowerCase() + ',' + i + ','; a.parent.push($(n).parent().parent()); }); var stringLowerCase = 'b'; var regex = new RegExp(stringLowerCase, "gi"); //console.log(a.text); console.log("regex 1: ", regex.exec(a.text)); var regex2 = "/" + stringLowerCase + "/"; console.log("regex 2: ", a.text.match(regex2)); console.log("regex 3: ", a.text.match(/b/gi)); ``` For me it is returning: ``` regex 1: ["b"] regex 2: null regex 3: ["b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b"] ``` Can anyone explain the root of this behavior? **EDIT**: I forgot to mention that for regex1, it doesn't make any difference whether you add the flags "gi" for global and case insensitive matching. It still returns only one match. **EDIT2**: Solved my own problem. I still don't know why regex1 matches only one instance, but I managed to match all instances using match() and regex1. So... this matches all and dynamically! ``` var regex = new RegExp(stringLowerCase, "gi"); console.log("regex 2: ", a.text.match(regex)); ```
This is not unusual behaviour at all. In regex 1 you are only checking for one instance of it, whereas in regex 3 you have told it to return all instances of the item by using the /gi argument.

In regex 2 you are assuming that "/b/" === /b/, when it doesn't hold: "/b/" !== /b/. "/b/" is a plain string, so a match only happens if your string literally contains "/b/", while /b/ is a regex literal whose pattern is what sits between the slashes - so you could have "abc" and it will return "b".

I hope that helps.

**EDIT:** Looking into it a little bit more, the `exec` method returns the first match that it finds rather than all the matches that it finds.

**EDIT:**

```
var myRe = /ab*/g;
var str = "abbcdefabh";
var myArray;
while ((myArray = myRe.exec(str)) != null)
{
    var msg = "Found " + myArray[0] + ". ";
    msg += "Next match starts at " + myRe.lastIndex;
    console.log(msg);
}
```

Having a look at it again, it definitely does return the first instance that it finds; if you loop through it, it will return more. Why it does this? I have no idea... my JavaScript kung fu clearly isn't strong enough to answer that part.
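For what it's worth, this one-match-per-call behaviour isn't unique to JavaScript. Java's `Matcher` works the same way: `find()` returns a single match per call and keeps its position between calls, so you loop to collect them all. A small sketch of the same idea in Java (my own illustration, not from the original thread):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FindAll {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("b", Pattern.CASE_INSENSITIVE);
        Matcher m = p.matcher("abcBdb");
        List<String> hits = new ArrayList<>();
        // Like exec() with the /g flag, find() yields one match per call
        // and advances an internal cursor between calls.
        while (m.find()) {
            hits.add(m.group());
        }
        System.out.println(hits.size()); // 3
    }
}
```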
The reason regex 2 is returning null is that you're passing "/b/" as the pattern parameter, while "b" is the only thing that is actually part of the pattern. The slashes are shorthand for regex, just as [ ] is for array. So if you were to replace that with just new RegExp("b"), you'd get one match, but only one, since you're omitting the "global+ignorecase" flags in that example. To get the same results for #2 and #3, modify accordingly:

```
var regex2 = new RegExp(stringLowerCase, "gi");
console.log("regex 2: ", a.text.match(regex2));
console.log("regex 3: ", a.text.match(/b/gi));
```
Javascript Match and RegExp Issue -- Strange Behavior
[ "", "javascript", "jquery", "regex", "" ]
In my binary to text decoding application (.NET 2.0) I found that the line: ``` logEntryTime.ToString("dd.MM.yy HH:mm:ss:fff") ``` takes 33% of total processing time. Does anyone have any ideas on how to make it faster? EDIT: This app is used to process some binary logs and it currently takes 15 hours to run. So 1/3 of this will be 5 hours. EDIT: I am using [NProf](http://code.google.com/p/nprof/) for profiling. App is processing around 17 GBytes of binary logs.
It's unfortunate that .NET doesn't have a sort of "formatter" type which can parse a pattern and remember it. If you're always using the same format, you might want to hand-craft a formatter to do exactly that. Something along the lines of:

```
public static string FormatDateTime(DateTime dt)
{
    // Note: there are more efficient approaches using Span<char> these days.
    char[] chars = new char[21];
    Write2Chars(chars, 0, dt.Day);
    chars[2] = '.';
    Write2Chars(chars, 3, dt.Month);
    chars[5] = '.';
    Write2Chars(chars, 6, dt.Year % 100);
    chars[8] = ' ';
    Write2Chars(chars, 9, dt.Hour);
    chars[11] = ':';
    Write2Chars(chars, 12, dt.Minute);
    chars[14] = ':';
    Write2Chars(chars, 15, dt.Second);
    chars[17] = ':';
    Write2Chars(chars, 18, dt.Millisecond / 10);
    chars[20] = Digit(dt.Millisecond % 10);

    return new string(chars);
}

private static void Write2Chars(char[] chars, int offset, int value)
{
    chars[offset] = Digit(value / 10);
    chars[offset+1] = Digit(value % 10);
}

private static char Digit(int value)
{
    return (char) (value + '0');
}
```

This is pretty ugly, but it's probably a lot more efficient... benchmark it, of course!
Are you sure it takes 33% of the time? How have you measured that? It sounds more than a little suspicious to me... This makes things a **little** bit quicker: ``` Basic: 2342ms Custom: 1319ms ``` Or if we cut out the IO (`Stream.Null`): ``` Basic: 2275ms Custom: 839ms ``` --- ``` using System.Diagnostics; using System; using System.IO; static class Program { static void Main() { DateTime when = DateTime.Now; const int LOOP = 1000000; Stopwatch basic = Stopwatch.StartNew(); using (TextWriter tw = new StreamWriter("basic.txt")) { for (int i = 0; i < LOOP; i++) { tw.Write(when.ToString("dd.MM.yy HH:mm:ss:fff")); } } basic.Stop(); Console.WriteLine("Basic: " + basic.ElapsedMilliseconds + "ms"); char[] buffer = new char[100]; Stopwatch custom = Stopwatch.StartNew(); using (TextWriter tw = new StreamWriter("custom.txt")) { for (int i = 0; i < LOOP; i++) { WriteDateTime(tw, when, buffer); } } custom.Stop(); Console.WriteLine("Custom: " + custom.ElapsedMilliseconds + "ms"); } static void WriteDateTime(TextWriter output, DateTime when, char[] buffer) { buffer[2] = buffer[5] = '.'; buffer[8] = ' '; buffer[11] = buffer[14] = buffer[17] = ':'; Write2(buffer, when.Day, 0); Write2(buffer, when.Month, 3); Write2(buffer, when.Year % 100, 6); Write2(buffer, when.Hour, 9); Write2(buffer, when.Minute, 12); Write2(buffer, when.Second, 15); Write3(buffer, when.Millisecond, 18); output.Write(buffer, 0, 21); } static void Write2(char[] buffer, int value, int offset) { buffer[offset++] = (char)('0' + (value / 10)); buffer[offset] = (char)('0' + (value % 10)); } static void Write3(char[] buffer, int value, int offset) { buffer[offset++] = (char)('0' + (value / 100)); buffer[offset++] = (char)('0' + ((value / 10) % 10)); buffer[offset] = (char)('0' + (value % 10)); } } ```
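Both answers lean on the same digit-emission trick: computing '0' + value / 10 and '0' + value % 10 instead of going through a general-purpose formatter. The idea ports to any language; here is a hypothetical Java rendition of the same technique (names and sample values are mine):

```java
public class FastFormat {
    // Same idea as Write2Chars in the answers: emit two decimal digits
    // straight into a char buffer, skipping the general-purpose formatter.
    static void write2(char[] buf, int offset, int value) {
        buf[offset] = (char) ('0' + value / 10);
        buf[offset + 1] = (char) ('0' + value % 10);
    }

    // Produces the "dd.MM.yy HH:mm:ss:fff" layout from the question.
    static String format(int day, int month, int year2, int hour, int min, int sec, int millis) {
        char[] c = new char[21];
        write2(c, 0, day);   c[2] = '.';
        write2(c, 3, month); c[5] = '.';
        write2(c, 6, year2); c[8] = ' ';
        write2(c, 9, hour);  c[11] = ':';
        write2(c, 12, min);  c[14] = ':';
        write2(c, 15, sec);  c[17] = ':';
        write2(c, 18, millis / 10);          // first two of three digits
        c[20] = (char) ('0' + millis % 10);  // last digit
        return new String(c);
    }

    public static void main(String[] args) {
        System.out.println(format(25, 12, 9, 13, 5, 42, 987)); // 25.12.09 13:05:42:987
    }
}
```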
How do I improve the performance of code using DateTime.ToString?
[ "", "c#", "optimization", "datetime", "formatting", "" ]
Any recommendation on which Java open source helpdesk system I should use? I need these criteria:

- comes with dynamic approval-level support for certain requests (workflow)
Some of the Java-based open source helpdesk systems are

* [itracker](http://www.itracker.org/) Java based open source help desk application with an emphasis on modularity. It also provides i18n support.
* [JTrac](http://jtrac.info/) A Java based open source issue tracking system.

If you want to create your own helpdesk management system then you can look at the JBoss [JBPM](http://www.jboss.com/products/jbpm/) framework. I have worked on it and built a simple workflow management system using JBPM.

Hope this helps!
This is made in ROR but can be deployed in Java with JRuby : [www.redmine.org](http://www.redmine.org). It's the best free project management/tracking tool I know in my opinion.
Java Open source helpdesk +workflow project
[ "", "java", "open-source", "jakarta-ee", "workflow", "" ]
I'm just starting out writing this code and when I add in the db select and refresh the page, instead of showing all the other html on the page or an error, it just shows up blank. Here's what I've got- ``` $link = mysql_connect('vps2.foo.com:3306', 'remote_vhost30', 'password'); if (!$link) { die('Could not connect: ' . mysql_error()); } $db_selected = mysql_select_db('best_of', $link); if (!$db_selected) { die ('Can\'t use foo : ' . mysql_error()); } $sections_result = "SELECT * FROM sections"; $sections_query = mysql_query($sections_result) or die(mysql_error()); $sections_array = mysql_fetch_array($sections_result) or die(mysql_error()); ``` This code above returns a blank page. If I comment out the row starting with $db\_selected the page loads fine. Obviously it doesn't do anything with the data but no errors. What's the problem? (And yes, I am connecting to a remote server, but the $link produces no errors)
The last line of code should be: ``` $sections_array = mysql_fetch_array($sections_query) or die(mysql_error()); ``` You are trying to fetch rows from the variable `$sections_result`, which is your query string and not the result set. --- Turn on error reporting, with `error_reporting(E_ALL)` like mentioned in one of the other answers.
Incidentally, I suspect the problem is that PHP is throwing an error, but you've disabled the display of errors - hence the display of a blank white page. Check the status of 'display\_errors' within your php.ini file. **NB: If this is a production server, you should leave display\_errors set to off.**
php mysql_select_db returns blank page
[ "", "php", "mysql-select-db", "" ]
I'm fairly new to python and following along with part 4 of the tutorial for the Django framework [here](http://docs.djangoproject.com/en/1.0/intro/tutorial04/#intro-tutorial04). I'm trying to implement generic views for the polls app--my code seems correct (as far as I can tell), but when I try to vote, I get a NoReverseMatch Exception that states: > Reverse for 'polls/poll\_results' with arguments '(1L,)' and keyword arguments '{}' not found. My code was working perfectly before I attempted the generic views, but I can't seem pinpoint the problem now. Here's the code for my urls.py in the poll directory: ``` from django.conf.urls.defaults import * from djtest.polls.models import Poll info_dict = { 'queryset': Poll.objects.all(), } urlpatterns = patterns('', (r'^$', 'django.views.generic.list_detail.object_list', info_dict), (r'^(?P<object_id>\d+)/$', 'django.views.generic.list_detail.object_detail', info_dict), url(r'^(?P<object_id>\d+)/results/$', 'django.views.generic.list_detail.object_detail', dict(info_dict, template_name='polls/results.html'), 'poll_results'), (r'^(?P<poll_id>\d+)/vote/$', 'djtest.polls.views.vote'), ) ``` And here is the views.py: ``` from django.http import HttpResponse, Http404, HttpResponseRedirect from django.shortcuts import render_to_response, get_object_or_404 from django.core.urlresolvers import reverse from djtest.polls.models import Poll, Choice def vote(request, poll_id): p = get_object_or_404(Poll, pk=poll_id) try: selected_choice = p.choice_set.get(pk=request.POST['choice']) except (KeyError, Choice.DoesNotExist): #redisplay form return render_to_response('polls/poll_detail.html', { 'object': p, 'error_message': "You didn't select a choice.", }) else: selected_choice.votes += 1 selected_choice.save() return HttpResponseRedirect(reverse('poll_results', args=(p.id,))) ``` I have a feeling that it is a syntactical error, but I can't find it. Thanks in advance for any help...
Try using: ``` return HttpResponseRedirect(reverse('poll_results', kwargs={'object_id': p.id})) ```
Are you sure that's where your error really is? Based on the error message, it sounds like either in a view or in a template you are trying to reverse `'polls/poll_results'` (in a template, you may be doing something like `{% url polls/poll_results poll.pk %}`)
NoReverseMatch Exception help in Django
[ "", "python", "django", "exception", "syntax", "django-generic-views", "" ]
I should probably know this by now, but what, if any is the difference between the two statements below? *The nested join:* ``` SELECT t1.* FROM table1 t1 INNER JOIN table2 t2 LEFT JOIN table3 t3 ON t3.table3_ID = t2.table2_ID ON t2.table2_ID = t1.table1_ID ``` *The more traditional join:* ``` SELECT t1.* FROM table1 t1 INNER JOIN table2 t2 ON t2.table2_ID = t1.table1_ID LEFT JOIN table3 t3 ON t3.table3_ID = t2.table2_ID ```
Well, it's the order of operations...

```
SELECT t1.*
FROM table1 t1
INNER JOIN table2 t2
LEFT JOIN table3 t3
ON t3.table3_ID = t2.table2_ID
ON t2.table2_ID = t1.table1_ID
```

could be rewritten as:

```
SELECT t1.*
FROM table1 t1      -- inner join t1
INNER JOIN
(table2 t2
LEFT JOIN table3 t3
ON t3.table3_ID = t2.table2_ID)  -- with this
ON t2.table2_ID = t1.table1_ID   -- on this condition
```

So basically, first you LEFT JOIN t2 with t3, based on the join condition table3_ID = table2_ID, then you INNER JOIN t1 with t2 on table2_ID = table1_ID. In your second example you first INNER JOIN t1 with t2 on table2_ID = table1_ID, and then LEFT JOIN the resulting inner join with table t3 on the condition table3_ID = table2_ID.

```
SELECT t1.*
FROM table1 t1
INNER JOIN table2 t2
ON t2.table2_ID = t1.table1_ID
LEFT JOIN table3 t3
ON t3.table3_ID = t2.table2_ID
```

could be rewritten as:

```
SELECT t1.*
FROM
(table1 t1
INNER JOIN table2 t2
ON t2.table2_ID = t1.table1_ID)  -- first inner join
LEFT JOIN                        -- then left join
table3 t3
ON t3.table3_ID = t2.table2_ID   -- the result with this
```

**EDIT**

I apologize. My first remark was wrong. The two queries will produce the same results, but there may be a difference in performance: the first query may perform slower than the second in some instances (when table 1 contains only a subset of the elements in table 2), as the LEFT JOIN will be executed first - and only then intersected with table1. As opposed to the second query, which allows the query optimizer to do its job.
For your specific example, I don't think there should be any difference in the query plans generated, but there's definitely a difference in readability. Your 2nd example is MUCH easier to follow. If you were to reverse the types of joins in the example, you could end up with much different results.

```
SELECT t1.*
FROM table1 t1
LEFT JOIN table2 t2
ON t2.table2_ID = t1.table1_ID
INNER JOIN table3 t3
ON t3.table3_ID = t2.table2_ID

-- may not produce the same results as...

SELECT t1.*
FROM table1 t1
LEFT JOIN table2 t2
INNER JOIN table3 t3
ON t3.table3_ID = t2.table2_ID
ON t2.table2_ID = t1.table1_ID
```

Based on the fact that the order of the joins DOES matter in many cases - careful thought should go into how you're writing your join syntax. If you find that the 2nd example is what you're really trying to accomplish, I'd consider rewriting the query so that you can put more emphasis on the order of your joins...

```
SELECT t1.*
FROM table2 t2
INNER JOIN table3 t3
ON t3.table3_ID = t2.table2_ID
RIGHT JOIN table1 t1
ON t2.table2_ID = t1.table1_ID
```
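The claim that mixing join types makes the order matter can be checked without a database. Below is a toy nested-loop simulation in Java (entirely my own illustration; tables are just lists of ids and table3 is empty): the (t1 LEFT JOIN t2) INNER JOIN t3 plan returns no rows, while t1 LEFT JOIN (t2 INNER JOIN t3) still returns one row per t1 row.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JoinOrder {
    public static void main(String[] args) {
        List<Integer> t1 = Arrays.asList(1, 2);
        List<Integer> t2 = Arrays.asList(1);
        List<Integer> t3 = new ArrayList<>(); // empty table

        // Plan A: (t1 LEFT JOIN t2) then INNER JOIN t3 ON t3 = t2
        int planA = 0;
        for (Integer a : t1) {
            List<Integer> matches = new ArrayList<>();
            for (Integer b : t2) {
                if (a.equals(b)) matches.add(b);
            }
            if (matches.isEmpty()) matches.add(null); // LEFT JOIN null-extends
            for (Integer b : matches) {
                for (Integer c : t3) {
                    if (b != null && b.equals(c)) planA++; // INNER JOIN filters everything out
                }
            }
        }

        // Plan B: t1 LEFT JOIN (t2 INNER JOIN t3) ON t2 = t1
        List<Integer> inner = new ArrayList<>();
        for (Integer b : t2) {
            for (Integer c : t3) {
                if (b.equals(c)) inner.add(b);
            }
        }
        int planB = 0;
        for (Integer a : t1) {
            int before = planB;
            for (Integer b : inner) {
                if (a.equals(b)) planB++;
            }
            if (planB == before) planB++; // null-extended row survives the LEFT JOIN
        }

        System.out.println(planA + " " + planB); // 0 2
    }
}
```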
SQL - Difference between these Joins?
[ "", "sql", "sql-server", "database", "t-sql", "join", "" ]
I'm planning a larger project, so I'm considering some technology options. The project will use the 3-tier architecture design. The presentation layer will be ASP.NET but it could be some other technology. This doesn't matter for now. My questions are:

1. For the application server should I use a Windows service or just a normal application?
2. What should I use for the communication between the presentation layer and the domain layer? I wanted to use .NET Remoting but I read that remoting is part of WCF. Actually I'm not so familiar with WCF, that's why I'm asking. So .NET Remoting or WCF?

I'll appreciate any hint
For the application server, right now, I guess Windows services would be your best bet, even though it's more work than it should be. If you don't have to deploy right now, you might want to also have a look at "[Dublin](http://msdn.microsoft.com/en-us/magazine/2009.01.net40.aspx)" - an add-on to .NET 4.0 that will augment the WAS (Windows Process Activation Server) with management tools and stuff. With this, you might be able to actually host and manage your WCF services in a nice and very powerful way. As for #2 - I would definitely recommend WCF - it's the platform of choice for communication in distributed systems, and with its configurability and flexibility, it can handle pretty much any task you care to throw at it. From very fast on-machine communication (NetNamedPipeBinding) to handling communication through the Windows Azure ServiceBus - a relay service "in the cloud" - it's *that* powerful! You can't go wrong with WCF - it can handle all your communications needs, I'd say. Don't waste your time learning deprecated technologies like .NET Remoting or ASMX or WSE web services (just my $0.02 for this discussion). Marc
1. Services are the way to go; a user doesn't have to be logged in for them to be running; they are well-equipped for remote administration; they are also better suited for being monitored for health
2. Remoting can be used via WCF; WCF aggregates many different communication platforms and provides easy-to-leverage configurations for them. Learning WCF would be the better way to go, in my opinion, however if you are versed in Remoting and are certain it covers all your needs then it should be fine too.
What technology for a larger project?
[ "", "c#", ".net", "wcf", "architecture", "remoting", "" ]
I am using the following code to initialize a model from within my controller:

```
$this->load->model('model_name');
```

Is it possible to modify the above line somehow so that the model constructor receives a parameter? I want to use the following code in the model constructor:

```
function __construct($param_var)
{
    parent::Model();
    $this->$param_var = $param_var; //I'm not even sure this works in PHP..but different issue
}
```

This would be very helpful so that I can reuse my model classes. Thanks.

UPDATE: (from one of the answers, my original question is solved... thanks!) Just to explain why I wanted to do this: the idea is to be able to reuse a model class. So basically, to give a simple example, I would like to be able to pass an "order_by" variable to the model class so that I can reuse the logic in the model class (and dynamically change the order-by value in the sql) without having to create a separate class or a separate function. Is this poor design? If so could you please explain why you wouldn't do something like this and how you would do it instead?
I see your reasoning for this, but may I suggest looking at [Object-Relational Mapping](http://en.wikipedia.org/wiki/Object-relational_mapping) for your database needs. There is a user-made ORM library for CodeIgniter called [DataMapper](http://stensi.com/datamapper/index.html) that I've been using lately. You can use tables in your controllers as objects, and it may be a better fit for your problem.
You can't pass parameters through the `load` function. You'll have to do something like: ``` $this->load->model('model_name'); $this->model_name->my_constructor('stuff'); ``` In the model: ``` function my_constructor($param_var) { ... } ``` Response to update: You could just pass the order\_by value when you're calling your model function. I'm assuming in your controller action, you have something like `$this->model_name->get($my_id);` Just add your order\_by parameter to this function. IMO this makes your model logic more flexible/reusable because the way you were doing it, I assume setting order\_by in the constructor will set the order\_by value for every function.
PHP & Codeigniter - how to pass parameters to a model?
[ "", "php", "codeigniter", "" ]
What is the example of the OOP concept 'Message Passing' in C# (calling methods/Passing parameters/Firing Events/Handling Events/???) and why is it called message passing?
Method calls. It's called message passing to distinguish it from the imperative notion of "calling a function", and to reinforce the idea that the receiving object decides what to do. On the call site, you just tell the receiver the "message". An example would be methods that are abstract in a class, and implemented in subclasses; or implementations of an interface method. When you call e.g. `GetEnumerator()` through a variable of type `IEnumerable`, you don't know at the call site what code should actually be executed.
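To make "the receiving object decides" concrete: the thread is about C#, but the mechanics are the same in any OO language. A minimal Java sketch (my own illustration): the call site only names the message; which code runs is chosen by the receiver at run time.

```java
public class MessagePassing {
    interface Greeter {
        String greet();
    }

    static class English implements Greeter {
        public String greet() { return "hello"; }
    }

    static class French implements Greeter {
        public String greet() { return "bonjour"; }
    }

    // The caller "sends the message" greet(); it cannot know at this
    // call site which implementation will actually execute.
    static String send(Greeter g) {
        return g.greet();
    }

    public static void main(String[] args) {
        System.out.println(send(new English()) + " " + send(new French()));
    }
}
```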
There are some who feel that message passing and method calls are different. We use the term interchangeably, but the meaning is subtle. In smalltalk, message passing was run time bound, and the object had a way to determine if it could handle a message that wasn't explicitly defined as a method. Ruby calls this method\_missing. Methods in C++ in particular are bound at compile time, with no way to dynamically add ways to handle more messages. C# 4.0 has a mix, once you start throwing dynamics around. There's another school of message passing, Erlang believes all message arguments need to be decoupled in state. That is, they are either immutable or copies.
OOP - Message Passing in C#
[ "", "c#", "oop", "message-passing", "" ]
The ultimate goal is comparing 2 binaries built from the exact same source in the exact same environment and being able to tell that they indeed are functionally equivalent. One application for this would be focusing QA time on things that were actually changed between releases, as well as change monitoring in general. MSVC in tandem with the PE format naturally makes this very hard to do. So far I have found and neutralized these things:

* PE timestamp and checksum
* Digital signature directory entry
* Debugger section timestamp
* PDB signature, age and file path
* Resources timestamp
* All file/product versions in VS\_VERSION\_INFO resource
* Digital signature section

I parse the PE, find offsets and sizes for all those things and ignore those byte ranges when comparing binaries. Works like a charm (well, for the few tests I've run). I can tell that a signed executable with version 1.0.2.0 built on Win Server 2008 is equal to an unsigned one, of version 10.6.6.6, built on my Win XP dev box, as long as the compiler version and all sources and headers are the same. This seems to work for VC 7.1 -- 9.0. (For release builds)

With one caveat. **Absolute paths for both builds** ~~must be the same~~ **must have the same length.** cl.exe converts relative paths to absolute ones, and puts them right into objects along with compiler flags and so on. This has disproportionate effects on the whole binary. A one-character change in the path will result in one byte changed here and there several times over the whole .text section (however many objects were linked, I suspect). Changing the length of the path results in significantly more differences. Both in obj files and in the linked binary. Feels like the file path with compile flags is used as some kind of hash, which makes it into the linked binary or even affects placement order of unrelated pieces of compiled code.
So here is the 3-part question (summarized as "what now?"): * Should I abandon the whole project and go home because what I am trying to do breaks laws of physics and corporate policy of MS? * Assuming I handle absolute path issue (on policy level or by finding a magical compiler flag), are there any other things I should look out for? (things like \_\_TIME\_\_ *do* mean changed code, so I don't mind those not being ignored) * Is there a way to either force compiler to use relative paths, or to fool it into thinking the path is not what it is? Reason for the last one is beautifully annoying Windows file system. You just never know when deleting several gigs worth of sources and objects and svn metadata will fail because of a rogue file lock. At least creating new root always succeeds while there is space left. Running multiple builds at once is an issue too. Running bunch of VMs, while a solution, is a rather heavy one. I wonder if there is a way to setup a virtual file system for a process and its children so that several process trees will see *different* "C:\build" dirs, private to them only, all at the same time... A light-weight virtualization of sorts... **UPDATE:** we recently opensourced the tool on [GitHub](https://github.com/smarttechnologies/peparser). See **Compare** section in documentation.
I solved this to an extent. Currently we have a build system that makes sure all new builds are on a path of constant length (builds/001, builds/002, etc), thus avoiding shifts in the PE layout. After a build, a tool compares old and new binaries ignoring relevant PE fields and other locations with known superficial changes. It also runs some simple heuristics to detect dynamic ignorable changes. Here is the full list of things to ignore:

* PE timestamp and checksum
* Digital signature directory entry
* Export table timestamp
* Debugger section timestamp
* PDB signature, age and file path
* Resources timestamp
* All file/product versions in VS\_VERSION\_INFO resource
* Digital signature section
* MIDL vanity stub for embedded type libraries (contains timestamp string)
* \_\_FILE\_\_, \_\_DATE\_\_ and \_\_TIME\_\_ macros when they are used as literal strings (can be wide or narrow char)

Once in a while the linker would make some PE sections bigger without throwing anything else out of alignment. It looks like it moves the section boundary inside the padding -- it is zeros all around anyway, but because of it I'll get binaries with a 1 byte difference.

**UPDATE:** we recently opensourced the tool on [GitHub](https://github.com/smarttechnologies/peparser). See **Compare** section in documentation.
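The mechanical part of this approach - comparing two byte streams while masking known-volatile ranges - is straightforward in any language. A hypothetical Java sketch (the ranges and sample bytes are illustrative only, not the tool's actual offsets):

```java
import java.util.Arrays;
import java.util.List;

public class MaskedCompare {
    // Ranges are [start, end) offsets to skip, e.g. where a PE timestamp
    // or checksum field is known to live.
    static boolean equalIgnoring(byte[] a, byte[] b, List<int[]> ignored) {
        if (a.length != b.length) return false;
        outer:
        for (int i = 0; i < a.length; i++) {
            for (int[] r : ignored) {
                if (i >= r[0] && i < r[1]) continue outer; // masked byte
            }
            if (a[i] != b[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] x = {1, 2, 3, 4, 5};
        byte[] y = {1, 9, 9, 4, 5};                    // differs only at offsets 1-2
        List<int[]> mask = Arrays.asList(new int[]{1, 3});
        System.out.println(equalIgnoring(x, y, mask)); // true
        System.out.println(equalIgnoring(x, y, List.of())); // false
    }
}
```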
## Standardise Build Paths A simple solution would be to standardise on your build paths, so they are always of the form, for example: ``` c:\buildXXXX ``` Then, when you compare, say, **build0434** to **build0398**, just preprocess the binary to change all occurrences of **build0434** to **build0398**. Choose a pattern you know is unlikely to show up in your actual source/data, except in those strings the compiler/linker embed into the PE. Then you can just do your normal difference analysis. By using the same length pathnames, you won't shift any data around and cause false positives. ## Dumpbin utility Another tip is to use **dumpbin.exe** (ships with MSVC). Use *dumpbin /all* to dump all details of a binary to a text/hex dump. This can make it more obvious to see what/where is changing. For example: ``` dumpbin /all program1.exe > program1.txt dumpbin /all program2.exe > program2.txt windiff program1.txt program2.txt ``` Or use your favourite text diffing tool, instead of Windiff. ## Bindiff utility You may find Microsoft's **bindiff.exe** tool useful, which can be obtained here: [Windows XP Service Pack 2 Support Tools](http://www.microsoft.com/downloads/details.aspx?FamilyId=49AE8576-9BB9-4126-9761-BA8011FABF38&displaylang=en) It has a /v option, to instruct it to ignore certain binary fields, such as timestamps, checksums, etc.: > "BinDiff uses a special compare routine > for Win32 executable files that masks > out various build time stamp fields in > both files when performing the > compare. This allows two executable > files to be marked as "Near Identical" > when the files are truely identical, > except for the time they were built." However, it sounds like *you may be already doing* a superset of what bindiff.exe does.
Deterministic builds under Windows
[ "", "c++", "windows", "visual-c++", "portable-executable", "" ]
**[Update: The query works if I hardcode in the parameters - so it has to do with the way I am adding parameters to the query]** For the life of me, I cannot figure out what the problem is here. Here is the query being passed to the datareader: ``` SELECT * FROM (SELECT TOP ? StartDate, [ID] FROM (SELECT TOP ? StartDate, [ID] FROM Story ORDER BY StartDate DESC, [ID] DESC) AS foo ORDER BY StartDate ASC, [ID] ASC) AS bar INNER JOIN Story AS t ON bar.ID = t.ID ORDER BY bar.StartDate DESC, bar.[ID] DESC ``` The parameters are added in the following order: ``` var pNumToRetrieve = new OleDbParameter("", OleDbType.Integer) {Value = numToGet}; var pResultSet = new OleDbParameter("", OleDbType.Integer) {Value = resultSet}; _cmd.Parameters.Add(pNumToRetrieve); _cmd.Parameters.Add(pResultSet); ``` If I enter this query into access directly it works just fine. However, when running the query from ASP.NET, I get the following error: **The SELECT statement includes a reserved word or an argument name that is misspelled or missing, or the punctuation is incorrect.** What am I doing wrong? Thanks, Adam
The N in TOP N queries in Jet SQL cannot be parameterized, period. You have to write your SQL string on the fly to get a variable N. This means either that you can't use a saved QueryDef, or that you have to edit the SQL of the QueryDef and save it before using it.
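Since the N cannot be bound as a parameter, the usual workaround is to splice a validated integer into the statement text and keep real parameters for everything else. The question is C#/OleDb, but the shape of the workaround is the same in any client language; a hypothetical Java sketch (names are mine):

```java
public class TopNQuery {
    // Jet SQL can't parameterize the N in "TOP N", so the statement text
    // has to be rebuilt per request. Restricting n to a positive int keeps
    // the concatenation safe from injection.
    static String buildTopN(int n) {
        if (n <= 0) throw new IllegalArgumentException("n must be positive");
        return "SELECT TOP " + n + " StartDate, [ID] FROM Story ORDER BY StartDate DESC";
    }

    public static void main(String[] args) {
        System.out.println(buildTopN(6));
    }
}
```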
Consider rewriting your `TOP N` constructs using correlated subqueries. Here's a simple example. Consider a table named Sequence with a column (seq) of unique `INTEGER`s (a standard SQL auxiliary table which is useful in countless situations -- every database should have one!) Both of the following queries return the two highest values for seq:

1)

```
SELECT TOP 2 seq FROM SEQUENCE ORDER BY seq DESC;
```

Pros: The Access database engine performs this relatively well (as you would expect for proprietary syntax).

Cons: Proprietary syntax therefore bad for portability. The N (as in TOP N) cannot be parametrized. For me, using `DESC` in the `ORDER BY` to return the highest values is counter-intuitive.

2)

```
SELECT S1.seq FROM SEQUENCE AS S1 WHERE 2 >= ( SELECT COUNT(*) FROM SEQUENCE AS S2 WHERE S2.seq >= S1.seq );
```

Pros: Standard SQL syntax therefore good for portability. The N can be parametrized.

Cons: The Access database engine does not optimize correlated subqueries well, therefore performance will degrade as the number of rows in the table increases (as ever with performance issues, you will need to test it). Some SQL coders find a correlated subquery hard to understand, which creates potential maintenance issues.
MS Access Error: The SELECT statement includes a reserved word or an argument name that is misspelled or missing, or the punctuation is incorrect
[ "", "asp.net", "sql", "ms-access", "" ]
I have an array, called x, whose size is 6\*sizeof(float). I'm aware that declaring: ``` float x[6]; ``` would allocate 6\*sizeof(float) for x in the stack memory. However, if I do the following: ``` float *x; // in class definition x = new float[6]; // in class constructor delete [] x; // in class destructor ``` I would be allocating dynamic memory of 6\*sizeof(float) to x. If the size of x does not change for the lifetime of the class, in terms of best practices for cleanliness and speed (I do vaguely recall, if not correctly, that stack memory operations are faster than dynamic memory operations), should I make sure that x is statically rather than dynamically allocated memory? Thanks in advance.
Declaring the array of fixed size will surely be faster. Each separate dynamic allocation requires finding an unoccupied block and that's not very fast. So if you really care about speed (and have profiled) the rule is: if you don't need dynamic allocation - don't use it. If you need it - think twice about how much to allocate, since reallocating is not very fast either.
Using an array member will be cleaner (more succinct, less error prone) and faster as there is no need to call allocation and deallocation functions. You will also tend to improve 'locality of reference' for the structure being allocated. The two main reasons for using dynamically allocated memory for such a member are where the required size is only known at run time, or where the required size is large and it is known that this will have a significant impact on the available stack space on the target platform.
Best practices of dynamic vs. static memory in terms of cleanliness and speed
[ "", "c++", "memory-management", "" ]
I've used databases in asp.net but I now have a situation where I need the power/flexibility of a database but I need to ship it with my application. I need to access my tables and create new ones etc but without requiring the user to have any kind of database software installed. I.e. I want it to be self-sufficient. What are my options? Of course standard serialization could be used but I want to use an actual database technology for cases where the records are in the tens of thousands. Thanks for any input!
You're probably looking at some sort of embedded / in-process database. Microsoft espouses their lite version of SQL Server. My personal recommendation is to look at SQLite. SQLite leaves you with a single DLL to run the database and one file for the actual db data. The [System.Data.SQLite project](http://sqlite.phxsoftware.com/) is an awesome community contribution to bridge the gap between SQLite and what .NET developers have come to expect with commerical database integration. With this project you get a driver with full ADO.NET 2.0 support, ADO.NET Entity Framework support and Visual Studio database design time integration. Give it a look. You'll be up and running in no time and I'm sure you'll be impressed.
There is a good series of articles regarding SQLite here: <http://www.csharphacker.com/technicalblog/index.php/2009/06/16/sqlite-for-c-part-1-am-i-allowed-to-use-it/> The remaining 6 articles are posted at the bottom of this page.
More Information about integrating a database into my C# Winforms Application?
[ "", "c#", ".net", "c#-3.0", "" ]
Is there a way for a Java program to detect when the operating system is about to go to sleep, or failing that, at least detecting a wake up? The actual problem is that in a particular application a number of MySQL database operations are run in the background. In testing on a Windows machine these database transactions are interrupted after a sleep/wake-up cycle causing a ton of error conditions in the program. These errors typically look something like this: ``` java.net.SocketException MESSAGE: Software caused connection abort: recv failed ``` If we could react to a 'will sleep soon' event we could attempt to pause background operations preempting the problem. Less ideally if we could react to a 'just woke up' event we could at least suppress the error messages.
You could detect the wakeup by periodically comparing the current system time to the previous system time. Edit: here's an example that looks like it would help you detect when the machine is going to sleep: <http://www.codeguru.com/cpp/w-p/system/messagehandling/article.php/c6907>
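A sketch of that compare-timestamps heuristic (the interval and threshold values here are arbitrary choices for illustration, not from any API):

```python
def woke_up(prev_ts, now_ts, poll_interval=1.0, slack=5.0):
    """Heuristic: if far more wall-clock time elapsed between two polls
    than the polling interval, the machine probably slept in between."""
    return (now_ts - prev_ts) > poll_interval + slack

# Simulated poll timestamps: a normal 1-second tick, then a 60-second gap.
print(woke_up(100.0, 101.0))  # False: normal tick
print(woke_up(101.0, 161.0))  # True: large gap, probable sleep/wake cycle
```

In the Java application this would run on a background timer, and a `True` result would trigger reconnection of the database sessions.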
I know it's not exactly what you're looking for, but maybe the right answer is to try to write the code with the assumption that your sockets won't necessarily stay up. There are lots of reasons why the connection could crash, for example because the DB is taken down for maintenance, or because someone tripped over your Ethernet cable. I think you'd have similar problems with errors if someone pulled out the network cable while your app was running, and these are not unheard-of conditions, so maybe it's a better approach to try and handle those conditions gracefully.
Detect OS Sleep and Wake Up events in Java
[ "", "java", "mysql", "database-connection", "sleep", "" ]
A colleague asked me an interesting question today - is the C# keyword/operator "is" considered reflection? ``` object tmp = "a string"; if(tmp is String) { } ``` How is this operator implemented behind the scenes? Does it require reflection or introspection? Or because of the strongly typed nature of the language, is the Type of the object immediately accessable as a top-level attribute of the object in memory? [MSDN](http://msdn.microsoft.com/en-us/library/scekt9xw%28VS.71%29.aspx) states that: > Note that the is operator only considers reference conversions, boxing conversions, and unboxing conversions. Other conversions, such as user-defined conversions, are not considered by the is operator. The ability to consider boxed and unboxed conversions seems to imply to me some sort of introspection.
[Referencing ECMA-335](http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf), the `is` operator generates the `isinst` object model IL instruction (Partition III §4.6), which is part of the base instruction set as opposed to being part of the Reflection library (Partition IV §5.5). Edit: The `is` operator is extremely efficient compared to the reflection library. You could perform basically the same test much more slowly via reflection: ``` typeof(T).IsAssignableFrom(obj.GetType()) ``` Edit 2: You are not correct about the efficiency of the `castclass` and `isinst` instructions (which you've now edited out of the post). They are highly optimized in any practical VM implementation. The only real performance issue involved is the potential for `castclass` to throw an exception, which you avoid by using the C# `as` operator and a test for `null` (for reference types) or the `is` operator followed by a cast (for value types).
The `is` operator essentially determines if a cast is possible, but instead of throwing an exception when the cast is impossible it returns `false`. If you consider casting reflection then this is also reflection. EDIT: After some research I have discovered that a cast is performed in IL by the `castclass` instruction while the `is` operator maps to the `isinst` instruction. FxCop has a [rule](http://msdn.microsoft.com/en-us/library/ms182271(VS.80).aspx) that warns you if you are doing unnecessary casts by first using the `isinst` and then the `castclass` instruction. Even though the operations are efficient they still have a performance cost.
C# "is" operator - is that reflection?
[ "", "c#", ".net", "reflection", "introspection", "typing", "" ]
I'm looking to be able to build GUI applications as quickly and painlessly as possible. I'm competent (though not expert, and have no formal training) in C++, but have never used a GUI building toolkit or framework or anything. I am not a professional programmer and am totally inexperienced and ignorant when it comes to building GUI apps. Have spent hours researching trying to figure out what to do; only getting more confused and discouraged though. Qt and wxWidgets seem like the most popular options for cross-platform apps, though cross-platform isn't necessarily all that important to me; Windows-only is fine if that means the fastest learning curve. Qt seems cool and the Qt Creator is sweet looking with lots of good demos, except it has its own classes for everything, and I'm not overly keen on learning a bunch of stuff that's only applicable to the Qt platform itself rather than more generally. I suppose I could avoid using the Qt classes except for the GUI stuff where I have to use them, but I have no idea how wise or unwise that would be. I was thinking Visual Studio would have the smallest learning curve, but when I open a test GUI app, I see a bunch of foreign looking stuff like carets (^) all over the place - I found online that these mean "handles", which I have trouble even understanding the definition or purpose of ("sort of like pointers but not really" is basically how I've read people define them). I know pretty much nothing about wxWidgets, or how it compares with Qt. So every option has a big learning curve - and ideally I'd like to know which one minimizes the time you have to spend learning the toolkit/framework itself. Since I'm likely never going to be making money from the programs I create, the time I spend learning a specific toolkit would be pretty costly. I just want to be able to make a functional program using the C++ knowledge I have, but in GUI form.
At the moment it seems if I want to make a GUI app, I'd have to spend way more time learning the GUI framework I'd use than writing the functional part of the app itself. Any input from people wiser and more experienced than me would be appreciated :)
First and foremost, start simple. There's a lot to the subject. If you are finding it hard, don't try and take it in all at once. Most of the good GUI packages have tutorials. The best advice I can give is that you try each of them, or at least a couple of them. They are the best short introduction you can have to the library you choose and if they are any good they narrow down what you need to absorb at first. That will give you some basis for comparison, because they are each trying to do very similar things (and you will see some of them before you are done), but they have different feels. You will likely find you have a preference for one and that's the one to get serious with. It will also give you a sense of what's hard about GUI programming as *separate* from the particulars of one package, which, if you have only used one, you won't have seen. Personally I find this sort of knowledge very helpful, because it makes me less intimidated by particulars. Here's a list of tutorials in one place, though you have likely seen them already: * [Qt's tutorial](https://doc.qt.io/qt-5/qtexamplesandtutorials.html) * [WxWidgets' tutorial](http://www.wxwidgets.org/docs/tutorials.htm) * [Gtkmm book](http://www.gtkmm.org/docs/gtkmm-2.4/docs/tutorial/html/index.html). Not *quite* a tutorial, though there are lots of examples. * .NET tutorials, either for WinForms or for WPF. Second, it sounds to me that you need to get some in depth understanding of the *concepts* of GUI programming, not just a particular library. Here there is no substitute for a book. I don't know all of them by a long shot, but the best of the bunch will not just teach you the details of a toolkit, they will teach you general concepts and how to use them. 
Here are some lists to start with though (and once you have titles, Amazon and Stack Overflow will help to pick one): * [List of Qt books](http://www.qtsoftware.com/developer/books) * [WxWidgets book](http://www.wxwidgets.org/docs/book/) ([PDF version](http://www.informit.com/content/images/0131473816/downloads/0131473816_book.pdf)) * There are tons of WPF and WinForms books. I can't make a good recommendation here unfortunately. Third, take advantage of the design tools (Qt Creator, VS's form building and so on). *Don't* start by trying to read through all the code they generate: get your own small programs running first. Otherwise it's too hard to know what matters for a basic program and what doesn't. The details get lost. Once you've got the basics down though, *Do* use them as references to learn how to do specific effects. If you can get something to work in the design tools, then you can look at *particular* code they generate to be able to try on your own hand-written programs. They are very useful for intermediate learning. > I'm not overly keen on learning a bunch of stuff that's only applicable to the Qt platform itself rather than more generally. I second the [comment](https://stackoverflow.com/questions/1189084/whats-the-c-gui-building-option-with-the-easiest-learning-curve-vs-qt-wxwidg/1189334#1189334) of GRB here: Don't worry about this. You are going to need to learn a lot specific to the toolkit no matter which toolkit you use. But you will also learn a lot that's *general* to GUI programming with any of the decent toolkits, because they are going to have to cover a lot of the same ground. Layouts, events, interaction between widgets/controls, understanding timers -- these will come up in any GUI toolkit you use. However do be aware that any serious GUI package *is* an investment of time. 
You will have a much easier time learning a second package if you decide to pick one up, but every large library has its personality and much of your time will be spent learning its quirks. That is, I think, a given in dealing with any complex subject. > I suppose I could avoid using the Qt classes except for the GUI stuff where I have to use them, but I have no idea how wise or unwise that would be. You do not need most of the non-GUI classes of Qt to use Qt's GUI properly. There are a handful of exceptions (like `QVariant`) which you'll need just because the GUI classes use them. I found you can learn those on a case-by-case basis.
Which is the easiest to learn is really going to depend on how you personally learn. *Personally*, I've found Qt to be the easiest to learn so far. The GUI classes are rather nice to use, but I've found the non-GUI classes to be excellent, making it easy to avoid a lot of common issues you'd normally get with a more basic API. The [documentation](https://doc.qt.io/qt-5/) is excellent, IMO, as are the books, the examples, etc. It's also being very actively developed, with a few new technologies coming in the near future (like DeclarativeUI). I've found Visual Studio/Windows API/.Net to be a good bit more complicated to learn. The API documentation on MSDN is rather complicated and not really organized in a manner that I find intuitive. I've tried learning WxWidgets a few times, but I've never liked the API documentation. All this is just my personal experience, YMMV of course. I'd say just dabble in all of them and see which one takes you the furthest, it won't hurt to try multiple.
What's the C++ GUI building option with the easiest learning curve - VS/Qt/wxWidgets/etc.?
[ "", "c++", "visual-studio", "user-interface", "qt", "wxwidgets", "" ]
I am a Java programmer looking to learn .NET, particularly C# and F#, to improve my employability. Toward this goal I am creating a website which will have a few demo projects. My hope is to learn C#/F# and the .NET framework by creating this website and then have a finished product to self-advertise and to show potential employers. What I need is a good host. My priorities are cost, stability, and capability. I might be willing to pay up to around $10/mo, but I don't really want to pay more than $5/mo. Stability and performance have to be reasonable. I want access to all the commonly-used .NET tools (SQLServer, LINQ, C#, F#, VB, MVC, any other stuff I might reasonably need that I don't know about yet). Basically I don't really know what I want besides room for my projects to grow as my expertise grows. EDIT: Some have expressed concern that I shouldn't pay for time before my project is ready-to go. However, my hope is to get it up in less than a month, so I'm not too worried about paying for time before that.
What time frame do you have? I would strongly suggest not purchasing hosting until you have your project ready to deploy, because you may waste a month or two of hosting money. That said, I highly recommend [DiscountASP](http://www.discountasp.com). They stay on top of the MS stack really hard. I had ASP.NET MVC installed the day it was released, for example, and they have a lot of experience tweaking the environment to match your needs. EDIT: To more clearly answer your question, ASP.NET, MVC, Linq, SQL Server 2005 or 2008 (choice), SQL Server Express, etc. are preinstalled. I know someone who runs F# code there as well, but I cannot say whether or not he requested it. I do know its not costing him extra.
I'm also just learning, and I'm using [reliablesite.net](http://payments.reliablesite.net/aff.php?aff=174) for my asp.net projects. They have a coupon section that is real easy to overlook, will give you 80% off your first bill - so I signed up for a quarterly cost of around $30, just paying $6 for my first 3 months. Take a look, I think they may right up your alley. As for discountasp.net, yeah, they are good, but the surcharge for MSSQL alone puts the cost around $20 a month to start, and then you are restricted to quarterly or annually bills - no monthly. Basically, $60 up front. **Edit:** One of the major features that got me to sign on to this host, even after looking at *tons* of others, was the dedicated application pools. Wasn't something I saw everyone offering.
.NET hosting for demo projects
[ "", "c#", ".net", "sql-server", "f#", "hosting", "" ]
I have a string **Action - [N]ew, [U]pdate, or [D]elete : N** that I need to replace with "Action - [N]ew, [U]pdate, or [D]elete : U" somehow by using preg\_replace, but I can't get it working. It remains the same. My code looks like this ``` $action = 'Action - [N]ew, [U]pdate, or [D]elete : U'; $line = preg_replace("/(Action - [N]ew, [U]pdate, or [D]elete : N)/",$action,$line); ```
`[` and `]` are special characters in regular expressions. You'll have to escape them if you want to match them: ``` "/(Action - \[N\]ew, \[U\]pdate, or \[D\]elete : N)/" ``` Without being escaped, and expression within `[` and `]` will match one of every character within them. So in your original case, `"[N]ew"` was matching `"New"`. If it had been `"[NP]ew"`, it would have matched `"New"` or `"Pew"`.
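The same point holds in any PCRE-style engine; here it is sketched with Python's re module, whose character-class syntax is identical for this case:

```python
import re

line = "Action - [N]ew, [U]pdate, or [D]elete : N"

# Unescaped, [N] is a character class matching the single letter N, so the
# pattern "Action - [N]ew" matches the text "Action - New" but never the
# literal brackets that actually appear in the line.
print(bool(re.search(r"Action - [N]ew", "Action - New")))  # True
print(bool(re.search(r"Action - [N]ew", line)))            # False

# Escaped, \[N\] matches the literal three characters "[N]".
print(bool(re.search(r"Action - \[N\]ew", line)))          # True
```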
You don’t need `preg_replace` to do that. A simple `str_replace` will suffice: ``` $action = 'Action - [N]ew, [U]pdate, or [D]elete : U'; $line = str_replace('Action - [N]ew, [U]pdate, or [D]elete : N', $action, $line); ```
Replacing a string of text with another string using preg_replace
[ "", "php", "regex", "preg-replace", "" ]
What compelling reasons exist for using them?
* Resource files give you an easy way to localize/internationalize your .net applications by automatically determining which language resx file to use based on the user's locale. To add more languages, simply add another translated resource file. * Resource files give you a central location to store your strings, files and scripts and refer to them in a strongly-typed manner (so the compile will break if you reference them improperly). * Resource files can be compiled into satellite assemblies, making it easy to change up the resources in a production application without having to recompile the whole thing.
As a supplement to the other answers, string resources are for human-readable text, not constants that will be used programmatically. They're great for error messages, button labels and the like. Very often, rather than the final string, we store a format string so that variables can be substituted in at the last moment. The nice thing about this method is that, unlike concatenation, it's not broken when a language has different word order requirements.
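The word-order point is language-neutral; a small sketch in Python syntax (the message and its "translation" are invented for illustration):

```python
# One code path, two format strings with different word order; only the
# resource string changes per language, never the substitution code.
english = "{count} files were found in {folder}"
translated = "In {folder} we found {count} files"  # hypothetical translation

args = {"count": 3, "folder": "docs"}
print(english.format(**args))     # 3 files were found in docs
print(translated.format(**args))  # In docs we found 3 files
```

Concatenation, by contrast, hard-codes the word order into the program, which is exactly what breaks when a language reorders the sentence.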
What are the benefits of resource(.resx) files?
[ "", "c#", ".net", "vb.net", "resx", "" ]
This is what i have so far but I do not think it is right. ``` for (int i = 0 ; i < 5; i++) { for (int j = 0; j < 5; j++) { matrix[i][j] += matrix[i][j] * matrix[i][j]; } } ```
I don't think you can multiply a matrix by itself in-place. ``` for (i = 0; i < 5; i++) { for (j = 0; j < 5; j++) { product[i][j] = 0; for (k = 0; k < 5; k++) { product[i][j] += matrix[i][k] * matrix[k][j]; } } } ``` Even if you use a less naïve matrix multiplication (i.e. something other than this O(n3) algorithm), you still need extra storage.
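To make the difference concrete, here is the same triple loop in Python next to element-wise squaring (which is what the question's loop is close to computing), on an invented 2x2 matrix; note the true product needs separate storage:

```python
m = [[1, 2],
     [3, 4]]
n = len(m)

# Element-wise squaring: each entry is replaced by its own square.
elementwise = [[m[i][j] * m[i][j] for j in range(n)] for i in range(n)]

# True matrix product m * m: row-by-column sums written into a fresh matrix.
product = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]

print(elementwise)  # [[1, 4], [9, 16]]
print(product)      # [[7, 10], [15, 22]]
```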
Suggestion: if it's not homework, don't write your own linear algebra routines, use any of the many peer reviewed libraries that are out there. Now, about your code, if you want to do a term by term product, then you're doing it wrong, what you're doing is assigning to each value its square plus the original value (`n*n+n` or `(1+n)*n`, whatever you like best). But if you want to do an authentic matrix multiplication in the algebraic sense, remember that you have to do the scalar product of the first matrix rows by the second matrix columns (or the other way, I'm not very sure now)... something like: ``` for i in rows: for j in cols: result(i,j)=m(i,:)·m(:,j) ``` and the scalar product "·" ``` v·w = sum(v(i)*w(i)) for all i in the range of the indices. ``` Of course, with this method you cannot do the product in place, because you'll need the values that you're overwriting in the next steps. Also, explaining a little bit further Tyler McHenry's comment, as a consequence of having to multiply rows by columns, the "*inner dimensions*" (I'm not sure if that's the correct terminology) of the matrices must match (if `A` is `m x n` and `B` is `n x o`, then `A*B` is `m x o`), so in your case, a matrix can be squared only if it's square (he he he). And if you just want to play a little bit with matrices, then you can try Octave, for example; squaring a matrix is as easy as `M*M` or `M**2`.
How do you multiply a matrix by itself?
[ "", "c++", "matrix", "matrix-multiplication", "" ]
I have to make some database requests with PHP on a MySQL database. Question : What is the best (simplest) framework to get things done right with CRUD (Create Read Update Delete)? I also have to populate the database, what is a good tool to do that. The only one I know is SqlMyAdmin, which does not look good. An online tool would be great. Your experience is valuable: tell me what do you use and why ? --- I have taken a look at CodeIgniter, looks nice, what do you think... overkill?
For lots of operations (especially CRUD, which work out of the box once you've written the schema files), the ORM Framework [Doctrine](http://www.doctrine-project.org/) is really great. If you want to go farther than just DB access, you might take a look at the PHP Framework [symfony](http://www.symfony-project.org/), which provides an admin generator (there is even a [screencast about that one](http://www.symfony-project.org/screencast/admin-generator)). (And has great documentation, such as the [jobeet tutorial](http://www.symfony-project.org/jobeet/1_2/Doctrine/en/)) (BTW, symfony uses Doctrine as ORM ^^ ) But maybe it's a bit overkill (and requires too big a learning curve) if you need something simple... To load data to MySQL, what about [LOAD DATA INFILE](http://dev.mysql.com/doc/refman/5.0/en/load-data.html), which (quote from the docs) "reads rows from a text file into a table at a very high speed".
I recommend [GroceryCRUD](http://www.grocerycrud.com/) because of the good engineering and documentation 1. Copy files into your web folder 2. Configure MySQL database 3. Specify MySQL table name => You get a paginated JqueryUI table with create/edit/delete buttons. create/edit opens a form page based on the MySQL table schema. For example, a boolean, a varchar and a text get turned into a form with active/inactive radio buttons, a text field and a wysiwyg html editor. *Note: GroceryCRUD is built on CodeIgniter so you will have a copy living in your admin directory. You don't have to use it for building your main site.* *Security Advisory: Any library can have undiscovered security vulnerabilities, so it is recommended to minimize exposure by protecting your copy of GroceryCRUD with BaseAuth and permitting SSL access only.*
CRUD for MySQL and PHP
[ "", "php", "mysql", "crud", "" ]
I am looking for a program where I can enter a C++ code snippet in one window, press a button, and get output in another window. Compilation should somehow be hidden behind the button. On a per-snippet basis would be fine; fully interactive is probably asking too much. It should run under Linux/Unix. Main use case would be learning/testing/short debugging, etc. Related stuff I found: -- the Reinteract project for Python (which I'm told Sage has features similar to) -- the same thread for C# here: [C# Console?](https://stackoverflow.com/questions/47537/c-console) -- the CINT interpreter from the CERN ROOT project (which may be close, but maybe there are more comfortable apps around) -- some programs called Quickly Compile or Code Snippet, which are M$.
<http://codepad.org/> works nicely for this purpose. By default, it will run what you paste when you hit submit and display the result (or any errors you might have).
Dinkumware has a page for this AND you can choose the compiler <http://dinkumware.com/exam/default.aspx>
C++ interpreter / console / snippet compiler
[ "", "c++", "console", "interpreter", "interactive", "" ]
I have an image, taken from a live webcam, and I want to be able to detect a specific object in the image and extract that portion of it to do some further processing. Specifically, the image would be of a game board, let's say for the purposes of this question that it's a Sudoku game board. My initial approach was to look for contrasting areas and work it out from there, but I seem to end up with a lot of potential edges (many erroneous) and no real clue as to how to work out which ones are the ones I actually want! Are there any algorithms, libraries, code samples, or even just bright ideas out there, as to how I would go about finding and extracting the relevant part of the image?
use the free [AForge.Net](http://www.aforgenet.com/) image processing library for this. there's a ton of cool stuff to play with.
You need to perform filter and mask operations on the image. **I don't think there is a simple way** to just fetch an object from the image; you need to use edge-detection algorithms, clipping, and criteria for valid objects/images. You can also use image thresholding to detect objects. You may want to look at the image processing libraries below. 1. [Filters](http://filters.sourceforge.net/) API for C, C++, C#, Visual Basic .NET, Delphi, Python 2. [http://www.catenary.com/](http://catenary.com/) 3. [CIMG](http://cimg.sourceforge.net/) richer than the above libraries, however it is written in C++
Detect an object in a camera image in C#
[ "", "c#", ".net", "image-processing", "edge-detection", "" ]
Given the following methods: ``` // Method 1 void add(const std::string& header, bool replace); //Method 2 void add(const std::string& name, const std::string& value); ``` It would appear that the following code will end up calling method 1 instead of method 2: ``` something.add("Hello", "World"); ``` I ended up creating another method that looks like this: ``` //Method 3 void MyClass::add(const char* name, const char* value) { add(std::string(name), std::string(value)); } ``` It worked. So it would seem that when a method accepts a "quoted string" it will match in the following order: 1. `const char*` 2. `bool` 3. `std::string` Why would a quoted string be treated as a `bool` before a `std::string`? Is this the usual behavior? I have written a decent amount of code for this project and haven't had any other issues with the wrong method signature being selected...
My guess is the conversion from pointer to bool is an implicit primitive type conversion, where the conversion to `std::string` requires the call of a constructor and the construction of a temporary.
In your case you have overloaded functions. Overload resolution occurs according to Section 13.3. **C++03 13.3.3.2/2:** > When comparing the basic forms of implicit conversion sequences (as defined in 13.3.3.1) > — a standard conversion sequence (13.3.3.1.1) is a better conversion sequence than a user-defined conversion sequence or an ellipsis conversion sequence, and > — a user-defined conversion sequence (13.3.3.1.2) is a better conversion sequence than an ellipsis conversion sequence (13.3.3.1.3). Conversion of a pointer to `bool` is a standard conversion. Conversion of a pointer to `std::string` is a user-defined conversion. > **4.12 Boolean conversions** > An rvalue of arithmetic, enumeration, **pointer**, or pointer to member type can be converted to an rvalue of type **bool**. A zero value, null pointer value, or null member pointer value is converted to false; any other value is converted to true.
Why does a quoted string match bool method signature before a std::string?
[ "", "c++", "string", "arguments", "boolean", "" ]
I was wondering if enumeration is commonly used with user input. I'm doing an exercise in which in my Book class I have to create an enum Genre with different genre enumerators such as fiction, non-fiction, etc. When the user uses the program, he/she is asked for certain information about the book being stored. For a genre, normally I would just do this with a string function and restrict it to certain names with if statements. However, I'm not sure how to accomplish the same process with an enumerated type, nor do I know if it's even supposed to be used for that sort of thing. Here is the code if you're interested. ``` #include "std_lib_facilities.h" //Classes----------------------------------------------------------------------- class Book{ public: Book(){}; // default constructor //operators friend ostream& operator<<(ostream& out, const Book& val); bool operator==(const Book& check); //enumerators enum Genre{ fiction, nonfiction, periodical, biography, children}; //member functions string title(); string author(); int copyright(); void ISBN(); bool checkout(); private: string title_; string author_; int copyright_; int ISBN1; int ISBN2; int ISBN3; char ISBN4; bool checkout_; }; // Error Function--------------------------------------------------------------- void _error(const string& s) { cout << endl; cout << "Error: " << s << endl; cout << endl; } // Member Functions------------------------------------------------------------- string Book::title() { cout << "Title: "; getline(cin,title_); cout << endl; return title_; } string Book::author() { cout << "Author: "; getline(cin,author_); cout << endl; return author_; } int Book::copyright() { cout << "Copyright: "; cin >> copyright_; cout << endl; return copyright_; } void Book::ISBN() { cout << "ISBN (Use spaces): "; cin >> ISBN1 >> ISBN2 >> ISBN3 >> ISBN4; if((ISBN1<0) || (ISBN2<0) || (ISBN3<0) || (ISBN1>9) || (ISBN2>9) || (ISBN3)>9) _error("Must be single digit."); else if(!isdigit(ISBN4) && 
!isalpha(ISBN4)) _error("Must be single digit or letter."); else{ cout << endl; return;} } bool Book::checkout() { char check; cout << "Checked out?(Y or N): "; cin >> check; switch(check){ case 'Y': cout << endl; return true; break; case 'N': cout << endl; return false; break; default: _error("Must be Y or N.");} } // Operator Overloads----------------------------------------------------------- ostream& operator<<(ostream& out, const Book& val){ out << "Title: " << val.title_ << endl; out << "Author: " << val.author_ << endl; out << "ISBN: " << val.ISBN1 << "-" << val.ISBN2 << "-" << val.ISBN3 << "-" << val.ISBN4 << endl; out << endl; return out;} bool Book::operator==(const Book& check){ return((ISBN1 == check.ISBN1) && (ISBN2 == check.ISBN2) && (ISBN3 == check.ISBN3) && (ISBN4 == check.ISBN4));} // Main------------------------------------------------------------------------- int main() { bool finished = false; char notfinished; while(!finished) { Book book; book.title(); book.author(); book.copyright(); book.ISBN(); book.checkout(); cout << "Do you wish to store another book?(Y or N): "; cin >> notfinished; if(notfinished == 'Y'){ cin.ignore(); cout << endl;} else if(notfinished == 'N') finished = true; else _error("Must be Y or N"); } keep_window_open(); } ``` Note that some things aren't being used at the moment because the feature they are a part of hasn't been fully implemented yet (storing in a library, outputting books, etc.) So what would it take to accept user input for the enumerators listed, if even possible? I was thinking something along the lines of making a Genre variable. Then having a function where the user inputs for cin>>variable. However, I'm guessing that the function wouldn't understand an input like 'fiction' and would only accept the enumerator values and input.
C-style enums are not terribly useful for this purpose, since there's no way to recover the original string name. You could make some `switch`-based mechanism, but at that point you may as well just set up your own way of doing it all that works with your user I/O requirements without shoehorning.
Make Genre a class that wraps the enum type (GenreTypeEnum). Add the necessary operators, e.g. istream, ostream, equal operator, etc. Inside the istream operator, you can read a std::string from the stream and then parse and convert the value to the associated GenreTypeEnum. Something like this perhaps: ``` namespace GenreType { enum GenreTypeEnum { miscellaneous, fiction, non_fiction, children }; } class Genre { public: Genre() : genreType( GenreType::miscellaneous) {} ~Genre() {} void setType( std::string genreTypeString ){ // implement string-> enum } std::string toString( void ) const { // convert genre back to string } private: GenreType::GenreTypeEnum genreType; }; std::ostream& operator<<( std::ostream& os, const Genre& genre ) { os << genre.toString(); return os; } std::istream& operator>>( std::istream& is, Genre& genre ) { std::string input; is >> input; genre.setType( input ); return is; } ```
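The same parse-and-reject idea can be sketched in Python with its enum module (enumerator names taken from the question; the helper name is invented):

```python
from enum import Enum

class Genre(Enum):
    FICTION = "fiction"
    NONFICTION = "nonfiction"
    PERIODICAL = "periodical"
    BIOGRAPHY = "biography"
    CHILDREN = "children"

def parse_genre(text):
    """Map raw user input to an enumerator; return None if unrecognized."""
    try:
        return Genre(text.strip().lower())
    except ValueError:
        return None

print(parse_genre("  Fiction "))  # Genre.FICTION
print(parse_genre("cookbook"))    # None
```

In the C++ version, the `setType` string-to-enum mapping plays exactly this role, with the istream operator doing the reading.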
Enumerations and User Input
[ "", "c++", "" ]
I have been using BinaryFormatter to serialise data to disk but it doesn't seem very scalable. I've created a 200Mb data file but am unable to read it back in (End of Stream encountered before parsing was completed). It tries for about 30 minutes to deserialise and then gives up. This is on a fairly decent quad-cpu box with 8Gb RAM. I'm serialising a fairly large complicated structure. htCacheItems is a Hashtable of CacheItems. Each CacheItem has several simple members (strings + ints etc) and also contains a Hashtable and a custom implementation of a linked list. The sub-hashtable points to CacheItemValue structures which is currently a simple DTO which contains a key and a value. The linked list items are also equally simple. The data file that fails contains about 400,000 CacheItemValues. Smaller datasets work well (though takes longer than i'd expect to deserialize and use a hell of a lot of memory). ``` public virtual bool Save(String sBinaryFile) { bool bSuccess = false; FileStream fs = new FileStream(sBinaryFile, FileMode.Create); try { BinaryFormatter formatter = new BinaryFormatter(); formatter.Serialize(fs, htCacheItems); bSuccess = true; } catch (Exception e) { bSuccess = false; } finally { fs.Close(); } return bSuccess; } public virtual bool Load(String sBinaryFile) { bool bSuccess = false; FileStream fs = null; GZipStream gzfs = null; try { fs = new FileStream(sBinaryFile, FileMode.OpenOrCreate); if (sBinaryFile.EndsWith("gz")) { gzfs = new GZipStream(fs, CompressionMode.Decompress); } //add the event handler ResolveEventHandler resolveEventHandler = new ResolveEventHandler(AssemblyResolveEventHandler); AppDomain.CurrentDomain.AssemblyResolve += resolveEventHandler; BinaryFormatter formatter = new BinaryFormatter(); htCacheItems = (Hashtable)formatter.Deserialize(gzfs != null ? 
(Stream)gzfs : (Stream)fs); //remove the event handler AppDomain.CurrentDomain.AssemblyResolve -= resolveEventHandler; bSuccess = true; } catch (Exception e) { Logger.Write(new ExceptionLogEntry("Failed to populate cache from file " + sBinaryFile + ". Message is " + e.Message)); bSuccess = false; } finally { if (fs != null) { fs.Close(); } if (gzfs != null) { gzfs.Close(); } } return bSuccess; } ``` The resolveEventHandler is just a work around because i'm serialising the data in one application and loading it in another (<http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/e5f0c371-b900-41d8-9a5b-1052739f2521>) The question is, how can I improve this? Is data serialisation always going to be inefficient, am i better off writing my own routines?
I would personally try to avoid the need for the assembly-resolve; that has a certain smell about it. If you **must** use `BinaryFormatter`, then I'd simply put the DTOs into a separate library (dll) that can be used in both applications.

If you don't want to share the dll, then IMO you shouldn't be using `BinaryFormatter` - you should be using a contract-based serializer, such as `XmlSerializer` or `DataContractSerializer`, or one of the "protocol buffers" implementations (and to repeat Jon's disclaimer: I wrote [one of the others](http://code.google.com/p/protobuf-net/)).

200MB does seem pretty big, but I wouldn't have expected it to fail. One possible cause here is the object tracking it does for the references; but even then, this surprises me.

I'd love to see a simplified object model to see if it is a "fit" for any of the above.

---

Here's an example that attempts to mirror your setup from the description using protobuf-net. Oddly enough there seems to be a glitch working with the linked-list, [which I'll investigate](http://code.google.com/p/protobuf-net/issues/detail?id=62); but the rest seems to work:

```
using System;
using System.Collections.Generic;
using System.IO;
using ProtoBuf;

[ProtoContract]
class CacheItem
{
    [ProtoMember(1)]
    public int Id { get; set; }
    [ProtoMember(2)]
    public int AnotherNumber { get; set; }
    private readonly Dictionary<string, CacheItemValue> data
        = new Dictionary<string, CacheItemValue>();
    [ProtoMember(3)]
    public Dictionary<string, CacheItemValue> Data { get { return data; } }

    //[ProtoMember(4)] // commented out while I investigate...
    public ListNode Nodes { get; set; }
}

[ProtoContract]
class ListNode // I'd probably expose this as a simple list, though
{
    [ProtoMember(1)]
    public double Head { get; set; }
    [ProtoMember(2)]
    public ListNode Tail { get; set; }
}

[ProtoContract]
class CacheItemValue
{
    [ProtoMember(1)]
    public string Key { get; set; }
    [ProtoMember(2)]
    public float Value { get; set; }
}

static class Program
{
    static void Main()
    {
        // invent 400k CacheItemValue records
        Dictionary<string, CacheItem> htCacheItems = new Dictionary<string, CacheItem>();
        Random rand = new Random(123456);
        for (int i = 0; i < 400; i++)
        {
            string key;
            CacheItem ci = new CacheItem
            {
                Id = rand.Next(10000),
                AnotherNumber = rand.Next(10000)
            };
            while (htCacheItems.ContainsKey(key = rand.NextString())) { }
            htCacheItems.Add(key, ci);
            for (int j = 0; j < 1000; j++)
            {
                while (ci.Data.ContainsKey(key = rand.NextString())) { }
                ci.Data.Add(key, new CacheItemValue
                {
                    Key = key,
                    Value = (float)rand.NextDouble()
                });
                int tail = rand.Next(1, 50);
                ListNode node = null;
                while (tail-- > 0)
                {
                    node = new ListNode
                    {
                        Tail = node,
                        Head = rand.NextDouble()
                    };
                }
                ci.Nodes = node;
            }
        }
        Console.WriteLine(GetChecksum(htCacheItems));
        using (Stream outfile = File.Create("raw.bin"))
        {
            Serializer.Serialize(outfile, htCacheItems);
        }
        htCacheItems = null;
        using (Stream inFile = File.OpenRead("raw.bin"))
        {
            htCacheItems = Serializer.Deserialize<Dictionary<string, CacheItem>>(inFile);
        }
        Console.WriteLine(GetChecksum(htCacheItems));
    }

    static int GetChecksum(Dictionary<string, CacheItem> data)
    {
        int chk = data.Count;
        foreach (var item in data)
        {
            chk += item.Key.GetHashCode()
                + item.Value.AnotherNumber + item.Value.Id;
            foreach (var subItem in item.Value.Data.Values)
            {
                chk += subItem.Key.GetHashCode()
                    + subItem.Value.GetHashCode();
            }
        }
        return chk;
    }

    static string NextString(this Random random)
    {
        const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 ";
        int len = random.Next(4, 10);
        char[] buffer = new char[len];
        for (int i = 0; i < len; i++)
        {
            buffer[i] = alphabet[random.Next(0, alphabet.Length)];
        }
        return new string(buffer);
    }
}
```
Serialization is tricky, particularly when you want to have some degree of flexibility when it comes to versioning. Usually there's a trade-off between portability and flexibility of what you can serialize. For example, you might want to use [Protocol Buffers](http://code.google.com/p/protobuf/) (disclaimer: I wrote [one of the C# ports](http://code.google.com/p/protobuf-csharp-port/)) as a pretty efficient solution with good portability and versioning - but then you'll need to translate whatever your natural data structure is into something supported by Protocol Buffers. Having said that, I'm surprised that binary serialization is failing here - at least in that particular way. Can you get it to fail with a large file with a very, very simple piece of serialization code? (No resolution handlers, no compression etc.)
c# serialized data
[ "", "c#", "serialization", "data-structures", "" ]
There's a special name for it, but I can't remember what it is. There are two different terms, one for casting an array of a subclass to an array of its superclass, and the other way around.
Array Upcasting and Array Downcasting.

EDIT: Covariance and contravariance are the ability to treat certain types as the same or not the same, depending on composition and inheritance semantics. This has nothing to do with casting arrays as such, which just happens to be one case where covariance comes into play.
Co-variance and contra-variance. Probably.
What's it called when you cast an array of objects to an array of strings?
[ "", "c#", "arrays", "casting", "terminology", "" ]
I am trying to compile a large C++ project and I am getting this strange error. I know that it is a linking error but couldn't figure out what it is exactly.

```
test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray]+0x8): undefined reference to r_Marray<unsigned char>::~r_Marray()'
test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray<unsigned char>]+0xc): undefined reference to r_Marray::~r_Marray()'
test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray]+0x28): undefined reference to `r_Marray::print_status(std::basic_ostream >&) const'
```

What does this error mean? And is it possible to see the line number where the error is happening? How?

I am mainly concerned with what this means:

```
.rodata._ZTV8r_MarrayIhE[vtable for r_Marray]+0x28
```

Actually, my error is like this, but I don't know why everything inside the angle brackets is missing, so I am replacing them with `" "`. Here is the detailed error; it has something to do with template instantiation as well:

```
test_oqlquery.o:(.rodata._ZTV8r_MarrayIhE[vtable for r_Marray"unsigned char"]+0x8): undefined reference to `r_Marray"unsigned char"::~r_Marray()'
```

I am using g++ 4.3.3. Please excuse me, I cannot submit the whole source code here as it is very large and spans multiple directories.

Thanks a lot.
First, linker errors and compiler errors are different things. Since the linker deals with object files rather than source files, compiler errors have a line number but linker errors don't.

Second, it seems that you have declared the destructor for **r\_Marray** but have not implemented it anywhere included in the build. The same goes for **print\_status**.
Either you have not defined **r\_Marray::~r\_Marray()** and **r\_Marray::print\_status**, or the .cpp file containing these methods was not part of your build process. If you do have the .cpp file with these methods defined, please post your Makefile.

Based on your comment to your question, I am assuming that r\_Marray is a templated class? Do you have the definitions for the r\_Marray methods in your header file?
C++ strange compile linker error
[ "", "c++", "g++", "compiler-errors", "" ]
I know how to generate Crash Dump files with ADPlus or DebugDiag, but I'm wondering if there is a way to do this on a customer's computer without installing these tools... specifically, I would like to be able to configure my application (using a registry value, for example) to generate a crash dump in the case of a critical failure. More specifically, I need to be able to do this from a C# application, but I don't mind P/Invoke'ing if necessary. Thanks!
Note that creating a minidump from inside the "failing" process (or even thread) itself is not trivial, and it might not be accurate (see also the Remarks of the [MiniDumpWriteDump](http://msdn.microsoft.com/en-us/library/ms680360(VS.85).aspx) function).

Besides, if your process is in such a bad state that you might need to write a crash dump, the whole situation is typically so hosed that even attempting to create a crash dump could cause another crash (situations like hangs aside, but those might be even harder to "catch" from within the current process).

The "best" thing you can do, if you cannot install separate applications on your client's systems, is to start an external process (which could also fail in critical situations!) and let that create a crash dump of your current process (see [Superassert.NET from John Robbins](https://web.archive.org/web/20090825040700/http://msdn.microsoft.com/en-us/magazine/cc188701.aspx)).

You could even go so far as to put the external binary into your app's resources and extract it from there to disk on startup (so as to minimize failure in critical situations), if you dare.
You can configure Windows Error Reporting (WER) to create a crash dump in a specific directory using the following registry script:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
"DumpFolder"="C:\\Dumps"
"DumpCount"=dword:00000064
"DumpType"=dword:00000002
"CustomDumpFlags"=dword:00000000
```

The dump will go into C:\Dumps with a name that reflects the name of the process that crashed. DumpType=2 gives a full memory dump. DumpType=1 gives a mini dump.

On 64 bit machines, you do not need to put these under the Wow32 nodes. WER only uses the non-WOW registry key specified above.

Depending on the type of crash, this method may not work. I have yet to figure out why or which crash types it doesn't catch. Anyone?
Generating .NET crash dumps automatically
[ "", "c#", ".net", "crash-dumps", "" ]
I have an events-based table from which I would like to produce a query, by minute, for the number of events that were occurring. For example, I have an event table like:

```
CREATE TABLE events (
    session_id TEXT,
    event TEXT,
    time_stamp DATETIME
)
```

Which I have transformed into the following type of table:

```
CREATE TABLE sessions (
    session_id TEXT,
    start_ts DATETIME,
    end_ts DATETIME,
    duration INTEGER
);
```

Now I want to create a query that would group the sessions by a count of those that were active during a particular minute, where I would essentially get back something like:

```
TIME_INTERVAL ACTIVE_SESSIONS
------------- ---------------
18:00         1
18:01         5
18:02         3
18:03         0
18:04         2
```
Ok, I think I got more of what I wanted. It doesn't account for intervals that are empty, but it is good enough for what I need.

```
select strftime('%Y-%m-%dT%H:%M:00.000', start_ts) TIME_INTERVAL,
       (select count(session_id)
          from sessions s2
         where strftime('%Y-%m-%dT%H:%M:00.000', s1.start_ts)
               between s2.start_ts and s2.end_ts) ACTIVE_SESSIONS
  from sessions s1
 group by strftime('%Y-%m-%dT%H:%M:00.000', start_ts);
```

This will generate a row per minute for the period that the data covers, with a count of the sessions that had started (start_ts) but hadn't finished (end_ts).
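For reference, the query can be exercised end-to-end with Python's `sqlite3` module; the table rows and timestamps below are invented sample data for illustration:

```python
import sqlite3

# In-memory database with two overlapping sessions (invented sample data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (session_id TEXT, start_ts DATETIME, end_ts DATETIME, duration INTEGER);
INSERT INTO sessions VALUES
    ('a', '2009-06-01T18:00:00.000', '2009-06-01T18:02:30.000', 150),
    ('b', '2009-06-01T18:01:05.000', '2009-06-01T18:01:45.000', 40);
""")

# Same shape as the query above: for each session's start minute, count
# the sessions whose [start_ts, end_ts] range contains that minute.
rows = conn.execute("""
    SELECT strftime('%Y-%m-%dT%H:%M:00.000', start_ts) AS time_interval,
           (SELECT count(session_id)
              FROM sessions s2
             WHERE strftime('%Y-%m-%dT%H:%M:00.000', s1.start_ts)
                   BETWEEN s2.start_ts AND s2.end_ts) AS active_sessions
      FROM sessions s1
     GROUP BY time_interval
     ORDER BY time_interval
""").fetchall()

for interval, active in rows:
    print(interval, active)
# 2009-06-01T18:00:00.000 1
# 2009-06-01T18:01:00.000 1
```

Note that session 'b' is not counted for minute 18:01 because its start (18:01:05) lies after the truncated minute boundary, which is the behaviour of the BETWEEN comparison above.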
PostgreSQL allows the following query. In contrast to your example, this returns an additional column for the day, and it omits the minutes where nothing happened (count=0).

```
select day, hour, minute, count(*)
from (values ( 0),( 1),( 2),( 3),( 4),( 5),( 6),( 7),( 8),( 9),
             (10),(11),(12),(13),(14),(15),(16),(17),(18),(19),
             (20),(21),(22),(23),(24),(25),(26),(27),(28),(29),
             (30),(31),(32),(33),(34),(35),(36),(37),(38),(39),
             (40),(41),(42),(43),(44),(45),(46),(47),(48),(49),
             (50),(51),(52),(53),(54),(55),(56),(57),(58),(59)) as minutes (minute),
     (values ( 0),( 1),( 2),( 3),( 4),( 5),( 6),( 7),( 8),( 9),
             (10),(11),(12),(13),(14),(15),(16),(17),(18),(19),
             (20),(21),(22),(23)) as hours (hour),
     (select distinct cast(start_ts as date) from sessions
      union
      select distinct cast(end_ts as date) from sessions) as days (day),
     sessions
where (day, hour, minute)
      between (cast(start_ts as date),
               extract(hour from start_ts),
               extract(minute from start_ts))
          and (cast(end_ts as date),
               extract(hour from end_ts),
               extract(minute from end_ts))
group by day, hour, minute
order by day, hour, minute;
```
How do I produce a time interval query in SQLite?
[ "", "sql", "sqlite", "" ]
I was wondering: when would I want to use the `.Value` member on a nullable type instead of just calling the variable itself? E.g., given `bool? b = true;`, why would I use `b.Value` to get the value instead of just using `b`? What advantage or function does the `.Value` call add?
The Value property is read-only and will return the actual value type. The Value property can never be null. If you expect a nullable to hold a value, check .HasValue and then reference Value.

For instance, if you want to assign the value of a Nullable to an ordinary bool then you have to reference its value:

```
bool? nullableBool = null;
if (nullableBool.HasValue)
{
    bool realBool = nullableBool.Value;
}
```

However, the following *won't* compile:

```
bool? nullableBool = true;
bool realBool = nullableBool; // Won't work
```
.Value returns a bool rather than a bool? which will allow you to use it as a parameter to functions that don't expect nullable types.
Nullable variable types - .value member
[ "", "c#", "types", "nullable", "member", "" ]
Class hierarchies and constructors are related. Parameters from a child class need to be passed to their parent. So, in Python, we end up with something like this:

```
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        # do something with a, b, c, ka, kb, kc

class Child(Parent):
    def __init__(self, a, b, c, d, e, f, ka=None, kb=None, kc=None, kd=None, ke=None, kf=None):
        super(Child, self).__init__(a, b, c, ka=ka, kb=kb, kc=kc)
        # do something with d, e, f, kd, ke, kf
```

Imagine this with a dozen child classes and lots of parameters. Adding new parameters becomes very tedious. Of course one can dispense with named parameters completely and use \*args and \*\*kwargs, but that makes the method declarations ambiguous.

Is there a pattern for elegantly dealing with this in Python (2.6)? By "elegantly" I mean I would like to reduce the number of times the parameters appear. a, b, c, ka, kb, kc all appear 3 times: in the Child constructor, in the super() call to Parent, and in the Parent constructor.

Ideally, I'd like to specify the parameters for Parent's `__init__` once, and in Child's `__init__` only specify the additional parameters. I'd like to do something like this:

```
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        print 'Parent: ', a, b, c, ka, kb, kc

class Child(Parent):
    def __init__(self, d, e, f, kd='d', ke='e', kf='f', *args, **kwargs):
        super(Child, self).__init__(*args, **kwargs)
        print 'Child: ', d, e, f, kd, ke, kf

x = Child(1, 2, 3, 4, 5, 6, ka='a', kb='b', kc='c', kd='d', ke='e', kf='f')
```

This unfortunately doesn't work, since 4, 5, 6 end up assigned to kd, ke, kf. Is there some elegant Python pattern for accomplishing the above?
"dozen child classes and lots of parameters" sounds like a problem irrespective of parameter naming. I suspect that a little refactoring can peel out some **Strategy** objects that would simplify this hierarchy and make the super-complex constructors go away.
Well, the only solution I could see is using a mixture of listed variables as well as \*args and \*\*kwargs, as such:

```
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        pass

class Child(Parent):
    def __init__(self, d, e, f, *args, kd=None, ke=None, kf=None, **kwargs):
        Parent.__init__(self, *args, **kwargs)
        pass
```

This way, you could see which parameters are required by each of the classes, but without having to re-type them.

One thing to note is that you lose your desired ordering (a, b, c, d, e, f) as it becomes (d, e, f, a, b, c). I'm not sure if there's a way to have the \*args before the other non-named parameters.
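One caveat: the keyword-only syntax above (`*args, kd=None, ...`) is a SyntaxError on Python 2.6. A 2.6-compatible variant of the same idea pops the child-only keywords out of `**kwargs` before delegating; the sketch below (attribute names are illustrative) shows it, with the same positional-ordering caveat:

```python
class Parent(object):
    def __init__(self, a, b, c, ka=None, kb=None, kc=None):
        self.parent_args = (a, b, c, ka, kb, kc)

class Child(Parent):
    def __init__(self, d, e, f, *args, **kwargs):
        # Remove the keywords this class consumes; everything else,
        # positional or keyword, is passed through to Parent untouched.
        kd = kwargs.pop('kd', None)
        ke = kwargs.pop('ke', None)
        kf = kwargs.pop('kf', None)
        Parent.__init__(self, *args, **kwargs)
        self.child_args = (d, e, f, kd, ke, kf)

# The child's positionals still come first in the call.
x = Child(4, 5, 6, 1, 2, 3, ka='a', kd='d')
print(x.parent_args)  # (1, 2, 3, 'a', None, None)
print(x.child_args)   # (4, 5, 6, 'd', None, None)
```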
Constructor specialization in python
[ "", "python", "design-patterns", "class", "oop", "anti-patterns", "" ]
I am currently using this type of SQL on MySQL to insert multiple rows of values in one single query:

```
INSERT INTO `tbl` (`key1`,`key2`) VALUES ('r1v1','r1v2'),('r2v1','r2v2'),...
```

From my reading on PDO, using prepared statements should give me better security than static queries. I would therefore like to know whether it is possible to generate "inserting multiple rows of values by the use of one query" using prepared statements. If yes, may I know how I can implement it?
**Multiple Values Insert with PDO Prepared Statements**

Inserting multiple values in one execute statement. Why? Because according to [this page](http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html) it is faster than regular inserts.

```
$data[] = ['valueA1', 'valueB1'];
$data[] = ['valueA2', 'valueB2'];
```

more data values, or you probably have a loop that populates data.

This is basically how we want the insert statement to look like:

```
insert into table (fielda, fieldb, ... ) values (?,?...), (?,?...)....
```

So with prepared inserts you need to know the number of fields to create a single VALUES part and the number of rows in order to know how many times to repeat it. Now, the code:

```
// create the ?,? sequence for a single row
$values = str_repeat('?,', count($data[0]) - 1) . '?';

// construct the entire query
$sql = "INSERT INTO table (columnA, columnB) VALUES " .
       // repeat the (?,?) sequence for each row
       str_repeat("($values),", count($data) - 1) . "($values)";

$stmt = $pdo->prepare($sql);
// execute with all values from $data
$stmt->execute(array_merge(...$data));
```

Note that this approach is 100% secure, as the query is constructed entirely of constant parts explicitly written in the code, especially the column names.
Same answer as Mr. Balagtas, slightly clearer...

Recent versions of MySQL and PHP PDO **do** support multi-row `INSERT` statements.

## SQL Overview

The SQL will look something like this, assuming a 3-column table you'd like to `INSERT` to.

```
INSERT INTO tbl_name
    (colA, colB, colC)
VALUES
    (?, ?, ?),
    (?, ?, ?),
    (?, ?, ?)
    [,...]
```

`ON DUPLICATE KEY UPDATE` works as expected even with a multi-row INSERT; append this:

```
ON DUPLICATE KEY UPDATE
    colA = VALUES(colA),
    colB = VALUES(colB),
    colC = VALUES(colC)
```

## PHP Overview

Your PHP code will follow the usual `$pdo->prepare($qry)` and `$stmt->execute($params)` PDO calls.

`$params` will be a 1-dimensional array of *all* the values to pass to the `INSERT`. In the above example, it should contain 9 elements; PDO will use every set of 3 as a single row of values. (Inserting 3 rows of 3 columns each = 9 element array.)

## Implementation

Below code is written for clarity, not efficiency. Work with the PHP `array_*()` functions for better ways to map or walk through your data if you'd like. Given a single query is executed and each query being a transaction on its own, no explicit transaction is required.

Assuming:

* `$dataVals` - multi-dimensional array, where each element is a 1-d array of a row of values to INSERT

### Sample Code

```
// setup data values for PDO. No memory overhead thanks to copy-on-write
$dataToInsert = array();
foreach ($dataVals as $row) {
    foreach ($row as $val) {
        $dataToInsert[] = $val;
    }
}

$onDup = "ON DUPLICATE KEY UPDATE colA=VALUES(colA)"; // optional

// setup the placeholders - a fancy way to make the long "(?, ?, ?)..." string
$rowPlaces = '(' . implode(', ', array_fill(0, count($colNames), '?')) . ')';
$allPlaces = implode(', ', array_fill(0, count($dataVals), $rowPlaces));

$sql = "INSERT INTO `tblName` (`colA`, `colB`, `colC`)" .
       " VALUES $allPlaces $onDup";

// and then the PHP PDO boilerplate
$stmt = $pdo->prepare($sql);
$stmt->execute($dataToInsert);
```
PDO Prepared Inserts multiple rows in single query
[ "", "php", "pdo", "insert", "prepared-statement", "" ]
I want to work around the fact that my WCF service layer can not handle a generic method like this:

```
public void SaveOrUpdateDomainObject<T>(T domainObject)
{
    domainRoot.SaveDomainObject<T>(domainObject);
}
```

so I built this workaround method instead:

```
public void SaveOrUpdateDomainObject(object domainObject, string typeName)
{
    Type T = Type.GetType(typeName);
    var o = (typeof(T))domainObject;
    domainRoot.SaveDomainObject<typeof(T)>(o);
}
```

The problem is this does not compile somehow. I think this is the result of me not fully understanding the difference between:

* `Type T`: I believe this is an object of type "Type"
* the result of `typeof(T)`: I believe this results in a non-object type version of the type of T (I don't know how to say this exactly)
You don't need `typeName`: you have to either pass around `Type` instances, or use `object.GetType()` to retrieve the object's run-time type. In either case,

```
MethodInfo genericSaveMethod = domainRoot.GetType().GetMethod("SaveDomainObject");
MethodInfo closedSaveMethod =
    genericSaveMethod.MakeGenericMethod(domainObject.GetType());
closedSaveMethod.Invoke(domainRoot, new object[] { domainObject });
```
Unfortunately, something like this is quite hard in C#. It's easy to get the correct Type instance from a string, just like you did, but you'll have to use reflection to get the right method. Try something along the lines of:

```
public void SaveOrUpdateDomainObject(object domainObject, string typeName)
{
    Type T = Type.GetType(typeName);
    MethodInfo genericMethod = domainRoot.GetType().GetMethod("SaveDomainObject");
    MethodInfo method = genericMethod.MakeGenericMethod(T);
    method.Invoke(domainRoot, new object[] { domainObject });
}
```
c# casting to type gotten from typename as string
[ "", "c#", "casting", "types", "typeof", "gettype", "" ]
I'm using `TextWriterTraceListener` to log diagnostics messages to a text file. However I wan't also to log a timestamp of every trace message added. Is it possible to define a kind of formatter for the listener that would automatically add timestamps? Currently I'm adding timestamps manually on every `Trace.WriteLine()` call but this isn't very comfortable.
I suggest you use [Log4Net](http://logging.apache.org/log4net/index.html) instead, which has a lot more customizability.

Alternatively you could write your own `TraceListener` implementation which puts the timestamps on for you. You *may* even be able to just derive from `TextWriterTraceListener` and override `Write` and `WriteLine`:

```
public override void Write(string x)
{
    // Use whatever format you want here...
    base.Write(string.Format("{0:r}: {1}", DateTime.UtcNow, x));
}

public override void WriteLine(string x)
{
    // Use whatever format you want here...
    base.WriteLine(string.Format("{0:r}: {1}", DateTime.UtcNow, x));
}
```

As noted in comments, this ends up with date duplication for `TraceInformation`, because that calls `Write` twice. Using a "proper" logging framework is definitely better.
I recently encountered a similar situation and it looks like now we have a tool very much fit for the task, namely [Essential Diagnostics](https://github.com/sgryphon/essential-diagnostics). You set up a listener in app.config like in the code below and then just place `Essential.Diagnostics.dll` into the same folder. **NO RECOMPILING IS REQUIRED.** You can use this with any application that uses System.Diagnostics for tracing, even if you do not own the source. Isn't that marvelous?

```
<sharedListeners>
  <add name="rollingfile"
       type="Essential.Diagnostics.RollingFileTraceListener, Essential.Diagnostics"
       initializeData="{ApplicationName}-{DateTime:yyyy-MM-dd}.log"
       convertWriteToEvent="true"
       template="{DateTime:yyyy-MM-dd HH:mm:ss.fff} {Message}{Data}" />
</sharedListeners>
```
Formatting trace output
[ "", "c#", "debugging", "formatting", "trace", "trace-listener", "" ]
Is it possible to keep all my database related configuration (hostnames, usernames, passwords, and databases) as well as the function to connect to and select the correct database in a separate class? I tried something like this:

```
class Database {
    var $config = array(
        'username' => 'someuser',
        'password' => 'somepassword',
        'hostname' => 'some_remote_host',
        'database' => 'a_database'
    );

    function __construct() {
        $this->connect();
    }

    function connect() {
        $db = $this->config;
        $conn = mysql_connect($db['hostname'], $db['username'], $db['password']);
        if (!$conn) {
            die("Cannot connect to database server");
        }
        if (!mysql_select_db($db['database'])) {
            die("Cannot select database");
        }
    }
}
```

And then in another class I would use this in the class's `__construct` function:

```
require_once('database.php');
var $db_conn = new Database();
```

But this doesn't save the connection; it ends up defaulting to the server's local db connection. Or do I have to do the database connection commands every time before I execute some database commands?
I modified your class to work as you seem to be expecting it to:

```
<?php
class Database {
    var $conn = null;

    var $config = array(
        'username' => 'someuser',
        'password' => 'somepassword',
        'hostname' => 'some_remote_host',
        'database' => 'a_database'
    );

    function __construct() {
        $this->connect();
    }

    function connect() {
        if (is_null($this->conn)) {
            $db = $this->config;
            $this->conn = mysql_connect($db['hostname'], $db['username'], $db['password']);
            if (!$this->conn) {
                die("Cannot connect to database server");
            }
            if (!mysql_select_db($db['database'])) {
                die("Cannot select database");
            }
        }
        return $this->conn;
    }
}
```

Usage:

```
$db = new Database();
$conn = $db->connect();
```

Note that you can call connect() as many times as you like and it will use the current connection, or create one if it doesn't exist. This is a **good thing**.

Also, note that each time you **instantiate** a Database object (using new) you will be creating a new connection to the database. I suggest you look into implementing your Database class as a [Singleton](http://en.wikipedia.org/wiki/Singleton_pattern) or storing it in a [Registry](http://martinfowler.com/eaaCatalog/registry.html) for global access. You can also do it the dirty way and shove it in $GLOBALS.

**Edit**

I took the liberty of modifying your class to implement the Singleton pattern, and follow the PHP5 OOP conventions.

```
<?php
class Database {
    protected static $_instance = null;

    protected $_conn = null;

    protected $_config = array(
        'username' => 'someuser',
        'password' => 'somepassword',
        'hostname' => 'some_remote_host',
        'database' => 'a_database'
    );

    protected function __construct() {
    }

    public static function getInstance() {
        if (null === self::$_instance) {
            self::$_instance = new self();
        }
        return self::$_instance;
    }

    public function getConnection() {
        if (is_null($this->_conn)) {
            $db = $this->_config;
            $this->_conn = mysql_connect($db['hostname'], $db['username'], $db['password']);
            if (!$this->_conn) {
                die("Cannot connect to database server");
            }
            if (!mysql_select_db($db['database'])) {
                die("Cannot select database");
            }
        }
        return $this->_conn;
    }

    public function query($query) {
        $conn = $this->getConnection();
        return mysql_query($query, $conn);
    }
}
```

Usage:

```
$res = Database::getInstance()->query("SELECT * FROM foo;");
```

or

```
$db = Database::getInstance();
$db->query("UPDATE foo");
$db->query("DELETE FROM foo");
```
You can certainly keep your connection info in a separate file. Just save your connection object - $conn in your connect() function - in a class variable. You'll then be able to reuse it across calls.
MySQL database config in a separate class
[ "", "php", "mysql", "database", "oop", "mysql-connect", "" ]
I'm starting to write a code syntax highlighter in JavaScript, and I want to highlight text that is in quotes (both `"`s and `'`s) in a certain color. I need it to not be thrown off by one type of quote appearing in the middle of a pair of the other type of quotes, but I'm really not sure where to even start. I'm not sure how I should go about finding the quotes and then finding the correct end quote.
Unless you're doing this for the challenge, have a look at [Google Code Prettify](http://code.google.com/p/google-code-prettify/).

For your problem, you could read up on [parsing (and lexers) at Wikipedia](http://en.wikipedia.org/wiki/Parsing). It's a huge topic and you'll find that you'll come upon bigger problems than parsing strings.

To start, you *could* use regular expressions (although they rarely have the accuracy of a true lexer.) A typical regular expression for matching a string is:

```
/"(?:[^"\\]+|\\.)*"/
```

And then the same for `'` instead of `"`.

Otherwise, for a character-by-character parser, you would set some kind of state that you're in a string once you hit `"`, then when you hit a `"` that is not preceded by an uneven number of backslashes (an even number of backslashes would escape each other), you exit the string.
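A minimal sketch of that character-by-character idea (illustrative, not production code): scan for an opening quote, then walk forward, skipping backslash-escaped characters, until the *same* quote character closes the string. Quotes of the other kind inside are naturally ignored:

```javascript
// Return [start, end] index pairs of string literals found in `src`.
function findStrings(src) {
  var spans = [];
  var i = 0;
  while (i < src.length) {
    var ch = src.charAt(i);
    if (ch === '"' || ch === "'") {
      var start = i++;
      while (i < src.length) {
        if (src.charAt(i) === '\\') { i += 2; continue; } // skip escaped char
        if (src.charAt(i) === ch) break;                  // matching close quote
        i++;
      }
      spans.push([start, Math.min(i, src.length - 1)]);
    }
    i++;
  }
  return spans;
}

console.log(findStrings("a = 'one' + \"two\"")); // [ [ 4, 8 ], [ 12, 16 ] ]
```

A highlighter would then wrap each returned span in a styled element rather than just reporting indices.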
You can find quotes using regular expressions but if you're writing a syntax highlighter then the only reliable way is to step through the code, character by character, and decide what to do from there. E.g. of a Regex ``` /("|')((?:\\\1|.)+?)\1/g ``` (matches "this" and 'this' and "thi\"s")
Finding beginning and end quotations
[ "", "javascript", "syntax-highlighting", "" ]
I'd like to save the state of my machine before shutdown (for machines that do not support hibernate). Essentially, I'm trying to mimic the Windows Hibernate feature: when the machine is turned back on, it looks exactly like it did before being shut down. Any ideas on using managed code to perform this task? Currently using/considering Windows XP Service Pack 2.
For all applications running on your computer, this is simply not possible using pure managed code. In fact, even with unmanaged code you will have a hell of a time. I wouldn't say it's impossible, but it would likely be extremely difficult and time consuming. Here are a few helpful resources to get you started:

Arun Kishan on Windows Kernel <http://www.dotnetrocks.com/default.aspx?ShowNum=434>

Core Dump <http://en.wikipedia.org/wiki/Core_dump>

setcontext <http://en.wikipedia.org/wiki/Setcontext>

Raymond Chen on "Hibernating" single processes <http://blogs.msdn.com/oldnewthing/archive/2004/04/20/116749.aspx>

For your own application, your best bet is to isolate all of the state you would like to be able to restore into a set of serializable classes. Then, when your application is unloaded (or periodically), save this data to disk using XmlSerializer. When your application is loaded again, use the XmlSerializer again to rehydrate your classes holding the state of your application and use this information to return the user interface to the previous state. If you have complex user interfaces this could be a time consuming task.
Maybe [the Vista Application Recovery API](http://community.bartdesmet.net/blogs/bart/archive/2006/11/11/Windows-Vista-Application-Recovery-with-C_2300_.aspx) does help. Requires Vista though.
Saving state before shutting down using .NET
[ "", "c#", ".net", "shutdown", "" ]
If there are types that I want to modify to meet the project requirements, what type of restrictions are there for this? By modifying I mean: 1. Finding the type you are interested that is closest to what you need. 2. Using the reflector to disassemble it. 3. Modifying it based on your own spec. 4. Using the new type (in another namespace) in your application. 5. Release your app as commercial or open source. Is it totally OK to do this?
No, *very* definitely not. Even if you *can* do it, it would violate the .NET framework licence. Whether you'd *actually* be in legal hot water would depend on your country probably, but you'd definitely want to consult a lawyer. If you're intending on copying the functionality into a *new* type, then you'd probably have to bring in a load of other internal types and potentially other oddities. Just how much Microsoft code do you want to have to copy? Then you've got problems if the original type is updated in a new release, etc... Just don't do it - ask a separate question for how to work around whatever deficiency you've run into, but don't start taking the framework code and putting a modified version into your own code base. --- EDIT: I originally thought Joan was talking about modifying the code, rebuilding, and then *replacing* the BCL class in the framework. Here were my thoughts on that idea... It's a really bad idea for technical reasons though: * You wouldn't be able to sign the assembly properly again, meaning that at best you could only run apps under full trust (which I don't *think* checks assembly signatures). * How would you deploy your new type? You'd quite possibly have to deploy it "over the top" of the real one, which would affect anyone else using .NET. BCL types just aren't designed to be replaced like this.
That is not OK. The code you extracted using Reflector is copyrighted by Microsoft and some portions may even be patented. Either way, you cannot redistribute someone else's code as your own without explicit permission. What you *can* do is re-create your own implementation from scratch. Describing what a class, object, or pattern *does* is not copyrighted (though it could feasibly, but not usually, be patented). You can describe what a string class, for example, should do, and write your own from scratch which also does that. Legal implications aside, I am having trouble imagining a scenario where this would be a good idea from a technical perspective. If you step back and re-frame your problem in terms of the original thing you need to accomplish, you can probably come up with a much better solution. Or better yet, post *that* problem on Stackoverflow and get a lot more brains on it.
What types of restrictions are in place to keep me from modifying built in types in .NET?
[ "", "c#", ".net", "" ]
I've created a Display object that contains the header, sidebar and footer: ``` class Display { protected $framework; public $mysql; public function __construct() { $this->mysql = new MySQL(); $this->framework .= $this->header(); $this->framework .= $this->body(); $this->framework .= $this->sidebar(); $this->framework .= $this->footer(); } private function header (){ /* blah */ } private function body (){ } private function sidebar (){ /* blah */ } private function footer (){ /* blah */ } public function displayPage(){ print $this->framework; } } ``` On each page I've created a object that extends the Display object, with the code for the body: ``` class IndexPHP extends Display { public function body(){ $this->user = new User(); return '<div class="body">Hello ' . $this->user->getName() . '</div>'; } } $page = new IndexPHP(); $page->displayPage(); ``` Have I created a problem by nesting the objects too much? For example, in the User object, how do I access the already initiated MySQL object? ``` class User { protected $name; public function __construct() { $this->id = /* MySQL object query here */ } } ```
The issue with the approach you've given is that you're not following any sort of "separation of powers" principles. Your Display object has a database in it, along with the logic as to how to connect to it; that's probably not the best approach ("the God object"). It may be a better idea to follow MVC (model-view-controller) principles, where you have one class that knows something about your model (database), another that knows how to transform the model into objects that will be presented (controller), and the third that actually shows the data with all of its CSS goodness (view, frequently just a PHP template file). I'd recommend you take a look at an existing MVC framework - I use QCubed (<http://qcu.be>), there are others - Symfony, Zend, CakePHP. They all offer you a great way to separate your code cleanly, which ultimately results in maintainability.
> For example, in the User object, how do I access the already initiated MySQL object? You pass it in the constructor: ``` class User { protected $name; public function __construct($mysql) { $this->id = $mysql->something(); } } ```
Is this bad Object Oriented PHP?
[ "", "php", "oop", "" ]
I'm trying to read an RSS feed from Flickr but it has some nodes which are not readable by Simple XML (`media:thumbnail`, `flickr:profile`, and so on). How do I get round this? My head hurts when I look at the documentation for the DOM. So I'd like to avoid it as I don't want to learn. I'm trying to get the thumbnail by the way.
The solution is explained in [this nice article](http://www.sitepoint.com/blogs/2005/10/20/simplexml-and-namespaces/). You need the `children()` method for accessing XML elements which contain a namespace. This code snippet is quoted from the article: ``` $feed = simplexml_load_file('http://www.sitepoint.com/recent.rdf'); foreach ($feed->item as $item) { $ns_dc = $item->children('http://purl.org/dc/elements/1.1/'); echo $ns_dc->date; } ```
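The same namespace-aware access exists in other XML libraries as well. For comparison only (this is not part of the PHP answer above), Python's standard-library ElementTree addresses a `dc:` element by its full namespace URI rather than its prefix:

```python
import xml.etree.ElementTree as ET

# Minimal RSS 1.0-style snippet with a Dublin Core namespace,
# mirroring the dc:date lookup from the quoted article.
RSS = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns:dc="http://purl.org/dc/elements/1.1/"
                  xmlns="http://purl.org/rss/1.0/">
  <item><dc:date>2009-10-20</dc:date></item>
</rdf:RDF>"""

root = ET.fromstring(RSS)
# ElementTree spells a namespaced tag as {namespace-uri}localname.
DC = "http://purl.org/dc/elements/1.1/"
dates = [e.text for e in root.iter("{%s}date" % DC)]
```

Either way the point is the same as with SimpleXML's `children()`: prefixed elements are addressed through their namespace, not through the literal `dc:` name.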
With the latest version, you can now reference colon nodes with curly brackets. ``` $item->{'itunes:duration'} ```
Simple XML - Dealing With Colons In Nodes
[ "", "php", "xml", "simplexml", "" ]
How can I encode an integer with base 36 in Python and then decode it again?
Have you tried Wikipedia's sample code? ``` def base36encode(number, alphabet='0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'): """Converts an integer to a base36 string.""" if not isinstance(number, int): raise TypeError('number must be an integer') base36 = '' sign = '' if number < 0: sign = '-' number = -number if 0 <= number < len(alphabet): return sign + alphabet[number] while number != 0: number, i = divmod(number, len(alphabet)) base36 = alphabet[i] + base36 return sign + base36 def base36decode(number): return int(number, 36) print(base36encode(1412823931503067241)) print(base36decode('AQF8AA0006EH')) ```
I wish I had read this before. Here is the answer: ``` def base36encode(number): if not isinstance(number, (int, long)): raise TypeError('number must be an integer') is_negative = number < 0 number = abs(number) alphabet, base36 = ['0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ', ''] while number: number, i = divmod(number, 36) base36 = alphabet[i] + base36 if is_negative: base36 = '-' + base36 return base36 or alphabet[0] def base36decode(number): return int(number, 36) print(base36encode(1412823931503067241)) print(base36decode('AQF8AA0006EH')) assert(base36decode(base36encode(-9223372036721928027)) == -9223372036721928027) ```
Python base 36 encoding
[ "", "python", "" ]
I am wondering how you would write a simple java method finding the closest Integer to a given value in a sorted Integer list. Here is my first attempt: ``` public class Closest { private static List<Integer> integers = new ArrayList<Integer>(); static { for (int i = 0; i <= 10; i++) { integers.add(Integer.valueOf(i * 10)); } } public static void main(String[] args) { Integer closest = null; Integer arg = Integer.valueOf(args[0]); int index = Collections.binarySearch( integers, arg); if (index < 0) /*arg doesn't exist in integers*/ { index = -index - 1; if (index == integers.size()) { closest = integers.get(index - 1); } else if (index == 0) { closest = integers.get(0); } else { int previousDate = integers.get(index - 1); int nextDate = integers.get(index); if (arg - previousDate < nextDate - arg) { closest = previousDate; } else { closest = nextDate; } } } else /*arg exists in integers*/ { closest = integers.get(index); } System.out.println("The closest Integer to " + arg + " in " + integers + " is " + closest); } } ``` What do you think about this solution ? I am sure there is a cleaner way to do this job. Maybe such method exists somewhere in the Java libraries and I missed it ?
try this little method: ``` public int closest(int of, List<Integer> in) { int min = Integer.MAX_VALUE; int closest = of; for (int v : in) { final int diff = Math.abs(v - of); if (diff < min) { min = diff; closest = v; } } return closest; } ``` some testcases: ``` private final static List<Integer> list = Arrays.asList(10, 20, 30, 40, 50); @Test public void closestOf21() { assertThat(closest(21, list), is(20)); } @Test public void closestOf19() { assertThat(closest(19, list), is(20)); } @Test public void closestOf20() { assertThat(closest(20, list), is(20)); } ```
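Since the question's list is sorted, the binary-search shape of the original attempt can also be kept at O(log n). Here is that idea sketched in Python with the standard-library `bisect` module (a cross-language illustration only, not a replacement for the Java above):

```python
import bisect

def closest(sorted_values, target):
    """Return the element of sorted_values nearest to target, O(log n)."""
    i = bisect.bisect_left(sorted_values, target)
    if i == 0:
        return sorted_values[0]
    if i == len(sorted_values):
        return sorted_values[-1]
    before, after = sorted_values[i - 1], sorted_values[i]
    # Ties go to the later element, matching the strict `<` in the question.
    return before if target - before < after - target else after
```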
Kotlin is so helpful ``` fun List<Int>.closestValue(value: Int) = minBy { abs(value - it) } val values = listOf(1, 8, 4, -6) println(values.closestValue(-7)) // -6 println(values.closestValue(2)) // 1 println(values.closestValue(7)) // 8 ``` List doesn't need to be sorted BTW Edit: since kotlin 1.4, `minBy` is deprecated. Prefer `minByOrNull` ``` @Deprecated("Use minByOrNull instead.", ReplaceWith("this.minByOrNull(selector)")) @DeprecatedSinceKotlin(warningSince = "1.4") ```
Find closest value in an ordered list
[ "", "java", "" ]
Is there a more efficient way of doing the following SQL? I want to select the top 50 results, but I also want to set a variable to tell me if I would have gotten more results back without the TOP ``` DECLARE @MoreExists BIT SET @MoreExists = 0 DECLARE @Count INT SELECT @Count = Count(*) FROM MyTable WHERE ... --some expensive where clause IF @Count > 50 SET @MoreExists = 1 SELECT TOP 50 Field1, Field2, ... FROM MyTable WHERE ... --same expensive where clause ```
Select 51 results instead, use the top 50 in the client layer, and use the count to know if there are more.
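The client-layer half of that idea is tiny in any language; a small Python sketch for illustration (the names are invented for the example, and `rows` stands for whatever the TOP 51 query returned):

```python
def page_with_more_flag(rows, page_size=50):
    """Given up to page_size + 1 fetched rows, return the visible page
    and a flag saying whether more rows exist beyond it."""
    more_exists = len(rows) > page_size
    return rows[:page_size], more_exists
```

This way the expensive WHERE clause runs once, and the 51st row is never shown, only used as the "more exists" signal.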
A spin on @Dougs answer ``` SET NOCOUNT ON SELECT TOP 51 Field1, Field2, ... into #t FROM MyTable WHERE ... --same expensive where clause if @@rowcount > 50 SET @MoreExists = 1 SET NOCOUNT OFF SELECT TOP 50 Field1, Field2, ... from #t -- maintain ordering with an order by clause ```
Select TOP N and set variable if more could be selected
[ "", "sql", "sql-server", "sql-server-2008", "" ]
Is it true that String.Format works in 2 ways: if we use a built-in format such as C, N, P... it will take locale settings into account? If we use a custom format code such as #,##0.000 it will NOT take locale settings into account? In my code, I use a method like this: String.Format("{0:#.##0,000}", value); because my country uses a comma as the decimal separator, but the result is still 1,234.500, as if it considers the dot to be the decimal separator. Please help!
You want to use [CultureInfo](http://msdn.microsoft.com/en-us/library/system.globalization.cultureinfo(VS.80).aspx): ``` value.ToString("N", new CultureInfo("vi-VN")); ``` Using [`String.Format`](http://msdn.microsoft.com/en-us/library/1ksz8yb7(VS.80).aspx): ``` String.Format(new CultureInfo("vi-VN"), "{0:N}", value); ``` Since you're in Hanoi (from profile), I used Vietnam's code, which is `vi-VN`.
This works. The formatted value is `123.456,789` which is correct per `es-ES` ``` IFormatProvider iFormatProvider = new System.Globalization.CultureInfo("es-ES"); var value = 123456.789001m; string s = value.ToString("#,##0.000", iFormatProvider); string s2 = string.Format(iFormatProvider, "{0:#,##0.000}", value); FormattableString fs = $"{value:#,##0.000}"; string s3 = fs.ToString(iFormatProvider); ``` Note that the `,` and `.` are using a 'standard' en-US style, but `.ToString()` and `string.Format()` with a format provider does the right thing.
String.Format consider locale or not?
[ "", "c#", "internationalization", "" ]
I'm trying to translate an Excel spreadsheet to CSV using the Python xlrd and csv modules, but am getting hung up on encoding issues. Xlrd produces output from Excel in Unicode, and the CSV module requires UTF-8. I imagine that this has nothing to do with the xlrd module: everything works fine outputting to stdout or other outputs that don't require a specific encoding. The worksheet is encoded as UTF-16-LE, according to `book.encoding`. The simplified version of what I'm doing is: ``` from xlrd import * import csv b = open_workbook('file.xls') s = b.sheet_by_name('Export') bc = open('file.csv','w') bcw = csv.writer(bc,csv.excel,b.encoding) for row in range(s.nrows): this_row = [] for col in range(s.ncols): this_row.append(s.cell_value(row,col)) bcw.writerow(this_row) ``` This produces the following error, about 740 lines in: ``` UnicodeEncodeError: 'ascii' codec can't encode character u'\xed' in position 5: ordinal not in range(128) ``` The value it seems to be getting hung up on is "516-777316" -- the text in the original Excel sheet is "516-7773167" (with a 7 on the end). I'll be the first to admit that I have only a vague sense of how character encoding works, so most of what I've tried so far are various fumbling permutations of `.encode` and `.decode` on `s.cell_value(row,col)`. If someone could suggest a solution I would appreciate it -- even better if you could provide an explanation of what's not working and why, so that I can more easily debug these problems myself in the future. Thanks in advance! **EDIT:** Thanks for the comments so far. When I use `this_row.append(s.cell(row,col))` (i.e. s.cell instead of s.cell\_value) the entire document writes without errors. The output isn't particularly desirable (`text:u'516-7773167'`), but it avoids the error even though the offending characters are still in the output. This makes me think that the challenge might be in xlrd after all. Thoughts?
I expect the `cell_value` return value is the unicode string that's giving you problems (please print its `type()` to confirm that), in which case you should be able to solve it by changing this one line: ``` this_row.append(s.cell_value(row,col)) ``` to: ``` this_row.append(s.cell_value(row,col).encode('utf8')) ``` If `cell_value` is returning multiple different types, then you need to encode if and only if it's returning a unicode string; so you'd split this line into a few lines: ``` val = s.cell_value(row, col) if isinstance(val, unicode): val = val.encode('utf8') this_row.append(val) ```
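That per-cell guard fits in one small helper. A self-contained sketch (written for Python 3, where the `unicode` type has become `str`, so treat it as an illustration of the rule rather than a drop-in for the Python 2 snippet above):

```python
def utf8_cell(val):
    """Encode text cells to UTF-8 bytes and pass other types through,
    mirroring the isinstance() guard above (Python 2's `unicode`
    corresponds to `str` here)."""
    if isinstance(val, str):
        return val.encode("utf8")
    return val
```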
You asked for explanations, but some of the phenomena are inexplicable without your help. (A) Strings in XLS files created by Excel 97 onwards are encoded in Latin1 if possible otherwise in UTF16LE. Each string carries a flag telling which was used. Earlier Excels encoded strings according to the user's "codepage". In any case, **xlrd produces unicode objects**. The file encoding is of interest only when the XLS file has been created by 3rd party software which either omits the codepage or lies about it. See the Unicode section up the front of the xlrd docs. (B) Unexplained phenomenon: This code: ``` bcw = csv.writer(bc,csv.excel,b.encoding) ``` causes the following error with Python 2.5, 2.6 and 3.1: `TypeError: expected at most 2 arguments, got 3` -- this is about what I'd expect given the docs on csv.writer; it's expecting a filelike object followed by either (1) nothing (2) a dialect or (3) one or more formatting parameters. You gave it a dialect, and csv.writer has no encoding argument, so splat. What version of Python are you using? Or did you not copy/paste the script that you actually ran? (C) Unexplained phenomena around traceback and what the actual offending data was: ``` "the_script.py", line 40, in <module> this_row.append(str(s.cell_value(row,col))) UnicodeEncodeError: 'ascii' codec can't encode character u'\xed' in position 5: ordinal not in range(128) ``` FIRSTLY, there's a str() in the offending code line that wasn't in the simplified script -- did you not copy/paste the script that you actually ran? In any case, you shouldn't use str in general -- you won't get the full precision on your floats; just let the csv module convert them. SECONDLY, you say """The value is seems to be getting hung up on is "516-777316" -- the text in the original Excel sheet is "516-7773167" (with a 7 on the end)""" --- it's difficult to imagine how the 7 gets lost off the end. 
I'd use something like this to find out exactly what the problematic data was: ``` try: str_value = str(s.cell_value(row, col)) except: print "row=%d col=%d cell_value=%r" % (row, col, s.cell_value(row, col)) raise ``` That %r saves you from typing `cell_value=%s ... repr(s.cell_value(row, col))` ... the repr() produces an unambiguous representation of your data. Learn it. Use it. How did you arrive at "516-777316"? THIRDLY, the error message is actually complaining about a unicode character u'\xed' at offset 5 (i.e. the sixth character). U+00ED is LATIN SMALL LETTER I WITH ACUTE, and there's nothing like that at all in "516-7773167" FOURTHLY, the error location seems to be a moving target -- you said in a comment on one of the solutions: "The error is on bcw.writerow." Huh? (D) Why you got that error message (with str()): `str(a_unicode_object)` attempts to convert the unicode object to a str object and in the absence of any encoding information uses ascii, but you have non-ascii data, so splat. Note that your object is to produce a csv file encoded in utf8, but your simplified script doesn't mention utf8 anywhere. (E) """... s.cell(row,col)) (e.g. s.cell instead of `s.cell_value)` the entire document writes without errors. The output isn't particularly desirable (text:u'516-7773167')""" That's happening because the csv writer is calling the `__str__` method of your Cell object, and this produces `<type>:<repr(value)>` which may be useful for debugging but as you say not so great in your csv file. (F) Alex Martelli's solution is great in that it got you going. However you should read the section on the Cell class in the xlrd docs: types of cell are text, number, boolean, date, error, blank and empty. If you have dates, you are going to want to format them as dates not numbers, so you can't use isinstance() (and you may not want the function call overhead anyway) ... 
this is what the `Cell.ctype` attribute and `Sheet.cell_type()` and `Sheet.row_types()` methods are for. (G) UTF8 is not Unicode. UTF16LE is not Unicode. UTF16 is not Unicode ... and the idea that individual strings would waste 2 bytes each on a UTF16 BOM is too preposterous for even MS to contemplate :-) (H) Further reading (apart from the xlrd docs): ``` http://www.joelonsoftware.com/articles/Unicode.html http://www.amk.ca/python/howto/unicode ```
Unicode to UTF8 for CSV Files - Python via xlrd
[ "", "python", "unicode", "encoding", "csv", "xlrd", "" ]
When rasterizing an SVG file, I would like to be able to set the width and height of the resulting PNG file. With the following code, only the canvas is set to the desired width and height; the actual image content is rendered at the original SVG file's dimensions in the top-left corner of the (500, 600) canvas. ``` import cairo import rsvg WIDTH, HEIGHT = 500, 600 surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT) ctx = cairo.Context(surface) svg = rsvg.Handle(file="test.svg") svg.render_cairo(ctx) surface.write_to_png("test.png") ``` What should I do to make the image content the same size as the cairo canvas? I tried ``` svg.set_property('width', 500) svg.set_property('height', 500) ``` but got ``` TypeError: property 'width' is not writable ``` Also, documentation for the librsvg Python binding seems to be extremely scarce; only some random code snippets on the cairo site.
There is a [resize function](http://library.gnome.org/devel/rsvg/stable/rsvg-Core-API.html#rsvg-handle-set-size-callback) in librsvg, but it is deprecated. Set up a [scale matrix](http://cairographics.org/manual/cairo-Transformations.html#cairo-scale) in Cairo to change the size of your drawing: * setup a scale transformation matrix on your cairo context * draw your SVG with the .render\_cairo() method * write your surface to PNG
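The only arithmetic the scale step needs is the pair of factors handed to `ctx.scale()`. A minimal sketch of that computation (plain Python, no Cairo required to run it; in real code the source dimensions would come from the loaded handle, e.g. the read-only width/height properties that the question's `TypeError` hints at):

```python
def scale_factors(svg_size, target_size):
    """Return (sx, sy) for ctx.scale() so a drawing of svg_size
    fills target_size."""
    (svg_w, svg_h), (target_w, target_h) = svg_size, target_size
    return target_w / svg_w, target_h / svg_h
```

Call `ctx.scale(sx, sy)` before `svg.render_cairo(ctx)`; use a single factor such as `min(sx, sy)` for both axes if the aspect ratio must be preserved.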
This is the code that works for me. It implements the answer by Luper above: ``` import rsvg import cairo # Load the svg data svg_xml = open('topthree.svg', 'r') svg = rsvg.Handle() svg.write(svg_xml.read()) svg.close() # Prepare the Cairo context img = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT) ctx = cairo.Context(img) # Scale whatever is written into this context # in this case 2x both x and y directions ctx.scale(2, 2) svg.render_cairo(ctx) # Write out into a PNG file png_io = StringIO.StringIO() img.write_to_png(png_io) with open('sample.png', 'wb') as fout: fout.write(png_io.getvalue()) ```
How to resize svg image file using librsvg Python binding
[ "", "python", "cairo", "librsvg", "" ]
After several years of following the bad practice handed down from 'architects' at my place of work and thinking that there must be a better way, I've recently been reading up around TDD and DDD and I think the principles and practices would be a great fit for the complexity of the software we write. However, many of the TDD samples I have seen call a method on the domain object and then test properties of the object to ensure the behaviour executed correctly. On the other hand, several respected people in the industry (Greg Young most noticeably so with his talks on CQRS) advocate fully encapsulating each domain object by removing all the 'getters'. My question therefore is: How does one test the functionality of a domain object if it is forbidden to retrieve its state? I believe I am missing something fundamental so please feel free to call me an idiot and enlighten me - any guidance would be greatly appreciated.
What you're describing is **state verification** wherein you Assert on the state of the domain object. There's a branch of TDD that is called **behavior verification** that utilizes Mock objects. Behavior verification allows you to specify which methods should be called and if you want, which methods aren't called. Look into this article by Martin Fowler for more details: [Mocks Aren't Stubs](http://www.martinfowler.com/articles/mocksArentStubs.html).
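As a concrete illustration of behavior verification (sketched with Python's standard-library `unittest.mock`; the service and mailer are invented for the example), the test asserts which collaborator method was called rather than inspecting any state through getters:

```python
from unittest.mock import Mock

class OrderService:
    """Toy domain service whose visible effect is an interaction,
    not a property value."""
    def __init__(self, mailer):
        self.mailer = mailer

    def place(self, order_id):
        self.mailer.send_confirmation(order_id)

mailer = Mock()
OrderService(mailer).place(42)
# Behavior verification: no getters needed, only the interaction.
mailer.send_confirmation.assert_called_once_with(42)
```

This is how a fully encapsulated domain object stays testable: the specification lives in the expected interactions, not in exposed state.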
OK, this answer is one year too late ;-) But when you want to test CQRS models, you can make assertions on the fired domain events instead of assertions on entity state. E.g., if you want to test whether calling customer.Rename("Foo") results in the correct behavior: instead of testing whether customer.Name equals "Foo", you test whether there is a pending CustomerRename event with the value "Foo" in your pending event store (in your UoW or in your entity's event list, depending on the implementation).
TDD, DDD and Encapsulation
[ "", "c#", "tdd", "domain-driven-design", "encapsulation", "getter", "" ]
While trying to figure out the best method to ping (ICMP) something from python, I came across these questions: * [How can I perform a ping or traceroute in python, accessing the output as it is produced?](https://stackoverflow.com/questions/1151897/how-can-i-perform-a-ping-or-traceroute-in-python-accessing-the-output-as-it-is-p) * [ping a site in python](https://stackoverflow.com/questions/316866/ping-a-site-in-python) * [How can I perform a ping or traceroute using native python?](https://stackoverflow.com/questions/1151771/how-can-i-perform-a-ping-or-traceroute-using-native-python) The answers generally boil down to "use this third party module with root privileges" or "use the system's ping command and parse the output". Of the native methods, [icmplib](http://code.activestate.com/recipes/409689/) and [M. Cowles and J. Diemer's ping.py](http://svn.pylucid.net/pylucid/CodeSnippets/ping.py) explicitly mention the need for root privileges, as does the [scapy](http://www.secdev.org/projects/scapy/) [manual](http://www.secdev.org/projects/scapy/doc/usage.html?highlight=Root%20privileges). So from that front, natively sending ICMP pings without special privileges seems impossible. The system ping command does manage somehow, but its man page doesn't shed any light on how. The [man page for icmp](http://www.manpagez.com/man/4/icmp/), on the other hand, seems to say it's possible: ``` Non-privileged ICMP ICMP sockets can be opened with the SOCK_DGRAM socket type without requiring root privileges. The synopsis is the following: socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP) Datagram oriented ICMP sockets offer a subset of the functionality available to raw ICMP sockets. Only ICMP request messages of the following types can be sent: ICMP_ECHO, ICMP_TSTAMP or ICMP_MASKREQ. ``` So it would seem that, at least according to icmp, it's allowed. So why is it that all the python tools are unable to do this?
Are the python tools too general and expect any work on privileged sockets to be privileged? Would it be possible to write a ping function in C that can ping without root privileges, and extend python with this? Has anyone done this? Have I just misunderstood the problem?
The ping program is installed setuid root. This allows any user to use the program, and still be able to open a raw socket. After it opens the raw socket, it typically drops root privs. You generally need a raw socket to do ICMP correctly, and raw sockets are usually restricted. So it's not really python's fault at all. Regarding the bit about ICMP above, apparently many implementations don't really support those combinations of flags well. So it is likely that most implementations just use the way they "know" works on most / all architectures.
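For reference, the ICMP_ECHO message such a socket sends is easy to build by hand. A sketch in pure Python (struct packing plus the standard ones'-complement internet checksum; no socket is opened, so this part needs no privileges at all):

```python
import struct

def icmp_checksum(data):
    """RFC 1071 ones'-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident, seq, payload=b"ping"):
    """ICMP type 8 (ICMP_ECHO), code 0, with the checksum filled in."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A correctly checksummed packet sums back to zero, which makes the helper self-checking. Actually sending it without root would still require the `socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP)` support quoted in the question.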
Here's how /sbin/ping "somehow manages" (on most Unix-y systems): ``` $ ls -l /sbin/ping -r-sr-xr-x 1 root wheel 68448 Jan 26 10:00 /sbin/ping ``` See? It's owned by `root` and has that crucial `s` bit in the permission -- setuserid. So, no matter what user is running it, ping **runs as root**. If you're using a BSD Kernel with the new "non-privileged ICMP sockets" it would be interesting to see what's needed to use that functionality to ping from Python (but that won't help any user that's on a less advanced kernel, of course).
python non-privileged ICMP
[ "", "python", "ping", "icmp", "" ]
What is the best way to dynamically P/Invoke unmanaged code from .NET? For example, I have a number of unmanaged DLL's with common C-style exports between them. I would like to take the path to a DLL and then P/Invoke a function based on the exported name. I would not know the DLL name until runtime. Basically, what is the equivalent of `LoadLibrary` and `GetProcAddress` for .NET? (I have existing code which uses these functions to accomplish the same goal, entirely in unmanaged code).
This article describes a typesafe managed wrapper for GetProcAddress that should help you out. <https://learn.microsoft.com/en-us/archive/blogs/jmstall/type-safe-managed-wrappers-for-kernel32getprocaddress>
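For comparison, the LoadLibrary/GetProcAddress pattern the question describes maps one-to-one onto Python's ctypes; a cross-language sketch of the same dynamic lookup (illustration only; on POSIX, `CDLL(None)` resolves symbols from the already-loaded process image, while on Windows you would pass a real DLL path):

```python
import ctypes

def get_proc(library, name, argtypes, restype):
    """dlopen/dlsym analogue of LoadLibrary/GetProcAddress: resolve
    `name` from `library` at runtime and attach a typed signature."""
    lib = ctypes.CDLL(library)   # LoadLibrary
    fn = getattr(lib, name)      # GetProcAddress
    fn.argtypes, fn.restype = argtypes, restype
    return fn

# Example: look up strlen from the C runtime by name at runtime.
strlen = get_proc(None, "strlen", [ctypes.c_char_p], ctypes.c_size_t)
```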
You can do this by P/Invoking into LoadLibrary and GetProcAddress, and then using [Marshal.GetDelegateForFunctionPointer](https://learn.microsoft.com/en-us/dotnet/api/system.runtime.interopservices.marshal.getdelegateforfunctionpointer). For details, see [this article](https://learn.microsoft.com/en-us/archive/blogs/junfeng/dynamic-pinvoke).
Dynamically P/Invoking a DLL
[ "", "c#", "dynamic", "pinvoke", "" ]
Is there a better way to format text from Twitter to link the hyperlinks, username and hashtags? What I have is working but I know this could be done better. I am interested in alternative techniques. I am setting this up as a HTML Helper for ASP.NET MVC. ``` using System; using System.Collections.Generic; using System.Text.RegularExpressions; using System.Web; using System.Web.Mvc; namespace Acme.Mvc.Extensions { public static class MvcExtensions { const string ScreenNamePattern = @"@([A-Za-z0-9\-_&;]+)"; const string HashTagPattern = @"#([A-Za-z0-9\-_&;]+)"; const string HyperLinkPattern = @"(http://\S+)\s?"; public static string TweetText(this HtmlHelper helper, string text) { return FormatTweetText(text); } public static string FormatTweetText(string text) { string result = text; if (result.Contains("http://")) { var links = new List<string>(); foreach (Match match in Regex.Matches(result, HyperLinkPattern)) { var url = match.Groups[1].Value; if (!links.Contains(url)) { links.Add(url); result = result.Replace(url, String.Format("<a href=\"{0}\">{0}</a>", url)); } } } if (result.Contains("@")) { var names = new List<string>(); foreach (Match match in Regex.Matches(result, ScreenNamePattern)) { var screenName = match.Groups[1].Value; if (!names.Contains(screenName)) { names.Add(screenName); result = result.Replace("@" + screenName, String.Format("<a href=\"http://twitter.com/{0}\">@{0}</a>", screenName)); } } } if (result.Contains("#")) { var names = new List<string>(); foreach (Match match in Regex.Matches(result, HashTagPattern)) { var hashTag = match.Groups[1].Value; if (!names.Contains(hashTag)) { names.Add(hashTag); result = result.Replace("#" + hashTag, String.Format("<a href=\"http://twitter.com/search?q={0}\">#{1}</a>", HttpUtility.UrlEncode("#" + hashTag), hashTag)); } } } return result; } } } ```
That is remarkably similar to the code I wrote that displays my Twitter status on my blog. The only further things I do are 1) looking up `@name` and replacing it with `<a href="http://twitter.com/name">Real Name</a>`; 2) multiple `@name`'s in a row get commas, if they don't have them; 3) Tweets that start with `@name(s)` are formatted "To @name:". I don't see any reason this can't be an effective way to parse a tweet - they are a very consistent format (good for regex) and in most situations the speed (milliseconds) is more than acceptable. Edit: [Here is the code for my Tweet parser.](http://codepaste.net/1yhne8) It's a bit too long to put in a Stack Overflow answer. It takes a tweet like: > @user1 @user2 check out this cool link I got from @user3: <http://url.com/page.htm#anchor> #coollinks And turns it into: ``` <span class="salutation"> To <a href="http://twitter.com/user1">Real Name</a>, <a href="http://twitter.com/user2">Real Name</a>: </span> check out this cool link I got from <span class="salutation"> <a href="http://www.twitter.com/user3">Real Name</a> </span>: <a href="http://site.com/page.htm#anchor">http://site.com/...</a> <a href="http://twitter.com/#search?q=%23coollinks">#coollinks</a> ``` It also wraps all that markup in a little JavaScript: ``` document.getElementById('twitter').innerHTML = '{markup}'; ``` This is so the tweet fetcher can run asynchronously as a JS and if Twitter is down or slow it won't affect my site's page load time.
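The three substitutions collapse naturally into regex callbacks. A compact sketch of the same linkifying in Python (illustration only; like the original it is naive about URLs that themselves contain '#' or '@', which a production version would need to guard against):

```python
import re
from urllib.parse import quote

def linkify_tweet(text):
    """Wrap URLs, @names and #hashtags in anchor tags."""
    text = re.sub(r"(http://\S+)", r'<a href="\1">\1</a>', text)
    text = re.sub(r"@(\w+)",
                  r'<a href="http://twitter.com/\1">@\1</a>', text)
    text = re.sub(r"#(\w+)",
                  lambda m: '<a href="http://twitter.com/search?q=%s">#%s</a>'
                            % (quote("#" + m.group(1)), m.group(1)),
                  text)
    return text
```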
I created helper method to shorten text to 140 chars with url included. You can set share length to 0 to exclude url from tweet. ``` public static string FormatTwitterText(this string text, string shareurl) { if (string.IsNullOrEmpty(text)) return string.Empty; string finaltext = string.Empty; string sharepath = string.Format("http://url.com/{0}", shareurl); //list of all words, trimmed and new space removed List<string> textlist = text.Split(' ').Select(txt => Regex.Replace(txt, @"\n", "").Trim()) .Where(formatedtxt => !string.IsNullOrEmpty(formatedtxt)) .ToList(); int extraChars = 3; //to account for the two dots ".." int finalLength = 140 - sharepath.Length - extraChars; int runningLengthCount = 0; int collectionCount = textlist.Count; int count = 0; foreach (string eachwordformated in textlist .Select(eachword => string.Format("{0} ", eachword))) { count++; int textlength = eachwordformated.Length; runningLengthCount += textlength; int nextcount = count + 1; var nextTextlength = nextcount < collectionCount ? textlist[nextcount].Length : 0; if (runningLengthCount + nextTextlength < finalLength) finaltext += eachwordformated; } return runningLengthCount > finalLength ? finaltext.Trim() + ".." : finaltext.Trim(); } ```
Formatting Twitter text (TweetText) with C#
[ "", "c#", "asp.net-mvc", "twitter", "" ]
***edit 2015*** This question and its answers are no longer relevant. It was asked before the advent of C# 6, which has the null propagating operator (?.), which obviates the hacky workarounds discussed in this question and subsequent answers. As of 2015, in C# you should now use Form.ActiveForm?.ActiveControl?.Name. --- I've been thinking about the null propagation problem in .NET, which often leads to ugly, repeated code like this: **Attempt #1 usual code:** ``` string activeControlName = null; var activeForm = Form.ActiveForm; if (activeForm != null) { var activeControl = activeForm.ActiveControl; if(activeControl != null) { activeControlName = activeControl.Name; } } ``` There have been a few discussions on StackOverflow about a Maybe<T> monad, or using some kind of "if not null" extension method: **Attempt #2, extension method:** ``` // Usage: var activeControlName = Form.ActiveForm .IfNotNull(form => form.ActiveControl) .IfNotNull(control => control.Name); // Definition: public static TReturn IfNotNull<TReturn, T>(this T instance, Func<T, TReturn> getter) where T : class { if (instance != null) return getter(instance); return null; } ``` I think this is better, however, there's a bit of syntactic messiness with the repeated "IfNotNull" and the lambdas. I'm now considering this design: **Attempt #3, Maybe<T> with extension method** ``` // Usage: var activeControlName = (from window in Form.ActiveForm.Maybe() from control in window.ActiveControl.Maybe() select control.Name).FirstOrDefault(); // Definition: public struct Maybe<T> : IEnumerable<T> where T : class { private readonly T instance; public Maybe(T instance) { this.instance = instance; } public T Value { get { return instance; } } public IEnumerator<T> GetEnumerator() { return Enumerable.Repeat(instance, instance == null ? 
0 : 1).GetEnumerator(); } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } public static class MaybeExtensions { public static Maybe<T> Maybe<T>(this T instance) where T : class { return new Maybe<T>(instance); } } ``` **My question is**: is this an evil abuse of extension methods? Is it better than the old usual null checks?
It's interesting that so many people independently pick the name `IfNotNull`, for this in C# - it must be the most sensible name possible! :) Earliest one I've found on SO: [Possible pitfalls of using this (extension method based) shorthand](https://stackoverflow.com/questions/123088/possible-pitfalls-of-using-this-extension-method-based-shorthand) My one (in ignorance of the above): [Pipe forwards in C#](https://stackoverflow.com/questions/336775/pipe-forwards-in-c/337846#337846) Another more recent example: [How to check for nulls in a deep lambda expression?](https://stackoverflow.com/questions/854591/how-to-check-for-nulls-in-a-deep-lambda-expression/854619#854619) There are a couple of reasons why the `IfNotNull` extension method may be unpopular. 1. Some people are adamant that an extension method should throw an exception if its `this` parameter is `null`. I disagree if the method name makes it clear. 2. Extensions that apply too broadly will tend to clutter up the auto-completion menu. This can be avoided by proper use of namespaces so they don't annoy people who don't want them, however. I've played around with the `IEnumerable` approach also, just as an experiment to see how many things I could twist to fit the Linq keywords, but I think the end result is less readable than either the `IfNotNull` chaining or the raw imperative code. I've ended up with a simple self-contained `Maybe` class with one static method (not an extension method) and that works very nicely for me. But then, I work with a small team, and my next most senior colleague is interested in functional programming and lambdas and so on, so he isn't put off by it.
Much as I'm a fan of extension methods, I don't think this is really helpful. You've still got the repetition of the expressions (in the monadic version), and it just means that you've got to explain `Maybe` to everyone. The added learning curve doesn't seem to have enough benefit in this case. The `IfNotNull` version at least manages to avoid the repetition, but I think it's still just a bit too longwinded without actually being clearer. Maybe one day we'll get a null-safe dereferencing operator... --- Just as an aside, my favourite semi-evil extension method is: ``` public static void ThrowIfNull<T>(this T value, string name) where T : class { if (value == null) { throw new ArgumentNullException(name); } } ``` That lets you turn this: ``` void Foo(string x, string y) { if (x == null) { throw new ArgumentNullException(nameof(x)); } if (y == null) { throw new ArgumentNullException(nameof(y)); } ... } ``` into: ``` void Foo(string x, string y) { x.ThrowIfNull(nameof(x)); y.ThrowIfNull(nameof(y)); ... } ``` There's still the nasty repetition of the parameter name, but at least it's tidier. Of course, in .NET 4.0 I'd use Code Contracts, which is what I'm meant to be writing about right now... Stack Overflow is great work avoidance ;)
Evil use of Maybe monad and extension methods in C#?
[ "", "c#", "extension-methods", "monads", "" ]
I have a field in a table `recipes` that has been inserted using `mysql_real_escape_string`. I want to count the number of line breaks in that field and order the records by that number. P.S. The field is called `Ingredients`. Thanks, everyone
This would do it: ``` SELECT *, LENGTH(Ingredients) - LENGTH(REPLACE(Ingredients, '\n', '')) as Count FROM Recipes ORDER BY Count DESC ``` The way I am getting the amount of linebreaks is a bit of a hack, however, and I don't think there's a better way. I would recommend keeping a column that has the amount of linebreaks if performance is a huge issue. For medium-sized data sets, though, I think the above should be fine. If you wanted to have a cache column as described above, you would do: ``` UPDATE Recipes SET IngredientAmount = LENGTH(Ingredients) - LENGTH(REPLACE(Ingredients, '\n', '')) ``` After that, whenever you are updating/inserting a new row, you could calculate the amounts (probably with PHP) and fill in this column before-hand. Or, if you're into that sort of thing, try out [triggers](http://dev.mysql.com/doc/refman/5.0/en/triggers.html).
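The `LENGTH`/`REPLACE` trick in the query above is just counting how many characters disappear when every newline is removed. For intuition, the same arithmetic as a tiny Python sketch (the real work stays in SQL; note also that MySQL's `LENGTH()` counts bytes, which is harmless here because both lengths come from the same string):

```python
def count_linebreaks(s):
    # LENGTH(s) - LENGTH(REPLACE(s, '\n', '')): each removed '\n'
    # shrinks the string by exactly one character.
    return len(s) - len(s.replace("\n", ""))
```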
I'm assuming a lot here, but from what I'm reading in your post, you could change your database structure a little bit, and both solve this problem and open your dataset up to more interesting uses. If you separate ingredients into their own table, and use a linking table to index which ingredients occur in which recipes, it'll be much easier to be creative with data manipulation. It becomes easier to count ingredients per recipe, to find similarities in recipes, to search for recipes containing sets of ingredients, etc. Also, your data would be more normalized and smaller (storing one global list of all ingredients vs. storing a set for each recipe). If you're using a single text entry field to enter ingredients for a recipe now, you could do something like break up that input by lines and use each line as an ingredient when saving to the database. You can use something like PHP's built-in `levenshtein()` or `similar_text()` functions to deal with misspelled ingredient names and keep the data as normalized as possible without having to hand-groom your [users'] data entry too much. This is just a suggestion; take it as you like.
Count line breaks in a field and order by
[ "", "php", "mysql", "" ]
I have a system tray UI in Java that requires a scheduled database poll. What is the best method for spawning a new thread and notifying the UI? I'm new to Swing and its threading model.
[`SwingWorker`](http://java.sun.com/javase/6/docs/api/javax/swing/SwingWorker.html) is the exact thing designed to do this. It allows you to run a task that won't block the GUI and then return a value and update the GUI when it is done. Java has a [great tutorial](http://java.sun.com/docs/books/tutorial/uiswing/concurrency/worker.html) on how to use `SwingWorker`. Basically do the database pull in the `doInBackground()` method. And, in the `done()` method, update your GUI.
As jinguy mentioned, SwingWorker should be the first place you look. [Wikipedia](http://en.wikipedia.org/wiki/SwingWorker), of all places, has some interesting examples that may be good to look at before you tackle the JavaDocs.
Swing: Passing a value back to the UI from a scheduled thread
[ "", "java", "swing", "concurrency", "swingworker", "" ]
The Amazon Product Advertising API (formerly Amazon Associates Web Service or Amazon AWS) has implemented a new rule which is by August 15th 2009 all web service requests to them must be signed. They have provided sample code on their site showing how to do this in C# using both REST and SOAP. The implementation that I’m using is SOAP. You can find the sample code [here](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2481&categoryID=14), I’m not including it because there is a fair amount. The problem I’m having is their sample code uses WSE 3 and our current code doesn’t use WSE. Does anyone know how to implement this update with just using the auto generated code from the WSDL? I’d like to not have to switch over to the WSE 3 stuff right now if I don’t have to since this update is more of a quick patch to hold us over until we can fully implement this in the current dev version (August 3rd they’re starting to drop 1 in 5 requests, in the live environment, if they aren’t signed which is bad news for our application). Here’s a snippet of the main portion that does the actual signing of the SOAP request. ``` class ClientOutputFilter : SoapFilter { // to store the AWS Access Key ID and corresponding Secret Key. String akid; String secret; // Constructor public ClientOutputFilter(String awsAccessKeyId, String awsSecretKey) { this.akid = awsAccessKeyId; this.secret = awsSecretKey; } // Here's the core logic: // 1. Concatenate operation name and timestamp to get StringToSign. // 2. Compute HMAC on StringToSign with Secret Key to get Signature. // 3. Add AWSAccessKeyId, Timestamp and Signature elements to the header. 
public override SoapFilterResult ProcessMessage(SoapEnvelope envelope) { var body = envelope.Body; var firstNode = body.ChildNodes.Item(0); String operation = firstNode.Name; DateTime currentTime = DateTime.UtcNow; String timestamp = currentTime.ToString("yyyy-MM-ddTHH:mm:ssZ"); String toSign = operation + timestamp; byte[] toSignBytes = Encoding.UTF8.GetBytes(toSign); byte[] secretBytes = Encoding.UTF8.GetBytes(secret); HMAC signer = new HMACSHA256(secretBytes); // important! has to be HMAC-SHA-256, SHA-1 will not work. byte[] sigBytes = signer.ComputeHash(toSignBytes); String signature = Convert.ToBase64String(sigBytes); // important! has to be Base64 encoded var header = envelope.Header; XmlDocument doc = header.OwnerDocument; // create the elements - Namespace and Prefix are critical! XmlElement akidElement = doc.CreateElement( AmazonHmacAssertion.AWS_PFX, "AWSAccessKeyId", AmazonHmacAssertion.AWS_NS); akidElement.AppendChild(doc.CreateTextNode(akid)); XmlElement tsElement = doc.CreateElement( AmazonHmacAssertion.AWS_PFX, "Timestamp", AmazonHmacAssertion.AWS_NS); tsElement.AppendChild(doc.CreateTextNode(timestamp)); XmlElement sigElement = doc.CreateElement( AmazonHmacAssertion.AWS_PFX, "Signature", AmazonHmacAssertion.AWS_NS); sigElement.AppendChild(doc.CreateTextNode(signature)); header.AppendChild(akidElement); header.AppendChild(tsElement); header.AppendChild(sigElement); // we're done return SoapFilterResult.Continue; } } ``` And that gets called like this when making the actual web service call ``` // create an instance of the serivce var api = new AWSECommerceService(); // apply the security policy, which will add the require security elements to the // outgoing SOAP header var amazonHmacAssertion = new AmazonHmacAssertion(MY_AWS_ID, MY_AWS_SECRET); api.SetPolicy(amazonHmacAssertion.Policy()); ```
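The three numbered steps in the comment above are plain crypto, independent of SOAP and WSE, so they can be sanity-checked outside the filter. Here is a small Python sketch of just that math — the key and operation name are made up, and the header-injection part is deliberately left out:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

def aws_soap_signature(operation, secret, when):
    # 1. StringToSign = operation name + timestamp (same format as the C#)
    timestamp = when.strftime("%Y-%m-%dT%H:%M:%SZ")
    to_sign = (operation + timestamp).encode("utf-8")
    # 2. HMAC-SHA256 over StringToSign with the secret key
    digest = hmac.new(secret.encode("utf-8"), to_sign, hashlib.sha256).digest()
    # 3. Base64-encode the raw digest
    return timestamp, base64.b64encode(digest).decode("ascii")
```

The resulting AWSAccessKeyId/Timestamp/Signature triple is what the filter then writes into the SOAP header.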
I ended up updating the code to use WCF since that's what it is in the current dev version I've been working on. Then I used some code that was posted on the Amazon forums, but made it a little easier to use. **UPDATE:** new, easier-to-use code that lets you still use the config settings for everything. In the previous code I posted, and what I've seen elsewhere, when the service object is created one of the constructor overrides is used to tell it to use HTTPS, give it the HTTPS URL and to manually attach the message inspector that will do the signing. The downfall to not using the default constructor is you lose the ability to configure the service via the config file. I've since redone this code so you can continue to use the default, parameterless, constructor and configure the service via the config file. The benefit of this is you don't have to recompile your code to use this, or make changes once deployed such as to maxStringContentLength (which is what caused this change to take place as well as discover the downfalls to doing it all in code). I also updated the signing part a bit so you can tell it what hashing algorithm to use as well as the regex for extracting the Action. These two changes are because not all web services from Amazon use the same hashing algorithm and the Action might need to be extracted differently. This means you can reuse the same code for each service type just by changing what's in the config file. 
``` public class SigningExtension : BehaviorExtensionElement { public override Type BehaviorType { get { return typeof(SigningBehavior); } } [ConfigurationProperty("actionPattern", IsRequired = true)] public string ActionPattern { get { return this["actionPattern"] as string; } set { this["actionPattern"] = value; } } [ConfigurationProperty("algorithm", IsRequired = true)] public string Algorithm { get { return this["algorithm"] as string; } set { this["algorithm"] = value; } } [ConfigurationProperty("algorithmKey", IsRequired = true)] public string AlgorithmKey { get { return this["algorithmKey"] as string; } set { this["algorithmKey"] = value; } } protected override object CreateBehavior() { var hmac = HMAC.Create(Algorithm); if (hmac == null) { throw new ArgumentException(string.Format("Algorithm of type ({0}) is not supported.", Algorithm)); } if (string.IsNullOrEmpty(AlgorithmKey)) { throw new ArgumentException("AlgorithmKey cannot be null or empty."); } hmac.Key = Encoding.UTF8.GetBytes(AlgorithmKey); return new SigningBehavior(hmac, ActionPattern); } } public class SigningBehavior : IEndpointBehavior { private HMAC algorithm; private string actionPattern; public SigningBehavior(HMAC algorithm, string actionPattern) { this.algorithm = algorithm; this.actionPattern = actionPattern; } public void Validate(ServiceEndpoint endpoint) { } public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { } public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { } public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime) { clientRuntime.MessageInspectors.Add(new SigningMessageInspector(algorithm, actionPattern)); } } public class SigningMessageInspector : IClientMessageInspector { private readonly HMAC Signer; private readonly Regex ActionRegex; public SigningMessageInspector(HMAC algorithm, string actionPattern) { Signer = algorithm; ActionRegex = new 
Regex(actionPattern); } public void AfterReceiveReply(ref Message reply, object correlationState) { } public object BeforeSendRequest(ref Message request, IClientChannel channel) { var operation = GetOperation(request.Headers.Action); var timeStamp = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ"); var toSignBytes = Encoding.UTF8.GetBytes(operation + timeStamp); var sigBytes = Signer.ComputeHash(toSignBytes); var signature = Convert.ToBase64String(sigBytes); request.Headers.Add(MessageHeader.CreateHeader("AWSAccessKeyId", Helpers.NameSpace, Helpers.AWSAccessKeyId)); request.Headers.Add(MessageHeader.CreateHeader("Timestamp", Helpers.NameSpace, timeStamp)); request.Headers.Add(MessageHeader.CreateHeader("Signature", Helpers.NameSpace, signature)); return null; } private string GetOperation(string request) { var match = ActionRegex.Match(request); var val = match.Groups["action"]; return val.Value; } } ``` To use this you don't need to make any changes to your existing code, you can even put the signing code in a whole other assembly if need be. You just need to set up the config section as so (note: the version number is important, without it matching the code will not load or run) ``` <system.serviceModel> <extensions> <behaviorExtensions> <add name="signer" type="WebServices.Amazon.SigningExtension, AmazonExtensions, Version=1.3.11.7, Culture=neutral, PublicKeyToken=null" /> </behaviorExtensions> </extensions> <behaviors> <endpointBehaviors> <behavior name="AWSECommerceBehaviors"> <signer algorithm="HMACSHA256" algorithmKey="..." 
actionPattern="\w:\/\/.+/(?&lt;action&gt;.+)" /> </behavior> </endpointBehaviors> </behaviors> <bindings> <basicHttpBinding> <binding name="AWSECommerceServiceBinding" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="Transport"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="https://ecs.amazonaws.com/onca/soap?Service=AWSECommerceService" behaviorConfiguration="AWSECommerceBehaviors" binding="basicHttpBinding" bindingConfiguration="AWSECommerceServiceBinding" contract="WebServices.Amazon.AWSECommerceServicePortType" name="AWSECommerceServicePort" /> </client> </system.serviceModel> ```
Hey Brian, I'm dealing with the same issue in my app. I'm using the WSDL generated code -- in fact I generated it again today to ensure the latest version. I found signing with an X509 certificate to be the most straightforward path. With a few minutes of testing under my belt, so far it appears to work okay. Essentially you change from: ``` AWSECommerceService service = new AWSECommerceService(); // ...then invoke some AWS call ``` To: ``` AWSECommerceService service = new AWSECommerceService(); service.ClientCertificates.Add(X509Certificate.CreateFromCertFile(@"path/to/cert.pem")); // ...then invoke some AWS call ``` Viper at bytesblocks.com posted [more details](http://www.byteblocks.com/post/2009/06/15/Secure-Amazon-Web-Service-Request.aspx), including how to obtain the X509 certificate Amazon generates for you. **EDIT**: as the discussion [here](http://developer.amazonwebservices.com/connect/thread.jspa?threadID=33012&tstart=0) indicates, this might not actually sign the request. Will post as I learn more. **EDIT**: this doesn't appear to sign the request at all. Instead, it appears to require an https connection, and uses the certificate for SSL client authentication. SSL client authentication is an infrequently used feature of SSL. It would have been nice if the Amazon product advertising API supported it as an authentication mechanism! Unfortunately that doesn't seem to be the case. The evidence is twofold: (1) it's not one of the [documented authentication schemes](http://docs.amazonwebservices.com/AWSECommerceService/latest/DG/index.html?RequestAuthenticationArticle.html), and (2) it doesn't matter what certificate you specify. Some confusion is added by Amazon still not enforcing authentication on requests even after the proclaimed August 15, 2009 deadline. This makes requests appear to pass correctly when the certificate is added, even though it might not add any value. Look at Brian Surowiec's answer for a solution that works. 
I'm leaving this answer here to document the appealing but apparently failed approach, as I can still see it discussed in blogs and Amazon forums.
How to sign an Amazon web service request in .NET with SOAP and without WSE
[ "", "c#", "wcf", "web-services", "soap", "amazon-web-services", "" ]
In C#, using the `Regex` class, how does one parse comma-separated values, where some values might be quoted strings themselves containing commas? ``` using System ; using System.Text.RegularExpressions ; class Example { public static void Main ( ) { string myString = "cat,dog,\"0 = OFF, 1 = ON\",lion,tiger,'R = red, G = green, B = blue',bear" ; Console.WriteLine ( "\nmyString is ...\n\t" + myString + "\n" ) ; Regex regex = new Regex ( "(?<=,(\"|\')).*?(?=(\"|\'),)|(^.*?(?=,))|((?<=,).*?(?=,))|((?<=,).*?$)" ) ; Match match = regex.Match ( myString ) ; int j = 0 ; while ( match.Success ) { Console.WriteLine ( j++ + " \t" + match ) ; match = match.NextMatch() ; } } } ``` Output (in part) appears as follows: ``` 0 cat 1 dog 2 "0 = OFF 3 1 = ON" 4 lion 5 tiger 6 'R = red 7 G = green 8 B = blue' 9 bear ``` However, *desired* output is: ``` 0 cat 1 dog 2 0 = OFF, 1 = ON 3 lion 4 tiger 5 R = red, G = green, B = blue 6 bear ```
Try this regex: ``` "[^"\r\n]*"|'[^'\r\n]*'|[^,\r\n]* ``` --- ``` Regex regexObj = new Regex(@"""[^""\r\n]*""|'[^'\r\n]*'|[^,\r\n]*"); Match matchResults = regexObj.Match(input); while (matchResults.Success) { Console.WriteLine(matchResults.Value); matchResults = matchResults.NextMatch(); } ``` Outputs: * cat * dog * "0 = OFF, 1 = ON" * lion * tiger * 'R = red, G = green, B = blue' * bear **Note:** This regex solution will work for your case; however, I recommend using a specialized library like [FileHelpers](http://www.filehelpers.com/).
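The same alternation can be exercised outside .NET too. Here is a quick Python sketch that runs the same pattern, throws away the zero-width matches it produces between fields, and strips the surrounding quotes (toy code: a genuinely empty field would also be thrown away):

```python
import re

# Same three alternatives: double-quoted field, single-quoted field, bare field.
PATTERN = re.compile(r'''"[^"\r\n]*"|'[^'\r\n]*'|[^,\r\n]*''')

def split_csv_line(line):
    tokens = [t for t in PATTERN.findall(line) if t]  # drop empty matches
    return [t[1:-1] if len(t) >= 2 and t[0] == t[-1] and t[0] in "\"'" else t
            for t in tokens]
```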
Why not heed the advice from the experts and [Don't roll your own CSV parser](http://secretgeek.net/csv_trouble.asp). Your first thought is, "I need to handle commas inside of quotes." Your next thought will be, "Oh, crap, I need to handle quotes inside of quotes. Escaped quotes. Double quotes. Single quotes..." It's a road to madness. Don't write your own. Find a library with an extensive unit test coverage that hits all the hard parts and has gone through hell for you. For .NET, use the free and open source [FileHelpers library](http://www.filehelpers.com/).
C#, regular expressions : how to parse comma-separated values, where some values might be quoted strings themselves containing commas
[ "", "c#", "regex", "csv", "" ]
I have been building a client / server app with Silverlight, web services, and polling. Apparently I missed the whole Duplex Communication thing when I was first researching this subject. At any rate, the [MSDN article](http://msdn.microsoft.com/en-us/library/cc645027(VS.95).aspx) I saw on the subject was promising. When researching the scalability, it appears as if there are conflicting *opinions* on the subject. silverlight.net/forums/t/89970.aspx - This thread seems to indicate that the duplex polling only supports a finite number of concurrent clients *on the server end*. dotnetaddict.dotnetdevelopersjournal.com/sl\_polling\_duplex.htm - This blog entry shows up in multiple places, so it muddies the waters. silverlight.net/forums/t/108396.aspx - This thread shows that I'm not the only one with this concern, but there are no answers in it. silverlight.net/forums/t/32858.aspx - Despite all the bad press, this thread seems to have an official response saying the 10-concurrent-connection limit is per *machine*. In short, does anyone have facts / benchmarks? Thanks :)
This is my understanding of this, but I haven't done tests. There is an inbuilt 10-connection limit on non-server operating systems (XP/Vista/Windows 7). On IIS 6 (XP) it will reject new connections once there are 10 in progress. On IIS7 (Vista/Windows 7) it will queue connections once there are 10 in progress. I think this means that more than 10 simultaneous connections are out of the question. On the server OS side (2003/2008), there is no connection limit. However, on IIS6 (2003) each long-running connection will take a thread from the threadpool, so you will run into a connection limit pretty quickly. On IIS7 (2008), async requests get suspended in a way that does not tie up a thread, so 1000s of connections should be possible.
Scalability of the WCF backend using the protocol in a web farm scenario is discussed at <http://tomasz.janczuk.org/2009/09/scale-out-of-silverlight-http-polling.html>.
Scalability of Duplex Polling with Silverlight / IIS
[ "", "c#", "wcf", "silverlight", "web-services", "" ]
Instead of the major.minor.build.revision format, I'd like to use date and time for version numbers. Something more like day.month.year.time. Is there a way to change the format of the AssemblyVersion attribute in AssemblyInfo.cs?
You can put whatever numbers you want in there (as long as they don't overflow the data types that contain them in memory) and call them whatever you wish. I am not sure why you would want to do this, however, as the standard format usually has some form of the date stored in the *build* field. For example, here is the assembly version format that we use where I work: > `5.1.729.1` This tells me that this is an assembly from version 5.1 of the library, built on July 29th, and was the first build of the day. Subsequent builds on the same day simply increment the *revision* field.
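To make that convention concrete, the build field `729` is just the month and day packed into one number. A throwaway sketch of the packing (in Python; the function name is invented):

```python
from datetime import date

def build_field(d):
    # 5.1.729.1 -> "729" encodes July 29th as month * 100 + day
    return d.month * 100 + d.day
```

Keep in mind that each part of an assembly version is a 16-bit value, so it tops out at 65535 — a full `20090729`-style date would overflow it, which is one reason schemes like this drop the year.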
The easiest approach is to write your own build task that handles this and then have the .csproj file call your task to update it with your default rules. There's an article on [using a custom MSBuild task to increment version numbers](http://weblogs.asp.net/bradleyb/archive/2005/12/02/432150.aspx) that could serve as a guide. We have done a similar thing here in the past and found it to work well. I don't believe there are any tools included in VS2005 for doing this, though.
Is it possible to change the version format of a C# project?
[ "", "c#", ".net", "visual-studio-2005", "assemblyinfo", "" ]
I changed the return type of one method in an interface in a library. Previously it was void, and I modified it to return an object. I did not change the code in my module, as I did not want to do anything with the returned object (it was for a different module). I compiled my module with the new library jar and ran unit tests, which ran fine, but when I pushed the jar along with the library jar to production and ran the module, I get java.lang.NoSuchMethodError for the method I changed on the interface. I modified the interface as well as the implementation; both are in the library jar, and I use Spring annotations to inject the implementation object in my module. What is the possible cause of this problem?
It sounds like you haven't recompiled everything, or haven't pushed all the recompiled jar files to prod. Additionally, you didn't specify whether you restarted the server. If you've got some sort of "hot restart" going on, you'll have to have configured all the classloaders very carefully to get it right. If you can restart the container completely, that may help.
In Java bytecode the return type of a method is part of the method descriptor (its signature at the JVM level), and it differentiates between two methods with the same name and the same parameters (this is something that you cannot do in the Java language). So your calling code somehow wasn't recompiled, and it still tries to call the old method which returns void.
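Concretely, the JVM links a call site against a descriptor string that encodes the parameter *and* return types — `()V` for a no-arg void method versus `()Ljava/lang/Object;` once the return type changes. A toy Python sketch of how such descriptors are formed (heavily simplified; see the class-file spec for the real grammar):

```python
# A few primitive type codes; anything else is treated as a reference type.
FIELD_TYPES = {"void": "V", "int": "I", "boolean": "Z"}

def descriptor(param_types, return_type):
    # Reference types become 'L' + internal (slash-separated) name + ';'
    def enc(t):
        return FIELD_TYPES.get(t, "L" + t.replace(".", "/") + ";")
    return "(" + "".join(enc(p) for p in param_types) + ")" + enc(return_type)
```

The stale module still asks for the `()V` variant; nothing with that descriptor exists anymore, so the JVM throws NoSuchMethodError.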
changing return type of method gives java.lang.NoSuchMethodError
[ "", "java", "" ]
If we look at the Stack Overflow website, we have votes. But the question is: what is the best way to store who has voted and who has not? Let's also simplify this even more and say that we can only vote Up, and we can only remove the Up vote. I was thinking of having the table in such a form > `question - Id(INT) | userId(INT) | title(TEXT) | vote(INT) | ratedBy(TEXT)` The rest is self-explanatory, but ratedBy is a list of comma-separated Id values of the Users. I was thinking to read the **ratedBy** and compare it with the **userId** of the currently logged-in User. If he doesn't exist in the **ratedBy** he can vote Up; otherwise he can remove his vote, which in turn will remove the value from **ratedBy**
I think making another table, "vote", is better. The relationship between users and votes is n-to-n, therefore a new table should be created. It should be something like this: ``` question id (int) | user id (int) | permanent (bool) | timestamp (datetime) ``` The permanent field can be used to make votes stay after a given time, as SO does. Other fields may be added according to the desired features. As each row will take at least 16B, you can have up to 250M rows in the table before the table uses 4GB (the FAT32 limit, if there is one file per table, which is the case for MyISAM and InnoDB). Also, as Matthew Scharley points out in a comment, don't load all votes at once into memory (e.g. by fetching the whole table in a result set). You can always use the LIMIT clause to narrow your query results.
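Because the question only allows an up-vote that can later be removed, each (question, user) pair is simply present or absent in that vote table — it behaves like a set keyed on the composite key. An in-memory Python sketch of the toggle logic (illustration only; in production these operations become an INSERT/DELETE against the table above):

```python
votes = set()  # rows of the vote table: (question_id, user_id)

def toggle_upvote(question_id, user_id):
    """Add the vote if missing, remove it if present; return the new state."""
    key = (question_id, user_id)
    if key in votes:
        votes.remove(key)   # user already voted -> remove the up-vote
        return False
    votes.add(key)          # first vote by this user on this question
    return True

def score(question_id):
    return sum(1 for q, _ in votes if q == question_id)
```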
A new table: Article ID | User ID | Rating Where Article ID and User ID make up the composite key, and rating would be 1, indicating upvote, -1 for a downvote and 0 for a removed vote (or just remove the row).
Rating System in PHP and MySQL
[ "", "php", "mysql", "architecture", "" ]
I'm fairly new to using JSON (as opposed to XML) and am currently working purely with Javascript to digest, parse and display my returned JSON data. I'm using the JSON2.js library and am getting back some valid JSON data representing a fairly simple nested list: ``` { "node":{ "class":"folder", "title":"Test Framework", "node":{ "class":"folder", "title":"Item 1", "node":{ "class":"folder", "title":"Item 1.1", "node":{ "class":"file", "title":"Item 1.1.a" } }, "node":{ "class":"folder", "title":"Item 1.2", "node":{ "class":"file", "title":"Item 1.2.a" }, "node":{ "class":"file", "title":"Item 1.2.b" }, "node":{ "class":"file", "title":"Item 1.2.c" } }, "node":{ "class":"folder", "title":"Item 1.3", "node":{ "class":"folder", "title":"Item 1.3.a", "node":{ "class":"file", "title":"Item 1.3.a.i" }, "node":{ "class":"file", "title":"Item 1.3.a.ii" } } } }, "node":{ "class":"folder", "title":"Item 2", "node":{ "class":"file", "title":"item 2.a" }, "node":{ "class":"file", "title":"Item 2.b" } } } } ``` Does anyone have any pointers for a quick way to turn that lot into a UL with all of the nested elements? It would be cool as well if the "class" element that's in the JSON could be used as the class for each LI. Any help is much appreciated. Thanks, Dave.
Your json is unsuited to your task. Some objects have several properties with the same name ("node"), so they are overriding one another. You have to use arrays of nodes instead. Here is a working data structure and the functions that can turn it into a nested list: ``` <!DOCTYPE HTML PUBLIC"-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title></title> <script type="text/javascript"> function parseNodes(nodes) { // takes a nodes array and turns it into a <ol> var ol = document.createElement("OL"); for(var i=0; i<nodes.length; i++) { ol.appendChild(parseNode(nodes[i])); } return ol; } function parseNode(node) { // takes a node object and turns it into a <li> var li = document.createElement("LI"); li.innerHTML = node.title; li.className = node.class; if(node.nodes) li.appendChild(parseNodes(node.nodes)); return li; } window.jsonData = [{ "class": "folder", "title": "Test Framework", "nodes": [{ "class": "folder", "title": "Item 1", "nodes": [{ "class": "folder", "title": "Item 1.1", "nodes": [{ "class": "file", "title": "Item 1.1.a" }] }, { "class": "folder", "title": "Item 1.2", "nodes": [{ "class": "file", "title": "Item 1.2.a" }, { "class": "file", "title": "Item 1.2.b" }, { "class": "file", "title": "Item 1.2.c" }] }, { "class": "folder", "title": "Item 1.3", "nodes": [{ "class": "folder", "title": "Item 1.3.a", "nodes": [{ "class": "file", "title": "Item 1.3.a.i" }, { "class": "file", "title": "Item 1.3.a.ii" }] }] }] }, { "class": "folder", "title": "Item 2", "nodes": [{ "class": "file", "title": "item 2.a" }, { "class": "file", "title": "Item 2.b" }] }] }]; </script> </head> <body> <input type="button" onclick="document.body.appendChild(parseNodes(jsonData))" value="go" /> </body> </html> ``` And I can add this css to have the items numberings match the node titles :) ``` <style type="text/css"> ol { list-style-type: none } ol ol { list-style-type: decimal } ol ol ol { list-style-type: decimal } ol ol ol ol { 
list-style-type: lower-alpha } ol ol ol ol ol { list-style-type: lower-roman } </style> ``` [See it in action.](http://jsbin.com/oqinis/2/watch)
Here's some fairly simple code that creates a `<ul>` element for each object, and a `<li>` element for each portion. You can add a quick check to test the `child` variable against things like `"class"` if you want to add a special case. ``` function jsonToHtmlList(json) { return objToHtmlList(JSON.parse(json)); } function objToHtmlList(obj) { if (obj instanceof Array) { var ol = document.createElement('ol'); for (var child in obj) { var li = document.createElement('li'); li.appendChild(objToHtmlList(obj[child])); ol.appendChild(li); } return ol; } else if (obj instanceof Object && !(obj instanceof String)) { var ul = document.createElement('ul'); for (var child in obj) { var li = document.createElement('li'); li.appendChild(document.createTextNode(child + ": ")); li.appendChild(objToHtmlList(obj[child])); ul.appendChild(li); } return ul; } else { return document.createTextNode(obj); } } ``` This won't do exactly what you want, because your JSON doesn't make sense. Objects in JavaScript, and therefore JSON, are maps, and so you can't have more than one child with the same name. You'll need to turn your multiple "node"s into an array, as Cédric points out.
Turning nested JSON into an HTML nested list with Javascript
[ "", "javascript", "json", "list", "nested", "" ]
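The chosen answer above builds the nested list recursively: one `<ol>` per `nodes` array, one `<li>` per node. A minimal Python sketch of the same recursion, emitting an HTML string instead of DOM nodes (the function name and sample data are illustrative, following the answer's data shape):

```python
def nodes_to_html(nodes):
    """Turn a list of {"class", "title", "nodes"} dicts into nested <ol> HTML."""
    items = []
    for node in nodes:
        children = node.get("nodes")
        inner = nodes_to_html(children) if children else ""
        items.append('<li class="%s">%s%s</li>' % (node["class"], node["title"], inner))
    return "<ol>%s</ol>" % "".join(items)

data = [{"class": "folder", "title": "Item 1",
         "nodes": [{"class": "file", "title": "Item 1.a"}]}]
print(nodes_to_html(data))
# <ol><li class="folder">Item 1<ol><li class="file">Item 1.a</li></ol></li></ol>
```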
The problem is this: ``` t0: Insert is made into my database t1: Delete is executed t2: Full backup is made t3: Transaction log backup is made ``` How can i recover the deleted record after t3 (which is now)? I want the database in a state between t0 and t1. The log level was Full. Edit: I have already run DBCC LOG(DB-Name, 3), but the log seems to be truncated. There are only 10 items although there must be thousands.
To replay the log you always start from a full backup, apply the differentials and then the log backups. The time line is always forward from the full backup, never backwards. So to get to the state after t0 (i.e. recover the deleted record) you start with a full backup that was taken *before* t0 and then apply the log backups, stopping at t0 (the `STOPAT` restore option).
Although not an ideal solution you can try restoring database backup into separate database, compare databases and make needed changes manually. Another way is to use a third party transaction log reader such as [ApexSQL Log](http://www.apexsql.com/sql_tools_log.aspx).
View all transaction logs in SQL Server 2008
[ "", "sql", "database-restore", "database-backups", "transaction-log", "" ]
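The chosen answer's timeline rule — restore the last full backup taken before the target time, then replay later log backups forward — can be sketched as a small selection function. This is an illustration of the ordering only (the times and backup kinds are made up), not a restore script:

```python
def recovery_chain(backups, target_time):
    """Pick the restore sequence for point-in-time recovery.

    backups: list of (time, kind) tuples, kind in {"full", "log"}.
    Returns the latest full backup at or before target_time plus every
    later log backup, in order -- the timeline only replays forward.
    """
    fulls = [b for b in backups if b[1] == "full" and b[0] <= target_time]
    if not fulls:
        raise ValueError("no full backup precedes the target time")
    base = max(fulls)
    logs = sorted(b for b in backups if b[1] == "log" and b[0] > base[0])
    return [base] + logs

# Mirrors the question: the delete happened before the full backup at t=5,
# so recovery must start from the earlier full backup at t=1.
chain = recovery_chain([(1, "full"), (2, "log"), (5, "full"), (6, "log")], 3)
print(chain)  # [(1, 'full'), (2, 'log'), (6, 'log')]
```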
I've asked a couple of questions here but am still having issues. I'd appreciate it if you could tell me what I am doing wrong in my code. I run the code below from an ASP.Net page and get "Cannot Access a Closed Stream". ``` var doc = new Document(); MemoryStream memoryStream = new MemoryStream(); PdfWriter.GetInstance(doc, memoryStream); doc.Open(); doc.Add(new Paragraph("First Paragraph")); doc.Add(new Paragraph("Second Paragraph")); doc.Close(); //if I remove this line the email attachment is sent but with 0 bytes MailMessage mm = new MailMessage("username@gmail.com", "username@gmail.com") { Subject = "subject", IsBodyHtml = true, Body = "body" }; mm.Attachments.Add(new Attachment(memoryStream, "test.pdf")); SmtpClient smtp = new SmtpClient { Host = "smtp.gmail.com", Port = 587, EnableSsl = true, Credentials = new NetworkCredential("username@gmail.com", "my_password") }; smtp.Send(mm); //the "Cannot Access a Closed Stream" error is thrown here ``` Thanks!!! ## EDIT: Just to help somebody looking for the answer to this question, the code to send a pdf file attached to an email without having to physically create the file is below (thanks to Ichiban and Brianng): ``` var doc = new Document(); MemoryStream memoryStream = new MemoryStream(); PdfWriter writer = PdfWriter.GetInstance(doc, memoryStream); doc.Open(); doc.Add(new Paragraph("First Paragraph")); doc.Add(new Paragraph("Second Paragraph")); writer.CloseStream = false; doc.Close(); memoryStream.Position = 0; MailMessage mm = new MailMessage("username@gmail.com", "username@gmail.com") { Subject = "subject", IsBodyHtml = true, Body = "body" }; mm.Attachments.Add(new Attachment(memoryStream, "filename.pdf")); SmtpClient smtp = new SmtpClient { Host = "smtp.gmail.com", Port = 587, EnableSsl = true, Credentials = new NetworkCredential("username@gmail.com", "password") }; smtp.Send(mm); ```
Have you tried: ``` PdfWriter writer = PdfWriter.GetInstance(doc, memoryStream); // Build pdf code... writer.CloseStream = false; doc.Close(); // Build email memoryStream.Position = 0; mm.Attachments.Add(new Attachment(memoryStream, "test.pdf")); ``` If my memory serves me correctly, this solved a similar problem in a previous project. See <http://forums.asp.net/t/1093198.aspx>
I tried the code posted by [brianng](https://stackoverflow.com/questions/1196059/itextsharp-sending-in-memory-pdf-in-an-email-attachment/1196125#1196125) and it worked. Just change the top of the code to this: ``` var doc = new Document(); MemoryStream memoryStream = new MemoryStream(); PdfWriter writer = PdfWriter.GetInstance(doc, memoryStream); //capture the object doc.Open(); doc.Add(new Paragraph("First Paragraph")); doc.Add(new Paragraph("Second Paragraph")); writer.CloseStream = false; //set the closestream property doc.Close(); //close the document without closing the underlying stream memoryStream.Position = 0; /* remainder of your code stays the same*/ ```
iTextSharp - Sending in-memory pdf in an email attachment
[ "", "c#", "email", "pdf", "itext", "" ]
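The fix in both answers is the same two-step idea: keep the underlying stream open when the document is closed, then rewind it before attaching. A rough Python analog using `io.BytesIO` and the stdlib `email` package (the bytes written here are a stand-in for the real PDF writer; nothing is actually a PDF):

```python
import io
from email.message import EmailMessage

buf = io.BytesIO()
buf.write(b"%PDF-1.4 fake document bytes")  # stand-in for the PDF writer
buf.seek(0)  # rewind first, or the attachment is read as empty

msg = EmailMessage()
msg["Subject"] = "subject"
msg.add_attachment(buf.read(), maintype="application",
                   subtype="pdf", filename="test.pdf")
```

The `buf.seek(0)` line plays the same role as `memoryStream.Position = 0` in the accepted C# fix.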
Just idly curious why the compare function for std::sort can't be a static member? I have a small little helper class foo that is declared and defined in a header, but now I have to create a foo.cpp file for the implementation of cmp() so it isn't multiply defined. I also have to think of a suitably decorated name so fooCmp() doesn't clash with any other cmp(). Because it has no access to any member variables any compare operation that needs access to some other value (e.g. sort by distance from foo.bar) needs the complex bind2nd call.
I am not sure what you are complaining about: ``` std::sort(begin,end) // use operator< std::sort(begin,end,order) // Where order is a functor ``` So order can be: * A function * A static member function * Or an object that behaves like a function. The following works for me: ``` #include <algorithm> #include <vector> class X { public: static bool diff(X const& lhs,X const& rhs) { return true;} }; int main() { std::vector<X> a; std::sort(a.begin(),a.end(),&X::diff); } ``` But if the class has some natural ordering then why not just define the operator< for the class. This will allow you access to the members and will behave nicely for most of the standard containers/algorithms that need to define an ordering. ``` #include <algorithm> #include <vector> class X { public: bool operator<(X const& rhs) const { return true;} }; int main() { std::vector<X> a; std::sort(a.begin(),a.end()); } ```
If you're concerned with a multiply defined compare function, try declaring the function with `static` linkage. Then the scope of the function does not extend past the compilation unit where it is found. That said, your compare "function" need not be a function at all, but can instead be a function *object*. A function object is very much like a function but is implemented as an `operator()` that takes the appropriate parameters within a regular class. Since it's a regular class, you can pass constructor parameters to the class. Here is a simple example: ``` #include <iostream> #include <vector> #include <algorithm> using namespace std; class comparator { public: bool operator()(int a, int b) { return a < b; } }; int main(int, char *[]) { vector<int> a; a.push_back(1); a.push_back(3); a.push_back(2); sort(a.begin(), a.end(), comparator()); for (size_t i = 0; i < a.size(); ++i) cout << a[i] << ' '; cout << endl; } ```
Why isn't stl compare function a member?
[ "", "c++", "stl", "comparison", "name-decoration", "" ]
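Both answers boil down to: the ordering can be a free function, a static member function, or a function object. A Python rendering of the same idea using `functools.cmp_to_key` — the class and names are illustrative, not from either answer:

```python
from functools import cmp_to_key

class X:
    def __init__(self, v):
        self.v = v

    @staticmethod
    def diff(lhs, rhs):
        # three-way comparator, like the static member passed to std::sort
        return (lhs.v > rhs.v) - (lhs.v < rhs.v)

class ByValue:
    """Function object: a class instance usable as the ordering."""
    def __call__(self, lhs, rhs):
        return X.diff(lhs, rhs)

a = [X(3), X(1), X(2)]
by_static = sorted(a, key=cmp_to_key(X.diff))
by_functor = sorted(a, key=cmp_to_key(ByValue()))
print([x.v for x in by_static])   # [1, 2, 3]
```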
I will (hopefully) be taking on a large project to migrate an enterprise Visual FoxPro application to C# 3.0 on .NET 3.5. I know that this application has a huge number of settings that have an impact on everything from multi-user/multi-site configurations to report properties. There are at least 150 different settings that are currently globally scoped. The settings are currently just stored as bits in the application's database, which prevents them from being changed at the user level, since all instances share the same db. My question is, do you know of any way to handle the storage of these settings that would allow them to be changed per user, without sacrificing performance? It would also need to be stored in a way that allows the values to be changed while the application is running. Any ideas would be appreciated.
The standard Settings.Settings file I believe offers you this functionality, including application or user scoped variables. Although I'm not 100% sure if changes are picked up without restarting the application. See here for more info: [MSDN:Using Settings in C#](http://msdn.microsoft.com/en-us/library/aa730869(VS.80).aspx)
If you want to go "enterprise", you can try having a setting definitions table, paired with a user settings table. The setting definitions would have the PK defined by a domain column (for UI settings, connection settings, language settings and so on...), and a setting identifier. A third column would define the default/global value. The user settings would have the PK set to setting definitions' PK + user id and a setting value column, varchar(x). If the application is language aware, language id columns should be added to both tables. Saving user changes to the user settings table should be trivial. Being notified when global settings change is a bit more complicated. EDIT: one thing to keep in mind is to always fallback to some default: global setting / default language.
Handling a large number of settings in a C# WinForms application?
[ "", "c#", ".net", "migration", "visual-foxpro", "foxpro", "" ]
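The second answer's two-table design (setting definitions with a global default, plus per-user overrides) reduces to a lookup with fallback. A toy Python sketch of that resolution order — all keys and values here are invented:

```python
# (domain, name) -> global default, from the "setting definitions" table
GLOBAL_DEFAULTS = {("ui", "theme"): "light", ("report", "page_size"): "A4"}
# (user, domain, name) -> override, from the "user settings" table
USER_SETTINGS = {("alice", "ui", "theme"): "dark"}

def get_setting(user, domain, name):
    """Per-user value wins; otherwise fall back to the global default."""
    return USER_SETTINGS.get((user, domain, name),
                             GLOBAL_DEFAULTS[(domain, name)])
```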
Env: .NET 1.1 I got into this situation where I need to give out a URL that someone could use to redirect users to our page. When they redirect, they also need to tell us what message to display on the page. Initially I thought of something like this. * <http://example.com/a.aspx?reason=100> * <http://example.com/a.aspx?reason=101> * ... * <http://example.com/a.aspx?reason=115> So when we get this URL, based on 'reason' we can display a different message. But the problem turns out to be that they cannot send any query parameters at all. They want 15 different URLs since they can't send query params. It doesn't make any sense to me to create 15 pages just to display a message. Any smart ideas that keep one URL and pass the 'reason' through some other means? EDIT: Options I'm thinking of based on the answers: try HttpRequest.PathInfo, or, as a second option, have an HTTP handler read the path (HttpContext.Request.Path) and act based on it. Of course I will then have some 15 entries like this in web.config. `<add verb="*" path="reason1.ashx" type="WebApplication1.Class1, WebApplication1" />` `<add verb="*" path="reason2.ashx" type="WebApplication1.Class1, WebApplication1" />` Does that look clean?
Thoughts: * Path Info: <http://msdn.microsoft.com/en-us/library/system.web.httprequest.pathinfo.aspx> urls would be <http://example.com/a.aspx/reason100>, <http://example.com/a.aspx/reason101>, etc * URL Rewriting : <http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx> urls would be <http://example.com/a/reason/100.aspx>, <http://example.com/a/reason/100.aspx>, etc. edit: both these approaches involve only one aspx page, but multiple urls pointing to it.
Assuming IIS (I run this on IIS 6 but I expect it would run on 5 as well) you could install [IIRF](http://iirf.codeplex.com/). You could then configure different "friendly" urls a la Apache's mod-rewrite and send them as query params to a single as\*x page.
URL with no query parameters - How to distinguish
[ "", "c#", ".net", ".net-1.1", "" ]
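With the PathInfo approach from the chosen answer, the single page just parses the trailing path segment into a reason code. A small Python sketch of that dispatch — the paths and messages are made up for illustration:

```python
MESSAGES = {100: "Session expired.", 101: "Access denied."}

def message_for_path(path):
    """Resolve /a.aspx/reason<N> to a display message, else None."""
    prefix = "/a.aspx/reason"
    if not path.startswith(prefix):
        return None
    try:
        return MESSAGES.get(int(path[len(prefix):]))
    except ValueError:  # suffix wasn't a number
        return None
```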
I'm trying to port a shell script to the much more readable python version. The original shell script starts several processes (utilities, monitors, etc.) in the background with "&". How can I achieve the same effect in python? I'd like these processes not to die when the python script completes. I am sure it's related to the concept of a daemon somehow, but I couldn't find how to do this easily.
**Note**: This answer is less current than it was when posted in 2009. Using the `subprocess` module shown in other answers is now recommended [in the docs](https://docs.python.org/2/library/os.html?highlight=os#os.spawnl) > (Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions.) --- If you want your process to start in the background you can either use `system()` and call it in the same way your shell script did, or you can `spawn` it: ``` import os os.spawnl(os.P_DETACH, 'some_long_running_command') ``` (or, alternatively, you may try the less portable `os.P_NOWAIT` flag). See the [documentation here](https://docs.python.org/2/library/os.html#os.spawnl).
While [jkp](https://stackoverflow.com/questions/1196074/starting-a-background-process-in-python/1196122#1196122)'s solution works, the newer way of doing things (and the way the documentation recommends) is to use the `subprocess` module. For simple commands it's equivalent, but it offers more options if you want to do something complicated. Example for your case: ``` import subprocess subprocess.Popen(["rm","-r","some.file"]) ``` This will run `rm -r some.file` in the background. Note that calling `.communicate()` on the object returned from `Popen` will block until it completes, so don't do that if you want it to run in the background: ``` import subprocess ls_output=subprocess.Popen(["sleep", "30"]) ls_output.communicate() # Will block for 30 seconds ``` See the documentation [here](http://docs.python.org/library/subprocess.html#module-subprocess). Also, a point of clarification: "Background" as you use it here is purely a shell concept; technically, what you mean is that you want to spawn a process without blocking while you wait for it to complete. However, I've used "background" here to refer to shell-background-like behavior.
Start a background process in Python
[ "", "python", "process", "daemon", "" ]
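Following the accepted direction — `subprocess` rather than `os.spawnl` — here is a sketch of launching a child detached from the parent's session on POSIX via `start_new_session`, so it is not tied to the parent's process group. (Full daemonization also involves redirecting stdio and changing the working directory; this only shows the launch.)

```python
import subprocess
import sys

# Run a child in its own session (POSIX): the rough analog of "&" plus
# detaching, so it is not killed with the parent's process group.
child = subprocess.Popen(
    [sys.executable, "-c", "print('worker running')"],
    start_new_session=True,
    stdout=subprocess.DEVNULL,
)
# Popen returns immediately; wait() here only to keep the example tidy.
child.wait()
```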
I'm doing a small multi-threaded app that uses asynchronous TCP sockets, but I will get to the point: I'm using a custom event to read a value from a form and the delegate used by the event returns a string when finished. My question here is: is that correct? is it OK to return values from the events? or is there a better way to do this? (like using a simple delegate to the form to read the values)
It's often awkward to return values from events. In practice, I've found it much easier to include a writable property on a set of custom EventArgs that is passed to the event, and then checked after the event fires -- similar to Cancel property of the WinForms FormClosing event.
I don't think it's a good idea... events are basically multicast delegates, so there can be multiple handlers. Which return value will you take in that case ?
Return a value from an Event -- is there a Good Practice for this?
[ "", "c#", "multithreading", "events", "delegates", "" ]
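The accepted suggestion — a writable property on custom EventArgs instead of a return value — sketches naturally in Python. Every handler mutates the same shared args object, so the "which return value wins?" problem from the second answer disappears. Names here are illustrative:

```python
class LogoutEventArgs:
    """Mutable args passed to handlers, like FormClosing's Cancel."""
    def __init__(self):
        self.cancel = False

class Event:
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def fire(self, args):
        # Multicast: every handler sees (and may mutate) the same args.
        for handler in self._handlers:
            handler(args)

closing = Event()
closing.subscribe(lambda e: None)                          # doesn't object
closing.subscribe(lambda e: setattr(e, "cancel", True))    # vetoes

args = LogoutEventArgs()
closing.fire(args)
print(args.cancel)  # True
```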
I'm not sure it's really a "polymorphism" question but anyway... So I have an abstract class and 2 other classes that inherit from it. Some code in the base class uses a value that differs between the classes that extend it. What's the best way to do that? Override a "get" method? Change the architecture a little to call a function from the base class to the extended class that would pass the value... Here's an example (it won't compile, just a little example on how I did it) : ``` public abstract class BaseAbstract { ... some code .... // Here "listName" would change depending on the extending class protected List<Items> getAllItems(String listName) { Web.getList(listName); ... some code ... } protected abstract List<Items> getAllItems(); } public class NewClass : BaseAbstract { protected override List<Items> getAllItems() { return getAllItems("ListNameOfNewClass"); } } ``` That's how I did it, but it doesn't feel really clear and easy to maintain IMO. Any idea or comment welcome!
I'd probably opt for something like this: ``` public abstract class Base { protected abstract string ListName { get; } public List<Item> GetItems() { return Web.GetList(ListName); } } public class Child : Base { protected override string ListName { get { return "MyListName"; } } } ```
[Prefer Composition Over Inheritance.](https://stackoverflow.com/questions/49002/prefer-composition-over-inheritance) ``` public class Base { private ItemListGetter itemListGetter; // assignment/initialization is up to you private List<Items> getAllItems() { // this can now be inlined return itemListGetter.getList(); } } ``` If this is the only dimension of variation between your subclasses, there's no longer a need for inheritance here; just establish the right ItemListGetter as needed.
C# basic Polymorphism question
[ "", "c#", "polymorphism", "class-design", "" ]
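The accepted answer's shape — shared logic in the base class, an abstract read-only property supplying the varying value — maps directly onto Python's `abc` module. A sketch with illustrative names:

```python
from abc import ABC, abstractmethod

class Base(ABC):
    @property
    @abstractmethod
    def list_name(self):
        """Each subclass supplies its own name."""

    def get_items(self):
        # Shared logic lives in the base; only the name varies.
        return "items from %s" % self.list_name

class Child(Base):
    @property
    def list_name(self):
        return "MyListName"
```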
Suppose I have a definition for a door: ``` class Door { public void Lock() { // lock the door } } ``` This appeared to make sense to me, at least for awhile. But now, I'm not so sure. If I had a Person object that wanted to lock a Door, he would call aDoor.Lock(). But in real life, we do not lock doors by telling the door to lock itself. It seems like a more accurate model of the situation would be the person being able to directly modify the state of aDoor, provided he has sufficient power to lock doors. For example, aCat should not be able to set aDoor.IsLocked = true. I could see how to do this with properties, if they supported parameters: ``` class Person { public void LockDoor(Door door) { door.IsLocked(this) = true; } } class Door { bool isLocked; public bool IsLocked(Person person) { set { if(person != null) // ensure there is a real person trying to lock the door { this.isLocked = value; } } } } static void Main() { Person personFromThinAir = new Person(); Door doorFromThinAir = new Door(); personFromThinAir.LockDoor(doorFromThinAir); } ``` Instead, what we can do is this: ``` class Person { public void LockDoor(Door door) { door.SetLocked(this, true); } } class Door { bool isLocked; public void SetLocked(Person person, bool locked) { if(person != null) { this.isLocked = locked; } } } ``` Obviously these two classes are strongly coupled and both would probably have interfaces extracted in actual code, but that's not what I'm getting at. My question is, is this a better way to model the relationship between the two objects? Is there an even better way than this? The more I think about it, the less sense of aDoor.Lock() I can make; it seems to violate object-oriented design.
Although the person "locks" the door, in reality the person is toggling (or frobbing) on an element of the door (the lock handle) and that manipulation causes the lock to lock the door. You can think of this where, although the person is moving the deadbolt, the deadbolt is what is locking the door - not the person. So a better representation might be that a door has a lock, and the person calls lock.lock(), which then sets the lock being closed (locked). The basic premise here is that, although the person is manipulating the lock, that is external (the function call). The lock's internal changes (the code inside the function) is what is actually causing the door to lock. The person is not taking off the handle and manipulating the inside to lock the door every time - they are simply toggling a state on the outside and expecting the machinery internal to handle it.
OOP isn't really about modelling how things work in the "real world". Its more about managing complexity. Considering this, it is perfectly acceptable for the door to lock itself. Even in the real world, a person locking a door doesn't need to know anything about how the lock works other than turning the knob or the key. Hiding the details of a complex idea behind an abstraction is what makes OOP so useful. The abstractions you use differ with the problem domain. In the example you gave the Person shouldn't need to know anything about the door other than how to operate it: ``` class Door { public bool Open(){} public bool Close(){} public void Lock(){} public void Unlock(){} } ```
OOD and subject-object confusion
[ "", "c#", "oop", "" ]
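The accepted answer's point — the person toggles the lock, and the lock's internals do the actual locking — is composition: a `Door` that owns a `Lock` and exposes only the external toggle. A minimal Python sketch:

```python
class Lock:
    """The machinery; its internal state change is what locks the door."""
    def __init__(self):
        self.engaged = False

    def lock(self):
        self.engaged = True

class Door:
    """The door owns a Lock; callers toggle it through the door."""
    def __init__(self):
        self._lock = Lock()

    def lock(self):
        self._lock.lock()

    @property
    def is_locked(self):
        return self._lock.engaged

door = Door()
door.lock()  # the caller just toggles; the Lock does the work
```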
``` select (t1.a + t2.b) sum from (select (aa + bb) a from table_x where cc = 'on') t1, table_y t2 where t1.id = t2.id ``` The problem is that when t1 is not found, the final result will be null; How can I make the default value of t2.b to 0, when t1 is not found? Thx in advance.
You'll need to use a subquery or a left join if you want to get a row back at all when it can't find a match. Like so: ``` select nvl( (select (aa + bb) from table_x where cc = 'on' and id = t2.id) , 0) + t2.b as sum from table_y t2 ```
``` select (t1.a + decode(nvl(t1.a,-1),-1,0,t2.b)) sum from (select (aa + bb) a from table_x where cc = 'on') t1, table_y t2 where t1.id = t2.id ``` Does this work? -1 may need to be replaced by a varchar like 'X' if the t2.b is a varchar selection, which I am guessing is not; looking at the addition here.
How can I use the 'NVL' function on a result table?
[ "", "sql", "oracle", "" ]
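The accepted subquery pattern can be exercised end to end with SQLite, whose `COALESCE` plays the role of Oracle's `NVL`. The schema below mirrors the question's tables; the data is invented, and the alias is `total` because `sum` clashes with the aggregate name:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table_x (id INTEGER, aa INTEGER, bb INTEGER, cc TEXT);
    CREATE TABLE table_y (id INTEGER, b INTEGER);
    INSERT INTO table_y VALUES (1, 7);   -- no matching table_x row
""")
rows = con.execute("""
    SELECT COALESCE((SELECT aa + bb FROM table_x
                     WHERE cc = 'on' AND id = t2.id), 0) + t2.b AS total
    FROM table_y t2
""").fetchall()
print(rows)  # [(7,)] -- the missing t1 row contributes 0, not NULL
```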
I have created a custom keyboard shortcut for my application: when the user presses the key combination CTRL + ALT + Q, I show a message box "Are you sure you want to log out ?" Then, if YES is clicked, I log out of the application. Problem: I want to make sure that only one instance of the message box shows, no matter how many times the user presses the shortcut. Currently it shows multiple message boxes when the shortcut is pressed multiple times. How to overcome this?
You'll want to make your *application* single instance so it can only be started once. [Single Instance App](http://www.codeproject.com/KB/cs/singleinstance.aspx)
From [MSDN](http://msdn.microsoft.com/en-us/library/szwxe9we.aspx) > A message box is a modal dialog box, > which means no input (keyboard or > mouse click) can occur except to > objects on the modal form. The program > must hide or close a modal form > (typically in response to some user > action) before input to another form > can occur. File a bug on connect.microsoft.com ! --- Taking ck's comment into consideration...If you are showing a custom dialog (form) then you need to invoke the form using Form.ShowDialog() and not Show().
Show only one instance of messagebox.show in c#
[ "", "c#", ".net", "winforms", "" ]
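Beyond making the whole application single-instance, the narrower fix is a re-entrancy guard around the dialog itself: ignore the shortcut while a confirmation is already open. A Python sketch using a non-blocking lock, with the dialog stubbed out as a callable:

```python
import threading

_dialog_guard = threading.Lock()

def confirm_logout(show_dialog):
    """Run show_dialog only if no confirmation is already open."""
    if not _dialog_guard.acquire(blocking=False):
        return None  # a dialog is already up; ignore the shortcut
    try:
        return show_dialog()
    finally:
        _dialog_guard.release()
```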
I'm trying to create a MouseEvent with certain modifiers for UnitTesting. I'm using J2SE and the following code fails to pass: ``` public void testMouseEventProblem() { MouseEvent event = new MouseEvent(new JPanel(), 1, System.currentTimeMillis(), InputEvent.CTRL_DOWN_MASK | InputEvent.ALT_DOWN_MASK, 1,1, 0, false); assertEquals(InputEvent.CTRL_DOWN_MASK | InputEvent.ALT_DOWN_MASK, event.getModifiers()); } ``` It's complaining saying "expected 640 but was 10" What's going on?
It gets internally converted to ALT\_MASK and CTRL\_MASK constants (8 + 2) It happens in `java.awt.event.InputEvent:405` in JDK 6 ``` /** * Returns the modifier mask for this event. */ public int getModifiers() { return modifiers & (JDK_1_3_MODIFIERS | HIGH_MODIFIERS); } ``` Try `getModifiersEx():442`: ``` public int getModifiersEx() { return modifiers & ~JDK_1_3_MODIFIERS; } ``` As ALT\_DOWN\_MASK and friends are extended modifiers introduced after Java 1.3 Proof: ``` public class MouseEvt { public static void main(String[] args) { MouseEvent event = new MouseEvent(new JPanel(), 1, System .currentTimeMillis(), InputEvent.CTRL_DOWN_MASK | InputEvent.ALT_DOWN_MASK, 1, 1, 0, false); System.out.printf("%d - %d%n", InputEvent.CTRL_DOWN_MASK | InputEvent.ALT_DOWN_MASK, event.getModifiersEx()); } } ``` Returns `640 - 640`
I don't think you should use assertEquals here. You are checking that at least one of two specific bits is set in a value that could be anything, so you probably want two separate asserts. Let me clarify this: You are getting a number that consists of a bunch of bits that are set (the modifiers), but you only care about two specific bits. With assertEquals, you are essentially saying that you want two specific bits set, while the others are zero. What you could do is assert that getModifiers() & ( MASK1 | MASK2 ) is greater than zero, since at least one of the two bits must be on for that to happen. That being said, something about those numbers looks funky, are you sure you are using the correct mask values?
MouseEvent weirdness
[ "", "java", "events", "junit", "" ]
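The accepted answer's arithmetic checks out: `CTRL_DOWN_MASK` (1 << 7 = 128) plus `ALT_DOWN_MASK` (1 << 9 = 512) gives the 640 in the failure, while the legacy `CTRL_MASK | ALT_MASK` is 2 + 8 = 10. A Python sketch of why masking beats strict equality when testing modifier bits:

```python
# Modifier masks matching AWT's extended *_DOWN_MASK constants.
SHIFT_DOWN = 1 << 6   # 64,  InputEvent.SHIFT_DOWN_MASK
CTRL_DOWN  = 1 << 7   # 128, InputEvent.CTRL_DOWN_MASK
ALT_DOWN   = 1 << 9   # 512, InputEvent.ALT_DOWN_MASK

modifiers = CTRL_DOWN | ALT_DOWN   # 640, what getModifiersEx() reports

# Equality demands *exactly* these bits; masking tests only the bits
# you care about, ignoring anything else that may also be set.
assert modifiers & CTRL_DOWN
assert modifiers & ALT_DOWN
assert not modifiers & SHIFT_DOWN
```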