I am working on making an address book in C# 2008. I need to be able to save the contacts and then later display them when the user asks for it. I also need to handle an exception when someone enters an unknown color for the person's favorite color. This is my code so far: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace Lab02 { class Program { static void Main(string[] args) { Contact contact = new Contact(); Console.WriteLine("Please enter the person's name:"); contact.Name = Console.ReadLine(); Console.WriteLine("Please enter the person's e-mail address:"); contact.Email = Console.ReadLine(); Console.WriteLine("Please enter the person's favorite color:"); string tempColor = Console.ReadLine(); contact.Favoritecolor = (System.Drawing.KnownColor)(Enum.Parse(typeof(System.Drawing.KnownColor), tempColor)); try { } catch { } finally { Console.WriteLine("This is an unknown color. Please enter a known color"); } } class Color { enum clr // This is a list of colors for the user to pick from. { Red, Green, Blue, Yellow, Purple, Brown, Black, Crimson, White, Turqoise, Orange, Cyan, Pink, Gold, Silver, Bronze, Gray, Indigo, Rust } } } class Contact { //This string represents the person's Name. public string Name { get; set; } //This string represents the person's Email. public string Email { get; set; } public System.Drawing.KnownColor Favoritecolor { get; set; } } } ``` Can anyone help me please?
Why not list all the colors that the user can pick, with the enum value, then ask them to enter a number, try to convert it to the enum, and then store it. If it fails to convert, let them know it's invalid. Here's a little snippet to help. You'll need to make your enum public, however. ``` Console.WriteLine("Here is a list of colors:"); foreach(Color.clr item in Enum.GetValues(typeof(Color.clr))) { Console.WriteLine(string.Format("{0} - {1}",(int)item,item.ToString())); } Console.WriteLine("Please choose your color"); string colorInput = Console.ReadLine(); int colorValue = 0; if(!int.TryParse(colorInput, out colorValue)) { Console.WriteLine(string.Format("{0} is not a valid number",colorInput)); return; } // This will give an error if they've typed a number that wasn't listed // So need to add a bit of checking here Color.clr tempColor = (Color.clr)colorValue; // Your code here ```
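The range check the snippet's comment calls for amounts to validating the parsed number against the enum's defined members before casting (in C# the equivalent test is `Enum.IsDefined`). Here is a Python sketch of the same idea; the `Clr` members below are illustrative stand-ins for the question's color list:

```python
from enum import IntEnum

class Clr(IntEnum):
    # A few illustrative members; the real list would mirror the question's enum.
    RED = 0
    GREEN = 1
    BLUE = 2

def parse_color(user_input):
    """Return the matching Clr member, or None if the input is not a valid choice."""
    try:
        value = int(user_input)
    except ValueError:
        return None  # not a number at all
    try:
        return Clr(value)  # raises ValueError for numbers outside the enum
    except ValueError:
        return None
```

With a guard like this, the invalid-input case is handled by a normal return value instead of an exception escaping the cast.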
You can also use reflection: ``` public static class ColorInfo { private static readonly PropertyInfo[] PropertiesInfo; static ColorInfo() { PropertiesInfo = typeof(Color).GetProperties(BindingFlags.Public | BindingFlags.Static); } public static bool TryGetKnownColorFromString(string colorName, out KnownColor knowColor) { if (String.IsNullOrEmpty(colorName))//if wrong color name { knowColor = KnownColor.ActiveBorder; return false; } try { foreach (PropertyInfo property in PropertiesInfo) { if (property.Name.Equals(colorName, StringComparison.InvariantCultureIgnoreCase)) { knowColor = ((Color)property.GetValue(null, null)).ToKnownColor(); return true; } } } catch (Exception exc) { //catch GetValue & Equals methods exceptions if (!(exc is ArgumentException || exc is TargetException || exc is TargetParameterCountException || exc is MethodAccessException || exc is TargetInvocationException)) { //throw exc; //We don't want to lose information from StackTrace throw; } } knowColor = KnownColor.ActiveBorder; return false; } } ``` A simple test: ``` string[] colors = { "reD", "AzUre", "Blue", "BlueViollet" }; KnownColor knowColor; foreach (String color in colors) { if (ColorInfo.TryGetKnownColorFromString(color, out knowColor)) { Console.WriteLine(knowColor.ToString()); } else { Console.WriteLine("Color: {0} - not found !", color); } } ``` Output: ``` Red Azure Blue Color: BlueViollet - not found ! ``` Helpful usings: ``` using System; using System.Drawing; using System.Reflection; ```
How to save contacts in an address book and list them?
[ "c#", "exception" ]
I want to be able to get the size of one of the local directories using C#. I'm trying to avoid the following (pseudo-code), although in the worst case scenario I will have to settle for it: ``` int GetSize(Directory) { int Size = 0; foreach ( File in Directory ) { FileInfo fInfo of File; Size += fInfo.Size; } foreach ( SubDirectory in Directory ) { Size += GetSize(SubDirectory); } return Size; } ``` Basically, is there a Walk() available somewhere so that I can walk through the directory tree? That would save the recursion of going through each sub-directory.
If you use `Directory.GetFiles` you can do a recursive seach (using `SearchOption.AllDirectories`), but this is a bit flaky anyway (especially if you don't have access to one of the sub-directories) - and might involve a huge single array coming back (warning klaxon...). I'd be happy with the recursion approach unless I could show (via profiling) a bottleneck; and then I'd probably switch to (single-level) `Directory.GetFiles`, using a `Queue<string>` to emulate recursion. Note that .NET 4.0 introduces some enumerator-based file/directory listing methods which save on the big arrays.
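The queue-based approach the answer describes, using a `Queue<string>` to emulate recursion while listing only one directory level at a time, can be sketched in a few lines. Here is a Python version of the same idea; the logic ports directly to C#:

```python
import os
from collections import deque

def get_directory_size(root):
    """Sum file sizes iteratively, using an explicit queue instead of recursion."""
    total = 0
    pending = deque([root])
    while pending:
        current = pending.popleft()
        try:
            names = os.listdir(current)
        except OSError:
            continue  # skip directories we are not allowed to read
        for name in names:
            path = os.path.join(current, name)
            if os.path.isdir(path) and not os.path.islink(path):
                pending.append(path)  # visit this sub-directory later
            elif os.path.isfile(path):
                total += os.path.getsize(path)
    return total
```

Note how the inaccessible-directory case is handled per level rather than aborting the whole traversal, which is the flakiness the answer warns about with a single recursive search.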
A very succinct way to get a folder size in .net 4.0 is below. It still suffers from the limitation of having to traverse all files recursively, but it doesn't load a potentially huge array of filenames, and it's only two lines of code. Make sure to use the namespaces `System.IO` and `System.Linq`. ``` private static long GetDirectorySize(string folderPath) { DirectoryInfo di = new DirectoryInfo(folderPath); return di.EnumerateFiles("*.*", SearchOption.AllDirectories).Sum(fi => fi.Length); } ```
How do I get a directory size (files in the directory) in C#?
[ "c#", "asp.net", "winforms", ".net" ]
I have a tag field in my web page, in which users can enter tags separated by `,` (comma), the same as done on StackOverflow. I am using PHP and I am separating all tags entered by the user on the basis of the comma using the explode function, and then adding them to my tags table in the database. My code works perfectly for normal tags: if the user entered `battleship, strategy, sea war` then it works. But let's say the user entered `battleship, strategy, sea war,` or `battleship, strategy, games, sea war, , , , ,` or `battleship, strategy, games,, sea war` or any wrong value, then how will I detect it and enter only the correct values in the database? Is it possible using regExp or any other way, please tell me how?
``` $tags = "battleship, stragety, ,"; $exp = explode(",", $tags); $valid_tags = array(); foreach($exp as $tag) { if( ($tag = trim($tag)) != "") { $valid_tags[] = $tag; } } ``` This will trim the whitespace from each tag, and if any text remains, the tag is added to the valid array :)
You can use regular expressions to split the string into the tags, for example: ``` \s*,\s* $tags = preg_split('/\\s*,\\s*/', $str); $tags = array_filter(array_map('trim', $tags)); var_dump($tags); ``` But then you still need to check if the tags are valid. An empty tag for example (like in `foo,,bar`) isn’t. Just filter them out and pass the rest to your database. Or you use a regular expression to just find valid tags: ``` \w+(?:\s+\w+)* preg_match_all('/\\w+(?:\\s+\\w+)*/', $str, $tags, PREG_PATTERN_ORDER); var_dump($tags); ```
How to enter tags correctly?
[ "php", "tagging", "explode" ]
Jimmy Nilsson discusses his COMB guid concept [here](http://www.informit.com/articles/article.aspx?p=25862&seqNum=7). This concept is popular in NHibernate, among other circles, for its supposed performance value over standard GUIDs which are typically far more random. However, in testing, this does not appear to be the case. Am I missing something? Test case: I have a table called temp (not a temp table, just a table named "temp") with 585,000 rows in it. I have a new table called Codes, and wish to copy all 585,000 code values from the temp table to the codes table. The test SQL I executed was: ``` set statistics time on; truncate table codes; DBCC DBREINDEX ('codes', '', 90); insert into codes (codeid, codevalue) select newid(), codevalue from temp truncate table codes; DBCC DBREINDEX ('codes', '', 90); insert into codes (codeid, codevalue) select CAST(CAST(NEWID() AS BINARY(10)) + CAST(GETDATE() AS BINARY(6)) AS UNIQUEIDENTIFIER), codevalue from temp ``` Performance with standard GUID values: > SQL Server Execution Times: CPU > time = 17250 ms, elapsed time = 15735 > ms. > > (585000 row(s) affected) Performance with COMB GUID values: > SQL Server Execution Times: CPU > time = 17500 ms, elapsed time = 16419 > ms. > > (585000 row(s) affected) What am I missing? the COMB GUID values resulted in slightly longer times, presumably because of the additional conversions. I thought the point was to reduce the insert time by semi-ordering the GUIDS using the date for the last 6 bytes, but the performance gain appears non-existent.
I second that you'll see differences only when you have indexes (PK, FK or other kinds of indexes, clustered or not) on the Guid column, because the cost of a standard guid versus a sequential or COMB guid is due to the high cost of re-ordering the index data every time an insert is performed. See [my question](https://stackoverflow.com/questions/170346/what-are-the-performance-improvement-of-sequential-guid-over-standard-guid) in which I corroborate this with some real life data from both SQL Server and Oracle.
I'd suggest that you're not seeing the order benefit because the target table has no PK. So, it's the conversion overhead you're seeing. IF it has a PK, the 585k rows must still be sorted on insert. How does SQL know it's semi-sorted? Now, if it was 5,850 x 100 row inserts, then you may see some benefit because the new rows will go "at the end" not "in the middle" so reducing page splits and overhead. I'd go further and say that the article is dated 2002, and is for SQL 2000, and has been overtaken by real life. In SQL Server 2005 we have SEQUENTIAL GUIDs to allow strictly monotonic GUIDs to solve some issues. The GUID as PK has been done here too: recent example: [INT vs Unique-Identifier for ID field in database](https://stackoverflow.com/questions/1151625/int-vs-unique-identifier-for-id-field-in-database) with 3rd party links. If an ORM dictates GUID as a PK rather than a natural key or standard int-based surrogate key, that's a severe limitation of the ORM. And a case of the client tail wagging the database dog.
Performance value of COMB guids
[ "sql", "sql-server", "performance", "t-sql", "guid" ]
I like the Google Web Toolkit API approach: it uses the Java language behind the scenes and compiles only the JavaScript code that the target browser needs. Some developers would like to use that feature in pure JavaScript. So: what could we suggest in order to fulfill this requirement? I suggest using JavaScript comments (as a flag) so that some compiler (like the Yahoo JavaScript compiler) analyzes our app's JavaScript code and generates only the framework code that is needed. Example: a hypothetical JavaScript framework (jQuery, MooTools, Prototype etc.) code ``` // depends on function say function sayHello() { // some code } function say() { // some code } // many more framework functions that our app does not need ``` So when my app uses the function sayHello, only that sayHello function and its dependencies would be kept, filtered via the JavaScript comments, nothing else. This way our application would be lighter, by using only the framework code it needs. And you: what do you suggest?
I would suggest learning to program in JavaScript and understanding the various peculiarities of the different DOM implementations, then writing just the code necessary for your application. If you really don't want to have to deal with re-creating all the event handling shenanigans and so on, then nick the relevant techniques from the libraries. You'll learn a lot more about what you're doing that way, as you'll need to actually understand how it all works to be able to integrate those techniques with your application. Having worked professionally with JavaScript since 1996 I, like many, was initially tempted by the apparent ease of use offered by libraries; but if I see one more answer on here that says "use jQuery" (followed by some code that isn't even optimal in jQuery) when the correct answer is to use an existing and well-documented feature of JavaScript that works on every single implementation since Netscape Navigator 3, I'll scream ;-)
If the JavaScript code of the framework is served as a cacheable file then the download cost of requesting the *entire* framework (e.g. jQuery.js) can be eliminated, but if you were generating the framework code on the fly (as you suggest above) then it's going to be harder to take advantage of caching. On top of this the memory cost of defining the entire framework is probably unlikely to be problematic (assuming the framework is written sensibly). So, pulling in the entire framework, as is the common case, is simple, works well and doesn't require a particular server-side infrastructure (like GWT does).
Why download an entire JavaScript framework when we do not need all of its features? What could we suggest? (JavaScript)
[ "javascript", "performance" ]
We have an ASP.NET C# web application with a "printer friendly" link. Is there a way to programmatically set the orientation to landscape, so the user can just press the print button?
The short answer is "No." It is a deliberate limitation of browsers that the page itself cannot override the user's print settings. This is to prevent abuse I would imagine and causes all sorts of headaches. One possible work around would be to output your page as a PDF and present that. You *can* control the print settings for a PDF page.
This is something that would have to be done on the client side (using JavaScript/CSS). Unfortunately, JavaScript does not have the ability to make this change. CSS does have a means of specifying landscape printing via the @page directive: ``` @page { size: landscape; } ``` CSS Paged Media is well supported by Chrome v15+ and Opera v15+, is partially supported by Firefox v19+, IE8+ and all versions of Edge. However it is not supported by Safari at all <https://caniuse.com/#feat=css-paged-media>
Programmatically set print page orientation to landscape
[ "c#", "asp.net", "printing" ]
I have a list of tuples, each containing a find/replace value that I would like to apply to a string. What would be the most efficient way to do so? I will be applying this iteratively, so performance is my biggest concern. More concretely, what would the innards of processThis() look like? ``` x = 'find1, find2, find3' y = [('find1', 'replace1'), ('find2', 'replace2'), ('find3', 'replace3')] def processThis(str,lst): # Do something here return something >>> processThis(x,y) 'replace1, replace2, replace3' ``` Thanks, all!
You could consider using `re.sub`: ``` import re REPLACEMENTS = dict([('find1', 'replace1'), ('find2', 'replace2'), ('find3', 'replace3')]) def replacer(m): return REPLACEMENTS[m.group(0)] x = 'find1, find2, find3' r = re.compile('|'.join(REPLACEMENTS.keys())) print r.sub(replacer, x) ```
A couple notes: 1. The boilerplate argument about premature optimization, benchmarking, bottlenecks, 100 is small, etc. 2. There are cases where the different solutions will return different results. if `y = [('one', 'two'), ('two', 'three')]` and `x = 'one'` then mhawke's solution gives you `'two'` and Unknown's gives `'three'`. 3. Testing this out in a silly contrived example mhawke's solution was a *tiny* bit faster. It should be easy to try it with your data though.
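The order-dependence mentioned in note 2 is easy to see side by side: chained `str.replace` calls feed each result into the next replacement, while a single-pass `re.sub` (as in the answer above) resolves every match against the original string once. A small sketch:

```python
import re

y = [('one', 'two'), ('two', 'three')]
x = 'one'

# Sequential: each pair is applied to the output of the previous replacement.
sequential = x
for find, repl in y:
    sequential = sequential.replace(find, repl)

# Single pass: each position in the original string is replaced at most once.
mapping = dict(y)
pattern = re.compile('|'.join(re.escape(k) for k in mapping))
single_pass = pattern.sub(lambda m: mapping[m.group(0)], x)

# sequential is 'three' ('one' -> 'two' -> 'three'); single_pass is 'two'.
```

Which behavior is "correct" depends entirely on whether the replacements are meant to cascade, so it's worth deciding that before optimizing.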
Iterative find/replace from a list of tuples in Python
[ "python", "django", "list", "iteration", "tuples" ]
I'm confused how CookieContainer handles domain, so I create this test. This test shows cookieContainer doesn't return any cookie for "example.com" but according to RFC it should return at least 2 cookies. Isn't it a bug? How make it to work? Here is a discussion about this bug: <http://social.msdn.microsoft.com/Forums/en-US/ncl/thread/c4edc965-2dc2-4724-8f08-68815cf1dce6> ``` <%@ Page Language="C#" %> <%@ Import Namespace="System.Net" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <script runat="server"> CookieContainer getContainer() { CookieContainer result = new CookieContainer(); Uri uri = new Uri("http://sub.example.com"); string cookieH = @"Test1=val; domain=sub.example.com; path=/"; result.SetCookies(uri, cookieH); cookieH = @"Test2=val; domain=.example.com; path=/"; result.SetCookies(uri, cookieH); cookieH = @"Test3=val; domain=example.com; path=/"; result.SetCookies(uri, cookieH); return result; } void Test() { CookieContainer cookie = getContainer(); lblResult.Text += "<br>Total cookies count: " + cookie.Count + " &nbsp;&nbsp; expected: 3"; Uri uri = new Uri("http://sub.example.com"); CookieCollection coll = cookie.GetCookies(uri); lblResult.Text += "<br>For " + uri + " Cookie count: " + coll.Count + " &nbsp;&nbsp; expected: 2"; uri = new Uri("http://other.example.com"); coll = cookie.GetCookies(uri); lblResult.Text += "<br>For " + uri + " Cookie count: " + coll.Count + " &nbsp;&nbsp; expected: 2"; uri = new Uri("http://example.com"); coll = cookie.GetCookies(uri); lblResult.Text += "<br>For " + uri + " Cookie count: " + coll.Count + " &nbsp;&nbsp; expected: 2"; } protected void Page_Load(object sender, EventArgs e) { Test(); } </script> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>CookieContainer Test Page</title> </head> <body> <form id="frmTest" runat="server"> <asp:Label ID="lblResult" EnableViewState="false" runat="server"></asp:Label> </form> 
</body> </html> ```
I just found the fix for this bug and discussed here: <http://dot-net-expertise.blogspot.com/2009/10/cookiecontainer-domain-handling-bug-fix.html> Here is the solution: 1. Don't use .Add(Cookie), Use only .Add(Uri, Cookie) method. 2. Call BugFix\_CookieDomain each time you add a cookie to the container or before you use .GetCookie or before system use the container. ``` private void BugFix_CookieDomain(CookieContainer cookieContainer) { System.Type _ContainerType = typeof(CookieContainer); Hashtable table = (Hashtable)_ContainerType.InvokeMember("m_domainTable", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.GetField | System.Reflection.BindingFlags.Instance, null, cookieContainer, new object[] { }); ArrayList keys = new ArrayList(table.Keys); foreach (string keyObj in keys) { string key = (keyObj as string); if (key[0] == '.') { string newKey = key.Remove(0, 1); table[newKey] = table[keyObj]; } } } ```
I've created a fix for this problem that works on Windows 10 / UWP / .NET Core apps. The issue is that the internals for `CookieContainer` are different, but just as crappy, as they are in the .NET Framework proper. So the accepted solution does not work anymore. But instead of "fixing" the `CookieContainer`, I just wrote a version of `GetCookies()` that gets all the cookies for a particular domain with a string, regardless of their "secure" state or if they are prefixed with a dot. Feel free to modify it as you see fit for your needs, and I'll see about getting a version of it implemented in a future .NET Core release. ``` using System.Collections.Generic; using System.Reflection; namespace System.Net { /// <summary> /// Contains extensions for the <see cref="CookieContaner"/> class. /// </summary> public static class CookieContainerExtensions { /// <summary> /// Uses Reflection to get ALL of the <see cref="Cookie">Cookies</see> where <see cref="Cookie.Domain"/> /// contains part of the specified string. Will return cookies for any subdomain, as well as dotted-prefix cookies. /// </summary> /// <param name="cookieContainer">The <see cref="CookieContainer"/> to extract the <see cref="Cookie">Cookies</see> from.</param> /// <param name="domain">The string that contains part of the domain you want to extract cookies for.</param> /// <returns></returns> public static IEnumerable<Cookie> GetCookies(this CookieContainer cookieContainer, string domain) { var domainTable = GetFieldValue<dynamic>(cookieContainer, "_domainTable"); foreach (var entry in domainTable) { string key = GetPropertyValue<string>(entry, "Key"); if (key.Contains(domain)) { var value = GetPropertyValue<dynamic>(entry, "Value"); var internalList = GetFieldValue<SortedList<string, CookieCollection>>(value, "_list"); foreach (var li in internalList) { foreach (Cookie cookie in li.Value) { yield return cookie; } } } } } /// <summary> /// Gets the value of a Field for a given object instance. 
/// </summary> /// <typeparam name="T">The <see cref="Type"/> you want the value to be converted to when returned.</typeparam> /// <param name="instance">The Type instance to extract the Field's data from.</param> /// <param name="fieldName">The name of the Field to extract the data from.</param> /// <returns></returns> internal static T GetFieldValue<T>(object instance, string fieldName) { BindingFlags bindFlags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static; FieldInfo fi = instance.GetType().GetField(fieldName, bindFlags); return (T)fi.GetValue(instance); } /// <summary> /// Gets the value of a Property for a given object instance. /// </summary> /// <typeparam name="T">The <see cref="Type"/> you want the value to be converted to when returned.</typeparam> /// <param name="instance">The Type instance to extract the Property's data from.</param> /// <param name="propertyName">The name of the Property to extract the data from.</param> /// <returns></returns> internal static T GetPropertyValue<T>(object instance, string propertyName) { var pi = instance.GetType().GetProperty(propertyName); return (T)pi.GetValue(instance, null); } } } ```
CookieContainer bug?
[ "c#", "cookiecontainer" ]
How would I convert this query from SQL to Linq: ``` SELECT status As 'Status', count(status) As 'Count' FROM tbl_repair_order WHERE contract = 'con' and (status = 'Parts Arr' or status = 'NA' or status = 'New Call' or status = 'Parts Ord' or status = 'Parts Req' or status = 'F Work') GROUP BY status ``` --- Update Thanks Guys, this is the code I used. Tested and returns the same as above: ``` List<string> statuses = new List<string> { "Parts Arr", "NA", "New Call", "Parts Ord", "Parts Req", "F Work"}; var result = (from x in db.tbl_repair_orders where x.CONTRACT == strContract && statuses.Contains(x.STATUS) group x.STATUS by x.STATUS into grouping select new { Status = grouping.Key, Count = grouping.Count() }); return result; ```
``` string[] statuses = new string[] { "Parts Arr", "NA", "New Call", "Parts Ord", "Parts Req", "F Work" }; var x = (from ro in db.tbl_repair_order where ro.contract == "con" && statuses.Contains(ro.status) group 0 by ro.status into grouping select new { Status = grouping.Key, Count = grouping.Count() }); ``` I don't know if the syntax is correct (especially the last two lines) but it should be pretty close. I added the 0 between group and by based on Eamon Nerbonne's correction in the comments. Also, thanks to Ryan Versaw for the link explaining List and arrays for generating IN clauses.
Assuming you wire up your tables appropriately, something like ``` var statusCounts = from row in youDbNameHere.tbl_repair_order where row.contract == "con" && (row.status == "Parts Arr" || row.status == "NA" || row.status == "New Call" || row.status == "Parts Ord" || row.status == "Parts Req" || row.status == "F Work") group 0 by row.status into g select new { Status = g.Key, StatusCount = g.Count() }; ``` ...and I see Andy beat me to it ;-) Notes: * You need to include an expression between "group" and "by"; this expression will be evaluated to form the set of values accessible under the group's key (in your case it's irrelevant, so a zero is fine). * If you wish to use Linq-to-Sql or Linq-to-Entities (or some other IQueryable implementation), be aware that your code will not execute directly in C#, but rather be translated (as it should be) into sql -- so avoid using .NET specific calls that cannot be translated, as these will generally cause a run-time exception or (rarely) cause the resulting query to be partially evaluated client-side (at a potentially hefty performance cost).
Sql Query to Linq
[ "sql", "linq", "linq-to-sql" ]
Currently when I make a signature using java.security.Signature, it passes back a string. I can't seem to use this string since there are special characters that can only be seen when I copy the string into Notepad++; from there, if I remove these special characters, I can use the remains of the string in my program. In Notepad++ they look like black boxes with the words ACK GS STX SI SUB ETB BS VT. I don't really understand what they are, so it's hard to tell how to get rid of them. Is there a function that I can run to remove these and potentially similar characters? When I use the base64 class supplied in the posts, I can't go back to a signature: ``` System.out.println(signature); String base64 = Base64.encodeBytes(sig); System.out.println(base64); String sig2 = new String (Base64.decode(base64)); System.out.println(sig2); ``` gives the output ``` ”zÌý¥y]žd”xKmËY³ÕN´Ìå}ÏBÊNÈ›`Αrp~jÖüñ0…Rõ…•éh?ÞÀ_û_¥ÂçªsÂk{6H7œÉ/”âtTK±Ï…Ã/Ùê² lHrM/aV5XZ5klHhLbctZs9VOtMzlfc9Cyk7Im2DOkXJwfmoG1vzxMIVS9YWV6Wg/HQLewF/7X6XC56pzwmt7DzZIN5zJL5TidFRLsc+Fwy/Z6rIaNA2uVlCh3XYkWcu882tKt2RySSkn1heWhG0IeNNfopAvbmHDlgszaWaXYzY= [B@15356d5 ```
The odd characters are there because cryptographic signatures produce bytes rather than strings. Consequently if you want a printable representation you should [Base64 encode it](http://en.wikipedia.org/wiki/Base64) ([here's a public domain implementation for Java](http://iharder.sourceforge.net/current/java/base64/)). Stripping the non-printing characters from a cryptographic signature will render it useless as you will be unable to use it for verification. Update: > [B@15356d5 This is the result of toString called on a byte array. "[" means array, "B" means byte and "15356d5" is the address of the array. You should be passing the array you get out of decode to [Signature.verify](<http://java.sun.com/j2se/1.4.2/docs/api/java/security/Signature.html#verify(byte[]))>. Something like: ``` Signature sig = new Signature("dsa"); sig.initVerify(key); sig.verify(Base64.decode(base64)); // <-- bytes go here ```
How are you "making" the signature? If you use the `sign` method, you get back a byte array, not a string. That's not a binary representation of some text, it's just arbitrary binary data. That's what you should use, and if you need to convert it into a string you should use a base64 conversion to avoid data corruption.
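The lossless round trip both answers rely on (raw signature bytes to Base64 text and back to the identical bytes) is easy to demonstrate. Here is a Python sketch using an HMAC as a stand-in for a real DSA signature, since the byte handling is the same:

```python
import base64
import hashlib
import hmac

key = b'secret-key'
# Stand-in for Signature.sign(): produces raw, mostly non-printable bytes.
signature = hmac.new(key, b'message to sign', hashlib.sha256).digest()

encoded = base64.b64encode(signature).decode('ascii')  # safe to print or store
decoded = base64.b64decode(encoded)                    # the original bytes, intact

assert decoded == signature  # nothing was lost in the round trip
```

Stripping "strange characters" from the raw bytes instead would change the value, so verification could never succeed; encoding is the only safe way to make the signature printable.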
Need help removing strange characters from string
[ "java", "notepad++" ]
I have never written unit tests before, for various reasons. I have a chance to write tests now, comfortably, because I have a small app to make from scratch. However, I'm a bit puzzled. The application is supposed to use a printer with a smart card reader to program data on a smart card. So here's the sequence of actions: Create device context, set printer mode, initialize a document, feed a card into the printer, connect to the card with a reader, write something to the card, move the card out, end document, dispose of device context. Okay, unit tests are supposed to test one function for each test, and each test is supposed to run independently of the result of other tests. But let's see - I can not test writing to smart card if I did not position it properly in the printer and if I have not connected to it. And I can not mock this by software - I can only test if writing actually happened if the real card is positioned properly and connected to. And if connecting to card will fail, there's no way to test writing to card - so the test independence principle is broken. 
So far I came up with a test like this (there are also other tests which are 'proper' and test other things too) ``` [Test] public void _WriteToSmartCard() { //start print job printer = new DataCardPrinter(); reader = new SCMSmartCardReader(); di = DataCardPrinter.InitializeDI(); printer.CreateHDC(); Assert.AreNotEqual(printer.Hdc, 0, "Creating HDC Failed"); Assert.Greater(di.cbSize, 0); int res = ICE_API.SetInteractiveMode(printer.Hdc, true); Assert.Greater(res, 0, "Interactive Mode Failed"); res = ICE_API.StartDoc(printer.Hdc, ref di); Assert.Greater(res, 0, "Start Document Failed"); res = ICE_API.StartPage(printer.Hdc); Assert.Greater(res, 0, "Start Page Failed"); res = ICE_API.RotateCardSide(printer.Hdc, 1); Assert.Greater(res, 0, "RotateCardSide Failed"); res = ICE_API.FeedCard(printer.Hdc, ICE_API.ICE_SMARTCARD_FRONT + ICE_API.ICE_GRAPHICS_FRONT); Assert.Greater(res, 0, "FeedCard Failed"); bool bRes = reader.EstablishContext(); Assert.True(bRes, "EstablishContext Failed"); bRes = reader.ConnectToCard(); Assert.True(bRes, "Connect Failed"); bRes = reader.WriteToCard("123456"); Assert.True(bRes, "Write To Card Failed"); string read = reader.ReadFromCard(); Assert.AreEqual("123456", read, "Read From Card Failed"); bRes = reader.DisconnectFromCard(); Assert.True(bRes, "Disconnect Failed"); res = ICE_API.SmartCardContinue(printer.Hdc, ICE_API.ICE_SMART_CARD_GOOD); Assert.Greater(res, 0, "SmartCardContinue Failed"); res = ICE_API.EndPage(printer.Hdc); Assert.Greater(res, 0, "End Page Failed"); res = ICE_API.EndDoc(printer.Hdc); Assert.Greater(res, 0, "End Document Failed"); } ``` The test is working, but the principles are broken - it tests multiple functions, and a lot of them. And each following function depends on the outcome of the previous one. Now, we come to the question: How should I approach unit testing in these circumstances?
Your test code is what is often referred to as an integration test. In short, integration tests are often defined as tests that check the integration between components of a system. While, as David Reis mentions, unit tests will often test individual methods. Both classes of tests are useful. Integration tests, like yours, exercise the system from start to finish making sure that everything is working together nicely. But they are slow and often have outside dependencies (like a card reader). Unit tests are smaller, faster and highly focused but it's hard to see the forest for the trees if all you have are unit tests. Place your unit tests in a separate directory from your integration tests. Use continuous integration. Run your integration tests maybe only a few times a day because they are slower and require more setup/deployment. Run your unit tests all the time. Now, how do you unit test your particular situation where methods depend on other methods? It's unclear how much code you control vs how much is in the libraries, but in your code, learn to use Dependency Injection (DI) as much as possible. Suppose your reader method looks something like this (in pseudocode) ``` boolean WriteToCard(String data){ // do something to data here return ICE_API.WriteToCard(ICE_API.SOME_FLAG, data) } ``` Well you ought to be able to change this to something like : ``` ICE_API api = null ICE_API setApi(ICE_API api) { this.api = api } ICE_API getApi() { if (api == null) { api = new ICE_API() } } boolean WriteToCard(String data){ // do something to data here return getApi().WriteToCard(ICE_API.SOME_FLAG, data) } ``` Then in your test for WriteToCard in the setup you would do ``` void setup() _mockAPI = new Mock(ICE_API) reader.setApi(_mockAPI) void testWriteToCard() reader.writeToCard("12345") // assert _mockAPI.writeToCard was called with expected data and flags. ```
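The dependency-injection pattern in the pseudocode above can be exercised end to end with a mock. Here is a Python sketch of the same shape; the `RealIceApi` wrapper name is hypothetical, standing in for whatever class wraps the ICE_API calls:

```python
from unittest import mock

class Reader:
    """Card reader with an injectable API dependency."""
    def __init__(self, api=None):
        self._api = api  # test code injects a fake here

    @property
    def api(self):
        if self._api is None:
            self._api = RealIceApi()  # hypothetical wrapper; created lazily in production
        return self._api

    def write_to_card(self, data):
        return self.api.write_to_card(data)

# In a unit test, inject a mock so no printer or card is ever touched.
fake_api = mock.Mock()
fake_api.write_to_card.return_value = True

reader = Reader(api=fake_api)
assert reader.write_to_card("12345") is True
fake_api.write_to_card.assert_called_once_with("12345")
```

Because the real API object is only created lazily when nothing was injected, the unit test runs fast and deterministically while the integration test can still exercise the real hardware path.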
This doesn't look like a unit test. A unit test should be fast and assertive, i.e., you should not need to check whether an operation actually happened in hardware. I would classify this code as "test automation", as you need to execute this task and be sure that something happened. The code is also procedural and looks hard to test. The use of several assertions in the same test method indicates that it should be divided. My preferred reference for unit testing is [Misko Hevery's site](http://misko.hevery.com/code-reviewers-guide/). Hope it helps!
How would you apply unit testing in this situation?
[ "c#", "unit-testing", "nunit" ]
My company is going to be "breaking ground" on a (large!) ASP.NET application in the near future here. We're currently in the design phase, and most of the ASP.NET applications I've developed in the past were relatively small -- less than a few dozen pages and 1 or 2 developers. Can anyone point out any resources, either online or otherwise, that include some kind of best practices or common architectures for large ASP.NET applications? Cheers, Brad
The [patterns & practices guidance for Web applications](http://msdn.microsoft.com/en-us/practices/bb969054.aspx#web_apps) is not a bad place to start. Just don't get too dogmatic about following everything to the letter in those documents.
[The Onion Architecture (sorta like the Hexagonal architecture):](http://jeffreypalermo.com/blog/the-onion-architecture-part-1/) Jeffrey Palermo has done a wonderful job with this architecture. It's a really nice approach to coding in a loosely-coupled way that facilitates Test Driven Development, Domain Driven Design and generally good practices overall. Also, check out the Domain Driven Design book by Eric Evans. (It's written with Java examples, but I am using it for a C# project with no problems.)
ASP.NET Application Architecture
[ "c#", "asp.net", "architecture" ]
I have observed some unexpected, or at least not perfectly matching my needs, behaviour of textboxes bound to text properties when I can't use UpdateSourceTrigger=PropertyChanged for my binding. Probably it is not an issue with the textbox but will occur with other editors as well.

In my example (source code attached), I have a WPF TabControl bound to some collection. On each tab, you can edit an item from the collection, and in various ways you can trigger a save-action, which should save the edits to some model. The textboxes bound to each item's properties are (on purpose) kept to the default update-trigger 'OnFocusLost'. This is because there is some expensive validation taking place when a new value is set.

Now I found there are at least two ways to trigger my save-action in such a way that the last focused textbox does not update the bound value:

1) Changing the tab-item via mouse-click on its header and then clicking some save-button. (Changing back to the previous tab shows that the new value is even lost.)

2) Triggering the save-command via KeyGesture.

I set up an example application that demonstrates the behaviour. Clicking on "Save All" will show all item values, the other save-button only shows the current item.

**Q:** What would be the best way to make sure that all binding sources of all my textboxes will be updated before the bound objects are committed? Preferably there should be a single way that catches all possibilities; I dislike catching each event differently, since I would worry about having forgotten some events. Observing the selection-changed event of the tab-control, for example, would solve issue 1) but not issue 2).
Now to the example: XAML first:

```
<Window x:Class="TestOMat.TestWindow"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:TestOMat="clr-namespace:TestOMat"
    Title="TestOMat" x:Name="wnd">
  <Grid>
    <Grid.Resources>
      <DataTemplate x:Key="dtPerson" DataType="{x:Type TestOMat:Person}">
        <StackPanel Orientation="Vertical">
          <StackPanel.CommandBindings>
            <CommandBinding Command="Close" Executed="CmdSaveExecuted"/>
          </StackPanel.CommandBindings>
          <TextBox Text="{Binding FirstName}"/>
          <TextBox Text="{Binding LastName}"/>
          <Button Command="ApplicationCommands.Stop" CommandParameter="{Binding}">Save</Button>
        </StackPanel>
      </DataTemplate>
    </Grid.Resources>
    <Grid.RowDefinitions>
      <RowDefinition/>
      <RowDefinition/>
    </Grid.RowDefinitions>
    <Grid.CommandBindings>
      <CommandBinding Command="ApplicationCommands.Stop" Executed="CmdSaveAllExecuted"/>
    </Grid.CommandBindings>
    <TabControl ItemsSource="{Binding ElementName=wnd, Path=Persons}"
        ContentTemplate="{StaticResource dtPerson}"
        SelectionChanged="TabControl_SelectionChanged"/>
    <Button Grid.Row="1" Command="ApplicationCommands.Stop">Save All</Button>
  </Grid>
</Window>
```

And the corresponding class:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Controls;

namespace TestOMat
{
    /// <summary>
    /// Interaction logic for TestOMat.xaml
    /// </summary>
    public partial class TestWindow : Window
    {
        public TestWindow()
        {
            InitializeComponent();
        }

        private List<Person> persons = new List<Person>
        {
            new Person {FirstName = "John", LastName = "Smith"},
            new Person {FirstName = "Peter", LastName = "Miller"}
        };

        public List<Person> Persons
        {
            get { return persons; }
            set { persons = value; }
        }

        private void CmdSaveExecuted(object sender, System.Windows.Input.ExecutedRoutedEventArgs e)
        {
            Person p = e.Parameter as Person;
            if (p != null)
            {
                MessageBox.Show(string.Format("FirstName={0}, LastName={1}", p.FirstName, p.LastName));
                e.Handled = true;
            }
        }

        private void CmdSaveAllExecuted(object sender, System.Windows.Input.ExecutedRoutedEventArgs e)
        {
            MessageBox.Show(String.Join(Environment.NewLine,
                Persons.Select(p => string.Format("FirstName={0}, LastName={1}", p.FirstName, p.LastName)).ToArray()));
            e.Handled = true;
        }

        private void TabControl_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            Console.WriteLine(String.Format("Selection changed from {0} to {1}", e.RemovedItems, e.AddedItems));
            // Doing anything here only avoids loss on selected-tab-change
        }
    }

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}
```
Maybe it's not nice to answer one's own question, but I think this answer is more suitable to the question than the others, and therefore worth writing up. Surely this was also because I did not describe the problem clearly enough. Finally, just as a quick'n'dirty proof of concept, I worked around it like this: The LostFocus event is never fired on the TextBox when I switch the tab. Therefore, the binding doesn't update and the entered value is lost, because switching back makes the binding refresh from its source. But what IS fired is the PreviewLostFocus event, hence I hooked in this tiny function, which manually triggers the update to the binding source:

```
private void BeforeFocusLost(object sender, KeyboardFocusChangedEventArgs e)
{
    if (sender is TextBox)
    {
        var tb = (TextBox)sender;
        var bnd = BindingOperations.GetBindingExpression(tb, TextBox.TextProperty);
        if (bnd != null)
        {
            Console.WriteLine(String.Format("Preview Lost Focus: TextBox value {0} / Data value {1} NewFocus will be {2}", tb.Text, bnd.DataItem, e.NewFocus));
            bnd.UpdateSource();
        }
        Console.WriteLine(String.Format("Preview Lost Focus Update forced: TextBox value {0} / Data value {1} NewFocus will be {2}", tb.Text, bnd.DataItem, e.NewFocus));
    }
}
```

The output according to the event chain with PreviewLostFocus, LostFocus (both from TextBox) and SelectionChanged (from TabControl) will look like this:

> Preview Lost Focus: TextBox value Smith123456 / Data value John Smith123 NewFocus will be System.Windows.Controls.TabItem Header:Peter Miller Content:Peter Miller

> Preview Lost Focus Update forced: TextBox value Smith123456 / Data value John Smith123456 NewFocus will be System.Windows.Controls.TabItem Header:Peter Miller Content:Peter Miller

> Selection changed from System.Object[] to System.Object[]

> Preview Lost Focus: TextBox value Miller / Data value Peter Miller NewFocus will be System.Windows.Controls.TextBox: Peter

> Preview Lost Focus Update forced: TextBox value Miller / Data value Peter Miller
NewFocus will be System.Windows.Controls.TextBox: Peter

> Lost Focus having value Miller

We see that the LostFocus only occurs at the end, but not before changing the TabItem. Still I think this is strange, possibly a bug in WPF or in the standard control templates. Thank you all for your suggestions; sorry I couldn't really mark them as answers, as they did not solve the loss of entries on tab-change.
You could write a style targeting all textboxes, in which you would have an EventSetter on the GotFocus or GotKeyboardFocus events, and on the complementary LostFocus events. In the handler associated with the GotFocus events, you would set a "canSave" boolean variable to false, which in the LostFocus handler you'll set back to true. All you have to do then is to check before saving whether your variable allows you to. If not, you can notify the user, or simply switch the focus from the textbox to something else. That way, the binding's update trigger for the currently edited textbox will fire appropriately when its focus is lost.
How to achieve focus-reset to update BindingSource of TextBox before any action
[ "c#", "wpf", "data-binding", "textbox", "focus" ]
This will be my first ASP.NET MVC application with forms authentication, so I am trying to make sure I don't miss anything. The scenario is this: Public / Secured Areas. Within the private area it is even further limited to specific areas / user. These 'Areas' are defined by customizations to the base area that is customized per user group. So for example a user could get to url `/Area/Controller/Action`. They would need to have permission to the secured area or they would be redirected to the sign-in view. I have been reading about the `AuthorizeAttribute` but I am not sure how/where I should be doing these basic checks. My initial hunch would be to store a user object in the session after a successful sign-in with the user's IP and details about what they have access to etc. The authorization check for each secured controller call would verify that a valid user object exists in the session, the IPs still match up, and the user has access to the specific area. Are there any obvious holes in this setup?

**Edit:** Where/how do I implement these checks so that when a controller is tagged with [Authorize] it will perform those session object checks? Any pointers or suggestions would be much appreciated. Thanks.
Well it looks like I went with a custom AuthorizeAttribute. It was actually very simple. Here is the code:

```
namespace MyApp.Custom.Security
{
    public class Secure : AuthorizeAttribute
    {
        /// <summary>
        /// Checks to see if the user is authenticated and has a valid session object
        /// </summary>
        /// <param name="httpContext"></param>
        /// <returns></returns>
        protected override bool AuthorizeCore(HttpContextBase httpContext)
        {
            if (httpContext == null)
                throw new ArgumentNullException("httpContext");

            // Make sure the user is authenticated.
            if (httpContext.User.Identity.IsAuthenticated == false)
                return false;

            // This will check my session variable and a few other things.
            return Helpers.SecurityHelper.IsSignedIn();
        }
    }
}
```

Then on my controllers I just have to put a `[Secure]` attribute and it uses my function above anytime that controller is accessed. Pretty simple. I also made a `[SecureByRole]` attribute as well that does all the same stuff but checks for my custom role information as well. No need for all that built-in voodoo from the canned Membership :)
Try to look at the [RoleProvider class](http://msdn.microsoft.com/en-us/library/system.web.security.roleprovider.aspx). This is the basic framework for how ASP.NET uses role-based authorization for users. And I think you should use the [[Authorize(Roles='...')]](http://www.asp.net/learn/mvc/tutorial-17-cs.aspx) attribute to make use of that.
How to implement authorization checks in ASP.NET MVC based on Session data?
[ "c#", "asp.net", "asp.net-mvc", "security", "forms-authentication" ]
I'm interested in some of the design behind Rails ActiveRecord, Doctrine for PHP (and similar ORMs). * How does an ORM manage to accomplish features like chained accessors and how deep are they typically expected to work? * How does an ORM construct queries internally? * How does an ORM manage the queries while sustaining the arbitrary nature of all that is expected of it? Obviously this is an academic question, but all natures of answers are welcome! (My language of choice is OO PHP5.3!)
Chained method calls are orthogonal to the ORM question; they're used all over the place in OOP. A chainable method simply returns a reference to the current object, allowing the return value to be called. In PHP:

```
class A
{
    public function b()
    {
        ...
        return $this;
    }

    public function c($param)
    {
        ...
        return $this;
    }
}

$foo = new A();
$foo->b()->c('one');
// chaining is equivalent to
// $foo = $foo->b();
// $foo = $foo->c('one');
```

As for how queries are constructed, there are two methods. In ActiveRecord-like ORMs there's code that examines the database's meta-data. Most databases have some kind of SQL or SQL-like commands to view this meta-data. (MySQL's `DESCRIBE TABLE`, Oracle's `USER_TAB_COLUMNS` table, etc.) Some ORMs have you describe your database tables in a neutral language such as YAML. Others might infer a database structure from the way you've created your object models (I want to say Django does this, but it's been a while since I looked at it). Finally there's a hybrid approach, where either of the previous two techniques is used, but a separate tool is provided to automatically generate the YAML/etc. or class files. Once the names and data-types of a table are known, it's pretty easy to programmatically write a SQL query that returns all the rows, or a specific set of rows that meet a certain criteria. As for your last question,

> How does an ORM manage the queries while sustaining the arbitrary nature of all that is expected of it?

I'd argue the answer is "not very well". Once you move beyond the one-table, one-object metaphor, each ORM has a different approach and philosophy as to how SQL queries should be used to model objects. In the abstract though, it's just as simple as adding new methods that construct queries based on the assumptions of the ORM (e.g. Zend_Db_Table's "findManyToManyRowset" method)
> How does an ORM manage to accomplish features like chained accessors and how deep are they typically expected to work?

Nobody seems to have answered this. I can quickly describe how Doctrine does this in PHP. In Doctrine, none of the fields which you see on an object model are actually defined for that class. So in your example, $car->owners, there is no actual field called 'owners' defined in $car's class. Instead, the ORM uses magic methods like [__get and __set](http://www.php.net/manual/en/language.oop5.overloading.php#language.oop5.overloading.members). So when you use an expression like $car->color, internally PHP calls Doctrine_Record#__get('color'). At this point the ORM is free to satisfy this in any way necessary. There are a lot of possible designs here. It can store these values in an array called $_values, for example, and then return $this->_values['color']. Doctrine in particular tracks not only the values for each record, but also its status relative to the persistence in the database. One example of this that is not intuitive is with Doctrine's relations. When you get a reference to $car, it has a relationship to the People table that is called 'owners'. So the data for $car->owners is actually stored in a separate table from the data for $car itself. So the ORM has two choices:

1. Each time you load a $user, the ORM automatically joins all related tables and populates that information into the object. Now when you do $car->owners, that data is already there. This method is slow, however, because objects may have many relationships, and those relationships may have relationships themselves. So you'd be adding *a lot of joins* and not necessarily even using that information.

2. Each time you load a $user, the ORM notices which fields are loaded from the User table and it populates them, but any fields which are loaded from related tables are not loaded.
Instead, some metadata is attached to those fields to mark them as being 'not loaded, but available'. Now when you write the expression $car->owners, the ORM sees that the 'owners' relationship has not been loaded, and it issues a separate query to get that information, adds it into the object, and then returns that data. This all happens transparently without you needing to realize it. Of course, Doctrine uses #2, since #1 becomes unwieldy for any real production site with moderate complexity. But it also has side-effects. If you are using several relations on $car, then Doctrine will load each one separately, as you access it. So you end up running 5-6 queries when maybe only 1 was required. Doctrine allows you to optimize this situation by using Doctrine Query Language. You tell DQL that you want to load a car object, but also join it to its owners, manufacturer, titles, liens, etc. and it will load all of that data into objects. Whew! Long response. Basically, though, you've gotten at the heart of "What is the purpose of an ORM?" and "Why should we use one?" The ORM allows us to continue thinking in object mode at most times, but the abstraction is not perfect and the leaks in the abstraction tend to come out as performance penalties.
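The magic-accessor-plus-lazy-load mechanism described above is not PHP-specific. Here is a minimal, purely illustrative Python sketch of the same pattern, with `__getattr__` playing the role of PHP's `__get` and a lambda standing in for the real relation query (Doctrine's actual internals are of course more involved):

```python
class Record:
    """Toy ORM record: relations marked 'not loaded, but available'
    are fetched by a separate query on first access."""

    def __init__(self, values, lazy_loaders):
        self._values = dict(values)       # columns already loaded
        self._lazy = dict(lazy_loaders)   # relation name -> query function
        self.queries_run = 0

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails,
        # much like PHP's __get on undefined properties.
        if name in self._values:
            return self._values[name]
        if name in self._lazy:
            self.queries_run += 1                        # issue a separate query
            self._values[name] = self._lazy.pop(name)()  # cache the result
            return self._values[name]
        raise AttributeError(name)

car = Record({"color": "red"}, {"owners": lambda: ["alice", "bob"]})
print(car.color)        # already loaded: no query runs
print(car.owners)       # first access triggers the lazy load
print(car.queries_run)  # 1 -- a second access of owners would stay at 1
```

The "5-6 queries when 1 was required" problem falls straight out of this design: each cold relation access runs its own loader.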
Behind the scenes: How does an ORM "think"?
[ "php", "ruby-on-rails", "orm", "activerecord", "doctrine" ]
In C# I have the following objects:

```
public class Item { }
public class Task<T> { }
public class TaskA<T> : Task<T> { }
public class TaskB<T> : Task<T> { }
```

I want to dynamically create TaskA or TaskB using C# reflection (**Activator.CreateInstance**). However I wouldn't know the type beforehand, so I need to dynamically create TaskA based on a string like "namespace.TaskA" or "namespace.TaskB".
Check out this [article](http://msdn.microsoft.com/en-us/library/b8ytshk6.aspx) and this [simple example](http://www.omegacoder.com/?p=38). Quick translation of same to your classes ...

```
var d1 = typeof(Task<>);
Type[] typeArgs = { typeof(Item) };
var makeme = d1.MakeGenericType(typeArgs);
object o = Activator.CreateInstance(makeme);
```

Per your edit: For that case, you can do this ...

```
var d1 = Type.GetType("GenericTest.TaskA`1"); // GenericTest was my namespace, add yours
Type[] typeArgs = { typeof(Item) };
var makeme = d1.MakeGenericType(typeArgs);
object o = Activator.CreateInstance(makeme);
```

To see where I came up with backtick1 for the name of the generic class, see [this article](http://msdn.microsoft.com/en-us/library/w3f99sx1.aspx). Note: if your generic class accepts multiple types, you must include the commas when you omit the type names, for example:

```
Type type = typeof(IReadOnlyDictionary<,>);
```
Indeed you would not be able to write the last line. But you probably don't want to create the object just for the sake of creating it. You probably want to call some method on your newly created instance. You'll then need something like an interface:

```
public interface ITask
{
    void Process(object o);
}

public class Task<T> : ITask
{
    void ITask.Process(object o)
    {
        if (o is T) // Just to be sure, and maybe throw an exception otherwise
            Process((T)o);
    }

    public void Process(T o)
    {
    }
}
```

and call it with:

```
Type d1 = Type.GetType("TaskA`1"); // or "TaskB`1"; use the namespace-qualified name
Type[] typeArgs = { typeof(Item) };
Type makeme = d1.MakeGenericType(typeArgs);
ITask task = Activator.CreateInstance(makeme) as ITask;

// This can be Item, or any type derived from Item
task.Process(new Item());
```

In any case, you won't be able to statically cast to a type you don't know beforehand ("makeme" in this case). ITask allows you to get to your target type. If this is not what you want, you'll probably need to be a bit more specific in what you are trying to achieve with this.
How to dynamically create generic C# object using reflection?
[ "c#", "generics", "reflection", "activator" ]
How can I make for example only a rectangle inside the window have opacity like 50% or something like that and for that part to have the effect of WS\_EX\_TRANSPARENT so that mouse clicks will go through it?
I do not think it is possible simply by setting WS\_EX\_TRANSPARENT, but it can be accomplished using two windows, create a window with a hole, using [SetWindowRgn](http://msdn.microsoft.com/en-us/library/aa930600.aspx), and inside that hole put another transparent window using WS\_EX\_LAYERED and WS\_EX\_TRANSPARENT styles.
Take a look at the **SetLayeredWindowAttributes** Win32 function. It can be used to set the *opacity* and *transparency* of a window.
How can I make only a part of the window transparent? (WIN32)
[ "c++", "windows", "winapi" ]
I'm confused about when you should use event delegation as opposed to out-of-the-box JQuery event handling. I'm always tempted to use an event handler because it's so easy in JQuery: For example: ``` $("button#submit").click(function () { $(this).css("disabled", "true"); }); ``` Event delegation is not really that much more complex to write: ``` $("button#submit").live("click", function() { $(this).css("disabled", "true"); }); ``` But it just doesn't seem as intuitive. **Is there a simple rule of thumb about when to use event delegation?** I guess I don't really understand the point of it.
You should use event delegation in the following situations:

* When you want to handle the same event across many elements, e.g. if you have a data table with many rows, using delegation will use much less memory and be faster than registering an event handler per row.
* When elements are dynamically added to the page after page load and you want to handle events for these new elements, e.g. adding rows to a data table.

[Using Event Utility and Event Delegation to Improve Performance](http://developer.yahoo.com/yui/examples/event/event-delegation.html) discusses this some more (even though the article uses YUI, the ideas are still applicable).
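The mechanism behind delegation is easy to sketch: a single listener on a container inspects each event's target and walks up the tree looking for a match, which is why elements added later are handled "for free". A DOM-free approximation (the selector matching is deliberately simplified to a predicate, so this is an illustration, not jQuery's actual implementation):

```javascript
// DOM-free sketch of event delegation: one handler on a container
// walks up from the event target looking for a matching node.
function delegate(container, matches, handler) {
  return function (event) {
    let node = event.target;
    while (node && node !== container) {
      if (matches(node)) {
        handler.call(node, event); // same `this` convention as jQuery
        return;
      }
      node = node.parentNode;      // emulate event bubbling
    }
  };
}

// Fake a tiny tree: container > row > cell
const container = { parentNode: null };
const row = { parentNode: container, tagName: "TR" };
const cell = { parentNode: row, tagName: "TD" };

let clickedRow = null;
const onContainerClick = delegate(
  container,
  (node) => node.tagName === "TR",   // stands in for a selector
  function () { clickedRow = this; }
);

// One listener handles clicks originating anywhere inside any row,
// including rows created after the listener was attached:
onContainerClick({ target: cell });
console.log(clickedRow === row); // true
```

Registering one such listener instead of one per row is exactly where the memory and speed savings in the first bullet come from.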
Use .live() when you will be dynamically adding elements to the page that need to have that event assigned to them. (note that .live() works only with selectors) Use .click() (or whatever other event) for all other cases
Is there a rule of thumb for when to use event delegation vs event handling in JQuery?
[ "javascript", "jquery", "event-delegation" ]
I'm trying to find a better way to implement these methods, as over very large sets they take a very long time, any ideas? ``` import java.util.HashMap; import java.util.HashSet; public class Multiset<E> extends HashSet<E> { private static final long serialVersionUID = -9013417064272046980L; private HashMap<E, Integer> multiplicities = new HashMap<E, Integer>(); @Override public boolean add(E element){ if(multiplicities.containsKey(element)){ int x = (int) multiplicities.get(element); multiplicities.put(element, ++x); }else{ multiplicities.put(element, 1); } return super.add(element); } /** * Adds all of the elements of another multiset to this one. * This method allows the preservation of multiplicities * which would not occur using the superclass's addAll(). * @param elements * @return true if all elements were successfully added */ public boolean addAll(Multiset<E> elements) { boolean flag = false; for(E element : elements){ for(int i = 0; i < elements.multiplicity(element); i++) flag = add(element); } return flag; } /** * The set-view of a multiset is the ordinary set of all * elements with multiplicity >= 1. * @return all elements that have multiplicity >= 1 */ public Multiset<E> setView(){ Multiset<E> set = new Multiset<E>(); for(E o : multiplicities.keySet()){ set.add(o); } return set; } /** * provides a union of two multisets whereby the multiplicity of each * element is the larger of the two * @param second * @return */ public Multiset<E> union(Multiset<E> second){ Multiset<E> union = new Multiset<E>(); Multiset<E> join = new Multiset<E>(); join.addAll(this); join.addAll(second); for(E o : join){ int i = this.multiplicity(o); int j = second.multiplicity(o); i = i > j ? 
i : j; for(int c = 0; c < i; c++){ union.add(o); } } return union; } /** * provides an intersection of two multisets whereby * the multiplicity of each element is the smaller of the two * @param second * @return The multiset containing the intersection of two multisets */ public Multiset<E> intersect(Multiset<E> second){ Multiset<E> intersection = new Multiset<E>(); for(E o : this.setView()){ if (second.setView().contains(o)) { int i = this.multiplicity(o); int j = second.multiplicity(o); i = i < j ? i : j; for(int c = 0; c < i; c++){ intersection.add(o); } } } return intersection; } /** * The Multiplicity is the number of occurrences of an object * in the multiset * @param o * @return number of occurrences of o */ public int multiplicity(E o){ return (multiplicities.containsKey(o)) ? multiplicities.get(o) : 0; } public int cardinality(){ int card = 0; for(Integer i : multiplicities.values()){ card += i; } return card; } /** * Measures the similarity between two multisets * @param A * @param B * @return the cardinality of the difference of A and B */ public int similarityOfMultisets(Multiset<E> second){ Multiset<E> union, intersection; int difference; union = union(second); intersection = intersect(second); difference = union.cardinality() - intersection.cardinality(); return difference; } } ``` EDIT: I believe I have found a faster way to calculate the similarityOfMultisets method: ``` public int similarityOfMultisets(Multiset<E> second){ int c = 0; for(E elem: this.setView()){ c += Math.min(this.multiplicity(elem), second.multiplicity(elem)); } Multiset<E> union = this.union(second); return union.cardinality() - c; } ```
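(A further simplification of the similarity: since max(a, b) − min(a, b) = |a − b| for each element, the union and intersection can be skipped entirely when only the similarity is wanted — it is just the sum of |a − b| over the merged key set. A standalone sketch against plain HashMaps, illustrative rather than a drop-in for the class above:)

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MultisetSimilarity {
    /** union.cardinality() - intersection.cardinality() in one pass:
     *  sum of max(a,b) minus sum of min(a,b) equals sum of |a - b|. */
    static <E> int similarity(Map<E, Integer> a, Map<E, Integer> b) {
        Set<E> keys = new HashSet<E>(a.keySet());
        keys.addAll(b.keySet());
        int diff = 0;
        for (E k : keys) {
            int x = a.containsKey(k) ? a.get(k) : 0;
            int y = b.containsKey(k) ? b.get(k) : 0;
            diff += Math.abs(x - y);
        }
        return diff;
    }

    public static void main(String[] args) {
        Map<String, Integer> a = new HashMap<String, Integer>();
        a.put("x", 3);
        a.put("y", 1);
        Map<String, Integer> b = new HashMap<String, Integer>();
        b.put("x", 1);
        b.put("z", 2);
        // union = {x:3, y:1, z:2} -> 6, intersection = {x:1} -> 1, so 6 - 1 = 5
        System.out.println(similarity(a, b)); // 5
    }
}
```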
Here's a refactoring of the class. Not necessarily faster - except for not re-running setView() inside the loops - but maybe cleaner in some ways. FWIW. ``` import java.util.HashMap; import java.util.HashSet; public class Multiset<E> extends HashSet<E> { private static final long serialVersionUID = -9013417064272046980L; private final HashMap<E, Integer> multiplicities = new HashMap<E, Integer>(); public boolean add(E element) { return add(element, 1); } private boolean add(E element, int copies) { if (!contains(element)) multiplicities.put(element, 0); int n = multiplicities.get(element); multiplicities.put(element, n + copies); return super.add(element); } /** * Adds all of the elements of another multiset to this one. This method allows the preservation of multiplicities which would not occur * using the superclass's addAll(). * * @param that * @return true if all elements were successfully added */ public boolean addAll(Multiset<E> that) { boolean flag = false; for (E element : that) flag = add(element, that.multiplicity(element)); return flag; } /** * The set-view of a multiset is the ordinary set of all elements with multiplicity >= 1. 
* * @return all elements that have multiplicity >= 1 */ public Multiset<E> setView() { Multiset<E> set = new Multiset<E>(); for (E o : multiplicities.keySet()) set.add(o); return set; } /** * provides a union of two multisets whereby the multiplicity of each element is the larger of the two * * @param that * @return */ public Multiset<E> union(Multiset<E> that) { HashSet<E> both = new HashSet<E>(); both.addAll(this); both.addAll(that); Multiset<E> union = new Multiset<E>(); for (E element : both) union.add(element, Math.max(this.multiplicity(element), that.multiplicity(element))); return union; } /** * provides an intersection of two multisets whereby the multiplicity of each element is the smaller of the two * * @param that * @return The multiset containing the intersection of two multisets */ public Multiset<E> intersect(Multiset<E> that) { Multiset<E> intersection = new Multiset<E>(); final Multiset<E> other = that.setView(); for (E element : this.setView()) if (other.contains(element)) intersection.add(element, Math.min(this.multiplicity(element), that.multiplicity(element))); return intersection; } /** * The Multiplicity is the number of occurrences of an object in the multiset * * @param element * @return number of occurrences of o */ public int multiplicity(E element) { return contains(element) ? multiplicities.get(element) : 0; } public int cardinality() { int card = 0; for (Integer n : multiplicities.values()) card += n; return card; } /** * Measures the similarity between two multisets * * @param that * @return the cardinality of the difference of A and B */ public int similarityOfMultisets(Multiset<E> that) { return union(that).cardinality() - intersect(that).cardinality(); } } ```
Performance test result for the first variants of our algorithms:

```
Robert-Union: 2263374 us
Robert-Intersection: 603134 us
Robert-Similarity: 2926389 us
Carl-Union: 3372 us
Carl-Intersection: 5097 us
Carl-Similarity: 6913 us
David-Union: 5182 us
David-Intersection: 2527 us
David-Similarity: 5270 us
```

Carl's union beats my union. Test code [here](http://karnokd.uw.hu/MultisetTest.java). I did not verify the correctness of the computation output though. Test code 2 for various set sizes and variances [here](http://karnokd.uw.hu/MultisetTest2.java) (JDK 7b59). Results [xslx](http://karnokd.uw.hu/text.xslx) / [ods](http://karnokd.uw.hu/text.ods).
slow implementation of Union and Intersection
[ "java", "optimization", "set", "abstract-data-type" ]
The below regex requires that the password has 1 number, 1 char (upper or lower) and is a minimum of 8 in length. But if I type in a special char it returns false. I don't want to require a special char, but I want to allow it in this context. How can I alter this regex to allow a special char?

```
Regex.IsMatch(Password, @"^(?=.*[0-9])(?=.*[a-zA-Z])\w{8,}$")
```
Changing `\w` to `.` should do it:

```
Regex.IsMatch(Password, "^(?=.*[0-9])(?=.*[a-zA-Z]).{8,}$")
```

`\w` matches "word" characters, which won't normally match special characters (depending on your definition of "special" and the language you're using). `.` will match any character except newline.
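The lookahead trick isn't .NET-specific either; for instance, the identical pattern can be sanity-checked in JavaScript (purely illustrative — the .NET call above is what you'd actually ship):

```javascript
// Two lookaheads require a digit and a letter; ".{8,}" enforces length
// while allowing any character, including specials.
const pattern = /^(?=.*[0-9])(?=.*[a-zA-Z]).{8,}$/;

console.log(pattern.test("abcd1234")); // true  - letters + digit, length 8
console.log(pattern.test("abcd123!")); // true  - special char now allowed
console.log(pattern.test("abcd123"));  // false - too short
console.log(pattern.test("12345678")); // false - no letter
```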
``` Regex.IsMatch(Password, "^(?=.*[0-9])(?=.*[a-zA-Z]).{8,}$") ```
Regex that allows special chars?
[ "c#", "regex" ]
I want to enhance an application with scripting support like many other applications have, e.g. [MS Office using VBA](http://en.wikipedia.org/wiki/Visual_Basic_for_Applications) or [UltraEdit using JavaScript](http://www.ultraedit.com/products/ultraedit/ultraedit_features.html#macro). Which libraries do exist for C#/.NET (and which language(s) do they support)?
[IronPython](http://www.codeplex.com/IronPython/) is a dynamic `.NET` scripting language. > IronPython is an implementation of the Python programming language running under .NET and Silverlight. It supports an interactive console with fully dynamic compilation. It's well integrated with the rest of the .NET Framework and makes all .NET libraries easily available to Python programmers, while maintaining compatibility with the Python language. See [IronPython embedding](http://www.voidspace.org.uk/ironpython/embedding.shtml) for examples showing ways to call IronPython from `.NET` apps. The [*IronPython Calculator and the Evaluator*](http://www.voidspace.org.uk/ironpython/dlr_hosting.shtml) goes into the details of using `IronPython` from a `C#` application.
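From the script author's point of view, code hosted through IronPython is ordinary Python. Here is a sketch of the kind of script a host application might hand to the engine; the `app` object is a hypothetical host-injected variable, faked here so the snippet stays runnable on its own:

```python
# In a real host, the C# side would inject "app" into the script's scope
# via the hosting API; a stand-in object keeps this self-contained.
class FakeApp:
    def __init__(self):
        self.status_messages = []

    def set_status(self, text):
        self.status_messages.append(text)

app = FakeApp()  # hypothetical: the host would supply this

# The "user script": plain Python driving whatever surface the host exposes.
def on_file_saved(filename):
    app.set_status("Saved " + filename)

on_file_saved("notes.txt")
print(app.status_messages)  # ['Saved notes.txt']
```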
Please check the [CS-Script](http://www.csscript.net/) library. [Here](http://www.pcpix.com/Photoshop/) is an article about scripting Photoshop CS with C#. [This](http://einfall.blogspot.com/2006/05/scripting-with-lua-in-c.html) one discusses using Lua as a scripting lib with C#.
C#/.NET scripting library
[ "c#", ".net", "scripting" ]
I have a bunch of URLs in static HTML files which need to be changed. They look like this now:

```
<img src="/foldera/folderb/folderc/images/imgxyz.jpg" />
```

They need to look like this:

```
<img src="imgxyz.jpg" />
```

So, I just wrote a PHP script that opens each and does a preg_replace(). My regex (with the double escaped backslashes, yes):

```
$regex = '/<img src="\\/foldera\\/folderb\\/folderc\\/images\\/([^"]*)" \\/>/';
$replacement = '<img src="$0" />';
```

So I am only capturing anything after /images until the closing quote. But what I get is something like:

```
<img src="<img src="/foldera/folderb/folderc/images/imgxyz.jpg" />" />
```

It seems the capture group is overzealous and ... or something is not matching with the /foldera/folderb part. What is going on here?
Use `$1` for the replacement. `$0` matches the whole pattern. You want the first group. ``` $replacement = '<img src="$1" />' ; ``` --- An even better way would be to use `basename` as part of your replacement: ``` $regex = '/(<img src=")([^"]*)"( \\/>)/e'; $replacement = "stripslashes('\$1').basename(stripslashes('\$2')).stripslashes('\$3')"; ```
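For illustration only (the question is about PHP, but the capture-group numbering works the same way everywhere), the `$0`-vs-`$1` behavior can be reproduced with Python's `re` module — `\g<0>` is the entire match, `\1` is the first parenthesized group:

```python
import re

html = '<img src="/foldera/folderb/folderc/images/imgxyz.jpg" />'
pattern = r'<img src="/foldera/folderb/folderc/images/([^"]*)" />'

# Group 0 is the ENTIRE match, so substituting it nests the whole original tag
# inside the replacement -- exactly the doubled-up output the question reports:
nested = re.sub(pattern, r'<img src="\g<0>" />', html)
assert nested == '<img src="<img src="/foldera/folderb/folderc/images/imgxyz.jpg" />" />'

# Group 1 is only what ([^"]*) captured -- the bare filename:
fixed = re.sub(pattern, r'<img src="\1" />', html)
assert fixed == '<img src="imgxyz.jpg" />'
```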
Just as a sidenote, now that the question has been answered: if you have slashes '`/`' in the regex, using slashes as the delimiter forces you to escape the ones inside the regex, like the example you proposed: ``` $regex = '/<img src="\\/foldera\\/folderb\\/folderc\\/images\\/([^"]*)" \\/>/' ``` It really makes things harder to understand/modify/maintain :-( You can use another character as delimiter, as long as it's the same at the beginning and the end of the regex. For example, in that kind of situation, people often use '`#`', pretty much like this: ``` $regex = '#<img src="/foldera/folderb/folderc/images/([^"]*)" />#' ``` Easier to read, no? *(Of course, if you have '`#`' inside the regex, you'll have to escape them, as it's the delimiter)*
Help with regex replace in php
[ "php", "regex" ]
I currently have to generate an image that displays the text of a string. I need to make this image in a Servlet and then somehow pass the image to a JSP page so that it can display it. I'm trying to avoid saving the image, and instead somehow stream the image to the JSP. I haven't found a way of generating the image, since I started with finding how to pass an image from the Servlet to the JSP and got stuck. EDIT: The JSP page is already made and isn't created by the servlet; I have to pass the image into an already existing JSP. Any help is appreciated.
You need to write the image as a byte array to the response's output stream. Something like this: ``` byte[] imageBytes = getImageAsBytes(); response.setContentType("image/jpeg"); response.setContentLength(imageBytes.length); response.getOutputStream().write(imageBytes); ``` Then in you JSP you just use a standard `img` element: ``` <img src="url to your servlet"> ```
You **can't**1 return both in the same response, since you're returning different types (an HTML page of type `text/html` and an image of type `image/jpeg`, say). For this sort of thing, I will generate the image during the initial servlet request (for the containing HTML page). I store it in a cache in my servlet, and write the HTML page with the image tag containing a URL to that image with the handle. e.g. the browser asks for `http://whatever/page` The servlet generates the image, and writes an HTML tag in the page like ``` <img src="http://whatever/image/unique_handle_to_image"> ``` The browser will render the HTML page, and as part of that issue a new request to my servlet with the handle for the image. e.g. the browser now asks for `http://whatever/image/unique_handle_to_image` I then return the image as content type `image/jpeg` or similar. So you have two requests going on. One for the page, in which you render the image and store it temporarily, and the second in which you return the image. You have to remember to clear the image cache, but that's all straightforward. I wouldn't worry about storing lots of images, since the two requests from the browser usually (!) come in quick succession. 1. I guess it's possible to use a [data uri](http://en.wikipedia.org/wiki/Data_URI_scheme) provided your browser supports it, and create something like ``` <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAALEwAACxMBAJqcGAAAAAd0SU1FB9YGARc5KB0XV+IAAAAddEVYdENvbW1lbnQAQ3JlYXRlZCB3aXRoIFRoZSBHSU1Q72QlbgAAAF1JREFUGNO9zL0NglAAxPEfdLTs4BZM4DIO4C7OwQg2JoQ9LE1exdlYvBBeZ7jqch9//q1uH4TLzw4d6+ErXMMcXuHWxId3KOETnnXXV6MJpcq2MLaI97CER3N0vr4MkhoXe0rZigAAAABJRU5ErkJggg==" alt="Red dot" /> ``` Note there are a number of caveats surrounding these. See the linked page.
Help getting image from Servlet to JSP page
[ "java", "image", "jsp", "servlets" ]
But it doesn't work. You can see I'm trying to change the class of the div containing season.png onmousedown and revert it onmouseup. What am I missing? Thanks. Mike
The Javascript file I was using was a very inflexible library. I solved the problem a different way.
It's working just fine. There is nothing wrong with the code that you posted, so if you can't see it there has to be something wrong with your css. I used this to test the code: ``` <html> <head> <title>Test</title> <style> .winter { border: 1px solid blue; } .spring { background: yellow; } .summer { background: green; } </style> </head> <body> <div class="winter spring" onmousedown="this.className='winter summer'" onmouseup="this.className='winter spring'"> <img src="Resources/season.png" /> </div> </body> </html> ```
How Can I Change The Style Of A Div On Mousedown (or any other event, for that matter)?
[ "javascript", "html", "css" ]
How can I write a sample console application with Zend? ``` /Zend/Console/Getopt.php ``` I just want to pass a `-v` flag and get the version information back. Input: ``` projectfolder/console/version.php -v ``` Output: ``` Version 1.xxxxx ``` How can I code this in plain PHP using the Zend library's Getopt methods?
This is a small example of how I am handling the CLI Interface for an Application. It is including my Bootstrap and the Zend Autoloader. A better solution is to change the Bootstrap for CLI Operations (no need for Dispatching and such stuff) but I am a lazy guy :-) ``` <?php define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application/')); define('APPLICATION_ENVIRONMENT', 'development'); /** * Setup for includes */ set_include_path( APPLICATION_PATH . '/../library' . PATH_SEPARATOR . APPLICATION_PATH . '/../application/models' . PATH_SEPARATOR . APPLICATION_PATH . '/../application/extends'. PATH_SEPARATOR . get_include_path()); /** * Zend Autoloader */ require_once 'Zend/Loader/Autoloader.php'; $autoloader = Zend_Loader_Autoloader::getInstance(); /** * Register my Namespaces for the Autoloader */ $autoloader->registerNamespace('My_'); $autoloader->registerNamespace('Db_'); /** * Include my complete Bootstrap * @todo change when time is left */ require '../application/bootstrap.php'; /** * Setup the CLI Commands * ../application/cli.php --add * ../application/cli.php --scan * .. */ try { $opts = new Zend_Console_Getopt( array( 'help' => 'Displays usage information.', 'add' => 'Add the Feeds to the Pipe', 'scan' => 'Scan the Feeds in the Pipe', 'que' => 'Process the Pipe', ) ); $opts->parse(); } catch (Zend_Console_Getopt_Exception $e) { exit($e->getMessage() ."\n\n". $e->getUsageMessage()); } if(isset($opts->help)) { echo $opts->getUsageMessage(); exit; } /** * Action : add */ if(isset($opts->add)) { // do something } /** * Action : scan */ if(isset($opts->scan)) { // do something } /** * Action : que */ if(isset($opts->que)) { // do something } ```
I suggest using Symfony Console Component instead <http://dev.umpirsky.com/building-cli-apps-with-symfony-console-component/>
How can I write a sample console application with Zend?
[ "php", "zend-framework" ]
So I have started learning Qt 4.5 and found the Signal/Slot mechanism to be of help. However, now I find myself considering two types of architecture. This is the one I would use ``` class IDataBlock { public: virtual void updateBlock(std::string& someData) = 0; }; class Updater { private: void updateData(IDataBlock &someblock) { .... someblock.updateBlock(data); .... } }; ``` Note: code inlined for brevity. Now with signals I could just ``` void Updater::updateData() { ... emit updatedData(data); } ``` This is cleaner and reduces the need for an interface, but should I do it just because I could? The first block of code requires more typing and more classes, but it shows a relationship. With the second block of code, everything is more "formless". Which one is more desirable, and if it is a case-by-case basis, what are the guidelines?
Emitting a signal costs a few switches and some additional function calls (depending on what is connected and how), but the overhead should be minimal. The provider of a signal has no control over who its clients are, or even whether they have all actually received the signal by the time emit returns. This is very convenient and allows complete decoupling, but it can also lead to problems when order of execution matters or when you want to return something. Never pass in pointers to temporary data (unless you know exactly what you are doing, and even then...). If you must, pass the address of a member variable -- Qt provides a way to delay destruction of an object until after all events for it are processed. Signals might also require an event loop to be running (unless the connection is direct, I think). Overall they make a lot of sense in event-driven applications (it actually quickly becomes very annoying without them). If you are already using Qt in a project, definitely use them. If a dependency on Qt is unacceptable, Boost has a similar mechanism.
There is another difference. #1 is hard coupled to the IDataBlock interface, and the Updater class needs to know about "someblock". #2 can be late-coupled via a connect call (or several, including disconnects), which leads to a more dynamic approach. #2 acts like a message (think Smalltalk/ObjC) and not a call (think C/C++). Messages can also be subject to multiple dispatch, which requires hand implementing that feature in #1. My preference would be to utilize signals/slots due to their flexibility, unless code performance or the need for immediate return data does not allow for it (or the dependence on Qt is not desirable).
Signal/Slot vs. direct function calls
[ "c++", "qt" ]
I am trying to run an application outside Joomla (not as a plugin) and I would like to access the logged in user's information (userid). I am wondering how should I go about doing that? Is there a file which I can include? I tried using $\_SESSION but it shows empty. Is there a simple solution to my problem? Thank you for your time.
Actually that's not as easy as it sounds. Joomla uses its own session handling with some unique session-id generation and some encryption in place, so the only way to get into the Joomla session data is to use the appropriate Joomla functions (as others have suggested). I recently had a project where we needed to transfer a Joomla authenticated user into a separate application. We did this by adding a Joomla adapter which instantiates the Joomla user classes, reads the user data, puts everything into an encrypted cookie and redirects back to our application. In there we read the encrypted cookie, instantiate our own user object and discard the cookie. As this is not 100% secure we're changing the system to write the user data to a database table and read it from our application - we avoid the insecure route through a cookie that way, because even though the cookie is encrypted (and contains sensitive user information which suffices to authenticate a user) it'll be transferred on the wire and could be sniffed. ``` define( '_JEXEC', 1 ); define('JPATH_BASE', dirname(dirname(__FILE__))); define( 'DS', DIRECTORY_SEPARATOR ); require_once (JPATH_BASE . DS . 'includes' . DS . 'defines.php'); require_once (JPATH_BASE . DS . 'includes' . DS . 'framework.php'); $mainframe = JFactory::getApplication('site'); ``` The above is the basic script required to access Joomla resources.
``` define( '_JEXEC', 1 ); define('JPATH_BASE', 'your joomla basedir goes here' ); define( 'DS', DIRECTORY_SEPARATOR ); require_once ( JPATH_BASE .DS.'includes'.DS.'defines.php' ); require_once ( JPATH_BASE .DS.'includes'.DS.'framework.php' ); JDEBUG ? $_PROFILER->mark( 'afterLoad' ) : null; $mainframe =& JFactory::getApplication('site'); $mainframe->initialise(); JPluginHelper::importPlugin('system'); JDEBUG ? $_PROFILER->mark('afterInitialise') : null; $mainframe->triggerEvent('onAfterInitialise'); $user =& JFactory::getUser(); if ($user->guest) { echo 'stuff'; //redirect('/'); } else { echo 'user'; } ```
Accessing session data outside Joomla
[ "php", "joomla" ]
I'm looking for well organized information sources about how the upcoming jsr166y (fork-join, fences) and extras166y (ParallelArray, etc.) can be used - something from tutorial to expert level.
The IBM Developerworks website has a [good series](http://www.ibm.com/developerworks/java/library/j-jtp11137.html) on this, but the most informative source I've found is [this presentation](http://www.infoq.com/presentations/brian-goetz-concurrent-parallel) by Brian Goetz, which is well worth an hour of your time. He spends the first 25 mins talking about the background, but then presents some great examples of the new framework.
[Doug Lea homepage](http://gee.cs.oswego.edu/dl/concurrency-interest/index.html), check also the [wiki](http://artisans-serverintellect-com.si-eioswww6.com/default.asp?W1)
Resources on the upcoming fork-join framework
[ "java", "concurrency", "fork-join" ]
I have one long Excel sheet with the book details of a library. Can I import this data into a .mdf file?
You will need SQL Server Management Studio, and use its Import feature <http://msdn.microsoft.com/en-us/library/ms141209.aspx>
Yes. The exact instructions depend on which SQL Server version you have installed, but you use DTS (Data Transformation Services). Right-click on your database and you should see **Import data**. The wizard will do most of the work; basically it's a next-next-next operation.
How to import excel files into SqlDatabase .mdf file
[ "sql", "database", "excel" ]
When I implement new features using TDD, I often use the shortcut `Shift + Alt + D T` to run only the jUnit test case I am currently working on (and not the whole test suite, which takes a few minutes). This creates a new Debug Configuration for the current Java source and runs it immediately. Now I would like to run the test with assertions enabled (VM option `-ea`). Unfortunately, the VM arguments for the new debug configuration starts empty. I always have to go into `Debug Configurations...` → `Arguments` and add `-ea` to the VM arguments input box. Is there any way to have Eclipse these arguments populated with (project or workspace wide) default options?
`Windows` → `Preferences` → `JUnit` has an option to add `-ea` every time a new launch configuration is created. It adds the `-ea` option to the Debug Configuration as well. The full text next to a check box is: > Add `-ea` to VM arguments when creating a new JUnit launch configuration
I think you can set this option by editing the definition of the JRE under `Preferences` → `Java` → `Installed JREs` → Select the JRE in question and edit the default VM arguments
Default VM arguments in Eclipse debug configuration?
[ "java", "eclipse" ]
I have the following problem and I'm wondering if there's a nice way to model these objects without using multiple inheritance. If it makes any difference, I am using Python. Students need contact information plus student information. Adults need contact information plus billing information. Students can be adult students, in which case I need contact/student/billing info, or they can be children, in which case I need contact/student/parent info. Just to be clear on how the system will be used, I need to be able to ask for a list of all adults (and I will get adult students plus parents), or a list of all students (and I will get child students plus adult students). Also, all of these objects need to have a common base class.
What you have is an example of Role -- it's a common trap to model Role by inheritance, but Roles can change, and changing an object's inheritance structure (even in languages where it's possible, like Python) is not recommended. Children grow and become adults, and some adults will also be parents of child students as well as being adult students themselves -- they might then drop either role but need to keep the other (their child changes schools but they don't, or vice versa). Just have a class Person with mandatory fields and optional ones, and the latter, representing Roles, can change. "Asking for a list" (quite independently of inheritance or otherwise) can be done either by building the list on the fly (walking through all objects to check for each whether it meets the requirements) or by maintaining lists corresponding to the possible requirements (or a mix of the two strategies for both frequent and ad-hoc queries). A database of some sort is likely to help here (and most DBs work much better without inheritance in the way;-).
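A minimal Python sketch of this Role approach (all class and attribute names here are illustrative, not from the original question): contact fields are mandatory for every `Person`, while roles are optional attributes that can be attached or dropped as the person changes, and the "give me all adults / all students" queries become simple filters.

```python
# Sketch of "roles, not inheritance": every name below is hypothetical.

class Person:
    def __init__(self, name, email):
        # Contact info is mandatory for everyone.
        self.name = name
        self.email = email
        # Roles are optional and can be attached or dropped at runtime.
        self.student_info = None   # present only for students
        self.billing_info = None   # present only for adults
        self.parent_of = []        # children this person is responsible for

    @property
    def is_student(self):
        return self.student_info is not None

    @property
    def is_adult(self):
        return self.billing_info is not None


def all_students(people):
    # Child students plus adult students, as the question requires.
    return [p for p in people if p.is_student]

def all_adults(people):
    # Adult students plus parents.
    return [p for p in people if p.is_adult]


alice = Person("Alice", "alice@example.com")
alice.student_info = {"grade": "A"}         # child student: no billing role
bob = Person("Bob", "bob@example.com")
bob.student_info = {"grade": "B"}
bob.billing_info = {"card": "..."}          # adult student: both roles
carol = Person("Carol", "carol@example.com")
carol.billing_info = {"card": "..."}
carol.parent_of = [alice]                   # parent: billing + parent roles

assert all_students([alice, bob, carol]) == [alice, bob]
assert all_adults([alice, bob, carol]) == [bob, carol]
```

Note that Bob keeps a single identity while holding two roles at once, and either role can be removed later without touching his class.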
As I'm sure someone else will comment soon (if they haven't already), one good OO principle is "[Favor composition over inheritance](http://www.artima.com/lejava/articles/designprinciples4.html)". From your description, it sounds suspiciously like you're breaking the [Single Responsibility Principle](http://en.wikipedia.org/wiki/Single_responsibility_principle), and should be breaking down the functionality into separate objects. It also occurs to me that Python supports [duck typing](http://en.wikipedia.org/wiki/Duck_typing), which begs the question "Why is it so important that all the classes have a common base class?"
Eliminating multiple inheritance
[ "python", "oop", "multiple-inheritance" ]
The subject of this question asks it all: how effective is the executeBatch method? Is there a performance benchmark that says 'if you have 1000 records to be inserted, using executeBatch instead of executeUpdate saves you x amount of database cycles'? Or is this just a convention? EDIT: Here is what I am working with: a DB2 v8.1 hosted on z/OS, and a web app that would be inserting 80,000 records in one go in its worst-case execution scenario.
Not sure what database you are using. When I ran a test on this using DB2, this is what I saw: to write to the database, 1 insert took 2500 microseconds; 10 inserts took 6000 microseconds (600 microseconds per write); 10000 inserts took about 1 million microseconds (100 microseconds per write). Performance maxed out there. All this means is that there is a huge overhead in sending messages, and using a batch method minimizes this. Of course, sending inserts/updates in huge batches runs the risk of losing them if the application crashes. Also of note: exact numbers will vary depending on your DB and settings, so you will have to find your own "sweet spot." But this gives you an idea.
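The arithmetic behind numbers like these can be sketched language-agnostically (Python here, with made-up constants — this is an illustration of the amortization, not a JDBC benchmark): a roughly fixed round-trip overhead is paid once per statement sent, so batching spreads it across many rows and the average cost per row converges to the marginal per-row cost.

```python
# Illustrative arithmetic only: hypothetical constants, not measured JDBC figures.
OVERHEAD_US = 2400   # fixed cost per round trip to the server (assumed)
PER_ROW_US = 100     # marginal cost of one insert once the message is sent (assumed)

def cost_per_insert(batch_size):
    """Average microseconds per row when sending `batch_size` rows per round trip."""
    return (OVERHEAD_US + PER_ROW_US * batch_size) / batch_size

assert cost_per_insert(1) == 2500                      # overhead dominates a single insert
assert cost_per_insert(10) == 340                      # already mostly amortized
assert abs(cost_per_insert(10000) - PER_ROW_US) < 1    # converges to the per-row cost
```

With a model like this it is easy to see why measured performance "maxes out": past a certain batch size the fixed overhead is negligible and only the per-row work remains.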
I'm not sure what you're asking, but for inserting many thousands of rows, a batched statement is *hugely* faster. I can't give you numbers, though.
How effective is executeBatch on a Prepared Statement?
[ "java", "database", "performance", "jdbc", "jakarta-ee" ]
I am having some problems updating my qotwVote1a table's Vote1a field through PHP. Could you please have a look at the code and tell me what I am doing wrong here. ``` $result = mysql_query("SELECT * FROM qotwVote1a WHERE QuestionId='".$questionId."' AND MemberId='".$id."'"); while($row = mysql_fetch_array($result)) { $originalVote=$row['Vote1a']; $newVote=$originalVote + $vote; //echo ($newVote); } $sql = <<<END UPDATE qotwVote1a SET Vote1a = '$newVote', WHERE QuestionId = '$questionId' AND MemberId = '$id' END; mysql_query($sql); if (mysql_error()) { die("Error executing query '$sql': " . mysql_error()); } ``` Using this code I got an error: > "Error executing query 'UPDATE qotwVote1a SET Vote1a = '2', WHERE QuestionId = '57' AND MemberId = 'zee'': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE QuestionId = '57' AND MemberId = 'zee'' at line 3" Regards Zeeshan
You have a comma after `$newVote`. Remove it, and you'll be peachy-keen. Also, you don't need to wrap numbers in quotation marks, and don't do it if your column is an integer or float type. Doing so just causes those to get converted to numbers, anyway, so it's not a big deal. ``` UPDATE qotwVote1a SET Vote1a = '$newVote' WHERE QuestionId = '$questionId' AND MemberId = '$id' ```
There’s a misplaced comma in your MySQL query after the `SET` clause. So try this: ``` $sql = <<<END UPDATE qotwVote1a SET Vote1a = '$newVote' WHERE QuestionId = '$questionId' AND MemberId = '$id' END; ```
Problem in using UPDATE command in PHP
[ "php", "mysql" ]
I'm new to jQuery and I'd like to know if you've got any idea regarding how to accomplish the following with jQuery instead of JScript: You have a group of a's within a top div: ``` <h3> <a id="acer" href="#acerca">acerca</a> | <a id="cur" href="#cursos">cursos y clases</a> | <a id="cal" href="#calendario">calendario</a> | <a id="con" href="#contacto">contacto</a> </h3> ``` And below them, inside the same container div, there are four content divs, one after the other, each belonging to one of the a's above: ``` <div id="acerca"></div> <div id="cursos"></div> <div id="calendario"></div> <div id="contacto"></div> ``` Now, the idea here is that we start off by closing all of these content divs but one, the first: acerca, which is visible to the user: ``` $(document).ready(function(){ $("#cursos,#calendario,#contacto").hide(); }); ``` Now, using the h3 a's at the top, I want the following behaviour to take place: 1.- If I click a different item than the one open by default (acerca), then close the currently open one and show me the new one. 2.- If I click the same item which is already open (acerca), then nothing happens: there must always be one content div open at all times. 3.- If possible, using #anchors to mask the ugly "javascript:;" of the old days. This is very simple with the use of JavaScript's onclick function (save for #3), but I'm somehow getting stuck with jQuery. Help is greatly appreciated. Sotkra
I'd assign a common class to all the DIVs so I could reference them collectively. When one of the anchors is selected, I'd remove any "selected" classes from all of the DIVs, then apply a "selected" class to the DIV that should be shown. I would then hide all DIVs that don't have the selected class and show the DIV that does have the selected class. This way, if you reclick the same link you won't get any flashing as you would if you simply hid all the DIVs and then displayed the selected one. ``` $('a').click( function() { var all = $('div.interface'); // give every content div the "interface" class var selected = $(this).attr('href'); all.removeClass('selected'); $(selected).addClass('selected'); all.filter(':not(.selected)').hide(); all.filter('.selected').show(); return false; // stop the link from being taken }); ```
The easiest solution is to manually add the events for every element of the menu and then hide or show with "toggle", to which you can add the same effects as with show/hide: ``` $("#acer").click(function() { $("#acerca").toggle();} ); $("#cur").click(function() { $("#cursos").toggle();} ); ... ``` Another solution, with which you can add menu items without changing the jQuery code, has the requirement that the anchor text must be the div id: ``` <a href="#acerca" class="anchorClass">acercade</a> <div id="acercade"></div> $(".anchorClass").click(function(){ $("#"+$(this).text()).toggle(); }); ```
JQuery Swapping the Toggle of elements, one at a time
[ "jquery", "toggle", "javascript", "swap" ]
I am trying to figure out which devices are online and which are offline in our LAN. I have seen many programs that present a kind of graphical network overview, showing LAN IP and MAC addresses. I would like to know if and how that (ARP?) information can be pulled from `C#/.NET`?
If you know which devices are out there you can use the [Ping Class](http://msdn.microsoft.com/en-us/library/system.net.networkinformation.ping.aspx). This will allow you to at least fill up the ARP table. You can always execute ARP -a and parse the output if you have to. Here is also a link that shows how to pinvoke to call [GetIpNetTable](http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2004-10/3790.html). I have included examples below of Ping Class and how to access the ARP table using the GetIpNetTable. This is an example for the Ping Class ``` using System; using System.Net; using System.Net.NetworkInformation; using System.Text; namespace Examples.System.Net.NetworkInformation.PingTest { public class PingExample { // args[0] can be an IPaddress or host name. public static void Main (string[] args) { Ping pingSender = new Ping (); PingOptions options = new PingOptions (); // Use the default Ttl value which is 128, // but change the fragmentation behavior. options.DontFragment = true; // Create a buffer of 32 bytes of data to be transmitted. string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; byte[] buffer = Encoding.ASCII.GetBytes (data); int timeout = 120; PingReply reply = pingSender.Send (args[0], timeout, buffer, options); if (reply.Status == IPStatus.Success) { Console.WriteLine ("Address: {0}", reply.Address.ToString ()); Console.WriteLine ("RoundTrip time: {0}", reply.RoundtripTime); Console.WriteLine ("Time to live: {0}", reply.Options.Ttl); Console.WriteLine ("Don't fragment: {0}", reply.Options.DontFragment); Console.WriteLine ("Buffer size: {0}", reply.Buffer.Length); } } } } ``` This is an example of the GetIpNetTable. ``` using System; using System.Runtime.InteropServices; using System.ComponentModel; using System.Net; namespace GetIpNetTable { class Program { // The max number of physical addresses. const int MAXLEN_PHYSADDR = 8; // Define the MIB_IPNETROW structure. 
[StructLayout(LayoutKind.Sequential)] struct MIB_IPNETROW { [MarshalAs(UnmanagedType.U4)] public int dwIndex; [MarshalAs(UnmanagedType.U4)] public int dwPhysAddrLen; [MarshalAs(UnmanagedType.U1)] public byte mac0; [MarshalAs(UnmanagedType.U1)] public byte mac1; [MarshalAs(UnmanagedType.U1)] public byte mac2; [MarshalAs(UnmanagedType.U1)] public byte mac3; [MarshalAs(UnmanagedType.U1)] public byte mac4; [MarshalAs(UnmanagedType.U1)] public byte mac5; [MarshalAs(UnmanagedType.U1)] public byte mac6; [MarshalAs(UnmanagedType.U1)] public byte mac7; [MarshalAs(UnmanagedType.U4)] public int dwAddr; [MarshalAs(UnmanagedType.U4)] public int dwType; } // Declare the GetIpNetTable function. [DllImport("IpHlpApi.dll")] [return: MarshalAs(UnmanagedType.U4)] static extern int GetIpNetTable( IntPtr pIpNetTable, [MarshalAs(UnmanagedType.U4)] ref int pdwSize, bool bOrder); [DllImport("IpHlpApi.dll", SetLastError = true, CharSet = CharSet.Auto)] internal static extern int FreeMibTable(IntPtr plpNetTable); // The insufficient buffer error. const int ERROR_INSUFFICIENT_BUFFER = 122; static void Main(string[] args) { // The number of bytes needed. int bytesNeeded = 0; // The result from the API call. int result = GetIpNetTable(IntPtr.Zero, ref bytesNeeded, false); // Call the function, expecting an insufficient buffer. if (result != ERROR_INSUFFICIENT_BUFFER) { // Throw an exception. throw new Win32Exception(result); } // Allocate the memory, do it in a try/finally block, to ensure // that it is released. IntPtr buffer = IntPtr.Zero; // Try/finally. try { // Allocate the memory. buffer = Marshal.AllocCoTaskMem(bytesNeeded); // Make the call again. If it did not succeed, then // raise an error. result = GetIpNetTable(buffer, ref bytesNeeded, false); // If the result is not 0 (no error), then throw an exception. if (result != 0) { // Throw an exception. throw new Win32Exception(result); } // Now we have the buffer, we have to marshal it. 
We can read // the first 4 bytes to get the length of the buffer. int entries = Marshal.ReadInt32(buffer); // Increment the memory pointer by the size of the int. IntPtr currentBuffer = new IntPtr(buffer.ToInt64() + Marshal.SizeOf(typeof(int))); // Allocate an array of entries. MIB_IPNETROW[] table = new MIB_IPNETROW[entries]; // Cycle through the entries. for (int index = 0; index < entries; index++) { // Call PtrToStructure, getting the structure information. table[index] = (MIB_IPNETROW) Marshal.PtrToStructure(new IntPtr(currentBuffer.ToInt64() + (index * Marshal.SizeOf(typeof(MIB_IPNETROW)))), typeof(MIB_IPNETROW)); } for (int index = 0; index < entries; index++) { MIB_IPNETROW row = table[index]; IPAddress ip=new IPAddress(BitConverter.GetBytes(row.dwAddr)); Console.Write("IP:"+ip.ToString()+"\t\tMAC:"); Console.Write( row.mac0.ToString("X2") + '-'); Console.Write( row.mac1.ToString("X2") + '-'); Console.Write( row.mac2.ToString("X2") + '-'); Console.Write( row.mac3.ToString("X2") + '-'); Console.Write( row.mac4.ToString("X2") + '-'); Console.WriteLine( row.mac5.ToString("X2")); } } finally { // Release the memory. FreeMibTable(buffer); } } } } ```
Hopefully you are trying to get the MAC addresses from IP addresses and not the other way around. Here is a link to someone's example: [ARP Resolver](http://www.tamirgal.com/blog/post/ARP-Resolver-C-Class.aspx) I have not tried it; let us know how it works.
How do I access ARP-protocol information through .NET?
[ "c#", ".net", "ip", "lan", "arp" ]
I would like to do something like this: ``` public class Foobar { @Tag final private int foo; @Tag final private int bar; @Tag final private int baz; @Tag final private int quux; static private final TagValidator validator = TagValidator.autoGenerate(Foobar.class); public Foobar(Something something) { validator.validate(something); this.foo = something.method1(); this.bar = something.anotherMethod(); this.baz = something.someOtherMethod(); this.quux = something.yetAnotherMethod(); } ... other methods ... } ``` where `TagValidator.autoGenerate()` uses reflection + annotation parsing to get all the members of my Foobar class that have been tagged with `@Tag` and do some grungy laborious boring stuff which I'm going to need to do for several classes. My question is, is it going to see these members in the order they are declared? I agree it would be safer to do this: ``` public class Foobar { @Tag(0) final private int foo; @Tag(1) final private int bar; @Tag(2) final private int baz; @Tag(3) final private int quux; ... } ``` but I'm feeling lazy :-) Also, is it possible to have a class and an annotation with the same name?
To quote the JavaDocs on both `Class.getDeclaredFields()` and `Class.getFields()` > The elements in the array returned are > not sorted and are not in any > particular order. From a maintainability perspective if the order is important include it in your annotation or better still don't rely on the order. Yes, it is possible to for a class and annotation to have the same name providing they are in a different package. Basically the same rules apply as classes and interfaces.
Assuming your TagValidator uses Class.getFields, then the javadoc says: > Returns an array containing Field > objects reflecting all the accessible > public fields of the class or > interface represented by this Class > object. The elements in the array > returned are not sorted and are not in > any particular order. So I wouldn't rely on it. In practice, though, this does always seem to come back in declaration order.
questions on annotation + class member order
[ "java", "reflection", "annotations" ]
I would appreciate a little expert help, please. In an SQL SELECT statement I am trying to get the last day with data per month for the last year. For example, I am easily able to get the last day of each month and join that to my data table, but the problem is, if the last day of the month does not have data, then no data is returned. What I need is for the SELECT to return the last day with data for each month. This is probably easy to do but, to be honest, my brain is starting to hurt. I've attached the SELECT below that works for returning the data for only the last day of the month for the last 12 months. Thanks in advance for your help! ``` SELECT fd.cust_id,fd.server_name,fd.instance_name, TRUNC(fd.coll_date) AS coll_date,fd.column_name FROM super_table fd, (SELECT TRUNC(daterange,'MM')-1 first_of_month FROM ( select TRUNC(sysdate-365,'MM') + level as DateRange from dual connect by level<=365) GROUP BY TRUNC(daterange,'MM')) fom WHERE fd.cust_id = :CUST_ID AND fd.coll_date > SYSDATE-400 AND TRUNC(fd.coll_date) = fom.first_of_month GROUP BY fd.cust_id,fd.server_name,fd.instance_name, TRUNC(fd.coll_date),fd.column_name ORDER BY fd.server_name,fd.instance_name,TRUNC(fd.coll_date) ```
Here's another approach, if ANSI `row_number()` is supported: ``` with RevDayRanked(itemDate,rn) as ( select cast(coll_date as date), row_number() over ( partition by datediff(month,coll_date,'2000-01-01') -- rewrite datediff as needed for your platform order by coll_date desc ) from super_table ) select itemDate from RevDayRanked where rn = 1; ``` Rows numbered 1 will be nondeterministically chosen among rows on the last active date of the month, so you don't need distinct. If you want information out of the table for all rows on these dates, use `rank()` over days instead of `row_number()` over coll\_date values, so a value of 1 appears for any row on the last active date of the month, and select the additional columns you need: ``` with RevDayRanked(cust_id, server_name, coll_date, rk) as ( select cust_id, server_name, coll_date, rank() over ( partition by datediff(month,coll_date,'2000-01-01') order by cast(coll_date as date) desc ) from super_table ) select cust_id, server_name, coll_date from RevDayRanked where rk = 1; ``` If `row_number()` and `rank()` aren't supported, another approach is this (for the second query above). Select all rows from your table for which there's no row in the table from a later day in the same month. ``` select cust_id, server_name, coll_date from super_table as ST1 where not exists ( select * from super_table as ST2 where datediff(month,ST1.coll_date,ST2.coll_date) = 0 and cast(ST2.coll_date as date) > cast(ST1.coll_date as date) ) ``` If you have to do this kind of thing a lot, see if you can create an index over computed columns that hold `cast(coll_date as date)` and a month indicator like `datediff(month,'2001-01-01',coll_date)`. That'll make more of the predicates SARGs.
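The month-grouping idea in both queries (pick, for each month, the latest date that actually has data) can be sanity-checked outside the database. Here is a minimal Python sketch with made-up sample dates:

```python
from datetime import date

def last_active_day_per_month(dates):
    """Map (year, month) -> the latest date that actually appears in `dates`."""
    latest = {}
    for d in dates:
        key = (d.year, d.month)
        if key not in latest or d > latest[key]:
            latest[key] = d
    return latest

# Made-up sample: January's last calendar day (the 31st) has no data,
# so the "last day with data" for January is the 29th.
sample = [date(2009, 1, 12), date(2009, 1, 29), date(2009, 2, 28)]
print(sorted(last_active_day_per_month(sample).values()))
```

This is the same relation the `MAX(coll_date) ... GROUP BY month` and `rank() over (partition by month)` formulations compute inside the engine.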
You probably need to group your data so that each month's data is in the group, and then within the group select the maximum date present. The sub-query might be: ``` SELECT MAX(coll_date) AS last_day_of_month FROM Super_Table AS fd GROUP BY YEAR(coll_date) * 100 + MONTH(coll_date); ``` This presumes that the functions YEAR() and MONTH() exist to extract the year and month from a date as an integer value. Clearly, this doesn't constrain the range of dates - you can do that, too. If you don't have the functions in Oracle, then you do some sort of manipulation to get the equivalent result. Using information from Rhose (thanks): ``` SELECT MAX(coll_date) AS last_day_of_month FROM Super_Table AS fd GROUP BY TO_CHAR(coll_date, 'YYYYMM'); ``` This achieves the same net result, putting all dates from the same calendar month into a group and then determining the maximum value present within that group.
Last day of the month with a twist in SQLPLUS
[ "", "sql", "oracle", "" ]
I currently FTP all my files to my website when I do an update (over a slowish ADSL connection), and I want to make things easier. I recently started using a hosted SVN service, and I thought I could speed things up a bit by doing an svn export of my website directly onto my webserver. I have tried that a few times and it seems to work OK; however, it fetches the entire site every time, which is a bit slow for a one-file update. So my questions are: is it possible to do an export and only get the changes since the last export (and how will this handle deleted files)? Or would it be easier to do an svn checkout and svn update all the time instead of svn export, and just hide the .svn folders using Apache htaccess? Is this a good idea, or is there a better way to publish my website? I am trying to achieve the one-click-deploy ideal. Maybe there are some gotchas I haven't thought of that someone else has run into. debian/apache/php
I would do an svn checkout, and have done so successfully on a live site for a number of years. You should add `mod_rewrite` rules to 404 the .svn directories (and files) though.
This is what I'm doing on my host: For every project I have a structure that looks more less like this: ``` ~/projects/myproj ~/public_html/myproj ``` First dir is a checkout from SVN, while second one is just svn export. I have a small bash script ``` #!/bin/bash SOURCE="$HOME/projects/" TARGET="$HOME/public_html/" for x in `ls $SOURCE` do if [ -d $SOURCE$x ]; then svn update $SOURCE$x svn export --force $SOURCE$x $TARGET$x fi done ``` Export is done from working copy so it's very fast.
publishing a website using svn export
[ "", "php", "svn", "deployment", "build-automation", "" ]
**Scenario** Imagine a REST service that returns a list of things (e.g. notifications) **Usage** A client will continually poll the REST service. The REST service retrieves records from the database. If records are available, they are converted into JSON and returned to the client. At the same time, the retrieved records are purged from the DB. **Problem** How do you handle the case where the REST endpoint encounters a problem writing the results back to the client? By that time, the records have been deleted.
Deleting the records will always be a dangerous proposition. What you could do instead is include a timestamp column on the data. Then have your REST url include a "new since" timestamp. You return all records from that timestamp on. If the notifications grow to be too large you can always setup an automated task to purge records more than an hour old - or whatever interval works well for you.
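A minimal sketch of that timestamp-based design in Python (an in-memory list stands in for the database; all names are illustrative):

```python
import time

class NotificationStore:
    """Records are never deleted on read; clients pass the last timestamp they saw."""
    def __init__(self):
        self._records = []  # list of (timestamp, payload)

    def add(self, payload, ts=None):
        self._records.append((time.time() if ts is None else ts, payload))

    def fetch_since(self, since):
        # The read stays side-effect free: nothing is purged here,
        # so a failed response costs the client nothing but a retry.
        return [p for (t, p) in self._records if t > since]

store = NotificationStore()
store.add("disk full", ts=100.0)
store.add("backup done", ts=200.0)
print(store.fetch_since(150.0))  # ['backup done']
```

If the write to the client fails, the client simply repeats the request with its old cursor and gets the same records again.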
It sounds like a strange idea to delete DB records after read access. Possible problems immediately leap to mind: network trouble prevents the client reading the data, multiple clients cause each other to see incomplete lists, etc. The RESTful approach might be like this: * Give each notification a specific URI. Allow GET and DELETE on these URIs. The client may trigger the record deletion once it has successfully received and processed the notification. * Provide a URI for the collection of current notifications. Serve a list of notification data (ID, URI, timestamp, (optional:) content) upon GET request. Take a look at the Atom protocol for ideas. Optional: Allow POST to add a new notification. With this approach all reading requests stay simple GETs. You may instrument the usual HTTP caching mechanisms on proxies and clients for performance improvement. Anyway: deleting a DB entry is a state change on the server. You must not do this upon a GET request. So POST will be your primary choice. But this does not help you much, since the communication might still not be reliable. And polling with POSTs smells a lot more like Web Services than REST.
How do you robustly implement a REST service that retrieves DB records then purges them before returning?
[ "", "java", "rest", "persistence", "" ]
I have a string array in C# 3.5: ``` string[] times = new[] { "08:00 am", "10:00 am", "120" }; ``` I would like to create indexes into times: `StartTime`, `EndTime`, `ElapsedTime` so that when I code: ``` StartTime = "09:00 am"; EndTime = "11:00 am"; ``` then `times[0]` is set to `"09:00 am"`, etc. I could create 3 methods: ``` private void StartTime(string time) { times[0] = time; } private void EndTime(string time) { times[1] = time; } private void ElapsedTime(string time) { times[2] = time; } ``` and code ``` StartTime("09:00"); ``` but is there a simpler way to do it?
What you should really do is create a new class to do this. Make the two times properties, and make the elapsed time a function of the start and end times. ``` class Time { public DateTime StartTime { get; set; } public DateTime EndTime { get; set; } public string[] ToStringArray() { string[] ret = new string[3]; ret[0] = StartTime.ToString(); ret[1] = EndTime.ToString(); ret[2] = ElapsedTime().ToString(); return ret; } public TimeSpan ElapsedTime() { return EndTime - StartTime; } } ```
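For illustration, the same design in Python: the elapsed time is derived from the two stored endpoints instead of being stored itself, so it can never drift out of sync with them:

```python
from datetime import datetime, timedelta

class TimeRange:
    def __init__(self, start, end):
        self.start = start
        self.end = end

    @property
    def elapsed(self):
        # Derived, not stored: always consistent with start and end.
        return self.end - self.start

r = TimeRange(datetime(2009, 1, 1, 9, 0), datetime(2009, 1, 1, 11, 0))
print(r.elapsed)  # 2:00:00
```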
I don't know it it is simpler, but I would suggest taking the hard index references out of your code and replace them with constants, to make easier to maintain if the order of elements in the array would change in the future: ``` private const int START_TIME = 0; private const int END_TIME = 1; private const int ELAPSED_TIME = 2; ``` Then you will also get more readable code: ``` times[END_TIME] = time; ``` Unless you want to be more object oriented, in which case you should follow jjnguy's advice.
How to index a string array
[ "", "c#", "" ]
I am using the PDO Database Abstraction library to make sure my code is portable. However, I now find that I need column information so I turned to the [PDOStatement->getColumnMeta()](https://www.php.net/manual/en/pdostatement.getcolumnmeta.php) method hoping it would be somewhat standardized - but from what I have found it actually seems open-ended. For example, when calling that method from SQLite it seems you get one set of possible values: <http://gcov.php.net/PHP_5_3/lcov_html/pdo_sqlite/sqlite_statement.c.gcov.php> ``` null double blob string integer ... ``` While a call from the MySQL database lists all kinds of other weird values: <http://gcov.php.net/PHP_5_3/lcov_html/pdo_mysql/mysql_statement.c.gcov.php> ``` var_string longlong newdecimal geometry ... ``` I may be looking in the wrong place also, but I just can't find any useful data on what "native\_type" values can be when it comes to switching around databases.
This is one of those areas of PDO that was left intentionally undefined, in order to keep the abstraction lightweight. PDO does not define a standard representation of types for this method; each driver has its own idea about what it can return here.
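Given that each driver invents its own names, one practical defense is to normalize the values you actually encounter into a small canonical set of your own. A Python sketch of the idea; the mapping entries are examples drawn from the SQLite/MySQL lists above, not an exhaustive table:

```python
# Illustrative driver-specific names mapped onto a small canonical set.
# Extend the table as you meet new drivers/values.
CANONICAL_TYPES = {
    "integer": "int", "longlong": "int",
    "string": "str", "var_string": "str",
    "double": "float", "newdecimal": "decimal",
}

def normalize_native_type(native_type):
    """Fold a driver-reported native_type onto one canonical name."""
    return CANONICAL_TYPES.get(str(native_type).lower(), "unknown")

print(normalize_native_type("VAR_STRING"))  # str
print(normalize_native_type("geometry"))    # unknown
```

Falling back to "unknown" (rather than raising) keeps the code working when a driver reports a value you have not catalogued yet.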
PDO is *not* a database abstraction. It is "only" a unified access layer. If you switch to another database system you most likely have to change the code. Each (specific) database driver returns its own set of values and there's no "translation layer" for the driver's decl\_type info in PDO beyond the native\_type/pdo\_type fields in the result of getColumnMeta()
List of PHP native_type's for PDO getColumnMeta()
[ "", "php", "database", "pdo", "" ]
I am trying to get the following code to compile but am getting errors in VS2008. Anyone can tell me where I am going wrong? ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace dummy { public class NaturalNumbersSequence : IEnumerable<int> { public IEnumerator<int> GetEnumerator() { for (int i = 1; i <= 1000; i++) yield return i; } IEnumerator IEnumerable.GetEnumerator() { for (int i = 1; i <= 1000; i++) yield return i; } } class Program { static void Main(string[] args) { foreach (int i in new NaturalNumbersSequence()) Console.WriteLine(i); } } } ```
Well, the first compiler error I get is that it complains that: > Using the generic type 'System.Collections.Generic.IEnumerator' requires '1' type arguments This is on line 16, this one: ``` IEnumerator IEnumerable.GetEnumerator() ``` Fixing that by adding a using directive for the `System.Collections` namespace (tip: place the cursor just after IEnumerator, on the *r* at the end of the word, and hit Ctrl+. (ctrl + the dot-key), it should suggest you add a "using System.Collections;" directive, do that). Then it compiles, and runs. Does that match what you expect? Also, note that you should always post the actual error messages you're getting, this way we're not barking up the wrong tree if there's something else wrong with your code that we're not seeing at first glance. Additionally, you can simplify this very common implementation of `IEnumerable<T>` by calling one of the methods from the other, hence I would simplify the implementation of the second methods like this: ``` IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); // this will return the one // available through the object reference // ie. "IEnumerator<int> GetEnumerator" } ``` this way you only implement the actual enumerator code once. And finally see [Earwicker](https://stackoverflow.com/users/27423/earwicker)'s [answer](https://stackoverflow.com/questions/1174713/ienumerablet-in-c/1174736#1174736) as well, it shows a better (in my opinion at least) way to write this whole code.
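For comparison, the same lazy sequence in Python is a plain generator function, which is roughly what the C# iterator block compiles into behind the scenes:

```python
def naturals(limit=1000):
    """Lazily yield 1..limit, like the C# 'yield return' loop above."""
    for i in range(1, limit + 1):
        yield i

# Values are produced one at a time, only as the consuming loop asks for them.
print(list(naturals(5)))  # [1, 2, 3, 4, 5]
```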
Not sure why you're getting errors, but wouldn't this be simpler? ``` public static class NumbersSequence { public static IEnumerable<int> Naturals { get { for (int i = 1; i <= 1000; i++) yield return i; } } } class Program { public static void Main(string[] args) { foreach (int i in NumbersSequence.Naturals) Console.WriteLine(i); } } ```
IEnumerable<T> in C#
[ "", "c#", "ienumerable", "" ]
I have a normal Windows Forms program (not VSTO) which deploys using ClickOnce. The issue is that loads of users are having problems with random errors, generally stating that IClassFactory failed due to the following error: 80004005. I'm deploying Redemption by changing the mode to "Isolated", which appears to work for some users but not others. The users that aren't working can be fixed by manually installing the Redemption DLL. Can anyone explain how to automate the process? (I really want it to be reg-free so users don't need admin permission to install.) Thanks Ross
Got this resolved. The issue was I had loaded redemption objects on a background thread, and was trying to manipulate them on the UI thread. Ensure you are consistent when using the objects.
It is entirely possible to use Redemption in background threads, if you do it correctly. The first RDOSession object you create must be created in the UI thread, because some MAPI internals need the message pump to have been created in the same thread. Typically this RDOSession should be kept for the lifetime of your app. You cannot access this object from any other thread. You'll need to pass the MAPIOBJECT property of your first RDOSession to each worker thread, create a new RDOSession object from within each thread, and assign the MAPIOBJECT from your first RDOSession to the secondary RDOSession created in the thread. Example: (Aircode Warning: the code below was typed from memory.) ``` Dim PrimaryRDOSession As New Redemption.RDOSession() PrimaryRDOSession.Login([...]) Dim WorkerThread as New System.Threading.Thread(AddressOf ThreadProc) WorkerThread.Start(PrimaryRDOSession.MAPIOBJECT) Sub ThreadProc(ByVal param as Object) Dim ThdRDOSession As New Redemption.RDOSession() ThdRDOSession.MAPIOBJECT = param ' do other stuff End Sub ``` From there you can do anything you'd normally do with Redemption. You can pass EntryIDs between threads if Outlook objects are selected/located in one thread, and acted upon in another.
Redemption + Clickonce = :-(
[ "", "c#", "com", "clickonce", "outlook-redemption", "" ]
I'm writing software for hyperbolic partial differential equations in C++. Almost all notations are vector and matrix ones. On top of that, I need a linear algebra solver. And yes, the vector and matrix sizes can vary considerably (from say 1000 up to sizes that can only be solved by distributed-memory computing, e.g. clusters or similar architectures). If I lived in utopia, I'd have a linear solver which scales great for clusters, GPUs and multicores. When thinking about the data structure that should represent the variables, I came across boost.ublas and MTL4. Both libraries are BLAS level 3 compatible; MTL4 implements a sparse solver and is much faster than ublas. Neither has implemented support for multicore processors, not to mention parallelization for distributed-memory computations. On the other hand, the development of MTL4 depends on the sole effort of 2 developers (at least as I understood it), and I'm sure there is a reason that ublas is in the Boost library. Furthermore, Intel's MKL library includes an example for binding their structure with ublas. I'd like to bind my data and software to a data structure that will be rock solid, developed and maintained for a long period of time. Finally, the question. What is your experience with the use of ublas and/or MTL4, and what would you recommend? thanx, mightydodol
With your requirements, I would probably go for [BOOST::uBLAS](http://www.boost.org/doc/libs/1_39_0/libs/numeric/ublas/doc/index.htm). Indeed, a good deployment of uBLAS should be roughly on par with MTL4 regarding speed. The reason is that there exist [bindings](http://mathema.tician.de/software/boost-bindings) for [ATLAS](http://math-atlas.sourceforge.net/) (hence shared-memory parallelization that you can efficiently optimize for your computer), and also vendor-tuned implementations like the [Intel Math Kernel Library](http://software.intel.com/en-us/intel-mkl/) or [HP MLIB](http://www.hp.com/go/mlib). With these bindings, uBLAS with a well-tuned ATLAS / BLAS library doing the math should be fast enough. If you link against a given BLAS / ATLAS, you should be roughly on par with MTL4 linked against the same BLAS / ATLAS using the compiler flag [`-DMTL_HAS_BLAS`](http://www.osl.iu.edu/research/mtl/mtl4/doc/install.html), and most likely faster than MTL4 without BLAS according to their own [observation](http://www.osl.iu.edu/research/mtl/mtl4/doc/peak_addiction.html) (example see [here](http://www.osl.iu.edu/research/mtl/mtl4/doc/performance_athlon.html), where [GotoBLAS](http://www.tacc.utexas.edu/resources/software/gotoblasfaq.php) outperforms MTL4). To sum up, speed should not be your decisive factor as long as you are willing to use some BLAS library. Usability and support are more important. You have to decide whether MTL or uBLAS is better suited for you. I tend towards uBLAS given that it is part of BOOST, and MTL4 currently only supports BLAS [selectively](http://www.osl.iu.edu/research/mtl/mtl4/doc/performance_disclaimer.html). You might also find this slightly dated [comparison of scientific C++ packages](http://www.linuxclustersinstitute.org/conferences/archive/2003/PDF/Mello_U.pdf) interesting. One big BUT: for your requirements (extremely big matrices), I would probably skip the "syntactic sugar" uBLAS or MTL, and call the "metal" C interface of BLAS / LAPACK directly. But that's just me... Another advantage is that it should be easier to switch to [ScaLAPACK](http://www.netlib.org/scalapack/) (distributed-memory LAPACK, which I have never used) for bigger problems. Just to be clear: for household problems, I would not suggest calling a BLAS library directly.
If you're programming vectors, matrices, and linear algebra in C++, I'd look at Eigen: <http://eigen.tuxfamily.org/> It's faster than uBLAS (not sure about MTL4) and much cleaner syntax.
ublas vs. matrix template library (MTL4)
[ "", "c++", "math", "linear-algebra", "" ]
I'm currently adding a tooltip to a label like so: ``` ToolTip LabelToolTip = new System.Windows.Forms.ToolTip(); LabelToolTip.SetToolTip(this.LocationLabel, text); ``` When I need to change this tooltip as the label's text changes, I try doing the same to add a new tooltip. Unfortunately, the old tooltip remains under the new one, which is really annoying. Is there a method to remove the old tooltip, or should I just make a new label when I want to change the text in a label?
Create a single instance of the `ToolTip` and use it whenever you like to show it using the `SetToolTip` method and use [`Hide`](http://msdn.microsoft.com/en-us/library/system.windows.forms.tooltip.hide.aspx) method to hide it. Generally it is not necessary to create more than one `ToolTip` instance.
The tooltip object works in multiple Controls at the same time. Create a single instance of the ToolTip and use it for adding and removing a ToolTip of any Control. When adding you should use [.SetToolTip](https://msdn.microsoft.com/en-us/library/system.windows.forms.tooltip.settooltip(v=vs.110).aspx)(Control, "Message that will apear when hover") When removing you just set it back to null with [.SetToolTip](https://msdn.microsoft.com/en-us/library/system.windows.forms.tooltip.settooltip(v=vs.110).aspx)(Control, null).
How do I remove a tooltip currently bound to a control?
[ "", "c#", ".net", "winforms", "tooltip", "" ]
I did a REGEX pattern some time ago and I don't remember its meaning. For me this is a write-only language :) Here is the REGEX: ``` "(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$" ``` I need to know, in plain English, what does it means.
``` (?!^[0-9]*$) ``` don't match only numbers, ``` (?!^[a-zA-Z]*$) ``` don't match only letters, ``` ^([a-zA-Z0-9]{8,10})$ ``` match letters and number 8 to 10 characters long.
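That reading is easy to confirm against the pattern itself; a quick Python check (the test strings are made up):

```python
import re

pattern = re.compile(r"(?!^[0-9]*$)(?!^[a-zA-Z]*$)^([a-zA-Z0-9]{8,10})$")

assert pattern.match("abc12345")      # mixed letters and digits, 8 chars: OK
assert not pattern.match("12345678")  # digits only: rejected by first lookahead
assert not pattern.match("abcdefgh")  # letters only: rejected by second lookahead
assert not pattern.match("ab12")      # too short for {8,10}
print("pattern behaves as described")
```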
[Perl](http://www.perl.com/doc/manual/html/pod/perlre.html) (and [Python](http://docs.python.org/library/re.html) accordingly) says to the `(?!...)` part: > A zero-width negative lookahead assertion. For example `/foo(?!bar)/` matches any occurrence of 'foo' that isn't followed by 'bar'. Note however that lookahead and lookbehind are NOT the same thing. You cannot use this for lookbehind. That means, ``` (?!^[0-9]*$) ``` means: *don't match, if the string contains **only** numbers.* (`^`: start of line/string, `$`: end of line/string) The other accordingly. Your regexp matches any string, that contains **both** numbers and letters, but not only one of them. Cheers, **Update:** For your future RegExp tailoring, take a look at the `(?#...)` pattern. It allows you to embed comments in your regexp. There is also a modifier, `re.X`, but I don't like this very much. It's your choice.
Interpret this particular REGEX
[ "", "python", "regex", "" ]
Consider this code: ``` class test { public static void main(String[] args) { test inst_test = new test(); int i1 = 2000; int i2 = 2000; int i3 = 2; int i4 = 2; Integer Ithree = new Integer(2); // 1 Integer Ifour = new Integer(2); // 2 System.out.println( Ithree == Ifour ); inst_test.method( i3 , i4 ); inst_test.method( i1 , i2 ); } public void method( Integer i , Integer eye ) { System.out.println(i == eye ); } } ``` It prints: ``` false true false ``` I understand the first `false`, the == operator only checks if two references are working on the same object, which in this case aren't. The following `true` and `false` have me scratching my head. Why would Java consider `i3` and `i4` equal but `i1` and `i2` different? Both have been wrapped to Integer, shouldn't **both** evaluate to false? Is there a practical reason for this inconsistency?
Autoboxing of primitives into objects (as used in your calls to `method` uses a cache of small values. From the [Java Language Specification section 5.1.7](http://java.sun.com/docs/books/jls/third_edition/html/conversions.html#190697): > If the value p being boxed is true, > false, a byte, a char in the range > \u0000 to \u007f, or an int or short > number between -128 and 127, then let > r1 and r2 be the results of any two > boxing conversions of p. It is always > the case that r1 == r2. The discussion part of the spec immediately following that is interesting too. Notably a JVM can cache *more* values if it wants to - you can't be sure of the results of doing: ``` Integer i1 = 129; Integer i2 = 129; boolean b = (i1 == i2); ```
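CPython happens to have an analogous small-integer cache (roughly -5 through 256 in current CPython; like the JVM's, the exact range is an implementation detail), so identity comparisons there show the same surprising split:

```python
# CPython implementation detail: small ints are cached, large ones are not.
a, b = 100, 100
x, y = int("1000"), int("1000")  # built at runtime to dodge constant folding

print(a is b)  # True  (both names refer to the one cached object for 100)
print(x == y)  # True  (equal values...)
print(x is y)  # False (...but distinct objects, like new Integer(1000) twice)
```

The moral is the same in both languages: compare boxed/object values with equality (`equals` / `==`), never with identity.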
When autoboxing, Integers between -128 and 127 are cached, and the same wrapper object is returned. The same with boolean values and char values between \u0000 and \u007F This is what you get most of the time, however it depends on JVM implementation.
Inconsistent behavior on java's ==
[ "", "java", "integer", "primitive", "" ]
I like to force a policy of no warnings when I check someone's code. Any warnings that appear have to be explicitly documented as sometimes it's not easy to remove some warnings or might require too many cycles or memory etc. But there is a down-side to this policy and that is removing warnings in ways that are potentially dangerous, i.e. the method used actually hides the problem rather than fixes it. The one I'm most acutely aware of is explicitly casts which might hide a bug. What other potentially dangerous ways of removing compiler warnings in C(++) are there that I should look out for?
const correctness can cause a few problems for beginners: ``` // following should have been declared as f(const int & x) void f( int & x ) { ... } ``` later: ``` // n is only used to pass the parameter "4" int n = 4; // really wanted to say f(4) f( n ); ``` **Edit1:** In a somewhat similar vein, marking all member variables as **mutable**, because your code often changes them when const correctness says it really shouldn't. **Edit2:** Another one I've come across (possibly from Java programmers ) is to tack throw() specifications onto functions, whether they could actually throw or not.
Well, there's the obvious way - disabling a specific warning for parts of the code: ``` #pragma warning( disable : 4507 34 ) ``` EDIT: As has been pointed out in the comments, it is sometimes necessary to use in cases where you know that the warnings are OK (if it wasn't a useful feature, there would have been no reason to put it in in the first place). However, it is also a very easy way to "ignore" warnings in your code and still get it to compile silently, which is what the original question was about.
Dangerous ways of removing compiler warnings?
[ "", "c++", "c", "compiler-warnings", "" ]
I have a build error with a slot in Qt. I have a class which has a public slot: ``` void doSomething(); ``` In the constructor of this class I do: ``` this->connect( ui->textFrom, SIGNAL(returnPressed()), this, SLOT(doSomething()) ); ``` textFrom is a QLineEdit object. The build error is > ``` > ../moc_mainwindow.cpp:66: undefined reference to `MainWindow::doSomething()' > ``` > > :-1: error: collect2: ld returned 1 > exit status Help me, please (:
void doSomething(); looks like a snippet from the header file. Did you implement the slot itself?
quick note about syntax: Usually you would use either ``` connect(from, SIGNAL(sig()), to, SLOT(slot())); ``` which is basically equivalent to ``` QObject::connect(from, SIGNAL(sig()), to, SLOT(slot())); ``` Which you'll do if you're calling from somewhere not inside a QObject. While this syntax: ``` to->connect(from, SIGNAL(sig()), SLOT(slot())); ``` is also reasonable. But this syntax: ``` to->connect(from, SIGNAL(sig()), to, SLOT(slot())); ``` is just confusing and duplicates code.
Qt and C++ - undefined reference to slot
[ "", "c++", "qt", "signals-slots", "undefined-reference", "" ]
I want to create a php script that will ping a domain and list the response time along with the total size of the request. This will be used for monitoring a network of websites. I tried it with `curl`, here is the code I have so far: ``` function curlTest2($url) { clearstatcache(); $return = ''; if(substr($url,0,4)!="http") $url = "http://".$url; $userAgent = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322)'; $ch = curl_init($url); curl_setopt($ch, CURLOPT_HEADER, 1); curl_setopt($ch, CURLOPT_NOBODY, 1); curl_setopt($ch, CURLOPT_USERAGENT, $userAgent); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch,CURLOPT_CONNECTTIMEOUT, 15); curl_setopt($ch, CURLOPT_FAILONERROR, 1); curl_setopt($ch, CURLOPT_FRESH_CONNECT, 1); $execute = curl_exec($ch); // Check if any error occured if(!curl_errno($ch)) { $bytes = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD); $total_time = curl_getinfo($ch, CURLINFO_TOTAL_TIME); $return = 'Took ' . $total_time . ' / Bytes: '. $bytes; } else { $return = 'Error reaching domain'; } curl_close($ch); return $return; } ``` And here is one using fopen ``` function fopenTest($link) { if(substr($link,0,4)!="http"){ $link = "http://".$link; } $timestart = microtime(); $churl = @fopen($link,'r'); $timeend = microtime(); $diff = number_format(((substr($timeend,0,9)) + (substr($timeend,-10)) - (substr($timestart,0,9)) - (substr($timestart,-10))),4); $diff = $diff*100; if (!$churl) { $message="Offline"; }else{ $message="Online. Time : ".$diff."ms "; } fclose($churl); return $message; } ``` Is there a better way to ping a website using php?
You could use xmlrpc ([xmlrpc\_client](http://phpxmlrpc.sourceforge.net/doc-2/ch07s03.html)). Not sure what the advantages/disadvantages to curl are. Drupal uses xmlrpc for this purpose (look at the ping module).
Obviously curl's got all kinds of cool things, but remember, you can always make use of built in tools by invoking them from the command line like this: ``` $site = "google.com"; ob_start(); system("ping " . escapeshellarg($site)); print ob_end_flush(); ``` Only thing to keep in mind, this isn't going to be as cross platform as curl might be; although the curl extension is not enabled by default either..
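Whichever transport you pick (curl, fopen, or shelling out to ping), the timing part reduces to wrapping the call with a monotonic clock. A small Python sketch of that wrapper; the helper name is illustrative:

```python
import time

def time_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_ms), measured with a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Any fetch function can be dropped in; a trivial stand-in keeps this runnable.
result, ms = time_call(sum, [1, 2, 3])
print(result, ms >= 0.0)  # 6 True
```

A monotonic timer avoids the wall-clock pitfalls the `microtime()` string arithmetic in the question runs into.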
Using php to ping a website
[ "", "php", "curl", "web", "ping", "" ]
I am trying to create a nullable object in Java but have no idea how to do this. In C# it would be done like this ``` int? someTestInt; ``` This allows me to check for null, while in certain cases I can use a 0 value. This isn't always possible, since certain execution paths allow 0 values
I'm not entirely sure what you want, but if you want to have an integer value that also can be declared `null`, you probably want to use the `Integer` class: ``` Integer nullableInteger = 1; nullableInteger = null; System.out.println(nullableInteger); // "null" ``` There are corresponding classes for each primitive: `Character`, `Long`, `Double`, `Byte`, etc. The 'standard library' numeric classes all extend the [`Number`](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Number.html) class. Note that Java [autoboxes](http://en.wikipedia.org/wiki/Autoboxing#Autoboxing) these objects automatically since JDK 1.5, so you can use *and declare* them just like the primitives (no need for e.g. "`new Integer(1)`"). So, although they are technically objects (and, therefore, extend the `Object` class, which the primitive `int` type does not), you can do basic arithmetics with them. They are converted to object operations at compile time.
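For contrast, Python has no such split: any variable can already hold None, which is exactly the gap Java's wrapper classes fill. The usual pattern is an explicit None check so that 0 and "no value" stay distinct:

```python
def parse_quantity(text):
    """Return an int, or None when the input carries no value (which is not 0!)."""
    text = text.strip()
    return int(text) if text else None

for raw in ["42", "0", ""]:
    value = parse_quantity(raw)
    print(repr(raw), "->", "no value" if value is None else value)
```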
Java does not support nullable primitives. You can use the Integer type if you want the ability to store nulls. (This is a duplicate of this post: [How to present the nullable primitive type int in Java?](https://stackoverflow.com/questions/985151/how-to-present-the-nullable-primitive-type-int-in-java))
Creating nullable types in Java
[ "", "java", "nullable", "" ]
I am not able to connect to Oracle using a connection string and the correct password. I've tried the same parameters in SQL Developer and sqlplus (command line) and they are working. I even tried changing the password, but no use. Any ideas what might be wrong or where I should start looking? The exact Oracle error is: **invalid username/password; logon denied** and after trying a few times the account gets locked with **the account is locked**
Actually the problem was with the cache on the computer. Even though I was giving "server='correctdb'", Visual Studio was taking "server='olddb'" as input, and hence I was not able to connect. Pathetic, but that's what happened.
Is it just your computer that exhibits the problem, or are others seeing it as well? Can you test your connection in the ODBC Data Source Administrator? Does that work?
Error connecting to Oracle from Visual Studio
[ "", "c#", "database", "visual-studio", "oracle", "" ]
In my app I need to watch a directory for new files. The amount of traffic is very large and there are going to be a minimum of hundreds of new files per second appearing. Currently I'm using a busy loop with this kind of idea: ``` while True: time.sleep(0.2) if len(os.listdir('.')) > 0: # do stuff ``` After running profiling I'm seeing a lot of time spent in the sleep, and I'm wondering if I should change this to use polling instead. I'm trying to use one of the available classes in `select` to poll my directory, but I'm not sure if it actually works, or if I'm just doing it wrong. I get an fd for my directory with: ``` fd = os.open('.', os.O_DIRECT) ``` I've then tried several methods to see when the directory changes. As an example, one of the things I tried was: ``` poll = select.poll() poll.register(fd, select.POLLIN) poll.poll() # returns (fd, 1) meaning 'ready to read' os.read(fd, 4096) # prints largely gibberish but i can see that i'm pulling the files/folders contained in the directory at least poll.poll() # returns (fd, 1) again os.read(fd, 4096) # empty string - no more data ``` Why is poll() acting like there is more information to read? I assumed that it would only do that if something had changed in the directory. Is what I'm trying to do here even possible? If not, is there any other better alternative to `while True: look for changes` ?
> After running profiling I'm seeing a lot of time spent in the sleep, and I'm wondering if I should change this to use polling instead. Looks like you already *do* synchronous polling, by checking the state at regular intervals. Don't worry about the time "spent" in `sleep`, it won't eat CPU time. It just passes control to the operating system which wakes the process up after a requested timeout. You could consider asynchronous event loop using a library that listens to filesystem change notifications provided by the operating system, but consider first if it gives you any real benefits in this particular situation.
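Staying with the synchronous-polling approach the answer describes, a minimal sketch might look like the following; the callback and directory are placeholders, and the interval mirrors the question's 0.2 seconds:

```python
import os
import time

def watch_directory(path, handle_new, poll_interval=0.2, max_polls=None):
    """Poll `path` at a fixed interval and call `handle_new` for each
    entry that has appeared since the previous poll. The sleep hands
    control to the OS, so it does not burn CPU time while waiting."""
    seen = set(os.listdir(path))
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(poll_interval)
        current = set(os.listdir(path))
        for name in sorted(current - seen):
            handle_new(name)
        seen = current
        polls += 1
```

The set difference means each file is reported exactly once, even while the directory keeps filling up; `max_polls` exists only so the loop can be bounded in tests.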
FreeBSD and thus Mac OS X provide an analog of inotify called kqueue. Type `man 2 kqueue` on a FreeBSD machine for more information. For kqueue on FreeBSD you have PyKQueue available at <http://people.freebsd.org/~dwhite/PyKQueue/>; unfortunately it is not actively maintained, so your mileage may vary.
Using select/poll/kqueue/kevent to watch a directory for new files
[ "", "python", "asynchronous", "freebsd", "polling", "" ]
My understanding is that Microsoft Script Editor is the best debugging utility for IE. To enable debugging in IE6 on XP, I found these instructions:

> On Windows XP SP2+, the option has been split in two:
>
> 1. Go to Tools -> Internet Options… -> Advanced -> Disable Script Debugging (Internet Explorer)
> 2. Go to Tools -> Internet Options… -> Advanced -> Disable Script Debugging (Other)
>
> Unchecking the first will enable debugging for IE.

Once enabled, I'm supposed to see a new menu under "View > Script debugger" to activate debugging. Unfortunately, unchecking both check boxes and restarting my computer does not reveal this new "Script debugger" option. This worked on Windows 2000, but it fails for XP SP3.

1. Any clues on how to install MSFT Script Editor for IE6 on XP SP3?
2. Recommendations for alternatives to Script Editor for IE6?
Thanks for the answer, redsquare. After much experimentation, I finally discovered how to install MSE on XP SP3: 1) Disable the checkboxes as described above. 2) Open MS Excel. Go to Tools -> Macro -> Microsoft Script Editor. Clicking this option will prompt installation of MSE. Install MSE. 3) Once MSE is accessible from Excel, open MSE. Go to Debugging -> Web Debugging. Clicking this option will prompt installation of MSE Web Debugging. Install it. Once Web Debugging is ready, restart IE, and voila! Under the View menu, you'll now see the Script Debugger option. Finally ...
You can debug with [visual web express](http://www.microsoft.com/express/vwd/). Instructions [here](http://www.berniecode.com/blog/2007/03/08/how-to-debug-javascript-with-visual-web-developer-express/) edit: MS's site doesn't play nicely with older versions of IE. It's impossible to get the tabs to expand to get the links to the software. Use Chrome to browse the links above, or use this URL <http://go.microsoft.com/?linkid=9730788>
Microsoft Script Editor for Internet Explorer on XP Home?
[ "", "javascript", "debugging", "internet-explorer", "windows-xp", "internet-explorer-6", "" ]
Is there any small library that wraps various processors' CAS-like operations into macros or functions that are portable across multiple compilers? **PS.** The [atomic.hpp library](http://www.boost.org/doc/libs/1_39_0/boost/interprocess/detail/atomic.hpp) is inside the boost::interprocess::detail namespace. The author refuses to make it a public, well-maintained library. **Let's reopen the question and see if there are any other options.**
[Intel Threading Building Blocks](http://threadingbuildingblocks.org) has a nice portable `atomic<T>` template which does what you want. But whether it is a small library or not can of course be debated.
OPA (Open Portable Atomics) could be a good fit for your needs. <https://trac.mcs.anl.gov/projects/openpa/> It provides a consistent C API to common atomic operations across multiple platforms under an MIT-style license. The library is small and certainly meets your size requirements. The current platform list is: * GCC inline assembly for x86, x86\_64, ia64, PPC 440, and MIPS 5K processors. Several compilers with GCC-compatible-ish front-ends are also supported on the same architectures, such as icc, PGI, and IBM's xlc. * GCC atomic intrinsics, so most GCC-4.1+ installations are supported. * The SUN Solaris atomic operations library. * Windows NT intrinsics (although you currently have to do a little bit of extra work to build on Windows). * Two pseudo-platforms, pthread mutex based emulation for portability to otherwise unsupported platforms (while sacrificing some performance), and an "unsafe" implementation for use in code that is conditionally compiled to be single-threaded code. I've never used it in a C++ program, although it ought to work with little or no changes. I'd be happy to tweak it if you run into trouble (just mail opa-discuss@lists.mcs.anl.gov).
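As a concrete illustration of the GCC-atomic-intrinsics backend mentioned in the platform list, a CAS retry loop built on the `__sync` builtins (available in GCC 4.1+) might look like this; it is a sketch of the general pattern, not OPA's actual API:

```c
#include <assert.h>   /* for the quick checks below */

/* Atomically add `delta` to `*p` with a compare-and-swap retry loop.
   __sync_bool_compare_and_swap returns nonzero iff *p still held
   `expected` and was atomically replaced by `desired`; otherwise some
   other thread won the race and we retry with the fresh value.
   Returns the value that ended up stored. */
static int cas_add(volatile int *p, int delta)
{
    int expected, desired;
    do {
        expected = *p;
        desired = expected + delta;
    } while (!__sync_bool_compare_and_swap(p, expected, desired));
    return desired;
}
```

The same loop shape works for any read-modify-write you need to express on top of a bare CAS primitive, which is exactly why libraries like OPA can expose a small core and build the rest on it.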
Portable Compare And Swap (atomic operations) C/C++ library?
[ "", "c++", "c", "multithreading", "portability", "lock-free", "" ]
How do I get the installation path for a given instance of SQL Server (default and name instances)
```
// requires: using Microsoft.Win32; using System.IO;
using(RegistryKey sqlServerKey = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\Microsoft SQL Server"))
{
    foreach (string subKeyName in sqlServerKey.GetSubKeyNames())
    {
        if(subKeyName.StartsWith("MSSQL."))
        {
            using(RegistryKey instanceKey = sqlServerKey.OpenSubKey(subKeyName))
            {
                // the default value of the instance key holds the instance name
                string instanceName = instanceKey.GetValue("").ToString();
                if (instanceName == "MSSQLSERVER")//say
                {
                    string path = instanceKey.OpenSubKey(@"Setup").GetValue("SQLBinRoot").ToString();
                    // note: the server binary is "sqlservr.exe", not "sqlserver.exe"
                    path = Path.Combine(path, "sqlservr.exe");
                    return path;
                }
            }
        }
    }
}
```
If you have the connection string, you may select the Directory with SQL ``` private string ServerRootDirectory(string connString) { string path = string.Empty; using (SqlConnection con = new SqlConnection(connString)) { con.Open(); SqlCommand cmd = new SqlCommand(); cmd.Connection = con; cmd.CommandText = string.Format(@"DECLARE @InstanceName varchar(100), @InstanceLocation varchar(100), @InstancePath varchar(100) SELECT @InstanceName = convert(varchar, ServerProperty('InstanceName')) EXEC master..xp_regread @rootkey='HKEY_LOCAL_MACHINE', @key='Software\Microsoft\Microsoft SQL Server\Instance Names\SQL', @value_name=@InstanceName, @value=@InstanceLocation OUTPUT SELECT @InstanceLocation = 'Software\Microsoft\Microsoft SQL Server\'+@InstanceLocation+'\Setup' EXEC master..xp_regread @rootkey='HKEY_LOCAL_MACHINE', @key=@InstanceLocation, @value_name='SQLPath', @value=@InstancePath OUTPUT SELECT @InstancePath as RootDirectoryPath"); path = (string)cmd.ExecuteScalar(); con.Close(); } return path; } ``` Output of the above code: > c:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL
C# How to get SQL Server installation path programatically?
[ "", "c#", "sql-server", "" ]
Given the following form: ``` <form> <input name="foo" value="bar"> <input name="hello" value="hello world"> </form> ``` I can use the `$.param( .. )` construct to serialize the form: ``` $.param( $('form input') ) => foo=bar&hello=hello+world ``` How can I deserialize the above String with JavaScript and get a hash back? For example, ``` $.magicFunction("foo=bar&hello=hello+world") => {'foo' : 'bar', 'hello' : 'hello world'} ``` Reference: [`jQuery.param( obj )`](http://api.jquery.com/jQuery.param/#jQuery-param-obj).
You should use [jQuery BBQ](http://benalman.com/projects/jquery-bbq-plugin/)'s [deparam](http://benalman.com/code/projects/jquery-bbq/examples/deparam/) function. It's well-tested and documented.
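If pulling in a whole plugin is overkill, a minimal deparam-style sketch of the same idea (flat keys only, without BBQ's nested-key support; the name `deparam` is just borrowed for illustration) might look like:

```javascript
// Inverse of $.param for flat key/value pairs. Note that "+" means a
// space in form encoding, so it must be converted before decoding.
function deparam(query) {
    var result = {};
    var pairs = query.split("&");
    for (var i = 0; i < pairs.length; i++) {
        if (!pairs[i]) continue;                 // skip empty segments
        var pair = pairs[i].split("=");
        var key = decodeURIComponent(pair[0].replace(/\+/g, " "));
        var value = decodeURIComponent((pair[1] || "").replace(/\+/g, " "));
        result[key] = value;
    }
    return result;
}
```

For anything beyond flat single-valued keys (arrays, nested objects), the tested plugin above is the safer choice.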
This is a slightly modified version of a function I wrote a while ago to do something similar.

```
var QueryStringToHash = function QueryStringToHash (query) { var query_string = {}; var vars = query.split("&"); for (var i = 0; i < vars.length; i++) { var pair = vars[i].split("="); // "+" means a space in form encoding, so convert it before decoding pair[0] = decodeURIComponent(pair[0].replace(/\+/g, " ")); pair[1] = decodeURIComponent(pair[1].replace(/\+/g, " ")); // If first entry with this name if (typeof query_string[pair[0]] === "undefined") { query_string[pair[0]] = pair[1]; // If second entry with this name } else if (typeof query_string[pair[0]] === "string") { var arr = [ query_string[pair[0]], pair[1] ]; query_string[pair[0]] = arr; // If third or later entry with this name } else { query_string[pair[0]].push(pair[1]); } } return query_string; }; ```
The $.param( ) inverse function in JavaScript / jQuery
[ "", "javascript", "jquery", "" ]
How are floating points represented and interpreted by a compiler. I am trying to understand that so I can easily interpret what byte array would mean for floats and doubles. Thanks
To actually interpret it you would probably not want to treat it as bytes anyway, because mantissa boundaries don't align to an 8-bit boundary. Something along the lines of: ``` mantisa = (*(unsigned int *)&floatVal) & MANTISA_MASK; exp = ((*(unsigned int *)&floatVal) & EXP_MASK ) >> EXP_SHIFT; sign = ((*(unsigned int *)&floatVal) & SIGN_MASK ) >> SIGN_SHIFT; ``` Would let you pull it apart to play with the juicy center. EDIT: ``` #include <stdio.h> int main(void) { float a = 4; unsigned int exp,sign,mantisa; int i; for(i = 0;i<4;i++) { exp = (*((unsigned int *)&a) >>23) & 0xFF; sign = (*((unsigned int *)&a) >>31) & 0x01; mantisa = (*((unsigned int *)&a)) & 0x7FFFFF | 0x800000; printf("a = %04x\r\n",*((unsigned int *)&a)); printf("a = %f\r\n",a); printf("exp = %i, %02x\r\n",exp,exp); printf("sign = %i, %02x\r\n",sign,sign); printf("mantisa = %i, %02x\r\n\r\n",mantisa,mantisa); a = -a / 2; } return 0; } ``` Produces: ``` a = 40800000 a = 4.000000 exp = 129, 81 sign = 0, 00 mantisa = 8388608, 800000 a = c0000000 a = -2.000000 exp = 128, 80 sign = 1, 01 mantisa = 8388608, 800000 a = 3f800000 a = 1.000000 exp = 127, 7f sign = 0, 00 mantisa = 8388608, 800000 a = bf000000 a = -0.500000 exp = 126, 7e sign = 1, 01 mantisa = 8388608, 800000 Press any key to continue . . . ```
Try this link: <http://en.wikipedia.org/wiki/IEEE_754> I just found that this might be a little more helpful: <http://en.wikipedia.org/wiki/IEEE_754-1985> This is the IEEE-Standard for floating point numbers. There is one from 1985 and a revised edition from 2008. Float is 32bit, double is 64bit (explained in the second link). --- Edit: Thx to the comment by Don, here's the link to Intels 80bit floating point description: <http://en.wikipedia.org/wiki/Extended_precision>
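To tie the IEEE 754 layout described in those articles back to code, here is a small sketch that extracts the three fields of a 32-bit float; `memcpy` is used rather than pointer casting to sidestep strict-aliasing concerns:

```c
#include <assert.h>   /* for the quick checks below */
#include <stdint.h>
#include <string.h>

/* Split an IEEE 754 single-precision float into its three bit fields.
   memcpy reinterprets the bytes without the aliasing issues of the
   (*(unsigned int *)&f) idiom. */
static void float_fields(float f, uint32_t *sign, uint32_t *exponent,
                         uint32_t *mantissa)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    *sign     = (bits >> 31) & 0x1u;    /* 1 bit                      */
    *exponent = (bits >> 23) & 0xFFu;   /* 8 bits, biased by 127      */
    *mantissa = bits & 0x7FFFFFu;       /* 23 bits, implicit leading 1 */
}
```

For example, 1.0 is +1.0 × 2^(127−127), so its sign is 0, its biased exponent is 127, and its stored mantissa is 0.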
How are float and doubles represented in C++ (gcc)?
[ "", "c++", "computer-science", "floating-point", "" ]
I wrote a page in JavaScript that has over 1000 lines of code, so the page is quite heavy. When I run the page in Firefox, it tells me: ``` A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete. Script: http://localhost:5070/resources/ir.pnusn.branch.ui.pages.BranchMap/scripts/MapAddress.js:151 ``` and I have this at MapAddress.js:151: ``` function fillAddress(number) { 150: var filedNum=0; 151: while(document.getElementById("choice4")!=null) document.getElementById("choice4").click(); nextId=1; for(i=0;;i++) { var cookieChoiceName=number+"-"+filedNum+"-0"; var cookieChoiceValue=getCookie(cookieChoiceName); if(cookieChoiceValue==null) { //alert(document.getElementById("choice4")); break; } var cookieFieldName=number+"-"+filedNum+"-1"; var cookieFieldValue=getCookie(cookieFieldName); if(cookieFieldValue==null) { alert(document.getElementById("choice4")); break; } if(cookieChoiceValue!= "-1" && cookieChoiceValue!="-2" && document.getElementById("choice"+cookieChoiceValue)!=null) { document.getElementById("choice"+cookieChoiceValue).click(); } var value=utf8_decode(unescape(cookieFieldValue)); var finalval=replacePluse(value); // var Resvalue=value.substr(1,value.length-1); document.getElementById("txt"+(filedNum-1)).value=finalval; filedNum++; } } ``` When I press Continue in the warning window, the code works correctly. But when I increase dom.max_script_run_time in the Firefox configuration it does not change anything, and after that when I press Continue again it works. What's the problem? If you want more information about the code, tell me and I'll post it here.
Your `while` is probably an infinite loop. The condition you are using will always be true as long as an element with id `#choice4` is present. Since you are not removing this element inside the loop, it will never finish.
This line looks suspiciously like a infinite loop to me: ``` while(document.getElementById("choice4")!=null) document.getElementById("choice4").click(); ```
JavaScript causing "A script on this page may be busy, or it may have stopped responding." error in Firefox
[ "", "javascript", "firefox", "" ]
I have a C application and I want to include a scripting language to put certain functionality into scripts. I just have no experience with that and don't know exactly where to start (still learning C and trying to understand the application). How does embedding and communication between my app and the scripts actually work? I think I need the interpreter for the scripting language as a library (a .dll on Windows, or C source code that can be compiled into my application)? And then can I do something like ``` interpreter->run("myscript", some_object); ``` How would the script know about the properties of the object? Say my script wants to read or modify `some_object->some_field`? Are there any scripting languages that are optimized for that sort of embedding? I know that there is Lua, which is popular in game dev, and languages like Python, Perl, PHP or Ruby, which seem to be targeted more at stand-alone applications, but my knowledge of the deeper architecture does not allow more educated guesses :) (Tagged Lua and Python because they would be my favorites, but as long as it runs on x86 Windows, Linux and Mac OS X, I'm open to other scripting languages as long as they are easy to embed in a C application)
Here's the document from the Python website for embedding Python 2.6... <http://docs.python.org/extending/embedding.html>
Lua. It has a very small footprint, is rather fast, and I found it (subjectively) to have the most pleasant API to interact with C. If you want to touch the Lua objects from C - it's quite easy using the built-in APIs. If you want to touch C data from Lua - it's a bit more work; typically you'd need to make wrapper methods to expose what you want to allow the Lua to modify. A small code base and tight control over the number of default libraries introduced into your embedded interpreter also mean that you can make reasonable assumptions about security. The only odd part is the 1-based array numbering; however, it was not as big a deal as I thought, given the existence of iterators. How to integrate with C: the distribution tarball for Lua has a directory "etc" with a few very useful examples that should quickly get you started. Specifically - etc/min.c shows how to start up an interpreter, make it interpret the file, and make it call a C function ('print' in that case). From there on you can continue by reading the Lua documentation and the source of the standard libraries included with the distribution.
How to implement a scripting language into a C application?
[ "", "python", "c", "scripting", "lua", "" ]
Suppose I have a number of elements to store like: ``` fruitList = "apple", "orange", "banana", "kiwi", ... ``` How would you store these in XML? ``` <FruitList>"apple", "orange", "banana", "kiwi"</FruitList> ``` OR ``` <Fruits Type="Expensive" List="apple", "orange", "banana", "kiwi"> </Fruits> ``` Is there a better way? Whatever method is chosen, how can I parse the list easily so that the parsing doesn't need to change if the formatting of the items is changed to something like: ``` <FruitList> "apple", "orange", "banana", "kiwi" </FruitList> ```
``` <Fruits> <Fruit>Apple</Fruit> <Fruit>Orange</Fruit> </Fruits> ```
If you're looking to store application settings, then take a look at the types in the [System.Configuration Namespace](http://msdn.microsoft.com/en-us/library/system.configuration.aspx), or look into using [Application Settings](http://msdn.microsoft.com/en-us/library/a65txexh.aspx).
How can I store multiple items in an XML file for easy parsing?
[ "", "c#", ".net", "xml", "" ]
I'd have to say I'm no expert on using the STL. Here's my problem, I have a class Called LdapClientManager which maintains a number of LDAP clients that are managed by ID. The container holding the LdapClients is declared as a member variable i.e. ``` typedef std::map<int, LdapClient *> LdapClientMap; LdapClientMap _ldapClientMap; ``` The following function fails to compile with the error: ``` LdapClient * LdapClientManager::getLdapClient(unsigned int templateID) { // Do we have an LdapClient LdapClientMap::const_iterator it = _ldapClientMap.find(templateID); if (it == std::map::end) { // no existing client, lets create it LdapClient * ldapClient = new LdapClient(); if (ldapClient == NULL) { // TODO: handle out of memory condition } _ldapClientMap[templateID] = ldapClient; return ldapClient; } return it->second; } ``` Unfortunately I get the following error at compile time, what does it mean. I haven't found a solution in google as yet. LdapClientManager.cc: In member function `LdapClient* LdapClientManager::getLdapClient(unsigned int)': LdapClientManager.cc:33:`template class std::map' used without template parameters
Replace `std::map::end` with `_ldapClientMap.end()`. Also, `new` never returns 0, it throws an exception if the allocation fails. Note that the program can be made much shorter. ``` LdapClient * LdapClientManager::getLdapClient(unsigned int templateID) { LdapClient *& value = _ldapClientMap[templateID]; if (value == 0) value = new LdapClient(); return value; } ```
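The `operator[]` trick in the shorter version above deserves a note: indexing a `std::map` with a missing key inserts a value-initialized element (a null pointer here), so lookup and insertion collapse into one step. A generic sketch with the LDAP types swapped for plain `int` payloads:

```cpp
#include <cassert>  // for the quick checks below
#include <map>

// Find-or-create in one map lookup: operator[] inserts a
// value-initialized mapped value (null for pointers) when the key is
// missing, and the returned reference lets us store through it in place.
int* get_or_create(std::map<int, int*>& cache, int key, int initial)
{
    int*& slot = cache[key];      // reference into the map itself
    if (slot == 0)
        slot = new int(initial);  // miss: create once and cache it
    return slot;
}
```

Compared with the original find-then-insert version, this touches the tree once instead of twice, at the cost of always inserting an entry for the key.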
It means exactly what it says it means. `std::map` is a *class template*. It is not a class in and of itself. It needs template parameters, like you used when you defined the `LdapClientMap` type. Later, you say `std::map::end`, and the compiler says that needs parameters, too. But you probably meant `_ldapClientMap.end()`. Each map has its own end; `end` is not a static function, so you need to call it on an instance. **If** it were static, you would have needed to provide template parameters, just like when you defined the type: `std::map<int, LdapClient*>::end`.
"class std::map used without template paramaters" error
[ "", "c++", "gcc", "stl", "" ]
**Please note that this is asking a question about constructors, not about classes which handle time.** Suppose I have a class like this: ``` class Time { protected: unsigned int m_hour; unsigned int m_minute; unsigned int m_second; public: Time(unsigned int hour, unsigned int minute, unsigned int second); }; ``` While I would want a to be constructed successfully, I would want the constructor of b to fail. ``` Time a = Time(12,34,56); Time b = Time(12,34,65); // second is larger than 60 ``` However, this is not possible, because constructors do not return any values and will always succeed. How would the constructor tell the program that it is not happy? I have thought of a few methods: 1. have the constructor throw an exception, and have handlers in the calling function to handle it. 2. have a flag in the class and set it to true only if the values are acceptable by the constructor, and have the program check the flag immediately after construction. 3. have a separate (probably static) function to call to check the input parameters immediately before calling the constructor. 4. redesign the class so that it can be constructed from any input parameters. Which of these methods is most common in industry? Or is there anything I may have missed?
The typical solution is to throw an exception. The logic behind that is the following: the constructor is a method that transforms a chunk of memory into a valid object. Either it succeeds (finishes normally) and you have a valid object or you need some non-ignorable indicator of a problem. Exceptions are the only way to make the problem non-ignorable in C++.
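Applied to the `Time` class from the question, a minimal sketch of the throwing-constructor approach might look like this (`std::invalid_argument` is one reasonable exception choice, not the only one):

```cpp
#include <cassert>    // for the quick checks below
#include <stdexcept>

// Variant of the questioner's Time class where the constructor throws,
// so a Time object either constructs fully valid or never exists at all.
class Time {
    unsigned int m_hour, m_minute, m_second;
public:
    Time(unsigned int hour, unsigned int minute, unsigned int second)
        : m_hour(hour), m_minute(minute), m_second(second)
    {
        if (hour > 23 || minute > 59 || second > 59)
            throw std::invalid_argument("Time: field out of range");
    }
    unsigned int hour() const { return m_hour; }
    unsigned int minute() const { return m_minute; }
    unsigned int second() const { return m_second; }
};
```

Because the exception propagates out of the constructor, no destructor runs for the half-built object and the caller can never hold an invalid `Time`.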
Another alternative, for completeness: * Redesign the interface so that invalid values are "impossible" In your "Time" class, for example, you could have: ``` class Time{ public: Time(Hours h, Minutes m, Seconds s); //... }; ``` Hours, Minutes and Seconds being bounded values. For example, with the (not yet)Boost Constrained Value library: ``` typedef bounded_int<unsigned int, 0, 23>::type Hours; typedef bounded_int<unsigned int, 0, 59>::type Minutes; typedef bounded_int<unsigned int, 0, 59>::type Seconds; ```
How to handle incorrect values in a constructor?
[ "", "c++", "error-handling", "constructor", "return-value", "" ]
I'm using MS Query Analyzer (as part of SQL Server 2000) to write T-SQL scripts to pull data out of a DB. This involves querying some tables, iterating through the results (using a cursor), some basic processing of the results and putting the processed data into another table. This is working pretty well as I can view the tables and stored procedures and edit/run the scripts against them all in Query Analyzer. The main issue I have is the script editing kinda sux. It's pretty much notepad with syntax highlighting. There's very little understanding of the databases I'm scripting against. And none of the other programming aids that most IDEs provide, eg. code completion on keywords and database table and column names would be very handy. My question is: are there any tools that provide a better T-SQL scripting environment, while maintaining the same (or better) level of integration with SQL Server? Free/open source tools would be preferred, but commerical ones would also be considered.
I think it's called SQL Server 2008 Management Studio :). It has IntelliSense support now. If not, you can try an add-in from Red Gate like SQL Prompt.
I've tried this and personally think that is a better offer than Microsoft one, plus its free! <http://www.albahari.com/queryexpress.aspx> Hope it serves!
SQL Scripting Tool That Beats MS Query Analyzer?
[ "", "sql", "sql-server", "t-sql", "scripting", "" ]
Are attributes applied to an abstract method in a base class applied to the overridden versions in the child classes? I hope the question is clear enough without an example.
It depends on how the attribute itself is declared - see [`AttributeUsageAttribute.Inherited`](http://msdn.microsoft.com/en-us/library/system.attributeusageattribute.inherited.aspx) property.
It depends on the attribute. An attribute class's declaration carries an [AttributeUsage] attribute whose `Inherited` property determines whether the attribute is inherited in derived classes. Check out this sample: ``` [global::System.AttributeUsage(AttributeTargets.Method, Inherited = true, AllowMultiple = false)] public sealed class MyAttribute : Attribute { public MyAttribute (string FieldName) { //constructor. } } ```
Are Method Attributes Inherited in C#?
[ "", "c#", "inheritance", "attributes", "methods", "" ]
I am in the process of learning ASP.NET MVC and I am learning it by walking through nerd dinner application. I am having trouble understanding the ASPX and ASCX files. The way of writing the code in ASPX files seems pretty weird (code is split in multiple lines). Please can you suggest any online tutorial/guide/article which can explain how to write ASPX pages? EDIT: Another question: When a variable like "Request" of type HttpRequest is used in an aspx page, where does it come from? I mean where is the class, where is the variable declaration?
The first chapter of "[Professional ASP.NET MVC 1.0](http://tinyurl.com/aspnetmvc)" is free. This chapter will take you through creating the Nerd Dinner application. A copy of the first chapter is available from: <http://aspnetmvcbook.s3.amazonaws.com/aspnetmvc-nerdinner_v1.pdf>. Running through the tutorial should make getting up to speed a lot easier.
`Request` is of type `HttpRequest`, and it is per-request: a new instance is created for every HTTP request made by the client. Every aspx page inherits from the [Page class](http://msdn.microsoft.com/en-us/library/system.web.ui.page_properties.aspx), like: ``` public class Default : Page {..} ``` The `Request` property is defined in the `Page` class, the same as others like `Response`, `Context`, etc. (and it is the same for ascx files, whose parent class is `UserControl`). They can all be accessed through `HttpContext.Current` (which returns the context of the current request), but they are defined on `Page` and `UserControl` for easier access.
How to learn/understand ASPX or ASCX pages
[ "", "c#", "asp.net", "asp.net-mvc", "" ]
I have a simple C# application that has to write some values into Excel ranges of a specific worksheet. I create an instance of the Excel application if one does not exist, but if one does exist I want to activate it and get an instance of it to use in my code. I use this code to create a new application: ``` Microsoft.Office.Interop.Excel app = new Microsoft.Office.Interop.Excel.Application(); app.Visible = true; ``` To get the handle of the active Excel window I use this API: ``` [DllImportAttribute("User32.dll")] private static extern int FindWindow(String ClassName, String WindowName); ``` How can I get an instance of the Excel application from a handle? ``` int hWnd = FindWindow(null, "Microsoft Excel - MySheet.xlsx"); Microsoft.Office.Interop.Excel app = ....(hWnd) ```
Use the following code to get the first running instance of Excel: ``` oExcelApp = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application"); ``` **Example** ``` public Excel.Application StartExcel() { Excel.Application instance = null; try { instance = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application"); } catch (System.Runtime.InteropServices.COMException ex) { instance = new Excel.ApplicationClass(); } return instance; } ```
There might be more than one Excel instance running. GetActiveObject(...) looks in the Running Object Table (ROT) and would give you the last Excel instance that was opened - not necessarily the one corresponding with the window handle you have. You're looking for AccessibleObjectFromWindow(..). The [Andrew Whitechapel post](http://blogs.msdn.com/b/andreww/archive/2008/11/30/starting-or-connecting-to-office-apps.aspx) linked to in the other answer shows how to use this function. Another link - <http://blogs.officezealot.com/whitechapel/archive/2005/04/10/4514.aspx>.
Get instance of Excel application with C# by Handle
[ "", "c#", "excel", "office-interop", "" ]
I'm writing an asp.net user control. It has a property, FurtherReadingPage, and two controls bound to it: ObjectDataSource and a Repeater. Inside the Repeater I would like to display a hyperlink with an href property set to something like `FurtherReadingPage + "?id=" + Eval("Id")`. I don't know how to do it inside the page's markup. I can use `<% Eval("Id") %>` or `<% Response.Write(FurtherReadingPage + "?id=") %>` alone but I don't know how to mix them.
You can do it like this - ``` <asp:Hyperlink runat="Server" ID="hlLink" NavigateUrl='<%# FurtherReadingPage + "?Id=" + DataBinder.Eval(Container.DataItem, "Id") %>' /> ```
You have a few different tags: `<%` executes the code inside: ``` <% int id = int.Parse(Request["id"]); %> ``` `<%=` writes out the code inside: ``` <%=id %> <!-- note no ; --> <!-- this is shorthand for: --> <% Response.Write(id); %> ``` Both of these break up the normal flow when rendered on a page; for instance, if you use them in a normal Asp.net `<head runat="server">` you'll get problems. `<%#` databinding: ``` <%# Eval("id") %> ``` This allows you to specify the bindings for controls that Asp.net WebForms render as a collection (rather than the literal controls that you can use `<%=` with), for instance: ``` <!-- this could be inside a repeater or another control --> <!-- note the single quotes: the <%# %> expression must be the entire attribute value, and it already contains double quotes --> <asp:Hyperlink runat="server" ID="demo" NavigateUrl='<%# "page.aspx?id=" + Eval("id") %>' /> <script runat="server"> //without this bind the <%# will be ignored void Page_Load( object sender, EventArgs e ) { demo.DataBind(); //or repeaterWithManyLinks.DataBind(); } </script> ``` For your specific case you either: * Use a repeater and `<%# Eval(...) %>` with `repeater.DataBind();` or * Use a foreach loop (`<% foreach(... %>`) with `<%= ... %>`
How to use c# code inside <% ... %> tags on asp.net page?
[ "", "c#", "asp.net", "" ]
I'd like to make a textarea resizable. I tried out jQuery UI's "resizable" functionality, but, much as I love jQuery, the resizable thing isn't quite there yet. I want the textarea to resize heightwise only, via a handle at the bottom-center. jQuery lets you resize heightwise only, but evidently doesn't let you place a handle anywhere other than lower right corner, despite what the docs say. Does anyone know of an alternate library for this functionality?
Are you sure about this? The [handles](http://jqueryui.com/demos/resizable/#option-handles) option worked fine for me. ``` <!DOCTYPE html> <html> <head> <title></title> <link href="css/ui-lightness/jquery-ui-1.7.2.custom.css" rel="stylesheet" type="text/css"> </head> <body> <div id="welcome" class="ui-widget-content">Hello, world!</div> <script src="js/jquery-1.3.2.min.js" type="text/javascript"></script> <script src="js/jquery-ui-1.7.2.custom.min.js" type="text/javascript"></script> <script type="text/javascript"> $(function() { $("#welcome").resizable({ handles: "n, s" }); }); </script> </body> </html> ``` You can style the ui-resizable-handle, ui-resizable-s, and ui-resizable-n classes.
[4umi](http://4umi.com/web/javascript/textarearesize.php) does this rather well.
What's a good JS library for making things resizable?
[ "", "javascript", "jquery", "resize", "" ]
I haven't started writing any code for this program, but here's what I need to do in C#/ASP.NET, both of which I'm just starting to learn. I have a DIV on a page that I want to update with information from an MS SQL Server every five seconds. 1. Would it be better to create my countdown timer on the JavaScript or C# side? 2. Would UpdatePanel or creating a Page Method be more efficient for updating the DIV with the database information? Load times are a serious issue for this application, so the lighter and faster the solution, the better.
1) You'll have to create the countdown timer on the client in javascript since the client is disconnected from your server-side code except when it explicitly sends requests to it. 2) `UpdatePanel` will be less efficient, because it posts all of your page's form values (including any `ViewState` or `EventValidation` material), when all you might need to pass to the server is a few bytes (for an id, for example). However, this difference in efficiency may not be significant if your query times are high (or the cost of transporting/rendering the data is high).
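A sketch of the client-side timer from point 1; the tick logic is kept separate from `setInterval` so it can be exercised outside a browser, and the expiry callback (where the AJAX refresh would go) is a placeholder:

```javascript
// Countdown that fires `onExpire` every time `seconds` ticks elapse,
// then restarts. In a page you would drive it with
// setInterval(function () { c.tick(); }, 1000).
function makeCountdown(seconds, onExpire) {
    var remaining = seconds;
    return {
        tick: function () {
            remaining -= 1;
            if (remaining <= 0) {
                onExpire();          // e.g. issue the page-method request here
                remaining = seconds; // restart the five-second cycle
            }
        },
        remaining: function () { return remaining; }
    };
}
```

Keeping the countdown state in a closure like this means the interval handler stays a one-liner, and the refresh logic can be swapped between an UpdatePanel trigger and a lightweight page-method call without touching the timer.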
The UpdatePanel will also use a timer on the client side to refresh periodically. So it is better to use an efficient WebMethod rather than sending and retrieving all of the form's values with an UpdatePanel. For more information regarding the pros and cons of UpdatePanel and WebMethods, you should visit Dave's website (<http://www.encosia.com>)
.NET Page Method vs UpdatePanel, Which is Better for Updating and Refreshing
[ "", "c#", "asp.net", "updatepanel", "pagemethods", "" ]
I have been working in a business writing advanced software apps, and obviously I'm provided with access to our SQL Server and all the connection strings needed. This is fine for my job now, but what if I wanted to do this for a new (very small) business? If I wanted to purchase a small database server and set up a piece of software that talks to the databases on this server, how would I go about a) talking and connecting to the server in code (C#), and b) what would I need regarding things like internet/phone connections etc. to make this possible? Edit: the reason it would need a server is that it would need to be accessed from 2 or 3 different computers in different locations.
Actually there are quite a few ways to create a database connection, but I would say one of the easiest ways is to utilize the methods and classes found in [System.Data.SqlClient](http://msdn.microsoft.com/en-us/library/system.data.sqlclient.aspx). A basic connection would look something like the following:

```
using System.Data.SqlClient;

namespace YourNamespace
{
    public class DatabaseConnect
    {
        public DataType getData()
        {
            DataType dataObj = new DataType();

            using (SqlConnection testConn = new SqlConnection("connection string here"))
            {
                SqlCommand testCommand = new SqlCommand("select * from dataTable", testConn);
                testConn.Open();

                using (SqlDataReader reader = testCommand.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        //Get data from reader and set into DataType object
                    }
                }
            }

            return dataObj;
        }
    }
}
```

Keep in mind, this is a very, very simple version of a connection for reading data, but it should give you an idea of what you need to do. Make sure to use a "using" or "try/catch" statement to ensure that the connection is closed and resources are freed after each use (whether it successfully gets data or not).

As for your other question about what equipment you may require: in the beginning I would suggest just creating the database on your local machine and running tests from there. Once you are confident with trading data back and forth, feel free to move the database to another box or an online server. Any internet connection type should suffice, though I can't vouch for dial-up, haven't used it in years.

One final note, if you do happen to decide to move to an online server system, make sure that the service you use allows for outside connections. Certain services use shared server systems, and force users to use their internal database interfaces to manage and write to the database.

--- EDIT ---

As for the server system itself, build up a separate box on your local network that you can see, and load up the database software of your choice.
Since you are using C#, it would probably be easiest to go with Microsoft SQL Server 2005 / 2008. The installation is rather straightforward, and it will prompt you to automatically create your first database while installing. After installation it will be up to you to add in the tables, stored procedures, custom functions, etc... Once your base structure is created, go ahead and use the above code to make some simple connections. Since you are familiar with the above practices then I'm sure you know that all you really need to do is target the server machine and database in the connection string to be on your way.
In case your application is small (by small I mean the usage of resources like CPU and memory) then your SQL Server can reside on the same box. Else you need to have a separate server box for your database and connect to that from your application. In this case, preferably your database box and application box would be on the local area network. Check this link for having a connection to SQL Server from C# code - <http://www.codeproject.com/KB/database/sql_in_csharp.aspx> cheers
Connecting to a database from the beginning
[ "c#" ]
I am using the Reflection classes in order to get all the fields inside a certain object. My problem however is that it works perfectly when the fields are inside a normal class, like:

```
class test
{
    string test1 = string.Empty;
    string test2 = string.Empty;
}
```

Here I get both test1 and test2, my problem is that I use abstraction and thus several classes combined. I got something like:

```
class test3 : test2
{
    string test4 = string.Empty;
    string test5 = string.Empty;
}

class test2 : test1
{
    string test2 = string.Empty;
    string test3 = string.Empty;
}

class test1
{
    string test0 = string.Empty;
    string test1 = string.Empty;
}
```

But when I run it, I don't get the fields back from the `GetType().GetFields(BindingFlags.Default)`. Every one of those fields also has a property, `get; set;`, attached to it. When I run the code, I get the properties all the way back to test1 but not the actual fields. This is the code that I'm trying to get the fields with:

```
FieldInfo[] fields = Obj.GetType().GetFields(BindingFlags.Default);

foreach (FieldInfo field in fields)
```

I have also tried:

```
FieldInfo[] fields = Obj.GetType().GetFields(BindingFlags.Public | BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static);
```

I use the same code for the properties:

```
PropertyInfo[] properties = Obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Static);

foreach (PropertyInfo property in properties)
```

Any ideas why I get the properties from the abstracted classes but not the fields?
Edit: To get *private* members of the base type, you have to: ``` typeof(T).BaseType.GetFields(...) ``` Edit again: Win. Edit 3/22/13: Used `Concat` instead of `Union`. Since we are specifying `BindingFlags.DeclaredOnly` and a type's `BaseType` cannot equal itself, `Union` is not needed and is more expensive. ``` public static IEnumerable<FieldInfo> GetAllFields(Type t) { if (t == null) return Enumerable.Empty<FieldInfo>(); BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance | BindingFlags.DeclaredOnly; return t.GetFields(flags).Concat(GetAllFields(t.BaseType)); } ```
A type that inherits another type cannot see private parts of that other type; it can see protected, internal and public parts. Consider the following code:

```
class A
{
    // note that this field is private
    string PrivateString = string.Empty;

    // protected field
    protected string ProtectedString = string.Empty;
}

class B : A { }

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("B Fields:");
        B b = new B();
        b.GetType()
            .GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
            .ToList()
            .ForEach(f => Console.WriteLine(f.Name));

        Console.WriteLine("A Fields:");
        A a = new A();
        a.GetType()
            .GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
            .ToList()
            .ForEach(f => Console.WriteLine(f.Name));
    }
}
```

The output of this program is the following:

```
B Fields:
ProtectedString
A Fields:
PrivateString
ProtectedString
```

So, the type `A` has two fields; `PrivateString` and `ProtectedString`. Type `B` has one; `ProtectedString`, that it inherits from `A`. If you wish to "reach" `PrivateString` through the type `B`, you will need to navigate to its base type (`b.GetType().BaseType`).

Note though, that even if the type `B` reports to have a field called `ProtectedString`, this field is still not declared in `B`; it is declared in `A`. This can be examined by adding `BindingFlags.DeclaredOnly` to the `GetFields` calls in the above sample program; `GetFields` will return no fields for `B`, and two for `A`.

Translated to your code sample, this means that the type `test3` does not contain the fields `test2` and `test3`, since they are private to the type `test2` (the similarity of the field names and type names makes that sentence somewhat confusing, I am afraid).
Not getting fields from GetType().GetFields with BindingFlag.Default
[ "c#", "reflection", "bindingflags" ]
What is the best way to update an XElement (**update the value of itemNumber of element Pen**) in this XML? ``` <?xml version="1.0" encoding="utf-8"?> <MyStore> <Category> <itemName>Pen</itemName> <itemNumber>12</itemNumber> </Category> <Category> <itemName>Paper</itemName> <itemNumber>23</itemNumber> </Category> </MyStore> ```
```
XDocument doc;
...
XElement penItemNumber = doc
    .Elements("MyStore")
    .Elements("Category")
    .Elements("itemName")
    .Single(itemName => itemName.Value == "Pen")
    .Parent
    .Element("itemNumber");

penItemNumber.Value = "123";
```
You could find it and update it using LINQ to XML:

```
XElement root = XElement.Load("myXml.xml");
XElement penCategory = (from category in root.Descendants("Category")
                        where category.Element("itemName") != null
                           && category.Element("itemName").Value == "Pen"
                        select category).Single();

penCategory.Element("itemNumber").Value = updatedValue;
```
How can I update a specific XElement?
[ "c#", "xml" ]
I have what amounts to an `Iterator<Integer>`... actually it's a `class Thing` that accepts a `Visitor<SomeObject>` and calls `visit()` for a subset of the `SomeObject`s it contains, and I have to implement `Visitor<SomeObject>` so it does something like this: ``` // somehow get all the Id's from each of the SomeObject that Thing lets me visit public int[] myIdExtractor(Thing thing) { SomeCollection c = new SomeCollection(); thing.visitObjects(new Visitor<SomeObject>() { public void visit(SomeObject obj) { c.add(obj.getId()); } } ); return convertToPrimitiveArray(c); } ``` I need to extract an `int[]` containing the results, and I'm not sure what to use for `SomeCollection` and `convertToPrimitiveArray`. The number of results is unknown ahead of time and will be large (10K-500K). Is there anything that would be a better choice than using `ArrayList<Integer>` for `SomeCollection`, and this: ``` public int[] convertToPrimitiveArray(List<Integer> ints) { int N = ints.size(); int[] array = new int[N]; int j = 0; for (Integer i : ints) { array[j++] = i; } return array; } ``` Efficiency and memory usage are of some concern.
It's not too difficult to come up with a class that collects `int`s in an array (even if you are not using some library which does it for you).

```
public class IntBuffer {
    private int[] values = new int[10];
    private int size = 0;

    public void add(int value) {
        if (!(size < values.length)) {
            values = java.util.Arrays.copyOf(values, values.length*2);
        }
        values[size++] = value;
    }

    public int[] toArray() {
        return java.util.Arrays.copyOf(values, size);
    }
}
```

(Disclaimer: This is stackoverflow, I have not even attempted to compile this code.)

As an alternative you could use `DataOutputStream` to store the `int`s in a `ByteArrayOutputStream`.

```
final ByteArrayOutputStream byteOut = new ByteArrayOutputStream();
final DataOutputStream out = new DataOutputStream(byteOut);
...
out.writeInt(value);
...
out.flush();
final byte[] bytes = byteOut.toByteArray();
final int[] ints = new int[bytes.length/4];
final ByteArrayInputStream byteIn = new ByteArrayInputStream(bytes);
final DataInputStream in = new DataInputStream(byteIn);
for (int ct=0; ct<ints.length; ++ct) {
    ints[ct] = in.readInt();
}
```

(Disclaimer: This is stackoverflow, I have not even attempted to compile this code.)
You could look at something like [PCJ](http://pcj.sourceforge.net/) to handle this. That is a collections framework made for primitives.
How can I get an int[] out of an Iterator?
[ "java", "collections" ]
I have to parse large XML files in php, one of them is 6.5 MB and they could be even bigger. The SimpleXML extension as I've read, loads the entire file into an object, which may not be very efficient. In your experience, what would be the best way?
For a large file, you'll want to use a [SAX parser](http://php.net/xml) rather than a DOM parser.

With a DOM parser it will read in the whole file and load it into an object tree in memory. With a SAX parser, it will read the file sequentially and call your user-defined callback functions to handle the data (start tags, end tags, CDATA, etc.)

With a SAX parser you'll need to maintain state yourself (e.g. what tag you are currently in), which makes it a bit more complicated, but for a large file it will be much more efficient memory-wise.
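The same callback-driven, state-tracking approach can be sketched with Python's built-in `xml.sax` module (used here purely for illustration; PHP's `xml_parser_create()` and `xml_set_element_handler()` functions follow the same start-tag/character-data/end-tag callback pattern, and the `<title>` element name is invented for the example):

```python
import xml.sax

class TitleHandler(xml.sax.ContentHandler):
    """Collects <title> text while streaming; memory use stays flat."""
    def __init__(self):
        super().__init__()
        self.in_title = False   # state we must track ourselves
        self.current = []
        self.titles = []

    def startElement(self, name, attrs):
        if name == "title":
            self.in_title = True
            self.current = []

    def characters(self, content):
        # May be called several times per text node, so buffer pieces.
        if self.in_title:
            self.current.append(content)

    def endElement(self, name):
        if name == "title":
            self.titles.append("".join(self.current))
            self.in_title = False

handler = TitleHandler()
xml.sax.parseString(b"<feed><item><title>A</title></item>"
                    b"<item><title>B</title></item></feed>", handler)
print(handler.titles)  # ['A', 'B']
```

The key point is the `in_title` flag: with a streaming parser the handler has to remember where in the document it currently is, which is exactly the extra bookkeeping described above.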
My take on it: <https://github.com/prewk/XmlStreamer> A simple class that will extract all children of the XML root element while streaming the file. Tested on a 108 MB XML file from pubmed.com.

```
class SimpleXmlStreamer extends XmlStreamer {
    public function processNode($xmlString, $elementName, $nodeIndex) {
        $xml = simplexml_load_string($xmlString);

        // Do something with your SimpleXML object

        return true;
    }
}

$streamer = new SimpleXmlStreamer("myLargeXmlFile.xml");
$streamer->parse();
```
Best way to process large XML in PHP
[ "php", "xml", "parsing", "simplexml", "large-files" ]
I was pretty surprised when I discovered that there is no direct way to sort or perform a binary search on an `IList<T>`. Just like there are static methods to sort and perform a binary search on an Array, I think that it would be awfully helpful to have similar static methods that take an `IList<T>`.

Currently:

```
class Array
{
    static void Sort<T>(T[] array);
    static int BinarySearch<T>(T[] array, T item);
}
```

I wish they would add:

```
class List
{
    static void Sort<T>(IList<T> list);
    static int BinarySearch<T>(IList<T> list, T item);
}
```

I glanced at the .NET Framework 4.0 Beta SDK and there **still** doesn't appear to be a solution to this problem.

I know that I could work around this by creating an extension method that checks if it is a `List<T>` and then sort/search using the `List<T>` instance; however, if it is not an instance of a `List<T>`, then I have to perform a copy (which stinks for very large lists). I know I could do all of this, but why? Is there some reason they have intentionally omitted this feature?

To try to get this in the .NET 4.0 Framework, I have created a suggestion via Microsoft's Connect program. If you are frustrated like me about this issue, vote on it and maybe it will get added.

<https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=474201>
LINQ has an OrderBy method that works on all IEnumerable<T>, including IList<T>. You can accomplish the same thing using OrderBy.

```
// Order a list of addresses:
IList<string> list = ...
var orderedList = list.OrderBy(input => input);
```
I think there's a pretty good case for not including a sort method for `IList<T>`. First, it would create added complexity for those that want to implement an IList and second it would make it harder for the IList interface to conform to the [Interface Segregation Principle](http://www.lostechies.com/blogs/rhouston/archive/2008/03/14/ptom-the-interface-segregation-principle.aspx). Generally what I do if I need to perform a sort on an `IList<T>` is create a new `List<T>` and pass in the `IList<T>` as a parameter so for example: ``` public IList<Address> SortAddresses(IList<Address> addresses) { var sortedAddresses = new List<Address>(addresses); sortedAddresses.Sort(); return sortedAddresses; } ```
Why is there no Sort for IList<T>?!?! (edited)
[ "c#", ".net", "c#-4.0" ]
If I have a #define statement within a namespace as such: ``` namespace MyNamespace { #define SOME_VALUE 0xDEADBABE } ``` Am I correct in saying that the #define statement is not restricted to the namespace? Is the following the "correct" thing to do? ``` namespace MyNamespace { const unsigned int SOME_VALUE = 0xDEADBABE; } ```
Correct, `#define`s aren't bound by namespaces. `#define` is a [preprocessor](http://en.wikipedia.org/wiki/C_preprocessor) directive - it results in manipulation of the source file prior to compilation. Namespaces are used during the compilation step and the compiler has no insight into the `#define`s. You should try to avoid the preprocessor as much as possible. For constant values like this, prefer `const` over `#define`s.
I completely agree with the suggestions on the use of constants and the scope being unlimited for `#define`s. However, if you **do** have to use preprocessor `#define` lines, please cover them up correctly for the expected scope, ``` namespace MyNamespace { #define SOME_VALUE 0xDEADBABE // your code #undef SOME_VALUE } ``` Why `#defines`? I know one case where an embedded platform did not support constants in the code. There was no way to initialize them... It always helps to be more readable.
#define statements within a namespace
[ "c++", "namespaces" ]
### Original Question: I want to generate a Poisson process. If the number of arrivals by time *t* is *N(t)* and I have a Poisson distribution with parameter *λ* how do I generate *N(t)*? How would I do this in C++? ### Clarification: I originally wanted to generate the process using a Poisson distribution. But, I was confused about what parameter from the process I needed; I thought I could use *N(t)* but that tells me how many arrivals have occurred on the interval *(0,t]* which wasn't what I wanted. So, then I thought I could use *N(t2)-N(t1)* to get the number of arrivals on the interval *[t1,t2]*. Since *N(t)~Poisson(t x λ)* I could use *Poisson(t2 x λ)-Poisson(t1 x λ)* but I don't want the number of arrivals in an interval. Rather, I want to generate the explicit times that arrivals occur at. I could do this by making the interval *[t2,t1]* sufficiently small so that each interval has only one arrival (which occurs as *|t2-t1| -> 0*).
Here's sample code for generating Poisson samples using [C++ TR1](http://www.johndcook.com/cpp_TR1_random.html#poisson). If you want a Poisson *process*, times between arrivals are exponentially distributed, and exponential values can be generated trivially with the inverse CDF method: -k*log(u) where u is a uniform random variable and k is the mean of the exponential.
If you have a Poisson process with rate parameter L (meaning that, long term, there are L arrivals per second), then the inter-arrival times are exponentially distributed with mean 1/L. So the PDF is f(t) = L*exp(-Lt), and the CDF is F(t) = Prob(T < t) = 1 - exp(-Lt). So your problem changes to: how do I generate a random number t with distribution F(t) = 1 - exp(-Lt)?

Assuming the language you are using has a function (let's call it `rand()`) to generate random numbers uniformly distributed between 0 and 1, the inverse CDF technique reduces to calculating:

```
-log(rand()) / L
```

As python provides a function to generate exponentially distributed random numbers, you could simulate the first 10 events in a poisson process with an average rate of 15 arrivals per second like this:

```
import random
for i in range(1,10):
    print random.expovariate(15)
```

Note that that would generate the *inter*arrival times. If you wanted the arrival times, you would have to keep moving a time variable forward like this:

```
import random
t= 0
for i in range(1,10):
    t+= random.expovariate(15)
    print t
```
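Putting the two steps above together (inverse-CDF sampling of the inter-arrival times, then accumulating them into arrival times), a self-contained sketch that does not rely on `random.expovariate` might look like this; the rate and count are arbitrary example values:

```python
import math
import random

def poisson_arrival_times(rate, n):
    """First n arrival times of a Poisson process with the given rate.

    Each inter-arrival time is Exponential(rate), generated by inverting
    the CDF F(t) = 1 - exp(-rate*t):  t = -ln(1 - u) / rate, u ~ U[0, 1).
    """
    t = 0.0
    times = []
    for _ in range(n):
        u = random.random()                  # uniform in [0, 1)
        t += -math.log(1.0 - u) / rate       # exponential inter-arrival gap
        times.append(t)                      # running sum = arrival time
    return times

random.seed(1)
arrivals = poisson_arrival_times(15.0, 10)
print(len(arrivals))                                         # 10
print(all(b >= a for a, b in zip(arrivals, arrivals[1:])))   # True
```

Using `1 - u` keeps the argument to `log` in `(0, 1]`, avoiding a domain error in the (rare) case where `random.random()` returns exactly 0.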
How do I generate a Poisson Process?
[ "c++", "random", "poisson", "stochastic-process" ]
For the past few years I've continuously struggled with unit testing database code and all the pain that comes with it. I found this existing thread which I found very enlightening:

* [What's the best strategy for unit testing databases?](https://stackoverflow.com/questions/145131/whats-the-best-strategy-for-unit-testing-database-driven-applications)

The author of the accepted answer suggests that it might be useful to mock the entire database layer in order to validate the generated SQL. I didn't think much of it when I first read the answer a few months ago, but recently I have observed several bugs caused by incorrectly generated SQL, wrongly assigned fields, and so on. I do realize that JDBC is rather bloated and error-prone to use, but it isn't an option to switch to something different at this point.

The application in question is a batch processor of data feeds, and uses JDBC directly rather than an ORM. All JDBC code is separated into distinct DAO objects where each object has its own interface and stub, besides the actual implementations. This has allowed me to achieve good test coverage of the business layer, but the testing of the database layer is virtually non-existent.

**Is there an existing stub implementation of the JDBC (java.sql) interfaces that can be injected into DAO classes and used to validate the generated SQL and possibly send back some preprogrammed results?**
I don't know if you have seen it or not, but there's [MockRunner](http://mockrunner.sourceforge.net/). It provides many classes that implement the interfaces of JDBC (as well as other J2EE classes). Here are [the JDBC mock objects](http://mockrunner.sourceforge.net/doc/api/com/mockrunner/mock/jdbc/package-tree.html). There are also quite a few [examples](http://mockrunner.sourceforge.net/examplesjdbc.html).
It sounds like you're having issues in the DAO code itself? Otherwise, the DAO layer is the obvious place to do your mocking, but if you're trying to test the DAO, then you'll need to mock that which comes beneath. Personally, I tend to stay away from mocking large, complex libraries; if you really need to test the DAO layer directly and the DAO works directly with JDBC, you've got three obvious choices: 1. Run an integrated test that includes the DAO and JDBC along with a Database 2. Add a layer above JDBC with a thinner interface, better suited for mocking. 3. Use JDBC mocks either of your own writing, or some of the items listed above. I would almost always choose #1 or #2. Because there's a host of possibilities of errors in malformed SQL syntax and the like I tend to lean towards #1. I realize, however, that that's not what you're asking for. ;)
Is there a set of stubs/mocks for JDBC available anywhere?
[ "java", "unit-testing", "jdbc", "tdd" ]
Is there any way in **C#** to pass a **random** method as a parameter? To explain my question:

I want to write a simple Logger-Tool that reports the entering and leaving of a method **with** the passed arguments and the class and method name:

**The log file I'm aiming at:**

```
ENTERING: ClassOfDoom::MethodOfDoom( arg1={1} [int], arg2={true} [bool] )
LEAVING: ClassOfDoom::MethodOfDoom RETURNING 1 [int]
```

**The code I have in mind:**

```
class ClassOfDoom {

  // Remember: MethodOfDoom is a _random_ method with _random_ arguments
  public int MethodOfDoom(int arg1, bool arg2) {
    Log.Entering(this, this.MethodOfDoom, arg1, arg2);
    ...
    return Log.Returning(this, this.MethodOfDoom, 1);
  }
}
```

Is there a way to achieve this? Or isn't **C#** as flexible as that?

Thanks in advance!
You can make your logging function take a `MethodBase` argument and use [MethodBase.GetCurrentMethod](http://msdn.microsoft.com/en-us/library/system.reflection.methodbase.getcurrentmethod.aspx) to pass the current method info as an argument. Then, in the logger, you could check its properties `Name` and `DeclaringType` to get the method information. Also, passing parameters is easy by declaring a `params object[] args` parameter in the logging function: ``` public static void Entering(object obj, MethodBase methodInfo, params object[] args) { Console.WriteLine("ENTERING {0}:{1}", methodInfo.DeclaringType.Name, methodInfo.Name); ... } ```
I'm not sure I entirely understand your question, but if you are trying to make a call to Log.Entering and Log.Returning inside an arbitrary (random) method and using the method's actual parameters, you should check out [PostSharp](http://www.postsharp.org/). It will allow you to inject code in a method body and then do some work based on the reflected method information you get from the .NET framework (and the actual parameters passed to the method at runtime).
Passing a random method as a parameter?
[ "c#" ]
PHP's [`extract()`](http://ca.php.net/manual/en/function.extract.php) function can take on one of several `extract_types`. But what's the difference between `EXTR_PREFIX_SAME` and `EXTR_PREFIX_IF_EXISTS`? The manual makes it sound like, in either case, new variables will be prefixed if the variable name already exists. Thanks!
When using `EXTR_PREFIX_IF_EXISTS`, if the variable doesn't already exist then the prefixed version won't be created either. In this example: ``` function test() { $a = 12345; extract(array('a' => 1, 'b' => 2, 'c' => 3), EXTR_PREFIX_IF_EXISTS, 'my_'); var_export(get_defined_vars()); } test(); ``` `$my_b` and `$my_c` aren't created because `$b` and `$c` don't exist.
`EXTR_PREFIX_SAME` will extract *all* variables, and only prefix ones that exist in the current scope. `EXTR_PREFIX_IF_EXISTS` will *only* extract variables that exist in the current scope, and prefix them with the desired prefix. So, for example: ``` $foo = 'foo'; $bar = 'bar'; extract(array('foo' => 'moo', 'bar' => 'mar', 'baz' => 'maz'), EXTR_PREFIX_IF_EXISTS, 'prefix'); isset($prefix_foo); // true isset($prefix_baz); // false isset($baz); // false ``` While.... ``` $foo = 'foo'; $bar = 'bar'; extract(array('foo' => 'moo', 'bar' => 'mar', 'baz' => 'maz'), EXTR_PREFIX_SAME, 'prefix'); isset($prefix_foo); // true isset($prefix_baz); // false isset($baz); // true ```
PHP - extract() type
[ "php", "extract", "exists", "prefix" ]
How can I pass a null parameter to a SQL server query. I have a simple table with a nullable int column. If I pass a .NET null value, I get a sql error. If I pass DBNull.Value, no row matches the filter. Is there a simple way to do this without using ISNULL. ``` OleDbConnection connection = new OleDbConnection(); connection.ConnectionString = ...; connection.Open(); OleDbCommand cmd = new OleDbCommand(); cmd.Connection = connection; cmd.CommandText = "select * from myTable where myColumn = ?"; OleDbParameter parameter = cmd.Parameters.Add(null, OleDbType.Integer); parameter.DbType = System.Data.DbType.Int32 ; parameter.IsNullable = true; parameter.Value = DBNull.Value; // no row returned parameter.Value = null; // sql error var reader = cmd.ExecuteReader(); ... ```
Since NULL does not match anything (even NULL = NULL evaluates to UNKNOWN rather than true), you have no choice other than using the IS NULL statement.
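This is easy to verify against a live engine; the following sketch uses Python's built-in `sqlite3` (the table name and values are invented for the demo, and SQLite applies the same three-valued NULL comparison semantics described above) to show that a `= NULL` comparison matches no rows while `IS NULL` does:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (myColumn INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?)", [(1,), (None,)])

# Comparing with = never matches NULL: myColumn = NULL is UNKNOWN, not true.
rows_eq = conn.execute(
    "SELECT COUNT(*) FROM myTable WHERE myColumn = ?", (None,)).fetchone()[0]

# IS NULL is the supported way to filter for NULL rows.
rows_is = conn.execute(
    "SELECT COUNT(*) FROM myTable WHERE myColumn IS NULL").fetchone()[0]

print(rows_eq, rows_is)  # 0 1
```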
As ocdecio mention, NULL is not equal to itself. But you do have another option. If you are worried about NULL in this scenario, you can just put an empty string in your parameter value and write the query itself like this: ``` select * from myTable where COALESCE(myColumn,'') = ? ```
pass null parameter to SQL server query
[ "sql", "sql-server", "oledb", "oledbparameter" ]
What's the best way to set all values in a C# Dictionary? Here is what I am doing now, but I'm sure there is a better/cleaner way to do this: ``` Dictionary<string,bool> dict = GetDictionary(); var keys = dict.Keys.ToList(); for (int i = 0; i < keys.Count; i++) { dict[keys[i]] = false; } ``` I have tried some other ways with foreach, but I had errors.
That is a reasonable approach, although I would prefer: ``` foreach (var key in dict.Keys.ToList()) { dict[key] = false; } ``` The call to `ToList()` makes this work, since it's pulling out and (temporarily) saving the list of keys, so the iteration works.
A one-line solution: ``` dict = dict.ToDictionary(p => p.Key, p => false); ```
What's the best way to set all values in a C# Dictionary<string,bool>?
[ "c#", ".net", "generics", "dictionary" ]
I have a sample class:

```
class SampleClass
{
    public virtual string SomeProperty{get; set;}
    public virtual void SomeMethod()
    {
        // code
    }
}
```

I can Inherit from it and override SomeProperty and SomeMethod like this:

```
class ChildClass:SampleClass
{
    public override string SomeProperty{get; set;}
    public override void SomeMethod()
    {
        // code
    }
}
```

Is there any way I can override from an *object*, not from a class? Like

```
SampleClass sampleObject = new SampleClass();
```

And have sampleObject.SomeMethod() be unique? Thanks.
No, you can't override on a per-object basis. However, if you want to give different objects different behaviour models, you could always pass in a delegate to the constructor: ``` public class ChildClass : SampleClass { private readonly Action someMethodBehavior; public ChildClass(Action someMethodBehavior) { this.someMethodBehavior = someMethodBehavior; } public override void SomeMethod() { someMethodBehavior(); } } ``` That allows each object to effectively specify its own behavior. You could even allow a null delegate reference to mean "just perform the default action" if you wanted it to be a sort of opt-in override.
Not entirely sure what you're trying to accomplish, but you would just do ``` SampleClass sampleObject = new ChildClass(); ``` Then you could pass sampleObject to any function that took a SampleClass, even though it actually is a ChildClass, and since your functions are virtual, your overridden functions would be called.
C#: Can an _object_ override a class' method?
[ "c#", "object", "static-methods" ]
Why can I do this: ``` public T GetMainContentItem<T>(string moduleKey, string itemKey) { return (T)GetMainContentItem(moduleKey, itemKey); } ``` but not this: ``` public T GetMainContentItem<T>(string moduleKey, string itemKey) { return GetMainContentItem(moduleKey, itemKey) as T; } ``` It complains that I haven't restricted the generic type enough, but then I would think that rule would apply to casting with "(T)" as well.
Because 'T' could be a value-type and 'as T' makes no sense for value-types. You can do this: ``` public T GetMainContentItem<T>(string moduleKey, string itemKey) where T : class { return GetMainContentItem(moduleKey, itemKey) as T; } ```
If T is a value type, the `as` operator cannot be used and this is a compile-time error; you need to constrain T to be either nullable or a class.
Why does "as T" get an error but casting with (T) not get an error?
[ "c#", ".net", "generics", "casting", "generic-programming" ]
Are there any alternatives to LogonUser for impersonating a given account in order to access network resources? I'm looking for a method of impersonation which would let me connect to machines in foreign domains (or workgroup machines, for that matter).

For initial data I have: machine name, username (or domain\username), cleartext password.

I know there's a way to establish a connection using WNetAddConnection to a \\machinename\ipc$, then most network functions will run in the context of that account; however, win2008 added another twist and some functions still use the account that the thread is running under.

I'm also aware that there's some way to get an impersonation token using SSPI. Has anyone experimented with those tokens? Are they good for accessing shares, SCM, remote registry and stuff? Is it what WNetAddConnection is using?

**EDIT:** To clarify, the reason I cannot use LogonUser is because I need to impersonate a user in a non-trusted domain or workgroup

**EDIT2:** Another clarification: the item I'm trying to implement is similar to psexec, e.g.:

* the program should not modify host or Active Directory configuration (e.g.: create temporary local users, etc). Moreover, no assumption can be made about whether or not it is running on a DC
* there can be no assumptions made about which software is pre-installed on the remote host; the only condition given is that windows file sharing is enabled on the target
* the account/password is known to be working on the target, but the target machine may be in the local domain, a foreign domain, or not in a domain at all.

**EDIT3:** I would really love to hear more about the SSPI InitializeSecurityContext / AcquireCredentialsHandle option. Is there anybody who has been working with this API extensively? Is it possible to use the tokens returned with impersonation, so that a thread can access network shares and copy files, etc? Can someone post a working code snippet?

**EDIT4:** Thanks to Marsh Ray, the problem got resolved.
If anyone is looking to see the proof-of-concept code, [it is here](http://docs.google.com/View?id=djcgsjz_1gvhc7wfg)
If you're wanting to "access network resources" outside of your forest, do that with WNetAddConnection2/3 as you mentioned, or use the standard RPC APIs with `RPC_C_AUTHN_GSS_NEGOTIATE` and an explicit credentials structure.

Normally, "impersonation" is something that happens on the server side. The server side will be able to impersonate the connection as the account you're connecting as. But the key is this: impersonation only makes sense for impersonating an account the server can access in his local SAM/domain/forest directory. If the client and server are in different forests, they clearly can't agree on the SID of an account for an impersonation token (except for the case of well-known SIDs like Administrator which serve mainly to confuse this kind of thing), and that seems necessary to check against DACLs etc.

Perhaps what you want is to call LogonUserEx with the `LOGON32_LOGON_NEW_CREDENTIALS` flag. This should succeed (even in a different forest - it doesn't actually authenticate the credentials you give it) giving you a token with the username/password you specified. You may have to use DuplicateToken to turn this into an impersonation token. Then you can use SetThreadToken to replace the token on your thread.

IMHO this isn't really "impersonation", you're just using the credentials outright, but it allows you to access network resources transparently as the arbitrary username/password you supply.

Edit: Oh yeah, be aware that there is no protection against man-in-the-middle on this type of connection. The client especially cannot strongly authenticate the server (short of heroics like IPSEC), so in theory you can't trust anything the server tells you.
The theory goes that you pass the credentials as a [`SEC_WINNT_AUTH_IDENTITY`](http://msdn.microsoft.com/en-us/library/aa380131(VS.85).aspx) structure to the [AcquireCredentialsHandle](http://msdn.microsoft.com/en-us/library/aa374714(VS.85).aspx) function that creates the handle used in [InitializeSecurityContext](http://msdn.microsoft.com/en-us/library/aa375509(VS.85).aspx). I never tried this on foreign domains though and I don't know if it works.
Alternatives to LogonUser for network impersonation (C++)
[ "", "c++", "security", "winapi", "impersonation", "sspi", "" ]
In Windows, I can use `Shell_NotifyIcon()`. What is the Linux equivalent of this function? Is it different in GNOME (gtk) and KDE (qt)? Can't find any tutorials on the Internet.
Yes, it depends on the desktop environment you're using. In Qt (KDE), you can do this in a cross-platform manner using `QSystemTrayIcon`. There is even a [sample application in Qt](https://web.archive.org/web/20090720000232/http://doc.trolltech.com/4.2/desktop-systray.html). In GTK, you do this with `GtkStatusIcon`. There is a sample application in [this blog post](https://web.archive.org/web/20090915175446/http://blog.sacaluta.com/2007/08/gtk-system-tray-icon-example.html).
All major desktops support the current XEmbed-based systray icon specification. Qt and GTK both provide facilities to use that protocol in a comfortable manner. Regardless of your toolkit choice, the icons will appear on all desktop environments. If you use Qt you even get compatibility with other platforms like Windows for free.
How can I make a tray icon in Linux using C++?
[ "", "c++", "linux", "desktop-application", "system-tray", "" ]
I'm developing a web application and considering Django, Google App Engine, and several other options. I wondered what kind of "penalty" I will incur if I develop a complete Django application assuming it runs on a dedicated server, and then later want to migrate it to Google App Engine. I have a basic understanding of Google's data store, so please assume I will choose a column based database for my "stand-alone" Django application rather than a relational database, so that the schema could remain mostly the same and will not be a major factor. Also, please assume my application does not maintain a huge amount of data, so that migration of tens of gigabytes is not required. I'm mainly interested in the effects on the code and software architecture. Thanks
Most (all?) of Django is available in GAE, so your main task is to avoid basing your designs around a reliance on anything from Django or the Python standard libraries which is not available on GAE. You've identified the glaring difference, which is the database, so I'll assume you're on top of that. Another difference is the tie-in to Google Accounts and hence that if you want, you can do a fair amount of access control through the app.yaml file rather than in code. You don't have to use any of that, though, so if you don't envisage switching to Google Accounts when you switch to GAE, no problem. I think the differences in the standard libraries can mostly be deduced from the fact that GAE has no I/O and no C-accelerated libraries unless explicitly stated, and my experience so far is that things I've expected to be there, have been there. I don't know Django and haven't used it on GAE (apart from templates), so I can't comment on that. Personally I probably wouldn't target LAMP (where P = Django) with the intention of migrating to GAE later. I'd develop for both together, and try to ensure if possible that the differences are kept to the very top (configuration) and the very bottom (data model). The GAE version doesn't necessarily have to be perfect, as long as you know how to make it perfect should you need it. It's not guaranteed that this is faster than writing and then porting, but my guess is it normally will be. The easiest way to spot any differences is to run the code, rather than relying on not missing anything in the GAE docs, so you'll likely save some mistakes that need to be unpicked. The Python SDK is a fairly good approximation to the real App Engine, so all or most of your tests can be run locally most of the time. Of course if you eventually decide not to port then you've done unnecessary work, so you have to think about the probability of that happening, and whether you'd consider the GAE development to be a waste of your time if it's not needed.
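The "differences kept at the very bottom (data model)" advice above can be sketched concretely. The following is a minimal, hypothetical illustration (none of these names come from Django or App Engine): hide persistence behind a small interface so that a Django-ORM backend and a GAE-datastore backend can be swapped in configuration while the rest of the application stays unchanged.

```python
class ContactRepository:
    """Abstract persistence boundary; one subclass per platform."""
    def save(self, key, data):
        raise NotImplementedError
    def load(self, key):
        raise NotImplementedError

class InMemoryRepository(ContactRepository):
    """Dict-backed stand-in used for local tests; a Django-ORM or a
    GAE-datastore subclass would implement the same two methods."""
    def __init__(self):
        self._rows = {}
    def save(self, key, data):
        self._rows[key] = dict(data)
    def load(self, key):
        return self._rows.get(key)

# Application code depends only on the interface, never on the backend:
def rename_contact(repo, key, new_name):
    row = repo.load(key)
    row["name"] = new_name
    repo.save(key, row)

repo = InMemoryRepository()
repo.save("c1", {"name": "Ada"})
rename_contact(repo, "c1", "Grace")
print(repo.load("c1")["name"])  # Grace
```

Only the concrete repository class needs to differ between the LAMP and GAE deployments, which is exactly the kind of top-and-bottom configuration split described above.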
It sounds like you have awareness of the major limitation in building/migrating your app -- that AppEngine doesn't support Django's ORM. Keep in mind that this doesn't just affect the code you write yourself -- it also limits your ability to use a lot of existing Django code. That includes other applications (such as the built-in admin and auth apps) and ORM-based features such as [generic views](https://docs.djangoproject.com/en/1.4/ref/generic-views/).
Migrating Django Application to Google App Engine?
[ "", "python", "django", "google-app-engine", "" ]
So basically I have a plugin dll that is loaded by a GUI-Application. In this dll I need to detect when Windows enters the Hibernate state. I cannot modify the GUI-App. GetMessage only works if the calling thread is the same thread as the UI-Thread, which it is not. Any ideas?
You could create a hidden window in a separate thread from your DLL code and process messages as shown below. You could use this window class for that. ``` #pragma once #include <windows.h> #include <process.h> #include <iostream> using namespace std; static const char *g_AppName = "Test"; class CMyWindow { HWND _hWnd; int _width; int _height; public: CMyWindow(const int width,const int height):_hWnd(NULL),_width(width),_height(height) { _beginthread( &CMyWindow::thread_entry, 0, this); } ~CMyWindow(void) { SendMessage(_hWnd, WM_CLOSE, NULL, NULL); } private: static void thread_entry(void * p_userdata) { CMyWindow * p_win = static_cast<CMyWindow*> (p_userdata); p_win->create_window(); p_win->message_loop(); } void create_window() { WNDCLASSEX wcex; wcex.cbSize = sizeof(WNDCLASSEX); wcex.style = CS_HREDRAW | CS_VREDRAW; wcex.lpfnWndProc = &CMyWindow::WindowProc; wcex.cbClsExtra = 0; wcex.cbWndExtra = 0; wcex.hInstance = GetModuleHandle(NULL); wcex.hIcon = LoadIcon(NULL, IDI_APPLICATION); wcex.hCursor = LoadCursor(NULL, IDC_ARROW); wcex.hbrBackground = (HBRUSH)(COLOR_WINDOW+1); wcex.lpszMenuName = NULL; wcex.lpszClassName = g_AppName; wcex.hIconSm = LoadIcon(NULL, IDI_APPLICATION); RegisterClassEx(&wcex); _hWnd = CreateWindow(g_AppName, g_AppName, WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, GetModuleHandle(NULL), NULL); ShowWindow(_hWnd, SW_SHOWDEFAULT); UpdateWindow(_hWnd); } void message_loop() { MSG msg = {0}; while (GetMessage(&msg, NULL, 0, 0)) { if(msg.message == WM_QUIT) { break; } TranslateMessage(&msg); DispatchMessage(&msg); } } static LRESULT WINAPI WindowProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch(uMsg) { case WM_DESTROY: PostQuitMessage(0); return 0; case WM_POWERBROADCAST: { //power management code here } } return DefWindowProc(hWnd, uMsg, wParam, lParam); } }; ``` Also make sure to include an exit condition.
I ran into a similar problem with a Windows console application. I wrote up a blog post on what the issues are, how the hidden window seems like the only solution, and how to do it. The post is available [here](http://www.zachburlingame.com/2011/04/capturing-windows-power-events-in-a-console-application/) and the source code is available [here](http://hg.zachburlingame.com/windowspowerevents). The basic principle I used is almost identical to Indeera's answer. I'm not sure if you would have to modify my solution at all to run in a DLL. I believe that ALL threads which have a message queue (and a thread does when it creates a window) receive WM\_POWERBROADCAST messages, so spooling up one of your own even if you are loaded by a Windows application might work. It's worth noting by the way, that you aren't guaranteed to receive a notification prior to the system going into Hibernate (such as from a critical battery state) or any other sleep state for that matter. You will however receive a [PBT\_APMRESUMEAUTOMATIC](http://msdn.microsoft.com/en-us/library/aa372718%28v=vs.85%29.aspx) event (or [PBT\_APMRESUMECRITICAL](http://msdn.microsoft.com/en-us/library/aa372719%28v=vs.85%29.aspx) on systems prior to Vista) when the system comes back online after such an event has occurred.
How to get the Windows Power State Message (WM_POWERBROADCAST) when not running a Win32 GUI app?
[ "", "c++", "winapi", "" ]
When using malloc to allocate memory, is it generally quicker to do multiple mallocs of smaller chunks of data or fewer mallocs of larger chunks of data? For example, say you are working with an image file that has black pixels and white pixels. You are iterating through the pixels and want to save the x and y position of each black pixel in a new structure that also has a pointer to the next and previous pixels x and y values. Would it be generally faster to iterate through the pixels allocating a new structure for each black pixel's x and y values with the pointers, or would it be faster to get a count of the number of black pixels by iterating through once, then allocating a large chunk of memory using a structure containing just the x and y values, but no pointers, then iterating through again, saving the x and y values into that array? I'm assuming certain platforms might be different than others as to which is faster, but what does everyone think would generally be faster?
It depends: * Multiple small allocations mean multiple calls to malloc, which is slower * There may be a special/fast implementation for small allocations If I cared, I'd measure it! If I really cared a lot, and couldn't guess, then I might implement both, and measure at run-time on the target machine, and adapt accordingly. In general I'd assume that fewer is better: but there are size and run-time library implementations such that a (sufficiently) large allocation will be delegated to the (relatively slow) O/S, whereas a (sufficiently) small allocation will be served from a (relatively quick) already-allocated heap.
Allocating large blocks is more efficient; additionally, since you are using larger contiguous blocks, you have greater locality of reference, and traversing your in-memory structure once you've generated it should also be more efficient! Further, allocating large blocks should help to reduce memory fragmentation.
Faster to malloc multiple small times or few large times?
[ "", "c++", "c", "algorithm", "memory", "" ]
Let's say I have a class such as ``` class c { // ... void *print(void *){ cout << "Hello"; } } ``` And then I have a vector of c ``` vector<c> classes; pthread_t t1; classes.push_back(c()); classes.push_back(c()); ``` Now, I want to create a thread on `c.print();` And the following is giving me the problem below: `pthread_create(&t1, NULL, &c[0].print, NULL);` > Error Output: cannot convert ‘void\* (tree\_item::\*)(void\*)’ to ‘void\* > (\*)(void\*)’ for argument ‘3’ to ‘int pthread\_create(pthread\_t\*, const > pthread\_attr\_t\*, void\* (\*)(void\*), void\*)’
You can't do it the way you've written it because C++ class member functions have a hidden `this` parameter passed in. `pthread_create()` has no idea what value of `this` to use, so if you try to get around the compiler by casting the method to a function pointer of the appropriate type, you'll get a segmentation fault. You have to use a static class method (which has no `this` parameter), or a plain ordinary function to bootstrap the class: ``` class C { public: void *hello(void) { std::cout << "Hello, world!" << std::endl; return 0; } static void *hello_helper(void *context) { return ((C *)context)->hello(); } }; ... C c; pthread_t t; pthread_create(&t, NULL, &C::hello_helper, &c); ```
My favorite way to handle a thread is to encapsulate it inside a C++ object. Here's an example: ``` class MyThreadClass { public: MyThreadClass() {/* empty */} virtual ~MyThreadClass() {/* empty */} /** Returns true if the thread was successfully started, false if there was an error starting the thread */ bool StartInternalThread() { return (pthread_create(&_thread, NULL, InternalThreadEntryFunc, this) == 0); } /** Will not return until the internal thread has exited. */ void WaitForInternalThreadToExit() { (void) pthread_join(_thread, NULL); } protected: /** Implement this method in your subclass with the code you want your thread to run. */ virtual void InternalThreadEntry() = 0; private: static void * InternalThreadEntryFunc(void * This) {((MyThreadClass *)This)->InternalThreadEntry(); return NULL;} pthread_t _thread; }; ``` To use it, you would just create a subclass of MyThreadClass with the InternalThreadEntry() method implemented to contain your thread's event loop. You'd need to call WaitForInternalThreadToExit() on the thread object before deleting the thread object, of course (and have some mechanism to make sure the thread actually exits, otherwise WaitForInternalThreadToExit() would never return)
pthread function from a class
[ "", "c++", "pthreads", "" ]
I created a custom taxonomy named 'technologies' but cannot query multiple terms like I can with categories or tags. **These queries DO work:** ``` query_posts('tag=goldfish,airplanes'); query_posts('technologies=php'); ``` **However, neither of the following works correctly:** ``` query_posts('technologies=php,sql'); query_posts('technologies=php&technologies=sql'); ``` My objective: Show all posts with a technology of 'php' and all posts with a technology of 'sql' Any ideas? Is this even possible? Thanks!
Apparently `query_posts` cannot help in this specific situation. (Hopefully it will be added in future versions of Wordpress!) The solution is to use a custom select query like the following: ``` SELECT * FROM $wpdb->posts LEFT JOIN $wpdb->term_relationships ON($wpdb->posts.ID = $wpdb->term_relationships.object_id) LEFT JOIN $wpdb->term_taxonomy ON($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id) LEFT JOIN $wpdb->terms ON($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id) WHERE $wpdb->posts.post_type = 'post' AND $wpdb->posts.post_status = 'publish' AND $wpdb->term_taxonomy.taxonomy = 'technologies' AND ($wpdb->terms.slug = 'php' OR $wpdb->terms.slug = 'css') ORDER BY $wpdb->posts.post_date DESC ``` More information can be found at the Wordpress Codex: <http://codex.wordpress.org/Displaying_Posts_Using_a_Custom_Select_Query>
This is a bit of a delayed reply, but it's first on Google at the moment for "wordpress related posts by multiple terms" so thought I'd contribute my findings. Since this question was posted Wordpress has been changed to allow for this type of query. This will give you a list of posts related by any of the custom taxonomy terms assigned to an object: ``` $post_cats = wp_get_object_terms(get_the_ID(), 'video_category', array('fields' => 'ids')); $args=array( "tax_query" => array( array( "taxonomy" => "video_category", "field" => "id", "terms" => $post_cats ) ), 'post__not_in' => array(get_the_ID()), 'post_type' => 'video', 'posts_per_page' => 8, 'caller_get_posts' => 1 ); $related_by_cats = new WP_Query($args); ``` This is my first contribution to SO, I hope it's up to standards.
Query multiple custom taxonomy terms in Wordpress 2.8?
[ "", "php", "wordpress", "taxonomy", "" ]
I'd like to get some rows (**z:row** rows) from XML using: ``` <rs:data> <z:row Attribute1="1" Attribute2="1" /> <z:row Attribute1="2" Attribute2="2" /> <z:row Attribute1="3" Attribute2="3" /> <z:row Attribute1="4" Attribute2="4" /> <z:row Attribute1="5" Attribute2="5" /> <z:row Attribute1="6" Attribute2="6" /> </rs:data> ``` I'm having trouble using (**Python**): ``` ElementTree.parse('myxmlfile.xml').getroot().findall('//z:row') ``` I think the colon is what's invalid in that case. Does anyone know how I can do that?
If I define the namespaces like this: ``` <?xml version="1.0"?> <rs:data xmlns="http://example.com" xmlns:rs="http://example.com/rs" xmlns:z="http://example.com/z"> <z:row Attribute1="1" Attribute2="1" /> <z:row Attribute1="2" Attribute2="2" /> <z:row Attribute1="3" Attribute2="3" /> <z:row Attribute1="4" Attribute2="4" /> <z:row Attribute1="5" Attribute2="5" /> <z:row Attribute1="6" Attribute2="6" /> </rs:data> ``` the Python [ElementTree](http://docs.python.org/library/xml.etree.elementtree.html)-API can be used like this: ``` ElementTree.parse("r.xml").getroot().findall('{http://example.com/z}row') # => [<Element {http://example.com/z}row at 551ee0>, <Element {http://example.com/z}row at 551c60>, <Element {http://example.com/z}row at 551f08>, <Element {http://example.com/z}row at 551be8>, <Element {http://example.com/z}row at 551eb8>, <Element {http://example.com/z}row at 551f30>] ``` See also <http://effbot.org/zone/element.htm#xml-namespaces>
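As a side note, `ElementTree.findall` also accepts a `namespaces` mapping, so the prefix-to-URI binding can be kept out of the path string. A small self-contained sketch (the namespace URIs here are placeholders; use whatever the real document declares):

```python
import xml.etree.ElementTree as ET

# Placeholder URIs standing in for the document's real rs:/z: namespaces.
doc = """<rs:data xmlns:rs="http://example.com/rs" xmlns:z="http://example.com/z">
  <z:row Attribute1="1" Attribute2="1"/>
  <z:row Attribute1="2" Attribute2="2"/>
</rs:data>"""

root = ET.fromstring(doc)
ns = {"z": "http://example.com/z"}           # prefix -> URI mapping
rows = root.findall("z:row", ns)             # prefix resolved via the mapping
print([r.get("Attribute1") for r in rows])   # ['1', '2']
```

This avoids spelling out the `{http://example.com/z}row` Clark notation at every call site.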
If you don't want to figure out setting up namespaces properly, you can ignore them like this: ``` XPathGet("//*[local-name() = 'row']") ``` Which selects every node whose name (without namespace) is `row`.
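ElementTree's limited XPath subset cannot evaluate `local-name()`, but since Python 3.8 it supports a `{*}` wildcard that gives the same namespace-ignoring match. A minimal sketch (namespace URIs are placeholders):

```python
import xml.etree.ElementTree as ET

doc = """<rs:data xmlns:rs="http://example.com/rs" xmlns:z="http://example.com/z">
  <z:row/><z:row/><z:row/>
</rs:data>"""

root = ET.fromstring(doc)
rows = root.findall(".//{*}row")  # 'row' in any namespace (Python 3.8+)
print(len(rows))                  # 3
```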
Getting rows from XML using XPath and Python
[ "", "python", "xml", "xpath", "" ]
Hi, not sure if this is possible or not, but I want to programmatically update the <body> tags to change the onload function in my Zend Framework application. The app is using layouts, so the body tag currently looks like this: **<body class="trandra">** However, in one of my views I have a map from Google being loaded and it needs the following in the body tag for this particular view: **<body onload="initialize()" onunload="GUnload()">** As you can understand, I don't want this to be hardcoded in my layout as this will cause all manner of nightmares with the different views. How can this be done programmatically, if at all it is possible? I'm using the headScript functions to add the JavaScript, so is there an equivalent for the body tag? Thanks in advance...
**Approach one - Use a layout variable** One idea would be the following: ``` <body class="trandra" <?php echo $this->layout()->bodyScripts ?>> ``` And in your view: ``` <?php $this->layout->bodyScripts = 'onload="initialize()" onunload="GUnload()"'; ``` **Approach two - Additional JS-file that adds event handlers** Another approach, which is less obtrusive and doesn't affect the HTML whatsoever is to add an additional JS-file in the view that requires the onload- and onunload-handlers. It could look something like this: ``` <?php $this->headScript()->appendScript( '/path/to/javascripts/loadGMaps.js'); ``` In your loadGMaps.js (using [prototype](http://www.prototypejs.org/)) ``` Event.observe(window, 'load', function onLoadHandler() { // Code for initializing Google maps here }); Event.observe(window, 'unload', function onUnloadHandler() { // Code for unloading Google maps here }); ```
Instead of putting your JavaScript directly in the code, you could also use an unobtrusive approach: plugging in the JavaScript when the page is fully loaded. Have a look, for instance, at a function called [`addOnLoadEvent`](http://thepeninsulasedge.com/blog/?p=65) (it can be found on many websites ^^ ) If you are using a JS framework, it certainly has that kind of feature: * for jQuery: <http://docs.jquery.com/Events/ready#fn> * for prototype: <http://www.prototypejs.org/api/event/observe> If you register the "plugging-in" with headScript, there should be no need to modify the tag directly.
updating javascript in the body tag using zend framework
[ "", "javascript", "zend-framework", "" ]
I'm writing an application in C++ that runs as a system tray icon. When the application initially starts up the main dialog loads up and takes focus, which isn't the behavior I intend it to have. Is there a way to load the system tray icon without having the main dialog load up?
If you used the standard MFC project wizard, then the code that displays the dialog is in your application's InitInstance method. Just comment out the `dlg.DoModal()` and `m_pMainWnd = &dlg;` parts and you will be fine. Note that you might have to code your own message loop, otherwise your application will just exit after these changes.
I'd say you have to separate your systray code and your window code, and only display the window when the systray is called. However, you can always call ``` ShowWindow(hWnd, SW_HIDE); ``` check <http://www.codeguru.com/forum/showthread.php?t=231032>
Hiding the dialog on startup for a system tray application
[ "", "c++", "mfc", "system-tray", "" ]
I'm currently using `Django 1.1 beta` for some personal projects, and plan to start messing around with the trunk to see the new stuff under the hood. But I might start using it on a professional basis, and I'd need to know if `trunk` is stable enough to use in production, or if I should stick to 1.0 for mission-critical systems. ## Edit Putting all the information in an answer for correctness.
First, [Django 1.1](http://docs.djangoproject.com/en/dev/releases/1.1-rc-1/) is [one step closer to being released](http://www.djangoproject.com/weblog/2009/jul/21/rc/), as [RC1 is available for download](http://www.djangoproject.com/download/). With that out of the way, I've found some useful things. * If you are planning on following this project, you should keep an eye on the [Django deprecation timeline](http://docs.djangoproject.com/en/dev/internals/deprecation/#internals-deprecation), along with the [Django deprecation policy](http://docs.djangoproject.com/en/dev/internals/release-process/#internal-release-deprecation-policy). * Another important place to look is the [API Stability](http://docs.djangoproject.com/en/dev/misc/api-stability/#misc-api-stability) page of [the documentation](http://docs.djangoproject.com/). * Keep an eye on the [django-users mailing list](http://groups.google.com/group/django-users), as well as the [django-developer mailing list](http://groups.google.com/group/django-developers). Am I missing something I should be looking at too? ## Edit [Django 1.1 was released](http://www.djangoproject.com/weblog/2009/jul/29/1-point-1/ "Django weblog: Django 1.1 released")!!! You can [download it **right now**](http://www.djangoproject.com/download/1.1/tarball/ "Download Django 1.1 tarball")! :) The question remains whether following `trunk` is recommended (once upon a time, *Django didn't have releases*; you only had the `head` of `trunk`) --- According to the tech team at [The Onion](http://www.theonion.com), which [has migrated from Drupal to Django](http://www.reddit.com/r/django/comments/bhvhz/the_onion_uses_django_and_why_it_matters_to_us/), [Django trunk is extremely stable](http://www.reddit.com/r/django/comments/bhvhz/the_onion_uses_django_and_why_it_matters_to_us/c0mv32r).
You probably shouldn't pull Django trunk *every* day; sometimes there are big commits that might break some things on your site. It also depends on what features you use: the new ones will of course be a bit more buggy than older features. But all in all there shouldn't be a problem using trunk for production. You just need to be careful when updating to the latest revision. You could, for example, set up a new virtual environment to test before updating the live site. There are many ways to do something similar, but I will let you take your pick.
Is it safe to track trunk in Django?
[ "", "python", "django", "" ]
I'd like to extend a DOM element without extending all of them. That is, I'd like a custom class with its own methods and properties, but also be able to treat it as a div. E.g. ``` MyClass = function(){ this.foo = "waaa"; } MyClass.prototype.changeText = function(newtext){ // if this extended $(document.createElement("div")) something // like this might be possible this.html(newtext); } MyClass.prototype.alertFoo = function(){ alert(this.foo); } var m = new MyClass(); $("body").append(m); m.changeText("some text"); ``` Is this possible?
You can make your class a child class of the jquery generated DIV element: ``` function MyClass (){ this.foo = "waaa"; } MyClass.prototype = $('<div/>'); MyClass.prototype.changeText = function(newtext){ this.html(newtext); } MyClass.prototype.alertFoo = function(){ alert(this.foo); } var m = new MyClass(); $("body").append(m); m.changeText('appletree'); ```
You could make your own object, like you are doing. Then you can use jQuery's [extend method](http://docs.jquery.com/Utilities/jQuery.extend) to extend the element.
Extending a DOM element with jQuery
[ "", "javascript", "jquery", "" ]
I have the following tables Master Table ``` id ---- 1x 2x ``` .... Assignment Table ``` id | type_id | assign_id ----------------------------- 1x | 2 | 554 1x | 3 | 664 2x | 2 | 919 2x | 4 | 514 ``` Type table ``` type_id | create_date ---------------------- 1 | 01/01/2009 2 | 01/01/2009 3 | 03/01/2009 4 | 04/01/2009 ``` What i need for the query to output is something like this ``` id | max create_date type_id | assign_id ---------------------------------------------- 1x | 3 | 664 2x | 4 | 514 ``` Right now i'm doing something like this to acquire the results, but i'm sure there is a much better way to do this. ``` Q1 --- CREATE TABLE tmp_table as SELECT m.id, max(t.create_date) FROM master m, assignment a, type t WHERE m.id=a.id and a.type_id=t.type_id GROUP BY m.id Q2 -- SELECT tmp.id, a.type_id, a.assign_id from tmp_table tmp, assignment a, type t WHERE tmp.create_date=t.create_date and t.type_id=a.type_id ``` Thanks for any help
No temp table necessary. ``` select distinct a.id, first_value(t.type_id) over (partition by a.id order by t.create_date desc) as max_create_date_type_id, first_value(a.assign_id) over (partition by a.id order by t.create_date desc) as assign_id from assignment a, type t where a.type_id = t.type_id ```
Using analytics, and then applying the DISTINCT operator is not the way to go, when you need to aggregate. Here is a simpler and more performant version using only aggregates:

```
SQL> select a.id
  2       , max(t.type_id) keep (dense_rank last order by t.create_date) max_create_date_type_id
  3       , max(a.assign_id) keep (dense_rank last order by t.create_date) assign_id
  4    from assignment a
  5       , type t
  6   where a.type_id = t.type_id
  7   group by a.id
  8  /

ID MAX_CREATE_DATE_TYPE_ID  ASSIGN_ID
-- ----------------------- ----------
1x                       3        664
2x                       4        514

2 rows selected.
```

And here is a test to prove it's more performant:

```
SQL> exec dbms_stats.gather_table_stats(user,'assignment')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(user,'type')

PL/SQL procedure successfully completed.

SQL> select /*+ gather_plan_statistics */
  2         distinct
  3         a.id,
  4         first_value(t.type_id)
  5           over (partition by a.id order by t.create_date desc)
  6           as max_create_date_type_id,
  7         first_value(a.assign_id)
  8           over (partition by a.id order by t.create_date desc)
  9           as assign_id
 10    from assignment a, type t
 11   where a.type_id = t.type_id
 12  /

ID MAX_CREATE_DATE_TYPE_ID  ASSIGN_ID
-- ----------------------- ----------
2x                       4        514
1x                       3        664

2 rows selected.

SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  2  /

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  fu520w4kf2bbp, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ distinct a.id,
first_value(t.type_id) over (partition by a.id order by t.create_date desc)
as max_create_date_type_id, first_value(a.assign_id)
over (partition by a.id order by t.create_date desc)
as assign_id from assignment a, type t where a.type_id = t.type_id

Plan hash value: 4160194652

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |            |      1 |        |      2 |00:00:00.01 |       6 |       |       |          |
|   1 |  HASH UNIQUE         |            |      1 |      4 |      2 |00:00:00.01 |       6 |   898K|   898K|  493K (0)|
|   2 |   WINDOW SORT        |            |      1 |      4 |      4 |00:00:00.01 |       6 |  2048 |  2048 | 2048  (0)|
|   3 |    WINDOW SORT       |            |      1 |      4 |      4 |00:00:00.01 |       6 |  2048 |  2048 | 2048  (0)|
|*  4 |     HASH JOIN        |            |      1 |      4 |      4 |00:00:00.01 |       6 |   898K|   898K|  554K (0)|
|   5 |      TABLE ACCESS FULL| ASSIGNMENT |     1 |      4 |      4 |00:00:00.01 |       3 |       |       |          |
|   6 |      TABLE ACCESS FULL| TYPE       |     1 |      4 |      4 |00:00:00.01 |       3 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("A"."TYPE_ID"="T"."TYPE_ID")

28 rows selected.

SQL> select /*+ gather_plan_statistics */
  2         a.id
  3       , max(t.type_id) keep (dense_rank last order by t.create_date) max_create_date_type_id
  4       , max(a.assign_id) keep (dense_rank last order by t.create_date) assign_id
  5    from assignment a
  6       , type t
  7   where a.type_id = t.type_id
  8   group by a.id
  9  /

ID MAX_CREATE_DATE_TYPE_ID  ASSIGN_ID
-- ----------------------- ----------
1x                       3        664
2x                       4        514

2 rows selected.

SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  2  /

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  156kpxgxmfjd3, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ a.id
, max(t.type_id) keep (dense_rank last order by t.create_date) max_create_date_type_id
, max(a.assign_id) keep (dense_rank last order by t.create_date) assign_id
from assignment a , type t
where a.type_id = t.type_id group by a.id

Plan hash value: 3494156172

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |            |      1 |        |      2 |00:00:00.01 |       6 |       |       |          |
|   1 |  SORT GROUP BY      |            |      1 |      2 |      2 |00:00:00.01 |       6 |  2048 |  2048 | 2048  (0)|
|*  2 |   HASH JOIN         |            |      1 |      4 |      4 |00:00:00.01 |       6 |   898K|   898K|  594K (0)|
|   3 |    TABLE ACCESS FULL| ASSIGNMENT |      1 |      4 |      4 |00:00:00.01 |       3 |       |       |          |
|   4 |    TABLE ACCESS FULL| TYPE       |      1 |      4 |      4 |00:00:00.01 |       3 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("A"."TYPE_ID"="T"."TYPE_ID")

25 rows selected.
```

As you can see, both are full scanning the tables and perform a hash join. The difference is after this step. The aggregate variant takes 4 rows and aggregates them to 2 rows with a SORT GROUP BY. The analytic one first sorts the 4-row set twice and then applies a HASH UNIQUE to reduce the set to 2 rows. Regards, Rob.
Oracle Query Optimization
[ "", "sql", "oracle", "" ]
Given a `System.IO.FileStream` object, how can I get the original path to the file it's providing access to? For example, in the `MyStreamHandler()` function below, I want to get back the path of the file that created the `FileStream`: ``` public static void Main() { string path = @"c:\temp\MyTest.txt"; FileStream fs = File.Create(path); MyStreamHandler(fs); MyOtherStreamHandler(fs); fs.Close(); fs.Dispose(); } private static void MyStreamHandler(FileStream fs) { // Get the originating path of 'fs' } private static void MyOtherStreamHandler(FileStream fs) { } ```
The FileStream's Name property. See documentation in [MSDN](https://msdn.microsoft.com/en-us/library/system.io.filestream.name(v=vs.110).aspx)
You can use fs.Name to get the path.
Getting Original Path from FileStream
[ "", "c#", "path", "filestream", ".net", "" ]
I am trying to create a simple ray tracer. I have a perspective view which shows the rays visibly for debugging purposes. In my example screenshot below I have a single white sphere to be raytraced and a green sphere representing the eye. Rays are drawn as lines with ``` glLineWidth(10.0f) ``` If a ray misses the sphere it is given color ``` glColor4ub(100,100,100,100); ``` in my initialization code I have the following: ``` glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.0f); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA,GL_SRC_ALPHA); ``` You can see in the screen shot that for some reason, the rays passing between the perspective view point and the sphere are being color blended with the axis line behind the sphere, rather than with the sphere itself. Here is a screenshot: ![screenshot](https://i.stack.imgur.com/gkb8A.jpg) Can anyone explain what I am doing wrong here? Thanks!!
Is it possible that you cast those rays before you draw the sphere? Then if the Z-buffer is enabled, the sphere's fragments simply won't be rendered, as those parts of the rays are closer. When you are drawing something semi-transparent (using blending), you should watch the order you draw things in carefully. In fact, I think you cannot use the Z-buffer in any sensible way together with the ray-tracing process. You'll have to track Z-order manually. While we are at it, OpenGL might not be the best API to visualize the ray-tracing process. (It will possibly do so much slower than a pure software ray-tracer.)
1. You don't need the `glAlphaFunc`, disable it. 2. Light rays should be blended by adding to the buffer: `glBlendFunc(GL_ONE, GL_ONE)` (for premultiplied alpha, which you chose). 3. Turn off depth buffer writing (**not testing**) when rendering the rays: `glDepthMask(GL_FALSE)` 4. Render the rays last.
OpenGL Alpha blending with wrong color
[ "", "c++", "qt", "opengl", "" ]