I was just reading this [thread](https://stackoverflow.com/questions/243082/c-casting-programmatically-can-it-be-done) and it occurred to me that there is one seemingly-valid use of that pattern the OP is asking about. I know I've used it before to implement dynamic creation of objects. As far as I know, there is no better solution in C++, but I was wondering if any gurus out there know of a better way. Generally, I run into this situation when I need to create one of several subclasses of an object based on something unknown at compile time (such as based on a config file). I use the object polymorphically once it is created. There's another related situation when you're using a message-passing scheme (usually over TCP/IP) where each message is an object. I like to implement that pattern as letting each message serialize itself into some serialization stream interface, which works well and is fairly clean on the sending end, but on the receiver, I always find myself examining a header on the message to determine the type, then constructing an appropriate message object using the pattern from the linked article, then having it deserialize itself from the stream. Sometimes I implement it so that the construction and deserialization happen at the same time as part of the constructor, which seems more RAII, but that's small consolation for the mess of if/else statements figuring out the type. Any better solutions out there? If you're going to suggest a 3rd party library, it should be free (and ideally open source) and I'd appreciate it if you could explain how the library accomplishes this feat.
I think what you are asking is how to keep the object creation code with the objects themselves. This is usually what I do. It assumes that there is some key that gives you a type (int tag, string, etc). I make a class that has a map of key to factory functions, and a registration function that takes a key and factory function and adds it to the map. There is also a create function that takes a key, looks it up in the map, calls the factory function, and returns the created object. As an example, take an int key, and a stream that contains the rest of the info to build the objects. I haven't tested, or even compiled, this code, but it should give you an idea. ``` class Factory { public: typedef Object* (*Func)(std::istream& is); static void add(int key, Func f) { m[key] = f; } // 'register' is a reserved word in C++ static Object* create(int key, std::istream& is) { return m[key](is); } private: static std::map<int, Func> m; }; std::map<int, Factory::Func> Factory::m; // static member definition ``` Then in each class derived from Object, add() is called with the appropriate key and factory function. To create the objects, you just need something like this: ``` while (cin) { int key; cin >> key; Object* obj = Factory::create(key, cin); // do something with obj } ```
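The registration call in each derived class that the answer above mentions can be made automatic with a file-scope initializer. This is an untested-in-production sketch of that idea; the `Object` base, the `Ping` message type, and the key value `1` are all made up for illustration, and there is no error handling for unknown keys:

```cpp
#include <cassert>
#include <istream>
#include <map>
#include <memory>
#include <sstream>
#include <string>

// Minimal base class for anything the factory can build.
struct Object {
    virtual ~Object() {}
    virtual std::string name() const = 0;
};

class Factory {
public:
    typedef Object* (*Func)(std::istream&);
    static void add(int key, Func f) { table()[key] = f; }
    static Object* create(int key, std::istream& is) { return table()[key](is); }
private:
    // Meyers-singleton map avoids the static-initialization-order problem
    // when derived classes register themselves at file scope.
    static std::map<int, Func>& table() {
        static std::map<int, Func> m;
        return m;
    }
};

// A hypothetical message type that registers itself before main() runs.
struct Ping : Object {
    int payload;
    static Object* make(std::istream& is) {
        Ping* p = new Ping;
        is >> p->payload;  // deserialize the rest of the message here
        return p;
    }
    std::string name() const { return "Ping"; }
};

// File-scope registration: runs during static initialization.
static const bool ping_registered = (Factory::add(1, &Ping::make), true);
```

With this in place the receive loop never names a concrete type; adding a new message type means adding one class and one registration line.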
What you're describing here is termed the [factory](http://en.wikipedia.org/wiki/Factory_method_pattern) pattern. A variant is the [builder](http://en.wikipedia.org/wiki/Builder_pattern) pattern.
creating objects dynamically in C++
[ "c++", "serialization" ]
Disclaimer: I'm fairly new to python! If I want all the lines of a file until (edit: and including) the line containing some string `stopterm`, is there a way of using the list syntax for it? I was hoping there would be something like: ``` usefullines = [line for line in file until stopterm in line] ``` For now, I've got ``` usefullines = [] for line in file: usefullines.append(line) if stopterm in line: break ``` It's not the end of the world, but since the rest of Python syntax is so straightforward, I was hoping for a 1 thought->1 Python line mapping.
``` import re from itertools import takewhile usefullines = takewhile(lambda x: not re.search(stopterm, x), lines) from itertools import takewhile usefullines = takewhile(lambda x: stopterm not in x, lines) ``` Here's a way that keeps the stopterm line: ``` def useful_lines(lines, stopterm): for line in lines: if stopterm in line: yield line break yield line usefullines = useful_lines(lines, stopterm) # or... for line in useful_lines(lines, stopterm): # ... do stuff pass ```
" I was hoping for a 1 thought->1 Python line mapping." Wouldn't we all love a programming language that somehow mirrored our natural language? You can achieve that, you just need to define your unique thoughts once. Then you have the 1:1 mapping you were hoping for. ``` def usefulLines( aFile ): for line in aFile: yield line if stopterm in line: break ``` Is pretty much it. ``` for line in usefulLines( aFile ): # process a line, up to and including the one containing stopterm pass ``` There are more general approaches. The `lassevk` answers with `enum_while` and `enum_until` are generalizations of this simple design pattern.
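The "yield until and including" generator from the answers above can be packaged as a small reusable helper. This is a sketch; the name `take_until` and the sample data are invented here:

```python
def take_until(iterable, predicate):
    """Yield items up to AND including the first one matching predicate."""
    for item in iterable:
        yield item
        if predicate(item):
            return  # stop after yielding the matching item

# Illustrative usage with fake "file lines":
lines = ["first", "second STOP here", "never seen"]
kept = list(take_until(lines, lambda line: "STOP" in line))
# kept == ["first", "second STOP here"]
```

Taking a predicate instead of hard-coding `stopterm in line` keeps the helper as general as `itertools.takewhile`, while still including the terminating line.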
Python: item for item until stopterm in item?
[ "python" ]
I added a `get_absolute_url` function to one of my models. ``` def get_absolute_url(self): return '/foo/bar' ``` The admin site picks it up and adds a "view on site" link to the detail page for that object (when I put a real URL there instead of "/foo/bar"). The problem is instead of going to `http://localhost:8000/foo/bar`, it goes to `http://example.com/foo/bar`. What am I doing wrong?
You have to change [default site](http://docs.djangoproject.com/en/dev/ref/contrib/sites/#ref-contrib-sites) domain value.
The funniest thing is that "example.com" appears in an obvious place. Yet I was looking for it for an hour or so. Just use your admin interface -> Sites -> ... there it is :)
Django Admin's "view on site" points to example.com instead of my domain
[ "python", "django", "django-admin" ]
I have a form action that needs to have its value set from a variable. I need to set the variable once and it will be reflected many times throughout the DOM. So: variable = "something.html"; ...
This will cause all FORMs to get the variable action: ``` <script src="jquery-1.2.6.pack.js"></script> <script> $(document).ready(function() { var variable = "something.html"; $('form').attr("action", variable); }); </script> ```
You can then change the form action to be equal to the variable name. Something like the following. ``` var variableName = "myform.htm"; this.form.action = variableName; ``` Edit, you can also do this with other elements, just like you would with any other piece of javascript, using things such as "getElementById()" to get the items from the DOM
print or echo a javascript variable, not to the document, but to be the value, or part of the value, to a DOM element, specifically a form action
[ "javascript", "html", "dom" ]
I have this SQL query: ``` SELECT * FROM IMAGES WHERE IMAGENAME in ('IMG1', 'IMG2', 'IMG3', 'IMG4', 'IMG5', 'IMG6') ORDER BY CASE IMAGENAME WHEN 'IMG1' THEN 1 WHEN 'IMG2' THEN 2 WHEN 'IMG3' THEN 3 WHEN 'IMG4' THEN 4 WHEN 'IMG5' THEN 5 WHEN 'IMG6' THEN 6 ELSE 7 END ``` I cannot guarantee that the list of IMAGENAMEs will be in alphabetical order, hence the case statement, but I would prefer to sort in the DB rather than in code because I trust their sorting code better than mine :) SQL Server's query analysis shows that 78% of the execution time is spent sorting - can I reduce this? It needs to be fairly vanilla SQL as we target SQL Server and Oracle. Any tuning advice would be fantastic.
I'm not sure if this is available in your target databases, but hopefully it can at least serve as inspiration for a better approach. In MySQL, you would do what you want with the FIELD() function like this: ``` SELECT * FROM IMAGES WHERE IMAGENAME IN ('IMG1', 'IMG2', 'IMG3', 'IMG4', 'IMG5', 'IMG6') ORDER BY FIELD(IMAGENAME, 'IMG1', 'IMG2', 'IMG3', 'IMG4', 'IMG5', 'IMG6'); ``` The FIELD() function takes the first parameter, and then returns an integer based on its position in the list of remaining parameters, or zero if it does not exist in the parameter list. This enables you to create a completely custom ordering for your query. <http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_field> I would expect something similar might be available in SQL Server or Oracle.
``` SELECT * FROM IMAGES WHERE IMAGENAME in ('IMG1', 'IMG2', 'IMG3', 'IMG4', 'IMG5', 'IMG6') ORDER BY IMAGENAME ASC ``` You don't need to strip the imagename; you're only selecting the few cases noted above, and ORDER BY can order strings well enough. Also, the 'else' condition makes no sense, because it would never be used, since you stripped out all other options in the where clause. Note: this is based on the query you posted; if there are more conditions or possibilities, you should give them as well.
Is there a more efficient way to do this SQL Select?
[ "sql", "sorting", "performance" ]
I am currently working in C#, and I need to insert a new record into one table, get the new primary key value, and then use that as a foreign key reference in inserting several more records. The Database is MS SQL Server 2003. All help is appreciated!
The way to get the identity of the inserted row is with the `SCOPE_IDENTITY()` function. If you're using stored procedures then this would look something like the following to return the row identity as an output parameter. ``` CREATE PROCEDURE dbo.MyProcedure ( @RowId INT = NULL OUTPUT ) AS INSERT INTO MyTable ( Column1 ,Column2 ,... ) VALUES ( @Param1 ,@Param2 ,... ); SET @RowId = SCOPE_IDENTITY(); ``` You can then use this value for any subsequent inserts (alternatively, if you can pass the data all into the stored procedure, then you can use it in the remainder of the procedure body). If you're passing the SQL in dynamically then you use much the same technique, but with a single string with statement delimiters (also `;` in SQL), e.g.: ``` var sql = "INSERT INTO MyTable (Column1, Column2, ...) VALUES (@P1, @P2, ...);" + "SELECT SCOPE_IDENTITY();"; ``` Then if you execute this using `ExecuteScalar` you'll be able to get the identity back as the scalar result and cast it to the right type. Alternatively you could build up the whole batch in one go, e.g. ``` var sql = "DECLARE @RowId INT;" + "INSERT INTO MyTable (Column1, Column2, ...) VALUES (@P1, @P2, ...);" + "SET @RowId = SCOPE_IDENTITY();" + "INSERT INTO MyOtherTable (Column1, ...) VALUES (@P3, @P4, ...);"; ``` This may not be *exactly* the right syntax, and you may need to use `SET NOCOUNT ON;` at the start (my mind is rusty as I rarely use dynamic SQL) but it should get you on the right track.
The best way of doing this is to use the SCOPE\_IDENTITY() function in T-SQL. This should be executed as part of the insert, i.e.: ``` SqlCommand cmd = new SqlCommand(@" INSERT INTO T (Name) VALUES(@Name) SELECT SCOPE_IDENTITY() As TheId", conn); cmd.Parameters.Add("@Name", SqlDbType.VarChar, 50).Value = "Test"; int tId = Convert.ToInt32(cmd.ExecuteScalar()); ``` (Note that SCOPE\_IDENTITY() returns a numeric, so `Convert.ToInt32` is safer than a direct cast.) Alternatively you can assign SCOPE\_IDENTITY() to a variable to be used in successive statements, e.g.: ``` DECLARE @T1 int INSERT INTO T (Name) VALUES('Test') SELECT @T1 = SCOPE_IDENTITY() INSERT INTO T2 (Name, TId) VALUES('Test', @T1) ```
Getting autonumber primary key from MS SQL Server
[ "c#", "sql-server", "autonumber" ]
I want to validate a set of credentials against the domain controller. e.g.: ``` Username: STACKOVERFLOW\joel Password: splotchy ``` ## Method 1. Query Active Directory with Impersonation A lot of people suggest querying the Active Directory for something. If an exception is thrown, then you know the credentials are not valid - as is suggested in [this stackoverflow question](https://stackoverflow.com/questions/290548/c-validate-a-username-and-password-against-active-directory). There are some serious [drawbacks to this approach](http://bytes.com/groups/net-c/249893-fyi-easy-way-validate-ad-credentials-win2k-using-c) however: 1. You are not only authenticating a domain account, but you are also doing an implicit authorization check. That is, you are reading properties from the AD using an impersonation token. What if the otherwise valid account has no rights to read from the AD? By default all users have read access, but domain policies can be set to disable access permissions for restricted accounts (and/or groups). 2. Binding against the AD has a serious overhead, the AD schema cache has to be loaded at the client (ADSI cache in the ADSI provider used by DirectoryServices). This consumes both network and AD server resources - and is too expensive for a simple operation like authenticating a user account. 3. You're relying on an exception for a non-exceptional case, and assuming that means invalid username and password. Other problems (e.g. network failure, AD connectivity failure, memory allocation error, etc.) are then misinterpreted as authentication failure. ## Method 2. LogonUser Win32 API [Others](https://stackoverflow.com/questions/290548/c-validate-a-username-and-password-against-active-directory#290558) have suggested using the [`LogonUser()`](http://msdn.microsoft.com/en-us/library/aa378184.aspx) API function. 
This sounds nice, but unfortunately the calling user sometimes needs a permission usually only given to the operating system itself: > The process calling LogonUser requires > the SE\_TCB\_NAME privilege. If the > calling process does not have this > privilege, LogonUser fails and > GetLastError returns > ERROR\_PRIVILEGE\_NOT\_HELD. > > In some > cases, the process that calls > LogonUser must also have the > SE\_CHANGE\_NOTIFY\_NAME privilege > enabled; otherwise, LogonUser fails > and GetLastError returns > ERROR\_ACCESS\_DENIED. This privilege is > not required for the local system > account or accounts that are members > of the administrators group. By > default, SE\_CHANGE\_NOTIFY\_NAME is > enabled for all users, but some > administrators may disable it for > everyone. Handing out the "*Act as a part of the operating system*" privilege is not something you want to do willy-nilly - as Microsoft points out in a [knowledge base article](http://support.microsoft.com/kb/180548): > ...the process that is calling > LogonUser must have the SE\_TCB\_NAME > privilege (in User Manager, this is > the "*Act as part of the Operating > System*" right). The SE\_TCB\_NAME > privilege is very powerful and > **should not be granted to any arbitrary user just so that they can > run an application** that needs to > validate credentials. Additionally, a call to `LogonUser()` will fail if a blank password is specified. --- What is the proper way to authenticate a set of domain credentials? --- I *happen* to be calling from managed code, but this is a general Windows question. It can be assumed that the customers have the .NET Framework 2.0 installed.
C# in .NET 3.5 using [System.DirectoryServices.AccountManagement](http://msdn.microsoft.com/en-us/library/system.directoryservices.accountmanagement.aspx). ``` bool valid = false; using (PrincipalContext context = new PrincipalContext(ContextType.Domain)) { valid = context.ValidateCredentials( username, password ); } ``` This will validate against the current domain. Check out the parameterized PrincipalContext constructor for other options.
Install `System.DirectoryServices.AccountManagement` from NuGet Package Manager and then: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Security; using System.DirectoryServices.AccountManagement; public struct Credentials { public string Username; public string Password; } public class Domain_Authentication { public Credentials Credentials; public string Domain; public Domain_Authentication(string Username, string Password, string SDomain) { Credentials.Username = Username; Credentials.Password = Password; Domain = SDomain; } public bool IsValid() { using (PrincipalContext pc = new PrincipalContext(ContextType.Domain, Domain)) { // validate the credentials return pc.ValidateCredentials(Credentials.Username, Credentials.Password); } } } ```
How to validate domain credentials?
[ "c#", "windows", "security", "authentication" ]
I know that you cannot return anonymous types from methods but I am wondering how the Select extension method returns an anonymous type. Is it just a compiler trick? Edit Suppose L is a List. How does this work? ``` L.Select(s => new { Name = s }) ``` The return type is IEnumerable<'a> where 'a = new {String Name}
The type is actually defined by *the caller*, so it's in the scope of the calling function - neatly avoiding the issue of "returning" an anonymous type. This is accomplished by generic type inference. The signature for [Select](http://msdn.microsoft.com/en-us/library/bb548891.aspx) is `Select<TSource, TResult>(IEnumerable<TSource>, Func<TSource, TResult>)`. The `IEnumerable<TSource>` is, obviously, the source collection. The `Func<TSource, TResult>` transformation function is where the compiler can use type inference to declare an anonymous type. In other words, in order to pass a `Func<TSource, TResult>` to `Select`, you - the caller - must define `TResult`. Which means `Select` isn't returning an anonymous type defined by it - but by you. To emulate this, you just have to get the caller to define the type: ``` TResult ReturnAnonymousType<TResult>(Func<TResult> f) { return f(); } Console.WriteLine(ReturnAnonymousType( () => new { Text = "Hello World!" } // type defined here, before calling )); ```
Well, it's normal type inference for generic method type arguments. For instance: ``` List<string> x = new List<string>(); // The compiler converts this: x.Select(y => y.Length); // Into this, using type inference: Enumerable.Select<string, int>(x, y => y.Length); ``` The same would be true if `x` were a list of some anonymous type, or if the inferred return type of the lambda expression were an anonymous type. Don't forget that even though you can't explicitly state the type of a variable which uses an anonymous type, it still does *have* a definite type, known to the compiler.
Methods and Anonymous Types
[ "c#", "extension-methods", "anonymous-types", "anonymous-methods" ]
I have some low level serialization code that is templated, and I need to know the system's endianness at compile time obviously (because the templates specialize based on the system's endianness). Right now I have a header with some platform defines, but I'd rather have some way to make assertions about endianness with some templated test (like a static\_assert or boost\_if). Reason being my code will need to be compiled and run on a wide range of machines, from many specialized vendors, and probably on devices that don't exist in 2008, so I can't really guess what might need to go into that header years down the road. And since the code-base has an expected lifetime of about 10 years, I can't follow the code forever. Hopefully this makes my situation clear. So does anyone know of a compile-time test that can determine endianness, without relying on vendor specific defines?
If you're using autoconf, you can use the `AC_C_BIGENDIAN` macro, which is fairly guaranteed to work (setting the `WORDS_BIGENDIAN` define by default). Alternatively, you could try something like the following (taken from autoconf) to get a test that will probably be optimized away (GCC, at least, removes the other branch): ``` int is_big_endian() { union { long int l; char c[sizeof (long int)]; } u; u.l = 1; if (u.c[sizeof(long int)-1] == 1) { return 1; } else return 0; } ```
There is no portable way to do this at compile time, your best bet is probably to use the [Boost](http://www.boost.org/) [endian macros](http://www.boost.org/doc/libs/1_36_0/boost/detail/endian.hpp) or emulate the methods they use.
Is there a way to do a C++ style compile-time assertion to determine machine's endianness?
[ "c++", "templates", "metaprogramming", "endianness" ]
Why are there not more open-source, easy-to-use charting libraries for Java? The only successful open-source project in this area seems to be JFreeChart, and it doesn't even have any documentation or examples available.
There is [charts4j](https://github.com/julienchastang/charts4j) which is a charts and graphs API. It enables developers to programmatically create the charts available in the [Google Chart API](http://code.google.com/apis/chart/) through a straightforward and intuitive Java API. Disclaimer: I wrote [charts4j](https://github.com/julienchastang/charts4j). We will be doing another major release in the next few weeks.
There aren't a lot of them because they would be in competition with [JFreeChart](http://www.jfree.org/jfreechart/), and it's awesome. You can get documentation and examples by downloading the [developer's guide](http://www.object-refinery.com/jfreechart/guide.html). There are also tons of free online tutorials if you search for them. * [JFreeChart - Tutorial](http://www.vogella.com/tutorials/JFreeChart/article.html) * [Introduction to JFreeChart](http://www.codeproject.com/Articles/650480/Introduction-to-JFreeChart)
What is the best open-source java charting library? (other than jfreechart)
[ "java", "charts", "plot" ]
If I'm using ConcurrentHashMap (where put is thread-safe), and I supply a public function myPut that uses the ConcurrentHashMap put - do I need to synchronize my function? Meaning: should this be synchronized? ``` ConcurrentHashMap<Integer, Integer> map; public void myPut(int something) { this.map.put(something, something); } ```
Concurrency utilities such as ConcurrentHashMap are designed so that you don't need to synchronize: they'll handle thread-safe access internally. What Tom says is true, that you need to think about the potential of the map reference to change. If the reference actually *doesn't* change, then in practice you'll get away with it here: the internal synchronization of ConcurrentHashMap -- and indeed the java.util.concurrent library in general -- guarantees that objects put into the map are safely published to other threads. But I would agree that even so, it is good practice to decide whether the reference can change or not, and then explicitly state this in the code ('final' if it can't; something like 'volatile' or an AtomicReference if it can).
Because the `map` reference is not declared `final` it may be changed. Therefore, there is a potential threading bug here. If `map` is supposed to be a mutable reference then you will need to do some more work. Otherwise use `final`. Indeed, use `final` whenever you can, even if it is "easier" not to. "`final` is the new [old] `private`." You probably want to make `map` `private` and generic too.
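Putting the two answers above together, a sketch of the recommended shape might look like this (the class name `SafeRegistry` and its key/value types are invented for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;

// The map reference is final, so it can never change; the delegating
// method needs no synchronized keyword because ConcurrentHashMap makes
// each put/get atomic and safely published on its own.
class SafeRegistry {
    private final ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();

    public void myPut(int key, String value) {
        map.put(key, value); // already thread-safe
    }

    public String get(int key) {
        return map.get(key); // null if absent
    }
}
```

Note that this only covers single operations; a compound action such as check-then-put still needs `putIfAbsent` or `compute` rather than external locking.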
Does a call to a threadsafe function need to be syncronized too?
[ "java", "synchronization", "concurrenthashmap" ]
Is it possible (by using the stock C# TreeView) to have multiline TreeNodes? Also, is it possible to add control characters to a TreeNode's text, e.g. '\t'? This same effect could also be achieved by adding columns to the TreeNode. Is this possible?
I do not believe this is possible using WinForms, at least without drawing your own nodes. You may be able to use an "owner-drawn" treenode to accomplish this in WinForms, though, I don't know. This is natively supported with WPF, though.
It isn't very clear from your question whether you are on ASP.NET or WinForms. If you're in ASP.NET you could try using the PRE tag to get formatting...? With that said, the Ra-Ajax TreeView, which you can see an example of [here](http://ra-ajax.org/samples/Viewport-RSS-Starter-Kit.aspx), can have any arbitrary amount of HTML and/or even controls within it, which mostly makes it way superior to the built-in TreeView in ASP.NET. (Disclaimer: I work with Ra-Ajax.) This makes it possible for you to format the contents of the TreeView exactly as you wish. Though if you go completely berserk you still run the risk of creating "display artifacts" due to too much space or something like that...
c# Treeview Node with formatted text e.g. tabs & new lines
[ "c#", "treeview", "format", "treenode" ]
I am trying to use the range property of the jQuery slider so that the slider control displays two handles from which the user can select a price range for real estate. The code I have is: ``` $("#price").slider({ range: true, minValue: 0, maxValue: 2000000, change: function(e, ui) { var range = (Math.round(ui.range) * 10) + " to " + ui.value; $("#pricedesc").text(range); } }); ``` The price range should be from $0 to $2,000,000. When I slide the handles on the slider though I get unusual values such as "690 to 13". How exactly is the double handle slider meant to work?
To access the slider handle values in a double handled slider you need to access them from the [slider( "value", index )](http://docs.jquery.com/UI/Slider/slider#slider.28.C2.A0.22value.22.2C.C2.A0index_.29) function. Try the following code: ``` $(document).ready(function(){ $("#price").slider( { range: true, min: 0, max: 2000000, change: function(e,ui) { alert($("#price").slider("value", 0) + ' - ' + $("#price").slider("value", 1) ); }}); $("#price").slider("moveTo", 500000, 1); }); ```
``` <script type="text/javascript"> var str; $(function() { $("#slider-range").slider({ range: true, min: 250, max: 2500, values: [500, 1000], slide: function(event, ui) { $("#amount").val('Rs' + ui.values[0] + ' - Rs' + ui.values[1]); } }); $("#amount").val('Rs' + $("#slider-range").slider("values", 0) + ' - Rs' + $("#slider-range").slider("values", 1)); //document.getElementById('valueofslide').value = arrIntervals[ui.values[1]]; }); </script> in html <div id="Priceslider" class="demo" style="margin-top:5px; " > <%--<Triggers> <asp:AsyncPostBackTrigger ControlID="Chk1" /> </Triggers>--%> <asp:UpdatePanel ID="UpdatePanel2" runat="server"> <ContentTemplate> <asp:TextBox ID="amount" runat="server" style="border:0; color:#f6931f; font-weight:bold;margin-bottom:7px;" OnTextChanged="amount_TextChanged" AutoPostBack="True"></asp:TextBox> </ContentTemplate> </asp:UpdatePanel> <div id="slider-range"></div> <asp:TextBox ID="valueofslide" runat="server" AutoPostBack="True"></asp:TextBox> </div> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" AllowPaging="True" PageSize="5" Width="555px" onpageindexchanging="GridView1_PageIndexChanging"> <Columns> <asp:TemplateField> <ItemTemplate> <div class="propertyName"> <asp:CheckBox ID="chkProperty" runat="server" Text='<%# Eval("PropertyName") %>' />, <asp:Label ID="lblLocation" runat="server" Text='<%# Eval("PropertyLocality") %>'></asp:Label>, <asp:Label ID="lblCity" runat="server" Text='<%# Eval("CityName") %>'></asp:Label> </div> <div class="property-image"> <asp:Image ID="Image1" runat="server" ImageUrl='<%# Eval("PhotoPath") %>' Height="100" Width="100" /> &nbsp; </div> <div> <div style="float: left; width: 380px; margin: 10px; border: thin solid black;"> <div style="height: 80px; width: 80px; border: 1px solid; float: right; margin-top: 10px; margin-right: 10px;"> <font size="2">Weekdays Price:<span id="weekdayPrice6"><%# Eval("WeekdayPrice")%></span></font><br> <font size="2">Weekend Price: <span 
id="weekendPrice6"><%# Eval("WeekendPrice")%></span></font><br> <input name="getamt" value="Get your amount" style="font-size: 8px;" type="button"> </div> <div style="float: right; width: 280px;"> <input name="Map" value="Map" onclick="showPropertyMap(6)" type="button"> <input name="availability" value="Check Availability" onclick="showPropertyAvailabilityCalender(6)" type="button"><br> Ratings : <img src="images/star<%# Eval("PropertyRating") %>.PNG" alt="'<%# Eval("PropertyRating") %>'"/> (Votes : <span></span>) <br> View <span></span> times, <span> <asp:Label ID="Label1" runat="server" Text='<%# Eval("NumberOfReviews") %>'></asp:Label></span> Reviews<br> <span></span><%# Eval("PropertyRecommended")%> % Recommend<br> Check in <%# Eval("CheckinTime") %> Check out <%# Eval("CheckoutTime")%><br> <div id='<%# Eval("PropertyId") %>' class="property"> <%-- <input name="Book" value="Book" type="button">--%> <asp:Button ID="Book" runat="server" Text="Book" OnClientClick="return retrivPropertyId(this);" onclick="Book_Click"/> <input name="Save" value="Save" type="button"> <input name="Details" value="Details" type="button" onclick="return retreivePId(this);"> <asp:Button ID="Contact" runat="server" Text="Contact" OnClientClick="return retreivePropId(this);" onclick="Contact_Click" /> <br> </div> </div> </div> </div> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> ```
jQuery slider range
[ "javascript", "jquery", "slider" ]
I have a binary field in my database that is hard to describe in a UI using a single "Is XXXX?"-type checkbox. I'd rather use a pair of radio buttons (e.g. "Do it the Foo way" and "Do it the Bar way"), but right now all the other fields on my form are data-bound to a business object. I'd like to data-bind the pair of radio buttons to the business object as well, but haven't come up with a good way to do it yet. I can bind one of the buttons to the field, such that the field is set "true" if the button is selected, but while selecting the other button does de-select the first one (that is, the two radio buttons are properly paired), the value of the field does not update to reflect this. I'd like to be able to say ``` button1.DataBindings.Add(new Binding("checked", source, "useFoo")); button2.DataBindings.Add(new Binding("checked", source, "!useFoo")); ``` but I'm pretty sure that will throw when it runs. Is there an easier way, or should I just put more thought into how to word a single checkbox? I don't want to add extra functions to handle something this trivial... ETA: A commenter has suggested considering a dropdown (ComboBox). I had thought about this, but how would I data-bind that to a boolean field in a database/Property in a business object? If I bind the SelectedItem to the useFoo property, what would go in the Items collection? Would I have to add just "True" and "False", or could I somehow add a key/value pair object that ties a displayed item ("Use Foo" / "Do Not Use Foo") to the boolean value behind it? I'm having trouble finding docs on this. --- About the answer: the solution I wound up using involved modifying the business object -- the basic idea is very similar to the one posted by Gurge, but I came up with it separately before I read his response. In short, I added a separate property that simply returns `!useFoo`. One radio button is bound to `source.UseFoo`, and the other is bound to `source.UseBar` (the name of the new property). 
It's important to make sure the new property has both getters and setters, or you'll wind up with really odd behavior.
I have found a way of doing this using DataSet/DataTable. I make a calculated column in the DataTable with the expression `IIF(Foo=true, false, true)`. Let's call that column `Bar`. `Bar` is of type Boolean. Now you can bind one `RadioButton.Checked` to `Foo` and one to `Bar`. To get `Bar` checking/unchecking to propagate back to `Foo` you must go to the generated DataTable code and add one line, the last one in this sample: ``` [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] public bool Bar { get { try { return ((bool)(this[this.tableradio.BarColumn])); } catch (global::System.InvalidCastException e) { throw new global::System.Data.StrongTypingException("The value for column \'Bar\' in table \'radio\' is DBNull.", e); } } set { this[this.tableradio.BarColumn] = value; this[this.tableradio.FooColumn] = !value; } } ```
1. Bind the `RadioButton` that is directly linked to your boolean value (ie is checked when the value is `true`). 2. Add an event handler to the `CheckedChanged` event on this `RadioButton` that looks like the following : ``` private void radioButton_CheckedChanged(object sender, EventArgs e) { foreach (Binding b in ((Control)sender).DataBindings) b.WriteValue(); } ```
How do I use databinding with Windows Forms radio buttons?
[ "c#", "winforms", "data-binding", "radio-button" ]
I am hoping to dynamically update a ModelForm's inline Meta class from my view. Although this code seems to update the exclude list in the Meta class, the output from `as_p()`, `as_ul()`, etc does not reflect the updated Meta exclude. I assume then that the html is generated when the ModelForm is created not when the `as_*()` is called. Is there a way to force the update of the HTML? Is this even the best way to do it? I just assumed this *should* work. Thoughts? ``` from django.forms import ModelForm from testprogram.online_bookings.models import Passenger class PassengerInfoForm(ModelForm): def set_form_excludes(self, exclude_list): self.Meta.exclude = exclude_list class Meta: model = Passenger exclude = [] ```
The Meta class is used to dynamically construct the form definition - so by the time you've created the ModelForm instance, the fields not in the exclude have already been added as the new object's attributes. The normal way to do it would be to just have multiple class definitions for each possible exclude list. But if you want the form itself to be dynamic, you'll have to create a class definition on the fly. Something like: ``` def get_form(exclude_list): class MyForm(ModelForm): class Meta: model = Passenger exclude = exclude_list return MyForm form_class = get_form(('field1', 'field2')) form = form_class() ``` **UPDATE**: I just revisited this post and thought I'd post a little more idiomatic way to handle a dynamic class: ``` def PassengerForm(exclude_list, *args, **kwargs): class MyPassengerForm(ModelForm): class Meta: model = Passenger exclude = exclude_list def __init__(self): super(MyPassengerForm, self).__init__(*args, **kwargs) return MyPassengerForm() form = PassengerForm(('field1', 'field2')) ```
Another way: ``` class PassengerInfoForm(ModelForm): def __init__(self, *args, **kwargs): exclude_list=kwargs.pop('exclude_list', '') super(PassengerInfoForm, self).__init__(*args, **kwargs) for field in exclude_list: del self.fields[field] class Meta: model = Passenger form = PassengerInfoForm(exclude_list=['field1', 'field2']) ```
Dynamically update ModelForm's Meta class
[ "", "python", "django", "django-forms", "metaprogramming", "modelform", "" ]
In C#, if I declare an auto-implemented property, why do I have to declare BOTH the get and set part? i.e. ``` public string ThisWorks { get; set; } public string ThisDoesnt { get; } ``` Isn't this just syntactic sugar - i.e. the compiler inserts a private field for the property? So why the problem? Curious.
If you didn't have a setter - then how would you ever set the property? Incidentally, you can specify the accessibility, eg: ``` public string Foo { get; private set; } ```
Without a setter, you would never be able to provide a value - as you don't have any way of specifying the backing variable's name. I've requested a readonly automatic property, declared like this: ``` public string ReadonlyProperty { get; readonly set; } ``` which would create a readonly backing variable, a property with only a getter, and translate all calls to the setter into direct access to the variable. You could only call the setter within the constructor - just like for normal readonly variables. We'll see whether this request does any good... it's a real shame it's not in there at the moment, as it makes it harder to implement immutable types than mutable types :(
Why do automatic properties require both getters AND setters?
[ "", "c#", ".net", "automatic-properties", "" ]
Whats this syntax useful for : ``` function(String... args) ``` Is this same as writing ``` function(String[] args) ``` with difference only while invoking this method or is there any other feature involved with it ?
The only difference between the two is the way you call the function. With String var args you can omit the array creation. ``` public static void main(String[] args) { callMe1(new String[] {"a", "b", "c"}); callMe2("a", "b", "c"); // You can also do this // callMe2(new String[] {"a", "b", "c"}); } public static void callMe1(String[] args) { System.out.println(args.getClass() == String[].class); for (String s : args) { System.out.println(s); } } public static void callMe2(String... args) { System.out.println(args.getClass() == String[].class); for (String s : args) { System.out.println(s); } } ```
The difference is only when invoking the method. The second form must be invoked with an array, the first form can be invoked with an array (just like the second one, yes, this is valid according to Java standard) or with a list of strings (multiple strings separated by comma) or with no arguments at all (the second one always must have one, at least null must be passed). It is syntactically sugar. Actually the compiler turns ``` function(s1, s2, s3); ``` into ``` function(new String[] { s1, s2, s3 }); ``` internally.
difference fn(String... args) vs fn(String[] args)
[ "", "java", "variadic-functions", "" ]
I need to read data added to the end of an executable from within that executable . On win32 I have a problem that I cannot open the .exe for reading. I have tried CreateFile and std::ifstream. Is there a way of specifying non-exclusive read access to a file that wasn't initially opened with sharing. EDIT- Great thing about stackoverflow, you ask the wrong question and get the right answer.
Why not just use resources which are designed for this functionality. It won't be at the end, but it will be in the executable. If you are adding to the .exe after it is built -- you don't have to add to the end, you can update resources on a built .exe <http://msdn.microsoft.com/en-us/library/ms648049(VS.85).aspx>
We do this in one of our projects. What's the problem with it? If the EXE is running, then it's already held open for reading, and you can continue to open it read-only multiple times. I just checked our code, we just use: ``` HANDLE file=CreateFile(filename, GENERIC_READ, FILE_SHARE_READ, 0, OPEN_EXISTING, 0, 0); ``` This works without problem on all versions of 32- and 64-bit Windows to date.
Executable reading itself
[ "", "c++", "winapi", "file", "" ]
I know that new-ing something in one module and delete-ing it in another can often cause problems in VC++. Problems with different runtimes. Mixing modules with staticly linked runtimes and/or dynamically linked versioning mismatches both can screw stuff up if I recall correctly. **However, is it safe to use VC++ 2008's std::tr1::shared\_ptr across modules?** Since there is only one version of the runtime that even knows what what a shared\_ptr is, static linking is my only danger (for now...). I thought I've read that boost's version of a shared\_ptr was safe to use like this, but I'm using Redmond's version... I'm trying to avoid having a special call to free objects in the allocating module. (or something like a "delete this" in the class itself). If this all seems a little hacky, I'm using this for unit testing. If you've ever tried to unit test existing C++ code you can understand how ***creative*** you need to be at times. My memory is allocated by an EXE, but ultimately will be freed in a DLL (if the reference counting works the way I think it does).
Freeing the memory is safe, so long as it all came from the same *memory management* context. You've identified the most common issue (different C++ runtimes); having separate heaps is another less-common issue you can run into. Another issue which you didn't mention, but which can be exascerbated by shared pointers, is when an object's code exists in the DLL and is created by the DLL, but another object outside the DLL ends up with a reference to it (via shared pointer). If that object is destroyed after the DLL is unloaded (for example, if it's a module-level static, or if the DLL is explicitly unloaded by `FreeLibrary()`, the shared object's destructor will crash. This can bite you if you attempt to write DLL-based, loosely-coupled plugins. It's also the reason that COM lets DLLs decide when they *can* be unloaded, rather than letting COM servers demand-unload them.
You're beginning to see how incredibly amazing `shared_ptr` is :) Being safe across DLL boundaries is exactly what `shared_ptr` was designed to be (among other things, of course). Contrary to what others have said, you don't even need to pass a custom deleter when constructing the `shared_ptr`, as the default is already something like ``` template <typename T> struct default_deleter { void operator()( T * t ) { delete t; } }; ``` and ``` shared_ptr<Foo> foo( new Bar ); ``` is equivalent to ``` shared_ptr<Foo> foo( new Bar, default_deleter<Bar>() ); ``` (ie. there's no such thing as a `shared_ptr` without a deleter). Because of the type erasure performed on the deleter, the `delete` that's called will *always* be the one from the DLL that *instantiated* the `shared_ptr`, never the one from the DLL where the last `shared_ptr` goes out of scope (ie. the `shared_ptr` invoking the deleter will call it through a pointer to a function put there by the original `shared_ptr`). Compare this to `auto_ptr`, which embeds the `delete` operator directly in its (inline) destructor, which means that the `delete` of the DLL that *destroys* the `auto_ptr` is used, creating the same problems as deleting a naked pointer. By the same technique, polymorphic classes that are always held in `shared_ptr`s don't even need a virtual destructor, because the deleter will always call the right destructor, even when the last `shared_ptr` to go out of scope is one instantiated for the base class.
Is it safe to use STL (TR1) shared_ptr's between modules (exes and dlls)
[ "", "c++", "visual-c++", "memory-management", "stl", "shared-ptr", "" ]
I have written a CGI script that creates an image dynamically using GET data. To include this image in my webpage, I am using the following code: ``` <img src="image.py?text=xxxxxxxxxxxxxx"> ``` The problem is that I expect in the future the "text" field will get very long and the URL will become too large. From Googling around there doesn't seem to be a fixed limit on URL length (ie. depends on the browser, server, proxy, etc.) Is there a better way to do this? If it matters, I am working with Django and Python and I cannot use any client-side scripting (ie. JavaScript). Cheers, Ben
Store the text somewhere (e.g. a database) and then pass through the primary key.
This will get you an Image as the result of a POST -- you may not like it 1. Put an iFrame where you want the image and size it and remove scrollbars 2. Set the src to a form with hidden inputs set to your post parameters and the action set to the URL that will generate the image 3. submit the form automatically with JavaScript in the body.onload of the iFrame's HTML Then, either: 4. Serve back an content-type set to an image and stream the image bytes or: 5. store the post parameters somewhere and generate a small id 6. serve back HTML with an img tag using the id in the url -- on the server look up the post parameters or: 7. generate a page with an image tag with an embedded image <http://danielmclaren.net/2008/03/embedding-base64-image-data-into-a-webpage>
Including a dynamic image in a web page using POST?
[ "", "python", "html", "django", "cgi", "image", "" ]
Is there anything I should know before converting a large C++ program from VS2005 to VS2008?
I'm working on this very problem right now. *Running WinMerge to see what I've changed...* OK, here is what I had to fix in an huge Win32/MFC client application: Some MFC functions have become virtual (which were not in the past - CWnd::GetMenu for one, if I recall correctly). Also something related to our legacy mouse wheel support (before Windows had built-in mouse wheel support) somehow broke (I just removed the feature, so I never really figured out why that broke). Some ATL methods (or method params) have changed to const that were not originally (screwed up my overrides). The Platform SDK is newer - be careful if you're setting the windows SDK version #defines correctly (we were not in all places - which was dumb). You may now be building with newer versions (Vista/2008) of Win32 structures. This didn't work so great on my XP box. STDMETHOD now includes \_\_declspec(nothrow) which is 100% right - except this found some problems in our code. Some interface that was written like it would be exposed through COM, but never was, threw exceptions. The IDE has a [bug](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=366060&wa=wsignin1.0) where disabled breakpoints don't show the hollow circle in the margin if you don't have the break points set to highlight the whole line (which I *think* is the default for VC++, maybe?). Most of these issues were due to subtle mistakes in our code or aggressive overloading of MFC/ATL libraries. Since everyone else's code is perfect, you should be fine ;)
If you have to support older platforms, beware: VC2008 is the first version that [can't target Win9x or NT4](https://stackoverflow.com/questions/213711/does-visual-studio-2008-support-windows-98). My company has to stick to VC2005 for that very reason.
Anything I should know before converting a large C++ program from VS2005 to VS2008?
[ "", "c++", "visual-studio", "visual-c++", "" ]
If I have a query to return all matching entries in a DB that have "news" in the searchable column (i.e. `SELECT * FROM table WHERE column LIKE %news%`), and one particular row has an entry starting with "In recent World news, Somalia was invaded by ...", can I return a specific "chunk" of an SQL entry? Kind of like a teaser, if you will.
``` select substring(column, CHARINDEX ('news',lower(column))-10, 20) FROM table WHERE column LIKE %news% ``` basically substring the column starting 10 characters before where the word 'news' is and continuing for 20. Edit: You'll need to make sure that 'news' isn't in the first 10 characters and adjust the start position accordingly.
You can use substring function in a SELECT part. Something like: ``` SELECT SUBSTRING(column, 1,20) FROM table WHERE column LIKE %news% ``` This will return the first 20 characters from column *column*
Is it possible to get the matching string from an SQL query?
[ "", "sql", "" ]
What's the difference between these two? Which is better/faster/reliable?
SCGI is a language-neutral means of connecting a front-end web server and a web application. WSGI is a Python-specific interface standard for web applications. Though they both have roots in CGI, they're rather different in scope and you could indeed quite reasonably use both at once, for example having a mod\_scgi on the webserver talk to a WSGI app run as an SCGI server. There are multiple library implementations that will run WSGI applications as SCGI servers for you (eg. wsgitools, cherrypy). They are both 'reliable', in as much as you can consider a specification reliable as opposed to a particular implementation. These days you would probably write your application as a WSGI callable, and consider the question of deployment separately. Maybe an Apache+mod\_wsgi (embedded) interface might be a *bit* faster than an Apache+mod\_scgi+(SCGI wrapper lib), but in all likelihood it's not going to be *hugely* different. More valuable is the ability to run the application on a variety of servers, platforms and connection standards.
SCGI (like FastCGI) is a (serialized) protocol suitable for inter-process communication between a web-server and a web-application. WSGI is a Python API, connecting two (or more) Python WSGI-compatible modules inside the same process (Python interpreter). One module represents the web-server (being either a Python in-process web-server implementation or a gateway to a web-server in another process via e.g. SCGI). The other module is or represents the web application. Additionally, zero or more modules between theses two modules, may serve as WSGI "middleware" modules, doing things like session/cookie management, content caching, authentication, etc. The WSGI API uses Python language features like iteration/generators and passing of callable objects between the cooperating WSGI-compatible modules.
What's the difference between scgi and wsgi?
[ "", "python", "wsgi", "scgi", "" ]
I'm revisiting som old code of mine and have stumbled upon a method for getting the title of a website based on its url. It's not really what you would call a stable method as it often fails to produce a result and sometimes even produces incorrect results. Also, sometimes it fails to show some of the characters from the title as they are of an alternative encoding. Does anyone have suggestions for improvements over this old version? ``` public static string SuggestTitle(string url, int timeout) { WebResponse response = null; string line = string.Empty; try { WebRequest request = WebRequest.Create(url); request.Timeout = timeout; response = request.GetResponse(); Stream streamReceive = response.GetResponseStream(); Encoding encoding = System.Text.Encoding.GetEncoding("utf-8"); StreamReader streamRead = new System.IO.StreamReader(streamReceive, encoding); while(streamRead.EndOfStream != true) { line = streamRead.ReadLine(); if (line.Contains("<title>")) { line = line.Split(new char[] { '<', '>' })[2]; break; } } } catch (Exception) { } finally { if (response != null) { response.Close(); } } return line; } ``` One final note - I would like the code to run faster as well, as it is blocking until the page as been fetched, so if I can get only the site header and not the entire page, it would be great.
A simpler way to get the content: ``` WebClient x = new WebClient(); string source = x.DownloadString("http://www.singingeels.com/"); ``` A simpler, more reliable way to get the title: ``` string title = Regex.Match(source, @"\<title\b[^>]*\>\s*(?<Title>[\s\S]*?)\</title\>", RegexOptions.IgnoreCase).Groups["Title"].Value; ```
Perhaps with this suggestion a new world opens up for you I also had this question and came to this Download "Html Agility Pack" from <http://html-agility-pack.net/?z=codeplex> Or go to nuget: <https://www.nuget.org/packages/HtmlAgilityPack/> And add in this reference. Add folow using in the code file: ``` using HtmlAgilityPack; ``` Write folowing code in your methode: ``` var webGet = new HtmlWeb(); var document = webGet.Load(url); var title = document.DocumentNode.SelectSingleNode("html/head/title").InnerText; ``` Sources: <https://codeshare.co.uk/blog/how-to-scrape-meta-data-from-a-url-using-htmlagilitypack-in-c/> [HtmlAgilityPack obtain Title and meta](https://stackoverflow.com/questions/17606247/htmlagilitypack-obtain-title-and-meta)
How to get website title from c#
[ "", "c#", "webrequest", "" ]
I'm preferably looking for a SQL query to accomplish this, but other options might be useful too.
``` SELECT LAST_DDL_TIME, TIMESTAMP FROM USER_OBJECTS WHERE OBJECT_TYPE = 'PROCEDURE' AND OBJECT_NAME = 'MY_PROC'; ``` **`LAST_DDL_TIME`** is the last time it was compiled. **`TIMESTAMP`** is the last time it was changed. Procedures may need to be recompiled even if they have not changed when a dependency changes.
``` SELECT name, create_date, modify_date FROM sys.procedures order by modify_date desc ```
How do I find out when a stored procedure was last modified or compiled in Oracle?
[ "", "sql", "oracle", "stored-procedures", "plsql", "oracle10g", "" ]
I have an XML document looking similar to this: ``` <items> <item cat="1" owner="14">bla</item> <item cat="1" owner="9">bla</item> <item cat="1" owner="14">bla</item> <item cat="2" owner="12">bla</item> <item cat="2" owner="12">bla</item> </items> ``` Now I'd like to get all unique owners (I actually only need the attribute value of the owner) belonging to a specified category using a linq query. In my example, the query for cat 1 would return a list containing 9 and 14. How can I do that? Linq syntax would be preferred over Lambdas. Thanks in advance ;)
Presuming the fragment is in itemsElement: ``` var distinctOwners = (from item in itemsElement.Element("item") where itemElements.Attribute("cat") == 1 select item.Attribute("owner")).Distinct(); ``` Apologies for formatting and indentation!
Try this function:- ``` static IEnumerable<int> GetOwners(XDocument doc, string cat) { return from item in doc.Descendants("item") where item.Attribute("cat").Value == cat select (int)item.Attribute("owner")).Distinct(); } ```
Select unique XElements (by attribute) with a filter using LinqToXml
[ "", "c#", "xml", "linq-to-xml", "unique", "" ]
I would like to get all descendant text nodes of an element, as a jQuery collection. What is the best way to do that?
jQuery doesn't have a convenient function for this. You need to combine `contents()`, which will give just child nodes but includes text nodes, with `find()`, which gives all descendant elements but no text nodes. Here's what I've come up with: ``` var getTextNodesIn = function(el) { return $(el).find(":not(iframe)").addBack().contents().filter(function() { return this.nodeType == 3; }); }; getTextNodesIn(el); ``` **Note: If you're using jQuery 1.7 or earlier, the code above will not work. To fix this, replace [`addBack()`](http://api.jquery.com/addBack/) with [`andSelf()`](http://api.jquery.com/andSelf/). `andSelf()` is deprecated in favour of `addBack()` from 1.8 onwards.** This is somewhat inefficient compared to pure DOM methods and has to include an [ugly workaround for jQuery's overloading of its `contents()` function](http://bugs.jquery.com/ticket/11275) (thanks to @rabidsnail in the comments for pointing that out), so here is non-jQuery solution using a simple recursive function. The `includeWhitespaceNodes` parameter controls whether or not whitespace text nodes are included in the output (in jQuery they are automatically filtered out). Update: Fixed bug when includeWhitespaceNodes is falsy. ``` function getTextNodesIn(node, includeWhitespaceNodes) { var textNodes = [], nonWhitespaceMatcher = /\S/; function getTextNodes(node) { if (node.nodeType == 3) { if (includeWhitespaceNodes || nonWhitespaceMatcher.test(node.nodeValue)) { textNodes.push(node); } } else { for (var i = 0, len = node.childNodes.length; i < len; ++i) { getTextNodes(node.childNodes[i]); } } } getTextNodes(node); return textNodes; } getTextNodesIn(el); ```
Jauco posted a good solution in a comment, so I'm copying it here: ``` $(elem) .contents() .filter(function() { return this.nodeType === 3; //Node.TEXT_NODE }); ```
How do I select text nodes with jQuery?
[ "", "javascript", "jquery", "dom", "" ]
can i somehow compare two numbers in regex? i want regex that is correct for 10-12, but incorrect for 12-10. I mean that 10 must be smaller than 12. I want to do it in Javascript.
If the input is always of the form X-Y, then why not use the split() function with '-' as the delimiter and then compare the two parts with > You can't compare numerical values using RegExps.
I wouldn't use regex for this. I'd split the string on the operator, then compare the two resulting numbers based on what operator I found (I'm assuming `10+12` and `12+10` would both be legal).
regex compare two numbers
[ "", "javascript", "regex", "numbers", "compare", "" ]
I'm new to using LINQ to Entities (or Entity Framework whatever they're calling it) and I'm writing a lot of code like this: ``` var item = (from InventoryItem item in db.Inventory where item.ID == id select item).First<InventoryItem>(); ``` and then calling methods on that object like this: ``` var type = item.ItemTypeReference; ``` or ``` var orders = item.OrderLineItems.Load(); ``` to retrieve child or related objects. I haven't profiled the DB or dug too deeply but my guess is that when I call a .Load() or a \*Reference property I'm actually making another call to the DB. If this is the case, is there any way to get those objects in my initial LINQ expression?
You want to use the .Include(string) method references in this ["Shaping query results"](http://msdn.microsoft.com/en-us/library/bb896272.aspx) article. ``` var item = from InventoryItem item in db.Inventory.Include("ItemTypeReference").Include("OrderLineItems") where item.ID == id select item; ``` There is probably a "sql" style syntax for the Includes as well. Also see this [article](http://blogs.msdn.com/adonet/archive/2008/10/07/migrating-from-linq-to-sql-to-entity-framework-eager-loading.aspx) about moving from LINQ-to-SQL to LINQ-to-Entities. For others looking for a solution to this problem for **Linq to SQL** you want to do the following (Substitute DataContext and other types for whatever you have): ``` using (DataContext db = new DataContext()) { DataLoadOptions options = new DataLoadOptions(); options.LoadWith<InventoryItem>(ii => ii.ItemTypeReference); options.LoadWith<InventoryItem>(ii => ii.OrderLineItems); db.LoadOptions = options; var item = from InventoryItem item in db.Inventory where item.ID == id select item; } ``` This will load the properties specified in LoadWith whenever the parent item (InventoryItem) is loaded, for that particular context. In response to some further questions from James and Jesper, check out this [question](https://stackoverflow.com/questions/648782/how-do-i-create-a-where-condition-on-a-sub-table-in-linq)
In addition to Robert's answer, you might like to check out this question for options for an extension method that that allows you to .Include() using an expression instead of a string, so you get compile time checking: [Entity Framework .Include() with compile time checking?](https://stackoverflow.com/questions/2921119/entity-framework-include-with-compile-time-checking)
How do you construct a LINQ to Entities query to load child objects directly, instead of calling a Reference property or Load()
[ "", "c#", "linq", "linq-to-entities", "" ]
I have a C++ DLL including bitmap resources created by Visual Studio. Though I can load the DLL in VB6 using LoadLibrary, I cannot load the image resources either by using LoadImage or by using LoadBitmap. When I try to get the error using GetLastError(), it doesnot return any errors. I have tried using LoadImage and LoadBitmap in another C++ program with the same DLL and they work without any problems. Is there any other way of accessing the resource bitmaps in C++ DLLs using VB6?
Since you are using the numeric ID of the bitmap as a string, you have to add a "#" in front of it: ``` DLLHandle = LoadLibrary("Mydll.dll") myimage = LoadBitmap(DLLHandle, "#101") ' note the "#" ``` In C++ you could also use the MAKEINTRESOURCE macro, which is simply a cast to LPCTSTR: ``` imagehandle = LoadBitmap(DLLHandle, MAKEINTRESOURCE(101)); ```
You've got the right idea. You probably have the call wrong. Perhaps you could show a bit of code as I can't guess as to what you're passing.
accessing bitmap resources in a C++ DLL from VB6
[ "", "c++", "dll", "vb6", "bitmap", "resources", "" ]
A weird bug was occurring in production which I was asked to look into. The issue was tracked down to a couple of variables being declared within a For loop and not being initialized on each iteration. An assumption had been made that due to the scope of their declaration they would be "reset" on each iteration. Could someone explain why they would not be)? (My first question, really looking forward to the responses.) The example below is obviously not the code in question but reflects the scenario: Please excuse the code example, it looks fine in the editor preview?? ``` for (int i =0; i< 10; i++) { decimal? testDecimal; string testString; switch( i % 2 ) { case 0: testDecimal = i / ( decimal ).32; testString = i.ToString(); break; default: testDecimal = null; testString = null; break; } Console.WriteLine( "Loop {0}: testDecimal={1} - testString={2}", i, testDecimal , testString ); } ``` --- ### EDIT: Sorry, had to rush out for child care issue. The issue was that the prod code had was that the switch statement was huge and in some "case"'s a check on a class' property was being made, like if (myObject.Prop != null) then testString = myObject.Stringval... At the end of the switch, (outside) a check on testString == null was being made but it was holding the value from the last iteration,hence not being null as the coder assumed with the variable being declared within the loop. Sorry if my question and example was a bit off, I got the phone call about the day care as I was banging it together. I should have mentioned I compared IL from both variables in and out the loop. So, is the common opinion that "obviously the variables would not be reinitialized on each loop"? A little more info, the variables WHERE being initialized on each iteration until someone got over enthusiastic with ReSharper pointing out "the value is never used" and removed them. --- ### EDIT: Folks, I thank you all. As my first post I see how much clearer I should be in the future. 
The cause of our unexpected variable assignment can me placed on an inexperienced developer doing everything ReSharper told him and not running any unit tests after he ran a "Code Cleanup" on an entire solution. Looking at the history of this module in VSS I see variables Where declared outside of the loop and where initialized on each iteration. The person in question wanted his ReSharper to show "all green" so "moved his variables closer to assignment" then "Removed redundant assignment"! I don't think he will be doing it again...now to spend the weekend running all the unit tests he missed!
**Most** of the time, it does not matter whether you declare a variable inside or outside the loop; the rules of definite assignment ensure that it doesn't matter. In the debugger you might occasionally see old values (i.e. if you look at a variable in a breakpoint before it is assigned), but static-analysis proves that this won't impact executing code. The variables are never reset per loop, as there is demonstrably no need. At the IL level, \*\*usually\* the variable is declared just once for the method - the placement inside the loop is just a convenience for us programmers. **HOWEVER** there is an important exception; any time a variable is captured, the scoping rules get more complex. For example (2 secs): ``` int value; for (int i = 0; i < 5; i++) { value = i; ThreadPool.QueueUserWorkItem(delegate { Console.WriteLine(value); }); } Console.ReadLine(); ``` Is **very** different to: ``` for (int i = 0; i < 5; i++) { int value = i; ThreadPool.QueueUserWorkItem(delegate { Console.WriteLine(value); }); } Console.ReadLine(); ``` As the "value" in the second example is **truly** per instance, since it is captured. This means that the first example might show (for example) "4 4 4 4 4", where-as the second example will show 0-5 (in any order) - i.e. "1 2 5 3 4". So: were captures involved in the original code? Anything with a lambda, an anonymous method, or a LINQ query would qualify.
**Summary** Comparing the generated IL for declaring variables inside the loop to the generated IL for declaring variables outside the loop proves that there is no performance difference between the two styles of variable declaration. (The generated IL is virtually identical.) --- Here is the original source, supposedly using "more resources" because the variables are declared inside the loop: ``` using System; class A { public static void Main() { for (int i =0; i< 10; i++) { decimal? testDecimal; string testString; switch( i % 2 ) { case 0: testDecimal = i / ( decimal ).32; testString = i.ToString(); break; default: testDecimal = null; testString = null; break; } Console.WriteLine( "Loop {0}: testDecimal={1} - testString={2}", i, testDecimal , testString ); } } } ``` Here is the IL from the inefficient declaration source: ``` .method public hidebysig static void Main() cil managed { .entrypoint .maxstack 8 .locals init ( [0] int32 num, [1] valuetype [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal> nullable, [2] string str, [3] int32 num2, [4] bool flag) L_0000: nop L_0001: ldc.i4.0 L_0002: stloc.0 L_0003: br.s L_0061 L_0005: nop L_0006: ldloc.0 L_0007: ldc.i4.2 L_0008: rem L_0009: stloc.3 L_000a: ldloc.3 L_000b: ldc.i4.0 L_000c: beq.s L_0010 L_000e: br.s L_0038 L_0010: ldloca.s nullable L_0012: ldloc.0 L_0013: call valuetype [mscorlib]System.Decimal [mscorlib]System.Decimal::op_Implicit(int32) L_0018: ldc.i4.s 0x20 L_001a: ldc.i4.0 L_001b: ldc.i4.0 L_001c: ldc.i4.0 L_001d: ldc.i4.2 L_001e: newobj instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, uint8) L_0023: call valuetype [mscorlib]System.Decimal [mscorlib]System.Decimal::op_Division(valuetype [mscorlib]System.Decimal, valuetype [mscorlib]System.Decimal) L_0028: call instance void [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal>::.ctor(!0) L_002d: nop L_002e: ldloca.s num L_0030: call instance string [mscorlib]System.Int32::ToString() L_0035: stloc.2 L_0036: 
br.s L_0044 L_0038: ldloca.s nullable L_003a: initobj [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal> L_0040: ldnull L_0041: stloc.2 L_0042: br.s L_0044 L_0044: ldstr "Loop {0}: testDecimal={1} - testString={2}" L_0049: ldloc.0 L_004a: box int32 L_004f: ldloc.1 L_0050: box [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal> L_0055: ldloc.2 L_0056: call void [mscorlib]System.Console::WriteLine(string, object, object, object) L_005b: nop L_005c: nop L_005d: ldloc.0 L_005e: ldc.i4.1 L_005f: add L_0060: stloc.0 L_0061: ldloc.0 L_0062: ldc.i4.s 10 L_0064: clt L_0066: stloc.s flag L_0068: ldloc.s flag L_006a: brtrue.s L_0005 L_006c: ret } ``` Here is the source declaring the variables outside the loop: ``` using System; class A { public static void Main() { decimal? testDecimal; string testString; for (int i =0; i< 10; i++) { switch( i % 2 ) { case 0: testDecimal = i / ( decimal ).32; testString = i.ToString(); break; default: testDecimal = null; testString = null; break; } Console.WriteLine( "Loop {0}: testDecimal={1} - testString={2}", i, testDecimal , testString ); } } } ``` Here is the IL declaring the variables outside the loop: ``` .method public hidebysig static void Main() cil managed { .entrypoint .maxstack 8 .locals init ( [0] valuetype [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal> nullable, [1] string str, [2] int32 num, [3] int32 num2, [4] bool flag) L_0000: nop L_0001: ldc.i4.0 L_0002: stloc.2 L_0003: br.s L_0061 L_0005: nop L_0006: ldloc.2 L_0007: ldc.i4.2 L_0008: rem L_0009: stloc.3 L_000a: ldloc.3 L_000b: ldc.i4.0 L_000c: beq.s L_0010 L_000e: br.s L_0038 L_0010: ldloca.s nullable L_0012: ldloc.2 L_0013: call valuetype [mscorlib]System.Decimal [mscorlib]System.Decimal::op_Implicit(int32) L_0018: ldc.i4.s 0x20 L_001a: ldc.i4.0 L_001b: ldc.i4.0 L_001c: ldc.i4.0 L_001d: ldc.i4.2 L_001e: newobj instance void [mscorlib]System.Decimal::.ctor(int32, int32, int32, bool, uint8) L_0023: call valuetype 
[mscorlib]System.Decimal [mscorlib]System.Decimal::op_Division(valuetype [mscorlib]System.Decimal, valuetype [mscorlib]System.Decimal) L_0028: call instance void [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal>::.ctor(!0) L_002d: nop L_002e: ldloca.s num L_0030: call instance string [mscorlib]System.Int32::ToString() L_0035: stloc.1 L_0036: br.s L_0044 L_0038: ldloca.s nullable L_003a: initobj [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal> L_0040: ldnull L_0041: stloc.1 L_0042: br.s L_0044 L_0044: ldstr "Loop {0}: testDecimal={1} - testString={2}" L_0049: ldloc.2 L_004a: box int32 L_004f: ldloc.0 L_0050: box [mscorlib]System.Nullable`1<valuetype [mscorlib]System.Decimal> L_0055: ldloc.1 L_0056: call void [mscorlib]System.Console::WriteLine(string, object, object, object) L_005b: nop L_005c: nop L_005d: ldloc.2 L_005e: ldc.i4.1 L_005f: add L_0060: stloc.2 L_0061: ldloc.2 L_0062: ldc.i4.s 10 L_0064: clt L_0066: stloc.s flag L_0068: ldloc.s flag L_006a: brtrue.s L_0005 L_006c: ret } ``` I'll share the secret, with the exception of the order in which `.locals init ( ... )` are specified, the IL is exactly the same. DECLARING variables inside a loop results in NO ADDITIONAL IL.
Declaring variables within FOR loops
[ "", "c#", "scope", "" ]
I'm working on a script in PHP that needs to get some info from a SQL Server database. However, I am having trouble connecting to the database. When I use the mssql\_connect() function, it gives me an error and says it cannot connect to the database, but it gives no reason why. Is there any way to find out why it won't connect? There doesn't seem to be an mssql\_error() function or anything like there is with the mysql library.
Try using PDO (<http://php.net/pdo>); the mssql extension is a mess. Instead of '' it returns ' ' for empty strings. It seems to be a bug in ntwdblib that has never been fixed. When I experienced the problem I nearly went crazy... To get the client connected: have you activated TCP/IP on the SQL Server? On MSSQL 2005 Express it's *not* activated by default!
Have you tried looking in the Windows Event Log? I am not sure if there will be enough info there, but it may help.
PHP and SQL Server debugging
[ "", "php", "sql-server", "" ]
I have a Postgres database in production (which contains a lot of data). Now I need to modify the model of the tg-app to add a couple of new tables to the database. How do I do this? I am using SQLAlchemy.
This always works and requires little thinking -- only patience. 1. Make a backup. 2. Actually make a backup. Everyone skips step 1 thinking that they have a backup, but they can never find it or work with it. Don't trust any backup that you can't recover from. 3. Create a new database schema. 4. Define your new structure from the ground up in the new schema. Ideally, you'll run a DDL script that builds the new schema. Don't have a script to build the schema? Create one and put it under version control. With SA, you can define your tables and it can build your schema for you. This is ideal, since you have your schema under version control in Python. 5. Move data. a. For tables which did not change structure, move data from old schema to new schema using simple INSERT/SELECT statements. b. For tables which did change structure, develop INSERT/SELECT scripts to move the data from old to new. Often, this can be a single SQL statement per new table. In some cases, it has to be a Python loop with two open connections. c. For new tables, load the data. 6. Stop using the old schema. Start using the new schema. Find every program that used the old schema and fix the configuration. Don't have a list of applications? Make one. Seriously -- it's important. Applications have hard-coded DB configurations? Fix that, too, while you're at it. Either create a common config file, or use some common environment variable or something to (a) assure consistency and (b) centralize the notion of "production". You can do this kind of procedure any time you do major surgery. It never touches the old database except to extract the data.
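Step 5 can be sketched with the standard library. This is only an illustration: sqlite3 stands in for the production Postgres setup (where you would use two schemas and a driver such as psycopg2), and the table and column names are invented.

```python
import sqlite3

# sqlite3 stands in for Postgres; the `person` table is made up for
# illustration.
old = sqlite3.connect(":memory:")   # the existing production schema
new = sqlite3.connect(":memory:")   # the freshly built target schema

old.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
old.executemany("INSERT INTO person VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Step 4: build the new structure from the ground up (here the table
# simply gained a column with a default).
new.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT,"
    " active INTEGER DEFAULT 1)"
)

# Step 5b: a Python loop with two open connections moves the data.
rows = old.execute("SELECT id, name FROM person").fetchall()
new.executemany("INSERT INTO person (id, name) VALUES (?, ?)", rows)
new.commit()

migrated = new.execute(
    "SELECT id, name, active FROM person ORDER BY id"
).fetchall()
```

The old database is only read, never written, which is the whole point of the procedure.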
The simplest approach is to simply write some sql update scripts and use those to update the database. Obviously that's a fairly low-level (as it were) approach. If you think you will be doing this a lot and want to stick in Python you might want to look at [sqlalchemy-migrate](http://code.google.com/p/sqlalchemy-migrate/). There was an article about it in the recent Python Magazine.
How to update turbogears application production database
[ "", "python", "database", "postgresql", "data-migration", "turbogears", "" ]
Has anybody seen such a thing? Small self-sufficient modules are preferred.
The [fractions module](http://docs.python.org/library/fractions.html) from 2.6 can be ripped out if necessary. Grab fractions.py, numbers.py, and abc.py; all pure python modules. You can get the single files from here (2.6 branch, 2.7 does not work): <http://hg.python.org/cpython/branches>
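Once you have the module (bundled from 2.6 on, or ripped out as above), usage looks like this:

```python
from fractions import Fraction

a = Fraction(1, 2)
b = a + Fraction(1, 3)                  # exact rational arithmetic
c = Fraction(2) ** 10 / Fraction(10) ** 3   # stays a reduced fraction
```

There is no float rounding anywhere: `b` is exactly 5/6 and `c` reduces to 128/125.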
[SymPy](http://code.google.com/p/sympy/) is a symbolic maths library written entirely in Python and has full support for rational numbers. From the [tutorial](http://docs.sympy.org/tutorial.html): ``` >>> from sympy import * >>> a = Rational(1,2) >>> a 1/2 >>> a*2 1 >>> Rational(2)**50/Rational(10)**50 1/88817841970012523233890533447265625 ``` There is also GMP for Python ([GMPY](http://gmpy.sourceforge.net/)) which, while not pure Python, is probably more efficient.
Pure Python rational numbers module for 2.5
[ "", "python", "rational-numbers", "" ]
I am interested in whether there is a port of the server implementation.
Windows Server ports: * [memcached for Win32](http://www.splinedancer.com/memcached-win32/) * [Jellycan Code - memcached](http://code.jellycan.com/memcached/) Memcached .NET Client ports: * [memcacheddotnet](http://sourceforge.net/projects/memcacheddotnet/) * [enyim.com Memcached Client](https://github.com/enyim/EnyimMemcached) * [BeIT](http://code.google.com/p/beitmemcached/)
There is no official support for windows builds. Check this link: <https://github.com/memcached/memcached/wiki/Install> For the clients, check this link: <https://github.com/memcached/memcached/wiki/Clients> Hope that helps.
Is there a port of memcache to .Net?
[ "", "c#", "asp.net", "windows", "memcached", "" ]
Using `MySQL`, I can do something like: ``` SELECT hobbies FROM peoples_hobbies WHERE person_id = 5; ``` **My Output:** ``` shopping fishing coding ``` but instead I just want 1 row, 1 col: **Expected Output:** ``` shopping, fishing, coding ``` The reason is that I'm selecting multiple values from multiple tables, and after all the joins I've got a lot more rows than I'd like. I've looked for a function on [MySQL Doc](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_concat-ws) and it doesn't look like the `CONCAT` or `CONCAT_WS` functions accept result sets. So does anyone here know how to do this?
You can use [`GROUP_CONCAT`](https://dev.mysql.com/doc/refman/8.0/en/aggregate-functions.html#function_group-concat): ``` SELECT person_id, GROUP_CONCAT(hobbies SEPARATOR ', ') FROM peoples_hobbies GROUP BY person_id; ``` As Ludwig stated in [his comment,](https://stackoverflow.com/questions/276927/can-i-concatenate-multiple-mysql-rows-into-one-field#comment14513101_276949) you can add the `DISTINCT` operator to avoid duplicates: ``` SELECT person_id, GROUP_CONCAT(DISTINCT hobbies SEPARATOR ', ') FROM peoples_hobbies GROUP BY person_id; ``` As Jan stated in [their comment,](https://stackoverflow.com/questions/276927/can-i-concatenate-multiple-mysql-rows-into-one-field#comment72475644_276949) you can also sort the values before imploding it using `ORDER BY`: ``` SELECT person_id, GROUP_CONCAT(hobbies ORDER BY hobbies ASC SEPARATOR ', ') FROM peoples_hobbies GROUP BY person_id; ``` As Dag stated in [his comment,](https://stackoverflow.com/questions/276927/can-i-concatenate-multiple-mysql-rows-into-one-field/276949#comment12638055_276949) there is a 1024 byte limit on the result. To solve this, run this query before your query: ``` SET group_concat_max_len = 2048; ``` Of course, you can change `2048` according to your needs. To calculate and assign the value: ``` SET group_concat_max_len = CAST( (SELECT SUM(LENGTH(hobbies)) + COUNT(*) * LENGTH(', ') FROM peoples_hobbies GROUP BY person_id) AS UNSIGNED); ```
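SQLite happens to support `group_concat` too (with slightly different `DISTINCT`/separator syntax than MySQL), so the behaviour is easy to try from Python's standard library. The table below mirrors the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE peoples_hobbies (person_id INTEGER, hobbies TEXT)")
con.executemany(
    "INSERT INTO peoples_hobbies VALUES (?, ?)",
    [(5, "shopping"), (5, "fishing"), (5, "coding"), (6, "chess")],
)

# One row, one column: the three hobby rows collapse into a single string.
# Note SQLite leaves the concatenation order unspecified.
row = con.execute(
    "SELECT group_concat(hobbies, ', ') FROM peoples_hobbies"
    " WHERE person_id = 5"
).fetchone()
```

In MySQL the same query would read `GROUP_CONCAT(hobbies SEPARATOR ', ')`, and you could pin the order with `ORDER BY` inside the call.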
Have a look at `GROUP_CONCAT` if your MySQL version (4.1) supports it. See [the documentation](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat) for more details. It would look something like: ``` SELECT GROUP_CONCAT(hobbies SEPARATOR ', ') FROM peoples_hobbies WHERE person_id = 5 GROUP BY 'all'; ```
Can I concatenate multiple MySQL rows into one field?
[ "", "mysql", "sql", "concatenation", "group-concat", "" ]
Is there any way I can specify a standard or custom numeric format string to always output the sign, be it +ve or -ve (although what it should do for zero, I'm not sure!)
Yes, you can. There is conditional formatting. See [Conditional formatting in MSDN](http://msdn.microsoft.com/en-us/library/0c899ak8.aspx), e.g.: ``` string MyString = number.ToString("+0;-#"); ``` where each section separated by a semicolon represents positive and negative numbers, or: ``` string MyString = number.ToString("+#;-#;0"); ``` if you don't want the zero to have a plus sign.
Beware, when using conditional formatting the negative value doesn't automatically get a sign. You need to do ``` string MyString = number.ToString("+#;-#;0"); ```
Custom numeric format string to always display the sign
[ "", "c#", ".net", "formatting", "string-formatting", "" ]
I have a project built and packaged with a specific version of the jsp-api and servlet-api jar files. Now I want these jars to be loaded when deploying the web project on any application server, for example Tomcat, WAS, WebLogic etc. The behaviour I have seen on Tomcat is that it gives messages that the packaged versions of these APIs are not loaded, along with an offending class. Is there any way I could override these server settings or behaviour? My concern is that relying on the default behaviour of a server may allow different behaviour on different servers, or even on different versions of the same app server.
1. If you have control over the server where you want to install this webapp, you can replace the core jars with yours. 2. Additionally, you can prepend the jars in the startup of the app server. **Update:** As for the second part, you'll need to modify the startup file of the application server itself. I don't have an installation at hand, but let's suppose that in the dir $YOUR\_APPSERV/bin there are a bunch of scripts (either .cmd or .sh files). Some of them start the app server, some others help to configure it. You need to modify one of those so that the command line looks like this (assume a Windows installation): ``` java -Xbootclasspath/p:c:\customjars\myJar.jar;c:\customjars\myOtherJar.jar ..................... // the rest of the normal command line. ``` -Xbootclasspath/p prepends the jars to the boot classpath; -Xbootclasspath/a appends the jars to the boot classpath. This option lets you override any class in the JVM with those specified in the jars, so you can even substitute java.lang.String if you want to. That's one approach. Unfortunately, -Xbootclasspath is an option of the Sun JVM (that is, JRockit does not have it, nor does IBM's VM, whatever its name is). There was another option where you declare a folder where all the extensions are. Plus, there is an ext directory in the JRE. Take a deep dive into your application server's bin directory and find out what each script is used for; I'm pretty sure you'll make it through. Here's a more formal explanation of this topic: <http://java.sun.com/j2se/1.5.0/docs/tooldocs/findingclasses.html> I hope it helps. BTW, I used to do this years ago, to substitute the CORBA package with a very old version. So this works for sure.
*I've split the answer in two for clarity* Tushu, I have two pieces of news for you. The good one is that I've managed to replace the servlet api from 2.5 to 2.3 in my Tomcat using the steps I described in my previous post (screenshots below). The bad news (and I should have guessed this before): Tomcat won't start. That's obvious; the servlet-api.jar is the core of Tomcat, and the version depends on some features present there. If it is changed, the engine won't work. The solution I've shown you works to change the behavior of one or two classes, but not to substitute the whole system. So, the only options you have are: 1. Run on a servlet container that meets your servlet specification. 2. Upgrade your app: test it as it is on the new spec. Chances are (if you didn't link to non-public classes) your app will still work. 3. (I did this in the past) Create a new jar with exactly the classes needed (let's say your app only needs one class to run well) and then prepend that class to the container.
--- Here's the test jsp ``` Servlet version: <%=application.getMajorVersion()%>.<%=application.getMinorVersion()%> ``` Output with unmodified version: [unmodified version http://img89.imageshack.us/img89/9822/87694136ld9.png](http://img89.imageshack.us/img89/9822/87694136ld9.png) Modified version: [modified version http://img241.imageshack.us/img241/7842/86370197ev3.png](http://img241.imageshack.us/img241/7842/86370197ev3.png) Screenshot of the modified catalina startup [diff ouput http://img246.imageshack.us/img246/3333/30172332tp7.png](http://img246.imageshack.us/img246/3333/30172332tp7.png) Tomcat stacktrace ``` SEVERE: Servlet.service() for servlet jsp threw exception javax.servlet.ServletException: javax.servlet.jsp.JspFactory.getJspApplicationContext(Ljavax/servlet/ServletContext;)Ljavax/servlet/jsp/JspApplicationContext; at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:275) at javax.servlet.http.HttpServlet.service(HttpServlet.java:853) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619) ```
Overriding application server behaviour for loading jsp-api and servlet-api jars in a packaged web application
[ "", "java", "web-applications", "jakarta-ee", "classloader", "" ]
We have an advanced webpage (ASP.NET, C#), and an application which needs to be installed on the client computer in order to utilize the webpage to its fullest. The application is a tray app, and has primarily two tasks: detect when certain events happen on the webserver (for instance being invited to a meeting, or being notified of an upcoming meeting), and use a custom protocol (trayapp://) to perform some Ajax calls back to the server. One problem we have is how to determine whether the application is installed on the local machine. Right now the user has to tick a checkbox to inform the website that the application is installed, and that it's safe to call the trayapp:// URLs. Is there any way, for instance through JavaScript or similar, to detect if our application is installed on the local machine? The check needs to work for the IE, FF and Opera browsers.
If you want to detect with JavaScript inside the browser, you can probably use the collection "navigator.plugins". It works with Firefox, Opera and Chrome but unfortunately not with IE. Update: In FF, Opera and Chrome you can test it easily like this: ``` if (navigator.plugins["Adobe Acrobat"]) { // do some stuff if it is installed } else { // do some other stuff if its not installed } ``` Update #2: If it is an ActiveX object in IE you can test if it exists by using something like this: ``` function getActiveXObject(name){ try{ return new ActiveXObject(name); } catch(err){ return undefined; } }; ``` Another approach for IE is something similar to what JohnFx suggested (I found it [here](http://bytes.com/forum/thread145239.html) and have not tested it): ``` HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Post Platform ```
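The try/catch probe in getActiveXObject generalizes to any "construct or return undefined" check. A sketch (runnable in Node, where plain functions stand in for the browser's `new ActiveXObject(name)`):

```javascript
// Generic version of the try/catch probe: attempt to build the object
// and treat any failure as "not installed".
function probe(factory) {
  try {
    return factory();
  } catch (err) {
    return undefined;
  }
}

// In IE the factory would be: function () { return new ActiveXObject(name); }
const present = probe(function () { return { name: "fake plugin" }; });
const missing = probe(function () { throw new Error("no such control"); });
```

`present` holds the constructed object while `missing` is `undefined`, so callers can branch with a simple truthiness test.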
When installing your client-side app you could modify the browser configuration to include another request header in HTTP requests and then have the server code look for that header, for example as a supported mime type using the following registry key (for Internet explorer) ``` HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\ Internet Settings\Accepted Documents ``` I am not sure if Opera and FF use this same key, but they likely have similar configuration options, but this should at least get you on the right track.
Detect from browser if specific application is installed
[ "", "asp.net", "javascript", "ajax", "web-applications", "" ]
We're dealing with a very slow update statement in an Oracle project. Here's a little script to replicate the issue: ``` drop table j_test; CREATE TABLE J_TEST ( ID NUMBER(10) PRIMARY KEY, C1 VARCHAR2(50 BYTE), C2 VARCHAR2(250 BYTE), C3 NUMBER(5), C4 NUMBER(10) ); -- just insert a bunch of rows insert into j_test (id) select rownum from <dummy_table> where rownum < 100000; -- this is the statement that runs forever (longer than my patience allows) update j_test set C3 = 1, C1 = 'NEU'; ``` There are some environments where the update statement takes just about 20 seconds, and some where it runs for a few minutes. With more rows, the problem gets even worse. We have no idea what is causing this behavior, and would like to have an understanding of what is going on before proposing a solution. Any ideas and suggestions? Thanks Thorsten
One possible cause of poor performance is row chaining. All your rows initially have columns C3 and C4 null, and then you update them all to have a value. The new data won't fit into the existing blocks, so Oracle has to chain the rows to new blocks. If you know in advance that you will be doing this you can pre-allocate sufficient free space like this: ``` CREATE TABLE J_TEST ( ID NUMBER(10) PRIMARY KEY, C1 VARCHAR2(50 BYTE), C2 VARCHAR2(250 BYTE), C3 NUMBER(5), C4 NUMBER(10) ) PCTFREE 40; ``` ... where PCTFREE specifies a percentage of space to keep free for updates. The default is 10, which isn't enough for this example, where the rows are more or less doubling in size (from an average length of 8 to 16 bytes according to my db). This test shows the difference it makes: ``` SQL> CREATE TABLE J_TEST 2 ( 3 ID NUMBER(10) PRIMARY KEY, 4 C1 VARCHAR2(50 BYTE), 5 C2 VARCHAR2(250 BYTE), 6 C3 NUMBER(5), 7 C4 NUMBER(10) 8 ); Table created. SQL> insert into j_test (id) 2 select rownum 3 from transactions 4 where rownum < 100000; 99999 rows created. SQL> update j_test 2 set C3 = 1, 3 C2 = 'NEU' 4 / 99999 rows updated. Elapsed: 00:01:41.60 SQL> analyze table j_test compute statistics; Table analyzed. SQL> select blocks, chain_cnt from user_tables where table_name='J_TEST'; BLOCKS CHAIN_CNT ---------- ---------- 694 82034 SQL> drop table j_test; Table dropped. SQL> CREATE TABLE J_TEST 2 ( 3 ID NUMBER(10) PRIMARY KEY, 4 C1 VARCHAR2(50 BYTE), 5 C2 VARCHAR2(250 BYTE), 6 C3 NUMBER(5), 7 C4 NUMBER(10) 8 ) PCTFREE 40; Table created. SQL> insert into j_test (id) 2 select rownum 3 from transactions 4 where rownum < 100000; 99999 rows created. SQL> update j_test 2 set C3 = 1, 3 C2 = 'NEU' 4 / 99999 rows updated. Elapsed: 00:00:27.74 SQL> analyze table j_test compute statistics; Table analyzed. 
SQL> select blocks, chain_cnt from user_tables where table_name='J_TEST'; BLOCKS CHAIN_CNT ---------- ---------- 232 0 ``` As you can see, with PCTFREE 40 the update takes 27 seconds instead of 81 seconds, and the resulting table consumes 232 blocks with no chained rows instead of 694 blocks with 82034 chained rows!
Try this: ``` insert into j_test (id, C3, C4) select rownum, 1, 'NEU' from <dummy_table> where rownum < 100000; ```
Slow Update Statement
[ "", "sql", "oracle", "" ]
I have the following Python code: ``` import xml.dom.minidom import xml.parsers.expat try: domTree = xml.dom.minidom.parse(myXMLFileName) except xml.parsers.expat.ExpatError, e: return e.args[0] ``` which I am using to parse an XML file. Although it quite happily spots simple XML errors like mismatched tags, it completely ignores the DTD specified at the top of the XML file: ``` <?xml version="1.0" encoding="UTF-8" standalone="no" ?> <!DOCTYPE ServerConfig SYSTEM "ServerConfig.dtd"> ``` so it doesn't notice when mandatory elements are missing, for example. How can I switch on DTD checking?
See [this question](https://stackoverflow.com/questions/15798/how-do-i-validate-xml-against-a-dtd-file-in-python) - the accepted answer is to use [lxml validation](http://codespeak.net/lxml/validation.html).
Just by way of explanation: Python xml.dom.minidom and xml.sax use the expat parser by default, which is a non-validating parser. It may read the DTD in order to do entity replacement, but it won't validate against the DTD. [gimel](/users/6491/gimel) and [Tim](/users/20670/tim) recommend lxml, which is a nicely pythonic binding for the libxml2 and libxslt libraries. It supports validation against a DTD. I've been using lxml, and I like it a lot.
Python xml.dom.minidom.parse() function ignores DTDs
[ "", "python", "xml", "" ]
I have two tables, Club and Coach. Between them is a 0,1 - 0,1 relationship (a coach can have zero or one club; a club can have zero or one coach). When I want to change the coach of a given club, I have to update the Club table, changing the idCoach of that club. Suppose the new coach (the one being assigned to the given club) is already the coach of some other club. I have to set the idCoach field of that other club to NULL, because its coach will be assigned to another club. Also, if the club whose coach I want to change already has a coach, then I have to set the idClub of that coach (in the Coach table) to NULL. All of the above also applies when I want to change the club of a certain coach (I have to set to NULL the idCoach field of the given coach's previous club, and the idCoach of the new club). The same problems occur when I insert a new record into Club or Coach, or when I delete an existing one (on insert, update and delete I have to watch out for the corresponding references and unlink them). I want to solve this with triggers. I guess I would need 3 triggers on Club (for insert, update and delete) and 3 more on Coach. An update trigger (for example on Club) has to update the Coach table, which will again trigger an update on Club (since its body updates the Club table), and that trigger will update Coach again, and so forth. So I'll have a deadlock. How do I solve this? This is the first time I have ever worked with triggers, so sorry in advance for this big explanation of a simple thing. Cheers!
I suggest this might not be an appropriate use for triggers, which are very difficult to debug, and often surprise people by the way they don't support single-record operations as you would expect. Break it down into simple SQL statements and wrap a transaction around it instead.
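A sketch of the transaction approach, with sqlite3 standing in for SQL Server. The column names follow the question; `assign_coach` is a made-up helper:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE club  (id INTEGER PRIMARY KEY, idCoach INTEGER);
    CREATE TABLE coach (id INTEGER PRIMARY KEY, idClub  INTEGER);
    INSERT INTO club  VALUES (1, 10), (2, 20);
    INSERT INTO coach VALUES (10, 1), (20, 2);
""")

def assign_coach(con, club_id, coach_id):
    """Move a coach to a club, unlinking both previous pairings."""
    with con:  # one transaction: all four statements commit or roll back together
        con.execute("UPDATE coach SET idClub = NULL WHERE idClub = ?", (club_id,))
        con.execute("UPDATE club  SET idCoach = NULL WHERE idCoach = ?", (coach_id,))
        con.execute("UPDATE coach SET idClub = ? WHERE id = ?", (club_id, coach_id))
        con.execute("UPDATE club  SET idCoach = ? WHERE id = ?", (coach_id, club_id))

assign_coach(con, 1, 20)  # coach 20 moves from club 2 to club 1
```

Because the four updates run in one transaction, the tables are never visible in a half-unlinked state, and there is no trigger recursion to worry about.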
Is it OK to have a 3rd table, which can hold relations? I think that will be a simple approach to work with. Just delete the record from the new table, if the coach resigns from a club. Insert a record, for a coach joining a new club. Hope that helps.
MS SQL Trigger update call dead lock?
[ "", "sql", "sql-server", "triggers", "" ]
I have a huge text file (~1GB) and sadly the text editor I use won't read such a large file. However, if I can just split it into two or three parts I'll be fine, so, as an exercise, I wanted to write a program in Python to do it. What I think I want the program to do is to find the size of the file, divide that number into parts, and for each part, read up to that point in chunks, writing to a *filename*.nnn output file, then read up to the next line-break and write that, then close the output file, etc. Obviously the last output file just copies to the end of the input file. Can you help me with the key filesystem-related parts: file size, reading and writing in chunks, and reading to a line-break? I'll be writing this code test-first, so there's no need to give me a complete answer, unless it's a one-liner ;-)
Check out `os.stat()` for file size and `file.readlines([sizehint])`. Those two functions should be all you need for the reading part, and hopefully you know how to do the writing :)
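If you do want to see the two calls working together, here is a sketch along exactly those lines: `os.stat()` for the size, `readlines(sizehint)` to stop at line breaks. The `.nnn` suffix follows the question; everything else is an assumption.

```python
import os
import tempfile

def split_file(path, parts=3):
    """Split `path` into numbered part files, cutting only at line breaks."""
    chunk = os.stat(path).st_size // parts + 1      # target bytes per part
    out_paths = []
    with open(path, "rb") as src:
        n = 0
        while True:
            lines = src.readlines(chunk)  # ~chunk bytes of whole lines
            if not lines:
                break
            n += 1
            out = "%s.%03d" % (path, n)
            with open(out, "wb") as dst:
                dst.writelines(lines)
            out_paths.append(out)
    return out_paths

# Demo on a small throwaway file (a real 1 GB file works the same way).
workdir = tempfile.mkdtemp()
big = os.path.join(workdir, "big.txt")
with open(big, "wb") as f:
    f.writelines(("line %d\n" % i).encode() for i in range(100))

parts = split_file(big, parts=3)
rejoined = b"".join(open(p, "rb").read() for p in parts)
```

`readlines(hint)` stops returning lines once the total read exceeds the hint, so each part ends exactly on a line break and the concatenation of the parts reproduces the original file.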
linux has a split command ``` split -l 100000 file.txt ``` would split into files of equal 100,000 line size
How do I split a huge text file in python
[ "", "python", "text-files", "" ]
I had a telephone interview yesterday. The interviewer asked me if I had faced any challenging debugging issue. I told him I once faced a problem debugging someone else's code, and it took me 3-4 days to solve; I used WinDbg, symbols and a crash dump to solve the problem. Now, is this enough to tell? What is the interviewer expecting? I don't know what else to say; I faced that problem quite some time back and can't explain all the details... This question is very common and I am not sure what the right answer to it is. One more question that is very common: Have you worked on multi-threaded applications? How can you find a deadlock? Answer: To find a deadlock we can look at a snapshot of the process in memory and look at the threads that are waiting. Then the next question: What can you do to avoid deadlock? 1. Use WaitForMultipleObjects 2. Maintain a fixed order of critical sections What do you guys say?
The general rule for interviews is to use the STAR model (my co-op coordinator is going to be proud here...): **S** - Describe the situation you were in **T** - Explain the task, providing enough info so that the interviewer understands the problem. **A** - Describe the action you took to solve the problem. **R** - What were the results of your actions If you provide a concise answer, not too short or too long, 99.9% of interviewers will be satisfied.
One of the main reasons deadlock can occur in a multi-threaded application is circular wait: two threads each hold one resource and wait for the one the other holds. The other conditions for deadlock to occur are no preemption, hold-and-wait and mutual exclusion. The best way to avoid deadlock is to maintain a lock order; in other words, make the threads acquire the locks in one fixed global order. This restriction prevents the threads from entering a deadlock.
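The lock-ordering rule can be demonstrated in a few lines of Python. The ordering key (here `id()`) is arbitrary; any fixed total order works:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def ordered(l1, l2):
    # Impose a global acquisition order so no circular wait can form.
    return (l1, l2) if id(l1) <= id(l2) else (l2, l1)

done = []

def worker(first, second, name):
    # Both threads normalise the order, so even though they are handed
    # the locks in opposite order, they cannot deadlock each other.
    outer, inner = ordered(first, second)
    with outer:
        with inner:
            done.append(name)

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without the `ordered()` normalisation, t1 could take lock_a while t2 takes lock_b, and each would then wait forever for the other's lock.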
Interview question about debugging, multithreading
[ "", "c++", "debugging", "" ]
There are a few things that I almost always do when I put a class together in C++. 1) Virtual Destructor 2) Copy constructor and assignment operator (I either implement them in terms of a private function called Copy(), or declare them private and thus explicitly disallow the compiler to auto generate them). What things do you find are almost always useful?
I find turning on the gcc flags `-Wall`, `-Werror`, and (this is the fun one) `-Weffc++` helps catch a lot of potential problems. From the gcc man page: > ``` > -Weffc++ (C++ only) > Warn about violations of the following style guidelines from Scott > Meyers’ Effective C++ book: > > · Item 11: Define a copy constructor and an assignment operator > for classes with dynamically allocated memory. > > · Item 12: Prefer initialization to assignment in constructors. > > · Item 14: Make destructors virtual in base classes. > > · Item 15: Have "operator=" return a reference to *this. > > · Item 23: Don’t try to return a reference when you must return > an object. > > and about violations of the following style guidelines from Scott > Meyers’ More Effective C++ book: > > · Item 6: Distinguish between prefix and postfix forms of increment > and decrement operators. > > · Item 7: Never overload "&&", "||", or ",". > > If you use this option, you should be aware that the standard > library headers do not obey all of these guidelines; you can use > grep -v to filter out those warnings. > ```
Oddly, most of the suggestions here are things I specifically don't do. * I don't make dtors virtual unless I am designing it specifically to be inherited. It adds a lot of overhead and prevents automatic inlining, which is bad since most dtors are empty anyways (and few classes benefit from inheritance) * I don't make copy ctor/assignment op unless the defaults won't work -- and if it won't, I may want to reconsider the design. Remember, between string & vector, there's hardly ever a reason to call new anymore. And creating your own copy ctor identical to the default one will almost certainly be less efficient. * I don't add string cast. It causes too many problems where the cast is called silently where you didn't intend it to be. Better to add a ToString() method. * I don't add a friend oper<<, because friends are evil and messy. Better to add a Display(ostream) method. Then the oper<< can call that, and doesn't need to be a friend. In fact, you could make the oper<< a template function calling Display() and never have to worry about it again.
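The Display()/ToString() pattern from the last two points can be sketched like this (the class and its members are purely illustrative). A plain forwarding operator<< is shown rather than the fully templated one, since an unconstrained template operator<< also matches unrelated types:

```cpp
#include <ostream>
#include <sstream>
#include <string>

// Illustrative class using the Display(ostream) pattern; no friend
// declaration is needed anywhere.
class Point {
public:
    Point(int x, int y) : x_(x), y_(y) {}

    // Stream the representation; operator<< forwards here.
    void Display(std::ostream& os) const {
        os << "(" << x_ << ", " << y_ << ")";
    }

    // Explicit conversion instead of an implicit string cast.
    std::string ToString() const {
        std::ostringstream oss;
        Display(oss);
        return oss.str();
    }

private:
    int x_;
    int y_;
};

// Non-member, non-friend forwarding operator.
std::ostream& operator<<(std::ostream& os, const Point& p) {
    p.Display(os);
    return os;
}
```

Both the stream operator and ToString() funnel through Display(), so the textual representation is defined in exactly one place.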
In C++, what do you do nearly all the time?
[ "", "c++", "" ]
I'm writing a basic sprite engine for my own amusement and to get better acquainted with Java's 2D API. Currently I am making use of large numbers of separate .png files with transparent backgrounds to represent the various sprites and different frames of animation that I need. Most 'real world' game development projects seem to make use of 'sprite sheets' which contain multiple sprites or frames of animation within a single file. Also, rather than making use of native image transparency support, people often nominate an arbitrary colour that does not appear in the sprite palette to be the transparent colour. How does one manage a file like this programmatically? * how do you know where one sprite starts and the next begins * how do you deal with transparency There may be other factors that I've not thought of here, so I may add to the list above as I think of things or as people make suggestions (please do so in the comments).
I currently use XML files generated by a simple sprite editor that store the sprite as a collection of (optionally animated) poses, which are in turn a collection of frames or cells. Frames store per-frame information like the x and y offset of the frame in the sheet, cell width and height, and any transformation (resize/rotation/hue/etc.). Poses store individual frames and animation information (speed, for example), and a pose name to easily identify them in the program (hero.pose = sprite.pose["standing\_right"]). Sprites serve as a document root to hold several poses, such as a pose for each facing direction. A less flexible alternative I used earlier was to specify fixed sizes for cells and sheets and calculate frame offsets and sizes based on these values (e.g. width is always 32 pixels, so the third sprite is at 32 \* 2). Later I started specifying these dimensions in the file name (e.g. sprite\_name\_32x64.png) for sprites that don't fit the fixed cell dimensions. I like the new approach more, especially with a simplistic editor that fills most values for me and allows me to use sprites as templates for other sprites. I use the alpha and transparency information stored in PNG images directly so I don't need to worry about storing it elsewhere, although other approaches would be to pick a fixed value per sprite and store it somewhere, use the leftmost pixel in the pose if you know it's always empty, use a specific palette entry if you're using those, sprite masks, or what have you.
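The fixed-cell-size alternative described above reduces to simple offset arithmetic; a minimal sketch (class and method names are illustrative, not from any real editor):

```java
// Computes the pixel offset of a frame inside a sheet laid out as a
// fixed-size grid with 'columns' cells per row, frames numbered
// left to right, top to bottom.
public class FixedGrid {
    public static int frameX(int index, int columns, int cellWidth) {
        return (index % columns) * cellWidth;   // column within the row
    }

    public static int frameY(int index, int columns, int cellHeight) {
        return (index / columns) * cellHeight;  // row within the sheet
    }
}
```

With 32-pixel cells and 4 columns, frame 2 starts at x = 64, and frame 5 wraps to the second row.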
Make your sprite sheet knowing the size and number of each sequence. Grab a buffered image of your sheet and use something like this: ``` currentFrame = spriteSheet.getSubimage(x, y, w, h); ``` Your x and y will change based on the frame you are on. Keep the width and height the same to make things easy on yourself. Forget trying to keep the entire game on one sheet; it's nuts and hard to manage. Use a new png for each animation sequence. If you are anal about saving space, only create moving-right animations and just flip the buffered image in real time to move left. Java will read png files with the alpha channel, so don't worry about the transparency colour. Draw everything in pngs. Use Photoshop or Gimp. Search Google for Java image TYPE\_INT\_ARGB
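A self-contained sketch of the `getSubimage` approach (class and method names are my own, and the frame sizes are arbitrary; in real code the sheet would come from `ImageIO.read`):

```java
import java.awt.image.BufferedImage;

public class SheetSlicer {
    // Cuts a sheet into equal-sized frames, left to right, top to bottom.
    // getSubimage shares pixel data with the sheet, so this is cheap.
    public static BufferedImage[] slice(BufferedImage sheet, int w, int h) {
        int cols = sheet.getWidth() / w;
        int rows = sheet.getHeight() / h;
        BufferedImage[] frames = new BufferedImage[cols * rows];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                frames[r * cols + c] = sheet.getSubimage(c * w, r * h, w, h);
            }
        }
        return frames;
    }
}
```

A 128x64 sheet with 32x32 cells yields eight frames, each keeping the PNG's alpha channel.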
What's the best way of reading a sprite sheet in Java?
[ "", "java", "io", "2d", "sprite-sheet", "" ]
I was reading about parsers and parser generators and found this statement in Wikipedia's LR parsing page: > Many programming languages can be parsed using some variation of an LR parser. One notable exception is C++. Why is it so? What particular property of C++ causes it to be impossible to parse with LR parsers? Using Google, I only found that C can be perfectly parsed with LR(1) but C++ requires LR(∞).
There is an interesting thread on [Lambda the Ultimate](http://lambda-the-ultimate.org/) that discusses the [LALR grammar for C++](http://lambda-the-ultimate.org/node/2158#comment-27800). It includes a link to a [PhD thesis](http://www.computing.surrey.ac.uk/research/dsrg/fog/FogThesis.pdf) that contains a discussion of C++ parsing, which states that: > "C++ grammar is ambiguous, > context-dependent and potentially > requires infinite lookahead to resolve > some ambiguities". It goes on to give a number of examples (see page 147 of the pdf). One example is: ``` int(x), y, *const z; ``` meaning ``` int x; int y; int *const z; ``` Compare to: ``` int(x), y, new int; ``` meaning ``` (int(x)), (y), (new int); ``` (a comma-separated expression). The two token sequences have the same initial subsequence but different parse trees, which depend on the last element. There can be arbitrarily many tokens before the disambiguating one.
LR parsers can't handle ambiguous grammar rules, by design. (That made the theory easier back in the 1970s when the ideas were being worked out.) C and C++ both allow the following statement: ``` x * y ; ``` It has two different parses: 1. It can be the declaration of y, as pointer to type x 2. It can be a multiply of x and y, throwing away the answer. Now, you might think the latter is stupid and should be ignored. Most would agree with you; however, there are cases where it might have a side effect (e.g., if multiply is overloaded). But that isn't the point. The point is there *are* two different parses, and therefore a program can mean different things depending on how this *should* have been parsed. The compiler must accept the appropriate one under the appropriate circumstances, and in the absence of any other information (e.g., knowledge of the type of x) must collect both in order to decide later what to do. Thus a grammar must allow this. And that makes the grammar ambiguous. Thus pure LR parsing can't handle this. Nor can many other widely available parser generators, such as Antlr, JavaCC, YACC, or traditional Bison, or even PEG-style parsers, used in a "pure" way. There are lots of more complicated cases (parsing template syntax requires arbitrary lookahead, whereas LALR(k) can look ahead at most k tokens), but it only takes one counterexample to shoot down *pure* LR (or the others) parsing. Most real C/C++ parsers handle this example by using some kind of deterministic parser with an extra hack: they intertwine parsing with symbol table collection... so that by the time "x" is encountered, the parser knows if x is a type or not, and can thus choose between the two potential parses. But a parser that does this isn't context free, and LR parsers (the pure ones, etc.) are (at best) context free. One can cheat, and add per-rule reduction-time semantic checks to LR parsers to do this disambiguation. (This code often isn't simple.)
Most of the other parser types have some means to add semantic checks at various points in the parsing that can be used to do this. And if you cheat enough, you can make LR parsers work for C and C++. The GCC guys did for a while, but gave it up for hand-coded parsing, I think because they wanted better error diagnostics. There's another approach, though, which is nice and clean and parses C and C++ just fine without any symbol table hackery: [GLR parsers](http://en.wikipedia.org/wiki/GLR_parser). These are full context-free parsers (having effectively infinite lookahead). GLR parsers simply accept *both* parses, producing a "tree" (actually a directed acyclic graph that is mostly tree-like) that represents the ambiguous parse. A post-parsing pass can resolve the ambiguities. We use this technique in the C and C++ front ends for our DMS Software Reengineering Toolkit (as of June 2017 these handle full C++17 in MS and GNU dialects). They have been used to process millions of lines of large C and C++ systems, with complete, precise parses producing ASTs with complete details of the source code. (See [the AST for C++'s most vexing parse.](https://stackoverflow.com/a/17393852/120163))
Why can't C++ be parsed with a LR(1) parser?
[ "", "c++", "parsing", "grammar", "formal-languages", "" ]
I'm reviewing some code for a friend and saw that he was using a return statement inside of a try-finally block. Does the code in the finally section still fire even though the rest of the try block doesn't? Example: ``` public bool someMethod() { try { return true; throw new Exception("test"); // doesn't seem to get executed } finally { //code in question } } ```
Simple answer: Yes.
Normally, yes. The finally section is guaranteed to execute whatever happens including exceptions or return statement. An exception to this rule is an asynchronous exception happening on the thread (`OutOfMemoryException`, `StackOverflowException`). To learn more about async exceptions and reliable code in that situations, read about [constrained execution regions](https://blogs.msdn.microsoft.com/bclteam/2005/06/13/constrained-execution-regions-and-other-errata-brian-grunkemeyer/).
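The guarantee is not C#-specific; the same semantics hold in Java, so here is a runnable illustration of the pattern (a sketch with names of my own, not the original C# code):

```java
public class FinallyDemo {
    static StringBuilder log = new StringBuilder();

    static boolean someMethod() {
        try {
            return true;                    // the return value is captured here...
        } finally {
            log.append("finally ran");      // ...but the finally block still executes
        }
    }
}
```

Calling `someMethod()` returns `true` and the log shows the finally block fired on the way out.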
Will code in a Finally statement fire if I return a value in a Try block?
[ "", "c#", ".net", "exception", "try-catch", "" ]
I'm working on an AJAXy project (Dojo and Rails, if the particulars matter). There are several places where the user should be able to sort, group, and filter results. There are also places where a user fills out a short form and the resulting item gets added to a list on the same page. The non-AJAXy implementation works fine -- the view layer server-side already knows how to render this stuff, so it can just do it again in a different order or with an extra element. This, however, adds lots of burden to the server. So we switched to sending JSON from the server and doing lots of (re-)rendering client-side. The downside is that now we have duplicate code for rendering every page: once in Rails, which was built for this, and once in Dojo, which was not. The latter is basically just string concatenation. So question part one: is there a good Javascript MVC framework we could use to make the rendering on the client-side more maintainable? And question part two: is there a way to generate the client-side views in Javascript and the server-side views in ERB from the same template? I think that's what the Pragmatic Programmers would do. Alternatively, question part three: am I completely missing another angle? Perhaps send JSON from the server but also include the HTML snippet as an attribute so the Javascript can do the filtering, sorting, etc. and then just insert the given fragment?
Number 5 in my [list of five AJAX styles](https://stackoverflow.com/questions/429005/how-dangerous-is-it-send-html-in-ajax-as-opposed-to-sending-json-and-building-the#429072) tends to work pretty well.
Well, every time you generate HTML snippets on the client and on the server you may end up with duplicated code. There is no good way around it generally. But you can do two things: 1. Generate everything on the server. Use [AHAH](http://en.wikipedia.org/wiki/Ahah) when you need to generate HTML snippets dynamically. Basically you ask server to generate an HTML fragment, receive it asynchronously, and plug it in place using `innerHTML` or any other similar mechanism. 2. Generate everything on the client (AKA the thick client paradigm). In this case even for the initial rendering you pass data instead of pre-rendered HTML, and process the data client-side using JavaScript to make HTML. Depending on the situation you can use the data island technique, or request data asynchronously. Variant: include it as <script> using [JSONP](http://en.wikipedia.org/wiki/JSONP#JSONP) so the browser will make a request for you while loading the page. Both approaches are very simple and have different set of pros and cons. Sometimes it is possible to combine both techniques within one web application for different parts of data. Of course you can go for exotic solutions, like using some JavaScript-based server-side framework. In this case you can share the code between the server and the client.
How do I reduce view duplication between client and server?
[ "", "javascript", "ruby-on-rails", "ajax", "model-view-controller", "" ]
In C# is it guaranteed that expressions are evaluated left to right? For example: ``` myClass = GetClass(); if (myClass == null || myClass.Property > 0) continue; ``` Are there any languages that do not comply?
You actually refer to a language feature called "short-circuiting logical expressions". What this means is this: when the outcome of a logical expression cannot change anymore, i.e. when it is clear that the expression will evaluate to "true" or "false" no matter what, the remaining parts of the expression are not evaluated. For example, C#, Java and JavaScript do that, and you can rely on it in those languages (to answer your question). In your case, if myClass is not null: * `myClass == null` evaluates to false * since it is an "or" expression, the second part still can change the result, so it is evaluated * `myClass.Property > 0` determines the end result If myClass is null: * `myClass == null` evaluates to true * since it is an "or" expression, it does not matter what follows * no more evaluation is done, the end result is true There are languages that do not short-circuit logical expressions. Classic VB is an example: there, `myClass.Property > 0` would be evaluated and produce an error if myClass was null (called "Nothing" in VB).
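The walk-through above can be observed directly; a small Java sketch (Java's `||` short-circuits the same way as C#'s, and the names here are my own):

```java
public class ShortCircuit {
    static int rightSideEvaluations = 0;

    static boolean isPositive(Integer x) {
        rightSideEvaluations++;        // counts how often the right-hand side runs
        return x > 0;
    }

    // Mirrors the question's: myClass == null || myClass.Property > 0
    static boolean nullOrPositive(Integer value) {
        // when value == null, the right-hand side is never evaluated
        return value == null || isPositive(value);
    }
}
```

Passing `null` returns true without ever touching `isPositive`, which is exactly why the original `if` is safe.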
Short-circuiting is described in section 7.11 of the C# 3.0 spec: > The operation x || y corresponds to > the operation x | y, except that y is > evaluated only if x is not true. So yes, you're fine. As for other languages - I never like to speak for *all* languages. In VB.NET, you can use OrElse and AndAlso which are short-circuited, but plain Or and And aren't.
Left to right expression evaluation
[ "", "c#", "language-agnostic", "compiler-construction", "expression", "expression-evaluation", "" ]
I want to develop Java apps, real quick, what IDE should I choose?
I think CodeGear [JBuilder](http://www.codegear.com/products/jbuilder) is a very good place to start.
[IntelliJ IDEA](http://www.jetbrains.com/idea/) from JetBrains.
Which Java IDE is the most RAD in a Delphi-like way?
[ "", "java", "ide", "rad", "" ]
I need to be able to determine when `ContainsFocus` changes on a `Control` (specifically a windows form). Overriding `OnGotFocus` is not the answer. When I bring the form to the foreground, `ContainsFocus` is true and `Focused` is false. So is there an `OnGotFocus` equivalent for `ContainsFocus`? Or any other way?
Note: GotFocus events of the child controls are fired if you have a child control. Otherwise OnGotFocus of the form is called. If I understood the question correctly, then this should work: ``` bool lastNotificationWasGotFocus = false; protected override void OnControlAdded(ControlEventArgs e) { SubscribeEvents(e.Control); base.OnControlAdded(e); } protected override void OnControlRemoved(ControlEventArgs e) { UnsubscribeEvents(e.Control); base.OnControlRemoved(e); } private void SubscribeEvents(Control control) { control.GotFocus += new EventHandler(control_GotFocus); control.LostFocus += new EventHandler(control_LostFocus); control.ControlAdded += new ControlEventHandler(control_ControlAdded); control.ControlRemoved += new ControlEventHandler(control_ControlRemoved); foreach (Control innerControl in control.Controls) { SubscribeEvents(innerControl); } } private void UnsubscribeEvents(Control control) { control.GotFocus -= new EventHandler(control_GotFocus); control.LostFocus -= new EventHandler(control_LostFocus); control.ControlAdded -= new ControlEventHandler(control_ControlAdded); control.ControlRemoved -= new ControlEventHandler(control_ControlRemoved); foreach (Control innerControl in control.Controls) { UnsubscribeEvents(innerControl); } } private void control_ControlAdded(object sender, ControlEventArgs e) { SubscribeEvents(e.Control); } private void control_ControlRemoved(object sender, ControlEventArgs e) { UnsubscribeEvents(e.Control); } protected override void OnGotFocus(EventArgs e) { CheckContainsFocus(); base.OnGotFocus(e); } protected override void OnLostFocus(EventArgs e) { CheckLostFocus(); base.OnLostFocus(e); } private void control_GotFocus(object sender, EventArgs e) { CheckContainsFocus(); } private void control_LostFocus(object sender, EventArgs e) { CheckLostFocus(); } private void CheckContainsFocus() { if (lastNotificationWasGotFocus == false) { lastNotificationWasGotFocus = true; OnContainsFocus(); } } private void CheckLostFocus() { if 
(ContainsFocus == false) { lastNotificationWasGotFocus = false; OnLostFocus(); } } private void OnContainsFocus() { Console.WriteLine("I have the power of focus!"); } private void OnLostFocus() { Console.WriteLine("I lost my power..."); } ```
One way to solve this is to use a Timer. It's definitely brute force, but it gets the job done: ``` private Timer m_checkContainsFocusTimer = new Timer(); private bool m_containsFocus = true; // in the form's constructor (or Load handler): m_checkContainsFocusTimer.Interval = 1000; // every second is good enough m_checkContainsFocusTimer.Tick += new EventHandler(CheckContainsFocusTimer_Tick); m_checkContainsFocusTimer.Start(); private void CheckContainsFocusTimer_Tick(object sender, EventArgs e) { if (!m_containsFocus && ContainsFocus) OnAppGotFocus(); m_containsFocus = ContainsFocus; } ``` But is there an easier way?
Is there a way to catch when ContainsFocus changes?
[ "", "c#", "winforms", "events", "" ]
I am curious to know how the Loader maps DLL into Process Address Space. How does the loader do that magic?
What level of detail are you looking for? On the basic level, all dynamic linkers work pretty much the same way: 1. Dynamic libraries are compiled to relocatable code (using relative jumps instead of absolute, for example). 2. The linker finds an appropriately-sized empty space in the memory map of the application, and reads the DLL's code and any static data into that space. 3. The dynamic library contains a table of offsets to the start of each exported function, and calls to the DLL's functions in the client program are patched at load-time with a new destination address, based on where the library was loaded. 4. Most dynamic linker systems have some system for setting a preferred base address for a particular library. If a library is loaded at its preferred address, then the relocation in steps 2 and 3 can be skipped.
Okay, I'm assuming the Windows side of things here. What happens when you load a PE file is that the loader (contained in NTDLL) will do the following: 1. Locate each of the DLLs using the DLL search semantics (system and patch-level specific), well-known DLLs are kind of exempt from this 2. Map the file into memory (MMF), where pages are copy-on-write (CoW) 3. Traverse the import directory and for each import start (recursively) at point 1. 4. Resolve relocations, which most of the time is only a very limited number of entities, since the code itself is position-independent code (PIC) 5. (IIRC) patch the EAT from RVA (relative virtual address) to VA (virtual address within current process memory space) 6. Patch the IAT (import address table) to reference the imports with their actual address within the process memory space 7. For a DLL call `DLLMain()` for an EXE create a thread whose start address is at the entry point of the PE file (this is also oversimplified, because the actual start address is inside kernel32.dll for Win32 processes) Now when you compile code it depends on the linker how the external function is referenced. Some linkers create stubs so that - in theory - trying to check the function address against NULL will always say it's not NULL. It's a quirk you have to be aware of if and when your linker is affected. Others reference the IAT entry directly in which case an unreferenced function (think delay-loaded DLLs) address can be NULL and the SEH handler will then invoke the delay-load helper and (attempt to) resolve the function address, before resuming execution at the point it failed. There is a lot of red tape involved in the above process which I oversimplified. **The gist for what you wanted to know is that the mapping into the process happens as an MMF**, though you can artificially mimic the behavior with heap space. However, if you remember the point about CoW, that's the crux in the idea of DLLs. 
Actually the *same* copy of (most of) the pages of the DLL will be shared among the processes that load a particular DLL. The pages which are not shared are the ones that were written to, for example when resolving relocations and similar things. In this case each process has a - now modified - copy of the original page. And a word of warning concerning EXE packers on DLLs: they defeat exactly this CoW mechanism I described, in that they allocate space for the unpacked contents of the DLL on the heap of the process into which the DLL is loaded. So while the actual file contents are still mapped as an MMF and shared, the unpacked contents occupy the same amount of memory in each process loading the DLL instead of sharing it.
How loader maps DLL into Process Address Space
[ "", "c++", "c", "dll", "loader", "" ]
Check out this test: ``` [TestFixture] public class Quick_test { [Test] public void Test() { Assert.AreEqual(0, GetByYield().Count()); Assert.AreEqual(0, GetByEnumerable().Count()); } private IEnumerable<string> GetByYield() { yield break; } private IEnumerable<string> GetByEnumerable() { return Enumerable.Empty<string>(); } } ``` When I write stub methods I generally use the Enumerable.Empty way of doing it. I stumbled across some old code I wrote where I did it the yield way. This got me to wondering: * Which is more visually appealing to other developers? * Are there any hidden gotchas that would cause us to prefer one over the other? Thanks!
I would prefer any method that delivers the clearest meaning to the developer. Personally, I don't even know what the ***yield break;*** line does, so returning `Enumerable.Empty<string>()` would be preferred in any of my code bases.
[Enumerable.Empty](http://msdn.microsoft.com/en-us/library/bb341042.aspx): the documentation claims that it "caches an empty sequence", and Reflector confirms. If caching behavior matters to you, that's one advantage of Enumerable.Empty.
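For comparison, Java's collections API makes the same trade described in that documentation: `Collections.emptyList()` hands back one shared immutable instance, so "creating" the empty list allocates nothing (a sketch; the class name is mine):

```java
import java.util.Collections;

public class EmptyListDemo {
    // Collections.emptyList() always returns the same shared,
    // immutable instance, analogous to a cached empty sequence.
    public static boolean sharesInstance() {
        Object first = Collections.emptyList();
        Object second = Collections.emptyList();
        return first == second;   // reference equality: one cached object
    }
}
```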
Returning empty collections
[ "", "c#", "linq", "" ]
I need to watch when certain processes are started or stopped on a Windows machine. I'm currently tapped into the WMI system and querying it every 5 seconds, but this causes a CPU spike every 5 seconds because WMI is WMI. Is there a better way of doing this? I could just make a list of running processes and attach an Exited event to them through the System.Diagnostics Namespace, but there is no Event Handler for creation.
This is not exactly how you'd do it in the real world but should help. This seems not to drive my CPU much at all. ``` static void Main(string[] args) { // Getting all instances of notepad // (this is only done once here so start up some notepad instances first) // you may want use GetProcessByPid or GetProcesses and filter them as required Process[] processesToWatch = Process.GetProcessesByName("notepad"); foreach (var process in processesToWatch) { process.EnableRaisingEvents = true; process.Exited += (s, e) => Console.WriteLine("An instance of notepad exited"); } Thread watchThread = new Thread(() => { while (true) { Process[] processes = Process.GetProcesses(); foreach (var process in processes) { Console.WriteLine("{0}:{1}", process.Id, process.ProcessName); } // Don't dedicate a thread to this like I'm doing here // setup a timer or something similiar Thread.Sleep(2000); } }); watchThread.IsBackground = true; watchThread.Start(); Console.WriteLine("Polling processes and waiting for notepad process exit events"); Console.ReadLine(); } ```
If you are only looking for PID/Name of your processes, you may instead wish to pick up on Win32\_ProcessTrace events, using a WQL query such as "SELECT \* FROM Win32\_ProcessTrace WHERE TargetInstance.ProcessName = 'name'" **if applicable\***. The pitfall of using "SELECT \* FROM \_\_InstanceModificationEvent WITHIN 10 WHERE TargetInstance ISA 'Win32Process' AND TargetInstance.Name = 'name'" is in how it works on the back end. If you inspect wbemess.log within your %windir%\system32\wbem\logs directory, you will notice the following logs (using \_\_InstanceDeletionEvent): ``` (Wed Jul 22 13:58:31 2009.73889577) : Registering notification sink with query select * from __InstanceDeletionEvent within 10 where TargetInstance ISA 'Win32_Process' in namespace //./root/CIMV2. (Wed Jul 22 13:58:31 2009.73889577) : Activating filter 047209E0 with query select * from __InstanceDeletionEvent within 10 where TargetInstance ISA 'Win32_Process' in namespace //./root/CIMV2. (Wed Jul 22 13:58:31 2009.73889577) : Activating filter 0225E560 with query select * from __ClassOperationEvent where TargetClass isa "Win32_Process" in namespace //./root/CIMV2. 
(Wed Jul 22 13:58:31 2009.73889577) : Activating filter 'select * from __ClassOperationEvent where TargetClass isa "Win32_Process"' with provider $Core (Wed Jul 22 13:58:31 2009.73889587) : Activating filter 'select * from __InstanceDeletionEvent within 10 where TargetInstance ISA 'Win32_Process'' with provider $Core (Wed Jul 22 13:58:31 2009.73889587) : Instituting polling query select * from Win32_Process to satisfy event query select * from __InstanceDeletionEvent within 10 where TargetInstance ISA 'Win32_Process' (Wed Jul 22 13:58:31 2009.73889587) : Executing polling query 'select * from Win32_Process' in namespace '//./root/CIMV2' (Wed Jul 22 13:58:31 2009.73889697) : Polling query 'select * from Win32_Process' done (Wed Jul 22 13:58:41 2009.73899702) : Executing polling query 'select * from Win32_Process' in namespace '//./root/CIMV2' (Wed Jul 22 13:58:41 2009.73899792) : Polling query 'select * from Win32_Process' done ``` As you can see, the actual event implementation on the remote machine is to perform a query against Win32\_Process on an interval that is specified by your value in the WITHIN clause. As a result, any processes that start and stop within that poll will never fire an event. You can set the WITHIN clause to a small value to try and minimize this effect, but the better solution is to use a true event like Win32\_ProcessTrace, which should **always** fire. \*Note that MSDN indicates Win32\_ProcessTrace requires a minimum of Windows XP on a client machine and Windows 2003 on a server machine to work. If you are working with an older OS, you may be stuck using the \_\_InstanceModificationEvent query.
WMI Process Watching uses too much CPU! Any better method?
[ "", "c#", "wmi", "wmi-query", "" ]
Is there an easy way to parse the user's HTTP\_ACCEPT\_LANGUAGE and set the locale in PHP? I know the Zend framework has a method to do this, but I'd rather not install the whole framework just to use that one bit of functionality. The PEAR I18Nv2 package is in beta and hasn't been changed for almost three years, so I'd rather not use that if possible. Also nice would be if it could figure out if the server was running on Windows or not, since Windows's locale strings are different from the rest of the world's... (German is "deu" or "german" instead of "de".)
A nice solution is [on its way](http://php.net/manual/en/locale.acceptfromhttp.php). Without that you'll need to parse the header yourself. It's a comma-separated list of semicolon-separated locales and attributes. It can look like this: ``` en_US, en;q=0.8, fr_CA;q=0.2, *;q=0.1 ``` You then try each locale until `setlocale()` accepts it. Be prepared that none of them may match. Don't base anything too important on it, and allow users to override it, because some users may have misconfigured browsers. --- For Windows locale names, perhaps you need to convert ISO 639-1 names to ISO 639-2/3?
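The parsing step itself is language-agnostic (split on commas, read the optional `q` value, sort descending); here is an illustrative sketch in Java of that logic, with names of my own; a PHP version would follow the same shape:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class AcceptLanguage {
    // Parses an Accept-Language header into locale tags, best first.
    public static List<String> parse(String header) {
        List<String[]> entries = new ArrayList<>();
        for (String part : header.split(",")) {
            String[] pieces = part.trim().split(";");
            double q = 1.0;                          // a missing q means q=1
            for (int i = 1; i < pieces.length; i++) {
                String attr = pieces[i].trim();
                if (attr.startsWith("q=")) {
                    q = Double.parseDouble(attr.substring(2));
                }
            }
            entries.add(new String[] { pieces[0].trim(), Double.toString(q) });
        }
        // stable sort, highest quality first
        entries.sort(Comparator
                .comparingDouble((String[] e) -> Double.parseDouble(e[1]))
                .reversed());
        List<String> tags = new ArrayList<>();
        for (String[] e : entries) {
            tags.add(e[0]);
        }
        return tags;
    }
}
```

For the example header above this yields `en_US, en, fr_CA, *`, which you would then try against `setlocale()` in order.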
It's not as easy as it should be (in my humble opinion). First of all you have to extract the locales from the `$_SERVER['HTTP_ACCEPT_LANGUAGE']` value and sort them by their `q` values. Afterwards you have to retrieve the appropriate system locale for each of the given locales, which should be no problem on a \*nix machine (you might only have to cope with the correct charset) but on Windows you'll have to translate the locales into Windows locales, e.g. `de_DE` will be `German_Germany` (again you also have to cope with charset issues if you're using UTF-8 in your app, for example). I think you'll have to build a lookup table for this issue - and there are a lot of locales ;-) Now you try one locale after the other (sorted by descending `q` values) until you find a match using [`setlocale()`](https://www.php.net/manual/en/function.setlocale.php) (the function will return `false` if the given locale could not be set). But then there will be a last obstacle to cope with: > The locale information is maintained > per process, not per thread. If you > are running PHP on a multithreaded > server api like IIS or Apache on > Windows you may experience sudden > changes of locale settings while a > script is running although the script > itself never called setlocale() > itself. This happens due to other > scripts running in different threads > of the same process at the same time > changing the processwide locale using > setlocale(). (see: <https://www.php.net/manual/en/function.setlocale.php>) This means that you could experience sudden locale changes during the execution of a script because another user with a different locale set just hit your webpage. Therefore the mentioned [`Zend_Locale`](http://framework.zend.com/manual/en/zend.locale.html) does not rely on the PHP function `setlocale()` (it's only used to retrieve the system locale information) but instead uses a system based on the data provided by the [Unicode CLDR Project](http://www.unicode.org/cldr/). 
This makes the component independent from all those `setlocale()` issues but this also introduces some other deficiencies such as the lack of support for locale-aware string operations (sorting for example).
How do I automatically set a user's locale in PHP?
[ "", "php", "internationalization", "locale", "" ]
Continuing [my investigation](https://stackoverflow.com/questions/308481/writing-the-f-recursive-folder-visitor-in-c-seq-vs-ienumerable) of expressing F# ideas in C#, I wanted a pipe-forward operator. For anything wrapped in an IEnumerable, we already have it, as you can .NextFunc() to your heart's content. But if, for example, you have any fold-like reduction at the end, you can't feed the result of that into a function. Here are two extension methods; I wondered if anyone else had tried this, and whether it's a good idea or not (EDIT: now with [Earwicker's Maybe](http://#337846) included): ``` public static void Pipe<T>(this T val, Action<T> action) where T : class { if (val!=null) action(val); } public static R Pipe<T, R>(this T val, Func<T, R> func) where T : class where R : class { return val!=null?func(val):null; } ``` You can then write something like: ``` Func<string, string[]> readlines = (f) => File.ReadAllLines(f); Action<string, string> writefile = (f, s) => File.WriteAllText(f, s); Action<string, string> RemoveLinesContaining = (file, text) => { file.Pipe(readlines) .Filter(s => !s.Contains(text)) .Fold((val, sb) => sb.AppendLine(val), new StringBuilder()) .Pipe((o) => o.ToString()) .Pipe((s) => writefile(file, s)); }; ``` (I know, Filter == Where in C#, and Fold == Aggregate, but I wanted to roll my own, and I could have done WriteAllLines, but that's not the point) EDIT: corrections as per Earwicker's comment (if I've understood correctly).
I haven't bothered with a raw pipe, but I have tried making all references into the Maybe monad: ``` public static class ReferenceExtensions { public static TOut IfNotNull<TIn, TOut>(this TIn v, Func<TIn, TOut> f) where TIn : class where TOut: class { if (v == null) return null; return f(v); } } ``` Then suppose you have an object model that lets you lookup a RecordCompany by name, and then lookup a Band within that RecordCompany, a Member of the Band, and any of these might return null, so this might throw a NullReferenceException: ``` var pixiesDrummer = Music.GetCompany("4ad.com") .GetBand("Pixes") .GetMember("David"); ``` We can fix that: ``` var pixiesDrummer = Music.GetCompany("4ad.com") .IfNotNull(rc => rc.GetBand("Pixes")) .IfNotNull(band => band.GetMember("David")); ``` Hey presto, if any of those transitions return null, pixiesDrummer will be null. Wouldn't it be great if we could do extension methods that are operator overloads? ``` public static TOut operator| <TIn, TOut>(TIn v, Func<TIn, TOut> f) ``` Then I could pipe together my transition lambdas like this: ``` var pixiesDrummer = Music.GetCompany("4ad.com") | rc => rc.GetBand("Pixes") | band => band.GetMember("David"); ``` Also wouldn't it be great if System.Void was defined as a type and Action was really just Func<..., Void>? **Update:** [I blogged a little about the theory behind this](http://incrediblejourneysintotheknown.blogspot.com/2008/12/maybe-monad-in-c.html). **Update 2:** An alternative answer to the original question, which is roughly "How would you express the F# pipe-forward operator in C#?" Pipe-forward is: ``` let (|>) x f = f x ``` In other words, it lets you write a function and its first argument in the opposite order: argument followed by function. It's just a syntactic helper that assists with readability, allowing you to make use of infix notation with any function. This is exactly what extension methods are for in C#. 
Without them, we would have to write: ``` var n = Enumerable.Select(numbers, m => m * 2); ``` With them, we can write: ``` var n = numbers.Select(m => m * 2); ``` (Ignore the fact that they also let us omit the class name - that's a bonus but could also be made available for non-extension methods as it is in Java). So C# already solves the same problem in a different way.
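As a cross-language aside (not from the original answers), the pipe-forward plus null-propagating "Maybe" combination discussed above can be sketched in a few lines of Python; the `pipe` helper here is hypothetical:

```python
def pipe(value, *funcs):
    """Thread value through funcs left to right, stopping at None (the Maybe guard)."""
    for f in funcs:
        if value is None:
            return None
        value = f(value)
    return value

# Equivalent of val |> f |> g, with null propagation:
result = pipe("4ad.com", str.upper, lambda s: s.split("."))
```

Each step only runs if the previous one produced a value, which is the same short-circuiting the `IfNotNull` extension method provides.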
For piping, I don't think there is an expectation to check for null and skip calling the piped function. In many cases the function argument could easily take a null and handle it itself. Here is my implementation. I have `Pipe` and `PipeR`. Be forewarned, `PipeR` is not "pipe right"; it is just for the cases in which the target is in the opposite position for currying, because the alternate overloads allow limited fake currying of parameters. The nice thing about the fake currying is that you can pipe in the method name after providing the parameters, producing less nesting than you would get with a lambda.

```
new [] { "Joe", "Jane", "Janet" }.Pipe(", ", String.Join)
```

`String.Join` has the `IEnumerable` in the last position, so this works.

```
"One car red car blue Car".PipeR(@"(\w+)\s+(car)",RegexOptions.IgnoreCase, Regex.IsMatch)
```

`Regex.IsMatch` has the target in the first position, so `PipeR` works. Here's my example implementation:

```
public static TR Pipe<T,TR>(this T target, Func<T, TR> func)
{
    return func(target);
}

public static TR Pipe<T,T1, TR>(this T target, T1 arg1, Func<T1, T, TR> func)
{
    return func(arg1, target);
}

public static TR Pipe<T, T1, T2, TR>(this T target, T1 arg1, T2 arg2, Func<T1, T2, T, TR> func)
{
    return func(arg1, arg2, target);
}

public static TR PipeR<T, T1, TR>(this T target, T1 arg1, Func<T, T1, TR> func)
{
    return func(target, arg1);
}

public static TR PipeR<T, T1, T2, TR>(this T target, T1 arg1, T2 arg2, Func<T, T1, T2, TR> func)
{
    return func(target, arg1, arg2);
}
```
Pipe forwards in C#
[ "c#", "f#", "functional-programming" ]
I'm using JPA (Hibernate's implementation) to annotate entity classes to persist to a relational database (MySQL or SQL Server). Is there an easy way to auto generate the database schema (table creation scripts) from the annotated classes? I'm still in the prototyping phase and anticipate frequent schema changes. I would like to be able to specify and change the data model from the annotated code. Grails is similar in that it generates the database from the domain classes.
You can use [hbm2ddl](https://blog.eyallupu.com/2007/05/hibernates-hbm2ddl-tool.html) from Hibernate. The docs are [here](https://web.archive.org/web/20120606064202/http://docs.jboss.org/tools/2.1.0.Beta1/hibernatetools/html/ant.html#d0e2726).
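For the prototyping workflow described in the question, hbm2ddl can also be driven from `persistence.xml` so the schema tracks the annotated classes automatically. A minimal sketch (the unit name is a placeholder; the property names are Hibernate's):

```xml
<persistence-unit name="myUnit">
  <properties>
    <!-- validate | update | create | create-drop -->
    <property name="hibernate.hbm2ddl.auto" value="update"/>
    <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
  </properties>
</persistence-unit>
```

With `update`, Hibernate adjusts the tables to match the entities on startup, which suits frequent schema changes during prototyping.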
**Generate create and drop script for given JPA entities** We use this code to generate the drop and create statements: Just construct this class with all entity classes and call create/dropTableScript. If needed you can use a persistence.xml and persistence unit name instead. Just say so and I'll post that code too.

```
import java.util.Collection;
import java.util.Properties;

import org.hibernate.cfg.AnnotationConfiguration;
import org.hibernate.dialect.Dialect;
import org.hibernate.ejb.Ejb3Configuration;

/**
 * SQL Creator for Tables according to JPA/Hibernate annotations.
 *
 * Use:
 *
 * {@link #createTablesScript()} To create the table creation script
 *
 * {@link #dropTablesScript()} to create the table destruction script
 *
 */
public class SqlTableCreator {

    private final AnnotationConfiguration hibernateConfiguration;
    private final Properties dialectProps;

    public SqlTableCreator(final Collection<Class<?>> entities) {
        final Ejb3Configuration ejb3Configuration = new Ejb3Configuration();
        for (final Class<?> entity : entities) {
            ejb3Configuration.addAnnotatedClass(entity);
        }

        dialectProps = new Properties();
        dialectProps.put("hibernate.dialect", "org.hibernate.dialect.SQLServerDialect");

        hibernateConfiguration = ejb3Configuration.getHibernateConfiguration();
    }

    /**
     * Create the SQL script to create all tables.
     *
     * @return A {@link String} representing the SQL script.
     */
    public String createTablesScript() {
        final StringBuilder script = new StringBuilder();

        final String[] creationScript = hibernateConfiguration.generateSchemaCreationScript(Dialect
                .getDialect(dialectProps));
        for (final String string : creationScript) {
            script.append(string).append(";\n");
        }
        script.append("\ngo\n\n");

        return script.toString();
    }

    /**
     * Create the SQL script to drop all tables.
     *
     * @return A {@link String} representing the SQL script.
*/ public String dropTablesScript() { final StringBuilder script = new StringBuilder(); final String[] creationScript = hibernateConfiguration.generateDropSchemaScript(Dialect .getDialect(dialectProps)); for (final String string : creationScript) { script.append(string).append(";\n"); } script.append("\ngo\n\n"); return script.toString(); } } ```
Auto generate data schema from JPA annotated entity classes
[ "java", "database", "hibernate", "jpa", "jakarta-ee" ]
How does the SQL Server JDBC Trusted Connection Authentication work? (i.e. how does the trusted connection authenticate the logged-in AD user in such a transparent and elegant fashion, and how can I implement a similar authentication solution for my client-server applications in Java without a database connection or any use of the existing SQL Server solution.)

Assumptions:

* Working within a Windows 2003 domain
* You have access to the Windows API via JNI/JNA
It depends on the client. For example, if you have a web browser, it can use NTLM authentication to pass the domain authentication of your current client to the server. In this case the browser, like IE or FF, supports this, and your web server needs support for NTLM. For example, here for Tomcat: <http://jcifs.samba.org/src/docs/ntlmhttpauth.html>

There is also the SPNEGO protocol in combination with Kerberos, as explained here: <http://java.sun.com/javase/6/docs/technotes/guides/security/jgss/lab/index.html>

If you have your own client, it depends on the client's framework whether it is able to use the local user's security context and pass it on. The page above describes this at least for a Kerberos scenario.

Greetings Bernd

PS: I am not sure if you can pass the authentication context established with the jcifs/NTLM solution to a backend component like SQL Server. It should work with Kerberos tickets (if configured).
jTDS and Microsoft JDBC Driver both offer native Windows Authentication.
How does the SQL Server JDBC Trusted Connection Authentication work?
[ "java", "sql-server", "jdbc", "jna", "trustedconnection" ]
The number is bigger than `int` and `long` but can be accommodated in `Decimal`. However, the normal `ToString` and `Convert` methods don't work on `Decimal`.
I believe this will produce the right results where it returns anything, but may reject valid integers. I dare say that can be worked around with a bit of effort though... (Oh, and it will also fail for negative numbers at the moment.) ``` static string ConvertToHex(decimal d) { int[] bits = decimal.GetBits(d); if (bits[3] != 0) // Sign and exponent { throw new ArgumentException(); } return string.Format("{0:x8}{1:x8}{2:x8}", (uint)bits[2], (uint)bits[1], (uint)bits[0]); } ```
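To illustrate why concatenating the three 32-bit chunks gives the right digits, here is the same scheme sketched in Python (illustrative only; Python's arbitrary-precision integers make the whole task trivial via `hex`):

```python
n = 588063595292424954445828  # the value from the question title

# Mirror the C# approach: split into three 32-bit words, format each as 8 hex digits.
lo = n & 0xFFFFFFFF
mid = (n >> 32) & 0xFFFFFFFF
hi = (n >> 64) & 0xFFFFFFFF
padded = f"{hi:08x}{mid:08x}{lo:08x}"

# Same digits as formatting the whole number at once (minus leading zeros).
assert padded.lstrip("0") == format(n, "x")
```

Note that the C# version zero-pads to 24 hex digits; trimming those leading zeros gives the plain hex form.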
Do it manually! <http://www.permadi.com/tutorial/numDecToHex/>
How can I Convert a Big decimal number to Hex in C# (Eg : 588063595292424954445828)
[ "c#", "decimal", "hex", "type-conversion" ]
I come from the Java world, where you can hide variables and functions and then run unit tests against them using reflection. I have used nested functions to hide implementation details of my classes so that only the public API is visible. I am trying to write unit tests against these nested functions to make sure that I don't break them as I develop. I have tried calling one of the nested functions like: ``` def outer(): def inner(): pass outer.inner() ``` which results in the error message: > AttributeError: 'function' object has no attribute 'inner' Is there a way for me to write unit tests against these nested functions? If not, is there a way to trigger the name munging for function names like you can for class variables by prefixing them with \_\_?
The Python convention is to name "private" functions and methods with a leading underscore. When you see a leading underscore, you know not to try and use it. Remember, [Python is not Java](http://dirtsimple.org/2004/12/python-is-not-java.html).
inner doesn't exist until outer makes it. You should either move inner up to a toplevel function for testability, or have the outer test test all the possible execution paths of itself and inner. Do note that the inner function isn't a simple function, it's a closure. Consider this case: ``` def outer(a): b = compute_something_from(a) def inner(): do_something_with(a, b) ``` That's the standard testability trade-off. If your [cyclomatic complexity](http://en.wikipedia.org/wiki/Cyclomatic_complexity) is too high, your tests will be too numerous.
Running unit tests on nested functions
[ "python", "testing", "closures" ]
I currently have a list view with several rows of data, and a ContextMenuStrip, in C# .NET. What I am having problems with is that when you click on a menu strip item, I want to know which row of the list view is selected.
To get the selected rows, as sindre says, you do it like this:

```
foreach (ListViewItem item in lvFiles.SelectedItems)
{
    // work with each selected row here
}
```

lvFiles is the ListView.
To get the selected item of a list view, try this:

```
int index = 0;
if (this.myListView.SelectedItems.Count > 0)
    index = this.myListView.SelectedIndices[0];
```

This will give you the index of the selected item in the listview. You may also refer to this: <http://www.neowin.net/forum/index.php?showtopic=358458>
Finding the selected item of list view
[ "c#", "listview", "contextmenustrip" ]
I have a row of buttons, which all create a pdf file which I want to open in a new tab. This way the button page stays on top, and the pdf's open to get printed. To prevent clicking a button twice I disable the button, like this (I use python): ``` <input type='submit' value='Factureren' name='submitbutton' id='%s' onclick="javascript:document.getElementById('%s').disabled=true; document.getElementById('%s').className='button_disabled';"> % ((but_id,) *3) ``` In FF3 this works fine, i.e. the form is submitted, the script executed and then the button disables. In IE the button just disables, but the form script isn't executed. Is there a solution to this IE problem?
It is easy: a disabled submit button does not submit a form in IE. Consider restructuring your code:

* Use a regular button, disable it, and call form.submit() from its handler.
* Do not disable the button in its "onclick"; instead, disable it in the form's onsubmit handler.
It is easier to do: ``` <input type='submit' value='Factureren' name='submitbutton' id='%s' onclick="this.disabled=true; this.className='button_disabled';"> % ((but_id,) *3) ``` I don't know if this solves your problem but it is what I would do in a case like this. I think you don't need "javascript:" anyway.
Disabling button with javascript: FF vs IE
[ "javascript", "html", "internet-explorer" ]
How can I read a Chinese text file using C#, my current code can't display the correct characters: ``` try { using (StreamReader sr = new StreamReader(path,System.Text.Encoding.UTF8)) { // This is an arbitrary size for this example. string c = null; while (sr.Peek() >= 0) { c = null; c = sr.ReadLine(); Console.WriteLine(c); } } } catch (Exception e) { Console.WriteLine("The process failed: {0}", e.ToString()); } ```
You need to use the right encoding for the file. Do you know what that encoding is? It might be UTF-16, aka Encoding.Unicode, or possibly something like Big5. Really you should try to find out for sure instead of guessing though. As leppie's answer mentioned, the problem might also be the capabilities of the console. To find out for sure, dump the string's Unicode character values out as numbers. See my [article on debugging unicode issues](http://www.yoda.arachsys.com/csharp/debuggingunicode.html) for more information and a useful method for dumping the contents of a string. I would also avoid using the code you're currently using for reading a file line by line. Instead, use something like: ``` using (StreamReader sr = new StreamReader(path, appropriateEncoding)) { string line; while ( (line = sr.ReadLine()) != null) { // ... } } ``` Calling Peek() requires that the stream is capable of seeking, which may be true for files but not all streams. Also look into [File.ReadAllText](http://msdn.microsoft.com/en-us/library/system.io.file.readalltext.aspx) and [File.ReadAllLines](http://msdn.microsoft.com/en-us/library/system.io.file.readalllines.aspx) if that's what you want to do - they're very handy utility methods.
If it is simplified Chinese, it is usually GB2312; for traditional Chinese it is usually Big5:

```
// gb2312 (codepage 936) :
System.Text.Encoding.GetEncoding(936)
// Big5 (codepage 950) :
System.Text.Encoding.GetEncoding(950)
```
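To see the encoding point outside C#, here is a small Python sketch (the file name and sample text are made up) that writes and reads a GB2312 file with an explicit encoding:

```python
import os
import tempfile

def read_text(path, encoding):
    # The crucial step from both answers: open with the file's actual encoding.
    with open(path, encoding=encoding) as f:
        return f.read()

# Demo: create a simplified-Chinese file encoded as GB2312, then read it back.
path = os.path.join(tempfile.mkdtemp(), "chinese.txt")
with open(path, "w", encoding="gb2312") as f:
    f.write("中文测试")

print(read_text(path, "gb2312"))  # 中文测试
```

Opening the same file with the wrong codec will usually raise a decode error rather than silently showing garbled characters, which is how the mismatch the question describes surfaces in Python.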
How to read a Chinese text file from C#?
[ "c#", "text-files" ]
I wonder if there is something like a standalone version of Visual Studio's "Immediate Window"? Sometimes I just want to test some simple stuff, like `DateTime.Parse("blah")`, to see if that works. But every time, I have to create a new console application, put in my code and test it. The Immediate Window sadly only works when I am debugging something. Could PowerShell do that? Just open a CLI similar to what cmd.exe does, allowing me to execute some C# code?
Linqpad - I use it like this all the time. <http://www.linqpad.net/> Don't be misled by the name - that just describes the original motivation for it, not its functionality. Just recently he released a version with proper statement completion - that's a chargeable add-on (the core tool is free), but a minute amount of money and well worth it, I think.
*C# Interactive* window and *csi.exe* REPL were added to **Visual Studio 2015 Update 1**: > # Introducing Interactive > > The Interactive Window is back! The C# Interactive Window returns in Visual Studio 2015 Update 1 along with a couple other interactive treats: > > * **C# Interactive**. The C# Interactive window is essentially a read-eval-print-loop (REPL) that allows you to play and explore with .NET technologies while taking advantage of editor features like IntelliSense, syntax-coloring, etc. Learn more about how to use C# Interactive on Channel 9 or by reading our beginner’s walkthrough. > * **csi**. If you don’t want to open Visual Studio to play around with C# or run a script file, you can access the interactive engine from the Developer Command Prompt. Type `csi /path/myScript.csx` to execute a script file or type simply `csi` to drop inside the command-line REPL. > * **Scripting APIs**. The Scripting APIs give you the ability to execute snippets of C# code in a host-created execution environment. You can learn more about how to create your own C# script engine by checking out our code samples. See [What’s New in Visual Studio Update 1 for .NET Managed Languages](https://devblogs.microsoft.com/dotnet/whats-new-in-visual-studio-update-1-for-net-managed-languages/). Basically, now you have: * IDE REPL — C# Interactive window in VS * Script interpreter — `csi foo.csx` from Dev Cmd Prompt * Command line REPL — `csi` from Dev Cmd Prompt * Scripting API
C# Console/CLI Interpreter?
[ "c#", ".net", "read-eval-print-loop" ]
I have a control that is basically functioning as a client-side timer countdown control. I want to fire a server-side event when the count down has reached a certain time. Does anyone have an idea how this could be done? So, when timer counts down to 0, a server-side event is fired.
You would probably want to use AJAX to make your server side call.
When you render the page create a client-side button that would do the action you want on postback. Then use [ClientScriptManager.GetPostBackEventReference](http://msdn.microsoft.com/en-us/library/system.web.ui.clientscriptmanager.getpostbackeventreference.aspx) passing in the control as reference and add a client-side event to it using attributes as the example at the bottom of that link shows. You can then see the Javascript it renders and use that in your function to trigger the correct server-side event.
How do you raise a server-side event from javascript?
[ ".net", "asp.net", "javascript", "event-handling" ]
What is [`java.awt.Component.getName()`](http://docs.oracle.com/javase/6/docs/api/java/awt/Component.html#getName%28%29) used for? It always seems to be `null` in the applications I build with NetBeans. I'm thinking of storing some help text per component in it -- I don't want to use the tooltip, I have another panel where I'll show the help text.
Component.setName(..) is used in the JDK mostly by the look and feel implementation classes to set ID-like strings for each component, e.g. BasicOptionPaneUI might call it on a button component to set its name to "OptionPane.button". The getName() is used in toString() methods, when setting the names of child components inside a Composite/parent Component and in AWT and Swing debug logging code. I suspect strongly that the getName() method is also used by some AWT/Swing testing frameworks. So if you're not dependent on any of the above uses of getName(), you might try using it for your help messages, though I would not recommend it. Maybe you should reconsider your design? Use the name to do some lookup in a hashmap that loads the help text from a resource bundle?
I haven't seen it used for anything by the framework. It's useful if you have components being passed in to a method, so you can ask for their name to decide how to handle them. Also, many UI testing frameworks use this to let you refer to the components by name in the testing scripts. I don't see any reason you can't use it for help text, though.
What is java.awt.Component.getName() and setName() used for?
[ "java", "awt" ]
Here's my problem: I have to call a web service with a secure header from a classic ASP page that returns a complex data type. For various reasons concerning a 3rd party tool it has to be classic ASP. We decided that I should create an external dll to do this - which I did (in c#) so it returns a dataset (Something ASP can understand). However now I need to expose that function to the ASP page. Because this is classic ASP I think the only straightforward way to do this is to expose this class library as a COM object. I need to know the down and dirty easiest way to accomplish this task. What do I have to do to my dll? I have never created a COM object before only used. Somebody said my class has to be static and I can't have a constructor. Is this true? Can someone layout the steps for me? HELP! (o: Edit: This specific problem is now solved however as Robert Rossney noted I can't do anything with the DataSet in classic ASP. This has led me to post a second question [here](https://stackoverflow.com/questions/301045/problem-implementing-xmltextwriter-in-new-xmlrecordsetwriter-for-streams) regarding implementing XmlTextWriter - Robert if you see this I think you could really help!
No, that (static/no ctor) isn't true. Quite the opposite, in fact, since COM will need to create an instance! You simply need to make the class COM visible. Mainly, this is just adding some attributes, and registering it as a COM dll (regasm). <http://msdn.microsoft.com/en-us/library/zsfww439.aspx>
Creating a class that returns a DataSet is not so difficult:

```
using System;
using System.Data;
using System.Runtime.InteropServices;

namespace COMTest
{
    [Guid("AC4C4347-27EA-4735-B9F2-CF672B4CBB4A")]
    [ComVisible(true)]
    public interface ICOMTest
    {
        [ComVisible(true)]
        DataSet GetDataSet();
    }

    [Guid("CB733AB1-9DFC-437d-A769-203DD7282A8C")]
    [ProgId("COMTest.COMTest")]
    [ComVisible(true)]
    public class COMTest : ICOMTest
    {
        public DataSet GetDataSet()
        {
            DataSet ds = new DataSet("COMTest");
            return ds;
        }
    }
}
```

You'll need to check the "Register for COM Interop" box in the Project properties, you'll also need to sign the assembly, and you'll need to make sure that the IIS user can access your `bin\Debug` directory. Once you've done this, you can create an instance from ASP just fine:

```
<%
Dim o
Set o = Server.CreateObject("COMTest.COMTest")
Response.Write("Server.CreateObject worked.")
Response.Write("<br/>")

Dim ds
Set ds = o.GetDataSet()
If Not ds is Nothing Then
    Response.Write("o.GetDataSet returned an object. Can we use it?")
    Response.Write("<br/>")
    Response.Write("We have a DataSet, and its DataSetName is: ")
    Response.Write(ds.DataSetName)
End If
%>
```

And here you will be sad. For while the method that returns a DataSet is visible to COM, none of the DataSet's properties or methods are.
Converting a fairly simple C# Class library into a COM object?
[ "c#", "web-services", "com", "asp-classic" ]
I'm looking for a *free* JavaScript obfuscator. Would compression be enough? What tools would you recommend? Of course, I don't need military-style obfuscation, I need a *simple* way to prevent kiddies from stealing my javascript by looking at the source or by using something simple such as unescape(). Thanks, Tom
Your problem is that no matter how much you compress it or hide it, eventually the browser has to interpret it. The best you can do is renaming all variables to meaningless random vars, and removing all comments and whitespace. **A few good tools:** * <http://www.dev411.com/dojo/javascript_compressor/> * <http://javascriptcompressor.com/Default.aspx> * <http://developer.yahoo.com/yui/compressor/>
You can use /packer/ <http://dean.edwards.name/packer/>
Free JavaScript obfuscators?
[ "javascript", "compression", "obfuscation" ]
I am writing a Firefox extension. I would like to search the current webpage for a set of words, and count how many times each occurs. This activity is only performed when the user asks, but it must still happen reasonably quickly. I am currently using indexOf on the BODY tag's innerHTML element, but am finding it too slow to run repeatedly in the following manner: ``` function wordcount(doc, match) { var count = 0; var pos = 0; for(;;) { len=doc.indexOf(match, pos); if(len == -1) { break; } pos = len + match.length; count++; } return count; } var html = content.document.body.innerHTML.toLowerCase() for(var i=0; i<keywords.length; i++) { var kw = keywords[i]; myDump(kw + ": " + wordcount(html, kw)); } ``` With 100 keywords, this takes approximately 10 to 20 seconds to run. There is some scope to reduce the number of keywords, but it will still need to run much quicker. Is there a more obvious way to do this? What is the most efficient method? I have some ideas, but am reluctant to code each up without some idea of the performance I can expect: * Navigate the DOM rather than using innerHTML. Will this be likely quicker or slower? It would have the benefit of only searching textual content. * Loop through the document word by word, accumulating a count of each word's occurence simultaneously. With this method I would have to do a bit more work parsing the HTML. *Edit: Turns out that the slowest part was the myDump function writing to the error console. Duh! Nevertheless, there some interesting more efficient alternatives have been presented, which I am intending to use.*
I'm not sure if it is the fastest but the following worked pretty quickly for me. ``` var words = document.body.innerHTML.replace(/<.*?>/g,'').split(/\s+/); var i = words.length; var keywordCounts = {'keyword': 0, 'javascript': 0, 'today': 0}; var keywords = []; var keywordMatcher = ''; var word; for (word in keywordCounts) { keywords[keywords.length] = word ; keywordMatcher = keywordMatcher + '(' + word + ')?'; } var regex = new RegExp(keywordMatcher); var j = keywords.length; var matched, keyword; if (i && j) { do { i = i - 1; matched = words[i].match(regex); if (!matched) continue; j = keywords.length; do { j = j - 1; if (matched[j + 1]) { keyword = keywords[j]; keywordCounts[keyword] = keywordCounts[keyword] + 1; } } while (j); } while (i); } ``` I'll definitely grant that from a Big(O) perspective it isn't the best because as i and j get big it still requires n squared time but I've found regular expression processing to generally be pretty fast. Basically I'm taking tvanfosson's idea and expanding on it, but rather than traversing the DOM I'm removing the tags with a regex (the first line) and then splitting the page into individual words. The keyword 'hash' is defined on the third line with initial counts (they should all start at zero obviously). From there I a new regular expression is constructed using each keyword as a group so when matched it returns an array of results that has (in my example) [fullMatch,keywordMatch,javascriptMatch,todayMatch]. I'm using decrementing do while loops because they've been shown in lots of places to be the fastest looping structure in JavaScript and since it doesn't matter in what order the words get processed loop speed is really the only consideration. I hope this is helpful, if not it was at least a fun exercise. :)
I could not find hasItem, setItem or getItem in Prototype's Hash as tvanfosson suggested, but I used set and get and wrote a hasItem based on get. However, profiling showed that using Prototype's Hash is slower than using JavaScript's native object. If you have an array of keywords, convert it to a hash object with the keywords as keys and a value of 0:

```
function prepareCount(words) {
    var result = {};
    for (var i=0,len=words.length; i < len; i++) {
        result[words[i]] = 0;
    }
    return result;
}
```

Instead of splitting the string and going through it with a for statement, you can pass a function as a parameter to replace. In the tests I did, this was much faster. In the regexp I chose to match everything but whitespace. You will probably want to add other separators such as parentheses, commas, dots and dashes, or, if you know the text is ASCII only, you can use a-z instead.

```
function countKeywords(text,wordcount) {
    text.replace(/[^\s]+/g,function(s) {
        if (wordcount[s]!==undefined) { ++wordcount[s];}
        return "";
    });
    return wordcount;
}
```

To use it:

```
var wordcount = countKeywords(document.documentElement.textContent.toLowerCase(),prepareCount(["my","key","words"]));
```

**Update:** Use this regexp to exclude all ASCII delimiters except underscore (allows non-ASCII characters):

```
/[^\s\x00-\x2F\x3A-\x40\x5B-\x5E\x60\x7B-\x7F]+/g
```

If you know that your text and keywords are ASCII only, you can instead use: `/[a-z]+/g`
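The "accumulate counts in a single pass" idea that both answers converge on looks like this in Python (a language-neutral sketch for illustration, not Firefox extension code):

```python
import re
from collections import Counter

def keyword_counts(text, keywords):
    # One pass over the text: tokenize once, then look up each keyword in O(1).
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {k: words[k] for k in keywords}

counts = keyword_counts("The cat saw another cat and a dog.", ["cat", "dog", "bird"])
# counts == {"cat": 2, "dog": 1, "bird": 0}
```

However many keywords there are, the text is scanned only once, which is the main win over running a separate indexOf pass per keyword.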
Fastest javascript page search
[ "javascript", "firefox", "search", "performance" ]
I'm astonished that the [Apache Commons Collections](http://commons.apache.org/collections/) project still hasn't got around to making their library generics-aware. I really like the features provided by this library, but the lack of support for generics is a big turn-off. There is a [Larvalabs fork of Commons Collections which does support generics](http://larvalabs.com/collections/index.html), which seems to claim backward compatibility, but when I tried updating to this version, my web application failed to start (in JBoss). My questions are:

* Whether anyone has successfully updated from Commons Collections to the fork mentioned above
* Whether Commons Collections has any plans to add support for generics

BTW, I'm aware of Google collections, but am reluctant to use it until the API stabilises.

Cheers, Don
There are contributions. Check out the [JIRA issues](https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&sorter/order=DESC&sorter/field=priority&resolution=-1&pid=12310465&fixfor=12312131). There is also a [JDK5 branch](http://svn.apache.org/repos/asf/commons/proper/collections/branches/collections_jdk5_branch/). We would like to add generics and update Commons Collections to 1.5 (and 1.6). The biggest problem is how to address backwards compatibility, and people have very different opinions there. For some of the Commons components, the newer JDKs almost ask for a rewrite, IMHO. During ApacheCon I felt the urge across several people to get this moving, though. It's just a big task. Feel free to show up on dev@commons.apache.org cheers, Torsten
Consider [Google Collections](http://code.google.com/p/google-collections/). From their [Javalobby interview](http://www.javalobby.org/articles/google-collections/): > [Google Collections is] built with Java 5 features: generics, enums, covariant return types, etc. When writing Java 5 code, you want a collections library that takes full advantage of the language. In addition, we put enormous effort into making the library complete, robust, and consistent with the JDK collection classes.
Genericized commons collection
[ "java", "collections", "upgrade", "apache-commons", "binary-compatibility" ]
I have a (varchar) field Foo which can only be specified if (bit) Bar is *not* true. I would like the textbox in which Foo is displayed to be *disabled* when Bar is true -- essentially, `FooBox.Enabled = !isBar`. I'm trying to do something like ``` FooBox.DataBindings.Add(new Binding("Enabled", source, "!isBar")); ``` but of course the bang in there throws an exception. I've also tried constructs like "isBar != true" or "isBar <> true", but none work. Am I barking up the wrong tree here?
I tried doing something like this a while ago, and the best I could come up with was either a) changing the source class to also have a NotBar property and binding to that, or b) making a dumb wrapper class around the source that has a NotBar property and binding to that.
As far as I can tell, Databind uses reflection to find the member passed as the 3rd string argument. You cannot pass an expression there, just the member name.
How do I data bind Control.Enabled to !(field)?
[ "c#", "winforms", "data-binding" ]
I am using a specific command in my C# code, which works well. However, it is said to misbehave in "unmanaged" code. What is managed or unmanaged code?
Here is some text from MSDN about [unmanaged code](http://msdn.microsoft.com/en-us/library/0e91td57.aspx). > Some library code needs to call into unmanaged code (for example, native code APIs, such as Win32). Because this means going outside the security perimeter for managed code, due caution is required. Here is some complementary explanation about managed code: * Code that is executed by the CLR. * Code that targets the common language runtime, the foundation of the .NET Framework, is known as managed code. * Managed code supplies the metadata necessary for the CLR to provide services such as memory management, cross-language integration, code access security, and automatic lifetime control of objects. All code based on IL executes as managed code. * Code that executes under the CLI execution environment. For your problem: I think it is because NUnit executes your code for unit testing and may have some part that is unmanaged. But I am not sure about it, so do not take this as certain. I am sure someone will be able to give you more information about it. Hope it helps!
[This](http://www.developer.com/net/cplus/article.php/2197621/Managed-Unmanaged-Native-What-Kind-of-Code-Is-This.htm) is a good article about the subject. To summarize,

1. **Managed code** is not compiled to machine code but to an intermediate language which is interpreted and executed by some service on a machine and is therefore operating within a (hopefully!) secure framework which handles dangerous things like memory and threads for you. In modern usage this frequently means .NET but does not have to.

> An application program that is executed within a runtime engine
> installed in the same machine. The application cannot run without it.
> The runtime environment provides the general library of software
> routines that the program uses and typically performs memory
> management. It may also provide just-in-time (JIT) conversion from
> source code to executable code or from an intermediate language to
> executable code. Java, Visual Basic and .NET's Common Language Runtime
> (CLR) are examples of runtime engines. ([Read more](https://web.archive.org/web/20080804071505/http://www.pcmag.com/encyclopedia_term/0,2542,t=managed+code&i=46560,00.asp))

2. **Unmanaged code** is compiled to machine code and therefore executed by the OS directly. It therefore has the ability to do damaging/powerful things Managed code does not. This is how everything used to work, so typically it's associated with old stuff like .dlls.

> An executable program that runs by itself. Launched from the operating
> system, the program calls upon and uses the software routines in the
> operating system, but does not require another software system to be
> used. Assembly language programs that have been assembled into machine
> language and C/C++ programs compiled into machine language for a
> particular platform are examples of unmanaged code. ([Read more](https://web.archive.org/web/20100227034155/http://www.pcmag.com/encyclopedia_term/0,2542,t=unmanaged+code&i=53477,00.asp))

3. **Native** code is often synonymous with unmanaged code, but it is not identical.
What is managed or unmanaged code in programming?
[ "", "c#", ".net", "unmanaged", "definition", "managed", "" ]
I'm working on something that requires traversing through the file system and for any given path, I need to know how 'deep' I am in the folder structure. Here's what I'm currently using: ``` int folderDepth = 0; string tmpPath = startPath; while (Directory.GetParent(tmpPath) != null) { folderDepth++; tmpPath = Directory.GetParent(tmpPath).FullName; } return folderDepth; ``` This works but I suspect there's a better/faster way? Much obliged for any feedback.
Off the top of my head: ``` Directory.GetFullPath().Split("\\").Length; ```
I'm more than late on this but I wanted to point out Paul Sonier's answer is probably the shortest but should be: ``` Path.GetFullPath(tmpPath).Split(Path.DirectorySeparatorChar).Length; ```
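For readers coming from other languages: the normalize-then-split-and-count idea above is language-agnostic. Here is a sketch of the same approach in Python (illustrative only; `folder_depth` is a made-up helper name, and the exact count at the filesystem root differs from the C# one-liners, which include the drive/root component):

```python
import os.path


def folder_depth(path: str) -> int:
    """Count non-empty path components after normalizing,
    mirroring the Split(Path.DirectorySeparatorChar).Length idea."""
    normalized = os.path.normpath(os.path.abspath(path))
    # Filter out the empty component a leading separator produces.
    return len([part for part in normalized.split(os.sep) if part])
```

Normalizing first (which `Path.GetFullPath` also does) means `..` segments don't inflate the count.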
C# Best way to get folder depth for a given path?
[ "", "c#", ".net", "directory", "" ]
I have created a windows installer for a windows forms app as an MSI. I have published this and put it in a zip file and sent it to the client. When they try to run the installer they get the message 'The publisher could not be verified. Are you sure you want to run this software?' Is there a setting or something I need to do to stop this message appearing when the client clicks on the installer? Cheers
I have spoken to some of the guys here, and someone had used Orca to edit some of the MSI content. Apparently the installer was fine before that edit was made.
Is this a certificate issue? I haven't had to do this with msi (I usually use ClickOnce, which makes this very easy), but a quick search shows things like [this](http://www.advancedinstaller.com/digital-signatures.html) or on MSDN [here](http://msdn.microsoft.com/en-us/library/aa371854(VS.85).aspx). Note that your certificate would have to be in the clients trust chain - i.e. indirectly from someone "pucka".
MSI produces question for installers
[ "", "c#", "winforms", "windows-installer", "deployment", "" ]
What is the difference between using `#include <filename>` and `#include <filename.h>` in [C++](http://en.wikipedia.org/wiki/C%2B%2B)? Which of the two is used, and why is it used?
C++-only include files, those not inherited from the C standard, never used `filename.h`. Since the very first C++ Standard came out (1998) they have used `filename` for their own headers. Headers inherited from the C Standard became `cfilename` instead of `filename.h`. The inherited C headers in their `filename.h` form are deprecated, but still part of the C++ standard. The difference is that names not defined as macros in C are found within namespace `std::` in `cfilename` in C++, while names in `filename.h` are within the global namespace scope. So you will find `::size_t` in stddef.h, and `std::size_t` in cstddef. Both are Standard C++, but use of ::size\_t is deprecated (See Annex D of the C++ Standard). So those are the differences.

### Why would you use `filename.h`?

* Compatibility with C compilers
* Compatibility with very old C++ compilers

### Why should you use `cfilename`?

* Names are within namespace `std::`. No name clashes anymore.
* New C++ features (e.g. overloaded math functions for float, long)
* C compatibility headers (`filename.h`) could disappear in the future.
The `#include <foo.h>` was common in C++ code prior to the C++ standard. The standard changed it to `#include <foo>` with everything from the header placed in the `std` namespace. (Thanks to litb for pointing out that the standard has never allowed .h headers.) There is no magic going on, the first looks for a file called 'foo.h' and the second for a file called 'foo'. They are two different files in the file system. The standard just changed the name of the file that should be included. In most compilers the old headers are still there for backwards compatibility (and compatibility with C), but modern C++ programs that want to follow the standard should not use them. In the case of standard C headers, the C++ versions have a c at the beginning, so the C header ``` #include <stdio.h> ``` becomes ``` #include <cstdio> ```
Difference between using #include<filename> and #include<filename.h> in C++
[ "", "c++", "include", "namespaces", "" ]
I myself am convinced that in a project I'm working on signed integers are the best choice in the majority of cases, even though the value contained within can never be negative. (Simpler reverse for loops, less chance for bugs, etc., in particular for integers which can only hold values between 0 and, say, 20, anyway.) The majority of the places where this goes wrong is a simple iteration over a std::vector; often it used to be an array in the past and was changed to a std::vector later. So these loops generally look like this:

```
for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }
```

Because this pattern is used so often, the amount of compiler warning spam about this comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT\_MAX elements, and note that until now we used two ways to fix the compiler warning:

```
for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }
```

This usually works but might silently break if the loop contains any code like 'if (i-1 >= 0) ...', etc.

```
for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }
```

This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)
So I came up with the following idea: ``` template <typename T> struct vector : public std::vector<T> { typedef std::vector<T> base; int size() const { return base::size(); } int max_size() const { return base::max_size(); } int capacity() const { return base::capacity(); } vector() : base() {} vector(int n) : base(n) {} vector(int n, const T& t) : base(n, t) {} vector(const base& other) : base(other) {} }; template <typename Key, typename Data> struct map : public std::map<Key, Data> { typedef std::map<Key, Data> base; typedef typename base::key_compare key_compare; int size() const { return base::size(); } int max_size() const { return base::max_size(); } int erase(const Key& k) { return base::erase(k); } int count(const Key& k) { return base::count(k); } map() : base() {} map(const key_compare& comp) : base(comp) {} template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {} template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {} map(const base& other) : base(other) {} }; // TODO: similar code for other container types ``` What you see is basically the STL classes with the methods which return size\_type overridden to return just 'int'. The constructors are needed because these aren't inherited. **What would you think of this as a developer, if you'd see a solution like this in an existing codebase?** Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice simple solution to prevent bugs and increase readability. Or maybe you'd rather see we had spent (half) a day or so on changing all these loops to use std::vector<>::iterator? (In particular if this solution was combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)
## I made this community wiki... Please edit it.

I don't agree with the advice against "int" anymore. I now see it as not bad. Yes, I agree with Richard. You should never use `'int'` as the counting variable in a loop like those. The following is how you might want to do various loops using indices (although there is little reason to; occasionally this can be useful).

## Forward

```
for(std::vector<int>::size_type i = 0; i < someVector.size(); i++) { /* ... */ }
```

## Backward

You can do this, which is perfectly defined behavior:

```
for(std::vector<int>::size_type i = someVector.size() - 1; i != (std::vector<int>::size_type) -1; i--) { /* ... */ }
```

Soon, with C++1x (the next C++ version) coming along nicely, you can do it like this:

```
for(auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) { /* ... */ }
```

Decrementing below 0 will cause i to wrap around, because it is unsigned.

## But unsigned will make bugs slip in

That should never be an argument for doing it the wrong way (using `'int'`).

## Why not use std::size\_t above?

The C++ Standard defines, in `23.1 p5 Container Requirements`, that `T::size_type`, for `T` being some `Container`, is some implementation-defined unsigned integral type. Now, using `std::size_t` for `i` above would let bugs slip in silently. If `T::size_type` is smaller or larger than `std::size_t`, then it would overflow `i`, or never even reach `(std::size_t)-1` if `someVector.size() == 0`. Likewise, the condition of the loop would be broken completely.
Don't derive publicly from STL containers. They have nonvirtual destructors which invokes undefined behaviour if anyone deletes one of your objects through a pointer-to base. If you must derive e.g. from a vector, do it privately and expose the parts you need to expose with `using` declarations. Here, I'd just use a `size_t` as the loop variable. It's simple and readable. The poster who commented that using an `int` index exposes you as a n00b is correct. However, using an iterator to loop over a vector exposes you as a slightly more experienced n00b - one who doesn't realize that the subscript operator for vector is constant time. (`vector<T>::size_type` is accurate, but needlessly verbose IMO).
acceptable fix for majority of signed/unsigned warnings?
[ "", "c++", "stl", "coding-style", "unsigned", "" ]
I have a custom attribute which can be assigned to a class, `[FooAttribute]`. What I would like to do, from within the attribute, is determine which type has actually used me. e.g. If I have:

```
[FooAttribute]
public class Bar
{
}
```

In the code for FooAttribute, how can I determine it was the Bar class that added me? I'm not specifically looking for the Bar type, I just want to set a friendly name using reflection. e.g.

```
[FooAttribute(Name="MyFriendlyNameForThisClass")]
public class Bar
{
}

public class FooAttribute : Attribute
{
    public FooAttribute()
    {
        // How do I get the target type's name? (as a default)
    }
}
```
First off, you might consider the existing `[DisplayName]` for keeping friendly names. As has already been covered, you simply can't get this information inside the attribute. You can look up the attribute from Bar, but in general, the only way to do it from the attribute would be to pass the type *into* the attribute - i.e.

```
[Foo("Some name", typeof(Bar))]
```

What exactly is it you want to do? There may be other options... Note that for i18n, resx, etc; you can subclass `DisplayNameAttribute` and provide lookup from keys by overriding the `DisplayName` getter.
To elaborate: an attribute, built in or custom, is just metadata for a class or class member, and the attribute itself has no notion of being associated with anything.

* The type knows of its own metadata
* The metadata (in this case, the attribute) does not know to whom it belongs
How to determine the attached type from within a custom attribute?
[ "", "c#", ".net", "reflection", "custom-attributes", "" ]
I need to set the fetch mode on my hibernate mappings to be eager in some cases, and lazy in others. I have my default (set through the hbm file) as lazy="true". How do I override this setting in code? MyClass has a set defined of type MyClass2 for which I want to set the FetchMode to EAGER. Currently, I have something like: ``` Session s = HibernateUtil.getSessionFactory().openSession(); MyClass c = (MyClass)session.get(MyClass.class, myClassID); ```
You could try something like this: (code off the top of my head) ``` Criteria crit = session.createCriteria(MyClass.class); crit.add(Restrictions.eq("id", myClassId)); crit.setFetchMode("myProperty", FetchMode.EAGER); MyClass myThingy = (MyClass)crit.uniqueResult(); ``` I believe that FetchMode.JOIN or FetchMode.SELECT should be used instead of FetchMode.EAGER, though.
If you're not using Criteria there's also the `JOIN FETCH` keyword that will eagerly load the association specified by the join. ``` session.createQuery("select p from Parent p join fetch p.children c") ```
Setting FetchMode in native Hibernate
[ "", "java", "hibernate", "" ]
I wrote PHP code like this:

```
$site="http://www.google.com";
$content = file_get_contents($site);
echo $content;
```

But when I remove "http://" from `$site` I get the following warning:

> Warning: file\_get\_contents(www.google.com) [function.file-get-contents]: failed to open stream:

I tried `try` and `catch`, but it didn't work.
Step 1: check the return code: `if($content === FALSE) { // handle error here... }` Step 2: suppress the warning by putting an [error control operator](http://php.net/manual/en/language.operators.errorcontrol.php) (i.e. `@`) in front of the call to *file\_get\_contents()*: `$content = @file_get_contents($site);`
You can also [set your error handler](https://www.php.net/set_error_handler) as an [anonymous function](http://php.net/manual/en/functions.anonymous.php) that calls an [Exception](http://us.php.net/manual/en/class.errorexception.php) and use a try / catch on that exception. ``` set_error_handler( function ($severity, $message, $file, $line) { throw new ErrorException($message, $severity, $severity, $file, $line); } ); try { file_get_contents('www.google.com'); } catch (Exception $e) { echo $e->getMessage(); } restore_error_handler(); ``` Seems like a lot of code to catch one little error, but if you're using exceptions throughout your app, you would only need to do this once, way at the top (in an included config file, for instance), and it will convert all your errors to Exceptions throughout.
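As an aside, the "promote warnings to catchable exceptions" idea above exists in other languages too; for instance, Python's `warnings` module plays the role of `set_error_handler` + `ErrorException`. A sketch (illustrative only; `fetch` here is a hypothetical stand-in for a call that warns on failure instead of raising):

```python
import warnings


def fetch(url: str) -> str:
    # Hypothetical stand-in: emits a warning on failure instead of raising,
    # much like file_get_contents() returning FALSE plus a warning.
    warnings.warn(f"failed to open stream: {url}", RuntimeWarning)
    return ""


with warnings.catch_warnings():
    warnings.simplefilter("error")  # promote all warnings to exceptions
    try:
        fetch("www.google.com")
        caught = False
    except RuntimeWarning as exc:
        caught = "failed to open stream" in str(exc)
```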
How can I handle the warning of file_get_contents() function in PHP?
[ "", "php", "function", "exception", "warnings", "file-get-contents", "" ]
We are using Google Charts to render the chart images, but 500 requests/day became not enough. What PHP classes/frameworks are worth looking at?
[JpGraph](http://jpgraph.net/) and [GraPHPite](http://graphpite.sourceforge.net/) are the most developed open-source packages available. I recommend GraPHPite in particular, it has a somewhat better API and produces better looking graphs. **Edit:** I recently developed a small [graphing library](http://www.binpress.com/app/analytics-and-graphing-with-php-sql/851) for fetching database data, process it with PHP and create Google Analytics style interactive graphs from it (uses [flot](http://code.google.com/p/flot/) to render the graphs on the client side). It's basically a complete server/client solution to turning your database tables into graphs.
The Google Chart API no longer has a 500 request/day limit. I recall this changed a couple of months ago.
Rendering eye-candy bar charts with PHP: tools / best practices
[ "", "php", "charts", "" ]
What is the 'correct' way to store a native pointer inside a Java object? I could treat the pointer as a Java `int`, if I happen to know that native pointers are <= 32 bits in size, or a Java `long` if I happen to know that native pointers are <= 64 bits in size. But is there a better or cleaner way to do this? **Edit**: Returning a native pointer from a JNI function is exactly what I *don't* want to do. I would rather return a Java object that represents the native resource. However, the Java object that I return must presumably have a field containing a pointer, which brings me back to the original question. Or, alternatively, is there some better way for a JNI function to return a reference to a native resource?
IIRC, both `java.util.zip` and `java.nio` just use `long`.
`java.nio.DirectByteBuffer` does what you want. Internally it uses a `private long address` to store the pointer value. *Ta-dah!* Use the JNI function `env->NewDirectByteBuffer((void*) data, sizeof(MyNativeStruct))` to create a DirectByteBuffer on the C/C++ side and return it to the Java side as a ByteBuffer. **Note:** It's your job to free this data on the native side! It misses the automatic Cleaner available on a standard direct buffer. On the Java side, you can create a DirectByteBuffer this way:

```
ByteBuffer directBuff = ByteBuffer.allocateDirect(sizeInBytes);
```

Think of it as a sort of C `malloc(sizeInBytes)`. **Note:** This one has an automatic Cleaner, which deallocates the memory previously requested. But there are some points to consider about using DirectByteBuffer:

* It can be garbage collected (GC'd) if you lose your direct ByteBuffer reference.
* You can read/write values to the pointed-to structure, but beware of both offset and data size. The compiler may add extra padding and break your assumed internal offsets in the structure. Structures containing pointers (is the stride 4 or 8 bytes?) also complicate your layout.
* Direct ByteBuffers are very easy to pass as a parameter to native methods, as well as to get back as a return value.
* You must cast to the correct pointer type on the JNI side. The default type returned by `env->GetDirectBufferAddress(buffer)` is `void*`.
* You are unable to change the pointer value once created.
* It's your job to free the memory previously allocated for buffers on the native side, i.e. the ones you created with `env->NewDirectByteBuffer()`.
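To see the shape of the idea outside Java: the "wrap a raw address in a managed object" pattern looks much the same anywhere. A Python/ctypes sketch (illustrative only; `NativeBuffer` is a made-up class, and here the allocation is freed by the Python object's lifetime rather than manually as in the JNI case):

```python
import ctypes


class NativeBuffer:
    """Wraps a raw allocation; stores the address as a plain integer,
    the same way DirectByteBuffer keeps a private long address."""

    def __init__(self, size: int):
        self._buf = ctypes.create_string_buffer(size)  # keeps memory alive
        self.address = ctypes.addressof(self._buf)     # the "pointer" field
        self.size = size

    def write_byte(self, offset: int, value: int) -> None:
        if not 0 <= offset < self.size:
            raise IndexError(offset)
        ctypes.memset(self.address + offset, value, 1)

    def read_byte(self, offset: int) -> int:
        return ctypes.string_at(self.address + offset, 1)[0]
```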
What is the 'correct' way to store a native pointer inside a Java object?
[ "", "java", "java-native-interface", "" ]
If I have a method with a parameter that's an interface, what's the fastest way to see if the interface's reference is of a specific generic type? More specifically, if I have:

```
interface IVehicle{}

class Car<T> : IVehicle {}

CheckType(IVehicle param)
{
    // How do I check that param is Car<int>?
}
```

I'm also going to have to cast after the check. So if there is a way to kill 2 birds with one stone on this one, let me know.
To check if param is a `Car<int>` you can use "is" and "as" as normal: ``` CheckType(IVehicle param) { Car<int> car = param as Car<int>; if (car != null) { ... } } ```
Or, you can just do: ``` if(param is Car<int>) { // Hey, I'm a Car<int>! } ```
Whats a fast way to check that reference is a specific generic type?
[ "", "c#", "reflection", "" ]
In an HTML page we use the head tag to add references to our external .js files. We can also include script tags in the body. But how do we include our external .js file in a web user control? After a little googling I got this. It works, but is this the only way?

```
ScriptManager.RegisterStartupScript(this.Page, Page.GetType(), "MyUniquekey", @"<script src=""myJsFile.js"" type=""text/javascript""></script>", false);
```

-- Zuhaib
You can also use ``` Page.ClientScript.RegisterClientScriptInclude("key", "path/to/script.js"); ``` That's the way I always do it anyway
> Yes this works too .. but why does all > the script gets dumped in the body and > not in the head?? There's a potential workaround for that [here](http://www.codeproject.com/KB/aspnet/scriptregister.aspx)
External JS file in web user control?
[ "", "asp.net", "javascript", "webusercontrol", "" ]
I am extremely new to python, having started to learn it less than a month ago, but experienced with some other programming languages (primarily C# and SQL). But now that Python 3.0 has been released and is not backwards compatible, what would be the advantages and disadvantages of deciding to focus on Python 3.0 or Python 2.6?
I would say begin with 2.6 since the vast, vast majority of documentation regarding Python will be applicable to 2.6 as well most open source projects you may want to contribute to will be in 2.6 for awhile. Then, once you have a good foundation in 2.6, you can learn 3.0. That way you can kind of appreciate how the language has evolved and where the "aesthetic" of the code comes from.
Go with 2.6 since that's what most libraries(pygame, wxpython, django, etc) target. The differences in 3.0 aren't that huge, so transitioning to it later shouldn't be much of a problem.
Python Version for a Newbie
[ "", "python", "python-3.x", "" ]
Here's my problem. I have 2 XML files with identical structure, with the second containing only a few nodes compared to the first.

**File1**

```
<root>
    <alpha>111</alpha>
    <beta>22</beta>
    <gamma></gamma>
    <delta></delta>
</root>
```

**File2**

```
<root>
    <beta>XX</beta>
    <delta>XX</delta>
</root>
```

This is what the result should look like:

```
<root>
    <alpha>111</alpha>
    <beta>22</beta>
    <gamma></gamma>
    <delta>XX</delta>
</root>
```

Basically, if the content of any node in File1 is blank then it should read the value from File2 (if it exists, that is). I did try my luck with the Microsoft XmlDiff API but it didn't work out for me (the patch process didn't apply changes to the source doc). Also, I'm a bit worried about the DOM approach it uses, because of the size of the XML I'll be dealing with. Can you please suggest a good way of doing this? I'm using C# 2.0.
Here is a slightly simpler and more efficient solution than the one proposed by Alastair (see my comment on his solution). This transformation:

```
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" > <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:variable name="vFile2" select="document('File2.xml')"/> <xsl:template match="*"> <xsl:copy> <xsl:copy-of select="@*"/> <xsl:apply-templates/> </xsl:copy> </xsl:template> <xsl:template match="*[not(text())]"> <xsl:copy> <xsl:copy-of select="$vFile2/*/*[name() = name(current())]/text()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> 
```

when applied to this XML document:

```
<root> <alpha>111</alpha> <beta>22</beta> <gamma></gamma> <delta></delta> </root> 
```

produces the wanted result:

```
<root> <alpha>111</alpha> <beta>22</beta> <gamma></gamma> <delta>XX</delta> </root> 
```
In XSLT you can use the `document()` function to retrieve nodes from File2 if you encounter an empty node in File1. Something like: ``` <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:template match="root/*[.='']"> <xsl:variable name="file2node"> <xsl:copy-of select="document('File2.xml')/root/*[name()=name(current())]"/> </xsl:variable> <xsl:choose> <xsl:when test="$file2node != ''"> <xsl:copy-of select="$file2node"/> </xsl:when> <xsl:otherwise> <xsl:copy/> </xsl:otherwise> </xsl:choose> </xsl:template> <xsl:template match="*"> <xsl:copy> <xsl:copy-of select="@*"/> <xsl:apply-templates/> </xsl:copy> </xsl:template> </xsl:stylesheet> ```
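If XSLT feels heavyweight, the same merge is a short procedural loop over the DOM. Here is a sketch using Python's ElementTree for brevity (illustrative only; the C# XmlDocument version follows the same shape, and this assumes the flat, one-level structure of the example files):

```python
import xml.etree.ElementTree as ET

FILE1 = "<root><alpha>111</alpha><beta>22</beta><gamma></gamma><delta></delta></root>"
FILE2 = "<root><beta>XX</beta><delta>XX</delta></root>"


def merge(file1_xml: str, file2_xml: str) -> str:
    """Fill empty child nodes of file1 from same-named nodes in file2."""
    root1 = ET.fromstring(file1_xml)
    root2 = ET.fromstring(file2_xml)
    fallback = {child.tag: child.text for child in root2}
    for child in root1:
        if not child.text and child.tag in fallback:
            child.text = fallback[child.tag]
    return ET.tostring(root1, encoding="unicode")
```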
Comparing 2 XML docs and applying the changes to source document
[ "", "c#", "xml", "xslt", "merge", "" ]
I'm working on a site with multiple subdomains, some of which should get their own session. I think I've got it worked out, but have noticed something about cookie handling that I don't understand. I don't see anything in the docs that explains it, so thought I would see if anyone here has some light to shed on the question. If I just do: ``` session_start(); ``` I end up with a session cookie like this: subdomain.example.net However, if I make any attempt to set the cookie domain myself, either like ``` ini_set('session.cookie_domain', 'subdomain.example.net'); ``` or like ``` session_set_cookie_params( 0, "/", "subdomain.example.net", false, false); ``` I end up with a cookie for .subdomain.example.net (note the opening dot), which I believe means "match all subdomains (or in this case sub-subdomains). This seems to happen with all my cookies actually, not just session. If I set the cookie domain myself, it automatically has the dot prepended, meaning this domain and all subs of it. If I don't set the domain, then it gets it right by using only the current domain. Any idea what causes this, and what I can do to control that prepending dot? Thanks!
PHP's cookie functions automatically prefix the $domain with a dot. If you don't want this behavior you could use the [header](https://www.php.net/header) function. For example: ``` header("Set-Cookie: cookiename=cookievalue; expires=Tue, 06-Jan-2009 23:39:49 GMT; path=/; domain=subdomain.example.net"); ```
If you run your PHP script under "<http://subdomain.example.net>", **don't use the domain parameter**: ``` setcookie('cookiename','cookievalue',time()+(3600*24),'/'); ``` You will get a cookie with "subdomain.example.net" (and not ".subdomain.example.net")
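The same "omit the domain attribute to get a host-only cookie" behavior can be seen with Python's `http.cookies`, which, like the raw-header approach, only emits what you explicitly set (a sketch for illustration; attribute casing is library-specific):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
# No ["domain"] set: the header carries no Domain attribute at all,
# so the browser treats it as a host-only cookie (no leading dot).
header = cookie.output(header="Set-Cookie:")
```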
PHP: Cookie domain / subdomain control
[ "", "php", "cookies", "" ]
I'm completely new at C# and NUnit. In Boost.Test there is a family of `BOOST_*_THROW` macros. In Python's test module there is the `TestCase.assertRaises` method. As far as I understand it, in C# with NUnit (2.4.8) the only method of doing exception tests is to use `ExpectedExceptionAttribute`. Why should I prefer `ExpectedExceptionAttribute` over - let's say - Boost.Test's approach? What reasoning stands behind this design decision? Why is that better in the case of C# and NUnit? Finally, if I decide to use `ExpectedExceptionAttribute`, how can I do some additional tests after the exception was raised and caught? Let's say that I want to test the requirement saying that an object has to be valid after some setter raised `System.IndexOutOfRangeException`. How would you fix the following code to compile and work as expected?

```
[Test]
public void TestSetterException()
{
    Sth.SomeClass obj = new SomeClass();
    // Following statement won't compile.
    Assert.Raises( "System.IndexOutOfRangeException",
                   obj.SetValueAt( -1, "foo" ) );
    Assert.IsTrue( obj.IsValid() );
}
```

---

Edit: Thanks for your answers. Today, I've found an *It's the Tests* [blog entry](http://nunit.com/blogs/?p=63) where all three methods described by you are mentioned (and one more minor variation). It's a shame that I couldn't find it before :-(.
I'm surprised I haven't seen this pattern mentioned yet. David Arno's is very similar, but I prefer the simplicity of this: ``` try { obj.SetValueAt(-1, "foo"); Assert.Fail("Expected exception"); } catch (IndexOutOfRangeException) { // Expected } Assert.IsTrue(obj.IsValid()); ```
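This try/fail/catch pattern is exactly what Python's `assertRaises` (which the question mentions) packages up. A minimal standalone version, with a hypothetical `SomeClass` standing in for the question's `Sth.SomeClass`, shows how the post-exception assertion on validity still runs:

```python
def assert_raises(exc_type, fn, *args, **kwargs):
    """Run fn; fail unless it raises exc_type. Return the exception
    so follow-up assertions (e.g. on validity) can still run."""
    try:
        fn(*args, **kwargs)
    except exc_type as exc:
        return exc
    raise AssertionError(f"expected {exc_type.__name__}")


class SomeClass:
    # Hypothetical stand-in for the question's Sth.SomeClass.
    def __init__(self):
        self.values = ["a", "b"]

    def set_value_at(self, index, value):
        if index < 0:
            raise IndexError(index)
        self.values[index] = value

    def is_valid(self):
        return len(self.values) == 2
```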
If you can use NUnit 2.5, there are some nice [helpers](http://nunit.com/blogs/?p=63) there.

```
Assert.That( delegate { ... }, Throws.Exception<ArgumentException>()) ```
Is NUnit's ExpectedExceptionAttribute only way to test if something raises an exception?
[ "", "c#", "exception", "nunit", "" ]
Is it possible to write a PL/SQL query to identify a complete list of a stored procedure's dependencies? I'm only interested in identifying other stored procedures, and I'd prefer not to limit the depth of nesting it goes to, either. For example, if A calls B, which calls C, which calls D, I'd want B, C and D reported as dependencies of A.
On [this page](http://www.oracle.com/technology/oramag/code/tips2004/091304.html), you will find the following query which uses the [PUBLIC\_DEPENDENCY](http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/statviews_5132.htm#REFRN29106) dictionary table: ``` SELECT lvl , u.object_id , u.object_type , LPAD (' ', lvl) || object_name obj FROM ( SELECT LEVEL lvl, object_id FROM SYS.public_dependency s START WITH s.object_id = ( SELECT object_id FROM user_objects WHERE object_name = UPPER ('&OBJECT_NAME') AND object_type = UPPER ('&OBJECT_TYPE')) CONNECT BY s.object_id = PRIOR referenced_object_id GROUP BY LEVEL, object_id) tree , user_objects u WHERE tree.object_id = u.object_id ORDER BY lvl / ```
I agree with EddieAwad. It's valuable to point out that Oracle only tracks dependencies down to the object level. If you have your stored procedures in a package, you can only track the dependencies of the package, not of the individual functions/procedures within the package. If you're looking to track intra-package dependencies then you'll need a PL/SQL parser.
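Conceptually, the `CONNECT BY` query above is just computing the transitive closure of the (object → referenced object) edge table. Here is the same walk sketched in Python, with a toy edge set standing in for `PUBLIC_DEPENDENCY` (illustrative only):

```python
from collections import deque


def transitive_dependencies(edges, start):
    """edges: dict mapping an object to the set of objects it directly
    references. Returns everything reachable from start, so for
    A -> B -> C -> D it reports {B, C, D} as A's dependencies."""
    seen = set()
    queue = deque(edges.get(start, ()))
    while queue:
        obj = queue.popleft()
        if obj in seen:
            continue
        seen.add(obj)
        queue.extend(edges.get(obj, ()))
    return seen
```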
How do you programatically identify a stored procedure's dependencies?
[ "", "sql", "oracle", "stored-procedures", "plsql", "oracle10g", "" ]
I am sure I'm making a silly mistake, but I can't figure out what: In SQL Server 2005 I am trying to select all customers except those who have made a reservation before 2 AM. When I run this query:

```
SELECT idCustomer FROM reservations
WHERE idCustomer NOT IN 
  (SELECT distinct idCustomer FROM reservations
   WHERE DATEPART ( hour, insertDate) < 2) 
```

I get 0 results. But

```
SELECT idCustomer FROM reservations 
```

returns 152,000 results, and the "NOT IN" part:

```
SELECT distinct idCustomer FROM reservations
WHERE DATEPART ( hour, insertDate) < 2 
```

returns only 284 rows
``` SELECT distinct idCustomer FROM reservations WHERE DATEPART ( hour, insertDate) < 2 and idCustomer is not null ``` Make sure your list parameter does not contain null values. Here's an explanation: ``` WHERE field1 NOT IN (1, 2, 3, null) ``` is the same as: ``` WHERE NOT (field1 = 1 OR field1 = 2 OR field1 = 3 OR field1 = null) ``` * That last comparision evaluates to null. * That null is OR'd with the rest of the boolean expression, yielding null. (\*) * null is negated, yielding null. * null is not true - the where clause only keeps true rows, so all rows are filtered. (\*) Edit: this explanation is pretty good, but I wish to address one thing to stave off future nit-picking. (TRUE OR NULL) would evaluate to TRUE. This is relevant if field1 = 3, for example. That TRUE value would be negated to FALSE and the row would be filtered.
It's always dangerous to have `NULL` in the `IN` list - it often behaves as expected for the `IN` but not for the `NOT IN`: ``` IF 1 NOT IN (1, 2, 3, NULL) PRINT '1 NOT IN (1, 2, 3, NULL)' IF 1 NOT IN (2, 3, NULL) PRINT '1 NOT IN (2, 3, NULL)' IF 1 NOT IN (2, 3) PRINT '1 NOT IN (2, 3)' -- Prints IF 1 IN (1, 2, 3, NULL) PRINT '1 IN (1, 2, 3, NULL)' -- Prints IF 1 IN (2, 3, NULL) PRINT '1 IN (2, 3, NULL)' IF 1 IN (2, 3) PRINT '1 IN (2, 3)' ```
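The three-valued evaluation described above can be sketched by treating Python's `None` as SQL `NULL` (a simplification covering only the operators used here, for illustration):

```python
def sql_eq(a, b):
    # Comparison with NULL yields NULL (unknown).
    return None if a is None or b is None else a == b


def sql_not(x):
    # NOT NULL is still NULL.
    return None if x is None else not x


def sql_any_eq(value, in_list):
    """value IN (list): TRUE if any item matches, NULL if nothing
    matches but a NULL is present, else FALSE."""
    results = [sql_eq(value, item) for item in in_list]
    if True in results:
        return True
    return None if None in results else False
```

Since a `WHERE` clause keeps only rows evaluating to TRUE, `NOT IN` over a list containing `NULL` filters every row.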
SQL query question: SELECT ... NOT IN
[ "", "sql", "sql-server", "" ]
**Duplicate of:** [Use javascript to inject script references as needed?](https://stackoverflow.com/questions/203113/use-javascript-to-inject-script-references-as-needed)

Javascript doesn't have any directive to "include" or "import" another js file. This means that if `script1.js` uses functions/objects defined in `script2.js`, then every html page that includes script1.js must include script2.js before it. This shouldn't be a big problem if you only have 2 js files in like 10 html pages. I mean, it's manageable then! But say suddenly you change `script1.js` and improve it by using functions/objects defined in a new file, `script3.js`. The problem is, you can't just tell `script1.js` to include `script3.js`; instead, you have to remember every html file that included `script1.js` and update it to include `script3.js` as well! This seems like a rather stupid way of organizing code. Are there recommended strategies or practices to deal with this issue? Would it be acceptable to have a gigantic js file that holds all the functionality that you use across the website?
Use dynamic includes: [Use javascript to inject script references as needed?](https://stackoverflow.com/questions/203113/use-javascript-to-inject-script-references-as-needed)
Scriptaculous (and probably other frameworks) handle this by writing script tags for the included files to the document when they are loaded. Below is the relevant bit from the scriptaculous.js file that allows loading the other files in the framework. ``` var Scriptaculous = { Version: '1.8.2', require: function(libraryName) { // inserting via DOM fails in Safari 2.0, so brute force approach document.write('<script type="text/javascript" src="'+libraryName+'"><\/script>'); }, ... ```
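One way to stop hand-maintaining include order across every HTML page is to declare each script's dependencies once and compute the load order from the graph. A sketch of that idea in Python (illustrative only; in the browser the resulting order would drive the document.write/DOM-injection loaders shown above):

```python
def load_order(deps, entry):
    """deps: script -> list of scripts it needs loaded first.
    Returns a duplicate-free load order, dependencies first, so
    adding script3 to script1's list is a one-place change."""
    order, seen = [], set()

    def visit(script):
        if script in seen:
            return
        seen.add(script)  # marked before recursing, so cycles terminate
        for needed in deps.get(script, []):
            visit(needed)
        order.append(script)

    visit(entry)
    return order
```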
javascript "include" strategies
[ "", "javascript", "" ]
In the past I wrote most of my unit tests using C#, even when the actual software development was in another .NET language (VB.NET, C++.NET etc.), but I could use VB to get the same results. I guess the only reason I use C# is because most of the examples on the internet are written in C#. If you use unit tests as part of .NET software development, what language do you **prefer** to use, and what is the reason you choose it? I know that it depends greatly on the language of the project under test, but what I'm interested in finding out is whether there is a preference for a specific language to use while unit testing.
You should use whatever language is comfortable for your dev team. I don't see why you would write your unit tests in a language other than the language the rest of the project is written in, as that could cause confusion, or require devs to know two different languages to work on the project. Tests should follow the KISS principle.
C#, because that is what all of our new code is being written in. I would say it largely depends on what the company's preferred language is. If possible, I think it is best to settle on one language, and that would also be what the tests are written in. And if only a few people write C++, then still keep the tests in the more common languages so that more people can work with them.
What .NET language you use to write Unit Tests?
[ "", "c#", ".net", "unit-testing", "" ]
I want to make a transparent dialog. I capture the OnCtlColor message in a CDialog-derived class... this is the code:

```
HBRUSH CMyDialog::OnCtlColor(CDC* pDC, CWnd* pWnd, UINT nCtlColor)
{
    HBRUSH hbr = CDialog::OnCtlColor(pDC, pWnd, nCtlColor);

    if(bSetBkTransparent_)
    {
        pDC->SetBkMode(TRANSPARENT);
        hbr = (HBRUSH)GetStockObject(NULL_BRUSH);
    }

    return hbr;
}
```

It works fine for all the controls except the group box (CStatic). All the labels (CStatic) are painted with a transparent text background, but the text of the group box is not transparent. I already googled for this but didn't find a solution. Does anybody know how to make a really transparent group box? By the way, I am working on Windows XP, and I don't want to fully draw the control, to avoid having to change the code if the application is migrated to another OS.

Thanks, Javier

Note: I finally changed the dialog so that I don't need to make it transparent. Anyway, I am adding this information because maybe someone is still trying to do it. The group box isn't a CStatic but a CButton (I know this is not new). I changed the Windows XP theme to Windows Classic and then the group box background was transparent. The bad news is that in this case the frame line becomes visible beneath the text... so if someone is following this approach, I think he/she would be better off following Adzm's advice.
You have two options. You can avoid using Common Controls v6 (the XP-styled controls), which will make your app lose the fanciness of newer Windows versions; however, IIRC the group box will then respect the CTLCOLOR message. If you are not using them anyway, and the control is still not respecting your color, then you only have one option... which is to draw it yourself. I know you said you don't want to, but sometimes you have to. Thankfully a group box is a very simple control to draw. This page has an example of drawing a classic-style group box: <http://www.codeguru.com/cpp/controls/controls/groupbox/article.php/c2273/> You can also draw it very simply using the UxTheme libraries that come with XP+. If the application will be migrated to another OS, you will have plenty to deal with migrating over an MFC application in general. If that is your goal, then you should really look into developing with a cross-platform UI toolkit.
Simply set the WS\_EX\_TRANSPARENT extended window style for the group box.
How to make the group-box text background transparent
[ "", "c++", "visual-studio-2005", "mfc", "" ]
I want to display from cache for a long time and I want a slightly different behavior on page render vs loading the page from cache. Is there an easy way I can determine this with JavaScript?
One way you could do it is to include the time the page was generated in the page and then use some JavaScript to compare the local time to the time the page was generated. If the times differ by more than a threshold, then the page has come from a cache. The problem with that is that the check breaks if the client machine has its time set incorrectly, although you could get around this by making the client include its current system time in the request to generate the page and then sending that value back to the client.
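As a rough sketch of this idea (the function name and the 30-second threshold are illustrative choices, not part of the answer): the server embeds its generation timestamp in the page, and the client compares it against its own clock.

```javascript
// Sketch of the timestamp-comparison idea above. Assumes the server renders
// something like: <script>var pageGeneratedAt = /* ms since epoch */;</script>
// The function name and the 30-second threshold are illustrative choices.

// A page whose embedded generation time is much older than the client's
// clock was most likely replayed from a cache.
function isLikelyCached(pageGeneratedAt, now, thresholdMs) {
  return (now - pageGeneratedAt) > thresholdMs;
}

var threshold = 30 * 1000; // 30 seconds
console.log(isLikelyCached(Date.now() - 1000, Date.now(), threshold));   // false: freshly generated
console.log(isLikelyCached(Date.now() - 300000, Date.now(), threshold)); // true: five minutes old
```

The clock-skew caveat from the answer still applies; the round-trip variant described there would replace `Date.now()` with the client time echoed back by the server.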
I started with the answer Daniel gave above, but I feared that over a slow connection I could run into some latency issues. Here is the solution that ultimately worked for me. On the server side I add a cookie refCount and set its value to 0. On document load in JavaScript I first check refCount and then increment it. If refCount is greater than 1, I know the page is cached. So far this works like a charm. Thanks guys for leading me to this solution.
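A minimal sketch of this cookie-counter approach, with the cookie handling factored into pure helpers (the helper names are ours); in the browser, `cookieString` would be `document.cookie`:

```javascript
// Sketch of the cookie-counter approach above; helper names are illustrative.
// The server sends "refCount=0" with every freshly generated response, so any
// value above 0 read at load time (equivalently, above 1 after incrementing,
// as in the answer) means the HTML was replayed from cache.

// Parse the counter out of a raw cookie string (document.cookie in a browser).
function readRefCount(cookieString, name) {
  var m = cookieString.match(new RegExp('(?:^|;\\s*)' + name + '=(\\d+)'));
  return m ? parseInt(m[1], 10) : 0;
}

// Build the cookie assignment that records the next load.
function bumpRefCount(count, name) {
  return name + '=' + (count + 1);
}

// In the browser, on document load:
//   var count  = readRefCount(document.cookie, 'refCount');
//   var cached = count > 0;                          // fresh responses carry 0
//   document.cookie = bumpRefCount(count, 'refCount');

console.log(readRefCount('refCount=0', 'refCount'));      // 0 -> fresh page
console.log(readRefCount('a=1; refCount=2', 'refCount')); // 2 -> cached
console.log(bumpRefCount(2, 'refCount'));                 // refCount=3
```

The server must set the cookie on every generated response; if the cookie outlives the page in a shared browser session, the counter would need a path or expiry scoped to the page.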
How can I use JavaScript to detect if I am on a cached page
[ "", "javascript", "" ]
Could anyone suggest a good packet sniffer class for c++? Looking for a easy insertable class I can use in my c++ program, nothing complicated.
You will never be able to intercept network traffic just by inserting a class into your project. Packet capture functionality requires kernel mode support, hence you will at the very least need to have your application require or install libpcap/WinPcap, as Will Dean pointed out. Most modern Unix-like distributions include libpcap out of the box, in which case you could take a look at this very simple example: <http://www.tcpdump.org/pcap.htm> If you're using Windows, you're more or less on your own, although [WinPcap](http://www.winpcap.org/) programming is extremely similar to libpcap programming (unsurprisingly, since it's a libpcap port to Win32.) The SDK can be found here: <http://www.winpcap.org/devel.htm> At any rate, no matter the operating system, you *will* need root / Administrator access to actually perform a capture. Just using the library to replay or analyze precaptured data doesn't require any special privilege, of course.
You'll need to say something about your platform, as this is a platform rather than a language thing. But assuming you're on something common, look into pcap or winpcap.
Could anyone suggest a good packet sniffer class for c++?
[ "", "c++", "packet-capture", "winpcap", "sniffing", "" ]
What is the best design decision for a 'top-level' class to attach to an event on a class that may be 5+ layers down in the call stack? For example, perhaps the MainForm has spawned an object, and that object has spawned a call stack of several other object calls. The most obvious way would be to chain the event up the object hierarchy, but this seems messy and requires a lot of work. One other solution I've seen is to use the observer pattern by creating a publicly accessible static object which exposes the event, and acts as a proxy between the bottom-level object and the top-level 'form'. Any recommendations?

Here's a pseudo-code example. In this example, the MainForm instantiates 'SomeObject' and attaches to an event. 'SomeObject' attaches to an object it instantiates, in an effort to carry the event up to the MainForm listener.

```
class Mainform
{
    public void OnLoad()
    {
        SomeObject someObject = new SomeObject();
        someObject.OnSomeEvent += MyHandler;
        someObject.DoStuff();
    }

    public void MyHandler()
    {
    }
}

class SomeObject
{
    public void DoStuff()
    {
        SomeOtherObject otherObject = new SomeOtherObject();
        otherObject.OnSomeEvent += MyHandler;
        otherObject.DoStuff();
    }

    public void MyHandler()
    {
        if( OnSomeEvent != null )
            OnSomeEvent();
    }

    public event Action OnSomeEvent;
}
```
If your application isn't based on Composite UI Application Blocks, the easiest solution is to put a "listener" class between the main form and your other components which both classes can easily access. Conceptually, the classes are laid out as follows:

```
 ------------        ------------------
 | MainForm |        | Some Component |
 ------------        ------------------
      |                      |
  Hooks onto              Notifies
      |                      |
       \                    /
        --------------------
        |  Proxy Notifier  |
        --------------------
```

Here's some example code:

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            FakeMainForm form = new FakeMainForm();
            form.CreateComponentAndListenForMessage();
            Console.ReadKey(true);
        }
    }

    class FakeMainForm
    {
        public FakeMainForm()
        {
            Listener.AddListener(MessageRecieved);
        }

        void MessageRecieved(string msg)
        {
            Console.WriteLine("FakeMainForm.MessageRecieved: {0}", msg);
        }

        public void CreateComponentAndListenForMessage()
        {
            ComponentClass component = new ComponentClass();
            component.PretendToProcessData();
        }
    }

    class Listener
    {
        private static event Action<string> Notify;

        public static void AddListener(Action<string> handler)
        {
            Notify += handler;
        }

        public static void InvokeListener(string msg)
        {
            if (Notify != null)
            {
                Notify(msg);
            }
        }
    }

    class ComponentClass
    {
        public void PretendToProcessData()
        {
            Listener.InvokeListener("ComponentClass.PretendToProcessData() was called");
        }
    }
}
```

This program outputs the following:

```
FakeMainForm.MessageRecieved: ComponentClass.PretendToProcessData() was called
```

This code allows you to invoke methods directly on any listener, no matter how far apart they are in the call stack. It's easy to rewrite the Listener class so that it's a little more generic and works on different types, but you should get the idea.
My initial intention would be to try and avoid that, so that an object's scope has obvious boundaries. In the particular case of Forms, I would attempt to have the child's parent form manage all required communication with its ancestors. Can you be more specific about your case?
Best way to attach to events far down in the callstack in C#?
[ "", "c#", "events", "" ]
I have pretty much finished my first working Symbian application, but in my hastened learning have paid little attention to memory management and pushing to and cleaning up the stack. Could somebody please point me in the direction of some of the best practices to use here, and maybe some of the best leak-detection/memory-profiling tools? For example, if I grab a TDesC or a TPtrC16 inside a function, how do I then clean them up the best way? Is it simply

```
TPtrC16 temp = ...
temp.CleanupClosePushL();
CleanupStack::PopAndDestroy();
```

...for everything? Thanks and please forgive me, I am a self-confessed Symbian n00b.
I have in the past used [HookLogger from Symbian](http://www.newlc.com/Hooklogger-Tracking-leaked-heap.html) to trace and investigate memory leaks. It is not the best, but it sure does help. Also, the heap markers raise ALLOC panics in case of memory leaks whenever you exit your application. The information those panics provide is barely helpful, but it is a good indication that there is a memory leak somewhere. From there, you may use tools like HookLogger to narrow down which part of the code the leak is introduced in. I mentioned a few Symbian resources [in reply to this thread](https://stackoverflow.com/questions/200441/startup-point-for-symbianembedded-programming-learning#330815). While all those have been most useful for me, I did find the book [Accredited Symbian Developer Primer](http://www.amazon.co.uk/Accredited-Symbian-Developer-Primer-Fundamentals/dp/0470058277) immensely helpful. If you can get your hands on it, I would definitely recommend it. It is a quick read, but it explains the concepts brilliantly, including things about memory management and descriptors.
Things stored on the stack do not need to be stored on the cleanup stack (unless they need special handling — R classes etc., see below). The cleanup stack is for deleting objects when a leave (think: exception) occurs, which would otherwise leak memory. The cleanup stack is used through the static functions CleanupStack::PushL(..) and CleanupStack::Pop / CleanupStack::PopAndDestroy. Some classes such as RFile and RFs have to be closed rather than deleted, so for these you should use the global function CleanupClosePushL(), which, instead of calling the delete operator on your object on a leave, calls the class's ::Close() function instead. To check your code for memory leaks, you can use the macros \_\_UHEAP\_MARK and \_\_UHEAP\_MARKEND, which will verify that nothing allocated between the two calls is left on the heap. If you leave anything on the cleanup stack in a CActive's RunL(), the active scheduler will panic. As a general technique, if a function that you are calling *could* leave (denoted by a trailing 'L'), then anything that must be deleted or closed (etc.) should be pushed onto the cleanup stack first.
Memory management practices and tools for Symbian C++
[ "", "c++", "memory-management", "symbian", "s60", "" ]
I have a question about using `new[]`. Imagine this:

```
Object.SomeProperty = new[] {"string1", "string2"};
```

where SomeProperty expects an array of strings. I know this code snippet will work, but I want to know what it does under the hood. Does `new[]` make an instance of the class `object`, and does `SomeProperty` then convert it automatically to a `string` array? Thanks
Okay, there's still a *little* bit of confusion here. The inference that's going on has nothing to do with the type of Object.SomeProperty, but everything to do with the types of the expressions in the array initializer. In other words, you could do:

```
object o = new[] { "string1", "string2" };
```

and o would still be a reference to a string array. Basically, the compiler looks at an expression like this:

```
new[] { A, B, C, D, ... }
```

(where A, B, C, D etc are expressions) and tries to work out the correct array type to use. It only considers the types of A, B, C and D (etc) as the array element type. Taking this set of candidate types, it tries to find one which all the others can be implicitly converted to. If there's not *exactly one* such type then the compiler will complain. So for example:

```
new[] { new Form(), new MemoryStream() }
```

will not compile - neither `MemoryStream` nor `Form` is convertible to the other. However:

```
new[] { GetSomeIDisposable(), new MemoryStream() }
```

will be treated as an `IDisposable[]` because there's an implicit conversion from `MemoryStream` to `IDisposable`. Likewise:

```
new[] { 0, 1, 3.5 }  // double[]
new[] { 1, 3, 100L } // long[]
```
This is just syntactical sugar. The compiler will infer the type actually necessary here and create code that is equivalent to the explicit construct: ``` Object.SomeProperty = new string[] {"string1", "string2"}; ``` There's no such thing as `new[]` that gets executed at runtime.
C# Using new[]
[ "", "c#", "new-operator", "" ]
I get an error when I compile this code:

```
using System;

public struct Vector2
{
    public event EventHandler trigger;

    public float X;
    public float Y;

    public Vector2 func()
    {
        Vector2 vector;
        vector.X = 1;
        vector.Y = 2;
        return vector; // error CS0165: Use of unassigned local variable 'vector'
    }
}
```

The compiler says: "Use of unassigned local variable 'vector'" and points to the return value. It looks to me as if Vector2 has become a reference type (without the event member it acts normally). What is happening?
In C# you still need to 'new' a struct to call a constructor unless you are initializing **all** the fields. You left the EventHandler member 'trigger' unassigned. Try either assigning to 'trigger' or using:

```
Vector2 vector = new Vector2();
```

The new object is **not** allocated on the heap; it is still allocated on the function's stack. To quote [MSDN](http://msdn.microsoft.com/en-us/library/ah19swz4(VS.71).aspx):

> When you create a struct object using the new operator, it gets created and the appropriate constructor is called. Unlike classes, structs can be instantiated without using the new operator. **If you do not use new,** the fields will remain unassigned and **the object cannot be used** until all of the fields are initialized.
Others have explained ways round this, but I think it's worth mentioning the other big, big problem with your code: you have a mutable struct. Those are pretty much *always* a bad idea. This is bound to be just the first of many issues you'll run into if you keep it that way. I *strongly* recommend that you either make it immutable or make it a class.
Why do I get this error creating & returning a new struct?
[ "", "c#", "reference", "events", "struct", "compiler-errors", "" ]
This is the scenario: multiple web systems (mostly lampp/wampp) exist, most of them with separate login information (some share it). We're considering the benefits/disadvantages of unifying them somehow, or at least making the user-administration parts easier to handle. Due to the nature of some systems (it's a mixed bag of custom OSS systems, internally developed software, and 3rd-party commercial software) we can't unify all login screens into a single screen. An idea passed around is a sort of login master brain where we can control all user-name creation, permissions, inactivation, etc. This will still make people log in manually to every system, but at least it'll ease the administrative load of user management. Are there any known solutions to this kind of problem that involve changing the least amount of code/systems possible (necessarily, it could be considered)?

**Edit:** **OpenID doesn't work for us** since we have different login needs, and there are some systems where we can't directly control how they handle the login process (but we can control the users/passwords).
What we did was to centralise all login details in one repository (Active Directory for us), then write a C# library to authenticate, with wrappers for all the languages we programmed in (PHP, C, .NET, etc.), and then just write some glue code in the appropriate place for each application. Aside from our in-house apps, we successfully logged into MediaWiki, Subversion, activeCollab and Apache this way. It does involve writing a reasonable amount of code, but not a ridiculous amount, and it will work for the future as well. I can't see a practical solution which would be easier than this. Reading your question I note that this is more or less what you're thinking anyway, but it will work!
There is a big industry around this, and it is called IAM - Identity Access Management. IAM solutions basically do what you want: manage users and user permissions, and propagate their internal state to the multitude of systems. Depending on the possibilities for integration, you might have SSO - Single Sign-On - for some software, or you could have a single source of authentication. The former differs from the latter in that with SSO the user needs to punch in the credentials only once, while with the latter he merely has the same login and password combination everywhere. An IAM system would also manage user rights to the extent of its capabilities. For example, a piece of network equipment may only support a single user/password. The IAM solution would then automatically open a terminal and log the user on when he/she requests it, assuming the user is in the right security group. Implementing an IAM solution could go a long way toward easing systems management. I can't recommend any particular solution; just bear in mind that the transition from the current method to IAM will require more than integration with different software - it will also require some change in corporate culture, as one system will bind all the others.
Best Way to handle/consolidate multiple Logins?
[ "", "php", "security", "authentication", "" ]