Q: ReportViewer Division by zero I have some formulas in my reports, and to prevent division by zero I do like this in the expression field: =IIF(Fields!F1.Value <> 0, Fields!F2.Value/Fields!F1.Value, 0) This normally works fine, but when both F1 and F2 are zero, I get "#Error" in the report, and I get this warning: "The Value expression for the textbox ‘textbox196’ contains an error: Attempted to divide by zero." Why is that? A: IIF() is just a function, and like with any function all the arguments are evaluated before the function is called, including Fields!F2.Value/Fields!F1.Value. In other words, it will attempt to divide by zero in spite of the Fields!F1.Value <> 0 condition. A: There has to be a prettier way than this, but this should work: =IIF(Fields!F1.Value <> 0, Fields!F2.Value / IIF(Fields!F1.Value <> 0, Fields!F1.Value, 42), 0) A: However, you can use If Fields!F1.Value <> 0 Then Fields!F2.Value/Fields!F1.Value Else 0, which should work, since it doesn't evaluate the Then clause if the If condition is false.
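The eager-evaluation pitfall the first answer describes isn't specific to SSRS; any plain function behaves this way. A minimal Python sketch of the difference (the function and variable names here are illustrative, not part of ReportViewer):

```python
def iif(condition, if_true, if_false):
    # Like SSRS's IIF: an ordinary function, so BOTH branch
    # arguments are fully evaluated before the call happens.
    return if_true if condition else if_false

def iif_ratio(f2, f1):
    # Raises ZeroDivisionError when f1 == 0, because f2 / f1
    # is computed while building the argument list, before
    # iif() ever sees the condition.
    try:
        return iif(f1 != 0, f2 / f1, 0)
    except ZeroDivisionError:
        return "#Error"

def short_circuit_ratio(f2, f1):
    # A real conditional expression only evaluates the branch
    # that is taken, so the division is skipped when f1 == 0.
    return f2 / f1 if f1 != 0 else 0

print(iif_ratio(0, 0))           # #Error
print(short_circuit_ratio(0, 0)) # 0
```

This is why the nested-IIF workaround in the second answer exists: it makes the inner division's denominator non-zero even in the branch that is ultimately discarded.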
{ "language": "en", "url": "https://stackoverflow.com/questions/158508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .NET enterprise application platform (same as JBoss to Java) As a .NET developer I'm asking whether JBoss alternatives exist that are "more suitable for .NET development" as an enterprise application platform. Please do not make any suggestions such as "make JBoss expose WebServices"... A: Java lacks a "hosting" solution - this is where (mainly) all the solutions like JBoss and WebLogic are popping up from. In .NET you have many different hosting solutions: services, IIS, SQL Server, BizTalk ... Now with the recent WCF features you can implement your own JBoss in 5 minutes - create an object with Data and Messaging contracts, drop a couple of lines into config files to expose the interfaces via Web/other services, define a threading model - and you have your own JBoss. Java has no such story in its toolbox - this is why you need things like JBoss. [Of course I'm oversimplifying things, but it's the truth 80% of the time.] Note that with solutions like Apache CXF, more and more people are moving away from JBoss - now they have an alternative. A: I'm also not sure quite what you're looking for. However, if you are looking for an Enterprise Application Platform for .NET you might take a look at our product, NetQuarry, at www.netquarry.com. The NetQuarry Enterprise Application Platform uses IIS as the application server, and provides a feature-rich application support layer on top of that including object-relational mapping, rich AJAX UI, etc. A: If you're looking for Java app servers, there's WebSphere and WebLogic, but they'd probably be little different from JBoss from your perspective. What are you looking for? What does JBoss do that you want the alternative to do? Are you looking for something in .NET? Is your .NET code a client that's going to interface with the server, or are you writing a server in .NET?
{ "language": "en", "url": "https://stackoverflow.com/questions/158509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is it possible to make git svn dcommit result in a single svn commit? According to the manual, git svn dcommit “will create a revision in SVN for each commit in git.” But is there a way to avoid multiple Subversion revisions? That is, to have git merge all changes prior to performing the svn commit? A: Ryan Tomayko wrote a bit about git rebase -i, of which he said: …[it's] a bit like git commit --amend hopped up on acid and holding a chainsaw – completely insane and quite dangerous but capable of exposing entirely new states of mind. Here you can edit, squash, reorder, tease apart, and annotate existing commits in a way that’s easier and more intuitive than it ought to be. I have a tendency to commit often in git, but don't necessarily want to dcommit every commit to svn, and squashing all my work makes just as little sense. I'm trying it now to reorder and squash a few commits together into more logical units. A: If you work on a branch in git, you can git merge --squash, which does that within git. You could then push that one squashed commit to SVN. Of course, lots of small commits are good, so why would you want to squash them? A: The command git rebase -i can do this, and more. This command is extremely powerful, so it's good to make friends with it. The syntax is: git rebase -i <commit ID>. This brings up your text editor, with options (and instructions) for modifying all the commits up to (not including) the given ID. For example, to modify the previous 5 commits, you can do this: git rebase -i HEAD~5 Or if your SVN branch is called "svn/trunk", then this syntax is good too: git rebase -i svn/trunk Then a text editor window will pop up. To squash everything, change the first word of every line after the first from "pick" to "squash" (if this sounds confusing, it will make more sense when you see it). Then save and close the editor. You'll then have a chance to edit the commit message for the squashed commit. 
Among the other things you can do with git rebase -i are reordering commits, squashing commits in different ways, and removing commits. I use this command constantly; it's a killer feature of Git.
{ "language": "en", "url": "https://stackoverflow.com/questions/158514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How to handle a lot of flags for a SQL record I need advice on how to handle a relatively large set of flags in my SQL2k8 table. Two questions, bear with me please :) Let's say I have 20 flags I'd like to store for one record. For example: CanRead = 0x1 CanWrite = 0x2 CanModify = 0x4 ... and so on to the final flag 2^20 Now, if I set the following combination on one record: Permissions = CanRead | CanWrite I can easily check whether that record has a required permission by doing WHERE (Permissions & CanRead) = CanRead That works. But, I would also like to retrieve all records that can either write OR modify. If I issue WHERE (Permissions & ( CanWrite | CanModify )) = (CanWrite | CanModify) I obviously won't get my record that has permissions set to CanRead | CanWrite In other words, how can I find records that match ANY of the flags in the mask I'm sending to the procedure? Second question: how performant is this in SQL 2008? Would it actually be better to create 20 bit fields? Thanks for your help A: What about WHERE (Permissions & CanWrite) = CanWrite OR (Permissions & CanModify) = CanModify ? A: WHERE (Permissions & CanWrite) = CanWrite OR (Permissions & CanModify) = CanModify I think A: Isn't it as simple as ... WHERE (Permissions & ( CanWrite | CanModify )) > 0 ...as any 'bit' being set to 1 will result in a non-zero value for the '&' operator. It's late in the day, and I'm about to go home, so my brain could be working inefficiently. A: I assume your Permissions column is an Int. If it is, I encourage you to play around with the sample code I provide below. This should give you a clear indication of how the functionality works. 
Declare @Temp Table(Permission Int, PermissionType VarChar(20)) Declare @CanRead Int Declare @CanWrite Int Declare @CanModify Int Select @CanRead = 1, @CanWrite = 2, @CanModify = 4 Insert Into @Temp Values(@CanRead | @CanWrite, 'Read,write') Insert Into @Temp Values(@CanRead, 'Read') Insert Into @Temp Values(@CanWrite, 'Write') Insert Into @Temp Values(@CanModify | @CanWrite, 'Modify, write') Insert Into @Temp Values(@CanModify, 'Modify') Select * From @Temp Where Permission & (@CanRead | @CanWrite) > 0 Select * From @Temp Where Permission & (@CanRead | @CanModify) > 0 When you use a bitwise AND, you will get a number with the bits set appropriately based on your condition. If nothing matches, the result will be 0. If one or more conditions match, the result will be greater than 0. Let me show you an example. Suppose CanRead = 1, CanWrite = 2, and CanModify = 4. The valid combinations are: Modify Write Read Permissions ------ ----- ---- ----------- 0 0 0 Nothing 0 0 1 Read 0 1 0 Write 0 1 1 Read, Write 1 0 0 Modify 1 0 1 Modify, Read 1 1 0 Modify, Write 1 1 1 Modify, Write, Read Now, suppose you want to test for Read or Modify. From your app, you would pass in (CanRead | CanModify). This would be 101 (in binary). First, let's test this against a row in the table that ONLY has Read. 001 (Row from table) & 101 (Permissions to test) ------ 001 (result is greater than 0) Now, let's test against a row that only has Write. 010 (Row from table) & 101 (Permission to test) ------ 000 (result = 0) Now test it against a row that has all 3 permissions. 111 (Row from table) & 101 (Permission to test) ------ 101 (result is greater than 0) I hope you can see that if the result of the AND operation is 0, then none of the tested permissions apply to that row. If the value is greater than 0, then at least one of the tested permissions is present. A: Don't do that. It's like saving a CSV string into a memo field and defeating the purpose of a database. Use a boolean (bit) value for every flag. 
In this specific sample you're finding everything that can read and can write or modify: WHERE CanRead AND (CanWrite OR CanModify) Simple pure SQL with no clever hacks. The extra 7 bits you're wasting for every flag aren't worth the headache. A: It'd be considerably better to have a different permissions model. 20 flags would indicate to me that a rethink is required; most filing systems can get by with 12 basic flags and ACLs - maybe have a separate table that merely grants permissions, or group objects or accessors to allow different control. I would expect a select against 20 separate fields to be quicker - but I wouldn't add 20 fields for performance either. --update-- the original query written as WHERE (Permissions & ( CanWrite | CanModify )) > 0 would suffice; however, it sounds as though what you have in the database is a set of attributes that an entity can have. In which case the only sensible (in database terms) way to do this is with a one-to-many relationship to an attribute table. A: Nope, that won't work. I'm sending just one mask to the procedure. Something like @filter, which in C# I fill with @filter = CanModify | CanWrite So the procedure gets the OR-ed value as a filter. Oh, and by the way, it is NOT a permission model; I'm using that just as an example. I really have around 20 unique flags that my object can have. A: Do this only if you are also querying by some other key. Don't do this if you are querying by flag combinations. An index against this column will not help you in general. You'll be restricted to table scans.
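The ANY-versus-ALL distinction the answers are debating is easy to check outside the database. A small Python sketch of the two mask tests (the flag names follow the question; the sample rows are made up):

```python
CAN_READ, CAN_WRITE, CAN_MODIFY = 0x1, 0x2, 0x4

rows = [
    ("read+write", CAN_READ | CAN_WRITE),
    ("read only", CAN_READ),
    ("modify only", CAN_MODIFY),
]

mask = CAN_WRITE | CAN_MODIFY  # what the question passes as @filter

# ALL flags in the mask must be set -- the original WHERE clause.
# This misses the read+write row, as the question observed.
all_match = [name for name, p in rows if p & mask == mask]

# ANY flag in the mask may be set -- the "> 0" form suggested above.
any_match = [name for name, p in rows if p & mask != 0]

print(all_match)  # []
print(any_match)  # ['read+write', 'modify only']
```

Comparing `p & mask` against `mask` demands every bit; comparing it against zero demands at least one, which is exactly the "> 0" answer translated out of SQL.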
{ "language": "en", "url": "https://stackoverflow.com/questions/158519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: In PowerShell, how can I determine the root of a drive (supposing it's a networked drive) In PowerShell, even if it's possible to know if a drive is a network drive: see In PowerShell, how can I determine if the current drive is a networked drive or not? When I try to get the "root" of the drive, I get back the drive letter. The setup: MS-Dos "net use" shows that H: is really a mapped network drive: New connections will be remembered. Status Local Remote Network ------------------------------------------------------------------------------- OK H: \\spma1fp1\JARAVJ$ Microsoft Windows Network The command completed successfully. Get-PSDrive tells us that the Root is H: PS:24 H:\temp >get-psdrive h Name Provider Root CurrentLocation ---- -------- ---- --------------- H FileSystem H:\ temp and using system.io.driveinfo does not give us a complete answer: PS:13 H:\ >$x = new-object system.io.driveinfo("h:\") PS:14 H:\ >$x.DriveType Network PS:15 H:\ >$x.RootDirectory Mode LastWriteTime Length Name ---- ------------- ------ ---- d---- 29/09/2008 16:45 h:\ Any idea of how to get that info? Thanks A: The trick is that the attribute name is different than expected. Try: (Get-PSDrive h).DisplayRoot A: Try WMI: Get-WMIObject -query "Select ProviderName From Win32_LogicalDisk Where DeviceID='H:'" A: $drive = gwmi win32_logicaldisk -filter "DeviceID='H:'" if($drive.DriveType -eq 4) {write-host "drive is a network share"} A: $fso=new-object -com "Scripting.Filesystemobject" $fso.GetDrive("Y").ShareName
{ "language": "en", "url": "https://stackoverflow.com/questions/158520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there an easy way to implement LINQ to object with a DataContext? I have a working LINQ to SQL model. I want to be able to use the same model but with a connection to a DataSet object, instead of SQL Server. I need to be able to query the model, modify fields, as well as do insert and delete operations. Is there an easy way to accomplish this? I noticed another question mentions a similar scenario, but I'm not sure if this applies to my question. A: You can use LINQ to DataSet directly but the LINQ to SQL query translator converts expression trees into SQL statements and that can't be changed. For lists of inserts/updates/deletes for a given DataContext, you can call DataContext.GetChangeSet() A: You want a DataContext that is backed by a DataSet. No, this does not exist unless you build it.
{ "language": "en", "url": "https://stackoverflow.com/questions/158521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reading a windows *.dmp file I was wondering if anyone knows how to open a Windows *.dmp file produced after the crash of an application written in C/C++. A: Here's a link to an article from Microsoft on reading the small memory dump files that Windows creates for debugging A: When using Debugging Tools for Windows be sure to set up symbols. For Microsoft symbols use: SRV*DownstreamStore*http://msdl.microsoft.com/download/symbols For example: SRV*c:\websymbols*http://msdl.microsoft.com/download/symbols Take a look at these blogs for more on debugging: * *http://blogs.msdn.com/tom *http://blogs.msdn.com/ntdebugging *http://blogs.msdn.com/tess A: Using Visual Studio's File>Open Project or the free WinDbg's (part of Debugging Tools for Windows) File>Open Crash Dump, select the dmp file. Make sure to configure the tools to include a path to the location of the PDB debugging symbols for that application (you do have symbols, right?). Either tool has a thread and call stack window that should give you a good idea where the crash occurred. Including paths to the source code will help as well. Symbol and source paths can be set in WinDbg under the File menu. It's buried in Visual Studio under Tools>Options>Debugging>Symbols and Tools>Options>Projects and Solutions>VC++ Directories A: If you mean a dump file created by Windows (either small memory dump, kernel memory dump or full memory dump) that is created after a system crash, then you need WinDbg A: You should be able to just double click the .dmp file to automatically open it in Visual Studio. If the .pdb file that was generated when the program was compiled is still around, Visual Studio should be able to automatically load the symbols from that. From then on, you can just hit Run/Debug (F5) to start peeking into the .dmp file.
{ "language": "en", "url": "https://stackoverflow.com/questions/158534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Setting Focus on a Control within a ControlTemplate in WPF In an application I'm working on, we have a bunch of custom controls with their ControlTemplates defined in Generic.xaml. For instance, our custom textbox would look similar to this: <Style TargetType="{x:Type controls:FieldTextBox}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type controls:FieldTextBox}"> <Border BorderThickness="0" Margin="5"> <StackPanel ToolTip="{Binding Path=Field.HintText, RelativeSource={RelativeSource TemplatedParent}}"> <TextBlock Text="{Binding Path=Field.FieldLabel, RelativeSource={RelativeSource TemplatedParent}}" HorizontalAlignment="Left" /> <TextBox Width="{Binding Path=Field.DisplayWidth, RelativeSource={RelativeSource TemplatedParent}}" HorizontalAlignment="Left" Text="{Binding Path=Field.Data.CurrentValue, RelativeSource={RelativeSource TemplatedParent}}" IsEnabled="{Binding Path=Field.IsEnabled, RelativeSource={RelativeSource TemplatedParent}}" ContextMenu="{Binding Source={StaticResource FieldContextMenu}}" > <TextBox.Background> <SolidColorBrush Color="{Binding Path=Field.CurrentBackgroundColor, RelativeSource={RelativeSource TemplatedParent}}"/> </TextBox.Background> </TextBox> </StackPanel> </Border> </ControlTemplate> </Setter.Value> </Setter> <Setter Property="Focusable" Value="True" /> <Setter Property="IsTabStop" Value="False" /> </Style> In our application, we need to be able to programmatically set the focus on a particular control within the ControlTemplate. Within our C# code, we can get to the particular "FieldTextBox" based on our data. Once we have the correct FieldTextBox, we need to be able to set the focus on the actual TextBox contained within the ControlTemplate. The best solution I've come up with is to set a name on the primary control in each control template (in this case it's the TextBox), such as "FocusableControl." 
My code (contained in the code-behind for the FieldTextBox) to then set focus on the control would be: Control control = (Control)this.Template.FindName("FocusableControl", this); if (control != null) { control.Focus(); } This solution works. However, does anyone else know of a solution that would be more efficient than this? A: Within your control template you can add a Trigger that sets the FocusedElement of the StackPanel's FocusManager to the textbox you want focused. You set the Trigger's property to {TemplateBinding IsFocused} so it fires when the containing control is focused. A: You can get rid of the hard-coding of the control name by providing a DependencyProperty and running the same code in the control's Loaded handler or in OnApplyTemplate, based on that DependencyProperty. The element referenced by the DependencyProperty is then the candidate for the .Focus() call.
{ "language": "en", "url": "https://stackoverflow.com/questions/158536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: </br> not working in Firefox and Chrome A: You're looking for <br /> instead of </br> Self-closing tags such as br have the slash at the end of the tag. Here are the other self-closing tags in XHTML: * *What are all the valid self-closing tags in XHTML (as implemented by the major browsers)? A: IE7 is more forgiving of incorrect syntax in quirks mode. Instead of <br> or </br> it should be <br /> A: That's because </br> is an invalid tag. What you want is <br />. A: If you are trying to put space between two divs and <br/> is not working, then insert this code (between the divs) to get the <br/> tag working. <div class="clear"></div> and add .clear { clear: both; } in your css file. A: The br tag should be: <br/> A: It should be <br> or <br /> not </br> A: <br /> should probably be used only if you are writing XHTML. If you use validator.w3.org to validate the following as HTML 4.01: <html> <head> <title></title> </head> <body> <p> <br /> </p> </body> </html> This warning is generated: Line 8, Column 3: NET-enabling start-tag requires SHORTTAG YES. <br /> The sequence <br /> can be interpreted in at least two different ways, depending on the DOCTYPE of the document. For HTML 4.01 Strict, the '/' terminates the tag (with an implied '>'). However, since many browsers don't interpret it this way, even in the presence of an HTML 4.01 Strict DOCTYPE, it is best to avoid it completely in pure HTML documents and reserve its use solely for those written in XHTML. A: It should just be <br>. A: You want <BR> or <BR />, not </BR> A: If you are using Struts, set escapeXml="false" A: It’s not </br>, it’s <br> or <br />. So, this doesn’t work as expected: <!doctype html> <html> <head> <title></title> </head> <body> <p> Some text... </br> Some more text... </p> More content... </body> </html> But this works: <!doctype html> <html> <head> <title></title> </head> <body> <p> Some text... <br> Some more text... <br /> </p> More content... </body> </html> A: For me CSS was the issue. 
For this tag, display: none; was applied in the CSS, so the <br> tag was not rendering. A: Alternatively to <br /> or <br> you can use <p></p> or </p>
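To see how a tolerant parser tokenizes the three spellings, here is a small Python sketch using the standard library's html.parser (this shows only the syntactic difference between a start tag, an XHTML-style self-closing tag, and an end tag; real browsers apply their own error recovery on top):

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Records each tag event the parser reports."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_startendtag(self, tag, attrs):
        # Called for XHTML-style self-closing syntax like <br />.
        self.events.append(("startend", tag))

    def handle_endtag(self, tag):
        # Called for </br> -- an end tag, not a line break.
        self.events.append(("end", tag))

p = TagLogger()
p.feed("one<br>two<br />three</br>four")
print(p.events)
# [('start', 'br'), ('startend', 'br'), ('end', 'br')]
```

The `</br>` spelling is reported as an end tag for br, which is a void element with no content to close; that is the mismatch the answers above are pointing at.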
{ "language": "en", "url": "https://stackoverflow.com/questions/158539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Check/uncheck parents and children Can you do a better code? I need to check/uncheck all childs according to parent and when an child is checked, check parent, when all childs are unchecked uncheck parent. $(".parent").children("input").click(function() { $(this).parent().siblings("input").attr("checked", this.checked); }); $(".parent").siblings("input").click(function() { if (this.checked) { $(this).siblings("div").children("input").attr("checked", true); return; } var childs = $(this).siblings("div").siblings("input"); for (i = 0; i < childs.length; i++) { if ($(childs.get(i)).attr("checked")) return; } $(this).parent().children("div").children("input").attr("checked", false); }); A: $(".parent").children("input").click(function() { $(this).parent().siblings("input").attr("checked", this.checked); }); $(".parent").siblings("input").click(function() { $(this).siblings("div").children("input").attr("checked", this.checked || $(this).siblings("input[checked]").length>0 ); }); A: woah, i'm mega confused. it looks as though you have inputs with other inputs inside of them? ...which doesn't make sense. Here's what I think your structure looks like, so here I go. <div class="parent"> <input type="checkbox" /> <div> <input type="checkbox" /> <input type="checkbox" /> </div> <input type="checkbox" /> <div> <input type="checkbox" /> <input type="checkbox" /> </div> </div> And here's the code I'd use. $("input[type='checkbox']").click(function() { // turn on or off all descendants. $(this) // get this checkbox // get the div directly after it .next('div') // get ALL the inputs in that div (not just first level children) .find("input[type='checkbox']") .attr("checked", this.checked) ; // now check if we have to turn the parent on or off. $(this) .parent() // this will be the div .prev('input[type="checkbox"]') // this is the input .attr( "checked", // set checked to true if... this.checked // this one is checked, or... 
|| $(this).siblings("input[type='checkbox'][checked]").length > 0 // any of the siblings are checked. ) ; }); Update: I've just tested this and it totally works (woo!). It also works with as many levels of nesting as you want, not just two. A: This is a lovely little thread that has got me nearly where I need to be. I've slightly adapted Nickf's code. The aim is that checking a child would also check the parent, and unchecking a parent would also uncheck all children. $("input[type='checkbox']").click(function() { if (!$(this.checked)) { $(this) .next('div') .find("input[type='checkbox']") .attr("checked", this.checked) ; } $(this) .parent() .prev('input[type="checkbox"]') .attr("checked", this.checked || $(this).siblings("input[type='checkbox'][checked]").length > 0 ) ; }); How would one go about tweaking this so that all descendants are unchecked if the parent is unchecked? I'm new to all things jQuery and JavaScript... apologies. A: $('.parent').click(function(){ checkBox = $(this).find('input[type=checkbox]'); checkBox.prop("checked", !checkBox.prop("checked")); // inverse selection }); Ultimate.
{ "language": "en", "url": "https://stackoverflow.com/questions/158544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best way to store and use a large text-file in python I'm creating a networked server for a boggle-clone I wrote in python, which accepts users, solves the boards, and scores the player input. The dictionary file I'm using is 1.8MB (the ENABLE2K dictionary), and I need it to be available to several game solver classes. Right now, I have it so that each class iterates through the file line-by-line and generates a hash table(associative array), but the more solver classes I instantiate, the more memory it takes up. What I would like to do is import the dictionary file once and pass it to each solver instance as they need it. But what is the best way to do this? Should I import the dictionary in the global space, then access it in the solver class as globals()['dictionary']? Or should I import the dictionary then pass it as an argument to the class constructor? Is one of these better than the other? Is there a third option? A: If you create a dictionary.py module, containing code which reads the file and builds a dictionary, this code will only be executed the first time it is imported. Further imports will return a reference to the existing module instance. As such, your classes can: import dictionary dictionary.words[whatever] where dictionary.py has: words = {} # read file and add to 'words' A: Even though it is essentially a singleton at this point, the usual arguments against globals apply. For a pythonic singleton-substitute, look up the "borg" object. That's really the only difference. Once the dictionary object is created, you are only binding new references as you pass it along unless if you explicitly perform a deep copy. It makes sense that it is centrally constructed once and only once so long as each solver instance does not require a private copy for modification. A: Adam, remember that in Python when you say: a = read_dict_from_file() b = a ... 
you are not actually copying a, and thus using more memory, you are merely making b another reference to the same object. So basically any of the solutions you propose will be far better in terms of memory usage. Basically, read in the dictionary once and then hang on to a reference to that. Whether you do it with a global variable, or pass it to each instance, or something else, you'll be referencing the same object and not duplicating it. Which one is most Pythonic? That's a whole 'nother can of worms, but here's what I would do personally: def main(args): run_initialization_stuff() dictionary = read_dictionary_from_file() solvers = [ Solver(cls=x, dictionary=dictionary) for x in range(number_of_solvers) ] HTH. A: Depending on what your dict contains, you may be interested in the 'shelve' or 'anydbm' modules. They give you dict-like interfaces (just strings as keys and items for 'anydbm', and strings as keys and any python object as item for 'shelve') but the data is actually in a DBM file (gdbm, ndbm, dbhash, bsddb, depending on what's available on the platform.) You probably still want to share the actual database between classes as you are asking for, but it would avoid the parsing-the-textfile step as well as the keeping-it-all-in-memory bit.
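The reference-versus-copy point above is easy to demonstrate directly. A small sketch (read_dict_from_file and the tiny word list are stand-ins for the actual 1.8MB ENABLE2K loader):

```python
def read_dict_from_file():
    # Stand-in for parsing the ENABLE2K word list once at startup.
    return {"cat": True, "dog": True, "boggle": True}

class Solver:
    def __init__(self, dictionary):
        # This binds a reference; no copy of the dict is made,
        # so memory usage does not grow with the solver count.
        self.dictionary = dictionary

words = read_dict_from_file()
solvers = [Solver(words) for _ in range(5)]

# All five solvers hold the very same dict object:
print(all(s.dictionary is words for s in solvers))  # True

# A word added through one solver is visible to all of them:
solvers[0].dictionary["qi"] = True
print("qi" in solvers[4].dictionary)  # True
```

The `is` check confirms identity, not equality: there is one dict in memory no matter how many solver instances reference it, which is the whole argument for loading the file once and passing the object around.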
{ "language": "en", "url": "https://stackoverflow.com/questions/158546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Get street address at lat/long pair I've seen that it's possible to get the latitude and longitude (geocoding, like in Google Maps API) from a street address, but is it possible to do the reverse and get the street address when you know what the lat/long already is? The application would be an iPhone app (which is why the app already knows lat/long), so anything from a web service to an iPhone API would work. A: This is called "reverse geocoding", and there do exist web services that will provide this functionality. I'd urge being wary of the quality, scaling, and reliability of free services, but here's a place to start: http://www.geonames.org/export/reverse-geocoding.html A: iPhone OS 3.0 now has the MKReverseGeocoder class for precisely this purpose. A: Google again http://nicogoeminne.googlepages.com/documentation.html http://groups.google.com/group/Google-Maps-API/web/resources-non-google-geocoders A: Google now supports reverse geocoding in both the JavaScript API and the webservice over HTTP. A request looks like this: http://maps.google.com/maps/geo?output=xml&oe=utf-8&ll=LAT,LON&key=API_KEY Note, you must change LAT to latitude, LON to longitude and API_KEY to your Google Maps API key. The service returns results only for countries whose geocoding column is marked YES in the following spreadsheet: http://gmaps-samples.googlecode.com/svn/trunk/mapcoverage_filtered.html More info should soon be found in the official documentation: http://code.google.com/apis/maps/documentation/services.html#Geocoding_Direct A: You can also use LINK REMOVED library for that purpose. MKReverseGeocoder is nice but it requires you to use it with a Google map. From MKReverseGeocoder reference documentation: The Google terms of service require that the reverse geocoding service be used in conjunction with a Google map; take this into account when designing your application’s user interface. 
SSLocationManager might be an alternative in the case your application does not use a Google Map but just needs to access detailed information about the current location having only latitude and longitude data at hand. It uses Yahoo! PlaceFinder API. Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/158557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Tomcat 6: how to delete temporary files after a web method call has ended? I have a temporary file with data that's returned as part of a SOAP response via a MTOM binary attachment. I would like to trash it as soon as the method call "ends" (i.e., finishes transferring). What's the best way for me to do this? The best way I can figure out how to do this is to delete them when the session is destroyed, but I'm not sure if there's a more 'immediate' way to do this. FYI, I'm NOT using Axis, I'm using jax-ws, if that matters. UPDATE: I'm not sure the answerers are really understanding the issue. I know how to delete a file in java. My problem is this: @javax.jws.WebService public class MyWebService { ... @javax.jws.WebMethod public MyFileResult getSomeObject() { File mytempfile = new File("tempfile.txt"); MyFileResult result = new MyFileResult(); result.setFile(mytempfile); // sets mytempfile as MTOM attachment // mytempfile.delete() IS WRONG // can't delete mytempfile because it hasn't been returned to the web service client // yet. So how do I remove it? return result; } } A: I ran into this same problem. The issue is that the JAX-WS stack manages the file. It is not possible to determine in your code when JAX-WS is done with the file, so you do not know when to delete it. In my case, I am using a DataHandler on my object model rather than a file. MyFileResult would have the following field instead of a file field: private DataHandler handler; My solution was to create a customized version of FileDataSource. Instead of returning a FileInputStream to read the contents of the file, I return the following extension of FileInputStream: private class TemporaryFileInputStream extends FileInputStream { private final File file; public TemporaryFileInputStream(File file) throws FileNotFoundException { super(file); this.file = file; } @Override public void close() throws IOException { super.close(); file.delete(); } } Essentially the datasource allows reading only once. 
After the stream is closed, the file is deleted. Since the JAX-WS stack only reads the file once, it works. The solution is a bit of a hack but seems to be the best option in this case. A: Are you using standard java temp files? If so, you can do this: File script = File.createTempFile("temp", ".tmp", new File("./")); ... use the file ... script.delete(); // delete when done. A: It depends on the work folder that you set up in the context for this webapp. Can you set this work directory to a known location? If yes, then you can find the temp file within that work directory. Once you find it, you can delete it.
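The delete-on-close trick in the accepted answer translates readily to other languages. A Python sketch of the same idea (SelfDeletingReader is a made-up name for illustration, not a real API): a reader that removes its backing file when the single consumer closes the stream.

```python
import os
import tempfile

class SelfDeletingReader:
    """Opens a file for reading and deletes it on close, mirroring
    the TemporaryFileInputStream trick from the answer above."""
    def __init__(self, path):
        self.path = path
        self._fh = open(path, "rb")

    def read(self, *args):
        return self._fh.read(*args)

    def close(self):
        # The consumer signals it is done by closing the stream,
        # which is the only reliable "transfer finished" hook.
        self._fh.close()
        os.unlink(self.path)

# Demo: create a temp file, hand it to a reader, and let the
# single read-then-close cycle clean it up.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"attachment bytes")

reader = SelfDeletingReader(path)
data = reader.read()
reader.close()

print(data)                  # b'attachment bytes'
print(os.path.exists(path))  # False
```

As in the Java version, this only works because the stack reads the stream exactly once; a second open of the same path would fail after cleanup.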
{ "language": "en", "url": "https://stackoverflow.com/questions/158568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Best practices: .NET: How to return PK against an Oracle database? With SQLServer, it seems to be generally accepted that adding a SELECT SCOPE_IDENTITY() to the end of your insert is the best way to return the PK of the newly-inserted record, assuming you're using an auto-increment field for the pk. However, I can't seem to find the equivalent for Oracle. Best practice seems to be to use a sequence to generate the PK, but there are different options for how to implement even that. Do you leave it up to the developer to insert sequence.nextval, or use a trigger? In either case, getting the new ID back seems to be a common problem. Suggestions and solutions I've run across include: * *creating a stored proc that returns the PK *running a select id from seq.nextval, then passing that to the insert *select max(id) after insert (Note: Don't do this!) *add a RETURNING clause to the insert What should the "best practice" solution be for this situation? A: You can use the RETURNING clause to do this in Oracle stored procs. For example: TABLEA has NAME and EMP_ID. EMP_ID is populated internally when records are inserted. INSERT INTO TABLEA(NAME) VALUES ('BOB') RETURNING EMP_ID INTO o_EMP_ID; That's assuming that line is in a stored proc with an output parameter of o_EMP_ID. Hope that helps... if not, here's a more detailed example: http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/returninginto_clause.htm A: The RETURNING clause is intended for just this kind of usage, so I would call it a best practice to use it. An alternative would be to select seq.CURRVAL after the insert. That returns the last value obtained from the sequence by this session. A: The stored procedure and the returning clause have the distinct benefit of a single database call; any other solution is inferior. Whether you do it via a stored procedure or you use a returning clause is a whole can of worms in itself.
{ "language": "en", "url": "https://stackoverflow.com/questions/158571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Programmatically align a toolbar on top of the iPhone keyboard In several cases I want to add a toolbar to the top of the iPhone keyboard (as in iPhone Safari when you're navigating form elements, for example). Currently I am specifying the toolbar's rectangle with constants but because other elements of the interface are in flux - toolbars and nav bars at the top of the screen - every time we make a minor interface change, the toolbar goes out of alignment. Is there a way to programmatically determine the position of the keyboard in relation to the current view? A: So basically: In the init method: NSNotificationCenter *nc = [NSNotificationCenter defaultCenter]; [nc addObserver:self selector:@selector(keyboardWillShow:) name: UIKeyboardWillShowNotification object:nil]; [nc addObserver:self selector:@selector(keyboardWillHide:) name: UIKeyboardWillHideNotification object:nil]; And then have methods referred to above to adjust the position of the bar: -(void) keyboardWillShow:(NSNotification *) note { CGRect r = bar.frame, t; [[note.userInfo valueForKey:UIKeyboardBoundsUserInfoKey] getValue: &t]; r.origin.y -= t.size.height; bar.frame = r; } Could make it pretty by animating the position change by wrapping it in [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:0.3]; //... 
[UIView commitAnimations]; A: This is based on the existing answer from tonklon - I'm just adding a code snippet that shows a semi transparent black toolbar on top of the keyboard, together with a "done" button on the right: UIToolbar *toolbar = [[[UIToolbar alloc] init] autorelease]; [toolbar setBarStyle:UIBarStyleBlackTranslucent]; [toolbar sizeToFit]; UIBarButtonItem *flexButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:self action:nil]; UIBarButtonItem *doneButton =[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(resignKeyboard)]; NSArray *itemsArray = [NSArray arrayWithObjects:flexButton, doneButton, nil]; [flexButton release]; [doneButton release]; [toolbar setItems:itemsArray]; [aTextField setInputAccessoryView:toolbar]; and the -resignKeyboard looks like: -(void)resignKeyboard { [aTextField resignFirstResponder]; } A: If you register for keyboard notifications, ie UIKeyboardWillShowNotification UIKeyboardWillHideNotification, etc, the notification you receive will contain the bounds of the keyboard in the userInfo dict (UIKeyboardBoundsUserInfoKey). See the UIWindow class reference. A: In 3.0 and above you can get the animation duration and curve from the userInfo dictionary of the notifications. 
for instance, to animate the size of the view to make room for the keyboard, register for the UIKeyboardWillShowNotification and do something like the following: - (void)keyboardWillShow:(NSNotification *)notification { [UIView beginAnimations:nil context:NULL]; [UIView setAnimationCurve:[[[notification userInfo] objectForKey:UIKeyboardAnimationCurveUserInfoKey] intValue]]; [UIView setAnimationDuration:[[[notification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]]; CGRect frame = self.view.frame; frame.size.height -= [[[notification userInfo] objectForKey:UIKeyboardBoundsUserInfoKey] CGRectValue].size.height; self.view.frame = frame; [UIView commitAnimations]; } Do a similar animation for UIKeyboardWillHideNotification. A: As of iOS 3.2 there's a new way to achieve this effect: UITextFields and UITextViews have an inputAccessoryView property, which you can set to any view, that is automatically displayed above and animated with the keyboard. Note that the view you use should neither be in the view hierarchy elsewhere, nor should you add it to some superview, this is done for you. 
A: Create this method and call it on ViewWillLoad: - (void) keyboardToolbarSetup { if(self.keyboardToolbar==nil) { self.keyboardToolbar = [[UIToolbar alloc] initWithFrame:CGRectMake(0, 0, self.view.bounds.size.width, 44)]; UIBarButtonItem *cancelButton = [[UIBarButtonItem alloc] initWithTitle:@"Cancel" style:UIBarButtonItemStylePlain target:self action:@selector(anyAction)]; UIBarButtonItem *extraSpace = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemFlexibleSpace target:nil action:nil]; UIBarButtonItem *doneButton = [[UIBarButtonItem alloc] initWithTitle:@"Done" style:UIBarButtonItemStyleDone target:self action:@selector(anyOtherAction)]; NSArray *toolbarButtons = [[NSArray alloc]initWithObjects:cancelButton,extraSpace,doneButton, nil]; [self.keyboardToolbar setItems:toolbarButtons]; self.myTextView.inputAccessoryView=self.keyboardToolbar; } } A: There's no way (AFAIK) to get the dimensions of the keyboard view. It is however constant, at least in every iPhone version so far. If you calculate the toolbar position as an offset from the BOTTOM of your view, and take the size of your view into account, then you should not have to worry whether a navbar is present or not. E.g. #define KEYBOARD_HEIGHT 240 // example - can't remember the exact size #define TOOLBAR_HEIGHT 30 toolBarRect.origin.y = viewRect.size.height - KEYBOARD_HEIGHT - TOOLBAR_HEIGHT; // move toolbar either directly or with an animation Instead of a define, you could easily create a keyboardHeight function that returns the size based on whether the keyboard is being displayed, and move this toolbar positioning into a separate function that reorganizes your layout. Also it can depend on where you do this positioning as it's possible the size of your view may change between being loaded and shown based on your navbar setup. I believe the best place to do it would be in viewWillAppear.
{ "language": "en", "url": "https://stackoverflow.com/questions/158574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: Simple .net portal. Easy to add new modules I am writing a simple database with web access. I have previous experience with Microsoft's (very!) old IBuySpy portal system. I am sure there must be something a bit more up to date I could use now! I want a simple lightweight system that will allow my friend to have tabs with news and pictures etc, and make it easy for me to add tabs with my database entry forms. There must be some authentication mechanism for users, but nothing complex in the way of personal blogs or forums is required. I have had a quick look at DNN but it looks like a lot to learn. Is there a simple solution? A: You should have a look at this page: http://www.asp.net/community/projects/. They list a few portals, DNN for example. But they also have some CMS's etc that you may want to have a look at. It almost sounds to me like a CMS would be better for you than a portal would. I have played around with the Graffiti CMS system and think it's probably the best .Net CMS system. And they offer a free version if you are not making a commercial website. Another portal you can have a look at is Rainbow Portal. But I think any portal system will likely be very complex "out of the box". I would highly recommend you have a look at simple CMS systems instead. A: I suggest mojoPortal. A: DotNetNuke is a good portal that was originally built using the IBuySpy portal as its base. http://dotnetnuke.com A: Also as an alternative, you may want to look at starting with BlogEngine.NET, and just extending it with the functionality you need. I did this for the http://communitycodingcontest.org website. A: You may also want to take a look at http://www.umbraco.org/ A: The current MS-provided portal solution is SharePoint, but that's probably both overkill for your situation and out of your budget. A: if this is just for you and your friend, a wiki site might be a simpler solution
{ "language": "en", "url": "https://stackoverflow.com/questions/158584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you add a timed delay to a C++ program? I am trying to add a timed delay in a C++ program, and was wondering if anyone has any suggestions on what I can try or information I can look at? I wish I had more details on how I am implementing this timed delay, but until I have more information on how to add a timed delay I am not sure how I should even attempt to implement this. A: Note that this does not guarantee that the amount of time the thread sleeps will be anywhere close to the sleep period, it only guarantees that the amount of time before the thread continues execution will be at least the desired amount. The actual delay will vary depending on circumstances (especially load on the machine in question) and may be orders of magnitude higher than the desired sleep time. Also, you don't list why you need to sleep, but you should generally avoid using delays as a method of synchronization. A: You can try this code snippet: #include<chrono> #include<thread> int main(){ std::this_thread::sleep_for(std::chrono::nanoseconds(10)); std::this_thread::sleep_until(std::chrono::system_clock::now() + std::chrono::seconds(1)); } A: #include <unistd.h> usleep(3000000); This will also sleep for three seconds. You can refine the numbers a little more though. A: You can also use select(2) if you want microsecond precision (this works on platforms that don't have usleep(3)). The following code will wait for 1.5 seconds: #include <sys/select.h> #include <sys/time.h> #include <unistd.h> int main() { struct timeval t; t.tv_sec = 1; t.tv_usec = 500000; select(0, NULL, NULL, NULL, &t); } A: I found that "_sleep(milliseconds);" (without the quotes) works well for Win32. E.g.: #include <iostream> using namespace std; int main() { cout << "text" << endl; _sleep(10000); // pauses for 10 seconds } Make sure you include the underscore before sleep.
A: Do you want something as simple as: #include <unistd.h> sleep(3);//sleeps for 3 seconds A: Yes, sleep is probably the function of choice here. Note that the time passed into the function is the smallest amount of time the calling thread will be inactive. So for example if you call sleep with 5 seconds, you're guaranteed your thread will be sleeping for at least 5 seconds. Could be 6, or 8 or 50, depending on what the OS is doing. (During optimal OS execution, this will be very close to 5.) Another useful feature of the sleep function is to pass in 0. This will force a context switch from your thread. Some additional information: http://www.opengroup.org/onlinepubs/000095399/functions/sleep.html A: The top answer here seems to be an OS-dependent answer; for a more portable solution you can write up a quick sleep function using the ctime header file (although this may be a poor implementation on my part). #include <iostream> #include <ctime> using namespace std; void sleep(float seconds){ clock_t startClock = clock(); float secondsAhead = seconds * CLOCKS_PER_SEC; // do nothing until the elapsed time has passed. while(clock() < startClock+secondsAhead); return; } int main(){ cout << "Next string coming up in one second!" << endl; sleep(1.0); cout << "Hey, what did I miss?" << endl; return 0; } A: To delay output in C++ for a fixed time, you can use the Sleep() function by including the windows.h header file. The syntax for the Sleep() function is Sleep(time_in_ms), as in: cout<<"Apple\n"; Sleep(3000); cout<<"Mango"; OUTPUT: the above code will print Apple and wait for 3 seconds before printing Mango.
A: An updated answer for C++11: Use the sleep_for and sleep_until functions: #include <chrono> #include <thread> int main() { using namespace std::this_thread; // sleep_for, sleep_until using namespace std::chrono; // nanoseconds, system_clock, seconds sleep_for(nanoseconds(10)); sleep_until(system_clock::now() + seconds(1)); } With these functions there's no longer a need to continually add new functions for better resolution: sleep, usleep, nanosleep, etc. sleep_for and sleep_until are template functions that can accept values of any resolution via chrono types; hours, seconds, femtoseconds, etc. In C++14 you can further simplify the code with the literal suffixes for nanoseconds and seconds: #include <chrono> #include <thread> int main() { using namespace std::this_thread; // sleep_for, sleep_until using namespace std::chrono_literals; // ns, us, ms, s, h, etc. using std::chrono::system_clock; sleep_for(10ns); sleep_until(system_clock::now() + 1s); } Note that the actual duration of a sleep depends on the implementation: You can ask to sleep for 10 nanoseconds, but an implementation might end up sleeping for a millisecond instead, if that's the shortest it can do. A: In Win32: #include<windows.h> Sleep(milliseconds); In Unix: #include<unistd.h> unsigned int microsecond = 1000000; usleep(3 * microsecond);//sleeps for 3 seconds sleep() only takes a number of seconds, which is often too long. A: Syntax: void sleep(unsigned seconds); sleep() suspends execution for an interval (seconds). With a call to sleep, the current program is suspended from execution for the number of seconds specified by the argument seconds. The interval is accurate only to the nearest hundredth of a second or to the accuracy of the operating system clock, whichever is less accurate. A: Many others have provided good info for sleeping. I agree with Wedge that a sleep is seldom the most appropriate solution.
If you are sleeping as you wait for something, then you are better off actually waiting for that thing/event. Look at Condition Variables for this. I don't know what OS you are trying to do this on, but for threading and synchronisation you could look to the Boost Threading libraries (Boost Condition Variable). Moving now to the other extreme: if you are trying to wait for exceptionally short periods then there are a couple of hack-style options. If you are working on some sort of embedded platform where a 'sleep' is not implemented then you can try a simple loop (for/while etc) with an empty body (be careful the compiler does not optimise it away). Of course the wait time is dependent on the specific hardware in this case. For really short 'waits' you can try an assembly "nop". I highly doubt these are what you are after but without knowing why you need to wait it's hard to be more specific. A: On Windows you can include the windows library and use "Sleep(0);" to sleep the program. It takes a value that represents milliseconds.
{ "language": "en", "url": "https://stackoverflow.com/questions/158585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: How do I rig Firebug 1.2 to behave the way previous versions did? After waffling for months I've installed Firefox 3 on my development machine and am regretting it, because the new Firebug is sucking my will to live. Flakiness aside--it's not catching basic syntax errors--there's a whole new layer of UI all over everything, and it's making me THINK. Please, can somebody tell me the series of choices I need to make to get Firebug to behave exactly the way it did--just show everything all the time, please, and quit bugging me to re-POST to get results in my Net tab--before I upgraded? A: Progress ... smart co-worker pointed me at about:config for part of the answer. I toggled extensions.firebug.allowDoublePost to true, restarted Firefox, and the Net tab quit asking me to re-POST.
{ "language": "en", "url": "https://stackoverflow.com/questions/158593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Synchronize DataSet What is the best approach to synchronizing a DataSet with data in a database? Here are the parameters: * *We can't simply reload the data because it's bound to a UI control which a user may have configured (it's a tree grid that they may expand/collapse) *We can't use a changeflag (like an UpdatedTimeStamp) in the database because changes don't always flow through the application (e.g. a DBA could update a field with a SQL statement) *We cannot use an update trigger in the database because it's a multi-user system *We are using ADO.NET DataSets *Multiple fields can change on a given row I've looked at the DataSet's Merge capability, but this doesn't seem to keep the notion of an "ID" column. I've looked at DiffGram capability but the issue here is those seem to be generated from changes within the same DataSet rather than changes that occurred on some external data source. I've been running from this solution for a while but the approach I know would work (with a lot of inefficiency) is to build a separate DataSet and then iterate all rows applying changes, field by field, to the DataSet on which it is bound. Has anyone had a similar scenario? What did you do to solve the problem? Even if you haven't run into a similar problem, any recommendation for a solution is appreciated.
Thanks A: DataSet.Merge works well for this if you have a primary key defined for each DataTable; the DataSet will raise changed events to any databound GUI controls. If your table is small, you can just re-read all of the rows and merge periodically; otherwise, limiting the set to be read with a timestamp is a good idea - just tell the DBAs to follow the rules and update the timestamp ;-) Another option - which is a bit of work - is to keep a changed-row queue (timestamp, row ID) using a trigger or stored procedure, and base the refresh queries off of the timestamp in the queue; this will be more efficient if the base table has a lot of rows in it, allowing you (via an inner join on the queue record) to pull only the changed rows since the last poll time. A: I think it would be easier to store a list of the nodes that the user has expanded (assuming you can uniquely identify each one), then re-load the data and re-bind it to the tree view, and then expand all the nodes previously expanded.
{ "language": "en", "url": "https://stackoverflow.com/questions/158617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: MainSoft Grasshopper, .Net to Java I just stumbled upon MainSoft's Grasshopper, which claims to cross-compile .Net ILM to Java bytecode. It seems to use the Mono implementation of the .Net libraries. All of the samples refer to web apps, but my requirement would be to cross-compile a .Net API (class library) to a Java API so that Java clients can use the API. Does anyone have any experience of using Grasshopper, and can you you see any problems with my plan? A: I tried it about 12-18 months ago for porting a ASPNET site to something I could run on top of Apache. I know that's not your intended purpose but stay with me. The process wasn't smooth. There were parts of the .net framework that (at the time) weren't implemented in the grasshopper codebase and once we'd evaluated the extent of the problem, decided that targeting the development version of Mono would be much easier. Anyway, try it. They had a demo back in the day, so I imagine there's still one about. If you run into a billion language errors, I'd consider a proper port (if the codebase is small). If it works, make sure you have test cases to really test it thoroughly.
{ "language": "en", "url": "https://stackoverflow.com/questions/158618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Marshal C++ "string" class in C# P/Invoke I have a function in a native DLL defined as follows: #include <string> void SetPath(string path); I tried to put this in Microsoft's P/Invoke Interop Assistant, but it chokes on the "string" class (which I think is from MFC?). I have tried marshaling it as a variety of different types (C# String, char[], byte[]) but every time I either get a NotSupportedException or a Native Assembly Exception (depending on what marshaling I tried). Has anyone ever done Native/Managed Interop where the native string class is used? Is there any way to Marshal this? Am I going to have to write my own Marshaler? A: Looks like you're trying to use the C++ standard library string class. I doubt that will be easy to Marshal. Better to stick with a char * and Marshal as StringBuilder. That's what I usually do. You'll have to add a wrapper that generates the C++ string for you. A: The PInvoke interop assistant only supports C, not C++. Unfortunately the MFC String class (CString I believe?) is C++ and won't work through the assistant. Instead, try using the following: void SetPath(__in const WCHAR* path); A: Yes. You can. Actually, not just std::string, std::wstring, any standard C++ class or your own classes can be marshaled or instantiated and called from C#/.NET. The basic idea of instantiating a C++ object from the .NET world is to allocate the exact size of the C++ object from .NET, then call the constructor which is exported from the C++ DLL to initialize the object; then you will be able to call any of the functions to access that C++ object. If any of the methods involves other C++ classes, you will need to wrap them in a C# class as well; for methods with primitive types, you can simply P/Invoke them. If you have only a few methods to call, it would be simple; manual coding won't take long. When you are done with the C++ object, you call the destructor method of the C++ object, which is an exported function as well.
if it does not have one, then you just need to free your memory from .NET. Here is an example. public class SampleClass : IDisposable { [DllImport("YourDll.dll", EntryPoint="ConstructorOfYourClass", CharSet=CharSet.Ansi, CallingConvention=CallingConvention.ThisCall)] public extern static void SampleClassConstructor(IntPtr thisObject); [DllImport("YourDll.dll", EntryPoint="DoSomething", CharSet=CharSet.Ansi, CallingConvention=CallingConvention.ThisCall)] public extern static void DoSomething(IntPtr thisObject); [DllImport("YourDll.dll", EntryPoint="DoSomethingElse", CharSet=CharSet.Ansi, CallingConvention=CallingConvention.ThisCall)] public extern static void DoSomethingElse(IntPtr thisObject, int x); IntPtr ptr; public SampleClass(int sizeOfYourCppClass) { this.ptr = Marshal.AllocHGlobal(sizeOfYourCppClass); SampleClassConstructor(this.ptr); } public void DoSomething() { DoSomething(this.ptr); } public void DoSomethingElse(int x) { DoSomethingElse(this.ptr, x); } public void Dispose() { Marshal.FreeHGlobal(this.ptr); } } For the detail, please see the below link, C#/.NET PInvoke Interop SDK (I am the author of the SDK tool) Once you have the C# wrapper class for your C++ class ready, it is easy to implement ICustomMarshaler so that you can marshal the C++ object from .NET. http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.icustommarshaler.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/158628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How can I send an HTTP POST request to a server from Excel using VBA? What VBA code is required to perform an HTTP POST from an Excel spreadsheet? A: You can use ServerXMLHTTP in a VBA project by adding a reference to MSXML. * *Open the VBA Editor (usually by editing a Macro) *Go to the list of Available References *Check Microsoft XML *Click OK. (from Referencing MSXML within VBA Projects) The ServerXMLHTTP MSDN documentation has full details about all the properties and methods of ServerXMLHTTP. In short though, it works basically like this: * *Call open method to connect to the remote server *Call send to send the request. *Read the response via responseXML, responseText, responseStream or responseBody A: In addition to the answer of Bill the Lizard: Most of the backends parse the raw post data. In PHP for example, you will have an array $_POST in which individual variables within the post data will be stored. In this case you have to use an additional header "Content-type: application/x-www-form-urlencoded": Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1") URL = "http://www.somedomain.com" objHTTP.Open "POST", URL, False objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" objHTTP.setRequestHeader "Content-type", "application/x-www-form-urlencoded" objHTTP.send "var1=value1&var2=value2&var3=value3" Otherwise, you have to read the raw post data on the variable "$HTTP_RAW_POST_DATA". A: If you need it to work on both Mac and Windows, you can use QueryTables: With ActiveSheet.QueryTables.Add(Connection:="URL;http://carbon.brighterplanet.com/flights.txt", Destination:=Range("A2")) .PostText = "origin_airport=MSN&destination_airport=ORD" .RefreshStyle = xlOverwriteCells .SaveData = True .Refresh End With Notes: * *Regarding output... I don't know if it's possible to return the results to the same cell that called the VBA function. In the example above, the result is written into A2. *Regarding input... 
If you want the results to refresh when you change certain cells, make sure those cells are the argument to your VBA function. *This won't work on Excel for Mac 2008, which doesn't have VBA. Excel for Mac 2011 got VBA back. For more details, you can see my full summary about "using web services from Excel." A: Set objHTTP = CreateObject("MSXML2.ServerXMLHTTP") URL = "http://www.somedomain.com" objHTTP.Open "POST", URL, False objHTTP.setRequestHeader "User-Agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)" objHTTP.send "" Alternatively, for greater control over the HTTP request you can use WinHttp.WinHttpRequest.5.1 in place of MSXML2.ServerXMLHTTP. A: To complete the response of the other users: For this I have created an "WinHttp.WinHttpRequest.5.1" object. Send a post request with some data from Excel using VBA: Dim LoginRequest As Object Set LoginRequest = CreateObject("WinHttp.WinHttpRequest.5.1") LoginRequest.Open "POST", "http://...", False LoginRequest.setRequestHeader "Content-type", "application/x-www-form-urlencoded" LoginRequest.send ("key1=value1&key2=value2") Send a get request with token authentication from Excel using VBA: Dim TCRequestItem As Object Set TCRequestItem = CreateObject("WinHttp.WinHttpRequest.5.1") TCRequestItem.Open "GET", "http://...", False TCRequestItem.setRequestHeader "Content-Type", "application/xml" TCRequestItem.setRequestHeader "Accept", "application/xml" TCRequestItem.setRequestHeader "Authorization", "Bearer " & token TCRequestItem.send A: I did this before using the MSXML library and then using the XMLHttpRequest object, see here.
{ "language": "en", "url": "https://stackoverflow.com/questions/158633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "160" }
Q: Doing a Cast Within a LINQ Query Is it possible to do a cast within a LINQ query (for the compiler's sake)? The following code isn't terrible, but it would be nice to make it into one query: Content content = dataStore.RootControl as Controls.Content; List<TabSection> tabList = (from t in content.ChildControls select t).OfType<TabSection>().ToList(); List<Paragraph> paragraphList = (from t in tabList from p in t.ChildControls select p).OfType<Paragraph>().ToList(); List<Line> parentLineList = (from p in paragraphList from pl in p.ChildControls select pl).OfType<Line>().ToList(); The code continues on with a few more queries, but the gist is I have to create a List out of each query in order for the compiler to know that all of the objects in content.ChildControls are of type TabSection and all of the objects in t.ChildControls are of type Paragraph...and so on and so forth. Is there a way within the LINQ query to tell the compiler that t in from t in content.ChildControls is a TabSection? A: List<TabSection> tabList = (from t in content.ChildControls let ts = t as TabSection where ts != null select ts).ToList(); A: Try this: from TabSection t in content.ChildControls Also, even if this were not available (or for a different, future scenario you may encounter), you wouldn't be restricted to converting everything to Lists. Converting to a List causes query evaluation on the spot. But if you remove the ToList call, you could work with the IEnumerable type, which would continue to defer the execution of the query until you actually iterate or store in a real container.
A: Depending on what you are trying to do, one of these might do the trick: List<Line> parentLineList1 = (from t in content.ChildControls.OfType<TabSection>() from p in t.ChildControls.OfType<Paragraph>() from pl in p.ChildControls.OfType<Line>() select pl).ToList(); List<Line> parentLineList2 = (from TabSection t in content.ChildControls from Paragraph p in t.ChildControls from Line pl in p.ChildControls select pl).ToList(); Note that one uses OfType<T>(), which you were using. This will filter the results and return only the items of the specified type. The second query implicitly uses Cast<T>(), which casts the results into the specified type. If any item cannot be cast, an exception is thrown. As mentioned by Turbulent Intellect, you should refrain from calling ToList() as long as possible, or try to avoid it altogether. A: yes you can do the following: List<TabSection> tabList = (from t in content.ChildControls where t as TabSection != null select t as TabSection).ToList(); A: And here's the query method form. List<Line> parentLineList = content.ChildControls.OfType<TabSections>() .SelectMany(t => t.ChildControls.OfType<Paragraph>()) .SelectMany(p => p.ChildControls.OfType<Line>()) .ToList();
{ "language": "en", "url": "https://stackoverflow.com/questions/158634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Same Presenter working with different Repositories How do you manage the same presenter working with different repositories using the MVP pattern? I just have multiple constructor overloads and the presenter simply uses the one that is suitable for the scenario. AddCustomerPresenter presenter = new AddCustomerPresenter(this,customerRepository); presenter.AddCustomer(); presenter = new AddCustomerPresenter(this,archiveRepository); presenter.Archive(); A: Why not have IRepository { /* .. */ } CustomerRepository : IRepository { /* .. */ } ArchiveRepository : IRepository { /* .. */ } and then AddCustomerPresenter { IRepository Store {get;set;} public AddCustomerPresenter(IRepository store) { /*...*/ } /*...*/ } Your presenter should NOT have any static dependency on ANY implementation of IRepository. If you find there's no other way, you need to rework your design because it's probably flawed. A: Thanks Will! But CustomerRepository and ArchiveRepository are not related in any way. They are two completely different things.
{ "language": "en", "url": "https://stackoverflow.com/questions/158651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What should I do when 'svn cleanup' fails? I have a lot of changes in a working folder, and something screwed up trying to do an update. Now when I issue an 'svn cleanup' I get: >svn cleanup . svn: In directory '.' svn: Error processing command 'modify-wcprop' in '.' svn: 'MemPoolTests.cpp' is not under version control MemPoolTests.cpp is a new file another developer added and was brought down in the update. It did not exist in my working folder before. Is there anything I can do to try and move forward without having to check out a fresh copy of the repository? Clarification: Thanks for the suggestions about moving the directory out of the way and bringing down a new copy. I know that is an option, but it is one I'd like to avoid since there are many changes nested several directories deep (this should have been a branch...) I'm hoping for a more aggressive way of doing the cleanup, maybe some way of forcing the file SVN is having trouble with back into a known state (and I tried deleting the working copy of it ... that didn't help). A: $ ls -la .svn $ rm -f .svn/lock Then $ svn update Hope it helps A: I had the exact same problem. I couldn't commit, and cleanup would fail. Using a command-line client I was able to see an error message indicating that it was failing to move a file from .svn/props to .svn/prop-base. I looked at the specific file and found that it was marked read-only. After removing the read-only attribute I was able to clean up the folder and then commit my changes. A: It's possible that you have a problem with two filenames differing only by case. If you ran into this problem, creating another working copy directory does not solve the problem. Current Windows (i.e. crappy) filesystems simply do not grok the difference between Filename and FILEname. You have two possible fixes: * *Check out on a platform with a real filesystem (Unix-based), rename the file, and commit changes. 
*When you are bound to Windows, you can rename files in the Eclipse SVN repository browser, which does recognise the difference, and rename the file there. *You can also rename the problematic files remotely from any command-line SVN client using svn rename -m "broken filename case" http://server/repo/FILEname http://server/repo/filename A: I tried to do svn cleanup via the console and got an error like: svn: E720002: Can't open file '..\.svn\pristine\40\40d53d69871f4ff622a3fbb939b6a79932dc7cd4.svn-base': The system cannot find the file specified. So I created this file manually (empty) and did svn cleanup again. This time it was done OK. A: If all else fails: * *Check out into a new folder. *Copy your modified files over. *Check back in. *Zip the old folder up somewhere (you never know + paranoia is good) before deleting it and using the new one. A: Run the svn cleanup command in a terminal (if it fails from Eclipse, which was my case): ~/path/to/svn-folder/$ svn cleanup I tried different solutions explained here, but none worked. Team → Update to Head fails: svn: E155004: There are unfinished work items in '/home/user/path/to/svn-folder'; run 'svn cleanup' first. Team → Cleanup fails with the same error. The solution that worked for me: run the svn cleanup command in a terminal. The command succeeded. Then Team → Update in Eclipse worked again. Note: my SVN version is 1.9.3. Also check Chris's answer if svn cleanup does not work. A: I had the same problem. For me the cause was a conflict between EasySVN and TortoiseSVN (or just SVN). I had auto update and commit with EasySVN (which wasn't working). When I turned this off, I was unable to clean up, commit, or update. None of the above solutions worked, but rebooting did :) A: The latest version (I'm using 1.9.5) solves this problem by adding a "Break locks" option on the clean up menu. Just make sure this check box is selected when doing clean up. A: When starting all over is not an option... 
I deleted the log file in the .svn directory (I also deleted the offending file in .svn/props-base), did a cleanup, and resumed my update. A: If the issue is case sensitivity (which can be a problem when checking out to a Mac, as well as Windows) and you don't have the option of checking out onto a *nix system, the following should work. Here's the process from the beginning: % svn co http://[domain]/svn/mortgages mortgages (Checkout ensues… then…) svn: In directory 'mortgages/trunk/images/rates' svn: Can't open file 'mortgages/trunk/images/rates/.svn/tmp/text-base/Header_3_nobookmark.gif.svn-base': No such file or directory Here SVN is trying to check out two files with similar names that differ only by case - Header_3_noBookmark.gif and Header_3_nobookmark.gif. Mac filesystems default to case insensitivity in a way that causes SVN to choke in situations like this. So... % cd mortgages/trunk/images/rates/ % svn up svn: Working copy '.' locked svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for details) However, running svn cleanup doesn't work, as we know. % svn cleanup svn: In directory '.' svn: Error processing command 'modify-wcprop' in '.' svn: 'spacer.gif' is not under version control spacer.gif isn't the problem here… It just can't move past the previous error to the next file. So I deleted all of the files from the directory other than .svn, and removed the SVN log. This made cleanup work, so that I could check out and rename the offending file. % rm *; rm -rf .svn/log; svn cleanup % svn up Header_3_nobookmark.gif A Header_3_nobookmark.gif Updated to revision 1087. % svn mv Header_3_nobookmark.gif foo A foo D Header_3_nobookmark.gif % svn up A spacer.gif A Header_3_noBookmark.gif Following this, I was able to go back to the root directory of the project, and run svn up to check out the rest of it. A: (Before you try moving folders and doing a new checkout.) 
Delete the folder the offending file(s) are in - yes, even the .svn folder, then do an svn cleanup on the very top / parent folder. A: Subclipse gets confused by Windows' truly diabolical locking behaviour. Unlocker is your friend. This can find locked files and forcibly release the locks. A: I just had this same problem on Windows 7 64-bit. I ran console as administrator and deleted the .svn directory from the problem directory (got an error about logs or something, but ignored it). Then, in explorer, I deleted the problem directory which was no longer showing as under version control. Then, I ran an update and things proceeded as expected. A: Whenever I have similar problems I use rsync (NB: I use Linux or Mac OS X) to help out like so: # Go to the parent directory cd dir_above_borked # Rename corrupted directory mv borked_dir borked_dir.bak # Checkout a fresh copy svn checkout svn://... borked_dir # Copy the modified files to the fresh checkout # - test rsync # (possibly use -c to verify all content and show only actually changed files) rsync -nav --exclude=.svn borked_dir.bak/ borked_dir/ # - If all ok, run rsync for real # (possibly using -c again, possibly not using -v) rsync -av --exclude=.svn borked_dir.bak/ borked_dir/ That way you have a fresh checkout, but with the same working files. For me this always works like a charm. A: When I face this issue with TortoiseSVN (Windows), I go to Cygwin and run the 'svn cleanup' from there; it cleans up correctly for me, after which everything works from TortoiseSVN. A: I ran into that too lately. The trick for me was after selecting "Clean up", in the popup options dialog, check "Break Locks", and then "OK". It cleaned up successfully for me. A: This answer only applies to versions before 1.7 (thanks @ŁukaszBachman). 
Subversion stores its information per folder (in .svn), so if you are just dealing with a subfolder you don't need to check out the whole repository - just the folder that is borked: cd dir_above_borked mv borked_dir borked_dir.bak svn update borked_dir This will give you a good working copy of the borked folder, but you still have your changes backed up in borked_dir.bak. The same principle applies with Windows/TortoiseSVN. If you have changes in an isolated folder, have a look at: svn checkout -N borked_dir # Non-recursive, but deprecated or svn checkout --depth=files borked_dir # 'depth' is new territory to me, but do 'svn help checkout' A: Things have changed with SVN 1.7, and the popular solution of deleting the log file in the .svn directory isn't feasible with the move to a database working-copy implementation. Here's what I did that seemed to work: * *Delete the .svn directory for your working copy. *Start a new checkout in a new, temporary directory. *Cancel the checkout (we don't want to wait for everything to get pulled down). *Run a cleanup on this cancelled checkout. *Now we have a new .svn directory with a clean database (although no/few files). *Copy this .svn into your old, corrupted working directory. *Run svn update and it should bring your new partial .svn directory up to speed with your old working directory. That's all a little confusing, process-wise. Essentially, what we're doing is deleting the corrupt .svn, then creating a new .svn for the same checkout path. We then move this new .svn to our old working directory and update it to the repo. I just did this in TSVN and it seems to work fine and not require a full checkout and download. 
-Jody A: Take a look at http://www.anujvarma.com/svn-cleanup-failedprevious-operation-has-not-finished-run-cleanup-if-it-was-interrupted/ Summary of fix from above link (Thanks to Anuj Varma) * *Install the sqlite command-line shell (sqlite-tools-win32) from http://www.sqlite.org/download.html *sqlite3 .svn/wc.db "select * from work_queue" The SELECT should show you your offending folder/file as part of the work queue. What you need to do is delete this item from the work queue. *sqlite3 .svn/wc.db "delete from work_queue" That’s it. Now, you can run cleanup again – and it should work. Or you can proceed directly to the task you were doing before being prompted to run cleanup (adding a new file etc.) A: I faced the same issue. After some searching on the Internet, I found the article below. Then I realized that I was logged in as a user different from the one I had set up SVN under; basically a permission issue. A: There are some very good suggestions in the previous answers, but if you are having an issue with TortoiseSVN on Windows (a good product, but ...) always fall back to the command line and do a simple "svn cleanup" first. In many circumstances the Windows client will not run the cleanup command, but cleanup works fine using the SVN command-line utility. A: I also had the problem where cleanup would fail. Originally I was trying to commit some code, but it said: svn: E155004: There are unfinished work items in '/my/path/to/files'; run 'svn cleanup' first. But when I tried to clean up: svn: E155007: '/my/path/to/files' is not a working copy directory In my case, it turns out that I had a revision conflict. My svn folder contained .mine, .r1, and .r2 files. Once I resolved the conflict, the cleanup ran successfully. A: It might not apply in all situations, but when I recently encountered this problem my "fix" was to upgrade the Subversion package on my system. 
I had been running 1.4.something, and when I upgraded to the latest (1.6.6 in my case) the checkout worked. (I did try re-downloading it, but a checkout to a clean directory always hung at the same spot.) A: Read-only locking sometimes happens on network drives with Windows. Try to disconnect and reconnect it again. Then clean up and update. A: After going through most of the solutions that are cited here, I was still getting the error. The issue was case-insensitive OS X. Checking out a directory that has two files with the same name, but different capitalization, causes an issue. For example, ApproximationTest.java and Approximationtest.java should not be in the same directory. As soon as we get rid of one of the files, the issue goes away. A: I hit an issue where following an Update, SVN showed a folder as being conflicted. Strangely, this was only visible through the command line - TortoiseSVN thought it was all fine. #>svn st ! my_dir ! my_dir\sub_dir svn cleanup, svn revert, svn update and svn resolve were all unsuccessful at fixing this. I eventually solved the problem as follows: * *Look in the .svn directory for "sub_dir" *Use RC -> Properties to uncheck the 'read only' flag on the entries file *Open the entries file and delete the line "unfinished ..." and the corresponding checksum *Save, and re-enable the read-only flag *Repeat for the my_dir directory Following that, everything was fine. Note I didn't have any local changes, so I don't know if you'd be at risk if you did. I didn't use the delete / update method suggested by others - I got into this state by trying that on the my_dir/sub_dir/sub_sub_dir directory (which started with the same symptoms) - so I didn't want to risk making things worse again! Not quite on-topic, but maybe helpful if someone comes across this post as I did. A: I did sudo chmod 777 -R . to be able to change the permissions. Without sudo, it wouldn't work, giving me the same error as running other commands. 
Now you can do svn update or whatever, without having to scrap your entire directory and recreate it. This is especially helpful, since your IDE or text editor may already have certain tabs open, or have syncing problems. You don't need to scrap and replace your working directory with this method. A: I solved this problem by copying a colleague's .svn directory into mine and then updating my working copy. It was a nice, quick and clean solution. A: Answers here didn't help me, but before checking out the project again, I closed and opened Eclipse (Subversive is my SVN client) and the problem disappeared. A: While facing a similar issue, a manual merge in the repository sync view helped to solve it. One file name was conflicting with another, and the error clearly mentioned the issue. Renaming the newer file to a different name resolved it. A: I just removed the file svn-xxxxxxxx from the ~\.svn\tmp folder, where xxxxxxxx is a number.
{ "language": "en", "url": "https://stackoverflow.com/questions/158664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "255" }
Q: How do I delete a dirset of directories with Ant? I want to delete all directories and subdirectories under a root directory that contain "tmp" in their names. This should include any .svn files too. My first guess is to use <delete> <dirset dir="${root}"> <include name="**/*tmp*" /> </dirset> </delete> This does not seem to work, as you can't nest a dirset in a delete tag. Is this a correct approach, or should I be doing something else? * *ant version == 1.6.5. *java version == 1.6.0_04 A: try: <delete includeemptydirs="true"> <fileset dir="${root}"> <include name="**/*tmp*/*" /> </fileset> </delete> Thank you, flicken! A: I just wanted to add that the part of the solution that worked for me was appending /** to the end of the include path. I tried the following to delete Eclipse .settings directories: <delete includeemptydirs="true"> <fileset dir="${basedir}" includes="**/.settings" /> </delete> but it did not work until I changed it to the following: <delete includeemptydirs="true"> <fileset dir="${basedir}" includes="**/.settings/**" /> </delete> For some reason appending /** to the path deletes files in the matching directory, all files in all sub-directories, the sub-directories, and the matching directories. Appending /* only deletes files in the matching directory but will not delete the matching directory itself. A: Here's the answer that worked for me: <delete includeemptydirs="true"> <fileset dir="${root}" defaultexcludes="false"> <include name="**/*tmp*/**" /> </fileset> </delete> I had an added complication: I needed to remove .svn directories too. With defaultexcludes, .* files were being excluded, so the empty directories weren't really empty, and so weren't getting removed. The attribute includeemptydirs (thanks, flicken, XL-Plüschhase) enables the trailing ** wildcard to match an empty string.
{ "language": "en", "url": "https://stackoverflow.com/questions/158665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: onbeforeunload support detection I'd like to check if the current browser supports the onbeforeunload event. The common JavaScript way to do this does not seem to work: if (window.onbeforeunload) { alert('yes'); } else { alert('no'); } Actually, it only checks whether some handler has been attached to the event. Is there a way to detect if onbeforeunload is supported without detecting the particular browser name? A: Unfortunately kangax's answer doesn't work for Safari on iOS. In my testing, beforeunload was supported in every browser I tried except Safari on iOS :-( Instead, I suggest a different approach: The idea is simple. On the very first page visit, we don't actually know yet if beforeunload is supported. But on that very first page, we set up both an unload and a beforeunload handler. If the beforeunload handler fires, we set a flag saying that beforeunload is supported (actually beforeunloadSupported = "yes"). When the unload handler fires, if the flag hasn't been set, we set the flag that beforeunload is not supported. In the following we'll use localStorage ( supported in all the browsers I care about - see http://caniuse.com/namevalue-storage ) to get/set the flag. We could just as well have used a cookie, but I chose localStorage because there is no reason to send this information to the web server at every request. We just need a flag that survives page reloads. Once we've detected it once, it'll stay detected forever. With this, you can now call isBeforeunloadSupported() and it will tell you. (function($) { var field = 'beforeunloadSupported'; if (window.localStorage && window.localStorage.getItem && window.localStorage.setItem && !
window.localStorage.getItem(field)) { $(window).on('beforeunload', function () { window.localStorage.setItem(field, 'yes'); }); $(window).on('unload', function () { // If unload fires, and beforeunload hasn't set the field, // then beforeunload didn't fire and is therefore not // supported (cough * iPad * cough) if (! window.localStorage.getItem(field)) { window.localStorage.setItem(field, 'no'); } }); } window.isBeforeunloadSupported = function () { if (window.localStorage && window.localStorage.getItem && window.localStorage.getItem(field) && window.localStorage.getItem(field) == "yes" ) { return true; } else { return false; } } })(jQuery); Here is a full jsfiddle with example usage. Note that it will only have been detected on the second or any subsequent page loads on your site. If it is important to you to have it working on the very first page too, you could load an iframe on that page with a src attribute pointing to a page on the same domain with the detection here, make sure it has loaded and then remove it. That should ensure that the detection has been done so isBeforeunloadSupported() works even on the first page. But I didn't need that so I didn't put that in my demo. A: I realize I'm a bit late on this one, but I am dealing with this now, and I was thinking that something more like the following would be easier and more reliable. This is jQuery specific, but it should work with any system that allows you to bind and unbind events. $(window).bind('unload', function(){ alert('unload event'); }); window.onbeforeunload = function(){ $(window).unbind('unload'); return 'beforeunload event'; } This should unbind the unload event if the beforeunload event fires. Otherwise it will simply fire the unload. A: alert('onbeforeunload' in window); Alerts 'true' if onbeforeunload is a property of window (even if it is null). 
This should do the same thing: var supportsOnbeforeunload = false; for (var prop in window) { if (prop === 'onbeforeunload') { supportsOnbeforeunload = true; break; } } alert(supportsOnbeforeunload); Lastly: alert(typeof window.onbeforeunload != 'undefined'); Again, typeof window.onbeforeunload appears to be 'object', even if it currently has the value null, so this works. A: Cruster, "beforeunload" is not defined in the DOM-Events specification; this is an IE-specific feature. I think it was created in order to enable execution to be triggered before the standard "unload" event. In browsers other than IE you can use a capture-phase "unload" event listener, thus getting code executed before, for example, an inline body onunload handler. Also, the DOM doesn't offer any interfaces to test its support for a specific event; you can only test for support of an events group (MouseEvents, MutationEvents etc.) Meanwhile you can also refer to the DOM-Events specification http://www.w3.org/TR/DOM-Level-3-Events/events.html (unfortunately not supported in IE) Hope this information helps some A: I wrote about a more-or-less reliable inference for detecting event support in modern browsers some time ago. You can see on a demo page that "beforeunload" is supported in at least Safari 4+, FF3.x+ and IE. Edit: This technique is now used in jQuery, Prototype.js, Modernizr, and likely other scripts and libraries. A: Different approach: get the typeof if(typeof window.onbeforeunload == 'function') { alert("hello functionality!"); } A: onbeforeunload is also supported by FF, so testing for browser won't help. A: Mobile browsers tend not to support beforeunload, because the browser can go into the background without unloading the page, then be killed by the operating system at any time. However, all modern non-mobile browsers support it. Therefore, you can just check if the browser is a mobile browser.
To solve the problem I use: var isMobile = navigator.userAgent.match(/Android/i) || navigator.userAgent.match(/BlackBerry/i) || navigator.userAgent.match(/iPhone|iPad|iPod/i) || navigator.userAgent.match(/Opera Mini/i) || navigator.userAgent.match(/IEMobile/i); if (isMobile) { window.addEventListener("visibilitychange", function(e) { if (document.visibilityState == 'hidden') { console.log("beforeunload"); location.reload(); } }); } else { window.addEventListener("beforeunload", function(e) { console.log("beforeunload"); }); } A: It would probably be better to just find out by hand which browsers support it and then have your conditional more like: if( $.browser.msie ) { alert( 'no' ); } ...etc. The $.browser.msie is jQuery syntax, most frameworks have similar built-in functions since they use them so much internally. If you aren't using a framework then I'd suggest just taking a look at jQuery's implementation of those functions. A: I see that this is a very old thread, but the accepted answer incorrectly detects support for Safari on iOS, which caused me to investigate other strategies: if ('onbeforeunload' in window && typeof window['onbeforeunload'] === 'function') { // onbeforeunload is supported } else { // maybe bind to unload as a last resort } The second part of the if-check is necessary for Safari on iOS, which has the property set to null. Tested in Chrome 37, IE11, Firefox 32 and Safari for iOS 7.1
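The property-presence checks shown in these answers can be wrapped into a small helper. This is only a sketch of the inference, with an injectable target object so it can run outside a browser; the setAttribute fallback loosely mirrors the kangax-style technique mentioned above, and the helper name and stand-in objects are made up for illustration:

```javascript
// Returns true if the given target appears to support the named event.
function isEventSupported(eventName, target) {
  const prop = 'on' + eventName;
  // Most browsers expose supported events as (possibly null) properties.
  if (prop in target) return true;
  // Fallback inference: assign a handler attribute and see if the property
  // comes back as a function (older browsers behaved this way).
  if (typeof target.setAttribute === 'function') {
    target.setAttribute(prop, 'return;');
    const supported = typeof target[prop] === 'function';
    if (typeof target.removeAttribute === 'function') target.removeAttribute(prop);
    return supported;
  }
  return false;
}

// Outside a browser we can only demonstrate with stand-in objects:
const withProperty = { onbeforeunload: null }; // property present, no handler set
console.log(isEventSupported('beforeunload', withProperty)); // true

const attributeOnly = {
  attrs: {},
  setAttribute(name, value) { this.attrs[name] = value; this[name] = () => {}; },
  removeAttribute(name) { delete this.attrs[name]; delete this[name]; },
};
console.log(isEventSupported('beforeunload', attributeOnly)); // true

const unsupported = {};
console.log(isEventSupported('beforeunload', unsupported)); // false
```

In a real page the target would be window (or a freshly created element), and the localStorage-based detection above remains the safer route for Safari on iOS.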
{ "language": "en", "url": "https://stackoverflow.com/questions/158673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: TCP Connection Life How long can I expect a client/server TCP connection to last in the wild? I want it to stay permanently connected, but things happen, so the client will have to reconnect. At what point do I say that there's a problem in the code rather than a problem with some external equipment? A: It shouldn't really matter; you should design your code to automatically reconnect if that is the desired behavior. A: There really is no way to tell. There is nothing inherent to TCP that would cause the connection to just drop after a certain amount of time. Someone on a reliable connection could have years of uptime, while someone on a different connection could have to reconnect every 5 minutes. There is no way to tell or even guess. A: You will need some data going over the connection periodically to keep it alive - many OSes or firewalls will drop an inactive connection. A: I agree with Zan Lynx. There's no guarantee, but you can keep a connection alive almost indefinitely by sending data over it, assuming there are no connectivity or bandwidth issues. Generally I've gone for the application-level keep-alive approach, although usually this has been because it was in the client spec, so I've had to do it. But just send some short piece of data every minute or two, to which you expect some sort of acknowledgement. Whether you count one failure to acknowledge as the connection having failed is up to you. Generally this is what I have done in the past, although there was a case where I had to wait for three failed responses in a row to drop the connection, because the app at the other end of the connection was extremely flaky about responding to "are you there?" requests. If the connection fails, which at some point it probably will, even with machines on the same network, then just try to reestablish it. If that fails a set number of times then you have a problem. 
If your connection persistently fails after it's been connected for a while then, again, you have a problem. In both cases it's most likely some network issue, rather than your code, or maybe a problem with the TCP/IP stack on your machine (has been known: I encountered issues with this on an old version of QNX--it'd just randomly fall over). Having said that, you might have a software problem, and the only way to know for sure is often to attach a debugger, or to get some logging in there. E.g. if you can always connect successfully, but after a time you stop getting ACKs, even after reconnect, then maybe your server is deadlocking, or getting stuck in a loop or something. What's really useful is to set up a series of long-running tests under a variety of load conditions, from just sending the keep-alive "are you there?"/ack requests and responses, to absolutely battering the server. This will generally give you more confidence about your software components, and can be really useful in shaking out some really weird problems which won't necessarily cause a problem with your connection, although they might result in problems with the transactions taking place. For example, I was once writing a telecoms application server that provided services such as number translation, and we'd just leave it running for days at a time. The thing was that when Saturday came round, for the whole day, it would reject every call request that came in, which amounted to millions of calls, and we had no idea why. It turned out to be because of a single typo in some date conversion code that only caused a problem on Saturdays. Hope that helps. 
This means that connections will last forever, as long as the system on the other end responds to an occasional TCP-level exchange. In reality, many connections will be terminated after time, with a variety of criteria and situations. Two really good examples are: The remote client is using DHCP, the lease expires, and the IP address changes. Another example is firewalls, which seem to be increasingly intelligent, and can identify keep-alive traffic vs. real data, and close connections based on any high level criteria, especially idle time. How you want to implement reconnect logic depends a lot on your architecture, the working environment, and your performance goals. A: Pick a value. One drop every hour is probably fine. Ten unexpected connection drops in 5 minutes probably indicates a problem. TCP connections will generally last about two hours without any traffic. Either end can send keep-alive packets, which are, I think, just an ACK on the last received packet. This can usually be set per socket or by default on every TCP connection. An application level keep-alive is also possible. For a telnet style protocol like FTP, SMTP, POP or IMAP something like sending return, newline and getting back a command prompt.
{ "language": "en", "url": "https://stackoverflow.com/questions/158674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Spreadsheet-like functionality in web app I have a web app for commercial property management that needs spreadsheet-like functionality for doing budgets. I don't want to use Google Sheets because my users won't necessarily have a Google account. So is there anything out there that I could use? I looked and could only find SocialCalc, which wasn't quite good enough for me. Options: *ExtJS Grid Component (Open Source [GPL3] & Commercial License) *Infragistics Grid Component (Commercial License) *TreeGrid (Commercial License, Free Version has a maximum of 33 rows) A: If you don't mind implementing the logic yourself, the ExtJS grid component is a JavaScript grid component with lots of powerful features, and it is available in both open-source and commercial versions. A: I have used dhtmlXGrid successfully. There is an open-source version that you can use freely to get your application developed. Assuming everything works out, for $200 you can purchase a license for it and distribute it with your application. Very easy to use; create an HTML table structure with your data in it and then bind dhtmlXGrid to the table - it automatically turns the table cells into editable fields. Check it out here: http://www.dhtmlx.com/docs/products/dhtmlxGrid Again, you will need to implement the spreadsheet logic yourself, but dhtmlXGrid makes it straightforward to translate that into an editable column/row display. A: There is an excellent grid from FarPoint. They have web and WinForms grids available, and they are not too bad price-wise. A: Infragistics has spreadsheet-type functionality in their data grid product. A: You may want to give TreeGrid a try @ www.coqsoft.com. A: Try Telerik RadGrid. It is a pretty decent .NET user control which can easily render an XML datasource for user editing. It's also fully Ajax-enabled, to avoid delays when entering volumes of data.
{ "language": "en", "url": "https://stackoverflow.com/questions/158695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it OK to use HttpRuntime.Cache outside ASP.NET applications? Scott Hanselman says yes. Adding System.Web to your non-web project is a good way to get folks to panic. Another is adding a reference to Microsoft.VisualBasic in a C# application. Both are reasonable and darned useful things to do, though. MSDN says no. The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly. So what should I think? A: There shouldn't be any problem with using HttpRuntime.Cache. It's a sophisticated in-memory hash table that can be very useful outside of the web context. Still, it might be a bit of a code smell to reference HttpRuntime.Cache in a non-HTTP application, so it can be a good idea to wrap it behind some ICache interface and use that wherever possible. A: One thing to keep in mind is that Microsoft has released the .NET Framework Client Profile Setup Package. This is a version of the 3.5 framework that is targeted at client applications and has a reduced footprint. The Client Profile does not include the ASP.NET pieces of the framework. If your application depends on System.Web, it will not be able to take advantage of the Client Profile. See Scott Gu's Blog for more details. A: I realize this question is old, but in the interest of helping anyone who finds this via search, it's worth noting that .NET v4 includes a new general-purpose cache for this type of scenario. 
It's in the System.Runtime.Caching namespace: https://msdn.microsoft.com/en-us/library/dd997357(v=vs.110).aspx The static reference to the default cache instance is: MemoryCache.Default A: There doesn't seem to be anything in current versions of System.Web.Caching.Cache that depend on the HTTP runtime except for the Insert() method that accepts a CacheItemUpdateCallback, so Scott is right for the most part. This doesn't prevent Microsoft from modifying the class in the future to be more integrated with the HTTP infrastructure. I wrote a WeakReference-based lightweight cache in another answer. A: Don't use it, even if it does work it can stop working in the next service pack/version. The moment you do something based on internal implementation details and not the contract (in this case, MSDN) you can expect to get into trouble in the future. A: I once used it, but it didn't feel right and IIRC increased the memory footprint quite dramatically. Instead, I implemented my own lightweight cache mechanism which is surprisingly easy to do. It utilized the WeakReference class which allowed the cache to keep references to the object, but also allows the Garbage Collector to reclaim the memory if the reference is unused. The only thing I didn't have was a separate thread to clean up stale items in the cache. What I did do is if the cache had > x items in it, I would go through all the cached items and turf out the old items before adding the new item. If you need something more robust, use something like the MS Enterprise Library Caching Application Block. A: If you are looking for a general solution: This is a typical situation for the dependency injection approach. Using this approach you can follow Scott Hanselman and MSDN! Inject a System.Web dependency like HttpRuntime.Cache without having a System.Web reference in your library. A: Why not avoid the question completely and use the Caching Block of Enterprise Library? 
You could use System.Web.Caching, but you probably won't get Microsoft support if you run into issues, and you'll raise eyebrows, so it might not be worth it.
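One answer above describes rolling a lightweight cache on top of the WeakReference class, so the cache holds entries while they are in use but lets the garbage collector reclaim them. The same pattern, sketched in Python rather than C# (the stdlib's `weakref.WeakValueDictionary` does the reference bookkeeping), just to show the shape of the idea:

```python
# A weak-reference cache: entries survive only while something else
# holds a strong reference to the cached object. Once the last strong
# reference is gone, the collector may reclaim it and get() returns None.
import gc
import weakref

class WeakCache:
    def __init__(self):
        self._items = weakref.WeakValueDictionary()

    def put(self, key, value):
        self._items[key] = value

    def get(self, key):
        # None if the entry was never added or has been collected.
        return self._items.get(key)

class Report:  # cached values must be weak-referenceable (not int/str/list)
    def __init__(self, data):
        self.data = data

cache = WeakCache()
report = Report([1, 2, 3])
cache.put("q3-budget", report)
print(cache.get("q3-budget").data)  # [1, 2, 3] while a strong ref exists

del report                          # drop the only strong reference
gc.collect()
print(cache.get("q3-budget"))       # None -- entry was reclaimed
```

The trade-off matches the answer above: no expiration thread is needed, but you get no control over exactly when entries disappear, which is why the answer pairs it with a size check that evicts stale items before inserting new ones.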
{ "language": "en", "url": "https://stackoverflow.com/questions/158703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: How do I properly clean up Excel interop objects? I'm using the Excel interop in C# (ApplicationClass) and have placed the following code in my finally clause: while (System.Runtime.InteropServices.Marshal.ReleaseComObject(excelSheet) != 0) { } excelSheet = null; GC.Collect(); GC.WaitForPendingFinalizers(); Although this kind of works, the Excel.exe process is still in the background even after I close Excel. It is only released once my application is manually closed. What am I doing wrong, or is there an alternative to ensure interop objects are properly disposed of? A: The accepted answer here is correct, but also take note that not only "two dot" references need to be avoided, but also objects that are retrieved via the index. You also do not need to wait until you are finished with the program to clean up these objects, it's best to create functions that will clean them up as soon as you're finished with them, when possible. Here is a function I created that assigns some properties of a Style object called xlStyleHeader: public Excel.Style xlStyleHeader = null; private void CreateHeaderStyle() { Excel.Styles xlStyles = null; Excel.Font xlFont = null; Excel.Interior xlInterior = null; Excel.Borders xlBorders = null; Excel.Border xlBorderBottom = null; try { xlStyles = xlWorkbook.Styles; xlStyleHeader = xlStyles.Add("Header", Type.Missing); // Text Format xlStyleHeader.NumberFormat = "@"; // Bold xlFont = xlStyleHeader.Font; xlFont.Bold = true; // Light Gray Cell Color xlInterior = xlStyleHeader.Interior; xlInterior.Color = 12632256; // Medium Bottom border xlBorders = xlStyleHeader.Borders; xlBorderBottom = xlBorders[Excel.XlBordersIndex.xlEdgeBottom]; xlBorderBottom.Weight = Excel.XlBorderWeight.xlMedium; } catch (Exception ex) { throw ex; } finally { Release(xlBorderBottom); Release(xlBorders); Release(xlInterior); Release(xlFont); Release(xlStyles); } } private void Release(object obj) { // Errors are ignored per Microsoft's suggestion for this type of 
function: // http://support.microsoft.com/default.aspx/kb/317109 try { System.Runtime.InteropServices.Marshal.ReleaseComObject(obj); } catch { } } Notice that I had to set xlBorders[Excel.XlBordersIndex.xlEdgeBottom] to a variable in order to clean that up (Not because of the two dots, which refer to an enumeration which does not need to be released, but because the object I'm referring to is actually a Border object that does need to be released). This sort of thing is not really necessary in standard applications, which do a great job of cleaning up after themselves, but in ASP.NET applications, if you miss even one of these, no matter how often you call the garbage collector, Excel will still be running on your server. It requires a lot of attention to detail and many test executions while monitoring the Task Manager when writing this code, but doing so saves you the hassle of desperately searching through pages of code to find the one instance you missed. This is especially important when working in loops, where you need to release EACH INSTANCE of an object, even though it uses the same variable name each time it loops. A: After trying * *Release COM objects in reverse order *Add GC.Collect() and GC.WaitForPendingFinalizers() twice at the end *No more than two dots *Close workbook and quit application *Run in release mode the final solution that works for me is to move one set of GC.Collect(); GC.WaitForPendingFinalizers(); that we added to the end of the function to a wrapper, as follows: private void FunctionWrapper(string sourcePath, string targetPath) { try { FunctionThatCallsExcel(sourcePath, targetPath); } finally { GC.Collect(); GC.WaitForPendingFinalizers(); } } A: I followed this exactly... But I still ran into issues 1 out of 1000 times. Who knows why. Time to bring out the hammer... Right after the Excel Application class is instantiated I get a hold of the Excel process that was just created. 
excel = new Microsoft.Office.Interop.Excel.Application(); var process = Process.GetProcessesByName("EXCEL").OrderByDescending(p => p.StartTime).First(); Then once I've done all the above COM clean-up, I make sure that process isn't running. If it is still running, kill it! if (!process.HasExited) process.Kill(); A: Excel does not quit because your application is still holding references to COM objects. I guess you're invoking at least one member of a COM object without assigning it to a variable. For me it was the excelApp.Worksheets object which I directly used without assigning it to a variable: Worksheet sheet = excelApp.Worksheets.Open(...); ... Marshal.ReleaseComObject(sheet); I didn't know that internally C# created a wrapper for the Worksheets COM object which didn't get released by my code (because I wasn't aware of it) and was the cause why Excel was not unloaded. I found the solution to my problem on this page, which also has a nice rule for the usage of COM objects in C#: Never use two dots with COM objects. So with this knowledge the right way of doing the above is: Worksheets sheets = excelApp.Worksheets; // <-- The important part Worksheet sheet = sheets.Open(...); ... Marshal.ReleaseComObject(sheets); Marshal.ReleaseComObject(sheet); POST MORTEM UPDATE: I want every reader to read this answer by Hans Passant very carefully as it explains the trap I and lots of other developers stumbled into. When I wrote this answer years ago I didn't know about the effect the debugger has to the garbage collector and drew the wrong conclusions. 
I keep my answer unaltered for the sake of history but please read this link and don't go the way of "the two dots": Understanding garbage collection in .NET and Clean up Excel Interop Objects with IDisposable A: ¨°º¤ø„¸ Shoot Excel proc and chew bubble gum ¸„ø¤º°¨ public class MyExcelInteropClass { Excel.Application xlApp; Excel.Workbook xlBook; public void dothingswithExcel() { try { /* Do stuff manipulating cells sheets and workbooks ... */ } catch {} finally {KillExcelProcess(xlApp);} } static void KillExcelProcess(Excel.Application xlApp) { if (xlApp != null) { int excelProcessId = 0; GetWindowThreadProcessId(xlApp.Hwnd, out excelProcessId); Process p = Process.GetProcessById(excelProcessId); p.Kill(); xlApp = null; } } [DllImport("user32.dll")] static extern int GetWindowThreadProcessId(int hWnd, out int lpdwProcessId); } A: You need to be aware that Excel is very sensitive to the culture you are running under as well. You may find that you need to set the culture to EN-US before calling Excel functions. This does not apply to all functions - but some of them. CultureInfo en_US = new System.Globalization.CultureInfo("en-US"); System.Threading.Thread.CurrentThread.CurrentCulture = en_US; string filePathLocal = _applicationObject.ActiveWorkbook.Path; System.Threading.Thread.CurrentThread.CurrentCulture = orgCulture; This applies even if you are using VSTO. For details: http://support.microsoft.com/default.aspx?scid=kb;en-us;Q320369 A: "Never use two dots with COM objects" is a great rule of thumb to avoid leakage of COM references, but Excel PIA can lead to leakage in more ways than apparent at first sight. One of these ways is subscribing to any event exposed by any of the Excel object model's COM objects. For example, subscribing to the Application class's WorkbookOpen event. Some theory on COM events COM classes expose a group of events through call-back interfaces. 
In order to subscribe to events, the client code can simply register an object implementing the call-back interface and the COM class will invoke its methods in response to specific events. Since the call-back interface is a COM interface, it is the duty of the implementing object to decrement the reference count of any COM object it receives (as a parameter) for any of the event handlers. How Excel PIA expose COM Events Excel PIA exposes COM events of Excel Application class as conventional .NET events. Whenever the client code subscribes to a .NET event (emphasis on 'a'), PIA creates an instance of a class implementing the call-back interface and registers it with Excel. Hence, a number of call-back objects get registered with Excel in response to different subscription requests from the .NET code. One call-back object per event subscription. A call-back interface for event handling means that, PIA has to subscribe to all interface events for every .NET event subscription request. It cannot pick and choose. On receiving an event call-back, the call-back object checks if the associated .NET event handler is interested in the current event or not and then either invokes the handler or silently ignores the call-back. Effect on COM instance reference counts All these call-back objects do not decrement the reference count of any of the COM objects they receive (as parameters) for any of the call-back methods (even for the ones that are silently ignored). They rely solely on the CLR garbage collector to free up the COM objects. Since GC run is non-deterministic, this can lead to the holding off of Excel process for a longer duration than desired and create an impression of a 'memory leak'. Solution The only solution as of now is to avoid the PIA’s event provider for the COM class and write your own event provider which deterministically releases COM objects. 
For the Application class, this can be done by implementing the AppEvents interface and then registering the implementation with Excel by using the IConnectionPointContainer interface. The Application class (and for that matter all COM objects exposing events using the callback mechanism) implements the IConnectionPointContainer interface. A: UPDATE: Added C# code, and link to Windows Jobs I spent some time trying to figure out this problem, and at the time XtremeVBTalk was the most active and responsive. Here is a link to my original post, Closing an Excel Interop process cleanly, even if your application crashes. Below is a summary of the post, and the code copied to this post. * *Closing the Interop process with Application.Quit() and Process.Kill() works for the most part, but fails if the application crashes catastrophically. I.e. if the app crashes, the Excel process will still be running loose. *The solution is to let the OS handle the cleanup of your processes through Windows Job Objects using Win32 calls. When your main application dies, the associated processes (i.e. Excel) will get terminated as well. I found this to be a clean solution because the OS is doing the real work of cleaning up. All you have to do is register the Excel process. Windows Job Code Wraps the Win32 API Calls to register Interop processes. 
public enum JobObjectInfoType { AssociateCompletionPortInformation = 7, BasicLimitInformation = 2, BasicUIRestrictions = 4, EndOfJobTimeInformation = 6, ExtendedLimitInformation = 9, SecurityLimitInformation = 5, GroupInformation = 11 } [StructLayout(LayoutKind.Sequential)] public struct SECURITY_ATTRIBUTES { public int nLength; public IntPtr lpSecurityDescriptor; public int bInheritHandle; } [StructLayout(LayoutKind.Sequential)] struct JOBOBJECT_BASIC_LIMIT_INFORMATION { public Int64 PerProcessUserTimeLimit; public Int64 PerJobUserTimeLimit; public Int16 LimitFlags; public UInt32 MinimumWorkingSetSize; public UInt32 MaximumWorkingSetSize; public Int16 ActiveProcessLimit; public Int64 Affinity; public Int16 PriorityClass; public Int16 SchedulingClass; } [StructLayout(LayoutKind.Sequential)] struct IO_COUNTERS { public UInt64 ReadOperationCount; public UInt64 WriteOperationCount; public UInt64 OtherOperationCount; public UInt64 ReadTransferCount; public UInt64 WriteTransferCount; public UInt64 OtherTransferCount; } [StructLayout(LayoutKind.Sequential)] struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION { public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation; public IO_COUNTERS IoInfo; public UInt32 ProcessMemoryLimit; public UInt32 JobMemoryLimit; public UInt32 PeakProcessMemoryUsed; public UInt32 PeakJobMemoryUsed; } public class Job : IDisposable { [DllImport("kernel32.dll", CharSet = CharSet.Unicode)] static extern IntPtr CreateJobObject(object a, string lpName); [DllImport("kernel32.dll")] static extern bool SetInformationJobObject(IntPtr hJob, JobObjectInfoType infoType, IntPtr lpJobObjectInfo, uint cbJobObjectInfoLength); [DllImport("kernel32.dll", SetLastError = true)] static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process); private IntPtr m_handle; private bool m_disposed = false; public Job() { m_handle = CreateJobObject(null, null); JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION(); info.LimitFlags = 0x2000; 
JOBOBJECT_EXTENDED_LIMIT_INFORMATION extendedInfo = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION(); extendedInfo.BasicLimitInformation = info; int length = Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)); IntPtr extendedInfoPtr = Marshal.AllocHGlobal(length); Marshal.StructureToPtr(extendedInfo, extendedInfoPtr, false); if (!SetInformationJobObject(m_handle, JobObjectInfoType.ExtendedLimitInformation, extendedInfoPtr, (uint)length)) throw new Exception(string.Format("Unable to set information. Error: {0}", Marshal.GetLastWin32Error())); } #region IDisposable Members public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } #endregion private void Dispose(bool disposing) { if (m_disposed) return; if (disposing) {} Close(); m_disposed = true; } public void Close() { Win32.CloseHandle(m_handle); m_handle = IntPtr.Zero; } public bool AddProcess(IntPtr handle) { return AssignProcessToJobObject(m_handle, handle); } } Note about Constructor code * *In the constructor, the info.LimitFlags = 0x2000; is called. 0x2000 is the JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE enum value, and this value is defined by MSDN as: Causes all processes associated with the job to terminate when the last handle to the job is closed. Extra Win32 API Call to get the Process ID (PID) [DllImport("user32.dll", SetLastError = true)] public static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId); Using the code Excel.Application app = new Excel.ApplicationClass(); Job job = new Job(); uint pid = 0; Win32.GetWindowThreadProcessId(new IntPtr(app.Hwnd), out pid); job.AddProcess(Process.GetProcessById((int)pid).Handle); A: A great article on releasing COM objects is 2.5 Releasing COM Objects (MSDN). The method that I would advocate is to null your Excel.Interop references if they are non-local variables, and then call GC.Collect() and GC.WaitForPendingFinalizers() twice. Locally scoped Interop variables will be taken care of automatically. 
This removes the need to keep a named reference for every COM object. Here's an example taken from the article: public class Test { // These instance variables must be nulled or Excel will not quit private Excel.Application xl; private Excel.Workbook book; public void DoSomething() { xl = new Excel.Application(); xl.Visible = true; book = xl.Workbooks.Add(Type.Missing); // These variables are locally scoped, so we need not worry about them. // Notice I don't care about using two dots. Excel.Range rng = book.Worksheets[1].UsedRange; } public void CleanUp() { book = null; xl.Quit(); xl = null; GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); } } These words are straight from the article: In almost all situations, nulling the RCW reference and forcing a garbage collection will clean up properly. If you also call GC.WaitForPendingFinalizers, garbage collection will be as deterministic as you can make it. That is, you'll be pretty sure exactly when the object has been cleaned up—on the return from the second call to WaitForPendingFinalizers. As an alternative, you can use Marshal.ReleaseComObject. However, note that you are very unlikely to ever need to use this method. A: This worked for a project I was working on: excelApp.Quit(); Marshal.ReleaseComObject (excelWB); Marshal.ReleaseComObject (excelApp); excelApp = null; We learned that it was important to set every reference to an Excel COM object to null when you were done with it. This included Cells, Sheets, and everything. A: First - you never have to call Marshal.ReleaseComObject(...) or Marshal.FinalReleaseComObject(...) when doing Excel interop. It is a confusing anti-pattern, but any information about this, including from Microsoft, that indicates you have to manually release COM references from .NET is incorrect. The fact is that the .NET runtime and garbage collector correctly keep track of and clean up COM references. 
For your code, this means you can remove the whole `while (...)` loop at the top. Second, if you want to ensure that the COM references to an out-of-process COM object are cleaned up when your process ends (so that the Excel process will close), you need to ensure that the garbage collector runs. You do this correctly with calls to GC.Collect() and GC.WaitForPendingFinalizers(). Calling this twice is safe, and ensures that cycles are definitely cleaned up too (though I'm not sure it's needed, and would appreciate an example that shows this). Third, when running under the debugger, local references will be artificially kept alive until the end of the method (so that local variable inspection works). So GC.Collect() calls are not effective for cleaning objects like rng.Cells from the same method. You should split the code doing the COM interop from the GC cleanup into separate methods. (This was a key discovery for me, from one part of the answer posted here by @nightcoder.) The general pattern would thus be: Sub WrapperThatCleansUp() ' NOTE: Don't call Excel objects in here... ' Debugger would keep alive until end, preventing GC cleanup ' Call a separate function that talks to Excel DoTheWork() ' Now let the GC clean up (twice, to clean up cycles too) GC.Collect() GC.WaitForPendingFinalizers() GC.Collect() GC.WaitForPendingFinalizers() End Sub Sub DoTheWork() Dim app As New Microsoft.Office.Interop.Excel.Application Dim book As Microsoft.Office.Interop.Excel.Workbook = app.Workbooks.Add() Dim worksheet As Microsoft.Office.Interop.Excel.Worksheet = book.Worksheets("Sheet1") app.Visible = True For i As Integer = 1 To 10 worksheet.Cells.Range("A" & i).Value = "Hello" Next book.Save() book.Close() app.Quit() ' NOTE: No calls to Marshal.ReleaseComObject() are ever needed End Sub There is a lot of false information and confusion about this issue, including many posts on MSDN and on Stack Overflow (and especially this question!). 
What finally convinced me to have a closer look and figure out the right advice was the blog post Marshal.ReleaseComObject Considered Dangerous together with finding the issue with references kept alive under the debugger that was confusing my earlier testing. A: As others have pointed out, you need to create an explicit reference for every Excel object you use, and call Marshal.ReleaseComObject on that reference, as described in this KB article. You also need to use try/finally to ensure ReleaseComObject is always called, even when an exception is thrown. I.e. instead of: Worksheet sheet = excelApp.Worksheets(1) ... do something with sheet you need to do something like: Worksheets sheets = null; Worksheet sheet = null; try { sheets = excelApp.Worksheets; sheet = sheets(1); ... } finally { if (sheets != null) Marshal.ReleaseComObject(sheets); if (sheet != null) Marshal.ReleaseComObject(sheet); } You also need to call Application.Quit before releasing the Application object if you want Excel to close. As you can see, this quickly becomes extremely unwieldy as soon as you try to do anything even moderately complex. I have successfully developed .NET applications with a simple wrapper class that wraps a few simple manipulations of the Excel object model (open a workbook, write to a Range, save/close the workbook, etc.). The wrapper class implements IDisposable, carefully implements Marshal.ReleaseComObject on every object it uses, and does not publicly expose any Excel objects to the rest of the app. But this approach doesn't scale well for more complex requirements. This is a big deficiency of .NET COM Interop. For more complex scenarios, I would seriously consider writing an ActiveX DLL in VB6 or another unmanaged language to which you can delegate all interaction with out-of-process COM objects such as Office. You can then reference this ActiveX DLL from your .NET application, and things will be much easier as you will only need to release this one reference. 
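The wrapper-class idea in the answer above — an IDisposable object that guarantees release runs in a finally block even when the body throws — is a language-agnostic pattern. Here is the same guarantee sketched as a Python context manager, with a stand-in `release_fn` callback instead of Marshal.ReleaseComObject (the shape of the try/finally guarantee is the point, not the COM call):

```python
# A disposal wrapper: whatever happens inside the `with` block,
# the release callback runs exactly once on exit, mirroring the
# try/finally + ReleaseComObject pattern from the answer above.
class ComWrapper:
    def __init__(self, obj, release_fn):
        self._obj = obj
        self._release = release_fn

    def __enter__(self):
        return self._obj

    def __exit__(self, exc_type, exc, tb):
        # Runs even when the body raised, like the finally block.
        self._release(self._obj)
        return False  # do not swallow exceptions

released = []
with ComWrapper("worksheet", released.append) as ws:
    pass
print(released)  # ['worksheet']

try:
    with ComWrapper("workbook", released.append):
        raise RuntimeError("boom")
except RuntimeError:
    pass
print(released)  # ['worksheet', 'workbook'] -- released despite the error
```

In the C# original, nesting these wrappers (one per COM object, innermost released first) reproduces the reverse-order release that several answers recommend, without scattering finally blocks through the calling code.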
A: Anything that is in the Excel namespace needs to be released. Period. You can't be doing: Worksheet ws = excel.WorkBooks[1].WorkSheets[1]; You have to be doing: Workbooks books = excel.WorkBooks; Workbook book = books[1]; Sheets sheets = book.WorkSheets; Worksheet ws = sheets[1]; followed by the releasing of the objects. A: When all the stuff above didn't work, try giving Excel some time to close its sheets: app.workbooks.Close(); Thread.Sleep(500); // adjust, for me it works at around 300+ app.Quit(); ... FinalReleaseComObject(app); A: Make sure that you release all objects related to Excel! I spent a few hours trying several ways. All are great ideas, but I finally found my mistake: if you don't release all objects, none of the ways above can help you, as in my case. Make sure you release all objects, including Range objects! Excel.Range rng = (Excel.Range)worksheet.Cells[1, 1]; worksheet.Paste(rng, false); releaseObject(rng); The options are together here. A: You can actually release your Excel Application object cleanly, but you do have to take care. The advice to maintain a named reference for absolutely every COM object you access and then explicitly release it via Marshal.FinalReleaseComObject() is correct in theory, but, unfortunately, very difficult to manage in practice. If one ever slips anywhere and uses "two dots", or iterates cells via a for each loop, or any other similar kind of command, then you'll have unreferenced COM objects and risk a hang. In this case, there would be no way to find the cause in the code; you would have to review all your code by eye and hopefully find the cause, a task that could be nearly impossible for a large project. The good news is that you do not actually have to maintain a named variable reference to every COM object you use. 
Instead, call GC.Collect() and then GC.WaitForPendingFinalizers() to release all the (usually minor) objects to which you do not hold a reference, and then explicitly release the objects to which you do hold a named variable reference. You should also release your named references in reverse order of importance: range objects first, then worksheets, workbooks, and then finally your Excel Application object. For example, assuming that you had a Range object variable named xlRng, a Worksheet variable named xlSheet, a Workbook variable named xlBook and an Excel Application variable named xlApp, then your cleanup code could look something like the following: // Cleanup GC.Collect(); GC.WaitForPendingFinalizers(); Marshal.FinalReleaseComObject(xlRng); Marshal.FinalReleaseComObject(xlSheet); xlBook.Close(Type.Missing, Type.Missing, Type.Missing); Marshal.FinalReleaseComObject(xlBook); xlApp.Quit(); Marshal.FinalReleaseComObject(xlApp); In most code examples you'll see for cleaning up COM objects from .NET, the GC.Collect() and GC.WaitForPendingFinalizers() calls are made TWICE as in: GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); This should not be required, however, unless you are using Visual Studio Tools for Office (VSTO), which uses finalizers that cause an entire graph of objects to be promoted in the finalization queue. Such objects would not be released until the next garbage collection. However, if you are not using VSTO, you should be able to call GC.Collect() and GC.WaitForPendingFinalizers() just once. I know that explicitly calling GC.Collect() is a no-no (and certainly doing it twice sounds very painful), but there is no way around it, to be honest. Through normal operations you will generate hidden objects to which you hold no reference that you, therefore, cannot release through any other means other than calling GC.Collect(). This is a complex topic, but this really is all there is to it. 
Once you establish this template for your cleanup procedure you can code normally, without the need for wrappers, etc. :-) I have a tutorial on this here: Automating Office Programs with VB.Net / COM Interop. It's written for VB.NET, but don't be put off by that; the principles are exactly the same as when using C#. A: Preface: my answer contains two solutions, so be careful when reading and don't miss anything. There are different ways and pieces of advice on how to make an Excel instance unload, such as: * *Releasing EVERY COM object explicitly with Marshal.FinalReleaseComObject() (not forgetting about implicitly created COM objects). To release every created COM object, you may use the rule of 2 dots mentioned here: How do I properly clean up Excel interop objects? *Calling GC.Collect() and GC.WaitForPendingFinalizers() to make the CLR release unused COM objects * (Actually, it works, see my second solution for details) *Checking if the COM server application is showing a message box waiting for the user to answer (though I am not sure it can prevent Excel from closing, but I heard about it a few times) *Sending a WM_CLOSE message to the main Excel window *Executing the function that works with Excel in a separate AppDomain. Some people believe the Excel instance will be shut down when the AppDomain is unloaded. *Killing all Excel instances which were instantiated after our Excel interop code started. BUT! Sometimes all these options just don't help or aren't appropriate! For example, yesterday I found out that in one of my functions (which works with Excel) Excel keeps running after the function ends. I tried everything! I thoroughly checked the whole function 10 times and added Marshal.FinalReleaseComObject() for everything! I also had GC.Collect() and GC.WaitForPendingFinalizers(). I checked for hidden message boxes. I tried to send a WM_CLOSE message to the main Excel window. I executed my function in a separate AppDomain and unloaded that domain. Nothing helped! 
The option with closing all excel instances is inappropriate, because if the user starts another Excel instance manually, during execution of my function which works also with Excel, then that instance will also be closed by my function. I bet the user will not be happy! So, honestly, this is a lame option (no offence guys). So I spent a couple of hours before I found a good (in my humble opinion) solution: Kill excel process by hWnd of its main window (it's the first solution). Here is the simple code: [DllImport("user32.dll")] private static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId); /// <summary> Tries to find and kill process by hWnd to the main window of the process.</summary> /// <param name="hWnd">Handle to the main window of the process.</param> /// <returns>True if process was found and killed. False if process was not found by hWnd or if it could not be killed.</returns> public static bool TryKillProcessByMainWindowHwnd(int hWnd) { uint processID; GetWindowThreadProcessId((IntPtr)hWnd, out processID); if(processID == 0) return false; try { Process.GetProcessById((int)processID).Kill(); } catch (ArgumentException) { return false; } catch (Win32Exception) { return false; } catch (NotSupportedException) { return false; } catch (InvalidOperationException) { return false; } return true; } /// <summary> Finds and kills process by hWnd to the main window of the process.</summary> /// <param name="hWnd">Handle to the main window of the process.</param> /// <exception cref="ArgumentException"> /// Thrown when process is not found by the hWnd parameter (the process is not running). /// The identifier of the process might be expired. 
/// </exception> /// <exception cref="Win32Exception">See Process.Kill() exceptions documentation.</exception> /// <exception cref="NotSupportedException">See Process.Kill() exceptions documentation.</exception> /// <exception cref="InvalidOperationException">See Process.Kill() exceptions documentation.</exception> public static void KillProcessByMainWindowHwnd(int hWnd) { uint processID; GetWindowThreadProcessId((IntPtr)hWnd, out processID); if (processID == 0) throw new ArgumentException("Process has not been found by the given main window handle.", "hWnd"); Process.GetProcessById((int)processID).Kill(); } As you can see I provided two methods, according to Try-Parse pattern (I think it is appropriate here): one method doesn't throw the exception if the Process could not be killed (for example the process doesn't exist anymore), and another method throws the exception if the Process was not killed. The only weak place in this code is security permissions. Theoretically, the user may not have permissions to kill the process, but in 99.99% of all cases, user has such permissions. I also tested it with a guest account - it works perfectly. So, your code, working with Excel, can look like this: int hWnd = xl.Application.Hwnd; // ... // here we try to close Excel as usual, with xl.Quit(), // Marshal.FinalReleaseComObject(xl) and so on // ... TryKillProcessByMainWindowHwnd(hWnd); Voila! Excel is terminated! :) Ok, let's go back to the second solution, as I promised in the beginning of the post. The second solution is to call GC.Collect() and GC.WaitForPendingFinalizers(). Yes, they actually work, but you need to be careful here! Many people say (and I said) that calling GC.Collect() doesn't help. But the reason it wouldn't help is if there are still references to COM objects! One of the most popular reasons for GC.Collect() not being helpful is running the project in Debug-mode. 
In debug-mode objects that are not really referenced anymore will not be garbage collected until the end of the method. So, if you tried GC.Collect() and GC.WaitForPendingFinalizers() and it didn't help, try to do the following: 1) Try to run your project in Release mode and check if Excel closed correctly 2) Wrap the method of working with Excel in a separate method. So, instead of something like this: void GenerateWorkbook(...) { ApplicationClass xl; Workbook xlWB; try { xl = ... xlWB = xl.Workbooks.Add(...); ... } finally { ... Marshal.ReleaseComObject(xlWB) ... GC.Collect(); GC.WaitForPendingFinalizers(); } } you write: void GenerateWorkbook(...) { try { GenerateWorkbookInternal(...); } finally { GC.Collect(); GC.WaitForPendingFinalizers(); } } private void GenerateWorkbookInternal(...) { ApplicationClass xl; Workbook xlWB; try { xl = ... xlWB = xl.Workbooks.Add(...); ... } finally { ... Marshal.ReleaseComObject(xlWB) ... } } Now, Excel will close =) A: You should be very careful using Word/Excel interop applications. After trying all the solutions we still had a lot of "WinWord" process left open on server (with more than 2000 users). After working on the problem for hours, I realized that if I open more than a couple of documents using Word.ApplicationClass.Document.Open() on different threads simultaneously, IIS worker process (w3wp.exe) would crash leaving all WinWord processes open! So I guess there is no absolute solution to this problem, but switching to other methods such as Office Open XML development. A: The two dots rule did not work for me. 
In my case I created a method to clean my resources as follows:

private static void Clean()
{
    workBook.Close();
    Marshal.ReleaseComObject(workBook);
    excel.Quit();
    GC.Collect();
    GC.WaitForPendingFinalizers();
}

A: My solution

[DllImport("user32.dll")]
static extern int GetWindowThreadProcessId(int hWnd, out int lpdwProcessId);

private void GenerateExcel()
{
    var excel = new Microsoft.Office.Interop.Excel.Application();
    int id;
    // Find the Excel process id (at the end, you kill it)
    GetWindowThreadProcessId(excel.Hwnd, out id);
    Process excelProcess = Process.GetProcessById(id);
    try
    {
        // Your code
    }
    finally
    {
        excel.Quit();
        // Kill it!
        excelProcess.Kill();
    }
}

A: I found a useful generic template that can help implement the correct disposal pattern for COM objects, which need Marshal.ReleaseComObject called when they go out of scope:

Usage:

using (AutoReleaseComObject<Application> excelApplicationWrapper = new AutoReleaseComObject<Application>(new Application()))
{
    try
    {
        using (AutoReleaseComObject<Workbook> workbookWrapper = new AutoReleaseComObject<Workbook>(excelApplicationWrapper.ComObject.Workbooks.Open(namedRangeBase.FullName, false, false, missing, missing, missing, true, missing, missing, true, missing, missing, missing, missing, missing)))
        {
            // do something with your workbook....
        }
    }
    finally
    {
        excelApplicationWrapper.ComObject.Quit();
    }
}

Template:

public class AutoReleaseComObject<T> : IDisposable
{
    private T m_comObject;
    private bool m_armed = true;
    private bool m_disposed = false;

    public AutoReleaseComObject(T comObject)
    {
        Debug.Assert(comObject != null);
        m_comObject = comObject;
    }

#if DEBUG
    ~AutoReleaseComObject()
    {
        // We should have been disposed using Dispose().
        Debug.WriteLine("Finalize being called, should have been disposed");
        if (this.ComObject != null)
        {
            Debug.WriteLine(string.Format("ComObject was not null:{0}, name:{1}.", this.ComObject, this.ComObjectName));
        }
        //Debug.Assert(false);
    }
#endif

    public T ComObject
    {
        get
        {
            Debug.Assert(!m_disposed);
            return m_comObject;
        }
    }

    private string ComObjectName
    {
        get
        {
            if(this.ComObject is Microsoft.Office.Interop.Excel.Workbook)
            {
                return ((Microsoft.Office.Interop.Excel.Workbook)this.ComObject).Name;
            }
            return null;
        }
    }

    public void Disarm()
    {
        Debug.Assert(!m_disposed);
        m_armed = false;
    }

    #region IDisposable Members

    public void Dispose()
    {
        Dispose(true);
#if DEBUG
        GC.SuppressFinalize(this);
#endif
    }

    #endregion

    protected virtual void Dispose(bool disposing)
    {
        if (!m_disposed)
        {
            if (m_armed)
            {
                int refcnt = 0;
                do
                {
                    refcnt = System.Runtime.InteropServices.Marshal.ReleaseComObject(m_comObject);
                } while (refcnt > 0);
                m_comObject = default(T);
            }
            m_disposed = true;
        }
    }
}

Reference: http://www.deez.info/sengelha/2005/02/11/useful-idisposable-class-3-autoreleasecomobject/

A: I can't believe this problem has haunted the world for 5 years.... If you have created an application, you need to shut it down first before removing the link.

objExcel = new Excel.Application();
objBook = (Excel.Workbook)(objExcel.Workbooks.Add(Type.Missing));

when closing

objBook.Close(true, Type.Missing, Type.Missing);
objExcel.Application.Quit();
objExcel.Quit();

When you new an Excel application, it opens an Excel program in the background. You need to command that Excel program to quit before you release the link, because that Excel program is not under your direct control. Therefore, it will stay open if the link is released! Good programming everyone~~

A: Come on, developers, none of your solutions worked for me, so I decided to implement a new trick. First, let's specify "What is our goal?" => "Not to see an Excel object in the task manager after our job". Ok.
Let's not fight it head-on; instead, kill the process, but take care not to destroy other instances of Excel that are running in parallel. So, get the list of current processes and fetch the PIDs of the EXCEL processes; then, once your job is done, there is a new guest in the process list with a unique PID. Find and destroy just that one. <keep in mind any new Excel process started during your Excel job will be detected as new and destroyed> <A better solution is to capture the PID of the newly created Excel object and destroy just that>

Process[] prs = Process.GetProcesses();
List<int> excelPID = new List<int>();
foreach (Process p in prs)
    if (p.ProcessName == "EXCEL")
        excelPID.Add(p.Id);

....
// your job

prs = Process.GetProcesses();
foreach (Process p in prs)
    if (p.ProcessName == "EXCEL" && !excelPID.Contains(p.Id))
        p.Kill();

This resolves my issue; I hope it resolves yours too.

A: This sure seems like it has been over-complicated. From my experience, there are just three key things to get Excel to close properly:

1: make sure there are no remaining references to the excel application you created (you should only have one anyway; set it to null)
2: call GC.Collect()
3: Excel has to be closed, either by the user manually closing the program, or by you calling Quit on the Excel object. (Note that Quit will function just as if the user tried to close the program, and will present a confirmation dialog if there are unsaved changes, even if Excel is not visible. The user could press cancel, and then Excel will not have been closed.)

1 needs to happen before 2, but 3 can happen anytime. One way to implement this is to wrap the interop Excel object with your own class, create the interop instance in the constructor, and implement IDisposable with Dispose looking something like

if (!mDisposed)
{
    mExcel = null;
    GC.Collect();
    mDisposed = true;
}

That will clean up excel from your program's side of things. Once Excel is closed (manually by the user or by you calling Quit) the process will go away.
If the program has already been closed, then the process will disappear on the GC.Collect() call. (I'm not sure how important it is, but you may want a GC.WaitForPendingFinalizers() call after the GC.Collect() call but it is not strictly necessary to get rid of the Excel process.) This has worked for me without issue for years. Keep in mind though that while this works, you actually have to close gracefully for it to work. You will still get accumulating excel.exe processes if you interrupt your program before Excel is cleaned up (usually by hitting "stop" while your program is being debugged). A: I've traditionally followed the advice found in VVS's answer. However, in an effort to keep this answer up-to-date with the latest options, I think all my future projects will use the "NetOffice" library. NetOffice is a complete replacement for the Office PIAs and is completely version-agnostic. It's a collection of Managed COM wrappers that can handle the cleanup that often causes such headaches when working with Microsoft Office in .NET. Some key features are: * *Mostly version-independent (and version-dependant features are documented) *No dependencies *No PIA *No registration *No VSTO I am in no way affiliated with the project; I just genuinely appreciate the stark reduction in headaches. A: To add to reasons why Excel does not close, even when you create direct refrences to each object upon read, creation, is the 'For' loop. 
For Each objWorkBook As WorkBook in objWorkBooks 'local ref, created from ExcelApp.WorkBooks to avoid the double-dot objWorkBook.Close 'or whatever FinalReleaseComObject(objWorkBook) objWorkBook = Nothing Next 'The above does not work, and this is the workaround: For intCounter As Integer = 1 To mobjExcel_WorkBooks.Count Dim objTempWorkBook As Workbook = mobjExcel_WorkBooks.Item(intCounter) objTempWorkBook.Saved = True objTempWorkBook.Close(False, Type.Missing, Type.Missing) FinalReleaseComObject(objTempWorkBook) objTempWorkBook = Nothing Next A: I think that some of that is just the way that the framework handles Office applications, but I could be wrong. On some days, some applications clean up the processes immediately, and other days it seems to wait until the application closes. In general, I quit paying attention to the details and just make sure that there aren't any extra processes floating around at the end of the day. Also, and maybe I'm over simplifying things, but I think you can just... objExcel = new Excel.Application(); objBook = (Excel.Workbook)(objExcel.Workbooks.Add(Type.Missing)); DoSomeStuff(objBook); SaveTheBook(objBook); objBook.Close(false, Type.Missing, Type.Missing); objExcel.Quit(); Like I said earlier, I don't tend to pay attention to the details of when the Excel process appears or disappears, but that usually works for me. I also don't like to keep Excel processes around for anything other than the minimal amount of time, but I'm probably just being paranoid on that. A: As some have probably already written, it's not just important how you close the Excel (object); it's also important how you open it and also by the type of the project. In a WPF application, basically the same code is working without or with very few problems. I have a project in which the same Excel file is being processed several times for different parameter value - e.g. parsing it based on values inside a generic list. 
I put all Excel-related functions into the base class and the parser into a subclass (different parsers use common Excel functions). I didn't want Excel to be opened and closed again for each item in a generic list, so I opened it only once in the base class and closed it in the subclass. I had problems when moving the code into a desktop application. I tried many of the above-mentioned solutions. GC.Collect() was already implemented before, twice as suggested. Then I decided to move the code for opening Excel to a subclass. Instead of opening only once, now I create a new object (base class), open Excel for every item, and close it at the end. There is some performance penalty, but based on several tests the Excel processes close without problems (in debug mode), and the temporary files are also removed. I will continue testing and will write some more if I get updates. The bottom line is: you must also check the initialization code, especially if you have many classes, etc.

A: The accepted answer did not work for me. The following code in the destructor did the job.

if (xlApp != null)
{
    xlApp.Workbooks.Close();
    xlApp.Quit();
}

System.Diagnostics.Process[] processArray = System.Diagnostics.Process.GetProcessesByName("EXCEL");
foreach (System.Diagnostics.Process process in processArray)
{
    if (process.MainWindowTitle.Length == 0)
    {
        process.Kill();
    }
}

A: I am currently working on Office automation and have stumbled across a solution for this that works every time for me. It is simple and does not involve killing any processes. It seems that by merely looping through the current active processes, and in any way 'accessing' an open Excel process, any stray hanging instance of Excel will be removed. The below code simply checks for processes where the name is 'Excel', then writes the MainWindowTitle property of the process to a string. This 'interaction' with the process seems to make Windows catch up and abort the frozen instance of Excel.
I run the below method just before the add-in I am developing quits, as it fires its unload event. It removes any hanging instances of Excel every time. In all honesty I am not entirely sure why this works, but it works well for me and could be placed at the end of any Excel application without having to worry about double dots, Marshal.ReleaseComObject, nor killing processes. I would be very interested in any suggestions as to why this is effective.

public static void SweepExcelProcesses()
{
    if (Process.GetProcessesByName("EXCEL").Length != 0)
    {
        Process[] processes = Process.GetProcesses();
        foreach (Process process in processes)
        {
            if (process.ProcessName == "EXCEL")
            {
                string title = process.MainWindowTitle;
            }
        }
    }
}

A: Here is a really easy way to do it:

[DllImport("User32.dll")]
static extern uint GetWindowThreadProcessId(IntPtr hWnd, out int lpdwProcessId);
...
int objExcelProcessId = 0;
Excel.Application objExcel = new Excel.Application();
GetWindowThreadProcessId(new IntPtr(objExcel.Hwnd), out objExcelProcessId);
Process.GetProcessById(objExcelProcessId).Kill();

A: My answer is late and its only purpose is to support the solution proposed by Govert.

Short version:

* Write a local function with no global variables and no arguments executing the COM stuff.
* Call the COM function in a wrapping function that calls the COM function and cleans up afterwards.

Long version: You are not using .NET to count references of COM objects and to release them yourself in the correct order. Even C++ programmers don't do that any longer, thanks to smart pointers. So, forget about Marshal.ReleaseComObject and the funny one-dot-good-two-dots-bad rule. The GC is happy to do the chore of releasing COM objects if you null out all references to COM objects that are no longer needed. The easiest way is to handle COM objects in a local function, with all variables for COM objects naturally going out of scope at the end. Due to some strange features of the debugger pointed out in the brilliant answers of Hans Passant mentioned in the accepted answer's Post Mortem, the cleanup should be delegated to a wrapping function that also calls the executing function.
So, COM objects like Excel or Word need two functions, one that does the actual job and a wrapper that calls this function and calls the GC afterwards like Govert did, the only correct answer in this thread. To show the principle I use a wrapper suitable for all functions doing COM stuff. Except for this extension, my code is just the C# version of Govert's code. In addition, I stopped the process for 6 seconds so that you can check out in the Task Manager that Excel is no longer visible after Quit() but lives on as a zombie until the GC puts an end to it. using Excel = Microsoft.Office.Interop.Excel; public delegate void WrapCom(); namespace GCTestOnOffice{ class Program{ static void DoSomethingWithExcel(){ Excel.Application ExcelApp = new(); Excel.Workbook Wb = ExcelApp.Workbooks.Open(@"D:\\Sample.xlsx"); Excel.Worksheet NewWs = Wb.Worksheets.Add(); for (int i = 1; i < 10; i++){ NewWs.Cells[i, 1] = i;} Wb.Save(); ExcelApp.Quit(); } static void TheComWrapper(WrapCom wrapCom){ wrapCom(); //All COM objects are out of scope, ready for the GC to gobble //Excel is no longer visible, but the process is still alive, //check out the Task-Manager in the next 6 seconds Thread.Sleep(6000); GC.Collect(); GC.WaitForPendingFinalizers(); GC.Collect(); GC.WaitForPendingFinalizers(); //Check out the Task-Manager, the Excel process is gone } static void Main(string[] args){ TheComWrapper(DoSomethingWithExcel); } } } A: Use: [DllImport("user32.dll")] private static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId); Declare it, add code in the finally block: finally { GC.Collect(); GC.WaitForPendingFinalizers(); if (excelApp != null) { excelApp.Quit(); int hWnd = excelApp.Application.Hwnd; uint processID; GetWindowThreadProcessId((IntPtr)hWnd, out processID); Process[] procs = Process.GetProcessesByName("EXCEL"); foreach (Process p in procs) { if (p.Id == processID) p.Kill(); } Marshal.FinalReleaseComObject(excelApp); } } A: So far it seems all answers involve 
some of these:

* Kill the process
* Use GC.Collect()
* Keep track of every COM object and release it properly.

Which makes me appreciate how difficult this issue is :) I have been working on a library to simplify access to Excel, and I am trying to make sure that people using it won't leave a mess (fingers crossed). Instead of writing directly against the interfaces Interop provides, I am making extension methods to make life easier. Like ApplicationHelpers.CreateExcel() or workbook.CreateWorksheet("mySheetNameThatWillBeValidated"). Naturally, anything that is created may lead to an issue later on cleaning up, so I am actually favoring killing the process as a last resort. Yet, cleaning up properly (the third option) is probably the least destructive and most controlled. So, in that context I was wondering whether it wouldn't be best to make something like this:

public abstract class ReleaseContainer<T>
{
    private readonly Action<T> actionOnT;

    protected ReleaseContainer(T releasible, Action<T> actionOnT)
    {
        this.actionOnT = actionOnT;
        this.Releasible = releasible;
    }

    ~ReleaseContainer()
    {
        Release();
    }

    public T Releasible { get; private set; }

    private void Release()
    {
        actionOnT(Releasible);
        Releasible = default(T);
    }
}

I used 'Releasible' to avoid confusion with Disposable. Extending this to IDisposable should be easy though. An implementation like this:

public class ApplicationContainer : ReleaseContainer<Application>
{
    public ApplicationContainer()
        : base(new Application(), ActionOnExcel)
    {
    }

    private static void ActionOnExcel(Application application)
    {
        application.Show(); // extension method. want to make sure the app is visible.
        application.Quit();
        Marshal.FinalReleaseComObject(application);
    }
}

And one could do something similar for all sorts of COM objects.
In the factory method:

public static Application CreateExcelApplication(bool hidden = false)
{
    var excel = new ApplicationContainer().Releasible;
    excel.Visible = !hidden;
    return excel;
}

I would expect that every container will be destructed properly by the GC, and therefore automatically make the call to Quit and Marshal.FinalReleaseComObject. Comments? Or is this an answer to the question of the third kind?

A: Just to add another solution to the many listed here, using C++/ATL automation (I imagine you could use something similar from VB/C#??)

Excel::_ApplicationPtr pXL = ...;
SendMessage((HWND)pXL->GetHwnd(), WM_DESTROY, 0, 0);

This works like a charm for me...

A: Here I have an idea: try to kill the Excel process you have opened:

* before opening an Excel application, get all the process IDs, named oldProcessIds.
* open the Excel application.
* now get all the Excel process IDs again, named nowProcessIds.
* when you need to quit, kill the processes whose IDs are in nowProcessIds but not in oldProcessIds.
private static Excel.Application GetExcelApp()
{
    if (_excelApp == null)
    {
        var processIds = System.Diagnostics.Process.GetProcessesByName("EXCEL").Select(a => a.Id).ToList();
        _excelApp = new Excel.Application();
        _excelApp.DisplayAlerts = false;
        _excelApp.Visible = false;
        _excelApp.ScreenUpdating = false;
        var newProcessIds = System.Diagnostics.Process.GetProcessesByName("EXCEL").Select(a => a.Id).ToList();
        _excelApplicationProcessId = newProcessIds.Except(processIds).FirstOrDefault();
    }
    return _excelApp;
}

public static void Dispose()
{
    try
    {
        _excelApp.Workbooks.Close();
        _excelApp.Quit();
        System.Runtime.InteropServices.Marshal.ReleaseComObject(_excelApp);
        _excelApp = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        if (_excelApplicationProcessId != default(int))
        {
            var process = System.Diagnostics.Process.GetProcessById(_excelApplicationProcessId);
            process?.Kill();
            _excelApplicationProcessId = default(int);
        }
    }
    catch (Exception ex)
    {
        _excelApp = null;
    }
}

A: Tested with Microsoft Excel 2016. A really tested solution.

For the C# reference, please see: https://stackoverflow.com/a/1307180/10442623
For the VB.NET reference, please see: https://stackoverflow.com/a/54044646/10442623

1. Include the class Job.
2. Implement the class to handle the appropriate disposal of the Excel process.

A: I had this same problem getting PowerPoint to close after newing up the Application object in my VSTO AddIn. I tried all the answers here with limited success. This is the solution I found for my case - DON'T use 'new Application'; the AddInBase base class of ThisAddIn already has a handle to 'Application'. If you use that handle where you need it (make it static if you have to) then you don't need to worry about cleaning it up, and PowerPoint won't hang on close.

A: Of the three general strategies considered in other answers, killing the excel process is clearly a hack, whereas invoking the garbage collector is a brutal shotgun approach meant to compensate for incorrect deallocation of COM-objects.
After lots of experimentation and rewriting the management of COM objects in my version-agnostic and late-bound wrapper, I have come to the conclusion that accurate and timely invocation of Marshal.ReleaseComObject() is the most efficient and elegant strategy. And no, you do not ever need FinalReleaseComObject(), because in a well-written program each COM object is acquired once and therefore requires a single decrement of the reference counter. One shall make sure to release every single COM object, preferably as soon as it is no longer needed. But it is perfectly possible to release everything right after quitting the Excel application, at the only expense of higher memory usage. Excel will close as expected as long as one does not lose or forget to release a COM object. The simplest and most obvious aid in the process is wrapping every interop object into a .NET class implementing IDisposable, where the Dispose() method invokes ReleaseComObject() on its interop object. Doing it in the destructor, as proposed here, makes no sense because destructors are non-deterministic. Shown below is our wrapper's method that obtains a cell from a WorkSheet, bypassing the intermediate Cells member. Notice the way it disposes of the intermediate object after use:

public ExcelRange XCell( int row, int col)
{
    ExcelRange anchor, res;
    using( anchor = Range( "A1") )
    {
        res = anchor.Offset( row - 1, col - 1 );
    }
    return res;
}

The next step may be a simple memory manager that will keep track of every COM object obtained and make sure to release it after Excel quits, if the user prefers to trade some RAM usage for simpler code.

Further reading

* How to properly release Excel COM objects
* Releasing COM objects: Garbage Collector vs. Marshal.ReleaseComObject

A: I really like it when things clean up after themselves... So I made some wrapper classes that do all the cleanup for me! These are documented further down. The end code is quite readable and accessible.
I haven't yet found any phantom instances of Excel running after I Close() the workbooks and Quit() the application (besides where I debug and close the app mid-process).

void OpenCopyClose()
{
    var excel = new ExcelApplication();
    var workbook1 = excel.OpenWorkbook(@"C:\Temp\file1.xlsx", readOnly: true);
    var readOnlySheet = workbook1.Worksheet("sheet1");
    var workbook2 = excel.OpenWorkbook(@"C:\Temp\file2.xlsx");
    var writeSheet = workbook2.Worksheet("sheet1");

    // do all the excel manipulation
    // read from the first workbook, write to the second workbook.
    var a1 = readOnlySheet.Cells[1, 1];
    writeSheet.Cells[1, 1] = a1;

    // explicit clean-up
    workbook1.Close(false);
    workbook2.Close(true);
    excel.Quit();
}

Note: You can skip the Close() and Quit() calls, but if you are writing to an Excel document you will at least want to Save(). When the objects go out of scope (the method returns) the class finalizers will automatically kick in and do any cleanup. Any references to COM objects from the Worksheet COM object will automatically be managed and cleaned up as long as you are careful with the scope of your variables, e.g. keep variables local to the current scope only when storing references to COM objects. You can easily copy values you need to POCOs, or create additional wrapper classes as discussed below.

To manage all this, I have created a class, DisposableComObject, that acts as a wrapper for any COM object. It implements the IDisposable interface and also contains a finalizer for those that don't like using. The Dispose() method calls Marshal.ReleaseComObject(ComObject) and then sets the ComObjectRef property to null. The object is in a disposed state when the private ComObjectRef property is null. If the ComObject property is accessed after being disposed, a ComObjectAccessedAfterDisposeException exception is thrown. The Dispose() method can be called manually.
It is also called by the finalizer, at the conclusion of a using block, and for using var at the conclusion of the scope of that variable. The top level classes from Microsoft.Office.Interop.Excel, Application, Workbook, and Worksheet, get their own wrapper classes where each are subclasses of DisposableComObject Here is the code: /// <summary> /// References to COM objects must be explicitly released when done. /// Failure to do so can result in odd behavior and processes remaining running after the application has stopped. /// This class helps to automate the process of disposing the references to COM objects. /// </summary> public abstract class DisposableComObject : IDisposable { public class ComObjectAccessedAfterDisposeException : Exception { public ComObjectAccessedAfterDisposeException() : base("COM object has been accessed after being disposed") { } } /// <summary>The actual COM object</summary> private object ComObjectRef { get; set; } /// <summary>The COM object to be used by subclasses</summary> /// <exception cref="ComObjectAccessedAfterDisposeException">When the COM object has been disposed</exception> protected object ComObject => ComObjectRef ?? throw new ComObjectAccessedAfterDisposeException(); public DisposableComObject(object comObject) => ComObjectRef = comObject; /// <summary> /// True, if the COM object has been disposed. /// </summary> protected bool IsDisposed() => ComObjectRef is null; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); // in case a subclass implements a finalizer } /// <summary> /// This method releases the COM object and removes the reference. /// This allows the garbage collector to clean up any remaining instance. 
/// </summary> /// <param name="disposing">Set to true</param> protected virtual void Dispose(bool disposing) { if (!disposing || IsDisposed()) return; Marshal.ReleaseComObject(ComObject); ComObjectRef = null; } ~DisposableComObject() { Dispose(true); } } There is also a handy generic subclass which makes usage slightly easier. public abstract class DisposableComObject<T> : DisposableComObject { protected new T ComObject => (T)base.ComObject; public DisposableComObject(T comObject) : base(comObject) { } } Finally, we can use DisposableComObject<T> to create our wrapper classes for the Excel interop classes. The ExcelApplication subclass has a reference to a new Excel application instance and is used to open workbooks. OpenWorkbook() returns an ExcelWorkbook which is also a subclass of DisposableComObject. Dispose() has been overridden to quit the Excel application before calling the base Dispose() method. Quit() is an alias of Dispose(). public class ExcelApplication : DisposableComObject<Application> { public class OpenWorkbookActionCancelledException : Exception { public string Filename { get; } public OpenWorkbookActionCancelledException(string filename, COMException ex) : base($"The workbook open action was cancelled. 
{ex.Message}", ex) => Filename = filename;
    }

    /// <summary>The actual Application from Interop.Excel</summary>
    Application App => ComObject;

    public ExcelApplication() : base(new Application()) { }

    /// <summary>Open a workbook.</summary>
    public ExcelWorkbook OpenWorkbook(string filename, bool readOnly = false, string password = null, string writeResPassword = null)
    {
        try
        {
            var workbook = App.Workbooks.Open(Filename: filename,
                UpdateLinks: (XlUpdateLinks)0,
                ReadOnly: readOnly,
                Password: password,
                WriteResPassword: writeResPassword);
            return new ExcelWorkbook(workbook);
        }
        catch (COMException ex)
        {
            // If the workbook is already open and the request mode is not read-only, the user will be presented
            // with a prompt from the Excel application asking if the workbook should be opened in read-only mode.
            // This exception is raised when the user clicks the Cancel button in that prompt.
            throw new OpenWorkbookActionCancelledException(filename, ex);
        }
    }

    /// <summary>Quit the running application.</summary>
    public void Quit() => Dispose(true);

    /// <inheritdoc/>
    protected override void Dispose(bool disposing)
    {
        if (!disposing || IsDisposed()) return;
        App.Quit();
        base.Dispose(disposing);
    }
}

ExcelWorkbook also subclasses DisposableComObject<Workbook> and is used to open worksheets. The Worksheet() method returns ExcelWorksheet which, you guessed it, is also a subclass of DisposableComObject<Worksheet>. The Dispose() method is overridden and first closes the workbook before calling the base Dispose(). NOTE: I've added some extension methods which it uses to iterate over Workbook.Worksheets. If you get compile errors, this is why. I'll add the extension methods at the end.
public class ExcelWorkbook : DisposableComObject<Workbook> { public class WorksheetNotFoundException : Exception { public WorksheetNotFoundException(string message) : base(message) { } } /// <summary>The actual Workbook from Interop.Excel</summary> Workbook Workbook => ComObject; /// <summary>The worksheets within the workbook</summary> public IEnumerable<ExcelWorksheet> Worksheets => worksheets ?? (worksheets = Workbook.Worksheets.AsEnumerable<Worksheet>().Select(w => new ExcelWorksheet(w)).ToList()); private IEnumerable<ExcelWorksheet> worksheets; public ExcelWorkbook(Workbook workbook) : base(workbook) { } /// <summary> /// Get the worksheet matching the <paramref name="sheetName"/> /// </summary> /// <param name="sheetName">The name of the Worksheet</param> public ExcelWorksheet Worksheet(string sheetName) => Worksheet(s => s.Name == sheetName, () => $"Worksheet not found: {sheetName}"); /// <summary> /// Get the worksheet matching the <paramref name="predicate"/> /// </summary> /// <param name="predicate">A function to test each Worksheet for a macth</param> public ExcelWorksheet Worksheet(Func<ExcelWorksheet, bool> predicate, Func<string> errorMessageAction) => Worksheets.FirstOrDefault(predicate) ?? throw new WorksheetNotFoundException(errorMessageAction.Invoke()); /// <summary> /// Returns true of the workbook is read-only /// </summary> public bool IsReadOnly() => Workbook.ReadOnly; /// <summary> /// Save changes made to the workbook /// </summary> public void Save() { Workbook.Save(); } /// <summary> /// Close the workbook and optionally save changes /// </summary> /// <param name="saveChanges">True is save before close</param> public void Close(bool saveChanges) { if (saveChanges) Save(); Dispose(true); } /// <inheritdoc/> protected override void Dispose(bool disposing) { if (!disposing || IsDisposed()) return; Workbook.Close(); base.Dispose(disposing); } } Finally, the ExcelWorksheet. 
UsedRows() simply returns an enumerable of unwrapped Microsoft.Office.Interop.Excel.Range objects. I haven't yet encountered a situation where COM objects accessed from properties of the Microsoft.Office.Interop.Excel.Worksheet object need to be manually wrapped as was needed with Application, Workbook, and Worksheet. These all seem to clean themselves up automatically. Mostly, I was just iterating over Ranges and getting or setting values, so my particular use-case isn't as advanced as the available functionality. There is no override of Dispose() in this case as no special action needs to take place for worksheets. public class ExcelWorksheet : DisposableComObject<Worksheet> { /// <summary>The actual Worksheet from Interop.Excel</summary> Worksheet Worksheet => ComObject; /// <summary>The worksheet name</summary> public string Name => Worksheet.Name; /// <summary>The worksheet's cells (Unwrapped COM object)</summary> public Range Cells => Worksheet.Cells; public ExcelWorksheet(Worksheet worksheet) : base(worksheet) { } /// <inheritdoc cref="WorksheetExtensions.UsedRows(Worksheet)"/> public IEnumerable<Range> UsedRows() => Worksheet.UsedRows().ToList(); } It is possible to add even more wrapper classes. Just add additional methods to ExcelWorksheet as needed and return the COM object in a wrapper class. Just copy what we did when wrapping the workbook via ExcelApplication.OpenWorkbook() and ExcelWorkbook.Worksheets.
Some useful extension methods: public static class EnumeratorExtensions { /// <summary> /// Converts the <paramref name="enumerator"/> to an IEnumerable of type <typeparamref name="T"/> /// </summary> public static IEnumerable<T> AsEnumerable<T>(this IEnumerable enumerator) { return enumerator.GetEnumerator().AsEnumerable<T>(); } /// <summary> /// Converts the <paramref name="enumerator"/> to an IEnumerable of type <typeparamref name="T"/> /// </summary> public static IEnumerable<T> AsEnumerable<T>(this IEnumerator enumerator) { while (enumerator.MoveNext()) yield return (T)enumerator.Current; } /// <summary> /// Converts the <paramref name="enumerator"/> to an IEnumerable of type <typeparamref name="T"/> /// </summary> public static IEnumerable<T> AsEnumerable<T>(this IEnumerator<T> enumerator) { while (enumerator.MoveNext()) yield return enumerator.Current; } } public static class WorksheetExtensions { /// <summary> /// Returns the rows within the used range of this <paramref name="worksheet"/> /// </summary> /// <param name="worksheet">The worksheet</param> public static IEnumerable<Range> UsedRows(this Worksheet worksheet) => worksheet.UsedRange.Rows.AsEnumerable<Range>(); } A: Excel is not designed to be programmed via C++ or C#. The COM API is specifically designed to work with Visual Basic, VB.NET, and VBA. Also all the code samples on this page are not optimal for the simple reason that each call must cross a managed/unmanaged boundary and further ignore the fact that the Excel COM API is free to fail any call with a cryptic HRESULT indicating the RPC server is busy. The best way to automate Excel in my opinion is to collect your data into as big an array as possible / feasible and send this across to a VBA function or sub (via Application.Run) which then performs any required processing. Furthermore - when calling Application.Run - be sure to watch for exceptions indicating excel is busy and retry calling Application.Run. 
A: This is the only way that really works for me foreach (Process proc in System.Diagnostics.Process.GetProcessesByName("EXCEL")) { proc.Kill(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/158706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "800" }
Q: How do you find the largest font size that won't break a given text? I'm trying to use CSS (under @media print) and JavaScript to print a one-page document with a given piece of text made as large as possible while still fitting inside a given width. The length of the text is not known beforehand, so simply using a fixed-width font is not an option. To put it another way, I'm looking for proper resizing, so that, for example, "IIIII" would come out in a much larger font size than "WWWWW" because "I" is much skinnier than "W" in a variable-width font. The closest I've been able to get with this is using JavaScript to try various font sizes until the clientWidth is small enough. This works well enough for screen media, but when you switch to print media, is there any guarantee that the 90 DPI I appear to get on my system (i.e., I put the margins to 0.5in either side, and for a text resized so that it fits just within that, I get about 675 for clientWidth) will be the same anywhere else? How does a browser decide what DPI to use when converting from pixel measurements? Is there any way I can access this information using JavaScript? I would love it if this were just a CSS3 feature (font-size:max-for-width(7.5in)) but if it is, I haven't been able to find it. A: The CSS font-size property accepts length units that include absolute measurements in inches or centimeters: Absolute length units are highly dependent on the output medium, and so are less useful than relative units. The following absolute units are available: * *in (inches; 1in=2.54cm) *cm (centimeters; 1cm=10mm) *mm (millimeters) *pt (points; 1pt=1/72in) *pc (picas; 1pc=12pt) Since you don't know how many characters your text is yet, you may need to use a combination of javascript and CSS in order to dynamically set the font-size property correctly. 
For example, take the length of the string in characters, and divide 8.5 (assuming you're expecting US letter size paper) by the number of characters and that gives you the size in inches to set the font-size to for that chunk of text. I tested font-size with absolute measurements in Firefox, Safari, and IE6, so it should be pretty portable. Hope that helps. EDIT: Note that you may also need to play around with settings such as the letter-spacing property as well and experiment with what font you use, since the font-size setting isn't really the width of the letters; the rendered width varies with the letter-spacing and the font used, not just in proportion to the length of the text. Oh, and using a monospace font helps ;) A: I don't know of a way to do this in CSS. I think your best bet would be to use Javascript: * *Put the text in a div *Get the dimensions of the div *Make the text smaller if necessary *Go back to step 2 until the text is small enough Here's some sample code to detect the size of the div. A: Here's some code I ended up using, in case someone might find it useful. All you need to do is make the outer DIV the size you want in inches. function make_big(id) // must be an inline element inside a block-level element { var e = document.getElementById(id); e.style.whiteSpace = 'nowrap'; e.style.textAlign = 'center'; var max = e.parentNode.scrollWidth - 4; // a little padding e.style.fontSize = (max / 4) + 'px'; // make a guess, then we'll use the resulting ratio e.style.fontSize = (max / (e.scrollWidth / parseFloat(e.style.fontSize))) + 'px'; e.style.display = 'block'; // so centering takes effect }
{ "language": "en", "url": "https://stackoverflow.com/questions/158710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N The question gives all necessary data: what is an efficient algorithm to generate a sequence of K non-repeating integers within a given interval [0,N-1]. The trivial algorithm (generating random numbers and, before adding them to the sequence, looking them up to see if they were already there) is very expensive if K is large and near enough to N. The algorithm provided in Efficiently selecting a set of random elements from a linked list seems more complicated than necessary, and requires some implementation. I've just found another algorithm that seems to do the job fine, as long as you know all the relevant parameters, in a single pass. A: It is actually possible to do this in space proportional to the number of elements selected, rather than the size of the set you're selecting from, regardless of what proportion of the total set you're selecting. You do this by generating a random permutation, then selecting from it like this: Pick a block cipher, such as TEA or XTEA. Use XOR folding to reduce the block size to the smallest power of two larger than the set you're selecting from. Use the random seed as the key to the cipher. To generate an element n in the permutation, encrypt n with the cipher. If the output number is not in your set, encrypt that. Repeat until the number is inside the set. On average you will have to do less than two encryptions per generated number. This has the added benefit that if your seed is cryptographically secure, so is your entire permutation. I wrote about this in much more detail here. 
A: The following code (in C, unknown origin) seems to solve the problem extremely well: /* generate N sorted, non-duplicate integers in [0, max] */ int *generate(int n, int max) { int i, m, a; int *g = (int *)calloc(n, sizeof(int)); if (!g) return 0; m = 0; for (i = 0; i < max; i++) { a = random_in_between(0, max - i); if (a < n - m) { g[m] = i; m++; } } return g; } Does anyone know where I can find more gems like this one? A: Generate an array 0...N-1 filled a[i] = i. Then shuffle the first K items. Shuffling: * *Start J = N-1 *Pick a random number 0...J (say, R) *swap a[R] with a[J] * *since R can be equal to J, the element may be swapped with itself *subtract 1 from J and repeat. Finally, take K last elements. This essentially picks a random element from the list, moves it out, then picks a random element from the remaining list, and so on. Works in O(K) and O(N) time, requires O(N) storage. The shuffling part is called Fisher-Yates shuffle or Knuth's shuffle, described in the 2nd volume of The Art of Computer Programming. A: In The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Third Edition, Knuth describes the following selection sampling algorithm: Algorithm S (Selection sampling technique). To select n records at random from a set of N, where 0 < n ≤ N. S1. [Initialize.] Set t ← 0, m ← 0. (During this algorithm, m represents the number of records selected so far, and t is the total number of input records that we have dealt with.) S2. [Generate U.] Generate a random number U, uniformly distributed between zero and one. S3. [Test.] If (N – t)U ≥ n – m, go to step S5. S4. [Select.] Select the next record for the sample, and increase m and t by 1. If m < n, go to step S2; otherwise the sample is complete and the algorithm terminates. S5. [Skip.] Skip the next record (do not include it in the sample), increase t by 1, and go back to step S2. An implementation may be easier to follow than the description. 
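For illustration (this block is an editorial addition, not from the original answers), Algorithm S transcribes almost line for line into Python:

```python
import random

def selection_sample(n, records):
    """Knuth's Algorithm S: select n records at random from a sequence of N,
    preserving their original order."""
    N = len(records)
    t = m = 0            # t: records dealt with so far, m: records selected
    sample = []
    while m < n:
        U = random.random()              # S2: uniform in [0, 1)
        if (N - t) * U >= n - m:         # S3: test
            t += 1                       # S5: skip this record
        else:
            sample.append(records[t])    # S4: select it
            m += 1
            t += 1
    return sample
```

Note that when only n - m candidates remain, (N - t) * U >= n - m can never hold, so the algorithm always finishes with exactly n selections and never reads past the end of the sequence.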
Here is a Common Lisp implementation that select n random members from a list: (defun sample-list (n list &optional (length (length list)) result) (cond ((= length 0) result) ((< (* length (random 1.0)) n) (sample-list (1- n) (cdr list) (1- length) (cons (car list) result))) (t (sample-list n (cdr list) (1- length) result)))) And here is an implementation that does not use recursion, and which works with all kinds of sequences: (defun sample (n sequence) (let ((length (length sequence)) (result (subseq sequence 0 n))) (loop with m = 0 for i from 0 and u = (random 1.0) do (when (< (* (- length i) u) (- n m)) (setf (elt result m) (elt sequence i)) (incf m)) until (= m n)) result)) A: The random module from Python library makes it extremely easy and effective: from random import sample print sample(xrange(N), K) sample function returns a list of K unique elements chosen from the given sequence. xrange is a "list emulator", i.e. it behaves like a list of consecutive numbers without creating it in memory, which makes it super-fast for tasks like this one. A: Speed up the trivial algorithm by storing the K numbers in a hashing store. Knowing K before you start takes away all the inefficiency of inserting into a hash map, and you still get the benefit of fast look-up. A: My solution is C++ oriented, but I'm sure it could be translated to other languages since it's pretty simple. * *First, generate a linked list with K elements, going from 0 to K *Then as long as the list isn't empty, generate a random number between 0 and the size of the vector *Take that element, push it into another vector, and remove it from the original list This solution only involves two loop iterations, and no hash table lookups or anything of the sort. 
So in actual code: // Assume K is the highest number in the list std::vector<int> sorted_list; std::vector<int> random_list; for(int i = 0; i < K; ++i) { sorted_list.push_back(i); } // Loop until one element remains; the last element is handled separately below while(sorted_list.size() > 1) { int rand_index = rand() % sorted_list.size(); random_list.push_back(sorted_list.at(rand_index)); sorted_list.erase(sorted_list.begin() + rand_index); } // Finally push back the last remaining element to the random list // The if() statement here is just a sanity check, in case K == 0 if(!sorted_list.empty()) { random_list.push_back(sorted_list.at(0)); } A: Step 1: Generate your list of integers. Step 2: Perform Knuth Shuffle. Note that you don't need to shuffle the entire list, since the Knuth Shuffle algorithm allows you to apply only n shuffles, where n is the number of elements to return. Generating the list will still take time proportional to the size of the list, but you can reuse your existing list for any future shuffling needs (assuming the size stays the same) with no need to preshuffle the partially shuffled list before restarting the shuffling algorithm. The basic algorithm for Knuth Shuffle is that you start with a list of integers. Then, you swap the first integer with any number in the list and return the current (new) first integer. Then, you swap the second integer with any number in the list (except the first) and return the current (new) second integer. Then...etc... This is an absurdly simple algorithm, but be careful that you include the current item in the list when performing the swap or you will break the algorithm. A: The Reservoir Sampling version is pretty simple: my $N = 20; my $k; my @r; while(<>) { if(++$k <= $N) { push @r, $_; } elsif(rand(1) <= ($N/$k)) { $r[rand(@r)] = $_; } } print @r; That's $N randomly selected rows from STDIN.
Replace the <>/$_ stuff with something else if you're not using rows from a file, but it's a pretty straightforward algorithm. A: If the list is sorted, for example, if you want to extract K elements out of N, but you do not care about their relative order, an efficient algorithm is proposed in the paper An Efficient Algorithm for Sequential Random Sampling (Jeffrey Scott Vitter, ACM Transactions on Mathematical Software, Vol. 13, No. 1, March 1987, Pages 56-67.). edited to add the code in c++ using boost. I've just typed it and there might be many errors. The random numbers come from the boost library, with a stupid seed, so don't do anything serious with this. /* Sampling according to [Vitter87]. * * Bibliography * [Vitter 87] * Jeffrey Scott Vitter, * An Efficient Algorithm for Sequential Random Sampling * ACM Transactions on MAthematical Software, 13 (1), 58 (1987). */ #include <stdlib.h> #include <string.h> #include <math.h> #include <string> #include <iostream> #include <iomanip> #include <boost/random/linear_congruential.hpp> #include <boost/random/variate_generator.hpp> #include <boost/random/uniform_real.hpp> using namespace std; // This is a typedef for a random number generator. // Try boost::mt19937 or boost::ecuyer1988 instead of boost::minstd_rand typedef boost::minstd_rand base_generator_type; // Define a random number generator and initialize it with a reproducible // seed. // (The seed is unsigned, otherwise the wrong overload may be selected // when using mt19937 as the base_generator_type.) base_generator_type generator(0xBB84u); //TODO : change the seed above ! // Defines the suitable uniform ditribution. boost::uniform_real<> uni_dist(0,1); boost::variate_generator<base_generator_type&, boost::uniform_real<> > uni(generator, uni_dist); void SequentialSamplesMethodA(int K, int N) // Outputs K sorted random integers out of 0..N, taken according to // [Vitter87], method A. 
{ int top=N-K, S, curr=0, currsample=-1; double Nreal=N, quot=1., V; while (K>=2) { V=uni(); S=0; quot=top/Nreal; while (quot > V) { S++; top--; Nreal--; quot *= top/Nreal; } currsample+=1+S; cout << curr << " : " << currsample << "\n"; Nreal--; K--;curr++; } // special case K=1 to avoid overflow S=floor(round(Nreal)*uni()); currsample+=1+S; cout << curr << " : " << currsample << "\n"; } void SequentialSamplesMethodD(int K, int N) // Outputs K sorted random integers out of 0..N, taken according to // [Vitter87], method D. { const int negalphainv=-13; //between -20 and -7 according to [Vitter87] //optimized for an implementation in 1987 !!! int curr=0, currsample=0; int threshold=-negalphainv*K; double Kreal=K, Kinv=1./Kreal, Nreal=N; double Vprime=exp(log(uni())*Kinv); int qu1=N+1-K; double qu1real=qu1; double Kmin1inv, X, U, negSreal, y1, y2, top, bottom; int S, limit; while ((K>1)&&(threshold<N)) { Kmin1inv=1./(Kreal-1.); while(1) {//Step D2: generate X and U while(1) { X=Nreal*(1-Vprime); S=floor(X); if (S<qu1) {break;} Vprime=exp(log(uni())*Kinv); } U=uni(); negSreal=-S; //step D3: Accept ? y1=exp(log(U*Nreal/qu1real)*Kmin1inv); Vprime=y1*(1. - X/Nreal)*(qu1real/(negSreal+qu1real)); if (Vprime <=1.) {break;} //Accept ! Test [Vitter87](2.8) is true //step D4 Accept ? y2=0; top=Nreal-1.; if (K-1 > S) {bottom=Nreal-Kreal; limit=N-S;} else {bottom=Nreal+negSreal-1.; limit=qu1;} for(int t=N-1;t>=limit;t--) {y2*=top/bottom;top--; bottom--;} if (Nreal/(Nreal-X)>=y1*exp(log(y2)*Kmin1inv)) {//Accept ! 
Vprime=exp(log(uni())*Kmin1inv); break; } Vprime=exp(log(uni())*Kmin1inv); } // Step D5: Select the (S+1)th record currsample+=1+S; cout << curr << " : " << currsample << "\n"; curr++; N-=S+1; Nreal+=negSreal-1.; K-=1; Kreal-=1; Kinv=Kmin1inv; qu1-=S; qu1real+=negSreal; threshold+=negalphainv; } if (K>1) {SequentialSamplesMethodA(K, N);} else { S=floor(N*Vprime); currsample+=1+S; cout << curr << " : " << currsample << "\n"; } } int main(void) { int Ntest=10000000, Ktest=Ntest/100; SequentialSamplesMethodD(Ktest,Ntest); return 0; } $ time ./sampling|tail gives the following ouptut on my laptop 99990 : 9998882 99991 : 9998885 99992 : 9999021 99993 : 9999058 99994 : 9999339 99995 : 9999359 99996 : 9999411 99997 : 9999427 99998 : 9999584 99999 : 9999745 real 0m0.075s user 0m0.060s sys 0m0.000s A: This Ruby code showcases the Reservoir Sampling, Algorithm R method. In each cycle, I select n=5 unique random integers from [0,N=10) range: t=0 m=0 N=10 n=5 s=0 distrib=Array.new(N,0) for i in 1..500000 do t=0 m=0 s=0 while m<n do u=rand() if (N-t)*u>=n-m then t=t+1 else distrib[s]+=1 m=m+1 t=t+1 end #if s=s+1 end #while if (i % 100000)==0 then puts i.to_s + ". cycle..." end end #for puts "--------------" puts distrib output: 100000. cycle... 200000. cycle... 300000. cycle... 400000. cycle... 500000. cycle... -------------- 250272 249924 249628 249894 250193 250202 249647 249606 250600 250034 all integer between 0-9 were chosen with nearly the same probability. It's essentially Knuth's algorithm applied to arbitrary sequences (indeed, that answer has a LISP version of this). The algorithm is O(N) in time and can be O(1) in memory if the sequence is streamed into it as shown in @MichaelCramer's answer. A: This is Perl Code. Grep is a filter, and as always I didn't test this code. @list = grep ($_ % I) == 0, (0..N); * *I = interval *N = Upper Bound Only get numbers that match your interval via the modulus operator. @list = grep ($_ % 3) == 0, (0..30); will return 0, 3, 6, ... 
30 This is pseudo Perl code. You may need to tweak it to get it to compile. A: Here's a way to do it in O(N) without extra storage. I'm pretty sure this is not a purely random distribution, but it's probably close enough for many uses. /* generate n sorted, non-duplicate integers in [0, max[ in O(n) */ int *generate(int n, int max) { float step, v = 0; int i; int *g = (int *)calloc(n, sizeof(int)); if (!g) return 0; for (i = 0; i < n; i++) { step = (max - v) / (float)(n - i); v += floating_pt_random_in_between(0.0, step * 2.0); if (i > 0 && (int)v == g[i-1]) { v = (int)v + 1; // avoid collisions } g[i] = v; } i = n - 1; while (i >= 0 && g[i] > max) { g[i] = max; // fix up overflow max = g[i--] - 1; } return g; }
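To round out the thread (an editorial sketch, not one of the original answers), the "hashing store" speed-up of the trivial algorithm suggested above is only a few lines in Python, and is a reasonable default when K is small relative to N:

```python
import random

def sample_with_set(k, n):
    # Rejection sampling backed by a set: duplicates are discarded in O(1).
    # Degrades when k approaches n, since repeated draws become frequent.
    chosen = set()
    while len(chosen) < k:
        chosen.add(random.randrange(n))
    return sorted(chosen)
```

This trades the guaranteed running time of the single-pass algorithms above for simplicity; expected work stays near O(k) while k is a small fraction of n.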
{ "language": "en", "url": "https://stackoverflow.com/questions/158716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: What information has been released regarding the .NET Framework 4.0? As Microsoft seems to have started their trickle feed of information regarding .NET 4.0, I thought I'd ask the question as I'm sure there's more out there than I've spotted! What information has been released regarding the .NET Framework 4.0? A: InfoQ has some good information, but doesn't go into the details. * *Type embedding *Dynamic types in C# *Optional parameters in C# *Type covariance and contravariance Microsoft have an information page for downloading the Visual Studio 2010 and .Net 4 CTP here. A: http://news.google.com/news?hl=en&ned=us&q=net+Framework+4.0&btnG=Search+News is a good place to start A: It's mostly marketing "spiel", but there's a post on Steven Martin's blog discussing some of the new things to come in WF/WCF. A: ComputerWorld Australia just ran an excellent interview with Anders Hejlsberg, in which he talks a little bit about what's coming in C# 4.0. A: This video on Channel9 provided some interesting information on the next version of C#. Obviously, some of these changes will have to be baked into the CLR... (Video is of Anders et al. talking about 4.0 in the room where C# was born)
{ "language": "en", "url": "https://stackoverflow.com/questions/158729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How would you migrate a multi-site ClearCase/ClearQuest environment to all Open Source? I work in a multi-site environment that's currently using Rational ClearCase for source control and Rational ClearQuest for issue tracking (I accept your condolences ahead of time). As a rough estimate I would say this is supporting 200 engineers. How would you effectively migrate this SCM methodology to a comparative, all Open Source tool suite? Not only would this save literally hundreds of thousands of dollars, but I also believe it would improve developer productivity, with very little downtime compared to the current system. Platforms in use include Windows, Linux, UNIX and Solaris. A: First, why do you think this would improve developer productivity? I haven't used ClearCase much, and ClearQuest not at all. What about these tools is hindering development? Once you know what you want, you need to look at various tools. I'm fond of Subversion for SCM, as a general rule, but there are situations it isn't well suited for. I have no strong feelings on specific version tracking systems. Bear in mind that migration is likely to be a really big project, depending on what you want to bring over from the Rational systems (checking everything out in ClearCase and starting entirely new projects in Subversion will be easy, but any history you want to keep is a lot more work), so there will be no immediate dollar savings. Moreover, switching tools is going to reduce developer productivity for a short time (possibly very short), so this is best seen as a long-term move. Make sure you get the tools you want up front, since you aren't going to want to do migrations very often. A: Clearcase is awesome. I used to think like you, but then after moving to perforce I realized how great dynamic views are. I actually asked about this in another question. Basically it is really, really hard and is made much easier if you can live without your revision history.
As for bug tracking, my experience is that open source bug tracking tools are terrible. However, using triggers, it is usually very easy to integrate them with open source source control. As an example, here is how to integrate bugzilla and subversion A: Does BasketCase cheer you up any? You might be able to modify, or at least abstract some of the environment you already have... A: I've done a migration from base ClearCase to Git using Gitcc. Worked like a charm. A: As for any tools, ClearCase comes with advantages and drawbacks. We only use it for larger projects with complex merge workflows, where UCM is very useful to visualize the different branches in advance. Right now, we are evaluating various open-source DVCS solutions, but in my opinion, they cannot handle all kinds of projects (like ones with too many files). A: Condolences are not required; it seems that if you are working in a large scale development over more than one site, then you have the right tools for the job. Attempting to make Open Source SCM products work over various sites will be a very interesting challenge - I've not seen something that will work securely, reliably and without a horrendous amount of work (though I'd love to be proved wrong!). Although your licenses do cost a considerable amount, you also have access to the IBM tech support (who I've found very useful very often). How much would it cost if your open-source environment crashed to its knees for some reason and your support network consisted of you and your colleagues? 200 developers unable to work effectively? Erk. I'd be interested to hear why you think it would improve developer productivity. Do they have specific gripes? What do they find is an issue? Could we help you from here work it out with them? In my humble opinion, Open Source tools are perfect for small to medium sized projects without a significant amount of complexity. I feel what you are attempting to do will be folly.
{ "language": "en", "url": "https://stackoverflow.com/questions/158737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is an appropriate use for ASP.NET's MultiView control? What are some scenarios where MultiView would be a good choice? The MultiView control along with its View controls simply seems to extend the notion of Panels. Both Panels and MultiViews seem prone to abuse. If your UI concerns and biz logic concerns are properly separated, why lump views together in a single ASPX? A: I've used it in the past to implement a simple Ajax-enabled tab interface. Style a button to look like a tab, then set its onClick event to switch the active view in an update panel. A: Any situation where you find yourself toggling the display of one or more panels is a prime candidate for a MultiView control. A more templated wizard control, or master / detail forms for example. I agree that they are open for abuse and you should evaluate whether you're better off separating your code into separate pages before using them. I've worked on projects where the previous developer has tried to put too much onto a single page using MultiViews and they are sheer hell to work with. One thing to be wary of with MultiViews is that unlike panels, any declarative datasource controls contained inside them will always bind, even when the view they are contained in is not active / visible. A: I have used MultiViews as a more flexible basis for a Wizard control. I do agree that lumping lots of views together is a code smell. In the case of a wizard there are often lots of pieces of state you want to share throughout the process. The multiview allows this state to be simply stored in the viewstate. Most of the time I make the contents of each view a single user control so that it can encapsulate the logic related to that particular step. A: Any time that you want to show different content on a page based on some condition. At work I've created a tab control that just uses a MultiView and another simple control I made that looks like tabs.
Each tab puts a link (which is styled) in the other control that is wired up to set the active view to the correct tab. A: It can be useful for things like online forms, where you may have one view showing the actual form and another view displayed afterward with the "thank you" text etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/158741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Can you combine multiple images into a single one using JavaScript? I am wondering if there is a way to combine multiple images into a single image using only JavaScript. Is this something that Canvas will be able to do? The effect can be done with positioning, but can you combine them into a single image for download? Update Oct 1, 2008: Thanks for the advice, I was helping someone work on a js/css only site, with jQuery and they were looking to have some MacOS dock-like image effects with multiple images that overlay each other. The solution we came up with was just absolute positioning, and using the effect on a parent <div> relatively positioned. It would have been much easier to combine the images and create the effect on that single image. It then got me thinking about online image editors like Picnik and wondering if there could be a browser based image editor with photoshop capabilities written only in javascript. I guess that is not a possibility, maybe in the future? A: MarvinJ provides the method combineByAlpha() which combines multiple images using its alpha channel.
Therefore, you just need to have your images in a format that supports transparency, like PNG, and use that method, as follow: Marvin.combineByAlpha(image, imageOver, imageOutput, x, y); image1: image2: image3: Result: Runnable Example: var canvas = document.getElementById("canvas"); image1 = new MarvinImage(); image1.load("https://i.imgur.com/ChdMiH7.jpg", imageLoaded); image2 = new MarvinImage(); image2.load("https://i.imgur.com/h3HBUBt.png", imageLoaded); image3 = new MarvinImage(); image3.load("https://i.imgur.com/UoISVdT.png", imageLoaded); var loaded=0; function imageLoaded(){ if(++loaded == 3){ var image = new MarvinImage(image1.getWidth(), image1.getHeight()); Marvin.combineByAlpha(image1, image2, image, 0, 0); Marvin.combineByAlpha(image, image3, image, 190, 120); image.draw(canvas); } } <script src="https://www.marvinj.org/releases/marvinj-0.8.js"></script> <canvas id="canvas" width="450" height="297"></canvas> A: I know this is an old question and the OP found a workaround solution, but this will work if the images and canvas are already part of the HTML page. 
<img id="img1" src="imgfile1.png"> <img id="img2" src="imgfile2.png"> <canvas id="canvas"></canvas> <script type="text/javascript"> var img1 = document.getElementById('img1'); var img2 = document.getElementById('img2'); var canvas = document.getElementById('canvas'); var context = canvas.getContext('2d'); canvas.width = img1.width; canvas.height = img1.height; context.globalAlpha = 1.0; context.drawImage(img1, 0, 0); context.globalAlpha = 0.5; //Remove if pngs have alpha context.drawImage(img2, 0, 0); </script> Or, if you want to load the images on the fly: <canvas id="canvas"></canvas> <script type="text/javascript"> var canvas = document.getElementById('canvas'); var context = canvas.getContext('2d'); var img1 = new Image(); var img2 = new Image(); img1.onload = function() { canvas.width = img1.width; canvas.height = img1.height; img2.src = 'imgfile2.png'; }; img2.onload = function() { context.globalAlpha = 1.0; context.drawImage(img1, 0, 0); context.globalAlpha = 0.5; //Remove if pngs have alpha context.drawImage(img2, 0, 0); }; img1.src = 'imgfile1.png'; </script> A: I don't think you can or would want to do this with client side javascript ("combining them into a single image for download"), because it's running on the client: even if you could combine them into a single image file on the client, at that point you've already downloaded all of the individual images, so the merge is pointless.
{ "language": "en", "url": "https://stackoverflow.com/questions/158750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Using Linq with WCF I am looking for any examples or guides to using Linq over WCF (n-tier application). Please specify if you are showing something for Linq-to-SQL or Linq-to-entities. I would like to see usage examples for both. I am wondering how things like deferred execution work over WCF (if it works at all)? Cyclic references support and so on... Any information to make this a quick-start guide to using Linq with WCF is helpful. A: There isn't any LINQ provider that I'm aware of for generic WCF-based queries. LINQ to ADO.NET Data Services, however, lets you query an Entity model over WCF/REST. From Andy Conrad's blog: static void Main(string[] args) { var context = new WebDataContext("http://localhost:18752/Northwind.svc"); var query = from p in context.CreateQuery<Product>("Products") where p.UnitsInStock > 100 select p; foreach (Product p in query) { Console.WriteLine(p.ProductName + ", UnitsInStock=" + p.UnitsInStock); } } A: You can add a Linq to SQL class to a WCF service. Then go to your datacontext in the Linq to SQL class and in the properties set Serialization Mode to Unidirectional. The entities in your Linq to SQL class will now be available through the WCF service :) A: ADO.NET Data Services is probably your best bet. There was a CodePlex project, interlinq, to be able to use arbitrary LINQ expressions with WCF which could then be processed by another LINQ provider, like LINQ to NHibernate or LINQ to SQL. Sadly this project does not appear to be very active. Good luck.
{ "language": "en", "url": "https://stackoverflow.com/questions/158760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Handle exception on service startup I'm writing a series of Windows services. I want them to fail if errors are thrown during startup (in the OnStart() method). I had assumed that merely throwing an error in OnStart() would do this, but I'm finding that instead it "Starts" and presents me with a message stating "The service has started, but is inactive. Is this correct?" (Paraphrase). How do I handle the error so it actually fails to start the service? A: If you are running .NET 2.0 or higher, you can use ServiceBase.Stop to stop the service from OnStart. Otherwise call Stop from a new thread. ref [devnewsgroups] (http://www.devnewsgroups.net/group/microsoft.public.dotnet.framework/topic50404.aspx) (news group gone) A: Move all of your startup logic to a separate method, and throw exceptions (or call OnStop) from that separate method. OnStart has some oddities when starting up. I have found that if OnStart() has no more than one line in it, then I don't get the "The service started and then stopped. Some services stop automatically if they have no work to do" message, and thrown exceptions will terminate the process and log to the app event log. Also with the separate startup method, you can use a technique like this to debug it without attaching. http://www.codeproject.com/KB/dotnet/DebugWinServices.aspx A: If the main thing you want is for the Services window to report that there was an error, from what I've tried (.NET 3.5 on Windows 7), the only way to do this is by setting the ExitCode. I recommend setting it to 13816, since this results in the message, "An unknown error has occurred." See the Windows error codes. The sample below accomplishes three things. * *Setting ExitCode results in a useful message for the end-user. It doesn't affect the Windows Application log but does include a message in the System log. *Calling Stop results in a "Service successfully stopped" message in the Application log.
*Throwing the exception results in a useful log entry in the Application log. protected override void OnStart(string[] args) { try { // Start your service } catch (Exception ex) { // Log the exception here this.ExitCode = 13816; this.Stop(); throw; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/158772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Are there any data warehouse frameworks? I've got a lot of MySQL data that I need to generate reports from. It's mostly historic data so it won't be changing much, but it weighs in at 20-30 gigabytes easily and is expected to grow. I currently have a collection of PHP scripts that will do some complex queries and output CSV and Excel files. I also use phpMyAdmin with bookmarked queries. I manually edit them to change the parameters. The amount of data is growing and the number of people who need access to it is also growing, so I'm making the time to improve this situation. I started reading about data warehousing the other day and it seems that this is an area that relates to what I need to do. I've read some good articles and am even waiting on a book. I think I'm getting a handle on what these sorts of systems do and what's possible. Creating a reporting system for my data has always been on a to-do list, but until recently I figured it would be a highly niche programming venture. Since I now know data warehousing is a common thing, I figure there must be some sort of reporting/warehousing frameworks available to ease the development. I'd gladly skip writing interfaces and scripts to schedule and email reports and the like and stick to writing queries and setting up relations. I've mostly been a LAMP guy, but I'm not above switching languages or platforms. I just need a more robust solution as my one-off scripts don't scale well. So where's a good place to get started? A: I'll discuss a few points on the {budget, business utility function, time frame} spectrum out there. For convenience, let's follow the architecture conceptualization you linked to at WikipediaDataWarehouseArticle * *Operational database layer The source data for the data warehouse - Normalized for In One Place Only data maintenance *Data access layer The transformation of your source data into your informational access layer.
ETL tools to extract, transform, load data into the warehouse fall into this layer. *Informational access layer   • Report-facilitating Data Structure       Data is not maintained here. It is merely a reflection of your source data       Hence, denormalized structures (containing duplicate, but systematically derived data)       are usually most effective here   • Reporting tools       How do you actually allow your users access to the data       • pre-canned reports (simple)       • more dynamic slice-and-dice access methods         The data accessed for reporting and analyzing and the tools for reporting and analyzing data         fall into this layer. And the Inmon-Kimball differences about design methodology,         discussed later in the Wikipedia article, have to do with this layer. * *Metadata layer (facilitates automation, organization, etc) Roll your own (low-end) For very little out-of-pocket cost, just recognizing the need for the denormalized structures can buy those that are not using it some efficiencies Get in the ballgame (some outlays required) You don't need to use all the functionality of a platform right off the bat. IMO, however, you want to be on a platform that you know will grow, and in the highly competitive and consolidating BI environment, that seems to be one of the four enterprise mega-vendors (my opinion) * *Microsoft (the platform of our 110 employee firm) *SAP *Oracle *IBM     BiMarketStateArticle My firm is at this stage, using some of the ETL capability offered by SQL Server Integration Services (SSIS) and some alternate usage of the open source, but in practice license requiring Talend product in the "Data Access Layer", a denormalized reporting structure (implemented completely in the basic SQL Server database), and SQL Server Reporting Services (SSRS) to largely automate (based on your skill) the production of pre-specified reports. 
Note that an SSRS "report" is merely a (scalable) XML configuration/specification that gets rendered at runtime via the SSRS engine. Choices such as export to an Excel file are simple options. Serious Commitment (some significant human commitment required) Notice above that we have yet to utilize the data mining/dynamic slicing/dicing capabilities of SQL Server Analysis Services. We are working toward that, but now focused on improving the quality of our data cleansing in the "Data Access Layer". I hope this helps you to get a sense of where to start looking. A: Pentaho has put together a pretty comprehensive suite of products. The products are "free", but be prepared for the usual heavy sell once you fork over your identifying information. I haven't had a chance to really stretch them as we're a Microsoft shop from one sad end to the other. A: I think you should first check out Kimball and Inmon and see if you want to approach your data warehouse in a particular way. Kimball, in particular, lays out a very good framework for the modelling and construction of the warehouse. A: There are a number of tools which try to ease the process of designing, implementing and managing/operating a Data Warehouse, and they each have their strengths and weaknesses and often vastly differing price points. Under the covers you are always going to be best off if you have a good knowledge of warehousing principles from the Kimball and/or Inmon camps. As well as tools like Kalido and Wherescape RED (which do a similar thing in very different ways), many of the ETL platforms now have good in-built support for the donkey work of implementation - SCD components etc. and lineage tracking.
Best, though, to view all these as tools to be used in the hands of you, the craftsman: they make certain easy things even easier (or even trivial), and some hard things easier, but some things they just get in the way of, IMHO ;) Learn the methodology and principles first and get a good understanding of them, and then you will know which tools to apply from your kitbag and when... A: It hasn't been updated in a while but there's a nice Data Warehousing/ETL Ruby package called ActiveWarehouse. But I would check out the Pentaho products like Nick mentioned in another answer. It should easily handle the volume of data you have and may provide you with more ways to slice and dice your data than you could have ever imagined. A: The best framework you can currently get is Anchor Modeling. It might look quite complex because of its generic structure and built-in capability to historize data. Also, the modeling technique is quite different from ERD. But you end up with SQL code to generate all DB objects, including 3NF views, and: * *insert/update handled by triggers *query any point/range in history *your application developers will not see the underlying 6NF anchor model. The technology is open-sourced and at the moment is unbeatable. If you have an AM question you may want to ask on the tag anchor-modeling. A: Kimball is the simpler method for data warehousing. We use Informatica for moving data around, but it doesn't do DW things like indexing by default. I like the idea of Wherescape RED, as a DW tool and using MS SQL's Linked Servers to obviate the need for an ETL tool.
{ "language": "en", "url": "https://stackoverflow.com/questions/158775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Should I return null from or apply the "null object" pattern to a function returning a Date? Let's say you have a function that returns a date: Date myFunc(paramA, paramB){ //conditionally return a date? } Is it appropriate to return null from this function? This seems ugly because it forces clients to check for null. The "null object" pattern is an implementation pattern that addresses this concern. I'm not a huge fan of the null object pattern, but yes, it makes sense to always return a list, even if it is empty, rather than to return null. However, say in Java, a null date would be one that is cleared and has the year 1970. What is the best implementation pattern here? A: The null object pattern is not for what you are trying to do. That pattern is about creating an object with no functionality in its implementation that you can pass to a given function that requires a non-null object. An example is NullProgressMonitor in Eclipse, which is an empty implementation of IProgressMonitor. If you return a "null" date, like 1970, your clients will still need to check if it's "null" by seeing if it's 1970. And if they don't, misbehaviour will happen. However, if you return null, their code will fail fast, and they'll know they should check for null. Also, 1970 could be a valid date. You should document that your method may return null, and that's it. A: null is quite acceptable. However, if you want to return null on an error, consider throwing an exception instead. A: If it is possible a date won't be found then the null makes sense. Otherwise you end up returning some magical date (like the 1970 epoch) that will frustrate people hooking into the function far more than just getting a null returned. Document that it could return null, however... A: I'm not a fan of the null object pattern. If null is a valid and intended return value, then return it. If it's caused by an error condition, an exception would make more sense.
Sometimes, the real problem is the method should be returning a more complex type that does represent more information. In those cases it's easy to fall into a trap and return some basic type, plus some special magic values to represent other states. A: It seems like the expected result from this method is a Date, or none found. The none-found case is typically represented by returning null. Although some would use an exception to represent this case, I would not (as it is an expected result and I have never been a fan of processing by exception). The null object pattern is not appropriate for this case, as has been stated. In fact, from my own experience, it is not appropriate for many cases. Of course, I have some bias due to some experience with it being badly misused ;-) A: If it's not a performance hit I like to have an explicit query method and then use exceptions: if(employee.hasCustomPayday()) { //throws a runtime exception if no payday Date d = emp.customPayday(); } A: Use exceptions if this is not a scenario that should usually happen. Otherwise (if this is, for example, an end date for an event), just return null. Please avoid magic values in any case ;) A: You could try using an output parameter: boolean MyFunction(Object a, Object b, Date c) { if (good) c.setDate(....); return good; } Then you can call it: Date theDate = new Date(); if (MyFunction(a, b, theDate)) { // do stuff with theDate } It still requires you to check something, but there isn't a way of avoiding some checking in this scenario. Although setDate is deprecated, and the Calendar implementation is just ugly. Stupidest API change Sun ever did.
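To make the trade-off discussed in this thread concrete, here is a minimal Java sketch (the class, method, and constant names are invented for illustration): a sentinel "null object" date still forces callers to compare against a magic value, while a documented null fails fast if a caller forgets the check.

```java
import java.util.Date;

public class PaydayLookup {
    // Sentinel ("null object") approach: callers must still compare against
    // this constant, so it is no safer than null -- and 1970-01-01 could be
    // a legitimate date in some other context.
    public static final Date NO_PAYDAY = new Date(0L);

    /**
     * Documented-null approach: returns the custom payday, or null if the
     * employee has none. Callers are expected to check for null.
     */
    public static Date findPayday(boolean hasCustomPayday) {
        if (hasCustomPayday) {
            return new Date(1000L * 60 * 60 * 24); // placeholder date
        }
        return null;
    }
}
```

With the documented null, a caller that skips the check gets an immediate NullPointerException at the point of misuse; with the sentinel it silently computes with 1970, which is exactly the misbehaviour the first answer warns about.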
{ "language": "en", "url": "https://stackoverflow.com/questions/158778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Update fonts recursively on a Delphi form I'm trying to iterate all the controls on a form and enable ClearType font smoothing. Something like this: procedure TForm4.UpdateControls(AParent: TWinControl); var I: Integer; ACtrl: TControl; tagLOGFONT: TLogFont; begin for I := 0 to AParent.ControlCount-1 do begin ACtrl := AParent.Controls[I]; // if ParentFont=False, update the font here... if ACtrl is TWinControl then UpdateControls(ACtrl as TWinControl); end; end; Now, is there an easy way to check if ACtrl has a Font property so I can pass the Font.Handle to something like: GetObject(ACtrl.Font.Handle, SizeOf(TLogFont), @tagLOGFONT); tagLOGFONT.lfQuality := 5; ACtrl.Font.Handle := CreateFontIndirect(tagLOGFONT); Thank you in advance. A: You can use the TypInfo unit, more specifically the methods IsPublishedProp and GetOrdProp. In your case, it would be something like: if IsPublishedProp(ACtrl, 'Font') then ModifyFont(TFont(GetOrdProp(ACtrl, 'Font'))) A fragment from one of my libraries that should put you on the right path: function ContainsNonemptyControl(controlParent: TWinControl; const requiredControlNamePrefix: string; const ignoreControls: string = ''): boolean; var child : TControl; iControl: integer; ignored : TStringList; obj : TObject; begin Result := true; if ignoreControls = '' then ignored := nil else begin ignored := TStringList.Create; ignored.Text := ignoreControls; end; try for iControl := 0 to controlParent.ControlCount-1 do begin child := controlParent.Controls[iControl]; if (requiredControlNamePrefix = '') or SameText(requiredControlNamePrefix, Copy(child.Name, 1, Length(requiredControlNamePrefix))) then if (not assigned(ignored)) or (ignored.IndexOf(child.Name) < 0) then if IsPublishedProp(child, 'Text') and (GetStrProp(child, 'Text') <> '') then Exit else if IsPublishedProp(child, 'Lines') then begin obj := TObject(cardinal(GetOrdProp(child, 'Lines'))); if (obj is TStrings) and (Unwrap(TStrings(obj).Text, child) <> '') then Exit; end; end; //for iControl
finally FreeAndNil(ignored); end; Result := false; end; { ContainsNonemptyControl } A: There's no need to use RTTI for this. Every TControl descendant has a Font property. At TControl level its visibility is protected but you can use this workaround to access it: type THackControl = class(TControl); ModifyFont(THackControl(AParent.Controls[I]).Font); A: One other thing worth mentioning. Every control has a ParentFont property, which - if set - allows the Form's font choice to ripple down to every control. I tend to make sure ParentFont is set true wherever possible, which also makes it easier to theme forms according to the current OS. Anyway, surely you shouldn't need to do anything to enable ClearType smoothing? It should just happen automatically if you use a TrueType font and the user has enabled the Cleartype "effect". A: Here's a C++Builder example of TOndrej's answer: struct THackControl : TControl { __fastcall virtual THackControl(Classes::TComponent* AOwner); TFont* Font() { return TControl::Font; }; }; for(int ControlIdx = 0; ControlIdx < ControlCount; ++ControlIdx) { ((THackControl*)Controls[ControlIdx])->Font()->Color = clRed; }
{ "language": "en", "url": "https://stackoverflow.com/questions/158780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a way to override ConfigurationManager.AppSettings? I really want to be able to have a way to take an app that currently gets its settings using ConfigurationManager.AppSettings["mysettingkey"] to actually have those settings come from a centralized database instead of the app.config file. I can make a custom config section for handling this sort of thing, but I really don't want other developers on my team to have to change their code to use my new DbConfiguration custom section. I just want them to be able to call AppSettings the way they always have but have it be loaded from a central database. Any ideas? A: If you don't mind hacking around the framework and you can reasonably assume the .net framework version the application is running on (i.e. it's a web application or an intranet application) then you could try something like this: using System; using System.Collections.Specialized; using System.Configuration; using System.Configuration.Internal; using System.Reflection; static class ConfigOverrideTest { sealed class ConfigProxy:IInternalConfigSystem { readonly IInternalConfigSystem baseconf; public ConfigProxy(IInternalConfigSystem baseconf) { this.baseconf = baseconf; } object appsettings; public object GetSection(string configKey) { if(configKey == "appSettings" && this.appsettings != null) return this.appsettings; object o = baseconf.GetSection(configKey); if(configKey == "appSettings" && o is NameValueCollection) { // create a new collection because the underlying collection is read-only var cfg = new NameValueCollection((NameValueCollection)o); // add or replace your settings cfg["test"] = "Hello world"; o = this.appsettings = cfg; } return o; } public void RefreshConfig(string sectionName) { if(sectionName == "appSettings") appsettings = null; baseconf.RefreshConfig(sectionName); } public bool SupportsUserConfig { get { return baseconf.SupportsUserConfig; } } } static void Main() { // initialize the ConfigurationManager object o = 
ConfigurationManager.AppSettings; // hack your proxy IInternalConfigSystem into the ConfigurationManager FieldInfo s_configSystem = typeof(ConfigurationManager).GetField("s_configSystem", BindingFlags.Static | BindingFlags.NonPublic); s_configSystem.SetValue(null, new ConfigProxy((IInternalConfigSystem)s_configSystem.GetValue(null))); // test it Console.WriteLine(ConfigurationManager.AppSettings["test"] == "Hello world" ? "Success!" : "Failure!"); } } A: Whatever you do, you will need to add one layer of redirection. ConfigurationManager.AppSettings["key"] will always look in the configuration file. You can make a ConfigurationFromDatabaseManager, but this will result in using a different calling syntax: ConfigurationFromDatabaseManager.AppSettings["key"] instead of ConfigurationManager.AppSettings["key"]. A: I'm not sure you can override it, but you can try the Add method of AppSettings to add your DB settings when the application starts. A: I would try to write an application starter and load the settings from the database to the application domain. So the app doesn't know anything about how its configuration is generated. Using machine.config leads directly into DLL-hell 2.0. A: If you can save your modified config file to disk, you can load an alternative config file in a different application domain: AppDomain.CreateDomain("second", null, new AppDomainSetup { ConfigurationFile = options.ConfigPath, }).DoCallBack(...); A: It appears there is a way to do this in .NET 3.5 by setting the allowOverride attribute in the appSettings definition section of machine.config. This allows you to override the entire section in your own app.config file and specify a new type to handle it.
{ "language": "en", "url": "https://stackoverflow.com/questions/158783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Get the NTLM credentials from the Response on an ASPX page I have an ASPX page (on server A) which is invoked using NTLM credentials. Part of that page's job is to call an HTML page (on server B) and proxy it back to the client. (The firewall allows access to A, but not to B. The user would normally be allowed access to both servers.) Server B is also not open to anonymous access, so I need to supply credentials to it. If I hardcode some credentials (as per the attached code), it works, but ideally I would echo the credentials that were received by the .aspx page. Is there some way to get those NetworkCredentials so I can pass them on? protected void Page_Load(object sender, EventArgs e) { Response.Clear(); WebClient proxyFile = new WebClient(); CredentialCache cc = new CredentialCache(); cc.Add(new Uri("http://serverB/"), "NTLM", new NetworkCredential("userName", "password", "domain")); proxyFile.Credentials = cc; Stream proxyStream = proxyFile.OpenRead("http://serverB/Content/webPage.html"); int i; do { i = proxyStream.ReadByte(); if (i != -1) { Response.OutputStream.WriteByte((byte)i); } } while (i != -1); Response.End(); } A: You can certainly obtain the login name of the caller, but not the password. NTLM uses a challenge/response mechanism, so the password is never transmitted. Your server must have access to a password-equivalent (a hash) in order to form the challenge and check the response, but even if you can get hold of it that password-equivalent will be no use to you in trying to form credentials that will be accepted by server B. If you can set up impersonation, as described in another answer, even that doesn't necessarily get you what you want. By default, an impersonating server process is not allowed to transmit its identity to another server. That second hop is known as delegation and needs to be configured explicitly on the servers involved (and/or in Active Directory).
Apart from delegation I think your only option is to maintain a database of credentials that server A can access and present to server B. Building that in a secure manner is a subtle and time-consuming process. On the other hand, there is a reason why delegation is disabled by default. When I log into a server, do I want it to be allowed to use my identity for accessing other servers? Delegation is the simplest option for you, but you'll need to be sure that server A can't be compromised to do irresponsible things with your users' identities. A: Page.User will get you the Security Principal of the user the page is running under. From there you should be able to figure it out. A: Can you in your scenario impersonate the caller's identity? That way you wouldn't even need to pass along credentials, e.g.: <authentication mode="Windows" /> <identity impersonate="true" /> in the web.config of server A. But this of course depends on your situation, as you may not want that for server A. But if you can, this could solve your problem without custom code. Here's a link for setting up impersonation: http://msdn.microsoft.com/en-us/library/ms998351.aspx#paght000023_impersonatingorigcaller
{ "language": "en", "url": "https://stackoverflow.com/questions/158800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Change .NET Framework version of application pool to 3.5? I've installed .NET Framework 3.5 SP1 on a web server (Server 2008 Enterprise), so it is running IIS 7.0. I want to change the version of the .NET Framework used by an existing site, so I right-clicked on the appropriate Application Pool and selected Edit Application Pool. The .NET Framework dropdown does not include an explicit entry for framework 3.5, but just 2.0.50727. Is this just because the version of the core RTL in 3.5 is still 2.0? Or do I need to do something additional to get IIS to see version 3.5? (I did try restarting IIS.) A: Is this just because the version of the core RTL in 3.5 is still 2.0? YES A: The dropdown in question reflects the version of the CLR loaded into your application pool's process space, which must be unique (you can't load a 1.1 CLR into a process with an already loaded 2.0 CLR, and vice versa). However, .NET 3.5 uses the v2.0 CLR - the only things added are new versions of libraries and some compiler support around the new features in VB.NET and C#. Even with 2.0 selected, you'll be using 3.5 as long as your assemblies reference the 3.5 versions of assemblies. A: The 3.5 framework still runs on top of the 2.0 CLR so what you are seeing is correct. Scott Hanselman has a nice blog post about the details of this: The marketing term ".NET Framework 3.5" refers to a few things. First, LINQ, which is huge, and includes new language compilers for C# and VB. Second, the REST support added to Windows Communication Foundation, as well as, third, the fact that ASP.NET AJAX is included, rather than a separate download as it was before in ASP.NET 2.0. There's a few other things in .NET 3.5, like SP1 of .NET 2.0 to fix bugs, but one way to get an idea of what's been added in .NET 3.5 is to look in c:\windows\assembly. A: You do not need to do anything more, other than have a properly configured web.config. A: We just installed the 3.5 framework on our server (Windows Server 2003 / IIS 6), rebooted, and that was it.
Of course, you have to have applications developed against version 3.5 of the framework, but it isn't like the change from 1.1 to 2.0, where you need to change the .Net settings in your web site properties using IIS Manager.
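For reference, the "properly configured web.config" mentioned above amounts to roughly what Visual Studio 2008 generates for a 3.5 site: the site still runs on the 2.0 CLR, but pages compile against the 3.5 libraries and the C# 3.0 compiler. The fragment below is a sketch from memory, not a canonical file; exact assembly version strings should be checked against a real VS 2008-generated web.config.

```xml
<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <!-- 3.5 assemblies, loaded into the v2.0 runtime -->
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
      </assemblies>
    </compilation>
  </system.web>
  <system.codedom>
    <compilers>
      <!-- Compile pages with the C# 3.0 compiler targeting 3.5 -->
      <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4"
                type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
        <providerOption name="CompilerVersion" value="v3.5" />
      </compiler>
    </compilers>
  </system.codedom>
</configuration>
```

This is why the application pool dropdown never changes: the hosted runtime stays 2.0, and the 3.5 behaviour comes entirely from configuration and referenced assemblies.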
{ "language": "en", "url": "https://stackoverflow.com/questions/158804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: jEditable input-box CSS style I am trying to style an input-box rendered using jEditable. I want to change the color of the table-cell when the text is double-clicked and made editable. Like this (screenshot): http://www.hongaijitsu.com/temp/foobar/public/Picture-2.png This is where I am at the moment: jEditable CSS Problem (double-click the text in the table-cells) HTML snippet: <tr class="odd"> <td class="dblclick" headers="customer-name">Drogo Shankara</td> <td class="dblclick" headers="mail">dshan@gmail.com</td> <td class="dblclick" headers="type">Web</td> <td class="dblclick" headers="status">Pending mail</td> </tr> jQuery code: $(function() { $(".dblclick").editable("#", { tooltip : "Doubleclick to edit...", event : "dblclick", css : 'inherit' }); }); Corresponding CSS: .dblclick form { height: inherit !important; width: inherit !important; border: 0; margin: 0; padding: 0; background: red; } .dblclick input { height: inherit !important; width: inherit !important; border: 0; margin: 0; padding: 0; font: inherit; color: inherit; } I want the input-box to inherit the height & width from the parent table-cell, but when I look at the input-box in Firebug it has an inline CSS height & width already set, causing the table-cell to scale when the td text is clicked. I try to override the inline CSS with inherit !important, but it doesn't work. There is some concept in play here that I haven't fully understood, but it could be something totally banal. Any ideas what is wrong? A: jQuery/JavaScript is manipulating the DOM and dynamically adding/setting the widths of the input fields each time you double-click. Since inline styles (here dynamically generated in the DOM) take precedence over all other styles, you cannot alter dynamically rendered inline styles with new attributes in the attached class.
If you would like to get rid of the strange jumping effect, remove the attribute setting the width of the entire table in your screen.css file: table { border-collapse: collapse; /* width: 940px; ...remove */ } It seems that the code gets confused when calculating the width to set the input field to when using a fixed table width (or maybe there is CSS somewhere there that is "clashing"). When I removed the width from the table, the functionality works and looks OK. Hope this helps, let me know if it doesn't...
{ "language": "en", "url": "https://stackoverflow.com/questions/158806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Costs vs Consistent gets What does it indicate to see a query that has a low cost in the explain plan but a high consistent gets count in autotrace? In this case the cost was in the 100's and the CR's were in the millions. A: At best, the cost is the optimizer's estimate of the number of I/O's that a query would perform. So, at best, the cost is only likely to be accurate if the optimizer has found a very good plan-- if the optimizer's estimate of the cost is correct and the plan is ideal, that generally means that you're never going to bother looking at the plan because that query is going to perform reasonably well. Consistent gets, however, is an actual measure of the number of gets that a query actually performed. So that is a much more accurate benchmark to use. Although there are many, many things that can influence the cost, and a few things that can influence the number of consistent gets, it is probably reasonable to expect that if you have a very low cost and a very high number of consistent gets that the optimizer is probably working with poor estimates of the cardinality of the various steps (the ROWS column in the PLAN_TABLE tells you the expected number of rows returned in each step). That may indicate that you have missing or outdated statistics, that you are missing some histograms, that your initialization parameters or system statistics are wrong in some way, or that the CBO has problems for some other reason estimating the cardinality of your results. What version of Oracle are you using? A: The cost can represent two different things depending on version and whether you are running in CPU-based costing mode or not. Briefly, the cost represents the amount of time that the optimizer expects the query to execute for, but it is expressed in units of the amount of time that a single block read takes. For example, if Oracle expects a single block read to take 1 ms and the query to take 20 ms, then the cost equals 20.
Consistent gets do not match exactly with this for a number of reasons: the cost includes non-consistent (current) gets (e.g. reading and writing temp data), the cost includes CPU time, and a consistent get can be a multiblock read instead of a single block read and hence have a different duration. Oracle can also get the estimate of the cost completely wrong, and it could end up requiring a lot more or fewer consistent gets than the estimate suggested. A useful method that can help explain disconnects between the predicted execution plan and actual performance is "cardinality feedback". See this presentation: http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.ppt.pdf
{ "language": "en", "url": "https://stackoverflow.com/questions/158814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How hard is it to migrate a web app from localhost to a hosting platform? Since I'm not a huge fan of any of the current solutions for managing the resources and knowledge that I have, I was thinking about making my own solution, which will involve custom code as well as possible integration of FOSS solutions. I would start development on my local machine, but if I like it, how difficult would it be to migrate over to a public server and let others also use this tool? What kinds of challenges might I be facing? A: In theory, nothing, beyond just the process of moving stuff to the new machine. You can set up your own servers, on your own ports (port 80 for example). You can even create your own fake domain at home, with just a tweak to the /etc/hosts file (or the equivalent on Windows). Now, if you're developing on Windows and hosting on Unix, you'll have platform issues, so I'd suggest against that, at least for a first project. But other than that, it should be straightforward. A: You didn't hard-code any paths to "localhost", did you? If so, that should be the first thing to strip out. Either use relative paths, or have a configurable {AppPath} variable of some kind that you only ever need to change once. By the way, what language/framework are you using? It would help us provide sample code. A: I would add that documentation is a highly important factor in any project if it is to be quickly embraced by the public. The tendency when developing in-house projects, especially if they are just for your own personal use, is to neglect, or even completely ignore, documentation of all kinds, both of usage and in the code. If users aren't told how to use the product, they won't use it, and if other potential developers don't know how or why things are done the way they are, or what the purpose of things is, they either won't bother trying, or will cause other problems unintentionally.
{ "language": "en", "url": "https://stackoverflow.com/questions/158816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Create JSON with .net First off, let me say that I am not a .NET developer. The reason why I am asking this question is that we rolled out our REST API and one of our first integration partners is a .NET shop. So basically we assumed that .NET would provide some sort of wrapper to create JSON, but the developer in question created the string by hand. I've researched this topic a bit and I couldn't really find anything, though I believe .NET provides something. :) 'current code Dim data As String data = "[hello, world]" In PHP I would do the following (assuming ext/json is available ;): <?php $json = array('hello', 'world'); $json = json_encode($json); I am also interested in what you use to decode the JSON into an array/object structure. Help is very appreciated. A: Json.Net is an easy to use library with some cool features. A: JavaScriptSerializer is very straightforward. Person person = new Person(); JavaScriptSerializer serializer = new JavaScriptSerializer(); String json = serializer.Serialize(person); A: I'm with Wayne - JSON.net works well. The nice thing is, it works well with no learning curve. A: There are a couple first-party and third-party options. Rick Strahl has a good overview. JSON.net is the most popular third-party option. A: See Is there a built in way in .Net AJAX to manually serialize an object to a JSON string? Which is to say, in .NET 2.0, Dim yourData As String() = { "Hello", "World" } Dim jsonSerialiser As New System.Web.Script.Serialization.JavaScriptSerializer Dim jsonString as String = jsonSerialiser.Serialize(yourData) In .NET 3.5, send them to Rick Strahl's blog, mentioned above A: Check out DataContractJsonSerializer.
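For comparison with the PHP snippet in the question, here is the same encode/decode round trip in Python's standard library — not one of the .NET options the answers discuss, just an illustration of what a built-in JSON wrapper typically looks like:

```python
import json

payload = ["hello", "world"]

encoded = json.dumps(payload)   # analogous to PHP's json_encode()
decoded = json.loads(encoded)   # analogous to json_decode($json, true)

print(encoded)  # ["hello", "world"]
assert decoded == payload
```

The .NET JavaScriptSerializer and Json.NET calls shown in the answers play the same role as dumps/loads here.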
{ "language": "en", "url": "https://stackoverflow.com/questions/158818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Managed C++ Assembly Attributes Is there a way to add assembly attributes to a Managed C++ assembly? In a typical C# project, there is usually a line of code in the AssemblyInfo.cs file like the following one: [assembly: AssemblyTitle("Some Assembly")] I have a private assembly attribute that I want to add (not one of the version attributes that could be added through a resource file), and I am not sure if this is possible. A: It is possible - the easy way is to add an AssemblyInfo.cpp file and put: #include "attributes.h" // your attribute decl [assembly: MyCustomAttribute()]; It can be in any file you want, though. Edit - added required semicolon for assembly attribute
{ "language": "en", "url": "https://stackoverflow.com/questions/158821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Random element in a map what is a good way to select a random element from a map? C++. It is my understanding that maps don't have random access iterators. The key is a long long and the map is sparsely populated. A: Maybe draw up a random key, then use lower_bound to find the closest key actually contained. A: Continuing ryan_s theme of preconstructed maps and fast random lookup: instead of vector we can use a parallel map of iterators, which should speed up random lookup a bit. map<K, V> const original; ... // construct index-keyed lookup map map<unsigned, map<K, V>::const_iterator> fast_random_lookup; map<K, V>::const_iterator it = original.begin(), itEnd = original.end(); for (unsigned i = 0; it != itEnd; ++it, ++i) { fast_random_lookup[i] = it; } // lookup random value V v = *fast_random_lookup[random_0_to_n(original.size())]; A: map<...> MyMap; iterator item = MyMap.begin(); std::advance( item, random_0_to_n(MyMap.size()) ); A: If your map is static, then instead of a map, use a vector to store your key/value pairs in key order, binary search to look up values in log(n) time, and the vector index to get random pairs in constant time. You can wrap the vector/binary search to look like a map with a random access feature. A: I like James' answer if the map is small or if you don't need a random value very often. If it is large and you do this often enough to make speed important you might be able to keep a separate vector of key values to select a random value from. map<...> MyMap; vector<...> MyVecOfKeys; // <-- add keys to this when added to the map. map<...>::key_type key = MyVecOfKeys[ random_0_to_n(MyVecOfKeys.size()) ]; map<...>::data_type value = MyMap[ key ]; Of course if the map is really huge you might not be able to store a copy of all the keys like this. If you can afford it though you get the advantage of lookups in logarithmic time. A: Maybe you should consider Boost.MultiIndex, although note that it's a little too heavy-weighted. 
A: Here is the case when all map items must be access in random order. * *Copy the map to a vector. *Shuffle vector. In pseudo-code (It closely reflects the following C++ implementation): import random import time # populate map by some stuff for testing m = dict((i*i, i) for i in range(3)) # copy map to vector v = m.items() # seed PRNG # NOTE: this part is present only to reflect C++ r = random.Random(time.clock()) # shuffle vector random.shuffle(v, r.random) # print randomized map elements for e in v: print "%s:%s" % e, print In C++: #include <algorithm> #include <iostream> #include <map> #include <vector> #include <boost/date_time/posix_time/posix_time_types.hpp> #include <boost/foreach.hpp> #include <boost/random.hpp> int main() { using namespace std; using namespace boost; using namespace boost::posix_time; // populate map by some stuff for testing typedef map<long long, int> Map; Map m; for (int i = 0; i < 3; ++i) m[i * i] = i; // copy map to vector #ifndef OPERATE_ON_KEY typedef vector<pair<Map::key_type, Map::mapped_type> > Vector; Vector v(m.begin(), m.end()); #else typedef vector<Map::key_type> Vector; Vector v; v.reserve(m.size()); BOOST_FOREACH( Map::value_type p, m ) v.push_back(p.first); #endif // OPERATE_ON_KEY // make PRNG ptime now(microsec_clock::local_time()); ptime midnight(now.date()); time_duration td = now - midnight; mt19937 gen(td.ticks()); // seed the generator with raw number of ticks random_number_generator<mt19937, Vector::iterator::difference_type> rng(gen); // shuffle vector // rng(n) must return a uniformly distributed integer in the range [0, n) random_shuffle(v.begin(), v.end(), rng); // print randomized map elements BOOST_FOREACH( Vector::value_type e, v ) #ifndef OPERATE_ON_KEY cout << e.first << ":" << e.second << " "; #else cout << e << " "; #endif // OPERATE_ON_KEY cout << endl; } A: Has anyone tried this? https://github.com/mabdelazim/Random-Access-Map "C++ template class for random access map. 
This is like the std::map but you can access items random by index with syntax my_map.key(i) and my_map.data(i)" A: std::random_device dev; std::mt19937_64 rng(dev()); std::uniform_int_distribution<size_t> idDist(0, elements.size() - 1); auto elementId= elements.begin(); std::advance(elementId, idDist(rng)); Now elementId is random :)
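The lower_bound suggestion in the first answer ("draw up a random key, then use lower_bound to find the closest key actually contained") can be sketched with Python's standard bisect module standing in for the map's ordered keys. One caveat worth noting: in a sparse map this picks keys non-uniformly — a key sitting after a large gap is drawn more often:

```python
import bisect
import random

def random_key_lower_bound(sorted_keys, rng=random):
    """Draw a random value in the key range, then snap to the nearest
    key at or above it (the std::map::lower_bound trick)."""
    draw = rng.randint(sorted_keys[0], sorted_keys[-1])
    idx = bisect.bisect_left(sorted_keys, draw)
    if idx == len(sorted_keys):  # drew past the largest key
        idx = len(sorted_keys) - 1
    return sorted_keys[idx]

keys = [1, 4, 9, 1_000_000]  # sparse keys, like a sparsely populated long long map
print(random_key_lower_bound(keys) in keys)  # True
```

For a uniform pick, the advance-the-iterator or parallel-vector answers above are the better fit; this trick trades uniformity for O(log n) lookups with no extra bookkeeping.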
{ "language": "en", "url": "https://stackoverflow.com/questions/158836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: mod_rewrite for trailing slash problem I'm pulling my hair out on what should be an insanely simple problem. We are running WebSphere IHS (Apache) through an F5 BigIP. BigIP is doing the https translation for us. Our url (changed for web, not valid) is https://superniftyserver.com/lawson/portal. When someone types in just that without the slash after portal, Apache assumes "portal" to be a file and not a directory. When Apache finds out what it is, it sends the 301 Permanent Redirect. But since Apache knows only http, it sends the URL as http://superniftyserver.com/lawson/portal/ which then creates problems. So I tried a server level httpd.conf change for mod_rewrite, this is one of the dozens of combinations I've tried. RewriteEngine on RewriteRule ^/lawson/portal(.*) /lawson/portal/$1 I also tried RewriteRule ^/lawson/portal$ /lawson/portal/ Among many other things... What am I missing? A: If you can't get an answer on the RewriteRule syntax, here are two other options for you: Write an custom iRule on BigIp (see F5 DevCentral) that looks for 301 responses and convert them to SSL; let the URL pass into your WebSphere server and do a programmatic redirect that sends out HTTPS. However, because F5 terminates the SSL connection, you have to set a custom header that you configure (see PQ86347) so the Java request.getScheme() works as you would expect. A: Fixed! SOL6912: Configuring an HTTP profile to rewrite URLs so that redirects from an HTTP server specify the HTTPS protocol Updated: 8/7/07 12:00 AM A ClientSSL virtual server is typically configured to accept HTTPS connections from a client, decrypt the SSL session, and send the unencrypted HTTP request to the web server. When a requested URI does not include a trailing slash (a forward slash, such as /, at the end of the URI), some web servers generate a courtesy redirect. Without a trailing slash, the web server will first treat the resource specified in the URI as a file. 
If the file cannot be found, the web server may search for a directory with the same name and if found, send an HTTP 302 redirect response back to the client with a trailing slash. The redirect will be returned to the client in HTTP mode rather than HTTPS, causing the SSL session to fail. Following is an example of how an HTTP 302 redirect response causes the SSL session to fail: · To request an SSL session, a user types https://www.f5.com/stuff without a trailing slash. · The client browser sends an SSL request to the ClientSSL virtual server, which resides on the BIG-IP LTM system. · The BIG-IP LTM system then decrypts the request and sends a GET /stuff command to the web server. · Since the /stuff file does not exist on the web server, but a /stuff/ virtual directory exists, the web server sends an HTTP 302 redirect response for the directory, but appends a trailing slash to the resource. When the web server sends the HTTP 302 redirect response, it specifies HTTP (not HTTPS). · When the client receives the HTTP 302 redirect response, it sends a new request to the BIG-IP LTM virtual server that specifies HTTP (not HTTPS). As a result, the SSL connection fails. Configuring an HTTP profile to rewrite URLs In BIG-IP LTM version 9.x you can configure an HTTP profile to rewrite URLs so that redirects from an HTTP server specify the HTTPS protocol. To do so, perform the following procedure: * *Log in to the Configuration utility. *Click Local Traffic. *Click Profiles. *Click the Create button. *Type a name for the profile. *Choose http from the Parent Profile drop-down menu. 
*Under Settings, set Redirect Rewrite to All, Matching, or Nodes, depending upon your configuration. For example: o Choose All to rewrite any HTTP 301, 302, 303, 305, or 307 redirects to HTTPS o Choose Matching to rewrite redirects when the path and query URI components of the request and the redirect are identical (except for the trailing slash) o Choose Node to rewrite redirects when the redirect URI contains a node IP address instead of a host name, and you want the system to change it to the virtual server address *Click Finished. You must now associate the new HTTP profile with the ClientSSL virtual server. A: Try this: # Trailing slash problem RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d RewriteRule ^(.+[^/])$ https://<t:sitename/>$1/ [redirect,last] A: LoadModule rewrite_module modules/mod_rewrite.so make sure that line is somewhere in your httpd.conf file A: RewriteEngine on RewriteCond %{REQUEST_URI} ^/lawson/portal$ RewriteRule ^(.*)$ https://superniftyserver.com/lawson/portal/ [R=301,L]
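The regular expressions in the last two answers can be sanity-checked outside Apache. This sketch uses Python's re module only to exercise the pattern logic — Apache's matching context (per-directory prefix stripping, flag handling) differs, and the host name is the placeholder from the question:

```python
import re

# Pattern from the "Try this" answer: match any URI that does not
# already end in a slash.
no_trailing_slash = re.compile(r"^(.+[^/])$")

def rewrite(uri, host="superniftyserver.com"):
    """Append the trailing slash and force https, as the RewriteRule intends."""
    m = no_trailing_slash.match(uri)
    if m:
        return "https://%s%s/" % (host, m.group(1))
    return uri  # already has a trailing slash; leave it alone

print(rewrite("/lawson/portal"))   # https://superniftyserver.com/lawson/portal/
print(rewrite("/lawson/portal/"))  # /lawson/portal/ (unchanged)
```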
{ "language": "en", "url": "https://stackoverflow.com/questions/158848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What are some good pop-up dialog boxes for Ruby on Rails I want to use modal pop-up windows in our web app in Ruby on Rails. Note that I don't want to create a new browser window, but a sub-window inside the existing webpage. We've tried things like TinyBox, but there are issues with returning error information from inside the controller. Any good method or tool that works well in Ruby? A: There's also prototype-window. A: I've never used it myself (not yet at least) but have you tried RedBox? A: I'm investigating ModalBox at the moment and it's looking promising. There's a Google Group and Rails plugin which replaces the basic confirm popup with a modal dialog box. A: Try TopUp! It is developed in a Rails application and you can get it from GitHub. Please note that it is still beta. Feedback is always welcome ;) A: Facebox, the jQuery one that GitHub uses, is the best one. There is also a Prototype version. A: I've used Lightbox Gone Wild for a while now, though I've modified it to display a DIV or other element that's already on the page (though hidden) and then return it to its parent when the box is closed. I've used it to make wizards that guide the user through a process. A: I've used facebox_render for all my Rails projects. It's really easy to use and provides complete helpers. You can easily render HTML or JavaScript in your RESTful controller. A: I have tried several of the ones mentioned above, but after tweaking it a bit I found that http://www.methods.co.nz/popup/popup.html works better for me. The only problem is that you have to create error routing similar to the one Rails uses; when returning to the popup window with errors, the popup window does not have a way to handle it.
{ "language": "en", "url": "https://stackoverflow.com/questions/158851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Creating a file progress bar in PHP Does anyone know of any methods to create a file upload progress bar in PHP? I have often heard that it's impossible. I have one idea, but not sure if it would work: have a normal file upload, but instead submit to an iframe. When this is submitted, store the file information (size and temp location) in the session. At the same time, start an AJAX call every, say, 10 seconds to check the size of the file compared to the size stored in the session. This would return the size to the AJAX call, and then a progress bar would be sized and maybe display the uploaded size to the user. Thoughts? A: You've pretty much figured out how to do it. The main problem is you usually don't have access to the size of the uploaded file until it's done uploading. There are workarounds for this: Enabling APC allows you to access this information if you include a field called "APC_UPLOAD_PROGRESS" and use apc_fetch() for retrieving a cache entry with the status. There's also a plugin called uploadprogress but it's not very well documented and doesn't work on Windows (last I checked anyway). An alternative is to use Flash for doing it. See scripts like FancyUpload. Before APC came along I had to write a CGI script in C that wrote information to a text file. APC seems like a much better way to do it now though. Hope this helps. A: So far, the most common way of doing this is SWFUpload: http://www.swfupload.org/ However, it is possible with pure PHP, just very difficult and very experimental. I'll see if I can find the link. Edit: According to comments on php.net, as of 5.2 there is a hook to handle upload progress.
http://us.php.net/features.file-upload#71564 More explanation: * *http://www.dinke.net/blog/2006/11/04/php-52-upload-progress-meter/en/ *http://blog.liip.ch/archive/2006/09/10/upload-progress-meter-extension-for-php-5-2.html Rasmus' Example: * *http://progphp.com/progress.phps A: You can try YUI or Prototype or jQuery A: From PHP 5.4 it is in the session extension: http://php.net//manual/pl/session.upload-progress.php A: In pure PHP, you are correct: it's not possible. If you AJAX-ify this, then you could do what you're describing. The only progress meters I've ever seen are in JavaScript or Flash, though I imagine Silverlight could do it also. A: "Old school", but a PHP + Perl technique: http://www.raditha.com/php/progress.php A: In my opinion, the best / easiest solution is to build a small Flash widget that consists of an 'Upload' button and a progress bar. Flash gives you very detailed feedback on how much data has been uploaded so far, and you can build a nice progress bar based on that. Doesn't require inefficient polling of the server, and in fact doesn't require any changes at all to your server code. Google for 'flash uploader' and you'll find many people have already written these widgets and are happy to sell them to you for a buck. A: I'd recommend looking at SWFUpload to accomplish what you want. It's fairly flexible and supports queueing of files, so you could even handle multi-file uploads. A: You will definitely want to go with digitgerald's FancyUpload. It's MooTools & SWFUpload based, and it sports a nice queue with statuses, progress, ETA, etc. It's really the slickest method I've seen for uploading files. For my personal use case I've used it to let the client select 1.2 GB of PDF files and upload them. Newer ones get renamed and versioned automatically, identical ones are skipped, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/158853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Knowing whether SubVersion working copy has been updated Is there a way to have a file that is modified / touched whenever the WC is updated to a new revision? Or, as the second-best option, whenever svn update is executed? Here's the motivation: I want to have the SVN revision number inside my executable. So I have to run SubWCRev as part of the build. The output file of SubWCRev is re-created every time, even if the revision number has not changed. This means that the exe is linked on every build, even if nothing has changed. I want it to be linked only as needed. A: * *Get the SubWCRev output into a temporary file *Compare this file to the current revision-number file *Overwrite it with the temp file only if the two are different *Delete the temporary file You might even be able to do this with a .bat file (using fc). Something like... REM ***UNTESTED*** FC temp.rev curr.rev | FIND "FC: no dif" > nul IF NOT ERRORLEVEL 1 COPY /Y temp.rev curr.rev DEL temp.rev Edit: As an aside, you can do this in Mercurial by making the rev-number-file depend on .hg/dirstate. A: This sounds like a duplicate of this discussion here: Embedding SVN Revision number at compile time in a Windows app My approach, described in that question, works across platforms and can output to whatever format you need, so it will work in any situation where including a file is a viable solution. In your case, you want to watch for the M modifier in svnversion's output: that will let you know that your WC has been modified. A: My answer will probably be too short, but might give you some direction. SVN has hooks. They are scripts that get executed every time code is committed. Maybe?
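The compare-then-copy steps in the first answer can also be scripted outside a .bat file. Below is a hedged sketch of the same idea in Python (the file names are the same placeholders the batch example uses); the point is that curr.rev keeps its old timestamp — and the linker stays idle — whenever the revision text has not changed:

```python
import filecmp
import os
import shutil

def replace_if_changed(temp_path, current_path):
    """Overwrite current_path with temp_path only if the contents differ,
    then delete the temporary file. Returns True when a copy happened."""
    same = (os.path.exists(current_path)
            and filecmp.cmp(temp_path, current_path, shallow=False))
    if not same:
        shutil.copyfile(temp_path, current_path)
    os.remove(temp_path)
    return not same
```

As a pre-build step you would run SubWCRev into temp.rev and then call replace_if_changed('temp.rev', 'curr.rev').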
{ "language": "en", "url": "https://stackoverflow.com/questions/158856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Am I subscribing to YUI Menu events improperly? I've read and followed YUI's tutorial for subscribing to Menu events. I also looked through the API and bits of the code for Menu, MenuBar, and Custom Events, but the following refuses to work // oMenuBar is a MenuBar instance with submenus var buyMenu = oMenuBar.getSubmenus()[1]; // this works buyMenu.subscribe('show', onShow, {foo: 'bar'}, false); // using the subscribe method doesn't work buyMenu.subscribe('mouseOver', onMouseOver, {foo: 'bar'}, false); // manually attaching a listener doesn't work YAHOO.util.Event.addListener(buyMenu, 'mouseOver', onMouseOver); // http://developer.yahoo.com/yui/docs/YAHOO.widget.Menu.html#event_keyPressEvent // there is a keyPress Event, but no spelling of it will trigger the handler buyMenu.subscribe('keypress', onShow, {foo: 'bar'}, false); buyMenu.subscribe('keypressed', onShow, {foo: 'bar'}, false); buyMenu.subscribe('keyPressed', onShow, {foo: 'bar'}, false); buyMenu.subscribe('keyPress', onShow, {foo: 'bar'}, false); Functionally, I'm trying to attach a keyPress listener for each submenu of the MenuBar. I do not want to add Bubbling library as a dependency. A: Todd Kloots here, author of the YUI Menu widget. When you are subscribing to DOM-based events, the event name is all lower case. So, for the "mouseover" event, subscribe as follows: buyMenu.subscribe('mouseover', onMouseOver, {foo: 'bar'}, false); Regarding your keypress event handler: you are subscribing correctly. However, remember that any key-related event handlers will only fire if the Menu has focus. So, make sure your Menu has focus before testing your key-related event handlers. Also - I would recommend listening for the "keydown" event rather than "keypress" as not all keys result in the firing of the "keypress" event in IE. If you have any other questions, please direct them to the ydn-javascript Y! Group as I monitor the messages on that group frequently. I hope that helps. 
- Todd A: Based on my testing, the following will work: oMenu.subscribe('keypress', function () { alert("I'm your friendly neighborhood keypress listener.")}); but that only fires when the Menu is receiving the keypress event, so it would need to already have focus. A: Does onShow point to a function? E.g. var onShow = function() { alert("Click!"); }
{ "language": "en", "url": "https://stackoverflow.com/questions/158864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are doubles faster than floats in C#? I'm writing an application which reads large arrays of floats and performs some simple operations with them. I'm using floats, because I thought it'd be faster than doubles, but after doing some research I've found that there's some confusion about this topic. Can anyone elaborate on this? A: If load & store operations are the bottleneck, then floats will be faster, because they're smaller. If you're doing a significant number of calculations between loads and stores, it should be about equal. Someone else mentioned avoiding conversions between float & double, and calculations that use operands of both types. That's good advice, and if you use any math library functions that return doubles (for example), then keeping everything as doubles will be faster. A: I'm writing a ray tracer, and replacing the floats with doubles for my Color class gives me a 5% speedup. Replacing the Vectors floats with doubles is another 5% faster! Pretty cool :) That's with a Core i7 920 A: The short answer is, "use whichever precision is required for acceptable results." Your one guarantee is that operations performed on floating point data are done in at least the highest precision member of the expression. So multiplying two float's is done with at least the precision of float, and multiplying a float and a double would be done with at least double precision. The standard states that "[floating-point] operations may be performed with higher precision than the result type of the operation." Given that the JIT for .NET attempts to leave your floating point operations in the precision requested, we can take a look at documentation from Intel for speeding up our operations. On the Intel platform your floating point operations may be done in an intermediate precision of 80 bits, and converted down to the precision requested. 
From Intel's guide to C++ Floating-point Operations [1] (sorry, only have dead tree), they mention: * *Use a single precision type (for example, float) unless the extra precision obtained through double or long double is required. Greater precision types increase memory size and bandwidth requirements. ... *Avoid mixed data type arithmetic expressions That last point is important, as you can slow yourself down with unnecessary casts to/from float and double, which result in JIT'd code which requests the x87 to cast away from its 80-bit intermediate format in between operations! [1] Yes, it says C++, but the C# standard plus knowledge of the CLR lets us know the information for C++ should be applicable in this instance. A: With 387 FPU arithmetic, float is only faster than double for certain long iterative operations like pow, log, etc (and only if the compiler sets the FPU control word appropriately). With packed SSE arithmetic, it makes a big difference though. A: Matthijs, You are wrong. 32-bit is far more efficient than 16-bit - in modern processors... Perhaps not memory-wise, but in effectiveness 32-bit is the way to go. You really should update your professor to something more "up-to-date". ;) Anyway, to answer the question; float and double have exactly the same performance, at least on my Intel i7 870 (as in theory). Here are my measurements: (I made an "algorithm" that I repeated 10,000,000 times, and then repeated that 300 times, and out of that I made an average.) double ----------------------------- 1 core = 990 ms 4 cores = 340 ms 6 cores = 282 ms 8 cores = 250 ms float ----------------------------- 1 core = 992 ms 4 cores = 340 ms 6 cores = 282 ms 8 cores = 250 ms
This surprised me, but I guess it is due to the nature of my problem. I don't do casts between float and double in the core of the operations, and my computations are mainly adding, multiplying and subtracting. This is on my i7 920, running a 64-bit operating system. A: I just read the "Microsoft .NET Framework-Application Development Foundation 2nd" for the MCTS exam 70-536 and there is a note on page 4 (chapter 1): NOTE Optimizing performance with built-in types The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables. For floating-point operations, Double is the most efficient type because those operations are optimized by hardware. It's written by Tony Northrup. I don't know if he's an authority or not, but I would expect that the official book for the .NET exam should carry some weight. It is of course not a guarantee. I just thought I'd add it to this discussion. A: I profiled a similar question a few weeks ago. The bottom line is that for x86 hardware, there is no significant difference in the performance of floats versus doubles unless you become memory bound, or you start running into cache issues. In that case floats will generally have the advantage because they are smaller. Current Intel CPUs perform all floating point operations in 80-bit-wide registers, so the actual speed of the computation shouldn't vary between floats and doubles. A: This indicates that floats are slightly faster than doubles: http://www.herongyang.com/cs_b/performance.html In general, any time you do a comparison on performance, you should take into account any special cases, like whether using one type requires additional conversions or data massaging. Those add up and can belie generic benchmarks like this. A: Floats should be faster on a 32-bit system, but profile the code to make sure you're optimizing the right thing.
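One part of the discussion above that is easy to verify is the memory-bandwidth point: a single-precision float is half the size of a double, so a memory-bound loop over floats streams half as many bytes. A quick illustration with Python's standard array module (4 and 8 bytes are the typical C float/double sizes; this shows footprint only, not per-operation speed):

```python
from array import array

n = 1_000_000
floats = array("f", [0.0]) * n    # C float: single precision
doubles = array("d", [0.0]) * n   # C double: double precision

print(n * floats.itemsize)    # bytes to pull through the cache for floats
print(n * doubles.itemsize)   # twice as many bytes for the same element count
```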
{ "language": "en", "url": "https://stackoverflow.com/questions/158889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Limiting HTML Input into Text Box How do I limit the types of HTML that a user can input into a textbox? I'm running a small forum using some custom software that I'm beta testing, but I need to know how to limit the HTML input. Any suggestions? A: You didn't state what the forum was built with, but if it's PHP, check out: http://htmlpurifier.org/ Library Features: Whitelist, Removal, Well-formed, Nesting, Attributes, XSS safe, Standards safe A: i'd suggest a slightly alternative approach: * *don't filter incoming user data (beyond prevention of sql injection). user data should be kept as pure as possible. *filter all outgoing data from the database, this is where things like tag stripping, etc.. should happen keeping user data clean allows you more flexibility in how it's displayed. filtering all outgoing data is a good habit to get into (along the never trust data meme). A: Once the text is submitted, you could strip any/all tags that don't match your predefined set using a regex in PHP. It would look something like the following: find open tag (<) if contents != allowed tag, remove tag (from <..>) A: * *Parse the input provides and strip out all html tags that don't match exactly the list you are allowing. This can either be a complex regex, or you can do a stateful iteration through the char[] of the input string building the allowed input string and stripping unwanted attributes on tags like img. *Use a different code system (BBCode, Markdown) *Find some code online that already does this, to use as a basis for your implementation. For example Slashcode must perform this, so look for its implementation in the Perl and use the regexes (that I assume are there) A: Regardless what you use, be sure to be informed of what kind of HTML content can be dangerous. e.g. a < script > tag is pretty obvious, but a < style > tag is just as bad in IE, because it can invoke JScript commands. In fact, any style="..." attribute can invoke script in IE. 
< object > would be one more tag to be wary of. A: PHP comes with a simple function, strip_tags, to strip HTML tags. It allows certain tags to not be stripped. Example #1 strip_tags() example <?php $text = '<p>Test paragraph.</p><!-- Comment --> <a href="#fragment">Other text</a>'; echo strip_tags($text); echo "\n"; // Allow <p> and <a> echo strip_tags($text, '<p><a>'); ?> The above example will output: Test paragraph. Other text <p>Test paragraph.</p> <a href="#fragment">Other text</a> Personally, for a forum I would use BBCode or Markdown because of the amount of support and features provided, such as live preview.
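As a footnote, the whitelist approach the answers above describe can be sketched in a few lines of Python. The tag list here is just an example, and a single regex pass is only a sketch; a real forum should still use a proper sanitizer such as HTML Purifier, since regexes miss malformed markup and attribute-based attacks.

```python
import re

# Hypothetical whitelist for this example.
ALLOWED_TAGS = {"p", "a", "b", "i"}

# Matches an opening or closing tag and captures the tag name.
TAG_RE = re.compile(r"</?([a-zA-Z][a-zA-Z0-9]*)[^>]*>")

def strip_disallowed_tags(html: str) -> str:
    """Remove any tag whose name is not in the whitelist,
    keeping the tag's inner text (like PHP's strip_tags)."""
    def replace(match):
        tag_name = match.group(1).lower()
        return match.group(0) if tag_name in ALLOWED_TAGS else ""
    return TAG_RE.sub(replace, html)

print(strip_disallowed_tags('<p>Hi</p><script>alert(1)</script>'))
```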
{ "language": "en", "url": "https://stackoverflow.com/questions/158893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to make a Windows Forms .NET application display as tray icon? What needs to be done to have your .NET application show up in Window's system tray as icon? And how do you handle mousebutton clicks on said icon? A: You can add the NotifyIcon component from the toolbox onto your main form. This has events such as MouseDoubleClick that you can use to handle various events. Edit: You have to make sure that you set the Icon property to a valid .ico file if you want it to show up properly in the systray. A: First, add a NotifyIcon control to the Form. Then wire up the Notify Icon to do what you want. If you want it to hide to tray on minimize, try this. Private Sub frmMain_Resize(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Resize If Me.WindowState = FormWindowState.Minimized Then Me.ShowInTaskbar = False Else Me.ShowInTaskbar = True End If End Sub Private Sub NotifyIcon1_MouseClick(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles NotifyIcon1.MouseClick Me.WindowState = FormWindowState.Normal End Sub I'll occasionally use the Balloon Text in order to notify a user - that is done as such: Me.NotifyIcon1.ShowBalloonTip(3000, "This is a notification title!!", "This is notification text.", ToolTipIcon.Info) A: To extend Tom's answer, I like to only make the icon visible if the application is minimized. To do this, set Visible = False for NotifyIcon and use the below code. I also have code below to hide the icon during close the prevent the annoying ghost tray icons that persist after application close. 
Private Sub Form_Resize(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Resize If Me.WindowState = FormWindowState.Minimized Then Hide() NotifyIcon1.Visible = True NotifyIcon1.ShowBalloonTip(3000, NotifyIcon1.Text, "Minimized to tray", ToolTipIcon.Info) End If End Sub Private Sub NotifyIcon1_MouseClick(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles NotifyIcon1.MouseClick Show() Me.WindowState = FormWindowState.Normal Me.Activate() NotifyIcon1.Visible = False End Sub Private Sub Form_FormClosing(sender As Object, e As FormClosingEventArgs) Handles Me.FormClosing NotifyIcon1.Visible = False Dim index As Integer While index < My.Application.OpenForms.Count If My.Application.OpenForms(index) IsNot Me Then My.Application.OpenForms(index).Close() End If index += 1 End While End Sub If you want to add a right click menu: VB.NET: How to Make a Right Click Menu for a Tray Icon Per the article (with mods for context): Setting up the Form for hosting the tray icon context menu * *In the Properties set FormBorderStyle to None. *Set ShowInTaskbar as False (because we don't want an icon appearing in taskbar when we right-click the tray icon!). *Set StartPosition to Manual. *Set TopMost to True. *Add a ContextMenuStrip to your new Form, and name it whatever you want. *Add items to the ContextMenuStrip (for this example just add one item called "Exit"). 
The Form code behind will look like this: Private Sub Form_Deactivate(sender As Object, e As EventArgs) Handles Me.Deactivate Me.Close() End Sub Private Sub Form_Load(sender As Object, e As EventArgs) Handles MyBase.Load ContextMenuStrip1.Show(Cursor.Position) Me.Left = ContextMenuStrip1.Left + 1 Me.Top = ContextMenuStrip1.Top + 1 End Sub Private Sub ExitToolStripMenuItem_Click(sender As Object, e As EventArgs) Handles ExitToolStripMenuItem.Click MainForm.NotifyIcon1.Visible = False End End Sub I then change the notifyicon mouse event to this (TrayIconMenuForm is the name of my Form for providing the context menu): Private Sub NotifyIcon1_MouseClick(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles NotifyIcon1.MouseClick Select Case e.Button Case Windows.Forms.MouseButtons.Left Show() Me.WindowState = FormWindowState.Normal Me.Activate() NotifyIcon1.Visible = False Case Windows.Forms.MouseButtons.Right TrayIconMenuForm.Show() 'Shows the Form that is the parent of "traymenu" TrayIconMenuForm.Activate() 'Set the Form to "Active", that means that that will be the "selected" window TrayIconMenuForm.Width = 1 'Set the Form width to 1 pixel, that is needed because later we will set it behind the "traymenu" TrayIconMenuForm.Height = 1 'Set the Form Height to 1 pixel, for the same reason as above Case Else 'Do nothing End Select End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/158895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: MFC File upload How would I upload a file to a webserver using C++ and MFC? We are not using .NET. Would I need to open a socket and do everything myself? If so, where is a good reference to follow? A: You don't want to use direct socket calls. It's hard to get HTTP right this way. The easier way is to use the WinINet APIs. Check out the docs for InternetOpen; this will likely be the first call you make. Functions you will likely need: * *InternetOpen *InternetConnect *HttpOpenRequest *HttpSendRequest *HttpQueryInfo *InternetCloseHandle You can find docs for all of these on MSDN. A: Here is the code I ended up using. I stripped out the error checking and other notification stuff. This does a multi-part form upload. DWORD dwTotalRequestLength; DWORD dwChunkLength; DWORD dwReadLength; DWORD dwResponseLength; CHttpFile* pHTTP = NULL; dwChunkLength = 64 * 1024; void* pBuffer = malloc(dwChunkLength); CFile file; CInternetSession session("sendFile"); CHttpConnection *connection = NULL; try { //Create the multi-part form data that goes before and after the actual file upload. CString strHTTPBoundary = _T("FFF3F395A90B452BB8BEDC878DDBD152"); CString strPreFileData = MakePreFileData(strHTTPBoundary, file.GetFileName()); CString strPostFileData = MakePostFileData(strHTTPBoundary); CString strRequestHeaders = MakeRequestHeaders(strHTTPBoundary); dwTotalRequestLength = strPreFileData.GetLength() + strPostFileData.GetLength() + file.GetLength(); connection = session.GetHttpConnection("www.YOURSITE.com",NULL,INTERNET_DEFAULT_HTTP_PORT); pHTTP = connection->OpenRequest(CHttpConnection::HTTP_VERB_POST, _T("/YOURURL/submit_file.pl")); pHTTP->AddRequestHeaders(strRequestHeaders); pHTTP->SendRequestEx(dwTotalRequestLength, HSR_SYNC | HSR_INITIATE); //Write out the headers and the form variables pHTTP->Write((LPSTR)(LPCSTR)strPreFileData, strPreFileData.GetLength()); //upload the file. dwReadLength = -1; int length = file.GetLength(); //used to calculate percentage complete. 
while (0 != dwReadLength) { dwReadLength = file.Read(pBuffer, dwChunkLength); if (0 != dwReadLength) { pHTTP->Write(pBuffer, dwReadLength); } } file.Close(); //Finish the upload. pHTTP->Write((LPSTR)(LPCSTR)strPostFileData, strPostFileData.GetLength()); pHTTP->EndRequest(HSR_SYNC); //get the response from the server. LPSTR szResponse; CString strResponse; dwResponseLength = pHTTP->GetLength(); while (0 != dwResponseLength ) { szResponse = (LPSTR)malloc(dwResponseLength + 1); szResponse[dwResponseLength] = '\0'; pHTTP->Read(szResponse, dwResponseLength); strResponse += szResponse; free(szResponse); dwResponseLength = pHTTP->GetLength(); } AfxMessageBox(strResponse); //close everything up. pHTTP->Close(); connection->Close(); session.Close(); CString CHelpRequestUpload::MakeRequestHeaders(CString& strBoundary) { CString strFormat; CString strData; strFormat = _T("Content-Type: multipart/form-data; boundary=%s\r\n"); strData.Format(strFormat, strBoundary); return strData; } CString CHelpRequestUpload::MakePreFileData(CString& strBoundary, CString& strFileName) { CString strFormat; CString strData; strFormat = _T("--%s"); strFormat += _T("\r\n"); strFormat += _T("Content-Disposition: form-data; name=\"user\""); strFormat += _T("\r\n\r\n"); strFormat += _T("%s"); strFormat += _T("\r\n"); strFormat += _T("--%s"); strFormat += _T("\r\n"); strFormat += _T("Content-Disposition: form-data; name=\"email\""); strFormat += _T("\r\n\r\n"); strFormat += _T("%s"); strFormat += _T("\r\n"); strFormat += _T("--%s"); strFormat += _T("\r\n"); strFormat += _T("Content-Disposition: form-data; name=\"filename\"; filename=\"%s\""); strFormat += _T("\r\n"); strFormat += _T("Content-Type: audio/x-flac"); strFormat += _T("\r\n"); strFormat += _T("Content-Transfer-Encoding: binary"); strFormat += _T("\r\n\r\n"); strData.Format(strFormat, strBoundary, m_Name, strBoundary, m_Email, strBoundary, strFileName); return strData; } CString CHelpRequestUpload::MakePostFileData(CString& strBoundary) { 
CString strFormat; CString strData; strFormat = _T("\r\n"); strFormat += _T("--%s"); strFormat += _T("\r\n"); strFormat += _T("Content-Disposition: form-data; name=\"submitted\""); strFormat += _T("\r\n\r\n"); strFormat += _T(""); strFormat += _T("\r\n"); strFormat += _T("--%s--"); strFormat += _T("\r\n"); strData.Format(strFormat, strBoundary, strBoundary); return strData; } A: WinInet as suggested. Bear in mind that there are MFC classes that wrap these APIs. If for some reason these APIs aren't flexible engouh for your requirements (e.g. you need to implement connection through a proxy including authentication), give a look at WinHTTP. It's a superset of WinInet (no MFC wrappers though for WinHTTP). A: You could use BITS: http://www.codeproject.com/KB/IP/bitsman.aspx A: You can also use XMLHTTP. Even if you aren't sending XML. Built on WinINet, but a bit easier to use (if you're used to working with COM anyway). See MSDN: http://msdn.microsoft.com/en-us/library/ms759148.aspx A: If you have an ftp server, check out the CFtpConnection class.
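For reference, the multipart/form-data layout that the MakePreFileData/MakePostFileData helpers above assemble can be sketched language-neutrally. This Python sketch uses hypothetical field names and an arbitrary boundary, purely to illustrate the part/boundary structure the C++ code writes byte-for-byte:

```python
def build_multipart_body(boundary, fields, filename, file_bytes):
    """Assemble a multipart/form-data body: one part per form
    field, then the file part, then the closing boundary."""
    parts = []
    for name, value in fields.items():
        parts.append(
            f"--{boundary}\r\n"
            f'Content-Disposition: form-data; name="{name}"\r\n\r\n'
            f"{value}\r\n"
        )
    # File part: headers, blank line, then the raw file bytes follow.
    parts.append(
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    )
    return ("".join(parts).encode("utf-8")
            + file_bytes
            + f"\r\n--{boundary}--\r\n".encode("utf-8"))

body = build_multipart_body("BOUNDARY123", {"user": "alice"}, "notes.txt", b"hello")
```

The Content-Type request header must also carry the same boundary (`multipart/form-data; boundary=...`), which is exactly what MakeRequestHeaders does above.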
{ "language": "en", "url": "https://stackoverflow.com/questions/158908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Cropping an UIImage I've got some code that resizes an image so I can get a scaled chunk of the center of the image - I use this to take a UIImage and return a small, square representation of an image, similar to what's seen in the album view of the Photos app. (I know I could use a UIImageView and adjust the crop mode to achieve the same results, but these images are sometimes displayed in UIWebViews). I've started to notice some crashes in this code and I'm a bit stumped. I've got two different theories and I'm wondering if either is on-base. Theory 1) I achieve the cropping by drawing into an offscreen image context of my target size. Since I want the center portion of the image, I set the CGRect argument passed to drawInRect to something that's larger than the bounds of my image context. I was hoping this was Kosher, but am I instead attempting to draw over other memory that I shouldn't be touching? Theory 2) I'm doing all of this in a background thread. I know there are portions of UIKit that are restricted to the main thread. I was assuming / hoping that drawing to an offscreen view wasn't one of these. Am I wrong? (Oh, how I miss NSImage's drawInRect:fromRect:operation:fraction: method.) 
A: To crop retina images while keeping the same scale and orientation, use the following method in a UIImage category (iOS 4.0 and above): - (UIImage *)crop:(CGRect)rect { if (self.scale > 1.0f) { rect = CGRectMake(rect.origin.x * self.scale, rect.origin.y * self.scale, rect.size.width * self.scale, rect.size.height * self.scale); } CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect); UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation]; CGImageRelease(imageRef); return result; } A: Swift Extension extension UIImage { func crop(var rect: CGRect) -> UIImage { rect.origin.x*=self.scale rect.origin.y*=self.scale rect.size.width*=self.scale rect.size.height*=self.scale let imageRef = CGImageCreateWithImageInRect(self.CGImage, rect) let image = UIImage(CGImage: imageRef, scale: self.scale, orientation: self.imageOrientation)! return image } } A: You can make a UIImage category and use it wherever you need. Based on HitScans response and comments bellow it. @implementation UIImage (Crop) - (UIImage *)crop:(CGRect)rect { rect = CGRectMake(rect.origin.x*self.scale, rect.origin.y*self.scale, rect.size.width*self.scale, rect.size.height*self.scale); CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], rect); UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation]; CGImageRelease(imageRef); return result; } @end You can use it this way: UIImage *imageToCrop = <yourImageToCrop>; CGRect cropRect = <areaYouWantToCrop>; //for example //CGRectMake(0, 40, 320, 100); UIImage *croppedImage = [imageToCrop crop:cropRect]; A: Swift 3 version func cropImage(imageToCrop:UIImage, toRect rect:CGRect) -> UIImage{ let imageRef:CGImage = imageToCrop.cgImage!.cropping(to: rect)! let cropped:UIImage = UIImage(cgImage:imageRef) return cropped } let imageTop:UIImage = UIImage(named:"one.jpg")! 
// add validation with help of this bridge function CGRectMake -> CGRect (credit to this answer by @rob mayoff): func CGRectMake(_ x: CGFloat, _ y: CGFloat, _ width: CGFloat, _ height: CGFloat) -> CGRect { return CGRect(x: x, y: y, width: width, height: height) } The usage is: if var image:UIImage = UIImage(named:"one.jpg"){ let croppedImage = cropImage(imageToCrop: image, toRect: CGRectMake( image.size.width/4, 0, image.size.width/2, image.size.height) ) } Output: (cropped image shown in the original answer) A: Here is my UIImage crop implementation which obeys the imageOrientation property. All orientations were thoroughly tested. inline double rad(double deg) { return deg / 180.0 * M_PI; } UIImage* UIImageCrop(UIImage* img, CGRect rect) { CGAffineTransform rectTransform; switch (img.imageOrientation) { case UIImageOrientationLeft: rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -img.size.height); break; case UIImageOrientationRight: rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -img.size.width, 0); break; case UIImageOrientationDown: rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -img.size.width, -img.size.height); break; default: rectTransform = CGAffineTransformIdentity; }; rectTransform = CGAffineTransformScale(rectTransform, img.scale, img.scale); CGImageRef imageRef = CGImageCreateWithImageInRect([img CGImage], CGRectApplyAffineTransform(rect, rectTransform)); UIImage *result = [UIImage imageWithCGImage:imageRef scale:img.scale orientation:img.imageOrientation]; CGImageRelease(imageRef); return result; } A: Heads up: all these answers assume a CGImage-backed image object. image.CGImage can return nil, if the UIImage is backed by a CIImage, which would be the case if you created this image using a CIFilter. In that case, you might have to draw the image in a new context, and return that image (slow). 
UIImage* crop(UIImage *image, CGRect rect) { UIGraphicsBeginImageContextWithOptions(rect.size, NO, [image scale]); [image drawAtPoint:CGPointMake(-rect.origin.x, -rect.origin.y)]; UIImage *cropped_image = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return cropped_image; } A: Best solution for cropping a UIImage in Swift, in terms of precision and pixel scaling: private func squareCropImageToSideLength(let sourceImage: UIImage, let sideLength: CGFloat) -> UIImage { // input size comes from image let inputSize: CGSize = sourceImage.size // round up side length to avoid fractional output size let sideLength: CGFloat = ceil(sideLength) // output size has sideLength for both dimensions let outputSize: CGSize = CGSizeMake(sideLength, sideLength) // calculate scale so that smaller dimension fits sideLength let scale: CGFloat = max(sideLength / inputSize.width, sideLength / inputSize.height) // scaling the image with this scale results in this output size let scaledInputSize: CGSize = CGSizeMake(inputSize.width * scale, inputSize.height * scale) // determine point in center of "canvas" let center: CGPoint = CGPointMake(outputSize.width/2.0, outputSize.height/2.0) // calculate drawing rect relative to output Size let outputRect: CGRect = CGRectMake(center.x - scaledInputSize.width/2.0, center.y - scaledInputSize.height/2.0, scaledInputSize.width, scaledInputSize.height) // begin a new bitmap context, scale 0 takes display scale UIGraphicsBeginImageContextWithOptions(outputSize, true, 0) // optional: set the interpolation quality. 
// For this you need to grab the underlying CGContext let ctx: CGContextRef = UIGraphicsGetCurrentContext() CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh) // draw the source image into the calculated rect sourceImage.drawInRect(outputRect) // create new image from bitmap context let outImage: UIImage = UIGraphicsGetImageFromCurrentImageContext() // clean up UIGraphicsEndImageContext() // pass back new image return outImage } Instructions used to call this function: let image: UIImage = UIImage(named: "Image.jpg")! let squareImage: UIImage = self.squareCropImageToSideLength(image, sideLength: 320) self.myUIImageView.image = squareImage Note: the initial source code inspiration written in Objective-C has been found on "Cocoanetics" blog. A: None of the answers here handle all of the scale and rotation issues 100% correctly. Here's a synthesis of everything said so far, up-to-date as of iOS7/8. It's meant to be included as a method in a category on UIImage. - (UIImage *)croppedImageInRect:(CGRect)rect { double (^rad)(double) = ^(double deg) { return deg / 180.0 * M_PI; }; CGAffineTransform rectTransform; switch (self.imageOrientation) { case UIImageOrientationLeft: rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -self.size.height); break; case UIImageOrientationRight: rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -self.size.width, 0); break; case UIImageOrientationDown: rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -self.size.width, -self.size.height); break; default: rectTransform = CGAffineTransformIdentity; }; rectTransform = CGAffineTransformScale(rectTransform, self.scale, self.scale); CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], CGRectApplyAffineTransform(rect, rectTransform)); UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation]; CGImageRelease(imageRef); return 
result; } A: Below code snippet might help. import UIKit extension UIImage { func cropImage(toRect rect: CGRect) -> UIImage? { if let imageRef = self.cgImage?.cropping(to: rect) { return UIImage(cgImage: imageRef) } return nil } } A: Swift version of awolf's answer, which worked for me: public extension UIImage { func croppedImage(inRect rect: CGRect) -> UIImage { let rad: (Double) -> CGFloat = { deg in return CGFloat(deg / 180.0 * .pi) } var rectTransform: CGAffineTransform switch imageOrientation { case .left: let rotation = CGAffineTransform(rotationAngle: rad(90)) rectTransform = rotation.translatedBy(x: 0, y: -size.height) case .right: let rotation = CGAffineTransform(rotationAngle: rad(-90)) rectTransform = rotation.translatedBy(x: -size.width, y: 0) case .down: let rotation = CGAffineTransform(rotationAngle: rad(-180)) rectTransform = rotation.translatedBy(x: -size.width, y: -size.height) default: rectTransform = .identity } rectTransform = rectTransform.scaledBy(x: scale, y: scale) let transformedRect = rect.applying(rectTransform) let imageRef = cgImage!.cropping(to: transformedRect)! let result = UIImage(cgImage: imageRef, scale: scale, orientation: imageOrientation) return result } } A: Update 2014-05-28: I wrote this when iOS 3 or so was the hot new thing, I'm certain there are better ways to do this by now, possibly built-in. As many people have mentioned, this method doesn't take rotation into account; read some additional answers and spread some upvote love around to keep the responses to this question helpful for everyone. Original response: I'm going to copy/paste my response to the same question elsewhere: There isn't a simple class method to do this, but there is a function that you can use to get the desired results: CGImageCreateWithImageInRect(CGImageRef, CGRect) will help you out. 
Here's a short example using it: CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect); // or use the UIImage wherever you like [UIImageView setImage:[UIImage imageWithCGImage:imageRef]]; CGImageRelease(imageRef); A: Looks a little bit strange but works great and takes into consideration image orientation: var image:UIImage = ... let img = CIImage(image: image)!.imageByCroppingToRect(rect) image = UIImage(CIImage: img, scale: 1, orientation: image.imageOrientation) A: swift3 extension UIImage { func crop(rect: CGRect) -> UIImage? { var scaledRect = rect scaledRect.origin.x *= scale scaledRect.origin.y *= scale scaledRect.size.width *= scale scaledRect.size.height *= scale guard let imageRef: CGImage = cgImage?.cropping(to: scaledRect) else { return nil } return UIImage(cgImage: imageRef, scale: scale, orientation: imageOrientation) } } A: CGSize size = [originalImage size]; int padding = 20; int pictureSize = 300; int startCroppingPosition = 100; if (size.height > size.width) { pictureSize = size.width - (2.0 * padding); startCroppingPosition = (size.height - pictureSize) / 2.0; } else { pictureSize = size.height - (2.0 * padding); startCroppingPosition = (size.width - pictureSize) / 2.0; } // WTF: Don't forget that the CGImageCreateWithImageInRect believes that // the image is 180 rotated, so x and y are inverted, same for height and width. CGRect cropRect = CGRectMake(startCroppingPosition, padding, pictureSize, pictureSize); CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage], cropRect); UIImage *newImage = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:originalImage.imageOrientation]; [m_photoView setImage:newImage]; CGImageRelease(imageRef); Most of the responses I've seen only deals with a position of (0, 0) for (x, y). Ok that's one case but I'd like my cropping operation to be centered. What took me a while to figure out is the line following the WTF comment. 
Let's take the case of an image captured with a portrait orientation: * *The original image height is higher than its width (Woo, no surprise so far!) *The image that the CGImageCreateWithImageInRect method imagines in its own world is not really a portrait though but a landscape (That is also why if you don't use the orientation argument in the imageWithCGImage constructor, it will show up as 180 rotated). *So, you should kind of imagine that it is a landscape, the (0, 0) position being the top right corner of the image. Hope it makes sense! If it does not, try different values you'll see that the logic is inverted when it comes to choosing the right x, y, width, and height for your cropRect. A: - (UIImage *)getSubImage:(CGRect) rect{ CGImageRef subImageRef = CGImageCreateWithImageInRect(self.CGImage, rect); CGRect smallBounds = CGRectMake(rect.origin.x, rect.origin.y, CGImageGetWidth(subImageRef), CGImageGetHeight(subImageRef)); UIGraphicsBeginImageContext(smallBounds.size); CGContextRef context = UIGraphicsGetCurrentContext(); CGContextDrawImage(context, smallBounds, subImageRef); UIImage* smallImg = [UIImage imageWithCGImage:subImageRef]; UIGraphicsEndImageContext(); return smallImg; } A: (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize { double ratio; double delta; CGPoint offset; //make a new square size, that is the resized imaged width CGSize sz = CGSizeMake(newSize.width, newSize.width); //figure out if the picture is landscape or portrait, then //calculate scale factor and offset if (image.size.width > image.size.height) { ratio = newSize.width / image.size.width; delta = (ratio*image.size.width - ratio*image.size.height); offset = CGPointMake(delta/2, 0); } else { ratio = newSize.width / image.size.height; delta = (ratio*image.size.height - ratio*image.size.width); offset = CGPointMake(0, delta/2); } //make the final clipping rect based on the calculated values CGRect clipRect = CGRectMake(-offset.x, -offset.y, (ratio * 
image.size.width) + delta, (ratio * image.size.height) + delta); //start a new context, with scale factor 0.0 so retina displays get //high quality image if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) { UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0); } else { UIGraphicsBeginImageContext(sz); } UIRectClip(clipRect); [image drawInRect:clipRect]; UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage; } A: On iOS9.2SDK ,I use below method to convert frame from UIView to UIImage -(UIImage *)getNeedImageFrom:(UIImage*)image cropRect:(CGRect)rect { CGSize cropSize = rect.size; CGFloat widthScale = image.size.width/self.imageViewOriginal.bounds.size.width; CGFloat heightScale = image.size.height/self.imageViewOriginal.bounds.size.height; cropSize = CGSizeMake(rect.size.width*widthScale, rect.size.height*heightScale); CGPoint pointCrop = CGPointMake(rect.origin.x*widthScale, rect.origin.y*heightScale); rect = CGRectMake(pointCrop.x, pointCrop.y, cropSize.width, cropSize.height); CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect); UIImage *croppedImage = [UIImage imageWithCGImage:subImage]; CGImageRelease(subImage); return croppedImage; } A: Swift 2.0 Update (CIImage compatibility) Expanding off of Maxim's Answer but works if your image is CIImage based, as well. public extension UIImage { func imageByCroppingToRect(rect: CGRect) -> UIImage? { if let image = CGImageCreateWithImageInRect(self.CGImage, rect) { return UIImage(CGImage: image) } else if let image = (self.CIImage)?.imageByCroppingToRect(rect) { return UIImage(CIImage: image) } return nil } } A: Here's an updated Swift 3 version based on Noodles answer func cropping(to rect: CGRect) -> UIImage? { if let cgCrop = cgImage?.cropping(to: rect) { return UIImage(cgImage: cgCrop) } else if let ciCrop = ciImage?.cropping(to: rect) { return UIImage(ciImage: ciCrop) } return nil } A: Follow Answer of @Arne. 
I just reworked it into a category method; put it in a category on UIImage. -(UIImage*)cropImage:(CGRect)rect{ UIGraphicsBeginImageContextWithOptions(rect.size, false, [self scale]); [self drawAtPoint:CGPointMake(-rect.origin.x, -rect.origin.y)]; UIImage* cropped_image = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return cropped_image; } A: Swift 5: extension UIImage { func cropped(rect: CGRect) -> UIImage? { guard let cgImage = cgImage else { return nil } UIGraphicsBeginImageContextWithOptions(rect.size, false, 0) let context = UIGraphicsGetCurrentContext() context?.translateBy(x: 0.0, y: self.size.height) context?.scaleBy(x: 1.0, y: -1.0) context?.draw(cgImage, in: CGRect(x: rect.minX, y: rect.minY, width: self.size.width, height: self.size.height), byTiling: false) let croppedImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return croppedImage } } A: I wasn't satisfied with other solutions because they either draw several times (using more power than necessary) or have problems with orientation. Here is what I used for a scaled square croppedImage from a UIImage * image. CGFloat minimumSide = fminf(image.size.width, image.size.height); CGFloat finalSquareSize = 600.; //create new drawing context for right size CGRect rect = CGRectMake(0, 0, finalSquareSize, finalSquareSize); CGFloat scalingRatio = finalSquareSize/minimumSide; UIGraphicsBeginImageContext(rect.size); //draw [image drawInRect:CGRectMake((minimumSide - image.size.width)*scalingRatio/2., (minimumSide - image.size.height)*scalingRatio/2., image.size.width*scalingRatio, image.size.height*scalingRatio)]; UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); A: I use the method below. 
-(UIImage *)getNeedImageFrom:(UIImage*)image cropRect:(CGRect)rect { CGSize cropSize = rect.size; CGFloat widthScale = image.size.width/self.imageViewOriginal.bounds.size.width; CGFloat heightScale = image.size.height/self.imageViewOriginal.bounds.size.height; cropSize = CGSizeMake(rect.size.width*widthScale, rect.size.height*heightScale); CGPoint pointCrop = CGPointMake(rect.origin.x*widthScale, rect.origin.y*heightScale); rect = CGRectMake(pointCrop.x, pointCrop.y, cropSize.width, cropSize.height); CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect); UIImage *croppedImage = [UIImage imageWithCGImage:subImage]; CGImageRelease(subImage); return croppedImage; } A: Look at https://github.com/vvbogdan/BVCropPhoto - (UIImage *)croppedImage { CGFloat scale = self.sourceImage.size.width / self.scrollView.contentSize.width; UIImage *finalImage = nil; CGRect targetFrame = CGRectMake((self.scrollView.contentInset.left + self.scrollView.contentOffset.x) * scale, (self.scrollView.contentInset.top + self.scrollView.contentOffset.y) * scale, self.cropSize.width * scale, self.cropSize.height * scale); CGImageRef contextImage = CGImageCreateWithImageInRect([[self imageWithRotation:self.sourceImage] CGImage], targetFrame); if (contextImage != NULL) { finalImage = [UIImage imageWithCGImage:contextImage scale:self.sourceImage.scale orientation:UIImageOrientationUp]; CGImageRelease(contextImage); } return finalImage; } - (UIImage *)imageWithRotation:(UIImage *)image { if (image.imageOrientation == UIImageOrientationUp) return image; CGAffineTransform transform = CGAffineTransformIdentity; switch (image.imageOrientation) { case UIImageOrientationDown: case UIImageOrientationDownMirrored: transform = CGAffineTransformTranslate(transform, image.size.width, image.size.height); transform = CGAffineTransformRotate(transform, M_PI); break; case UIImageOrientationLeft: case UIImageOrientationLeftMirrored: transform = CGAffineTransformTranslate(transform, image.size.width, 
0); transform = CGAffineTransformRotate(transform, M_PI_2); break; case UIImageOrientationRight: case UIImageOrientationRightMirrored: transform = CGAffineTransformTranslate(transform, 0, image.size.height); transform = CGAffineTransformRotate(transform, -M_PI_2); break; case UIImageOrientationUp: case UIImageOrientationUpMirrored: break; } switch (image.imageOrientation) { case UIImageOrientationUpMirrored: case UIImageOrientationDownMirrored: transform = CGAffineTransformTranslate(transform, image.size.width, 0); transform = CGAffineTransformScale(transform, -1, 1); break; case UIImageOrientationLeftMirrored: case UIImageOrientationRightMirrored: transform = CGAffineTransformTranslate(transform, image.size.height, 0); transform = CGAffineTransformScale(transform, -1, 1); break; case UIImageOrientationUp: case UIImageOrientationDown: case UIImageOrientationLeft: case UIImageOrientationRight: break; } // Now we draw the underlying CGImage into a new context, applying the transform // calculated above. CGContextRef ctx = CGBitmapContextCreate(NULL, image.size.width, image.size.height, CGImageGetBitsPerComponent(image.CGImage), 0, CGImageGetColorSpace(image.CGImage), CGImageGetBitmapInfo(image.CGImage)); CGContextConcatCTM(ctx, transform); switch (image.imageOrientation) { case UIImageOrientationLeft: case UIImageOrientationLeftMirrored: case UIImageOrientationRight: case UIImageOrientationRightMirrored: // Grr... CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.height, image.size.width), image.CGImage); break; default: CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage); break; } // And now we just create a new UIImage from the drawing context CGImageRef cgimg = CGBitmapContextCreateImage(ctx); UIImage *img = [UIImage imageWithCGImage:cgimg]; CGContextRelease(ctx); CGImageRelease(cgimg); return img; } A: Swift 5.0 update public extension UIImage { func cropped(rect: CGRect) -> UIImage? 
{ if let image = self.cgImage!.cropping(to: rect) { return UIImage(cgImage: image) } else if let image = (self.ciImage)?.cropped(to: rect) { return UIImage(ciImage: image) } return nil } }
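The trickiest part of the first snippet is the scale math that maps a crop rect from view coordinates onto the full-resolution image. As a language-neutral illustration (a sketch in Python, not from any of the answers above; the function name and tuple-based rects are invented for this example), the same mapping looks like:

```python
def view_rect_to_image_rect(rect, view_size, image_size):
    """Map a crop rect from view coordinates to image-pixel coordinates.

    rect is (x, y, w, h); view_size and image_size are (width, height).
    This mirrors the widthScale/heightScale arithmetic in the
    Objective-C snippet above.
    """
    x, y, w, h = rect
    width_scale = image_size[0] / view_size[0]
    height_scale = image_size[1] / view_size[1]
    return (x * width_scale, y * height_scale,
            w * width_scale, h * height_scale)
```

For example, a 50x50 selection at (10, 20) in a 100x100 view over a 200x200 image becomes a 100x100 crop at (20, 40) in image pixels.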
{ "language": "en", "url": "https://stackoverflow.com/questions/158914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "218" }
Q: Easiest way to migrate Word 2003 custom macro toolbars into Word 2007? I have a series of macros and toolbars that I developed for Word 2003. Now that my office is upgrading to Word 2007, I need to migrate them. The macros themselves migrate with zero effort, but the toolbars are a different issue. A random subset of the toolbars shows up in the "Add-Ins" ribbon tab, but I haven't found a way to control which ones. Something that may be a complication is that I deploy the macros by placing a template into a user's Word STARTUP folder (C:\Documents and Settings\username\Application Data\Microsoft\Word\STARTUP). While I can add macros from normal.dot into the Quick Access Toolbar, I cannot add macros from this startup template. I'd like a better, more structured layout anyway. So, what's the easiest way to replicate my custom macro toolbars in Word 2007? A: The macros and toolbars that I developed for Word 2003 are in a number of .dot files. I simply put these .dot files into my Startup folder. I restarted Word '07 and voilà, these macro toolbars appeared in the Add-Ins ribbon. Good luck
{ "language": "en", "url": "https://stackoverflow.com/questions/158930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Find top 3 closest targets in Actionscript 3 I have an array of characters that are Points and I want to take any character and be able to loop through that array and find the top 3 closest (using Point.distance) neighbors. Could anyone give me an idea of how to do this? A: This is a new and improved version of the code I posted last night. It's composed of two classes, the PointTester and the TestCase. This time around I was able to test it too! We start with the TestCase.as package { import flash.geom.Point; import flash.display.Sprite; public class TestCase extends Sprite { public function TestCase() { // some data to test with var pointList:Array = new Array(); pointList.push(new Point(0, 0)); pointList.push(new Point(0, 0)); pointList.push(new Point(0, 0)); pointList.push(new Point(1, 2)); pointList.push(new Point(9, 9)); // the point we want to test against var referencePoint:Point = new Point(10, 10); var resultPoints:Array = PointTester.findClosest(referencePoint, pointList, 3); trace("referencePoint is at", referencePoint.x, referencePoint.y); for each(var result:Object in resultPoints) { trace("Point is at:", result.point.x, ", ", result.point.y, " that's ", result.distance, " units away"); } } } } And this would be PointTester.as package { import flash.geom.Point; public class PointTester { public static function findClosest(referencePoint:Point, pointList:Array, maxCount:uint = 3):Array{ // this array will hold the results var resultList:Array = new Array(); // loop over each point in the test data for each (var testPoint:Point in pointList) { // we store the distance between the two in a temporary variable var tempDistance:Number = getDistance(testPoint, referencePoint); // if the list is shorter than the maximum length we don't need to do any distance checking // if it's longer we compare the distance to the last point in the list, if it's closer we add it if (resultList.length <= maxCount || tempDistance < resultList[resultList.length - 
1].distance) { // we store the testing point and its distance to the reference point in an object var tmpObject:Object = { distance : tempDistance, point : testPoint }; // and push that onto the array resultList.push(tmpObject); // then we sort the array, this way we don't need to compare the distance to any other point than // the last one in the list resultList.sortOn("distance", Array.NUMERIC ); // and we make sure the list is kept at the proper number of entries while (resultList.length > maxCount) resultList.pop(); } } return resultList; } public static function getDistance(point1:Point, point2:Point):Number { var x:Number = point1.x - point2.x; var y:Number = point1.y - point2.y; return Math.sqrt(x * x + y * y); } } } A: It might be worth mentioning that, if the number of points is large enough for performance to be important, then the goal could be achieved more quickly by keeping two lists of points, one sorted by X and the other by Y. One could then find the closest 3 points in O(log n) time rather than O(n) time by looping through every point. A: If you use grapefrukt's solution you can change the getDistance method to return x*x + y*y; instead of return Math.sqrt( x * x + y * y );
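The accepted answer's strategy — score every point by its distance, keep a list sorted on distance, trim to the top N — is language-agnostic. A minimal sketch of the same idea in Python (illustrative only; the function name is invented, and `math.dist` needs Python 3.8+):

```python
import math

def find_closest(reference, points, max_count=3):
    """Return (distance, point) pairs for the max_count nearest points.

    Same approach as the ActionScript PointTester: compute each
    distance, sort on it, and truncate the list to max_count entries.
    """
    scored = [(math.dist(reference, p), p) for p in points]
    scored.sort(key=lambda pair: pair[0])
    return scored[:max_count]
```

The second answer's optimization also applies here: for ranking alone you can sort on the squared distance and skip the square root entirely.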
{ "language": "en", "url": "https://stackoverflow.com/questions/158933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why can't I parse a SimpleDateFormat with pattern "MMMMM dd" in Java? I need to parse a string like "February 12, 1981" as a Date. I use SimpleDateFormat. But if I do: new SimpleDateFormat("MMMMM dd, yyyy").parse("February 12, 1981") I get java.text.ParseException. I tried to reduce it to see where the problem is. First: new SimpleDateFormat("MMMMM").parse("February") works. Then: new SimpleDateFormat("MMMMM dd").parse("February 12") doesn't work anymore. Anyone know why? I also tried new SimpleDateFormat("MMMMM' 'dd"). I'm using JRE 1.6.0_06. A: What version of JDK/JRE are you using? This works fine for me with 1.4.2_14, 1.5.0_16, and 1.6.0_07: SimpleDateFormat df = new SimpleDateFormat("MMMMM dd, yyyy"); Date parsed = df.parse("February 12, 1981"); System.out.println(parsed); output: Thu Feb 12 00:00:00 EST 1981
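As a cross-check that the pattern and input themselves are compatible, here is the equivalent parse expressed in Python (an illustration, not from the answer; `%B` is the locale-dependent full month name, roughly Java's "MMMMM" under an English locale):

```python
from datetime import datetime

# Rough equivalent of new SimpleDateFormat("MMMMM dd, yyyy")
#     .parse("February 12, 1981") under an English locale.
parsed = datetime.strptime("February 12, 1981", "%B %d, %Y")
```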
{ "language": "en", "url": "https://stackoverflow.com/questions/158935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Resetting Buffers in Vim Is it possible to reset the alternate buffer in a vim session to what it was previously? By alternate buffer, I mean the one that is referred to by #, i.e. the one that is displayed when you enter cntl-^. Say I've got two files open main.c and other.c and :ls gives me: 1 %a "main.c" lines 27 2 # "other.c" lines 56 Say I open another file, e.g. refer.c, :ls will now give me: 1 %a "main.c" lines 27 2 "other.c" lines 56 3 # "refer.c" lines 125 If I delete the buffer containing refer.c, :ls now shows: 1 %a "main.c" lines 27 2 "other.c" lines 56 But if I do a cntl-^, refer.c will be displayed again! Is there some way to get vim to reset the alternate buffer back to what it last was automatically? A "history" of alternate buffers? Or am I stuck with doing a :2 b to reload other.c into the alternate buffer? Or maybe there is a good reason for this behaviour? A: In this case, "alternate" just means "previous". So, yes, :b2 (or 2 ctrl-6) is probably the easiest way to change which two buffers will be toggled by ctrl-6. Also, take a look at the :keepalt command. A: As you'll come to expect with Vim, there is an excellent reason for this behaviour. :bd (mnemonic for buffer delete) does not delete the buffer, per se, it deletes it from the main buffer list! If you try :ls! or :buffers! you will see it is still available, but with a u adjacent to its buffer number, indicating it is now "unlisted" (that is, unlisted unless you list it with an exclamation mark!). I'm making it sound as horrible as possible, but as with most of Vim it works once you understand it, and the use of the exclamation mark / bang to force the command is consistent. To get rid of the buffer completely you need to wipe it using :bw. When you have done that you will still have the same problem, but this time, attempting to switch to the alternate buffer with CTRL-^ will elicit No alternate file (because this time it really has gone).
To switch to the file you want, yes, use the buffer number: :b2, or whatever the buffer number is of the file you want, and that will establish a new alternate buffer. I find it's easy to remember buffer numbers or look them up with :buffers or :buffers! really quickly, and of course changing to them is then quick, but of course there's a range of techniques in Vim for changing buffers, especially including marks. You've also discovered another great Vim feature here, the unlisted buffers. When you're dealing with a few extra files it's sometimes helpful to "delete" them from the :buffers list using :bd, just to get them out of sight, but although hidden they're not unavailable, and you can check which one you want with :buffers! and then :b<num> to pull it up, without having to undelete it or anything.
{ "language": "en", "url": "https://stackoverflow.com/questions/158940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: TFS checkin RSS feed How can I generate an RSS feed of Team Foundation Server commit messages? In Visual Studio's Source Control Explorer, the "View History" option produces a nice GUI view. Likewise, the command line tf history /recursive /stopafter:40 . produces a nice GUI view. I'd like an RSS feed that would supply the same, or similar, information -- or even a way to wring some text out of TFS that I can reconstitute into RSS. This is similar to the question, Sending SVN commits to an RSS feed, but I do have control over the private TFS repository. A: One of the links from Grant's answer seems even better than the original page. It contains source code for an ASPX file that generates an RSS feed of TFS checkins and returns information about the most recent N checkins: http://blogs.msdn.com/jefflu/archive/2005/07/27/443900.aspx I haven't tried it out yet, and it doesn't appear to include the checkin comment, which is the most important part from my perspective. A: http://blogs.msdn.com/abhinaba/archive/2005/12/21/506277.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/158941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What are the legalities of repackaging others' RSS feeds into a new presentation? I know that services like my.yahoo.com allow you to add content from RSS feeds to your personal page, but in general they are links which draw the user to the site which provided the feed. What are the legalities and implications of using RSS feeds as a data source for a site which repackages the data so that it is unrecognizable as having come from said source? Does credit need to be given? Is it a copyright violation? What is ethical? What if credit is stated? Does this change your opinion? Does permission need to be granted? A: Of course it's ethical! What on earth is RSS for if not for syndication, into as many varied and wonderful forms as developers can think up? Permission, of course, must be asked for - in the form of a "GET /feed/ HTTP/1.0". And it must be granted in the form of a "200 OK" - or denied in the form of a "403 Forbidden". Screen scraping is at least morally ambiguous, since perhaps the author only wants humans, and not programs, to view the content (assuming you believe it's within the rights of the author to make that distinction). But RSS? Seriously? No one forces anyone to make a syndicated, easily-mungable format of their content. It's not just useful for new presentations, it's meant for it. A: In my opinion it depends on the data source company as to whether they allow it in their terms and conditions. It probably also depends on where your servers are located (i.e., which legal framework they fall under). Unless it is allowed explicitly or you have written consent I don't think it's ethical. It also depends on how big your legal department is. A: Well, legalities aside, it isn't ethical to not give credit to the source. The AP, for example, wants credit. A: I would say publishing someone else's work without giving them credit will definitely lead to lawsuits or at least strongly worded cease and desist letters (followed by lawsuits).
A: The difference between what you are proposing and services like my.yahoo.com, Netvibes, Bloglines, Google Reader, etc., is that you are the one choosing the feeds, whereas with those other services the user is specifying the feed, and is therefore aware of its original source. Even though content is being published in feeds, and is therefore expected to be used with services like the ones I mentioned above, the publisher still retains the copyright over their content, and would usually expect it to be republished as-is. It is also customary to provide the link back to the original source of the content, and republishing content without it would be frowned upon at the very least. A: I've wondered the same thing for a while and am very hesitant to republish RSS feeds. FeedForAll says there is no inherent right to reproduce content. You're asking whether it's ok to mangle the content; I'm pretty sure it's not alright to even reproduce the content. I think it would be like putting <iframe src='www.stackoverflow.com'> </iframe> on my website. BTW, this is not a subjective question, and it is important. I'd re-ask this question or edit the title and get more relevant feedback. A: Talk to your lawyer. A: From AP's RSS site... AP provides these RSS feeds to individuals for personal, noncommercial use under the following terms and conditions. All others, including AP members or Press Association subscribers, must obtain express written permission prior to use of these RSS feeds. AP provides these RSS feeds at no charge to you for your personal, noncommercial use. You agree not to associate the RSS feeds with any content that might harm the reputation of The Associated Press. AP provides this content "as is" and AP shall not be held liable for your use of the information or the feeds. TO THE FULLEST EXTENT ALLOWED, AP DISCLAIMS ALL WARRANTIES INCLUDING WARRANTIES FOR MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A PARTICULAR PURPOSE.
You agree to use the RSS feeds only to provide headlines, each with a functional link to the associated AP story that shall display the full content immediately (e.g., no jump pages or other intermediate or interstitial pages). You further agree not to frame or otherwise control the browser window (if any) in which the AP content opens, including limiting the size or position of such window. You agree to provide proper attribution to The Associated Press in reasonable proximity to your use of the RSS feed(s), and you agree that you will not modify the format or branding of the headlines, digests and other information provided in the RSS feeds. The RSS feeds may not be spliced into or otherwise redistributed by third-party RSS providers. No content, including any advertisements or other promotional content, shall be added to the RSS feeds. AP reserves the right to object to your presentation of the RSS feeds and the right to require you to cease using the RSS feeds at any time. AP further reserves the right to terminate its distribution of the RSS feeds or change the content or formatting of the RSS feeds at any time without notice to you. By accessing the RSS feeds or the XML instructions provided herein, you indicate that you understand and agree to these terms and conditions. Note: If you do not qualify to use the RSS feeds under this license or are an AP member or Press Association subscriber and wish to use these feeds, please contact AP Digital. A: From Reuters RSS site... Reuters offers RSS as a free service to any individual user or non-profit organization, subject to the following terms and conditions: Use will be for non-commercial purposes. Use is limited to platforms in which a functional link is made available allowing immediate display of the full article or video on the Reuters.com platform, as specified in the feed. Use is accompanied by proper attribution to Reuters as the source.
By accessing our RSS service you are indicating your understanding and agreement that you will not use Reuters RSS in contravention of the above conditions. Reuters reserves the right to discontinue this service at any time and further reserves the right to request the immediate cessation of any specific use of its RSS service. If you would like Reuters news for your commercial website, please visit about.reuters.com/media.
{ "language": "en", "url": "https://stackoverflow.com/questions/158943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: When should an oldschool flash developer use flex? What are the key differences between Flash and Flex? I have over five years experience with flash and feel very comfortable developing with it and ActionScript3. I find myself more and more curious about Flex and want to know when it is best to use flash or flex. Also, is everything that can be done with MXML, able to be done with AS3? I have a strong understanding of AS3 and OOP and would like to know the differences between using AS3 and MXML in Flex. A: It depends on what kind of applications you are developing now with Flash. I have been a Flash developer (mainly applications) for 7 years. I must honestly say that I was extremely glad when Flex 2 was released because it had the component framework (good components, layout managers, ...) I did not have in Flash. This is IMO the biggest difference between Flash and Flex (or the Flex framework). MXML is a real blessing, especially when using data binding. In the end, everything is compiled down to ActionScript (check the -keep compiler option), but MXML just saves you so much time. A: Flex is great if you quickly want to build a UI; you can mock up a functioning UI in a couple hours. Since it still can be limiting for some custom UI's it's not perfect for everything, but if something should "look" more or less like an application and fit in a grid it's super quick to mock up the UI in MXML. Also don't be intimidated by how most Flex apps look (ugly, imo), you can customize everything or easily create your own components. Putting actionscript in mxml is the same as putting css or javascript in html = really bad. Unfortunately even Adobe has this in multiple examples (probably mostly because it's easier & faster for demonstrations). My personal opinion is that this applies to bindings too, as I don't want to put my data in the UI (mxml).
As an experienced developer I'm sure you don't do any development on the timeline (to clarify the Flash = timeline misconception). Still, with Flex you have the UI separated in a framework that handles a lot of the burden with layout so that you can concentrate on the business logic. The rest of the workflow is close to what you probably already have with Flash. A: Flash and Flex provide different ways to produce different things. I am not familiar with Flash, but I would expect that it is dependent on a time-oriented way to produce something, whereas Flex is geared toward more traditional software development. That is, rather than dealing with time and frames in Flash, one is dealing with describing where components should be placed with MXML and how those components work with ActionScript. One should also be able to write a Flex app with just AS3 and no MXML. The main difference between AS3 and MXML in Flex, as far as I know, is that MXML is not intended to be used with application logic, but rather it is intended to be used like HTML/CSS in web pages and puts components and content onto the Flex app. ActionScript is used to program behaviors, components, and other things outside of what MXML does. Thus, if you want to attach an event to a component you would write ActionScript code. Hope that helps. I am still learning about Flex myself. A: Some other differences that come to mind: Flash allows you to create graphical assets and then work with them immediately. To use those same things in Flex, you need to use Flash to export them to a swf or swc first. Flex has a layout manager, so applications that have variable window size are waaaay easier to make. For instance, you can take a window and set it to 90% width of the window, and it will change size... not scale mind you, but actually change its width as the window is made larger or smaller. This is not easy outside of the Flex framework. Data Binding in Flex is a huge timesaver.
It essentially creates all of the code you'd need to write in AS3 by simply saying blah="{foo}" The curly braces denote "bind to this". The Flex Debugger is vastly superior to the Flash one. There is also a Profiler. Since I started with Flex and not Flash, I'm not sure what kind of IDE is best for Flash dev, but the Eclipse based Flex Builder is quite nice. The code hinting is great. Subclipse integration is great. Really, Flash and Flex are different beasts. You should know and understand AS3 if you want to use Flex, and since you do, you're in a perfect position to take advantage of Flex's features. Flash is not going anywhere as a tool for making more visually creative pieces, but Flex offers a lot of advantages for application development. A: I prefer Flash IDE vs Flex (aka Flex Builder aka Flash Builder for my comment) In general I would say it depends on the size of the project. I find it easier to start and finish small projects quickly in Flash. I would advise Flex for larger projects because it has various debug tools that can save you plenty of time (although I would still just use Flash myself). But maybe if you really get used to Flex, that might not matter. Some cons of Flex from my experience: * *When working on a team of 4 on a large project, Flex failed to keep the project settings from one computer to another. (we shared files using SVN) *Flex constantly conflicted with SVN for us. *I felt distant from the art assets. Some pros of Flex: * *being able to follow variable references from one class to another at the click of a button. *being able to easily see many variables while debugging, w/o needing to trace them. *and Flash used to not have Custom Class Code hinting, but now with CS5 it does. *I think you can use the newest features of Flash Player w/o waiting for a new Flash CS#, for example MoleHill (a new 3d api that uses the GPU) has a beta release out right now, and I think the Flex SDK can already use it. Hope this helps.
It should be noted that I am a rare case that doesn't prefer Flex; most people strongly prefer Flex, so you should give it a try at least. A: MXML compiles to ActionScript so it's really like a higher level version of that. So, yes, everything that can be done with MXML can be done with ActionScript (but not the other way around). A: Flash CSx: * *GUI\Layout: Basic GUI class framework *Graphical Content: Great for editing graphical library objects with or without animation *Code: Lacks a good code editor Flex/Flash Builder + Flex Framework: * *GUI\Layout: Advanced GUI class framework and layout engine (Flex) *Graphical Content: Lacks drawing capabilities of Flash, but you can include Flash generated graphics by exporting them for ActionScript into a SWC and importing/referencing the SWC in Flash Builder. *Code: Much better code editor than Flash; not sure if it's on par with FlashDevelop *Other: Supports MXML, which is basically just another style of laying out content. Instead of writing a bunch of "c = new C()", "c.prop = x", "c.addChild"... you can structure display objects and their children using XML constructs, and the MXML compiler will convert it all back into the less-readable, but basically the same AS3 code. These technologies are all related and interoperable. They are natural and predictable extensions of the Flash player and ActionScript technologies, but for some reason Adobe developed the Flex/Flex-builder/MXML technologies as a totally separate product, and marketed it as something totally new and oh-so-amazing. Whatever. So now we have to go back and forth between the two to use all the features, which is LAME. They also have to waste time and resources developing unnecessary, but helpful, packages like the "Flex Component Kit" to reduce the number of steps necessary to get Flash content into Flash Builder.
You have to go back and forth between these applications, because of their mutually exclusive features -- Flash Builder lacks graphics editing, and Flash CSx lacks MXML and a good code editor -- but they're interoperable in the sense that you can use Flex classes in Flash, Flash classes (and their embedded graphics) in Flex, you can use Flash Builder and MXML without Flex, etc. I think they need a single, truly integrated Flash IDE, so they need to merge Flash Builder into the Flash CSx editor.
{ "language": "en", "url": "https://stackoverflow.com/questions/158954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What types should I use to represent percentages in C# and SQL Server? I'm thinking floats. For the record I'm also using NHibernate. A: The answer is application-dependent. Others have pointed out that decimal is better than float for representing an exact value. But sometimes it's better to use a float for the increased precision (e.g. a set of calculated weights that add up to 100% is probably better represented as a float) A: decimal You won't lose precision due to rounding. A: It depends on how accurate you need to be. If you don't need any decimal places you could even use tinyint. But generally floats are good if you need some decimal places. I use LLBLGen and floats for %'s A: It largely depends on how much precision you need; the most important thing is to be consistent and clear. Take precautions to ensure that you are consistent across the field's use, i.e. don't store it as a float (n = .5) and then try to reconstitute it as if it were an integer in another part of your code (mypercentage = n/100). As long as you make sure not to do that and you are not building a laser that requires 30 significant digits of precision, just pick your favorite flavor between int, double, or whatever floats your boat. waka-waka. A: With regard to SQL Server, it's rare that I'd store a percentage. 9 times out of 10 you want to store the data that is used to derive the percentage and calculate as needed. If and only if you have empirical data showing the calculation is too slow, then go ahead and store it. Use a decimal as previously suggested to avoid rounding issues. A: It depends on what you are using them for. If it's for display, an int would do just fine, and be faster than a float. If it's for infrequent mathematics, an int would still do fine (or even a short, in either case). If you're doing a lot of math, then a float would probably be best, performance-wise.
Of course, unless you're doing a LOT of manipulation of the percentages, it won't really matter in the end, performance-wise, given modern processor speed. EDIT: Of course, 'int' assumes you are just using strict, whole-number percents. If you aren't, you'd ALWAYS be better with float or dec.
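To make the rounding concern behind the "use decimal" answers concrete, here is a small illustration using Python's standard decimal module (the percentage helper is hypothetical, not from any answer above; C#'s decimal type behaves analogously):

```python
from decimal import Decimal

def percentage(part, whole):
    """Percentage of part in whole, exact to two decimal places."""
    return (Decimal(part) / Decimal(whole) * 100).quantize(Decimal("0.01"))

# Binary floats cannot represent 0.1 exactly, so repeated sums drift
# away from 1.0; decimal values do not.
float_sum = sum([0.1] * 10)
decimal_sum = sum([Decimal("0.1")] * 10)
```

This is the same trade-off the answers describe: decimal gives exact base-10 results for storage and display, while binary floats trade exactness for speed and range.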
{ "language": "en", "url": "https://stackoverflow.com/questions/158966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Changing Vim indentation behavior by file type Could someone explain to me in simple terms the easiest way to change the indentation behavior of Vim based on the file type? For instance, if I open a Python file it should indent with 2 spaces, but if I open a PowerShell script it should use 4 spaces. A: This might be known by most of us, but anyway (I was puzzled my first time): Doing :set et (:set expandtab) does not change the tabs already existing in the file; one has to do :retab. For example: :set et :retab and the tabs in the file are replaced by enough spaces. To have tabs back simply do: :set noet :retab A: Put autocmd commands based on the file suffix in your ~/.vimrc autocmd BufRead,BufNewFile *.c,*.h,*.java set noic cin noexpandtab autocmd BufRead,BufNewFile *.pl syntax on The commands you're looking for are probably ts= and sw= A: For those using autocmd, it is a best practice to group those together. If a grouping is related to file-type detection, you might have something like this: augroup filetype_c autocmd! :autocmd FileType c setlocal tabstop=2 shiftwidth=2 softtabstop=2 expandtab :autocmd FileType c nnoremap <buffer> <localleader>c I/*<space><esc><s-a><space>*/<esc> augroup end Groupings help keep the .vimrc organized, especially once a filetype has multiple rules associated with it. In the above example, a comment shortcut specific to .c files is defined. The initial call to autocmd! tells vim to delete any previously defined autocommands in said grouping. This will prevent duplicate definitions if .vimrc is sourced again. See the :help augroup for more info. A: Today, you could try editorconfig; there is also a vim plugin for it. With this, you are able not only to change indentation size in vim, but also to keep consistent coding styles in many other editors. Below is a simple editorconfig; as you can see, the python files will have 4 spaces for indentation, and pug template files will only have 2.
# 4 space indentation for python files [*.py] indent_style = space indent_size = 4 # 2 space indentation for pug templates [*.pug] indent_size = 2 A: You can add .vim files to be executed whenever vim switches to a particular filetype. For example, I have a file ~/.vim/after/ftplugin/html.vim with this contents: setlocal shiftwidth=2 setlocal tabstop=2 Which causes vim to use tabs with a width of 2 characters for indenting (the noexpandtab option is set globally elsewhere in my configuration). This is described here: http://vimdoc.sourceforge.net/htmldoc/usr_05.html#05.4, scroll down to the section on filetype plugins. A: While you can configure Vim's indentation just fine using the indent plugin or manually using the settings, I recommend using a python script called Vindect that automatically sets the relevant settings for you when you open a python file. Use this tip to make using Vindect even more effective. When I first started editing python files created by others with various indentation styles (tab vs space and number of spaces), it was incredibly frustrating. But Vindect along with this indent file Also recommend: * *pythonhelper *python_match *python_ifold A: In Lua (for Neovim users) you can use RUNTIMEPATH/ftplugin/*yourfiletype*.lua with options like: vim.opt_local.shiftwidth = 2 vim.opt_local.tabstop = 2 Just be sure to use string values in quotes. For example: vim.opt_local.foldmethod = 'marker' A: I usually work with expandtab set, but that's bad for makefiles. I recently added: :autocmd FileType make set noexpandtab to the end of my .vimrc file and it recognizes Makefile, makefile, and *.mk as makefiles and does not expand tabs. Presumably, you can extend this. A: Use ftplugins or autocommands to set options. 
ftplugin In ~/.vim/ftplugin/python.vim: setlocal shiftwidth=2 softtabstop=2 expandtab And don't forget to turn them on in ~/.vimrc: filetype plugin indent on (:h ftplugin for more information) autocommand In ~/.vimrc: autocmd FileType python setlocal shiftwidth=2 softtabstop=2 expandtab I would also suggest learning the difference between tabstop and softtabstop. A lot of people don't know about softtabstop. A: Personally, I use these settings in .vimrc: autocmd FileType python set tabstop=8|set shiftwidth=2|set expandtab autocmd FileType ruby set tabstop=8|set shiftwidth=2|set expandtab A: Edit your ~/.vimrc, and add different file types for different indents, e.g. I want html/rb indent for 2 spaces, and js/coffee files indent for 4 spaces: " by default, the indent is 2 spaces. set shiftwidth=2 set softtabstop=2 set tabstop=2 " for html/rb files, 2 spaces autocmd Filetype html setlocal ts=2 sw=2 expandtab autocmd Filetype ruby setlocal ts=2 sw=2 expandtab " for js/coffee/jade files, 4 spaces autocmd Filetype javascript setlocal ts=4 sw=4 sts=0 expandtab autocmd Filetype coffeescript setlocal ts=4 sw=4 sts=0 expandtab autocmd Filetype jade setlocal ts=4 sw=4 sts=0 expandtab refer to: Setting Vim whitespace preferences by filetype A: I use a utility that I wrote in C called autotab. It analyzes the first few thousand lines of a file which you load and determines values for the Vim parameters shiftwidth, tabstop and expandtab. This is compiled using, for instance, gcc -O autotab.c -o autotab. Instructions for integrating with Vim are in the comment header at the top. Autotab is fairly clever, but can get confused from time to time, in particular by files that have been inconsistently maintained using different indentation styles. If a file evidently uses tabs, or a combination of tabs and spaces, for indentation, Autotab will figure out what tab size is being used by considering factors like alignment of internal elements across successive lines, such as comments.
It works for a variety of programming languages, and is forgiving for "out of band" elements which do not obey indentation increments, such as C preprocessing directives, C statement labels, not to mention the obvious blank lines.
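Several answers above set tabstop, softtabstop, shiftwidth and expandtab without spelling out what each option controls. As a summary, here is a commented ftplugin sketch (the values are only illustrative, not a recommendation):

```vim
" tabstop: how many columns a literal <Tab> character is displayed as
setlocal tabstop=8
" softtabstop: how many columns pressing <Tab> or <BS> inserts/removes while editing
setlocal softtabstop=4
" shiftwidth: how many columns >>, << and autoindent shift by
setlocal shiftwidth=4
" expandtab: insert spaces instead of literal <Tab> characters
setlocal expandtab
```

With this combination, pressing <Tab> inserts four spaces even though a real tab character still displays as eight columns — which is the kind of per-filetype tuning the ftplugin and autocmd answers above rely on.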
{ "language": "en", "url": "https://stackoverflow.com/questions/158968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "434" }
Q: ASP.NET UpdatePanel Time Out I'm making a request from an UpdatePanel that takes more then 90 seconds. I'm getting this timeout error: Microsoft JScript runtime error: Sys.WebForms.PageRequestManagerTimeoutException: The server request timed out. Does anyone know if there is a way to increase the amount of time before the call times out? A: There is a property on the ScriptManager which allows you to set the time-out in seconds. The default value is 90 seconds. AsyncPostBackTimeout="300" A: Please follow the steps below: Step 1: In web.config, set httpRuntime maxRequestLength="1024000" executionTimeout="999999" Step 2: Add the following setting to your web page's ScriptManager: AsyncPostBackTimeout ="360000" This will solve your problem. A: In my case the ScriptManager object was created in a Master Page file that was then shared with the Content Page files. So to change the ScriptManager.AsyncPostBackTimeout property in the Content Page, I had to access the object in the Content Page's aspx.cs file: protected void Page_Load(object sender, EventArgs e) { . . . ScriptManager _scriptMan = ScriptManager.GetCurrent(this); _scriptMan.AsyncPostBackTimeout = 36000; } A: This might be configurable by changing the ASP script timeout in IIS. It's located in the properties of your web site, virtual directory, configuration button, then on the options tab. or set it by setting the Server.ScriptTimeout property. A: This did the trick (basically just ignoring all timeouts): <script type="text/javascript"> Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function (sender, args) { if (args.get_error() && args.get_error().name === 'Sys.WebForms.PageRequestManagerTimeoutException') { args.set_errorHandled(true); } }); </script> A: Well, I suppose that would work if you just want the request thrown away with the potential that it never completely executed... 
Add an AsyncPostBackTimeout property to the ScriptManager tag to change your default timeout from 90 seconds to something more reasonable for your application. Further, look into changing the web service receiving the call to move faster. 90 seconds may as well be infinity in internet time.

A: The problem you are facing is when your application runs into a timeout on a SQL database query. It's taking more time than the default to return the output. So you need to increase the ConnectionTimeout property. You can do it in several ways:

1. A connection string has a ConnectionTimeout property. It is a property that determines the maximum number of seconds your code will wait for a connection to the database to be opened. You can set the connection timeout in the connection string section of web.config:

<connectionStrings>
  <add name="ConnectionString"
       connectionString="Database=UKTST1;Server=BRESAWN;uid=...;Connection Timeout=30"
       providerName="System.Data.SqlClient" />
</connectionStrings>

2. You can put AsyncPostBackTimeout="6000" in the .aspx page:

<asp:ToolkitScriptManager runat="server" AsyncPostBackTimeout="6000">
</asp:ToolkitScriptManager>

3. You can set a timeout on the SqlCommand where you are calling the stored procedure in the .cs file (note that CommandTimeout is measured in seconds):

command.CommandTimeout = 300;

Hope you have a solution!
{ "language": "en", "url": "https://stackoverflow.com/questions/158975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: How to relate objects from multiple contexts using the Entity Framework I am very new to the entity framework, so please bear with me... How can I relate two objects from different contexts together? The example below throws the following exception: System.InvalidOperationException: The relationship between the two objects cannot be defined because they are attached to different ObjectContext objects. void MyFunction() { using (TCPSEntities model = new TCPSEntities()) { EmployeeRoles er = model.EmployeeRoles.First(p=>p.EmployeeId == 123); er.Roles = GetDefaultRole(); model.SaveChanges(); } } private static Roles GetDefaultRole() { Roles r = null; using (TCPSEntities model = new TCPSEntities()) { r = model.Roles.First(p => p.RoleId == 1); } return r; } Using one context is not an option because we are using the EF in an ASP.NET application. A: Another approach that you could use here is to detach objects from one context, and then attach them to another context. That's a bit of a hack, and it may not work in your situation, but it might be an option. public void GuestUserTest() { SlideLincEntities ctx1 = new SlideLincEntities(); GuestUser user = GuestUser.CreateGuestUser(); user.UserName = "Something"; ctx1.AddToUser(user); ctx1.SaveChanges(); SlideLincEntities ctx2 = new SlideLincEntities(); ctx1.Detach(user); user.UserName = "Something Else"; ctx2.Attach(user); ctx2.SaveChanges(); } A: Yep - working across 2 or more contexts is not supported in V1 of Entity Framework. Just in case you haven't already found it, there is a good faq on EF at http://blogs.msdn.com/dsimmons/pages/entity-framework-faq.aspx A: From what I understand, you want to instantiate your model (via the "new XXXXEntities()" bit) as rarely as possible. According to MS (http://msdn.microsoft.com/en-us/library/cc853327.aspx), that's a pretty substantial performance hit. So wrapping it in a using() structure isn't a good idea. 
What I've done in my projects is to access it through a static method that always provides the same instance of the context: private static PledgeManagerEntities pledgesEntities; public static PledgeManagerEntities PledgeManagerEntities { get { if (pledgesEntities == null) { pledgesEntities = new PledgeManagerEntities(); } return pledgesEntities; } set { pledgesEntities = value; } } And then I retrieve it like so: private PledgeManagerEntities entities = Data.PledgeManagerEntities; A: You will have to use the same context (you can pass the context to the getdefaultrole method) or rethink the relationships and extend the entity. EDIT: Wanted to add this was for the example provided, using asp.net will require you to fully think out your context and relationship designs. You could simply pass the context.. IE: void MyFunction() { using (TCPSEntities model = new TCPSEntities()) { EmployeeRoles er = model.EmployeeRoles.First(p=>p.EmployeeId == 123); er.Roles = GetDefaultRole(model); model.SaveChanges(); } } private static Roles GetDefaultRole(TCPSEntities model) { Roles r = null; r = model.Roles.First(p => p.RoleId == 1); return r; }
{ "language": "en", "url": "https://stackoverflow.com/questions/158986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: PrintDialog filter list of printers I need to, preferably in C# - but c++ will do, find a way to filter the list of printers in the windows print dialog for any windows printing. I have come across WinAPIOverride and have figured I am going to have to write my own dll which overrides the method to get the printers list, then filter it and return it. I would then have to inject the dll into all running processes. Can anybody assist me with something that is already developed or perhaps an easier way of accomplishing this? The only way the list of printers comes out is from the API method call and I have even considered modifying the registry, but this will slow down the response of the print dialog box to the point that it would be annoying to the user.

A: I don't think that (re)writing a DLL is the easiest method. Why not use WMI to retrieve the wanted information (printers in this case)? The following code is for retrieving all the locally installed printers: (code samples borrowed from here)

ManagementScope objScope = new ManagementScope(ManagementPath.DefaultPath); //For the local Access
objScope.Connect();

SelectQuery selectQuery = new SelectQuery();
selectQuery.QueryString = "Select * from win32_Printer";
ManagementObjectSearcher MOS = new ManagementObjectSearcher(objScope, selectQuery);
ManagementObjectCollection MOC = MOS.Get();
foreach (ManagementObject mo in MOC)
{
    listBox1.Items.Add(mo["Name"].ToString().ToUpper());
}

To get the printers known across a domain, use this:

ConnectionOptions objConnection = new ConnectionOptions();
objConnection.Username = "USERNAME";
objConnection.Password = "PASSWORD";
objConnection.Authority = "ntlmdomain:DDI"; //Where DDI is the name of my domain
// Make sure the user you specified has enough permission to access the resource. 
ManagementScope objScope = new ManagementScope(@"\\10.0.0.4\root\cimv2",objConnection); //For the local Access objScope.Connect(); SelectQuery selectQuery = new SelectQuery(); selectQuery.QueryString = "Select * from win32_Printer"; ManagementObjectSearcher MOS = new ManagementObjectSearcher(objScope, selectQuery); ManagementObjectCollection MOC = MOS.Get(); foreach (ManagementObject mo in MOC) { listBox1.Items.Add(mo["Name"].ToString().ToUpper()); } Of course, the list is not "filtered" as you would like as you didn't specify any criteria. But I'm sure you can manage from here on by yourself. A: Thanks for the interesting code. The idea is to apply a filtered printer list to the system as globally as possible without interfering with the user. This means the filtered list has to apply to the standard windows print dialogs unfortunately... So your WMI code, albeit kind of cool, would not be appropriate. If I were building my own print dialogs, it could come in pretty handy ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/158993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Runtime CSS fails silently when Flex application is loaded by a non-Flex application I have compiled a CSS swf file which includes embedded skins for my Flex application. In our download product, this CSS works fine. On the web, a non-Flex container is loading the Flex application, and the CSS fails silently. When the application is debugged, the following runtime error can be seen in the trace output, but is not raised as an exception.

method not implemented mx.core::IFlexDisplayObject/mx.core:IFlexDisplayObject::setActualSize() over-binding 0 in GlobalStyle__embed_css__319796825
accessor not implemented mx.core:IFlexDisplayObject::measuredHeight over-binding 0 in GlobalStyle__embed_css__319796825
accessor not implemented mx.core:IFlexDisplayObject::measuredWidth over-binding 0 in GlobalStyle__embed_css__319796825
method not implemented mx.core::IFlexDisplayObject/mx.core:IFlexDisplayObject::move() over-binding 0 in GlobalStyle__embed_css__319796825

Update Now that I've determined the actual problem, I've edited the question to be more useful and direct.

A: The non-Flex application is using content from the library with the same class name as the Flex skins embedded in the CSS swf. Because Flash looks to the most global swf for class definitions, it is using the classes defined by the non-Flex application. Because this content does not extend UIMovieClip, it is causing the StyleManager to fail. Due to potential security errors, Adobe has wrapped most of this process in try-catch blocks to suppress expected runtime errors. Rename the classes used by one application or the other in order to resolve this issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/158996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Max and min values in a C++ enum Is there a way to find the maximum and minimum defined values of an enum in c++?

A: No. An enum in C or C++ is simply a list of constants. There is no higher structure that would hold such information. Usually when I need this kind of information I include in the enum a max and min value, something like this:

enum {
    eAaa = 1,
    eBbb,
    eCccc,
    eMin = eAaa,
    eMax = eCccc
};

See this web page for some examples of how this can be useful: Stupid Enum Tricks

A: enum My_enum { FIRST_VALUE = 0, MY_VALUE1, MY_VALUE2, ... MY_VALUEN, LAST_VALUE };

after definition, My_enum::LAST_VALUE == N+1

A: No, not in standard C++. You could do it manually:

enum Name { val0, val1, val2, num_values };

num_values will contain the number of values in the enum.

A: Although the accepted answer correctly states that there is no standardized way to get the min and max values of enum elements, there is at least one possible way in newer versions of gcc (>= 9.0), which allows writing this:

enum class Fruits { Apples, Oranges, Pears, Bananas };

int main() {
    std::cout << "Min value for Fruits is " << EnumMin<Fruits>::value << std::endl; // 0
    std::cout << "Max value for Fruits is " << EnumMax<Fruits>::value << std::endl; // 3
    std::cout << "Name: " << getName<Fruits, static_cast<Fruits>(0)>().cStr() << std::endl; // Apples
    std::cout << "Name: " << getName<Fruits, static_cast<Fruits>(3)>().cStr() << std::endl; // Bananas
    std::cout << "Name: " << getName<Fruits, static_cast<Fruits>(99)>().cStr() << std::endl; // (Fruits)99
}

This works without any custom traits or hints. It's a very rough proof of concept and I'm sure it can be extended much further; this is just to show that this is possible today. This snippet compiles in C++14, and with a few tweaks it can definitely run also in C++11, but I don't think this would have been possible in pre-C++11. WARNING: This might break in future compiler releases. 
LIVE DEMO

A: No, there is no way to find the maximum and minimum defined values of any enum in C++. When this kind of information is needed, it is often good practice to define a Last and First value. For example,

enum MyPretendEnum {
   Apples,
   Oranges,
   Pears,
   Bananas,
   First = Apples,
   Last = Bananas
};

There do not need to be named values for every value between First and Last.

A: If it is certain that all enum values are in ascending order (or at least the last one is guaranteed to have the greatest value), then the magic enum library can be used without the need to add an extra element to the enum definition. It is used like this:

#include <magic_enum.hpp>
...
const size_t maxValue = static_cast<size_t>(magic_enum::enum_value<MyEnum>(magic_enum::enum_count<MyEnum>() - 1));

Magic enum git: https://github.com/Neargye/magic_enum

A: You don't even need them. For example, if you have:

enum Name { val0, val1, val2 };

and you have a switch statement and want to check whether the last value was reached, do the following:

if (selectedOption >= val0 && selectedOption <= val2) {
    // code
}
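The sentinel-member pattern shown in the answers above can be exercised in a few lines. This is only a sketch — the enum and function names are invented for the example — and it relies on the enumerators being contiguous:

```cpp
#include <cassert>

// First/Last alias the first and last real enumerators, so they add
// no new values; num_values-style counting works the same way.
enum Fruit {
    Apples,
    Oranges,
    Pears,
    Bananas,
    FruitFirst = Apples,
    FruitLast = Bananas
};

// Bounds check written only in terms of the sentinels.
inline bool isValidFruit(int v) {
    return v >= FruitFirst && v <= FruitLast;
}

// Count derived from the sentinels (valid only for contiguous enums).
inline int fruitCount() {
    return FruitLast - FruitFirst + 1;
}
```

A loop like for (int f = FruitFirst; f <= FruitLast; ++f) then visits every value, which is exactly what the First/Last (or eMin/eMax) members buy you.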
{ "language": "en", "url": "https://stackoverflow.com/questions/159006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87" }
Q: generate HTML document server-side with jquery and classic asp Here is my question: would it be possible, knowing that classic asp supports server-side javascript, to generate "server side HTML" to send to the client, like

Response.write $(page).html()

Of course it would be great to use jQuery to do it because it's easy to parse complicated structures and manipulate them. The only problem I can think of that would prevent me from doing this would be that classic asp exposes only 3 objects (response, server, request) and none of them provide "dom building facilities" like the ones jQuery uses all the time. How could we possibly create a blank document object?

Edit: I have to agree with you that it's definitely not a good idea performance-wise. Let me explain why we would need it. I am actually transforming various JSON feeds into complicated, sometimes nested reports in HTML. Client side it works really well, even with complicated sets and long reports. However, some of our clients would like to access the "formatted" report using tools like EXCEL (using web queries, which are devoid of any javascript). So in that particular case, I would need to be able to Response.Write the .html() content of what would be the jQuery work.

A: In such situations I use an XML DOM as a surrogate for the HTML DOM I would have in a browser. jQuery can manipulate an XML DOM; however, jQuery expects window to be present in its context. It may be possible to fool jQuery (or tweak it) so that it would work server-side, but it could be quite fragile. 
Personally, I just use a small library of helper functions that make manipulating an XML DOM a little less painful, such as:

function XmlWrapper(elem) {
    this.element = elem;
}

XmlWrapper.prototype.addChild = function(name) {
    var elem = this.element.ownerDocument.createElement(name);
    return new XmlWrapper(this.element.appendChild(elem));
}

Now your page code can do:

var dom = Server.CreateObject("MSXML2.DOMDocument.3.0");
dom.loadXML("<html />");
var html = new XmlWrapper(dom.documentElement);
var head = html.addChild("head");
var body = html.addChild("body");
var tHead = body.addChild("table").addChild("tHead");

As you create code that manipulates the DOM 'in the raw' you will see patterns you can re-factor as methods of the XmlWrapper class.

A: Yes it is possible. No, it wouldn't be fast at all, and I don't see any reason for doing it, as jQuery is often used for doing things that are only relevant on the client.

A: I have to ask what possible reason you have for doing this? If you want to build a DOM document server-side as opposed to writing HTML output, there's more likely to be an XML library of some kind that you can interface to ASP. jQuery is for client-side stuff; whilst server-side Javascript exists, it's not a common use-case.
{ "language": "en", "url": "https://stackoverflow.com/questions/159011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the use of a Dispatcher Object in WPF? What is the use of a Dispatcher Object in WPF? A: Almost every WPF element has thread affinity. This means that access to such an element should be made only from the thread that created the element. In order to do so, every element that requires thread affinity is derived, eventually, from DispatcherObject class. This class provides a property named Dispatcher that returns the Dispatcher object associated with the WPF element. The Dispatcher class is used to perform work on its attached thread. It has a queue of work items and it is in charge of executing the work items on the dispatcher thread. You can find on the following link some more details on the subject: https://www.codeproject.com/Articles/101423/WPF-Inside-Out-Dispatcher A: A dispatcher is often used to invoke calls on another thread. An example would be if you have a background thread working, and you need to update the UI thread, you would need a dispatcher to do it. A: In my experience we use Prism Event Aggregator. When the event happens it calls the Dispatcher.Invoke() to update the UI. This is because only the Dispatcher can update the objects in your UI from a non-UI thread. public PaginatedObservableCollection<OrderItems> Orders { get; } = new PaginatedObservableCollection<OrderItems>(20); _eventAggregator.GetEvent<OrderEvent>().Subscribe(orders => { MainDispatcher.Invoke(() => AddOrders(orders)); }); private void AddOrders(List<OrderItems> orders) { foreach (OrderItems item in orders) Orders.Add(item); }
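The "queue of work items executed on its attached thread" behaviour described above is not WPF-specific. As a minimal single-threaded sketch of the same pattern (all names here are invented, and a real cross-thread version would also need a mutex and a wakeup mechanism):

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <string>

// Toy dispatcher: any caller may enqueue work via invoke(), but the
// work only runs when the owning loop drains the queue, so everything
// the work touches is effectively confined to that loop's thread.
class ToyDispatcher {
public:
    void invoke(std::function<void()> work) {
        queue_.push(std::move(work));
    }

    // The "message loop": runs pending work items in FIFO order.
    void runPending() {
        while (!queue_.empty()) {
            queue_.front()();
            queue_.pop();
        }
    }

private:
    std::queue<std::function<void()>> queue_;
};
```

In WPF terms, invoke() plays the role of Dispatcher.BeginInvoke (queue the work and return), and runPending() stands in for the dispatcher's own processing loop on the UI thread.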
{ "language": "en", "url": "https://stackoverflow.com/questions/159015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Named string formatting in C# Is there any way to format a string by name rather than position in C#? In python, I can do something like this example (shamelessly stolen from here): >>> print '%(language)s has %(#)03d quote types.' % \ {'language': "Python", "#": 2} Python has 002 quote types. Is there any way to do this in C#? Say for instance: String.Format("{some_variable}: {some_other_variable}", ...); Being able to do this using a variable name would be nice, but a dictionary is acceptable too. A: The framework itself does not provide a way to do this, but you can take a look at this post by Scott Hanselman. Example usage: Person p = new Person(); string foo = p.ToString("{Money:C} {LastName}, {ScottName} {BirthDate}"); Assert.AreEqual("$3.43 Hanselman, {ScottName} 1/22/1974 12:00:00 AM", foo); This code by James Newton-King is similar and works with sub-properties and indexes, string foo = "Top result for {Name} was {Results[0].Name}".FormatWith(student)); James's code relies on System.Web.UI.DataBinder to parse the string and requires referencing System.Web, which some people don't like to do in non-web applications. EDIT: Oh and they work nicely with anonymous types, if you don't have an object with properties ready for it: string name = ...; DateTime date = ...; string foo = "{Name} - {Birthday}".FormatWith(new { Name = name, Birthday = date }); A: See https://stackoverflow.com/questions/271398?page=2#358259 With the linked-to extension you can write this: var str = "{foo} {bar} {baz}".Format(foo=>"foo", bar=>2, baz=>new object()); and you'll get "foo 2 System.Object". A: I have an implementation I just posted to my blog here: http://haacked.com/archive/2009/01/04/fun-with-named-formats-string-parsing-and-edge-cases.aspx It addresses some issues that these other implementations have with brace escaping. The post has details. It does the DataBinder.Eval thing too, but is still very fast. 
A: Interpolated strings were added in C# 6.0 and Visual Basic 14. Both were introduced through the new Roslyn compiler in Visual Studio 2015.

* C# 6.0: return "\{someVariable} and also \{someOtherVariable}" OR return $"{someVariable} and also {someOtherVariable}"
  * source: what's new in C# 6.0
* VB 14: return $"{someVariable} and also {someOtherVariable}"
  * source: what's new in VB 14

Noteworthy features (in the Visual Studio 2015 IDE):

* syntax coloring is supported - variables contained in strings are highlighted
* refactoring is supported - when renaming, variables contained in strings get renamed, too
* actually not only variable names, but expressions are supported - e.g. not only {index} works, but also {(index + 1).ToString().Trim()}

Enjoy! (& click "Send a Smile" in the VS)

A: I think the closest you'll get is an indexed format:

String.Format("{0} has {1} quote types.", "C#", "1");

There's also String.Replace(), if you're willing to do it in multiple steps and take it on faith that you won't find your 'variables' anywhere else in the string:

string MyString = "{language} has {n} quote types.";
MyString = MyString.Replace("{language}", "C#").Replace("{n}", "1");

Expanding this to use a List:

List<KeyValuePair<string, string>> replacements = GetFormatDictionary();
foreach (KeyValuePair<string, string> item in replacements)
{
    MyString = MyString.Replace(item.Key, item.Value);
}

You could do that with a Dictionary<string, string> too by iterating its .Keys collection, but by using a List<KeyValuePair<string, string>> we can take advantage of the List's .ForEach() method and condense it back to a one-liner:

replacements.ForEach(delegate(KeyValuePair<string, string> item) { MyString = MyString.Replace(item.Key, item.Value); });

A lambda would be even simpler, but I'm still on .Net 2.0. Also note that the .Replace() performance isn't stellar when used iteratively, since strings in .Net are immutable. 
Also, this requires the MyString variable be defined in such a way that it's accessible to the delegate, so it's not perfect yet. A: You can also use anonymous types like this: public string Format(string input, object p) { foreach (PropertyDescriptor prop in TypeDescriptor.GetProperties(p)) input = input.Replace("{" + prop.Name + "}", (prop.GetValue(p) ?? "(null)").ToString()); return input; } Of course it would require more code if you also want to parse formatting, but you can format a string using this function like: Format("test {first} and {another}", new { first = "something", another = "something else" }) A: My open source library, Regextra, supports named formatting (amongst other things). It currently targets .NET 4.0+ and is available on NuGet. I also have an introductory blog post about it: Regextra: helping you reduce your (problems){2}. The named formatting bit supports: * *Basic formatting *Nested properties formatting *Dictionary formatting *Escaping of delimiters *Standard/Custom/IFormatProvider string formatting Example: var order = new { Description = "Widget", OrderDate = DateTime.Now, Details = new { UnitPrice = 1500 } }; string template = "We just shipped your order of '{Description}', placed on {OrderDate:d}. Your {{credit}} card will be billed {Details.UnitPrice:C}."; string result = Template.Format(template, order); // or use the extension: template.FormatTemplate(order); Result: We just shipped your order of 'Widget', placed on 2/28/2014. Your {credit} card will be billed $1,500.00. Check out the project's GitHub link (above) and wiki for other examples. 
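One answer above notes that chained String.Replace calls are slow because every call copies the whole immutable string. The underlying technique — substituting {name} tokens from a dictionary — can instead be done in a single left-to-right pass. Here is a language-neutral sketch of that idea in C++ (the function name is invented, and escapes like {{ are deliberately not handled):

```cpp
#include <cassert>
#include <map>
#include <string>

// One-pass named substitution: copy literal text, and when a {key}
// token is found, append the mapped value (or leave the token
// unchanged if the key is unknown).
std::string formatNamed(const std::string& tmpl,
                        const std::map<std::string, std::string>& args) {
    std::string out;
    out.reserve(tmpl.size());
    std::string::size_type i = 0;
    while (i < tmpl.size()) {
        if (tmpl[i] == '{') {
            std::string::size_type close = tmpl.find('}', i + 1);
            if (close != std::string::npos) {
                std::string key = tmpl.substr(i + 1, close - i - 1);
                auto it = args.find(key);
                out += (it != args.end()) ? it->second
                                          : tmpl.substr(i, close - i + 1);
                i = close + 1;
                continue;
            }
        }
        out += tmpl[i++];
    }
    return out;
}
```

Each input character is visited once and appended to a single growing buffer, which is the same trade the StringBuilder-based answers below make against repeated whole-string replacement.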
A: private static Regex s_NamedFormatRegex = new Regex(@"\{(?!\{)(?<key>[\w]+)(:(?<fmt>(\{\{|\}\}|[^\{\}])*)?)?\}", RegexOptions.Compiled); public static StringBuilder AppendNamedFormat(this StringBuilder builder,IFormatProvider provider, string format, IDictionary<string, object> args) { if (builder == null) throw new ArgumentNullException("builder"); var str = s_NamedFormatRegex.Replace(format, (mt) => { string key = mt.Groups["key"].Value; string fmt = mt.Groups["fmt"].Value; object value = null; if (args.TryGetValue(key,out value)) { return string.Format(provider, "{0:" + fmt + "}", value); } else { return mt.Value; } }); builder.Append(str); return builder; } public static StringBuilder AppendNamedFormat(this StringBuilder builder, string format, IDictionary<string, object> args) { if (builder == null) throw new ArgumentNullException("builder"); return builder.AppendNamedFormat(null, format, args); } Example: var builder = new StringBuilder(); builder.AppendNamedFormat( @"你好,{Name},今天是{Date:yyyy/MM/dd}, 这是你第{LoginTimes}次登录,积分{Score:{{ 0.00 }}}", new Dictionary<string, object>() { { "Name", "wayjet" }, { "LoginTimes",18 }, { "Score", 100.4 }, { "Date",DateTime.Now } }); Output: 你好,wayjet,今天是2011-05-04, 这是你第18次登录,积分{ 100.40 } A: Check this one: public static string StringFormat(string format, object source) { var matches = Regex.Matches(format, @"\{(.+?)\}"); List<string> keys = (from Match matche in matches select matche.Groups[1].Value).ToList(); return keys.Aggregate( format, (current, key) => { int colonIndex = key.IndexOf(':'); return current.Replace( "{" + key + "}", colonIndex > 0 ? 
DataBinder.Eval(source, key.Substring(0, colonIndex), "{0:" + key.Substring(colonIndex + 1) + "}") : DataBinder.Eval(source, key).ToString()); }); } Sample: string format = "{foo} is a {bar} is a {baz} is a {qux:#.#} is a really big {fizzle}"; var o = new { foo = 123, bar = true, baz = "this is a test", qux = 123.45, fizzle = DateTime.Now }; Console.WriteLine(StringFormat(format, o)); Performance is pretty ok compared to other solutions. A: There doesn't appear to be a way to do this out of the box. Though, it looks feasible to implement your own IFormatProvider that links to an IDictionary for values. var Stuff = new Dictionary<string, object> { { "language", "Python" }, { "#", 2 } }; var Formatter = new DictionaryFormatProvider(); // Interpret {0:x} where {0}=IDictionary and "x" is hash key Console.WriteLine string.Format(Formatter, "{0:language} has {0:#} quote types", Stuff); Outputs: Python has 2 quote types The caveat is that you can't mix FormatProviders, so the fancy text formatting can't be used at the same time. A: There is no built-in method for handling this. Here's one method string myString = "{foo} is {bar} and {yadi} is {yada}".Inject(o); Here's another Status.Text = "{UserName} last logged in at {LastLoginDate}".FormatWith(user); A third improved method partially based on the two above, from Phil Haack Update: This is now built-in as of C# 6 (released in 2015). String Interpolation $"{some_variable}: {some_other_variable}" A: I doubt this will be possible. The first thing that comes to mind is how are you going to get access to local variable names? There might be some clever way using LINQ and Lambda expressions to do this however. A: Here's one I made a while back. It extends String with a Format method taking a single argument. The nice thing is that it'll use the standard string.Format if you provide a simple argument like an int, but if you use something like anonymous type it'll work too. 
Example usage: "The {Name} family has {Children} children".Format(new { Children = 4, Name = "Smith" }) Would result in "The Smith family has 4 children." It doesn't do crazy binding stuff like arrays and indexers. But it is super simple and high performance. public static class AdvancedFormatString { /// <summary> /// An advanced version of string.Format. If you pass a primitive object (string, int, etc), it acts like the regular string.Format. If you pass an anonmymous type, you can name the paramters by property name. /// </summary> /// <param name="formatString"></param> /// <param name="arg"></param> /// <returns></returns> /// <example> /// "The {Name} family has {Children} children".Format(new { Children = 4, Name = "Smith" }) /// /// results in /// "This Smith family has 4 children /// </example> public static string Format(this string formatString, object arg, IFormatProvider format = null) { if (arg == null) return formatString; var type = arg.GetType(); if (Type.GetTypeCode(type) != TypeCode.Object || type.IsPrimitive) return string.Format(format, formatString, arg); var properties = TypeDescriptor.GetProperties(arg); return formatString.Format((property) => { var value = properties[property].GetValue(arg); return Convert.ToString(value, format); }); } public static string Format(this string formatString, Func<string, string> formatFragmentHandler) { if (string.IsNullOrEmpty(formatString)) return formatString; Fragment[] fragments = GetParsedFragments(formatString); if (fragments == null || fragments.Length == 0) return formatString; return string.Join(string.Empty, fragments.Select(fragment => { if (fragment.Type == FragmentType.Literal) return fragment.Value; else return formatFragmentHandler(fragment.Value); }).ToArray()); } private static Fragment[] GetParsedFragments(string formatString) { Fragment[] fragments; if ( parsedStrings.TryGetValue(formatString, out fragments) ) { return fragments; } lock (parsedStringsLock) { if ( 
!parsedStrings.TryGetValue(formatString, out fragments) ) { fragments = Parse(formatString); parsedStrings.Add(formatString, fragments); } } return fragments; } private static Object parsedStringsLock = new Object(); private static Dictionary<string,Fragment[]> parsedStrings = new Dictionary<string,Fragment[]>(StringComparer.Ordinal); const char OpeningDelimiter = '{'; const char ClosingDelimiter = '}'; /// <summary> /// Parses the given format string into a list of fragments. /// </summary> /// <param name="format"></param> /// <returns></returns> static Fragment[] Parse(string format) { int lastCharIndex = format.Length - 1; int currFragEndIndex; Fragment currFrag = ParseFragment(format, 0, out currFragEndIndex); if (currFragEndIndex == lastCharIndex) { return new Fragment[] { currFrag }; } List<Fragment> fragments = new List<Fragment>(); while (true) { fragments.Add(currFrag); if (currFragEndIndex == lastCharIndex) { break; } currFrag = ParseFragment(format, currFragEndIndex + 1, out currFragEndIndex); } return fragments.ToArray(); } /// <summary> /// Finds the next delimiter from the starting index. /// </summary> static Fragment ParseFragment(string format, int startIndex, out int fragmentEndIndex) { bool foundEscapedDelimiter = false; FragmentType type = FragmentType.Literal; int numChars = format.Length; for (int i = startIndex; i < numChars; i++) { char currChar = format[i]; bool isOpenBrace = currChar == OpeningDelimiter; bool isCloseBrace = isOpenBrace ? false : currChar == ClosingDelimiter; if (!isOpenBrace && !isCloseBrace) { continue; } else if (i < (numChars - 1) && format[i + 1] == currChar) {//{{ or }} i++; foundEscapedDelimiter = true; } else if (isOpenBrace) { if (i == startIndex) { type = FragmentType.FormatItem; } else { if (type == FragmentType.FormatItem) throw new FormatException("Two consequtive unescaped { format item openers were found. 
Either close the first or escape any literals with another {."); //curr character is the opening of a new format item. so we close this literal out string literal = format.Substring(startIndex, i - startIndex); if (foundEscapedDelimiter) literal = ReplaceEscapes(literal); fragmentEndIndex = i - 1; return new Fragment(FragmentType.Literal, literal); } } else {//close bracket if (i == startIndex || type == FragmentType.Literal) throw new FormatException("A } closing brace existed without an opening { brace."); string formatItem = format.Substring(startIndex + 1, i - startIndex - 1); if (foundEscapedDelimiter) formatItem = ReplaceEscapes(formatItem);//a format item with a { or } in its name is crazy but it could be done fragmentEndIndex = i; return new Fragment(FragmentType.FormatItem, formatItem); } } if (type == FragmentType.FormatItem) throw new FormatException("A format item was opened with { but was never closed."); fragmentEndIndex = numChars - 1; string literalValue = format.Substring(startIndex); if (foundEscapedDelimiter) literalValue = ReplaceEscapes(literalValue); return new Fragment(FragmentType.Literal, literalValue); } /// <summary> /// Replaces escaped brackets, turning '{{' and '}}' into '{' and '}', respectively. /// </summary> /// <param name="value"></param> /// <returns></returns> static string ReplaceEscapes(string value) { return value.Replace("{{", "{").Replace("}}", "}"); } private enum FragmentType { Literal, FormatItem } private class Fragment { public Fragment(FragmentType type, string value) { Type = type; Value = value; } public FragmentType Type { get; private set; } /// <summary> /// The literal value, or the name of the fragment, depending on fragment type. 
/// </summary> public string Value { get; private set; } } } A: Here is a simple method for any object: using System.Text.RegularExpressions; using System.ComponentModel; public static string StringWithFormat(string format, object args) { Regex r = new Regex(@"\{([A-Za-z0-9_]+)\}"); MatchCollection m = r.Matches(format); var properties = TypeDescriptor.GetProperties(args); foreach (Match item in m) { try { string propertyName = item.Groups[1].Value; format = format.Replace(item.Value, properties[propertyName].GetValue(args).ToString()); } catch { throw new FormatException("The format string is not valid"); } } return format; } And here is how to use it: DateTime date = DateTime.Now; string dateString = StringWithFormat("{Month}/{Day}/{Year}", date); output: 2/27/2012 A: I implemented this as a simple class that duplicates the functionality of String.Format (except for when using classes). You can either use a dictionary or a type to define fields. https://github.com/SergueiFedorov/NamedFormatString C# 6.0 is adding this functionality right into the language spec, so NamedFormatString is for backwards compatibility. A: I solved this in a slightly different way to the existing solutions. It does the core of the named item replacement (not the reflection bit that some have done). It is extremely fast and simple... This is my solution: /// <summary> /// Formats a string with named format items given a template dictionary of the items values to use. /// </summary> public class StringTemplateFormatter { private readonly IFormatProvider _formatProvider; /// <summary> /// Constructs the formatter with the specified <see cref="IFormatProvider"/>. /// This is defaulted to <see cref="CultureInfo.CurrentCulture">CultureInfo.CurrentCulture</see> if none is provided. /// </summary> /// <param name="formatProvider"></param> public StringTemplateFormatter(IFormatProvider formatProvider = null) { _formatProvider = formatProvider ?? 
CultureInfo.CurrentCulture; } /// <summary> /// Formats a string with named format items given a template dictionary of the items values to use. /// </summary> /// <param name="text">The text template</param> /// <param name="templateValues">The named values to use as replacements in the formatted string.</param> /// <returns>The resultant text string with the template values replaced.</returns> public string FormatTemplate(string text, Dictionary<string, object> templateValues) { var formattableString = text; var values = new List<object>(); foreach (KeyValuePair<string, object> value in templateValues) { var index = values.Count; formattableString = ReplaceFormattableItem(formattableString, value.Key, index); values.Add(value.Value); } return String.Format(_formatProvider, formattableString, values.ToArray()); } /// <summary> /// Convert named string template item to numbered string template item that can be accepted by <see cref="string.Format(string,object[])">String.Format</see> /// </summary> /// <param name="formattableString">The string containing the named format item</param> /// <param name="itemName">The name of the format item</param> /// <param name="index">The index to use for the item value</param> /// <returns>The formattable string with the named item substituted with the numbered format item.</returns> private static string ReplaceFormattableItem(string formattableString, string itemName, int index) { return formattableString .Replace("{" + itemName + "}", "{" + index + "}") .Replace("{" + itemName + ",", "{" + index + ",") .Replace("{" + itemName + ":", "{" + index + ":"); } } It is used in the following way: [Test] public void FormatTemplate_GivenANamedGuid_FormattedWithB_ShouldFormatCorrectly() { // Arrange var template = "My guid {MyGuid:B} is awesome!"; var templateValues = new Dictionary<string, object> { { "MyGuid", new Guid("{A4D2A7F1-421C-4A1D-9CB2-9C2E70B05E19}") } }; var sut = new StringTemplateFormatter(); // Act var result = 
sut.FormatTemplate(template, templateValues); //Assert Assert.That(result, Is.EqualTo("My guid {a4d2a7f1-421c-4a1d-9cb2-9c2e70b05e19} is awesome!")); } Hope someone finds this useful! A: Even though the accepted answer gives some good examples, the .Inject as well as some of the Haack examples do not handle escaping. Many also rely heavily on Regex (slower), or DataBinder.Eval which is not available on .NET Core, and in some other environments. With that in mind, I've written a simple state machine based parser that streams through characters, writing to a StringBuilder output, character by character. It is implemented as String extension method(s) and can take both a Dictionary<string, object> or object with parameters as input (using reflection). It handles unlimited levels of {{{escaping}}} and throws FormatException when input contains unbalanced braces and/or other errors. public static class StringExtension { /// <summary> /// Extension method that replaces keys in a string with the values of matching object properties. /// </summary> /// <param name="formatString">The format string, containing keys like {foo} and {foo:SomeFormat}.</param> /// <param name="injectionObject">The object whose properties should be injected in the string</param> /// <returns>A version of the formatString string with keys replaced by (formatted) key values.</returns> public static string FormatWith(this string formatString, object injectionObject) { return formatString.FormatWith(GetPropertiesDictionary(injectionObject)); } /// <summary> /// Extension method that replaces keys in a string with the values of matching dictionary entries. 
/// </summary> /// <param name="formatString">The format string, containing keys like {foo} and {foo:SomeFormat}.</param> /// <param name="dictionary">An <see cref="IDictionary"/> with keys and values to inject into the string</param> /// <returns>A version of the formatString string with dictionary keys replaced by (formatted) key values.</returns> public static string FormatWith(this string formatString, IDictionary<string, object> dictionary) { char openBraceChar = '{'; char closeBraceChar = '}'; return FormatWith(formatString, dictionary, openBraceChar, closeBraceChar); } /// <summary> /// Extension method that replaces keys in a string with the values of matching dictionary entries. /// </summary> /// <param name="formatString">The format string, containing keys like {foo} and {foo:SomeFormat}.</param> /// <param name="dictionary">An <see cref="IDictionary"/> with keys and values to inject into the string</param> /// <returns>A version of the formatString string with dictionary keys replaced by (formatted) key values.</returns> public static string FormatWith(this string formatString, IDictionary<string, object> dictionary, char openBraceChar, char closeBraceChar) { string result = formatString; if (dictionary == null || formatString == null) return result; // start the state machine! // ballpark output string as two times the length of the input string for performance (avoids reallocating the buffer as often). 
StringBuilder outputString = new StringBuilder(formatString.Length * 2); StringBuilder currentKey = new StringBuilder(); bool insideBraces = false; int index = 0; while (index < formatString.Length) { if (!insideBraces) { // currently not inside a pair of braces in the format string if (formatString[index] == openBraceChar) { // check if the brace is escaped if (index < formatString.Length - 1 && formatString[index + 1] == openBraceChar) { // add a brace to the output string outputString.Append(openBraceChar); // skip over braces index += 2; continue; } else { // not an escaped brace, set state to inside brace insideBraces = true; index++; continue; } } else if (formatString[index] == closeBraceChar) { // handle case where closing brace is encountered outside braces if (index < formatString.Length - 1 && formatString[index + 1] == closeBraceChar) { // this is an escaped closing brace, this is okay // add a closing brace to the output string outputString.Append(closeBraceChar); // skip over braces index += 2; continue; } else { // this is an unescaped closing brace outside of braces. 
// throw a format exception throw new FormatException($"Unmatched closing brace at position {index}"); } } else { // the character has no special meaning, add it to the output string outputString.Append(formatString[index]); // move onto next character index++; continue; } } else { // currently inside a pair of braces in the format string // found an opening brace if (formatString[index] == openBraceChar) { // check if the brace is escaped if (index < formatString.Length - 1 && formatString[index + 1] == openBraceChar) { // there are escaped braces within the key // this is illegal, throw a format exception throw new FormatException($"Illegal escaped opening braces within a parameter - index: {index}"); } else { // not an escaped brace, we have an unexpected opening brace within a pair of braces throw new FormatException($"Unexpected opening brace inside a parameter - index: {index}"); } } else if (formatString[index] == closeBraceChar) { // handle case where closing brace is encountered inside braces // don't attempt to check for escaped braces here - always assume the first brace closes the braces // since we cannot have escaped braces within parameters. 
// set the state to be outside of any braces insideBraces = false; // jump over brace index++; // at this stage, a key is stored in current key that represents the text between the two braces // do a lookup on this key string key = currentKey.ToString(); // clear the stringbuilder for the key currentKey.Clear(); object outObject; if (!dictionary.TryGetValue(key, out outObject)) { // the key was not found as a possible replacement, throw exception throw new FormatException($"The parameter \"{key}\" was not present in the lookup dictionary"); } // we now have the replacement value, add the value to the output string outputString.Append(outObject); // jump to next state continue; } // if } else { // character has no special meaning, add it to the current key currentKey.Append(formatString[index]); // move onto next character index++; continue; } // else } // if inside brace } // while // after the loop, if all braces were balanced, we should be outside all braces // if we're not, the input string was misformatted. 
if (insideBraces) { throw new FormatException("The format string ended before the parameter was closed."); } return outputString.ToString(); } /// <summary> /// Creates a Dictionary from an object's properties, with the Key being the property's /// name and the Value being the property's value (of type object) /// </summary> /// <param name="properties">An object whose properties will be used</param> /// <returns>A <see cref="Dictionary"/> of property values </returns> private static Dictionary<string, object> GetPropertiesDictionary(object properties) { Dictionary<string, object> values = null; if (properties != null) { values = new Dictionary<string, object>(); PropertyDescriptorCollection props = TypeDescriptor.GetProperties(properties); foreach (PropertyDescriptor prop in props) { values.Add(prop.Name, prop.GetValue(properties)); } } return values; } } Ultimately, all the logic boils down to 10 main states: whether the state machine is outside or inside a pair of braces, the next character is either an open brace, an escaped open brace, a close brace, an escaped close brace, or an ordinary character. Each of these conditions is handled individually as the loop progresses, adding characters to either an output StringBuilder or a key StringBuilder. When a parameter is closed, the value of the key StringBuilder is used to look up the parameter's value in the dictionary, which then gets pushed into the output StringBuilder. At the end, the value of the output StringBuilder is returned. A: string language = "Python"; int numquotes = 2; string output = language + " has "+ numquotes + " language types."; Edit: What I should have said was, "No, I don't believe what you want to do is supported by C#. This is as close as you are going to get."
{ "language": "en", "url": "https://stackoverflow.com/questions/159017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "156" }
Q: Are C++ enums signed or unsigned? Are C++ enums signed or unsigned? And by extension is it safe to validate an input by checking that it is <= your max value, and leave out >= your min value (assuming you started at 0 and incremented by 1)? A: Even though some old answers got 44 upvotes, I tend to disagree with all of them. In short, I don't think we should care about the underlying type of the enum. First off, in C++03 an enum type is a distinct type of its own, with no concept of sign. From the C++03 standard, [dcl.enum] 7.2 Enumeration declarations, paragraph 5: Each enumeration defines a type that is different from all other types.... So when we are talking about the sign of an enum type, say when comparing two enum operands using the < operator, we are actually talking about implicitly converting the enum type to some integral type. It is the sign of this integral type that matters. And when converting an enum to an integral type, this statement applies: 9 The value of an enumerator or an object of an enumeration type is converted to an integer by integral promotion (4.5). And, apparently, the underlying type of the enum has nothing to do with the Integral Promotion. The standard defines Integral Promotion like this: 4.5 Integral promotions [conv.prom]: An rvalue of an enumeration type (7.2) can be converted to an rvalue of the first of the following types that can represent all the values of the enumeration (i.e. the values in the range bmin to bmax as described in 7.2): int, unsigned int, long, or unsigned long. So, whether an enum type becomes signed int or unsigned int depends on whether signed int can contain all the values of the defined enumerators, not the underlying type of the enum. See my related question Sign of C++ Enum Type Incorrect After Converting to Integral Type A: You shouldn't rely on any specific representation. 
Also, the standard says that it is implementation-defined which integral type is used as the underlying type for an enum, except that it shall not be larger than int, unless some value cannot fit into int or an unsigned int. In short: you cannot rely on an enum being either signed or unsigned. A: The compiler can decide whether or not enums are signed or unsigned. Another method of validating enums is to use the enum itself as a variable type. For example: enum Fruit { Apple = 0, Banana, Pineapple, Orange, Kumquat }; enum Fruit fruitVariable = Banana; // Okay, Banana is a member of the Fruit enum fruitVariable = 1; // Error, 1 is not a member of enum Fruit even though it has the same value as banana. A: In the future, with C++0x, strongly typed enumerations will be available and have several advantages (such as type-safety, explicit underlying types, or explicit scoping). With that you could be better assured of the sign of the type. A: In addition to what others have already said about signed/unsigned, here's what the standard says about the range of an enumerated type: 7.2(6): "For an enumeration where e(min) is the smallest enumerator and e(max) is the largest, the values of the enumeration are the values of the underlying type in the range b(min) to b(max), where b(min) and b(max) are, respectively, the smallest and largest values of the smallest bitfield that can store e(min) and e(max). It is possible to define an enumeration that has values not defined by any of its enumerators." So for example: enum { A = 1, B = 4}; defines an enumerated type where e(min) is 1 and e(max) is 4. If the underlying type is signed int, then the smallest required bitfield has 4 bits, and if ints in your implementation are two's complement then the valid range of the enum is -8 to 7. If the underlying type is unsigned, then it has 3 bits and the range is 0 to 7. 
Check your compiler documentation if you care (for example if you want to cast integral values other than enumerators to the enumerated type, then you need to know whether the value is in the range of the enumeration or not - if not, the resulting enum value is unspecified). Whether those values are valid input to your function may be a different issue from whether they are valid values of the enumerated type. Your checking code is probably worried about the former rather than the latter, and so in this example should at least be checking >=A and <=B. A: Check it with std::is_signed<std::underlying_type<...>::type>; scoped enums default to int. https://en.cppreference.com/w/cpp/language/enum implies: main.cpp #include <cassert> #include <iostream> #include <type_traits> enum Unscoped {}; enum class ScopedDefault {}; enum class ScopedExplicit : long {}; int main() { // Implementation defined, let's find out. std::cout << std::is_signed<std::underlying_type<Unscoped>::type>() << std::endl; // Guaranteed. Scoped defaults to int. assert((std::is_same<std::underlying_type<ScopedDefault>::type, int>())); // Guaranteed. We set it ourselves. assert((std::is_same<std::underlying_type<ScopedExplicit>::type, long>())); } Compile and run: g++ -std=c++17 -Wall -Wextra -pedantic-errors -o main main.cpp ./main Output: 0 Tested on Ubuntu 16.04, GCC 6.4.0. A: You shouldn't depend on them being signed or unsigned. If you want to make them explicitly signed or unsigned, you can use the following: enum X : signed int { ... }; // signed enum enum Y : unsigned int { ... }; // unsigned enum A: You shouldn't rely on it being either signed or unsigned. According to the standard it is implementation-defined which integral type is used as the underlying type for an enum. In most implementations, though, it is a signed integer. In C++0x strongly typed enumerations will be added which will allow you to specify the type of an enum such as: enum X : signed int { ... 
}; // signed enum enum Y : unsigned int { ... }; // unsigned enum Even now, though, some simple validation can be achieved by using the enum as a variable or parameter type like this: enum Fruit { Apple, Banana }; enum Fruit fruitVariable = Banana; // Okay, Banana is a member of the Fruit enum fruitVariable = 1; // Error, 1 is not a member of enum Fruit // even though it has the same value as Banana. A: Let's go to the source. Here's what the C++03 standard (ISO/IEC 14882:2003) document says in 7.2-5 (Enumeration declarations): The underlying type of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int. In short, your compiler gets to choose (obviously, if you have negative numbers for some of your enumeration values, it'll be signed). A: While some of the above answers are arguably proper, they did not answer my practical question. The compiler (gcc 9.3.0) emitted warnings for: enum FOO_STATUS { STATUS_ERROR = (1 << 31) }; The warning was issued on use: unsigned status = foo_status_get(); if (STATUS_ERROR == status) { (Aside from the fact this code is incorrect ... do not ask.) When asked properly, the compiler does not complain: enum FOO_STATUS { STATUS_ERROR = (1U << 31) }; Note that 1U makes the expression unsigned.
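Pulling the points of this thread together in one compilable sketch (the name valid_fruit is illustrative, not from any answer): an explicit C++11 underlying type pins down the sign regardless of what the compiler would otherwise deduce, an ordinary small enum still promotes to signed int, and, for the original question, validating an int input needs the lower bound as well as the upper one, because a negative value passes a bare `v <= max` check after promotion.

```cpp
#include <type_traits>

// Explicit unsigned underlying type (C++11): (1U << 31) is in range and
// the enum converts to unsigned int with no sign-mismatch warnings.
enum FOO_STATUS : unsigned int { STATUS_ERROR = 1U << 31 };

static_assert(std::is_unsigned<std::underlying_type<FOO_STATUS>::type>::value,
              "underlying type is unsigned, exactly as declared");

// An enum whose enumerators all fit in int promotes to (signed) int,
// per the integral-promotion rules quoted above.
enum Fruit { Apple = 0, Banana, Pineapple, Orange, Kumquat };

// Validating an int input: the lower bound is NOT redundant. The
// comparison happens in signed int, so a negative value would pass
// a bare `v <= Kumquat` test.
bool valid_fruit(int v) { return v >= 0 && v <= Kumquat; }
```

For example, valid_fruit(-1) is false only because of the v >= 0 term; dropping that term silently admits negative inputs.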
{ "language": "en", "url": "https://stackoverflow.com/questions/159034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "116" }
Q: How can foreign key constraints be temporarily disabled using T-SQL? Are disabling and enabling foreign key constraints supported in SQL Server? Or is my only option to drop and then re-create the constraints? A: First post :) For the OP, kristof's solution will work, unless there are issues with massive data and transaction log balloon issues on big deletes. Also, even with tlog storage to spare, since deletes write to the tlog, the operation can take a VERY long time for tables with hundreds of millions of rows. I use a series of cursors to truncate and reload large copies of one of our huge production databases frequently. The solution engineered accounts for multiple schemas, multiple foreign key columns, and best of all can be sproc'd out for use in SSIS. It involves creation of three staging tables (real tables) to house the DROP, CREATE, and CHECK FK scripts, creation and insertion of those scripts into the tables, and then looping over the tables and executing them. The attached script is four parts: 1.) creation and storage of the scripts in the three staging (real) tables, 2.) execution of the drop FK scripts via a cursor one by one, 3.) Using sp_MSforeachtable to truncate all the tables in the database other than our three staging tables and 4.) execution of the create FK and check FK scripts at the end of your ETL SSIS package. Run the script creation portion in an Execute SQL task in SSIS. Run the "execute Drop FK Scripts" portion in a second Execute SQL task. Put the truncation script in a third Execute SQL task, then perform whatever other ETL processes you need to do prior to attaching the CREATE and CHECK scripts in a final Execute SQL task (or two if desired) at the end of your control flow. 
Storage of the scripts in real tables has proven invaluable when the re-application of the foreign keys fails as you can select * from sync_CreateFK, copy/paste into your query window, run them one at a time, and fix the data issues once you find ones that failed/are still failing to re-apply. Do not re-run the script again if it fails without making sure that you re-apply all of the foreign keys/checks prior to doing so, or you will most likely lose some creation and check fk scripting as our staging tables are dropped and recreated prior to the creation of the scripts to execute. ---------------------------------------------------------------------------- 1) /* Author: Denmach DateCreated: 2014-04-23 Purpose: Generates SQL statements to DROP, ADD, and CHECK existing constraints for a database. Stores scripts in tables on target database for execution. Executes those stored scripts via independent cursors. DateModified: ModifiedBy Comments: This will eliminate deletes and the T-log ballooning associated with it. 
*/ DECLARE @schema_name SYSNAME; DECLARE @table_name SYSNAME; DECLARE @constraint_name SYSNAME; DECLARE @constraint_object_id INT; DECLARE @referenced_object_name SYSNAME; DECLARE @is_disabled BIT; DECLARE @is_not_for_replication BIT; DECLARE @is_not_trusted BIT; DECLARE @delete_referential_action TINYINT; DECLARE @update_referential_action TINYINT; DECLARE @tsql NVARCHAR(4000); DECLARE @tsql2 NVARCHAR(4000); DECLARE @fkCol SYSNAME; DECLARE @pkCol SYSNAME; DECLARE @col1 BIT; DECLARE @action CHAR(6); DECLARE @referenced_schema_name SYSNAME; --------------------------------Generate scripts to drop all foreign keys in a database -------------------------------- IF OBJECT_ID('dbo.sync_dropFK') IS NOT NULL DROP TABLE sync_dropFK CREATE TABLE sync_dropFK ( ID INT IDENTITY (1,1) NOT NULL , Script NVARCHAR(4000) ) DECLARE FKcursor CURSOR FOR SELECT OBJECT_SCHEMA_NAME(parent_object_id) , OBJECT_NAME(parent_object_id) , name FROM sys.foreign_keys WITH (NOLOCK) ORDER BY 1,2; OPEN FKcursor; FETCH NEXT FROM FKcursor INTO @schema_name , @table_name , @constraint_name WHILE @@FETCH_STATUS = 0 BEGIN SET @tsql = 'ALTER TABLE ' + QUOTENAME(@schema_name) + '.' 
+ QUOTENAME(@table_name) + ' DROP CONSTRAINT ' + QUOTENAME(@constraint_name) + ';'; --PRINT @tsql; INSERT sync_dropFK ( Script ) VALUES ( @tsql ) FETCH NEXT FROM FKcursor INTO @schema_name , @table_name , @constraint_name ; END; CLOSE FKcursor; DEALLOCATE FKcursor; ---------------Generate scripts to create all existing foreign keys in a database -------------------------------- ---------------------------------------------------------------------------------------------------------- IF OBJECT_ID('dbo.sync_createFK') IS NOT NULL DROP TABLE sync_createFK CREATE TABLE sync_createFK ( ID INT IDENTITY (1,1) NOT NULL , Script NVARCHAR(4000) ) IF OBJECT_ID('dbo.sync_createCHECK') IS NOT NULL DROP TABLE sync_createCHECK CREATE TABLE sync_createCHECK ( ID INT IDENTITY (1,1) NOT NULL , Script NVARCHAR(4000) ) DECLARE FKcursor CURSOR FOR SELECT OBJECT_SCHEMA_NAME(parent_object_id) , OBJECT_NAME(parent_object_id) , name , OBJECT_NAME(referenced_object_id) , OBJECT_ID , is_disabled , is_not_for_replication , is_not_trusted , delete_referential_action , update_referential_action , OBJECT_SCHEMA_NAME(referenced_object_id) FROM sys.foreign_keys WITH (NOLOCK) ORDER BY 1,2; OPEN FKcursor; FETCH NEXT FROM FKcursor INTO @schema_name , @table_name , @constraint_name , @referenced_object_name , @constraint_object_id , @is_disabled , @is_not_for_replication , @is_not_trusted , @delete_referential_action , @update_referential_action , @referenced_schema_name; WHILE @@FETCH_STATUS = 0 BEGIN BEGIN SET @tsql = 'ALTER TABLE ' + QUOTENAME(@schema_name) + '.' 
+ QUOTENAME(@table_name) + CASE @is_not_trusted WHEN 0 THEN ' WITH CHECK ' ELSE ' WITH NOCHECK ' END + ' ADD CONSTRAINT ' + QUOTENAME(@constraint_name) + ' FOREIGN KEY ('; SET @tsql2 = ''; DECLARE ColumnCursor CURSOR FOR SELECT COL_NAME(fk.parent_object_id , fkc.parent_column_id) , COL_NAME(fk.referenced_object_id , fkc.referenced_column_id) FROM sys.foreign_keys fk WITH (NOLOCK) INNER JOIN sys.foreign_key_columns fkc WITH (NOLOCK) ON fk.[object_id] = fkc.constraint_object_id WHERE fkc.constraint_object_id = @constraint_object_id ORDER BY fkc.constraint_column_id; OPEN ColumnCursor; SET @col1 = 1; FETCH NEXT FROM ColumnCursor INTO @fkCol, @pkCol; WHILE @@FETCH_STATUS = 0 BEGIN IF (@col1 = 1) SET @col1 = 0; ELSE BEGIN SET @tsql = @tsql + ','; SET @tsql2 = @tsql2 + ','; END; SET @tsql = @tsql + QUOTENAME(@fkCol); SET @tsql2 = @tsql2 + QUOTENAME(@pkCol); --PRINT '@tsql = ' + @tsql --PRINT '@tsql2 = ' + @tsql2 FETCH NEXT FROM ColumnCursor INTO @fkCol, @pkCol; --PRINT 'FK Column ' + @fkCol --PRINT 'PK Column ' + @pkCol END; CLOSE ColumnCursor; DEALLOCATE ColumnCursor; SET @tsql = @tsql + ' ) REFERENCES ' + QUOTENAME(@referenced_schema_name) + '.' + QUOTENAME(@referenced_object_name) + ' (' + @tsql2 + ')'; SET @tsql = @tsql + ' ON UPDATE ' + CASE @update_referential_action WHEN 0 THEN 'NO ACTION ' WHEN 1 THEN 'CASCADE ' WHEN 2 THEN 'SET NULL ' ELSE 'SET DEFAULT ' END + ' ON DELETE ' + CASE @delete_referential_action WHEN 0 THEN 'NO ACTION ' WHEN 1 THEN 'CASCADE ' WHEN 2 THEN 'SET NULL ' ELSE 'SET DEFAULT ' END + CASE @is_not_for_replication WHEN 1 THEN ' NOT FOR REPLICATION ' ELSE '' END + ';'; END; -- PRINT @tsql INSERT sync_createFK ( Script ) VALUES ( @tsql ) -------------------Generate CHECK CONSTRAINT scripts for a database ------------------------------ ---------------------------------------------------------------------------------------------------------- BEGIN SET @tsql = 'ALTER TABLE ' + QUOTENAME(@schema_name) + '.' 
+ QUOTENAME(@table_name) + CASE @is_disabled WHEN 0 THEN ' CHECK ' ELSE ' NOCHECK ' END + 'CONSTRAINT ' + QUOTENAME(@constraint_name) + ';'; --PRINT @tsql; INSERT sync_createCHECK ( Script ) VALUES ( @tsql ) END; FETCH NEXT FROM FKcursor INTO @schema_name , @table_name , @constraint_name , @referenced_object_name , @constraint_object_id , @is_disabled , @is_not_for_replication , @is_not_trusted , @delete_referential_action , @update_referential_action , @referenced_schema_name; END; CLOSE FKcursor; DEALLOCATE FKcursor; --SELECT * FROM sync_DropFK --SELECT * FROM sync_CreateFK --SELECT * FROM sync_CreateCHECK --------------------------------------------------------------------------- 2.) ----------------------------------------------------------------------------------------------------------------- ----------------------------execute Drop FK Scripts -------------------------------------------------- DECLARE @scriptD NVARCHAR(4000) DECLARE DropFKCursor CURSOR FOR SELECT Script FROM sync_dropFK WITH (NOLOCK) OPEN DropFKCursor FETCH NEXT FROM DropFKCursor INTO @scriptD WHILE @@FETCH_STATUS = 0 BEGIN --PRINT @scriptD EXEC (@scriptD) FETCH NEXT FROM DropFKCursor INTO @scriptD END CLOSE DropFKCursor DEALLOCATE DropFKCursor -------------------------------------------------------------------------------- 3.) ------------------------------------------------------------------------------------------------------------------ ----------------------------Truncate all tables in the database other than our staging tables -------------------- ------------------------------------------------------------------------------------------------------------------ EXEC sp_MSforeachtable 'IF OBJECT_ID(''?'') NOT IN ( ISNULL(OBJECT_ID(''dbo.sync_createCHECK''),0), ISNULL(OBJECT_ID(''dbo.sync_createFK''),0), ISNULL(OBJECT_ID(''dbo.sync_dropFK''),0) ) BEGIN TRY TRUNCATE TABLE ? END TRY BEGIN CATCH PRINT ''Truncation failed on''+ ? 
+'' END CATCH;' GO ------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------- ----------------------------execute Create FK Scripts and CHECK CONSTRAINT Scripts--------------- ----------------------------tack me at the end of the ETL in a SQL task------------------------- ------------------------------------------------------------------------------------------------- DECLARE @scriptC NVARCHAR(4000) DECLARE CreateFKCursor CURSOR FOR SELECT Script FROM sync_createFK WITH (NOLOCK) OPEN CreateFKCursor FETCH NEXT FROM CreateFKCursor INTO @scriptC WHILE @@FETCH_STATUS = 0 BEGIN --PRINT @scriptC EXEC (@scriptC) FETCH NEXT FROM CreateFKCursor INTO @scriptC END CLOSE CreateFKCursor DEALLOCATE CreateFKCursor ------------------------------------------------------------------------------------------------- DECLARE @scriptCh NVARCHAR(4000) DECLARE CreateCHECKCursor CURSOR FOR SELECT Script FROM sync_createCHECK WITH (NOLOCK) OPEN CreateCHECKCursor FETCH NEXT FROM CreateCHECKCursor INTO @scriptCh WHILE @@FETCH_STATUS = 0 BEGIN --PRINT @scriptCh EXEC (@scriptCh) FETCH NEXT FROM CreateCHECKCursor INTO @scriptCh END CLOSE CreateCHECKCursor DEALLOCATE CreateCHECKCursor A: Find the constraint: SELECT * FROM sys.foreign_keys WHERE referenced_object_id = object_id('TABLE_NAME') Execute the SQL generated by this query: SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name FROM sys.foreign_keys WHERE referenced_object_id = object_id('TABLE_NAME') Safe way. Note: Added solution for dropping the constraint so that the table can be dropped or modified without any constraint error. 
A: To disable the constraint you have to ALTER the table using NOCHECK:

ALTER TABLE [TABLE_NAME] NOCHECK CONSTRAINT [ALL|CONSTRAINT_NAME]

To enable it, you have to use a double CHECK:

ALTER TABLE [TABLE_NAME] WITH CHECK CHECK CONSTRAINT [ALL|CONSTRAINT_NAME]

* Pay attention to the double CHECK CHECK when enabling.
* ALL means all constraints in the table.

Once completed, if you need to check the status, use this script to list the constraint status. It will be very helpful:

SELECT (CASE WHEN OBJECTPROPERTY(CONSTID, 'CNSTISDISABLED') = 0
THEN 'ENABLED' ELSE 'DISABLED' END) AS STATUS,
OBJECT_NAME(CONSTID) AS CONSTRAINT_NAME,
OBJECT_NAME(FKEYID) AS TABLE_NAME,
COL_NAME(FKEYID, FKEY) AS COLUMN_NAME,
OBJECT_NAME(RKEYID) AS REFERENCED_TABLE_NAME,
COL_NAME(RKEYID, RKEY) AS REFERENCED_COLUMN_NAME
FROM SYSFOREIGNKEYS
ORDER BY TABLE_NAME, CONSTRAINT_NAME, REFERENCED_TABLE_NAME, KEYNO

A: (Copied from http://www.sqljunkies.com/WebLog/roman/archive/2005/01/30/7037.aspx, which is now archived in the Wayback Machine) Foreign key constraints and check constraints are very useful for enforcing data integrity and business rules. There are certain scenarios, though, where it is useful to temporarily turn them off because their behavior is either not needed or could do more harm than good. I sometimes disable constraint checking on tables during data loads from external sources, or when I need to script a table drop/recreate with reloading the data back into the table. I usually do it in scenarios where I don't want a time-consuming process to fail because one or a few of many million rows have bad data in them. But I always turn the constraints back on once the process is finished, and in some cases I also run data integrity checks on the imported data. If you disable a foreign key constraint, you will be able to insert a value that does not exist in the parent table. If you disable a check constraint, you will be able to put a value in a column as if the check constraint was not there.
Here are a few examples of disabling and enabling table constraints:

-- Disable all table constraints
ALTER TABLE MyTable NOCHECK CONSTRAINT ALL

-- Enable all table constraints
ALTER TABLE MyTable WITH CHECK CHECK CONSTRAINT ALL

-- Disable single constraint
ALTER TABLE MyTable NOCHECK CONSTRAINT MyConstraint

-- Enable single constraint
ALTER TABLE MyTable WITH CHECK CHECK CONSTRAINT MyConstraint

A: Answer marked '905' looks good but does not work. The following worked for me. Primary key, unique key, and default constraints CANNOT be disabled. In fact, if 'sp_helpconstraint '' shows 'n/a' in status_enabled, that means the constraint cannot be enabled/disabled.

-- To generate script to DISABLE
select 'ALTER TABLE ' + object_name(id) + ' NOCHECK CONSTRAINT [' + object_name(constid) + ']'
from sys.sysconstraints
where status & 0x4813 = 0x813
order by object_name(id)

-- To generate script to ENABLE
select 'ALTER TABLE ' + object_name(id) + ' CHECK CONSTRAINT [' + object_name(constid) + ']'
from sys.sysconstraints
where status & 0x4813 = 0x813
order by object_name(id)

A: You should actually be able to disable foreign key constraints the same way you temporarily disable other constraints:

Alter table MyTable nocheck constraint FK_ForeignKeyConstraintName

Just make sure you're disabling the constraint on the first table listed in the constraint name. For example, if my foreign key constraint was FK_LocationsEmployeesLocationIdEmployeeId, I would want to use the following:

Alter table Locations nocheck constraint FK_LocationsEmployeesLocationIdEmployeeId

even though violating this constraint will produce an error that doesn't necessarily state that table as the source of the conflict.

A: Your best option is to DROP and CREATE foreign key constraints. I didn't find examples in this post that would work for me "as-is": one would not work if foreign keys reference different schemas, the other would not work if a foreign key references multiple columns.
This script considers both, multiple schemas and multiple columns per foreign key. Here is the script that generates "ADD CONSTRAINT" statements, for multiple columns it will separate them by comma (be sure to save this output before executing DROP statements): PRINT N'-- CREATE FOREIGN KEY CONSTRAINTS --'; SET NOCOUNT ON; SELECT ' PRINT N''Creating '+ const.const_name +'...'' GO ALTER TABLE ' + const.parent_obj + ' ADD CONSTRAINT ' + const.const_name + ' FOREIGN KEY ( ' + const.parent_col_csv + ' ) REFERENCES ' + const.ref_obj + '(' + const.ref_col_csv + ') GO' FROM ( SELECT QUOTENAME(fk.NAME) AS [const_name] ,QUOTENAME(schParent.NAME) + '.' + QUOTENAME(OBJECT_name(fkc.parent_object_id)) AS [parent_obj] ,STUFF(( SELECT ',' + QUOTENAME(COL_NAME(fcP.parent_object_id, fcp.parent_column_id)) FROM sys.foreign_key_columns AS fcP WHERE fcp.constraint_object_id = fk.object_id FOR XML path('') ), 1, 1, '') AS [parent_col_csv] ,QUOTENAME(schRef.NAME) + '.' + QUOTENAME(OBJECT_NAME(fkc.referenced_object_id)) AS [ref_obj] ,STUFF(( SELECT ',' + QUOTENAME(COL_NAME(fcR.referenced_object_id, fcR.referenced_column_id)) FROM sys.foreign_key_columns AS fcR WHERE fcR.constraint_object_id = fk.object_id FOR XML path('') ), 1, 1, '') AS [ref_col_csv] FROM sys.foreign_key_columns AS fkc INNER JOIN sys.foreign_keys AS fk ON fk.object_id = fkc.constraint_object_id INNER JOIN sys.objects AS oParent ON oParent.object_id = fkc.parent_object_id INNER JOIN sys.schemas AS schParent ON schParent.schema_id = oParent.schema_id INNER JOIN sys.objects AS oRef ON oRef.object_id = fkc.referenced_object_id INNER JOIN sys.schemas AS schRef ON schRef.schema_id = oRef.schema_id GROUP BY fkc.parent_object_id ,fkc.referenced_object_id ,fk.NAME ,fk.object_id ,schParent.NAME ,schRef.NAME ) AS const ORDER BY const.const_name Here is the script that generates "DROP CONSTRAINT" statements: PRINT N'-- DROP FOREIGN KEY CONSTRAINTS --'; SET NOCOUNT ON; SELECT ' PRINT N''Dropping ' + fk.NAME + '...'' GO ALTER TABLE 
[' + sch.NAME + '].[' + OBJECT_NAME(fk.parent_object_id) + ']' + ' DROP CONSTRAINT ' + '[' + fk.NAME + '] GO' FROM sys.foreign_keys AS fk INNER JOIN sys.schemas AS sch ON sch.schema_id = fk.schema_id ORDER BY fk.NAME A: Right click the table design and go to Relationships and choose the foreign key on the left-side pane and in the right-side pane, set Enforce foreign key constraint to 'Yes' (to enable foreign key constraints) or 'No' (to disable it). A: The SQL-92 standard allows for a constaint to be declared as DEFERRABLE so that it can be deferred (implicitly or explicitly) within the scope of a transaction. Sadly, SQL Server is still missing this SQL-92 functionality. For me, changing a constraint to NOCHECK is akin to changing the database structure on the fly -- dropping constraints certainly is -- and something to be avoided (e.g. users require increased privileges). A: --Drop and Recreate Foreign Key Constraints SET NOCOUNT ON DECLARE @table TABLE( RowId INT PRIMARY KEY IDENTITY(1, 1), ForeignKeyConstraintName NVARCHAR(200), ForeignKeyConstraintTableSchema NVARCHAR(200), ForeignKeyConstraintTableName NVARCHAR(200), ForeignKeyConstraintColumnName NVARCHAR(200), PrimaryKeyConstraintName NVARCHAR(200), PrimaryKeyConstraintTableSchema NVARCHAR(200), PrimaryKeyConstraintTableName NVARCHAR(200), PrimaryKeyConstraintColumnName NVARCHAR(200) ) INSERT INTO @table(ForeignKeyConstraintName, ForeignKeyConstraintTableSchema, ForeignKeyConstraintTableName, ForeignKeyConstraintColumnName) SELECT U.CONSTRAINT_NAME, U.TABLE_SCHEMA, U.TABLE_NAME, U.COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE U INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C ON U.CONSTRAINT_NAME = C.CONSTRAINT_NAME WHERE C.CONSTRAINT_TYPE = 'FOREIGN KEY' UPDATE @table SET PrimaryKeyConstraintName = UNIQUE_CONSTRAINT_NAME FROM @table T INNER JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS R ON T.ForeignKeyConstraintName = R.CONSTRAINT_NAME UPDATE @table SET PrimaryKeyConstraintTableSchema = 
TABLE_SCHEMA, PrimaryKeyConstraintTableName = TABLE_NAME
FROM @table T
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C ON T.PrimaryKeyConstraintName = C.CONSTRAINT_NAME

UPDATE @table SET
PrimaryKeyConstraintColumnName = COLUMN_NAME
FROM @table T
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE U ON T.PrimaryKeyConstraintName = U.CONSTRAINT_NAME

--SELECT * FROM @table

--DROP CONSTRAINT:
SELECT '
ALTER TABLE [' + ForeignKeyConstraintTableSchema + '].[' + ForeignKeyConstraintTableName + ']
DROP CONSTRAINT ' + ForeignKeyConstraintName + '
GO'
FROM @table

--ADD CONSTRAINT:
SELECT '
ALTER TABLE [' + ForeignKeyConstraintTableSchema + '].[' + ForeignKeyConstraintTableName + ']
ADD CONSTRAINT ' + ForeignKeyConstraintName + ' FOREIGN KEY(' + ForeignKeyConstraintColumnName + ')
REFERENCES [' + PrimaryKeyConstraintTableSchema + '].[' + PrimaryKeyConstraintTableName + '](' + PrimaryKeyConstraintColumnName + ')
GO'
FROM @table
GO

I do agree with you, Hamlin. When you are transferring data using SSIS or when you want to replicate data, it seems quite necessary to temporarily disable or drop foreign key constraints and then re-enable or recreate them. In these cases, referential integrity is not an issue, because it is already maintained in the source database. Therefore, you can rest assured regarding this matter.

A: If you want to disable all constraints in the database just run this code:

-- disable all constraints
EXEC sp_MSforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"

To switch them back on, run (the print is optional, of course; it just lists the tables):

-- enable all constraints
exec sp_MSforeachtable @command1="print '?'", @command2="ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"

I find it useful when populating data from one database to another. It is a much better approach than dropping constraints. As you mentioned, it comes in handy when dropping all the data in the database and repopulating it (say in a test environment).
If you are deleting all the data, you may find this solution to be helpful. Also, sometimes it is handy to disable all triggers as well; you can see the complete solution here.

A: WITH CHECK CHECK is almost certainly required!

This point was raised in some of the answers and comments, but I feel that it is important enough to call it out again. Re-enabling a constraint using the following command (no WITH CHECK) will have some serious drawbacks.

ALTER TABLE MyTable CHECK CONSTRAINT MyConstraint;

WITH CHECK | WITH NOCHECK

Specifies whether the data in the table is or is not validated against a newly added or re-enabled FOREIGN KEY or CHECK constraint. If not specified, WITH CHECK is assumed for new constraints, and WITH NOCHECK is assumed for re-enabled constraints. If you do not want to verify new CHECK or FOREIGN KEY constraints against existing data, use WITH NOCHECK. We do not recommend doing this, except in rare cases. The new constraint will be evaluated in all later data updates. Any constraint violations that are suppressed by WITH NOCHECK when the constraint is added may cause future updates to fail if they update rows with data that does not comply with the constraint. The query optimizer does not consider constraints that are defined WITH NOCHECK. Such constraints are ignored until they are re-enabled by using ALTER TABLE table WITH CHECK CHECK CONSTRAINT ALL.

Note: WITH NOCHECK is the default for re-enabling constraints. I have to wonder why...

* No existing data in the table will be evaluated during the execution of this command; successful completion is no guarantee that the data in the table is valid according to the constraint.
* During the next update of the invalid records, the constraint will be evaluated and will fail, resulting in errors that may be unrelated to the actual update that is made.
* Application logic that relies on the constraint to ensure that data is valid may fail.
*The query optimizer will not make use of any constraint that is enabled in this way. The sys.foreign_keys system view provides some visibility into the issue. Note that it has both an is_disabled and an is_not_trusted column. is_disabled indicates whether future data manipulation operations will be validated against the constraint. is_not_trusted indicates whether all of the data currently in the table has been validated against the constraint. ALTER TABLE MyTable WITH CHECK CHECK CONSTRAINT MyConstraint; Are your constraints to be trusted? Find out... SELECT * FROM sys.foreign_keys WHERE is_not_trusted = 1; A: SET NOCOUNT ON DECLARE @table TABLE( RowId INT PRIMARY KEY IDENTITY(1, 1), ForeignKeyConstraintName NVARCHAR(200), ForeignKeyConstraintTableSchema NVARCHAR(200), ForeignKeyConstraintTableName NVARCHAR(200), ForeignKeyConstraintColumnName NVARCHAR(200), PrimaryKeyConstraintName NVARCHAR(200), PrimaryKeyConstraintTableSchema NVARCHAR(200), PrimaryKeyConstraintTableName NVARCHAR(200), PrimaryKeyConstraintColumnName NVARCHAR(200), UpdateRule NVARCHAR(100), DeleteRule NVARCHAR(100) ) INSERT INTO @table(ForeignKeyConstraintName, ForeignKeyConstraintTableSchema, ForeignKeyConstraintTableName, ForeignKeyConstraintColumnName) SELECT U.CONSTRAINT_NAME, U.TABLE_SCHEMA, U.TABLE_NAME, U.COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE U INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C ON U.CONSTRAINT_NAME = C.CONSTRAINT_NAME WHERE C.CONSTRAINT_TYPE = 'FOREIGN KEY' UPDATE @table SET T.PrimaryKeyConstraintName = R.UNIQUE_CONSTRAINT_NAME, T.UpdateRule = R.UPDATE_RULE, T.DeleteRule = R.DELETE_RULE FROM @table T INNER JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS R ON T.ForeignKeyConstraintName = R.CONSTRAINT_NAME UPDATE @table SET PrimaryKeyConstraintTableSchema = TABLE_SCHEMA, PrimaryKeyConstraintTableName = TABLE_NAME FROM @table T INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS C ON T.PrimaryKeyConstraintName = C.CONSTRAINT_NAME UPDATE @table SET 
PrimaryKeyConstraintColumnName = COLUMN_NAME
FROM @table T
INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE U ON T.PrimaryKeyConstraintName = U.CONSTRAINT_NAME

--SELECT * FROM @table

SELECT '
BEGIN TRANSACTION
BEGIN TRY'

--DROP CONSTRAINT:
SELECT '
ALTER TABLE [' + ForeignKeyConstraintTableSchema + '].[' + ForeignKeyConstraintTableName + ']
DROP CONSTRAINT ' + ForeignKeyConstraintName + '
'
FROM @table

SELECT '
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
RAISERROR(''Operation failed.'', 16, 1)
END CATCH
IF(@@TRANCOUNT != 0)
BEGIN
COMMIT TRANSACTION
RAISERROR(''Operation completed successfully.'', 10, 1)
END
'

--ADD CONSTRAINT:
SELECT '
BEGIN TRANSACTION
BEGIN TRY'

SELECT '
ALTER TABLE [' + ForeignKeyConstraintTableSchema + '].[' + ForeignKeyConstraintTableName + ']
ADD CONSTRAINT ' + ForeignKeyConstraintName + ' FOREIGN KEY(' + ForeignKeyConstraintColumnName + ')
REFERENCES [' + PrimaryKeyConstraintTableSchema + '].[' + PrimaryKeyConstraintTableName + '](' + PrimaryKeyConstraintColumnName + ')
ON UPDATE ' + UpdateRule + ' ON DELETE ' + DeleteRule + '
'
FROM @table

SELECT '
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
RAISERROR(''Operation failed.'', 16, 1)
END CATCH
IF(@@TRANCOUNT != 0)
BEGIN
COMMIT TRANSACTION
RAISERROR(''Operation completed successfully.'', 10, 1)
END'
GO

A: I have a more useful version if you are interested. I lifted a bit of code from a website whose link is no longer active. I modified it to accept an array of tables into the stored procedure, and it populates the drop, truncate, and add statements before executing all of them. This gives you control to decide which tables need truncating.
/****** Object: UserDefinedTableType [util].[typ_objects_for_managing] Script Date: 03/04/2016 16:42:55 ******/ CREATE TYPE [util].[typ_objects_for_managing] AS TABLE( [schema] [sysname] NOT NULL, [object] [sysname] NOT NULL ) GO create procedure [util].[truncate_table_with_constraints] @objects_for_managing util.typ_objects_for_managing readonly --@schema sysname --,@table sysname as --select -- @table = 'TABLE', -- @schema = 'SCHEMA' declare @exec_table as table (ordinal int identity (1,1), statement nvarchar(4000), primary key (ordinal)); --print '/*Drop Foreign Key Statements for ['+@schema+'].['+@table+']*/' insert into @exec_table (statement) select 'ALTER TABLE ['+SCHEMA_NAME(o.schema_id)+'].['+ o.name+'] DROP CONSTRAINT ['+fk.name+']' from sys.foreign_keys fk inner join sys.objects o on fk.parent_object_id = o.object_id where exists ( select * from @objects_for_managing chk where chk.[schema] = SCHEMA_NAME(o.schema_id) and chk.[object] = o.name ) ; --o.name = @table and --SCHEMA_NAME(o.schema_id) = @schema insert into @exec_table (statement) select 'TRUNCATE TABLE ' + src.[schema] + '.' 
+ src.[object] from @objects_for_managing src ; --print '/*Create Foreign Key Statements for ['+@schema+'].['+@table+']*/' insert into @exec_table (statement) select 'ALTER TABLE ['+SCHEMA_NAME(o.schema_id)+'].['+o.name+'] ADD CONSTRAINT ['+fk.name+'] FOREIGN KEY (['+c.name+']) REFERENCES ['+SCHEMA_NAME(refob.schema_id)+'].['+refob.name+'](['+refcol.name+'])' from sys.foreign_key_columns fkc inner join sys.foreign_keys fk on fkc.constraint_object_id = fk.object_id inner join sys.objects o on fk.parent_object_id = o.object_id inner join sys.columns c on fkc.parent_column_id = c.column_id and o.object_id = c.object_id inner join sys.objects refob on fkc.referenced_object_id = refob.object_id inner join sys.columns refcol on fkc.referenced_column_id = refcol.column_id and fkc.referenced_object_id = refcol.object_id where exists ( select * from @objects_for_managing chk where chk.[schema] = SCHEMA_NAME(o.schema_id) and chk.[object] = o.name ) ; --o.name = @table and --SCHEMA_NAME(o.schema_id) = @schema declare @looper int , @total_records int, @sql_exec nvarchar(4000) select @looper = 1, @total_records = count(*) from @exec_table; while @looper <= @total_records begin select @sql_exec = (select statement from @exec_table where ordinal =@looper) exec sp_executesql @sql_exec print @sql_exec set @looper = @looper + 1 end A: One script to rule them all: this combines truncate and delete commands with sp_MSforeachtable so that you can avoid dropping and recreating constraints - just specify the tables that need to be deleted rather than truncated and for my purposes I have included an extra schema filter for good measure (tested in 2008r2) declare @schema nvarchar(max) = 'and Schema_Id=Schema_id(''Value'')' declare @deletiontables nvarchar(max) = '(''TableA'',''TableB'')' declare @truncateclause nvarchar(max) = @schema + ' and o.Name not in ' + + @deletiontables; declare @deleteclause nvarchar(max) = @schema + ' and o.Name in ' + @deletiontables; exec sp_MSforeachtable 
'alter table ? nocheck constraint all', @whereand=@schema
exec sp_MSforeachtable 'truncate table ?', @whereand=@truncateclause
exec sp_MSforeachtable 'delete from ?', @whereand=@deleteclause
exec sp_MSforeachtable 'alter table ? with check check constraint all', @whereand=@schema

A: You can temporarily disable constraints on your tables, do work, then rebuild them. Here is an easy way to do it... Disable all indexes, including the primary keys, which will disable all foreign keys, then re-enable just the primary keys so you can work with them...

DECLARE @sql AS NVARCHAR(max)=''
select @sql = @sql + 'ALTER INDEX ALL ON [' + t.[name] + '] DISABLE;'+CHAR(13)
from sys.tables t where type='u'
select @sql = @sql + 'ALTER INDEX ' + i.[name] + ' ON [' + t.[name] + '] REBUILD;'+CHAR(13)
from sys.key_constraints i
join sys.tables t on i.parent_object_id=t.object_id
where i.type='PK'
exec dbo.sp_executesql @sql;
go

[Do something, like loading data]

Then re-enable and rebuild the indexes...

DECLARE @sql AS NVARCHAR(max)=''
select @sql = @sql + 'ALTER INDEX ALL ON [' + t.[name] + '] REBUILD;'+CHAR(13)
from sys.tables t where type='u'
exec dbo.sp_executesql @sql;
go

A: You can easily turn off constraints using:

ALTER TABLE TableName NOCHECK CONSTRAINT ALL

After you finish the transaction, do not forget to turn them on again using:

ALTER TABLE TableName CHECK CONSTRAINT ALL
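Putting the advice from this thread together, the whole disable/load/re-enable cycle looks roughly like this. This is only a sketch (dbo.Invoice is a placeholder table name), and note the WITH CHECK CHECK form plus the trust check at the end, per the "WITH CHECK CHECK is almost certainly required!" answer above:

```sql
-- 1. Disable FK and CHECK constraints on the table being loaded.
ALTER TABLE dbo.Invoice NOCHECK CONSTRAINT ALL;

-- 2. ... bulk load / fix-up work goes here ...

-- 3. Re-enable WITH CHECK so existing rows are revalidated and the
--    constraints come back as trusted (the first CHECK revalidates,
--    the second re-enables). This fails if any loaded row violates
--    a constraint, which is exactly what you want.
ALTER TABLE dbo.Invoice WITH CHECK CHECK CONSTRAINT ALL;

-- 4. Verify nothing was left disabled or untrusted database-wide;
--    this should return no rows.
SELECT name, OBJECT_NAME(parent_object_id) AS table_name
FROM sys.foreign_keys
WHERE is_not_trusted = 1 OR is_disabled = 1;
```

If step 3 fails, query the child table with a NOT EXISTS against the parent to find the orphaned rows before retrying.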
{ "language": "en", "url": "https://stackoverflow.com/questions/159038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "910" }
Q: Wasabi-like web programming language Are there any open source or commercial web programming languages that function much like Fog Creek's Wasabi? As in, you write your web app in this parent language and it then compiles down to PHP for Linux hosts and ASP.NET for Windows hosts.

A: Haxe is the closest I've seen, but it only compiles to PHP (and mod_neko), not to ASP.

A: Genexus is a commercial development tool that does that. It can generate several other languages. It's oriented toward database apps: it generates database schemas and queries from its internal language. That said, I have worked with it, and I don't like it. It's quite buggy and its programming language is very archaic.

A: Before you bother, consider whether it's really worth it. Supporting one platform with multiple configurations is bad enough; do you really need to support both ASP.NET and PHP? If you're writing an in-house application, then you probably want to stick to as few technologies as possible. If you're writing software to sell, then is it really a problem if your product requires a particular platform?

A: As far as I know, Fog Creek had to develop Wasabi because there wasn't such a tool. There are a few toolkits trying to be portable, but none that compiled to ASP or PHP that I know of (besides Wasabi, that is).

A: People act like Joel went mad with Wasabi, but I think it makes perfect sense if you put all the pieces together.

* FogBugz was originally written in VB.
* Joel hates to throw out working code to start over.
* Joel was faced with a server market split between MS and Apache w/PHP servers.

Given the circumstances, it's a rational decision to say, "OK, then, we'll just write a VB to PHP translator." And once you've taken that step, to say, "Well, since we essentially have a compiler here, why not extend it with the features we want that Microsoft has never added to VB?"
Thanks to Wasabi, code that would have to be written twice (or more, given some duplicated server/JavaScript code) is written only once. Multitarget development is pretty common. It's the reality when you can't dictate your target environment. A: Pick a real mature application server platform like Java. It runs everywhere...
{ "language": "en", "url": "https://stackoverflow.com/questions/159039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the correct term for the documentation that we put just above a method declaration? I'm writing a whitepaper and realized that I am not sure what the official term is for the kind of internal documentation that we put as a comment block before a declaration or definition. The same thing that eventually becomes JavaDoc member documentation. It's not simply internal documentation, and I'm not sure "header documentation" would be a good term. Note that I'm looking for a general term, not one specific to a particular language (e.g., Java/Perl).

A: This is called a method specification or procedure specification. That is, it specifies the behaviour of the procedure rather than the implementation details. Some textbooks refer to it as the contract of the method, but that may be a bit ambiguous.

A: At my organization we call it a method or function doc-comment. Function-level documentation is probably the more widely used term.

A: I always call it a method (or function) comment, to distinguish it from class or file comments.

A: It's often professionally referred to as a "requirements clause", or an "insurance clause".

A: I call it code comments, simple as that.

A: I usually refer to it as "inline documentation." To me that's what it's about: the fact that your documentation is in your source code, so there's more of a chance the docs will stay in sync with the code. (This is no guarantee, of course, but it does encourage programmers to eat their vegetables. It means the developer can change the documentation at the same time and in the same place the behavior changes, rather than after the fact and in another place.)
{ "language": "en", "url": "https://stackoverflow.com/questions/159059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Windows services with windows forms in the same process I have a c# application that runs as a windows service controlling socket connections and other things. Also, there is another windows forms application to control and configure this service (systray with start, stop, show form with configuration parameters). I'm using .net remoting to do the IPC and that was fine, but now I want to show some real traffic and other reports and remoting will not meet my performance requirements. So I want to combine both applications in one. Here is the problem: When I started the form from the windows service, nothing happened. Googling around I've found that I have to right click the service, go to Log on and check the "Allow service to interact with desktop" option. Since I don't want to ask my users to do that, I got some code googling again to set this option in the user's regedit during installation time. The problem is that even setting this option, it doesn't work. I have to open the Log On options of the service (it is checked), uncheck and check again. So, how to solve that? How is the best way to have a windows service with a systray control in the same process, available to any user logging in? UPDATE: Thanks for the comments so far, guys. I agree it is better to use IPC and I know that it is bad to mix windows services and user interfaces. Even though, I want to know how to do that. A: Two separate processes that communicate using your technology of choice. Services with UI is a bad idea. Don't go down this road - you'll regret it. I've had very good results having service communication through a simple socket connection - document your service protocol well, keep it as simple as possible, and it'll be easier than you think. A: In practice you should not couple your service with the management UI. A: I agree with Greg. Perhaps you could examine a different IPC mechanism. Perhaps use sockets and your own protocol. 
Or, if your service control app can only control the service on the local machine, you can use named pipes (even faster).

A: Here is a way mixing up Services and Forms: http://www.codeproject.com/KB/system/SystemTrayIconInSvc.aspx

A: I figured out how to do this from this article (click on the "Change" link in the Methods table).

string wmiPath = "Win32_Service.Name='" + SERVICE_NAME + "'";
using (ManagementObject service = new ManagementObject(wmiPath))
{
    object[] parameters = new object[11];
    parameters[5] = true; // Enable desktop interaction
    service.InvokeMethod("Change", parameters);
}

A: I have the solution in a few steps; this is the plan:

* We are not going to create a service project with a windows form; instead we are going to create a Visual Studio solution that contains a windows service project, a windows form project and a setup project.
*adding a windows service goes further than this but that also is another topic google it *Creating shortcut for the windows application and adding it to the startup folder is also another topic google or contact me. NOTE Program your form in such a way that the close button doesn't show and the form goes Me.visible = false and double clicking the icon in the system tray is the only way to set me.visible=true.that way anytime the computer starts up, your windows form application is also started and visible is immediately set to false but since it has a notifyicon with an icon image, it will show in the system tray and double clicking it makes the form visible to edit the settings that you are storing for the service, the service also starts automatically since you would have set it in setting up the service in the setup project. my mail is iamjavademon@gmail.com for a better illustration using screen shots And explain in full A: It is very simply - your need to create one thread for perform application events. 
Like this( source code for C++ with CLR, but your can make this in C#): ref class RunWindow{ public: static void MakeWindow(Object^ data) { Application::EnableVisualStyles(); Application::SetCompatibleTextRenderingDefault(false); Application::Run(gcnew TMainForm()); }; }; And create thread in main int main(array<System::String ^> ^args) { bool bService = RunAsService(L"SimpleServiceWithIconInTrayAndWindow"); if (bService) { System::Threading::Thread ^thread = gcnew System::Threading::Thread(gcnew ParameterizedThreadStart(RunWindow::MakeWindow)); thread->Start(); ServiceBase::Run(gcnew simpleWinService()); Application::Exit(); } else { Application::EnableVisualStyles(); Application::SetCompatibleTextRenderingDefault(false); // Create the main window and run it Application::Run(gcnew TMainForm()); } return 0; } A: The main problems with interactive services are: * *Security - other process could send it messages through its message pump, thereby gaining access to a SYSTEM/LOCAL process. *Incompleteness - an interactive service never sees shell messages, hence it can't interact with Notification Area icons. We regularly use TCP and UDP connections to pass info from services to other exes, and, in some cases, MSMQ.
{ "language": "en", "url": "https://stackoverflow.com/questions/159076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Composite primary keys versus unique object ID field I inherited a database built with the idea that composite keys are much more ideal than using a unique object ID field and that when building a database, a single unique ID should never be used as a primary key. Because I was building a Rails front-end for this database, I ran into difficulties getting it to conform to the Rails conventions (though it was possible using custom views and a few additional gems to handle composite keys). The reasoning behind this specific schema design from the person who wrote it had to do with how the database handles ID fields in a non-efficient manner and when it's building indexes, tree sorts are flawed. This explanation lacked any depth and I'm still trying to wrap my head around the concept (I'm familiar with using composite keys, but not 100% of the time). Can anyone offer opinions or add any greater depth to this topic?

A: Most of the commonly used engines (MS SQL Server, Oracle, DB2, MySQL, etc.) would not experience noticeable issues using a surrogate key system. Some may even experience a performance boost from the use of a surrogate, but performance issues are highly platform-specific.

In general terms, the natural key (and by extension, composite key) versus surrogate key debate has a long history with no likely “right answer” in sight. The arguments for natural keys (singular or composite) usually include some of the following:

1) They are already available in the data model. Most entities being modeled already include one or more attributes or combinations of attributes that meet the needs of a key for the purposes of creating relations. Adding an additional attribute to each table incorporates an unnecessary redundancy.

2) They eliminate the need for certain joins.
For example, if you have customers with customer codes, and invoices with invoice numbers (both of which are "natural" keys), and you want to retrieve all the invoice numbers for a specific customer code, you can simply use "SELECT InvoiceNumber FROM Invoice WHERE CustomerCode = 'XYZ123'". In the classic surrogate key approach, the SQL would look something like this: "SELECT Invoice.InvoiceNumber FROM Invoice INNER JOIN Customer ON Invoice.CustomerID = Customer.CustomerID WHERE Customer.CustomerCode = 'XYZ123'". 3) They contribute to a more universally-applicable approach to data modeling. With natural keys, the same design can be used largely unchanged between different SQL engines. Many surrogate key approaches use specific SQL engine techniques for key generation, thus requiring more specialization of the data model to implement on different platforms. Arguments for surrogate keys tend to revolve around issues that are SQL engine specific: 1) They enable easier changes to attributes when business requirements/rules change. This is because they allow the data attributes to be isolated to a single table. This is primarily an issue for SQL engines that do not efficiently implement standard SQL constructs such as DOMAINs. When an attribute is defined by a DOMAIN statement, changes to the attribute can be performed schema-wide using an ALTER DOMAIN statement. Different SQL engines have different performance characteristics for altering a domain, and some SQL engines do not implement DOMAINS at all, so data modelers compensate for these situations by adding surrogate keys to improve the ability to make changes to attributes. 2) They enable easier implementations of concurrency than natural keys. 
In the natural key case, if two users are concurrently working with the same information set, such as a customer row, and one of the users modifies the natural key value, then an update by the second user will fail because the customer code they are updating no longer exists in the database. In the surrogate key case, the update will process successfully because immutable ID values are used to identify the rows in the database, not mutable customer codes. However, it is not always desirable to allow the second update – if the customer code changed it is possible that the second user should not be allowed to proceed with their change because the actual “identity” of the row has changed – the second user may be updating the wrong row. Neither surrogate keys nor natural keys, by themselves, address this issue. Comprehensive concurrency solutions have to be addressed outside of the implementation of the key. 3) They perform better than natural keys. Performance is most directly affected by the SQL engine. The same database schema implemented on the same hardware using different SQL engines will often have dramatically different performance characteristics, due to the SQL engine's data storage and retrieval mechanisms. Some SQL engines closely approximate flat-file systems, where data is actually stored redundantly when the same attribute, such as a Customer Code, appears in multiple places in the database schema. This redundant storage by the SQL engine can cause performance issues when changes need to be made to the data or schema. Other SQL engines provide a better separation between the data model and the storage/retrieval system, allowing for quicker changes of data and schema. 4) Surrogate keys function better with certain data access libraries and GUI frameworks.
Due to the homogeneous nature of most surrogate key designs (example: all relational keys are integers), data access libraries, ORMs, and GUI frameworks can work with the information without needing special knowledge of the data. Natural keys, due to their heterogeneous nature (different data types, size etc.), do not work as well with automated or semi-automated toolkits and libraries. For specialized scenarios, such as embedded SQL databases, designing the database with a specific toolkit in mind may be acceptable. In other scenarios, databases are enterprise information resources, accessed concurrently by multiple platforms, applications, report systems, and devices, and therefore do not function as well when designed with a focus on any particular library or framework. In addition, databases designed to work with specific toolkits become a liability when the next great toolkit is introduced. I tend to fall on the side of natural keys (obviously), but I am not fanatical about it. Due to the environment I work in, where any given database I help design may be used by a variety of applications, I use natural keys for the majority of the data modeling, and rarely introduce surrogates. However, I don’t go out of my way to try to re-implement existing databases that use surrogates. Surrogate-key systems work just fine – no need to change something that is already functioning well. There are some excellent resources discussing the merits of each approach: http://www.google.com/search?q=natural+key+surrogate+key http://www.agiledata.org/essays/keys.html http://www.informationweek.com/news/software/bi/201806814 A: Using 'unique (object) ID' fields simplifies joins, but you should aim to have the other (possibly composite) key still unique -- do NOT relax the not-null constraints and DO maintain the unique constraint. If the DBMS can't handle unique integers effectively, it has big problems. 
However, using both a 'unique (object) ID' and the other key does use more space (for the indexes) than just the other key, and has two indexes to update on each insert operation. So it isn't a freebie -- but as long as you maintain the original key, too, then you'll be OK. If you eliminate the other key, you are breaking the design of your system; all hell will break loose eventually (and you might or might not spot that hell broke loose). A: I basically am a member of the surrogate key team, and even if I appreciate and understand arguments such as the ones presented here by JeremyDWill, I am still looking for the case where a "natural" key is better than a surrogate ... Other posts dealing with this issue usually refer to relational database theory and database performance. Another interesting argument, always forgotten in this case, is related to table normalisation and code productivity: each time I create a table, shall I lose time

*identifying its primary key and its physical characteristics (type, size)?
*remembering these characteristics each time I want to refer to it in my code?
*explaining my PK choice to other developers in the team?

My answer is no to all of these questions:

*I have no time to lose trying to identify "the best Primary Key" when dealing with a list of persons.
*I do not want to remember that the Primary Key of my "computer" table is a 64-character-long string (does Windows accept that many characters for a computer name?).
*I don't want to explain my choice to other developers, where one of them will finally say "Yeah man, but consider that you have to manage computers over different domains? Does this 64-character string allow you to store the domain name + the computer name?".

So I've been working for the last five years with a very basic rule: each table (let's call it 'myTable') has its first field called 'id_MyTable' which is of uniqueIdentifier type.
Even if this table supports a "many-to-many" relation, such as a 'ComputerUser' table, where the combination of 'id_Computer' and 'id_User' forms a very acceptable Primary Key, I prefer to create this 'id_ComputerUser' field as a uniqueIdentifier, just to stick to the rule. The major advantage is that you don't have to care anymore about the use of Primary Key and/or Foreign Key within your code. Once you have the table name, you know the PK name and type. Once you know which links are implemented in your data model, you'll know the name of available foreign keys in the table. I am not sure that my rule is the best one. But it is a very efficient one! A: A practical approach to developing a new architecture is one that utilizes surrogate keys for tables that will contain thousands of multi-column, highly unique records, and composite keys for short descriptive lookup tables. I usually find that the colleges dictate the use of surrogate keys while the real-world programmers prefer composite keys. You really need to apply the right type of primary key to the table - not just one way or the other. A: I've been developing database applications for 15 years and I have yet to come across a case where a non-surrogate key was a better choice than a surrogate key. I'm not saying that such a case does not exist, I'm just saying when you factor in the practical issues of actually developing an application that accesses the database, usually the benefits of a surrogate key start to overwhelm the theoretical purity of non-surrogate keys. A: Using natural keys makes using any automatic ORM as a persistence layer a nightmare. Also, foreign keys on multiple columns tend to overlap one another, and this will give further problems when navigating and updating the relationships in an OO way.
Still, you could turn the natural key into a unique constraint and add an auto-generated id; this doesn't remove the problem with the foreign keys, though - those will have to be changed by hand. Hopefully multiple columns and overlapping constraints will be a minority of all the relationships, so you could concentrate on refactoring where it matters most. Natural PKs have their motivations and usage scenarios and are not a bad thing(tm); they just tend not to get along well with ORMs. My feeling is that, as with any other concept, natural keys and table normalization should be used when sensible and not as blind design constraints. A: I'm going to be short and sweet here: Composite primary keys are not good these days. Add in surrogate arbitrary keys if you can and maintain the current key schemes via unique constraints. ORM is happy, you're happy, original programmer not-so-happy but unless he's your boss then he can just deal with it. A: The primary key should be constant and meaningless; non-surrogate keys usually fail one or both requirements, eventually:

*if the key is not constant, you have a future update issue that can get quite complicated
*if the key is not meaningless, then it is more likely to change, i.e. not be constant; see above

Take a simple, common example: a table of Inventory items. It may be tempting to make the item number (SKU number, barcode, part code, or whatever) the primary key, but then a year later all the item numbers change and you're left with a very messy update-the-whole-database problem... EDIT: there's an additional issue that is more practical than philosophical. In many cases you're going to find a particular row somehow, then later update it or find it again (or both). With composite keys there is more data to keep track of and more constraints in the WHERE clause for the re-find or update (or delete). It is also possible that one of the key segments may have changed in the meantime!
With a surrogate key, there is always only one value to retain (the surrogate ID) and by definition it cannot change, which simplifies the situation significantly. A: Composite keys can be good - they may affect performance - but they are not the only answer, in much the same way that a unique (surrogate) key isn't the only answer. What concerns me is the vagueness in the reasoning for choosing composite keys. More often than not, vagueness about anything technical indicates a lack of understanding - maybe following someone else's guidelines, in a book or article.... There is nothing wrong with a single unique ID - in fact, if you've got an application connected to a database server and you can choose which database you're using, it will all be good, and you can pretty much do anything with your keys and not really suffer too badly. There has been, and will be, a lot written about this, because there is no single answer. There are methods and approaches that need to be applied carefully in a skilled manner. I've had lots of problems with IDs being provided automatically by the database - and I avoid them wherever possible, but still use them occasionally. A: ... how the database handles ID fields in a non-efficient manner and when it's building indexes, tree sorts are flawed ... This was almost certainly nonsense, but may have related to the issue of index block contention when assigning incrementing numbers to a PK at a high rate from different sessions. If so, then the REVERSE KEY index is there to help, albeit at the expense of a larger index size due to a change in the block-split algorithm. http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref998 Go synthetic, particularly if it aids more rapid development with your toolset. A: I am not an experienced one, but still I am in favor of using the primary key as an id. Here is the explanation, using an example. The format of external data may change over time.
For example, you might think that the ISBN of a book would make a good primary key in a table of books. After all, ISBNs are unique. But as this particular book is being written, the publishing industry in the United States is gearing up for a major change as additional digits are added to all ISBNs. If we’d used the ISBN as the primary key in a table of books, we’d have to update each row to reflect this change. But then we’d have another problem. There’ll be other tables in the database that reference rows in the books table via the primary key. We can’t change the key in the books table unless we first go through and update all of these references. And that will involve dropping foreign key constraints, updating tables, updating the books table, and finally reestablishing the constraints. All in all, this is something of a pain. The problems go away if we use our own internal value as a primary key. No third party can come along and arbitrarily tell us to change our schema—we control our own keyspace. And if something such as the ISBN does need to change, it can change without affecting any of the existing relationships in the database. In effect, we’ve decoupled the knitting together of rows from the external representation of data in those rows. Although the explanation is quite bookish, I think it explains things in a simpler way. A: It sounds like the person who created the database is on the natural keys side of the great natural keys vs. surrogate keys debate. I've never heard of any problems with btrees on ID fields, but I also haven't studied it in any great depth... I fall on the surrogate key side: You have less repetition when using a surrogate key, because you're only repeating a single value in the other tables. Since humans rarely join tables by hand, we don't care if it's a number or not. Also, since there's only one fixed-size column to look up in the index, it's safe to assume surrogates have a faster lookup time by primary key as well.
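The join trade-off described in JeremyDWill's answer (the customer-code/invoice-number example) can be sketched concretely. The following is a hedged illustration using SQLite from Python; the table and column names are invented for the demo, not taken from any poster's actual schema. It shows the natural-key query needing no join while the equivalent surrogate-key query needs one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Natural-key schema: invoices carry the customer code directly.
cur.execute("CREATE TABLE customer_nk (customer_code TEXT PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE invoice_nk (invoice_number TEXT PRIMARY KEY, "
            "customer_code TEXT REFERENCES customer_nk)")

# Surrogate-key schema: invoices reference an opaque integer id;
# the business code lives only in the customer table.
cur.execute("CREATE TABLE customer_sk (customer_id INTEGER PRIMARY KEY, "
            "customer_code TEXT UNIQUE, name TEXT)")
cur.execute("CREATE TABLE invoice_sk (invoice_id INTEGER PRIMARY KEY, "
            "invoice_number TEXT UNIQUE, customer_id INTEGER REFERENCES customer_sk)")

cur.execute("INSERT INTO customer_nk VALUES ('XYZ123', 'Acme')")
cur.execute("INSERT INTO invoice_nk VALUES ('INV-1', 'XYZ123')")

cur.execute("INSERT INTO customer_sk (customer_code, name) VALUES ('XYZ123', 'Acme')")
cur.execute("INSERT INTO invoice_sk (invoice_number, customer_id) VALUES ('INV-1', ?)",
            (cur.lastrowid,))

# Natural key: filter on the business code directly, no join.
nk = cur.execute("SELECT invoice_number FROM invoice_nk "
                 "WHERE customer_code = 'XYZ123'").fetchall()

# Surrogate key: a join is needed to get from business code to invoices.
sk = cur.execute("SELECT i.invoice_number FROM invoice_sk i "
                 "JOIN customer_sk c ON i.customer_id = c.customer_id "
                 "WHERE c.customer_code = 'XYZ123'").fetchall()

print(nk, sk)
```

Both queries return the same rows; the difference is purely in how many tables each one has to touch.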
A: @JeremyDWill Thank you for providing some much-needed balance to the debate. In particular, thanks for the info on DOMAINs. I actually use surrogate keys system-wide for the sake of consistency, but there are tradeoffs involved. The most common cause for me to curse using surrogate keys is when I have a lookup table with a short list of canonical values—I'd use less space and all my queries would be shorter/easier/faster if I had just made the values the PK instead of having to join to the table. A: You can do both - since any big company database is likely to be used by several applications, including human DBAs running one-off queries and data imports, designing it purely for the benefit of ORM systems is not always practical or desirable. What I tend to do these days is to add a "RowID" property to each table - this field is a GUID, and so unique to each row. This is NOT the primary key - that is a natural key (if possible). However, any ORM layers working on top of this database can use the RowID to identify their derived objects. Thus you might have:

CREATE TABLE dbo.Invoice
(
    CustomerId varchar(10),
    CustomerOrderNo varchar(10),
    InvoiceAmount money not null,
    Comments nvarchar(4000),
    RowId uniqueidentifier not null default(newid()),
    primary key(CustomerId, CustomerOrderNo)
)

So your DBA is happy, your ORM architect is happy, and your database integrity is preserved! A: I just wanted to add something here that I don't ever see covered when discussing auto-generated integer identity fields with relational databases (because I see them a lot), and that is, its base type can and will overflow at some point. Now I'm not trying to say this automatically makes composite ids the way to go, but it's just a matter of fact that even though more data could be logically added to a table (which is still unique), the single auto-generated integer identity could prevent this from happening.
Yes, I realize that for most situations it's unlikely, and using a 64-bit integer gives you lots of headroom, and realistically the database probably should have been designed differently if an overflow like this ever occurred. But that doesn't prevent someone from doing it... a table using a single auto-generated 32-bit integer as its identity, which is expected to store all transactions at a global level for a particular fast-food company, is going to fail as soon as it tries to insert its 2,147,483,648th transaction (and that is a completely feasible scenario). It's just something to note, that people tend to gloss over or just ignore entirely. If any table is going to be inserted into with regularity, considerations should be made as to just how often and how much data will accumulate over time, and whether or not an integer-based identifier should even be used.
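The hard ceiling described above is easy to see numerically. A quick illustration in Python, using ctypes to get C-style 32-bit two's-complement semantics (a real database engine would typically raise an arithmetic overflow error rather than silently wrap, so the wraparound here is just to make the limit visible):

```python
import ctypes

INT32_MAX = 2**31 - 1
print(INT32_MAX)  # 2147483647: the last value a signed 32-bit identity can hold

# Incrementing one past the limit under 32-bit two's-complement semantics
# (ctypes integer types do no overflow checking, so the value wraps):
wrapped = ctypes.c_int32(INT32_MAX + 1)
print(wrapped.value)  # -2147483648: the counter has wrapped negative
```

So the 2,147,483,648th insert in the fast-food example is precisely the one that cannot be represented by the 32-bit counter.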
{ "language": "en", "url": "https://stackoverflow.com/questions/159087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: DependencyProperty and DataBinding? In WPF: Can someone please explain the relationship between DependencyProperty and Databinding? I have a property in my code behind I want to be the source of my databinding. When does a DependencyProperty (or does it) come into play if I want to bind this object to textboxes in the XAML? A: The target in a binding must always be a DependencyProperty, but any property (even plain properties) can be the source. The problem with plain properties is that the binding will only pick up the value once and it won't change after that because change notification is missing from the plain source property. To provide that change notification without making it a DependencyProperty, one can:

*Implement INotifyPropertyChanged on the class defining the property.
*Create a PropertyNameChanged event. (Backward compatibility.)

WPF will work better with the first choice. A: What is the DependencyProperty? The DependencyProperty class is one of the most important design foundations hidden deep in the .NET Framework's WPF. The class is sealed by the .NET Framework. A dependency property differs from an ordinary property in that it does not merely store a field value; it also takes advantage of the various services provided by the property system. Most importantly, it provides the full foundation for data binding, sending change notifications whenever a bound value changes. It's already a late answer, but I'll introduce the results of my research.
{ "language": "en", "url": "https://stackoverflow.com/questions/159088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Issue with XDocument and the BOM (Byte Order Mark) Is there any way to output the contents of an XDocument without the BOM? When reading the output with Flash, it causes errors. A: If you're writing the XML with an XmlWriter, you can set the Encoding to one that has been initialized to leave out the BOM. E.g.: System.Text.UTF8Encoding's constructor takes a boolean to specify whether you want the BOM, so:

XmlWriter writer = XmlWriter.Create("foo.xml");
writer.Settings.Encoding = new System.Text.UTF8Encoding(false);
myXDocument.WriteTo(writer);

Would create an XmlWriter with UTF-8 encoding and without the Byte Order Mark. A: Slight mod to Chris Wenham's answer. You can't modify the encoding once the XmlWriter is created, but you can set it using the XmlWriterSettings when creating the XmlWriter:

XmlWriterSettings settings = new XmlWriterSettings();
settings.Encoding = new System.Text.UTF8Encoding(false);
XmlWriter writer = XmlWriter.Create("foo.xml", settings);
myXDocument.WriteTo(writer);

A: I couldn't add a comment above, but if anyone uses Chris Wenham's suggestion, remember to Dispose of the writer! I spent some time wondering why my output was truncated, and that was the reason. Suggest a using(XmlWriter...) {...} change to Chris' suggestion. A: Kind of a combination of postings, maybe something like this:

MemoryStream ms = new MemoryStream();
StreamWriter writer = new StreamWriter(ms, new UTF8Encoding(false));
xmlDocument.Save(writer);
If it were me, I'd dig around more, with the expectation that the actual problem is somewhere else. A: You could probably use System.Text.Encoding.Convert() on the output; just as something to try, not something I have tested. A: Convert it to a string, then remove the mark yourself.
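For reference, the BOM being discussed here is the three-byte UTF-8 signature EF BB BF. A quick illustration in Python (not C#, but the bytes are the same): the 'utf-8-sig' codec writes the signature while plain 'utf-8' does not, which is exactly the difference the UTF8Encoding(false) trick produces in .NET:

```python
# 'utf-8-sig' prepends the UTF-8 byte order mark; plain 'utf-8' does not.
with_bom = "<x/>".encode("utf-8-sig")
without_bom = "<x/>".encode("utf-8")

print(with_bom)     # b'\xef\xbb\xbf<x/>'  <- the three extra bytes Flash trips over
print(without_bom)  # b'<x/>'
```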
{ "language": "en", "url": "https://stackoverflow.com/questions/159097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is it possible to use a C++ .lib file from within a C# program? Is it possible to use a C++ .lib file from within a C# program? A: There are plenty of ways. Read about "interop" in MSDN. One way is to expose the lib as a DLL, and then use P/Invoke to call these functions from a C# project. That limits you to a C-style interface, though. If your interface is more complex (e.g. object-oriented) you can create a C++/CLI layer that will expose the lib's class structure to your C# program. This means you'll have to create a managed C++ (or C++/CLI as it's now called) project; then design an interface in managed code that will be implemented by calls to native C++ (i.e. your lib). Another way of doing this is by wrapping your lib with a COM interface. But COM's a pain, so I wouldn't... A: Not directly. You can create a C++/CLI assembly that consumes the lib and then access that from C#, or you can wrap the lib as a DLL. A: What you need is a managed wrapper (C++/CLI) around the native C/C++ library that you are working with. If you are looking for any C++/CLI book I'd recommend Nishant Sivakumar's C++/CLI in Action. A: Already answered (wrap it), but here's an example. Good luck! A: I would take a look at SWIG; we use this to good effect on our project to expose our C++ API to other language platforms. It's a well-maintained project that effectively builds a thin wrapper around your C++ library that can allow languages such as C# to communicate directly with your native code - saving you the trouble of having to implement (and debug) glue code. A: No. You can only use a full .dll from a C# program. A: That depends, do you have any limitations on this scenario? If you have a lib file, it should be possible to first compile it into a DLL file, secondly expose the functions you want to call in the DLL interface, and thirdly, call them using C# native methods (have a look at pinvoke.net on how to do this bit).
A: You can't use a lib directly, but like the others said, you can use it if you wrap it into a DLL. SWIG can take the headers of your .lib, and if they are not too complex it can generate the DLL for you, which you would then call via P/Invoke from C#; the P/Invoke declarations would also be generated by SWIG. If your library is complex and has reference-counted smart pointers everywhere, you should find an alternative.
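The constraint the answers describe (you need a dynamic library exposing C-style functions, not a static .lib) is common to FFI systems generally, not just C#. As a rough analogy, here is how Python's ctypes, which plays the same role as P/Invoke, calls a C export from a shared library. The library lookup is platform-dependent; the fallback name below assumes Linux:

```python
import ctypes
import ctypes.util

# Like P/Invoke, ctypes can only load a *dynamic* library (DLL/.so) and call
# its C-linkage exports; a static .lib must first be wrapped in one.
libm_name = ctypes.util.find_library("m") or "libm.so.6"  # Linux fallback name
libm = ctypes.CDLL(libm_name)

# Declare the C signature, much as a [DllImport] declaration would in C#.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The C++/CLI and SWIG routes mentioned above exist precisely because this style of binding is limited to flat C interfaces: classes, templates, and smart pointers don't survive the boundary without a wrapper layer.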
{ "language": "en", "url": "https://stackoverflow.com/questions/159103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Which Type of Input is Least Vulnerable to Attack? Which type of input is least vulnerable to Cross-Site Scripting (XSS) and SQL Injection attacks? PHP, HTML, BBCode, etc. I need to know for a forum I'm helping a friend set up. A: (I just posted this in a comment, but it seems a few people are under the impression that select lists, radio buttons, etc. don't need to be sanitized.) Don't count on radio buttons being secure. You should still sanitize the data on the server. People could create an HTML page on their local machine, and make a text box with the same name as your radio button, and have that data get posted back. A more advanced user could use a proxy like WebScarab, and just tweak the parameters as they are posted back to the server. A good rule of thumb is to always use parameterized SQL statements, and always escape user-generated data before putting it into the HTML. A: We need to know more about your situation. Vulnerable how? Some things you should always do:

*Escape strings before storing them in a database to guard against SQL injections
*HTML-encode strings when printing them back to the user from an unknown source, to prevent malicious HTML/JavaScript

I would never execute PHP provided by a user. BBCode/UBBCode are fine, because they are converted to semantically correct HTML, though you may want to look into XSS vulnerabilities related to malformed image tags. If you allow HTML input, you can whitelist certain elements, but this will be a complicated approach that is prone to errors. So, given all of the preceding, I would say that using a good off-the-shelf BBCode library would be your best bet. A: None of them are. All data that is expected at the server can be manipulated by those with the knowledge and motivation. The browser and form that you expect people to be using are only one of several valid ways to submit data to your server/script.
Please familiarize yourself with the topic of XSS and related issues:

*http://shiflett.org/articles/input-filtering
*http://shiflett.org/blog/2007/mar/allowing-html-and-preventing-xss

A: Any kind of boolean. You can even filter invalid input quite easily. ;-) A: There's lots of BB code parsers that sanitize input for HTML and so on. If there's not one available as a package, then you could look at one of the open-source forum software packages for guidance. BB code makes sense as it's the "standard" for forums. A: The input that is the least vulnerable to attack is the "non-input". Are you asking the right question? A: For Odin's sake, please don't sanitize inputs. Don't be afraid of users entering whatever they want into your forms. User input is not inherently unsafe. The accepted answer leads to those kinds of web interfaces like my bank's, where Mr. O'Reilly cannot open an account, because he has an illegal character in his name. What is unsafe is always how you use the user input. The correct way to avoid SQL injections is to use prepared statements. If your database abstraction layer doesn't let you use those, use the correct escaping functions rigorously (mysql_real_escape_string et al). The correct way to prevent XSS attacks is never something like strip_tags(). Escape everything - in PHP, something like htmlentities() is what you're looking for, but it depends on whether you are outputting the string as part of HTML text, an HTML attribute, or inside of JavaScript, etc. Use the right tool for the right context. And NEVER just print the user's input directly to the page. Finally, have a look at the Top 10 vulnerabilities of web applications, and do the right thing to prevent them. http://www.applicure.com/blog/owasp-top-10-2010
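The two rules that recur in the answers above (parameterize SQL on the way in, escape HTML on the way out) can be sketched in a few lines. A hedged Python illustration; SQLite and html.escape stand in for whatever database driver and templating layer the forum actually uses:

```python
import sqlite3
import html

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE posts (body TEXT)")

# 1) Parameterized statement: the driver keeps data separate from SQL,
#    so a hostile value is stored as a literal string, never executed.
hostile = "x'); DROP TABLE posts; --"
cur.execute("INSERT INTO posts (body) VALUES (?)", (hostile,))
stored = cur.execute("SELECT body FROM posts").fetchone()[0]
print(stored)  # the raw string was stored; the table is intact

# 2) Escape on output: HTML-encode user data before printing it to a page,
#    so markup in the data is displayed rather than interpreted.
payload = '<script>alert(1)</script>'
print(html.escape(payload))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Note this matches the "don't sanitize inputs" answer: the hostile string is stored exactly as typed; safety comes from how it is passed to the database and how it is rendered, not from mangling it on the way in.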
{ "language": "en", "url": "https://stackoverflow.com/questions/159114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I match any character across multiple lines in a regular expression? For example, this regex (.*)<FooBar> will match: abcde<FooBar> But how do I get it to match across multiple lines? abcde fghij<FooBar> A: "." normally doesn't match line-breaks. Most regex engines allow you to add the S-flag (also called DOTALL and SINGLELINE) to make "." also match newlines. If that fails, you could do something like [\S\s]. A: For Eclipse, the following expression worked: Foo jadajada Bar" Regular expression: Foo[\S\s]{1,10}.*Bar* A: If you're using Eclipse search, you can enable the "DOTALL" option to make '.' match any character including line delimiters: just add "(?s)" at the beginning of your search string. Example: (?s).*<FooBar> A: Try this: ((.|\n)*)<FooBar> It basically says "any character or a newline" repeated zero or more times. A: Note that (.|\n)* can be less efficient than (for example) [\s\S]* (if your language's regexes support such escapes) and than finding how to specify the modifier that makes . also match newlines. Or you can go with POSIXy alternatives like [[:space:][:^space:]]*. A: Use: /(.*)<FooBar>/s The s causes the dot (.) to match newlines as well. A: Use RegexOptions.Singleline. It changes the meaning of . to include newlines. Regex.Replace(content, searchText, replaceText, RegexOptions.Singleline); A: In Notepad++ you can use this: <table (.|\r\n)*</table> It will match the entire table, starting from rows and columns. You can make it lazy (non-greedy) using the following; that way it will match the first, second, and so forth tables, and not all at once: <table (.|\r\n)*?</table> A: In many regex dialects, /[\S\s]*<Foobar>/ will do just what you want. Source A: ([\s\S]*)<FooBar> The dot matches all except newlines (\r\n). So use \s\S, which will match ALL characters. A: In a Java-based regular expression, you can use [\s\S].
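Several of these answers can be checked quickly with Python's re module (one of the engines exposing a DOTALL flag), using the question's own input:

```python
import re

text = "abcde\nfghij<FooBar>"

# Default: '.' cannot cross the newline, so only the second line matches.
m = re.search(r"(.*)<FooBar>", text)
print(m.group(1))  # 'fghij'

# re.DOTALL (or an inline (?s)) lets '.' match newlines too.
m_dotall = re.search(r"(.*)<FooBar>", text, re.DOTALL)
print(m_dotall.group(1))  # 'abcde\nfghij'

# The character-class workaround works without any flag at all.
m_class = re.search(r"([\s\S]*)<FooBar>", text)
print(m_class.group(1))  # 'abcde\nfghij'
```

Note the default case still matches something: search simply restarts after the newline, so the capture is silently truncated to the last line rather than failing outright, which is why this bug is easy to miss.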
A: This works for me and is the simplest one: (\X*)<FooBar> A: It depends on the language, but there should be a modifier that you can add to the regex pattern. In PHP it is: /(.*)<FooBar>/s The s at the end causes the dot to match all characters, including newlines. A: Generally, . doesn't match newlines, so try ((.|\n)*)<foobar>. A: Solution: Using the pattern modifier sU will get the desired matching in PHP. Example: preg_match('/(.*)/sU', $content, $match); Sources: * Pattern Modifiers A: In JavaScript you can use [^]* to search for zero to infinite characters, including line breaks.

$("#find_and_replace").click(function() {
  var text = $("#textarea").val();
  search_term = new RegExp("[^]*<Foobar>", "gi");
  replace_term = "Replacement term";
  var new_text = text.replace(search_term, replace_term);
  $("#textarea").val(new_text);
});

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<button id="find_and_replace">Find and replace</button>
<br>
<textarea ID="textarea">abcde fghij&lt;Foobar&gt;</textarea>

A: The question is, can the . pattern match any character? The answer varies from engine to engine. The main difference is whether the pattern is used by a POSIX or non-POSIX regex library. A special note about lua-patterns: they are not considered regular expressions, but . matches any character there, the same as POSIX-based engines. Another note on matlab and octave: the . matches any character by default (demo): str = "abcde\n fghij<Foobar>"; expression = '(.*)<Foobar>*'; [tokens,matches] = regexp(str,expression,'tokens','match'); (tokens contains an 'abcde\n fghij' item). Also, in all of boost's regex grammars the dot matches line breaks by default. Boost's ECMAScript grammar allows you to turn this off with regex_constants::no_mod_m (source).
As for oracle (it is POSIX based), use the n option (demo): select regexp_substr('abcde' || chr(10) ||' fghij<Foobar>', '(.*)<Foobar>', 1, 1, 'n', 1) as results from dual POSIX-based engines: A mere . already matches line breaks, so there isn't a need to use any modifiers, see bash (demo). The tcl (demo), postgresql (demo), r (TRE, base R default engine with no perl=TRUE, for base R with perl=TRUE or for stringr/stringi patterns, use the (?s) inline modifier) (demo) also treat . the same way. However, most POSIX-based tools process input line by line. Hence, . does not match the line breaks just because they are not in scope. Here are some examples of how to override this: * *sed - There are multiple workarounds. The most precise, but not very safe, is sed 'H;1h;$!d;x; s/\(.*\)<Foobar>/\1/' (H;1h;$!d;x; slurps the file into memory). If whole lines must be included, sed '/start_pattern/,/end_pattern/d' file (removing from start will end with matched lines included) or sed '/start_pattern/,/end_pattern/{{//!d;};}' file (with matching lines excluded) can be considered. *perl - perl -0pe 's/(.*)<FooBar>/$1/gs' <<< "$str" (-0 slurps the whole file into memory, -p prints the file after applying the script given by -e). Note that using -000pe will slurp the file and activate 'paragraph mode' where Perl uses consecutive newlines (\n\n) as the record separator. *gnu-grep - grep -Poz '(?si)abc\K.*?(?=<Foobar>)' file. Here, z enables file slurping, (?s) enables the DOTALL mode for the . pattern, (?i) enables case insensitive mode, \K omits the text matched so far, *? is a lazy quantifier, (?=<Foobar>) matches the location before <Foobar>. *pcregrep - pcregrep -Mi "(?si)abc\K.*?(?=<Foobar>)" file (M enables file slurping here). Note pcregrep is a good solution for macOS grep users. See demos. 
Non-POSIX-based engines: * *php - Use the s (PCRE_DOTALL) modifier: preg_match('~(.*)<Foobar>~s', $s, $m) (demo) *c# - Use the RegexOptions.Singleline flag (demo): - var result = Regex.Match(s, @"(.*)<Foobar>", RegexOptions.Singleline).Groups[1].Value; - var result = Regex.Match(s, @"(?s)(.*)<Foobar>").Groups[1].Value; *powershell - Use the (?s) inline option: $s = "abcde`nfghij<FooBar>"; $s -match "(?s)(.*)<Foobar>"; $matches[1] *perl - Use the s modifier (or (?s) inline version at the start) (demo): /(.*)<FooBar>/s *python - Use the re.DOTALL (or re.S) flags or (?s) inline modifier (demo): m = re.search(r"(.*)<FooBar>", s, flags=re.S) (and then if m:, print(m.group(1))) *java - Use the Pattern.DOTALL modifier (or inline (?s) flag) (demo): Pattern.compile("(.*)<FooBar>", Pattern.DOTALL) *kotlin - Use RegexOption.DOT_MATCHES_ALL: "(.*)<FooBar>".toRegex(RegexOption.DOT_MATCHES_ALL) *groovy - Use the (?s) in-pattern modifier (demo): regex = /(?s)(.*)<FooBar>/ *scala - Use the (?s) modifier (demo): "(?s)(.*)<Foobar>".r.findAllIn("abcde\n fghij<Foobar>").matchData foreach { m => println(m.group(1)) } *javascript - Use [^] or the workarounds [\d\D] / [\w\W] / [\s\S] (demo): s.match(/([\s\S]*)<FooBar>/)[1] *c++ (std::regex) - Use [\s\S] or the JavaScript workarounds (demo): regex rex(R"(([\s\S]*)<FooBar>)"); *vba / vbscript - Use the same approach as in JavaScript, ([\s\S]*)<Foobar>. (NOTE: The MultiLine property of the RegExp object is sometimes erroneously thought to be the option to allow . to match across line breaks, while, in fact, it only changes the ^ and $ behavior to match start/end of lines rather than strings, the same as in JavaScript regex behavior.) *ruby - Use the /m MULTILINE modifier (demo): s[/(.*)<Foobar>/m, 1] *r (base R, PCRE) - Base R PCRE regexps - use (?s): regmatches(x, regexec("(?s)(.*)<FooBar>",x, perl=TRUE))[[1]][2] (demo) *r (ICU, stringr/stringi) - in stringr/stringi regex functions that are powered with the ICU regex engine. 
Also use (?s): stringr::str_match(x, "(?s)(.*)<FooBar>")[,2] (demo) *go - Use the inline modifier (?s) at the start (demo): re := regexp.MustCompile(`(?s)(.*)<FooBar>`) *swift - Use dotMatchesLineSeparators or (easier) pass the (?s) inline modifier to the pattern: let rx = "(?s)(.*)<Foobar>" *objective-c - The same as Swift. (?s) works the easiest, but here is how the option can be used: NSRegularExpression* regex = [NSRegularExpression regularExpressionWithPattern:pattern options:NSRegularExpressionDotMatchesLineSeparators error:&regexError]; *re2, google-apps-script - Use the (?s) modifier (demo): "(?s)(.*)<Foobar>" (in Google Spreadsheets, =REGEXEXTRACT(A2,"(?s)(.*)<Foobar>")) NOTES ON (?s): In most non-POSIX engines, the (?s) inline modifier (or embedded flag option) can be used to force . to match line breaks. If placed at the start of the pattern, (?s) changes the behavior of all . in the pattern. If the (?s) is placed somewhere after the beginning, only the .s located to the right of it will be affected, unless this is a pattern passed to Python's re. In Python's re, regardless of the (?s) location, every . in the pattern is affected. The (?s) effect is stopped using (?-s). A modified group can be used to only affect a specified range of a regex pattern (e.g., Delim1(?s:.*?)\nDelim2.* will make the first .*? match across newlines and the second .* will only match the rest of the line). POSIX note: In non-POSIX regex engines, to match any character, the [\s\S] / [\d\D] / [\w\W] constructs can be used. In POSIX, [\s\S] does not match any character (as it does in JavaScript or any non-POSIX engine), because regex escape sequences are not supported inside bracket expressions. [\s\S] is parsed as a bracket expression that matches a single character, \ or s or S. A: We can also use (.*?\n)*? to match everything including newline without being greedy. This will make the new line optional (.*?|\n)*? 
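As a quick, runnable check of the Python behavior listed above, the following sketch contrasts the default ., re.DOTALL, the inline (?s) flag, and the [\s\S] workaround, using the sample string from the question:

```python
import re

s = "abcde\nfghij<FooBar>"

# Default: . stops at the newline, so only the second line is captured.
default = re.search(r"(.*)<FooBar>", s).group(1)

# re.DOTALL (or an inline (?s)) lets . cross the newline.
dotall = re.search(r"(.*)<FooBar>", s, flags=re.DOTALL).group(1)
inline = re.search(r"(?s)(.*)<FooBar>", s).group(1)

# [\s\S] matches any character without needing a flag.
workaround = re.search(r"([\s\S]*)<FooBar>", s).group(1)

print(repr(default))     # 'fghij'
print(repr(dotall))      # 'abcde\nfghij'
print(repr(inline))      # 'abcde\nfghij'
print(repr(workaround))  # 'abcde\nfghij'
```

Note that the default pattern still finds a match here; it just starts after the newline, which is why the first capture is only "fghij".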
A: In Ruby you can use the 'm' option (multiline): /YOUR_REGEXP/m See the Regexp documentation on ruby-doc.org for more information. A: In the context of use within languages, regular expressions act on strings, not lines. So you should be able to use the regex normally, assuming that the input string has multiple lines. In this case, the given regex will match the entire string, since "<FooBar>" is present. Depending on the specifics of the regex implementation, the $1 value (obtained from the "(.*)") will either be "fghij" or "abcde\nfghij". As others have said, some implementations allow you to control whether the "." will match the newline, giving you the choice. Line-based regular expression use is usually for command line things like egrep. A: I had the same problem and solved it in probably not the best way but it works. I replaced all line breaks before I did my real match: mystring = Regex.Replace(mystring, "\r\n", "") I am manipulating HTML so line breaks don't really matter to me in this case. I tried all of the suggestions above with no luck. I am using .NET 3.5 FYI. A: Try: .*\n*.*<FooBar> assuming you are also allowing blank newlines. As you are allowing any character including nothing before <FooBar>. A: I wanted to match a particular if block in Java: ... ... if(isTrue){ doAction(); } ... ... } If I use the regExp if \(isTrue(.|\n)*} it included the closing brace for the method block, so I used if \(isTrue([^}.]|\n)*} to exclude the closing brace from the wildcard match. A: Often we have to modify a substring with a few keywords spread across lines preceding the substring. Consider an XML element: <TASK> <UID>21</UID> <Name>Architectural design</Name> <PercentComplete>81</PercentComplete> </TASK> Suppose we want to modify the 81 to some other value, say 40. First identify <UID>21</UID>, then skip all characters including \n till <PercentComplete>. 
The regular expression pattern and the replace specification are: String hw = new String("<TASK>\n <UID>21</UID>\n <Name>Architectural design</Name>\n <PercentComplete>81</PercentComplete>\n</TASK>"); String pattern = new String ("(<UID>21</UID>)((.|\n)*?)(<PercentComplete>)(\\d+)(</PercentComplete>)"); String replaceSpec = new String ("$1$2$440$6"); // Note that the group (<PercentComplete>) is $4 and the group ((.|\n)*?) is $2. String iw = hw.replaceFirst(pattern, replaceSpec); System.out.println(iw); <TASK> <UID>21</UID> <Name>Architectural design</Name> <PercentComplete>40</PercentComplete> </TASK> The subgroup (.|\n) is probably the missing group $3. If we make it non-capturing by (?:.|\n) then the $3 is (<PercentComplete>). So the pattern and replaceSpec can also be: pattern = new String("(<UID>21</UID>)((?:.|\n)*?)(<PercentComplete>)(\\d+)(</PercentComplete>)"); replaceSpec = new String("$1$2$340$5") and the replacement works correctly as before. A: Typically searching for three consecutive lines in PowerShell, it would look like: $file = Get-Content file.txt -raw $pattern = 'lineone\r\nlinetwo\r\nlinethree\r\n' # "Windows" text $pattern = 'lineone\nlinetwo\nlinethree\n' # "Unix" text $pattern = 'lineone\r?\nlinetwo\r?\nlinethree\r?\n' # Both $file -match $pattern # output True Bizarrely, this would be Unix text at the prompt, but Windows text in a file: $pattern = 'lineone linetwo linethree ' Here's a way to print out the line endings: 'lineone linetwo linethree ' -replace "`r",'\r' -replace "`n",'\n' # Output lineone\nlinetwo\nlinethree\n A: Option 1 One way would be to use the s flag (just like the accepted answer): /(.*)<FooBar>/s Demo 1 Option 2 A second way would be to use the m (multiline) flag and any of the following patterns: /([\s\S]*)<FooBar>/m or /([\d\D]*)<FooBar>/m or /([\w\W]*)<FooBar>/m Demo 2 RegEx Circuit jex.im visualizes regular expressions:
{ "language": "en", "url": "https://stackoverflow.com/questions/159118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "559" }
Q: DLL versions and Visual Studio attach to process I'm trying to use VS's attach to process tool to debug add-ins I'm developing for Sql Server Reporting Services. I am able to correctly debug it with attach to process when I copy dll's and pdb's in my project debug/bin dir to the ReportServer/bin dir. But, if I use my msbuild script and copy those dlls and pdbs to the ReportServer/bin dir I get the wrong version. 1) How do I check the current version of a dll/tell if a dll is incompatible with another version? 2) And how do I tell what dll's/versions are loaded by the ReportServer process? Thanks! A: I don't know anything about Sql Server Reporting Services, but 1) you can inspect the version of a DLL with ildasm.exe 2) when you use VS 'attach to process', in the 'Modules' window it shows the version numbers of all the loaded assemblies
{ "language": "en", "url": "https://stackoverflow.com/questions/159135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Getting MAC Address I need a cross platform method of determining the MAC address of a computer at run time. For windows the 'wmi' module can be used and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant not to mention error prone. Does anyone know a cross platform (Windows and Linux) method to get the MAC address? If not, does anyone know any more elegant methods than those I listed above? A: The pure python solution for this problem under Linux to get the MAC for a specific local interface, originally posted as a comment by vishnubob and improved on by Ben Mackey in this activestate recipe #!/usr/bin/python import fcntl, socket, struct def getHwAddr(ifname): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) info = fcntl.ioctl(s.fileno(), 0x8927, struct.pack('256s', ifname[:15])) # 0x8927 = SIOCGIFHWADDR return ':'.join(['%02x' % ord(char) for char in info[18:24]]) print getHwAddr('eth0') This is the Python 3 compatible code: #!/usr/bin/env python3 # -*- coding: utf-8 -*- import fcntl import socket import struct def getHwAddr(ifname): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) info = fcntl.ioctl(s.fileno(), 0x8927, struct.pack('256s', bytes(ifname, 'utf-8')[:15])) # 0x8927 = SIOCGIFHWADDR return ':'.join('%02x' % b for b in info[18:24]) def main(): print(getHwAddr('enp0s8')) if __name__ == "__main__": main() A: Using my answer from here: https://stackoverflow.com/a/18031868/2362361 It is important to know which iface you want the MAC for, since many can exist (bluetooth, several nics, etc.). 
This does the job when you know the IP of the iface you need the MAC for, using netifaces (available in PyPI): import netifaces as nif def mac_for_ip(ip): 'Returns a list of MACs for interfaces that have given IP, returns None if not found' for i in nif.interfaces(): addrs = nif.ifaddresses(i) try: if_mac = addrs[nif.AF_LINK][0]['addr'] if_ip = addrs[nif.AF_INET][0]['addr'] except (IndexError, KeyError): # ignore ifaces that don't have a MAC or IP if_mac = if_ip = None if if_ip == ip: return if_mac return None Testing: >>> mac_for_ip('169.254.90.191') '2c:41:38:0a:94:8b' A: You can do this with psutil which is cross-platform: import psutil nics = psutil.net_if_addrs() print [j.address for i in nics if i != "lo" for j in nics[i] if j.family == 17] A: netifaces is a good module to use for getting the mac address (and other addresses). It's cross-platform and makes a bit more sense than using socket or uuid. import netifaces netifaces.interfaces() # ['lo', 'eth0', 'tun2'] netifaces.ifaddresses('eth0')[netifaces.AF_LINK] # [{'addr': '08:00:27:50:f2:51', 'broadcast': 'ff:ff:ff:ff:ff:ff'}] * *pypi location *Good Intro to netifaces A: The cross-platform getmac package will work for this, if you don't mind taking on a dependency. It works with Python 2.7+ and 3.4+. It will try many different methods until either getting an address or returning None. from getmac import get_mac_address eth_mac = get_mac_address(interface="eth0") win_mac = get_mac_address(interface="Ethernet 3") ip_mac = get_mac_address(ip="192.168.0.1") ip6_mac = get_mac_address(ip6="::1") host_mac = get_mac_address(hostname="localhost") updated_mac = get_mac_address(ip="10.0.0.1", network_request=True) Disclaimer: I am the author of the package. Update (Jan 14 2019): the package now only supports Python 2.7+ and 3.4+. You can still use an older version of the package if you need to work with an older Python (2.5, 2.6, 3.2, 3.3). A: Sometimes we have more than one net interface. 
A simple method to find out the mac address of a specific interface is: def getmac(interface): try: mac = open('/sys/class/net/'+interface+'/address').readline() except IOError: mac = "00:00:00:00:00:00" return mac[0:17] Calling the method is simple: myMAC = getmac("wlan0") A: One other thing that you should note is that uuid.getnode() can fake the MAC addr by returning a random 48-bit number which may not be what you are expecting. Also, there's no explicit indication that the MAC address has been faked, but you could detect it by calling getnode() twice and seeing if the result varies. If the same value is returned by both calls, you have the MAC address, otherwise you are getting a faked address. >>> print uuid.getnode.__doc__ Get the hardware address as a 48-bit positive integer. The first time this runs, it may launch a separate program, which could be quite slow. If all attempts to obtain the hardware address fail, we choose a random 48-bit number with its eighth bit set to 1 as recommended in RFC 4122. A: Note that you can build your own cross-platform library in python using conditional imports. e.g. import platform if platform.system() == 'Linux': import LinuxMac mac_address = LinuxMac.get_mac_address() elif platform.system() == 'Windows': # etc This will allow you to use os.system calls or platform-specific libraries. A: To get the eth0 interface MAC address, import psutil nics = psutil.net_if_addrs()['eth0'] for interface in nics: if interface.family == 17: print(interface.address) A: Python 2.5 includes a uuid implementation which (in at least one version) needs the mac address. You can import the mac finding function into your own code easily: from uuid import getnode as get_mac mac = get_mac() The return value is the mac address as a 48-bit integer. A: psutil.net_if_addrs() actually returns a dictionary; within the dictionary there is a list, and within the list a named tuple. Indexing as nics['Ethernet'][0].address solves the problem. 
import psutil nics = psutil.net_if_addrs() mac_address = nics['Ethernet'][0].address print(mac_address) A: This cross-platform code does not 100% work on Windows. This works on Windows: import psutil print([(k, addr.address) for k, v in psutil.net_if_addrs().items() for addr in v if addr.family == -1]) Example: [ ('Local Area Connection', '01-23-45-67-89-0A'), ('Wireless Network Connection', '23-45-67-89-0A-BC'), ('Bluetooth Network Connection', '45-67-89-0A-BC-DE'), ('isatap.{01ABCDEF-0123-4567-890A-0123456789AB}', '00-00-00-00-00-00-00-01') ] A: I don't know of a unified way, but here's something that you might find useful: http://www.codeguru.com/Cpp/I-N/network/networkinformation/article.php/c5451 What I would do in this case would be to wrap these up into a function, and based on the OS it would run the proper command, parse as required and return only the MAC address formatted as you want. It's of course all the same, except that you only have to do it once, and it looks cleaner from the main code. A: For Linux you can retrieve the MAC address using a SIOCGIFHWADDR ioctl. struct ifreq ifr; uint8_t macaddr[6]; if ((s = socket(AF_INET, SOCK_DGRAM, IPPROTO_IP)) < 0) return -1; strcpy(ifr.ifr_name, "eth0"); if (ioctl(s, SIOCGIFHWADDR, (void *)&ifr) == 0) { if (ifr.ifr_hwaddr.sa_family == ARPHRD_ETHER) { memcpy(macaddr, ifr.ifr_hwaddr.sa_data, 6); return 0; ... etc ... You've tagged the question "python". I don't know of an existing Python module to get this information. You could use ctypes to call the ioctl directly. A: For Linux let me introduce a shell script that will show the mac address and allows you to change it (MAC spoofing). 
ifconfig eth0 | grep HWaddr |cut -dH -f2|cut -d\ -f2 00:26:6c:df:c3:95 Cut arguments may differ (I am not an expert) try: ifconfig eth0 | grep HWaddr eth0 Link encap:Ethernet HWaddr 00:26:6c:df:c3:95 To change MAC we may do: ifconfig eth0 down ifconfig eth0 hw ether 00:80:48:BA:d1:30 ifconfig eth0 up will change mac address to 00:80:48:BA:d1:30 (temporarily, will restore to actual one upon reboot). A: Alternatively, import uuid mac_id=(':'.join(['{:02x}'.format((uuid.getnode() >> ele) & 0xff) for ele in range(0, 8*6, 8)][::-1])) print(mac_id)
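Pulling the uuid.getnode() answers together, here is a small sketch that formats the 48-bit integer as a colon-separated MAC string and applies the call-twice check described earlier for spotting a randomly faked address. The helper names are illustrative, not from any library, and note that newer CPython versions cache getnode()'s result, so the repeat-call check may not catch a fake there:

```python
import uuid

def format_mac(node):
    """Render a 48-bit integer (as returned by uuid.getnode()) as aa:bb:cc:dd:ee:ff."""
    return ':'.join('{:02x}'.format((node >> shift) & 0xff)
                    for shift in range(40, -1, -8))

def get_mac_if_real():
    """Return the MAC string, or None if getnode() appears to be faking it.

    Per the uuid docs quoted above, a faked address is a random 48-bit
    number, so two calls returning different values suggest no real
    hardware address was found.
    """
    first, second = uuid.getnode(), uuid.getnode()
    return format_mac(first) if first == second else None

print(format_mac(0x00266cdfc395))  # 00:26:6c:df:c3:95
print(get_mac_if_real())
```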
{ "language": "en", "url": "https://stackoverflow.com/questions/159137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "135" }
Q: Branch Current changes then "rollback" in TFS I have a specific changeset that I want to "rollback" my Development branch to, but I want to take all of the the changes after that specific changeset and put them in to a new branch. Is this possible in TFS? If so, how could I do such a thing? Thanks, Dave A: Well.. The easiest way is to do exactly what you just said. Branch the existing code into a new spot. Then get the changeset you want, checkout the project, and check the changeset back in. A: If you have the Team Foundation Power Tools installed, you can use the command tfpt rollback to create a changeset in your client that will take care of the rollback. A: I ended up branching at the changeset I wanted, naming it Development-stable, then renamed Development to Development-experimental and then renamed Development-stable to Development.
{ "language": "en", "url": "https://stackoverflow.com/questions/159138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Groovy executing shell commands Groovy adds the execute method to String to make executing shells fairly easy; println "ls".execute().text but if an error happens, then there is no resulting output. Is there an easy way to get both the standard error and standard out? (other than creating a bunch of code to; create two threads to read both inputstreams, then using a parent stream to wait for them to complete then convert the strings back to text?) It would be nice to have something like; def x = shellDo("ls /tmp/NoFile") println "out: ${x.out} err:${x.err}" A: def exec = { encoding, execPath, execStr, execCommands -> def outputCatcher = new ByteArrayOutputStream() def errorCatcher = new ByteArrayOutputStream() def proc = execStr.execute(null, new File(execPath)) def inputCatcher = proc.outputStream execCommands.each { cm -> inputCatcher.write(cm.getBytes(encoding)) inputCatcher.flush() } proc.consumeProcessOutput(outputCatcher, errorCatcher) proc.waitFor() return [new String(outputCatcher.toByteArray(), encoding), new String(errorCatcher.toByteArray(), encoding)] } def out = exec("cp866", "C:\\Test", "cmd", ["cd..\n", "dir\n", "exit\n"]) println "OUT:\n" + out[0] println "ERR:\n" + out[1] A: "ls".execute() returns a Process object which is why "ls".execute().text works. You should be able to just read the error stream to determine if there were any errors. There is an extra method on Process that allows you to pass a StringBuffer to retrieve the text: consumeProcessErrorStream(StringBuffer error). 
Example: def proc = "ls".execute() def b = new StringBuffer() proc.consumeProcessErrorStream(b) println proc.text println b.toString() A: // a wrapper closure around executing a string // can take either a string or a list of strings (for arguments with spaces) // prints all output, complains and halts on error def runCommand = { strList -> assert ( strList instanceof String || ( strList instanceof List && strList.each{ it instanceof String } ) ) def proc = strList.execute() proc.in.eachLine { line -> println line } proc.out.close() proc.waitFor() print "[INFO] ( " if(strList instanceof List) { strList.each { print "${it} " } } else { print strList } println " )" if (proc.exitValue()) { println "gave the following error: " println "[ERROR] ${proc.getErrorStream()}" } assert !proc.exitValue() } A: I find this more idiomatic: def proc = "ls foo.txt doesnotexist.txt".execute() assert proc.in.text == "foo.txt\n" assert proc.err.text == "ls: doesnotexist.txt: No such file or directory\n" As another post mentions, these are blocking calls, but since we want to work with the output, this may be necessary. A: Ok, solved it myself; def sout = new StringBuilder(), serr = new StringBuilder() def proc = 'ls /badDir'.execute() proc.consumeProcessOutput(sout, serr) proc.waitForOrKill(1000) println "out> $sout\nerr> $serr" displays: out> err> ls: cannot access /badDir: No such file or directory A: To add one more important note to the answers provided above - For a process def proc = command.execute(); always try to use def outputStream = new StringBuffer(); proc.waitForProcessOutput(outputStream, System.err) //proc.waitForProcessOutput(System.out, System.err) rather than def output = proc.in.text; to capture the outputs after executing commands in groovy as the latter is a blocking call (SO question for reason). A: command = "ls *" def execute_state=sh(returnStdout: true, script: command) but if the command fails, the process will terminate
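For comparison with the Groovy snippets above, the same capture-both-streams idea can be sketched with Python's subprocess module. The shell_do name mirrors the hypothetical helper from the question, and the command deliberately uses the current interpreter so the sketch runs anywhere:

```python
import subprocess
import sys
from collections import namedtuple

Result = namedtuple('Result', ['out', 'err', 'code'])

def shell_do(args):
    """Run a command, capturing stdout and stderr separately as text."""
    proc = subprocess.run(args, capture_output=True, text=True)
    return Result(proc.stdout, proc.stderr, proc.returncode)

# Emit one line on each stream so both captures are visible.
x = shell_do([sys.executable, '-c',
              "import sys; print('listing'); print('no such file', file=sys.stderr)"])
print("out:", x.out.strip(), "err:", x.err.strip())
```

Like the Groovy waitForProcessOutput variant, subprocess.run blocks until the child exits, which is usually what you want when you intend to inspect both streams.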
{ "language": "en", "url": "https://stackoverflow.com/questions/159148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "210" }
Q: Check out from a remote SVN repository TO a remote location? Is there a way with SVN to check out from a remote repository to another remote location rather than my local file system? Something like: svn co http://myrepository/svn/project ssh me@otherlocation.net:/var/www/project A: I think you could do: ssh me@other.net 'svn co http://repository/svn/project /var/www/project' This takes advantage of the fact that ssh lets you execute a command remotely. A: Nope. If you want to copy a repository, look into svnsync. A: You could use Subversion with SSHFS.
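If the ssh trick above needs to be scripted, the remote command is just an argv handed to ssh. This Python sketch (the host and paths are the question's examples, not real servers) builds that argv; the commented subprocess.run line shows how it would be executed:

```python
import subprocess

def remote_svn_checkout_cmd(host, repo_url, dest):
    """Build the argv that runs `svn co` on a remote host over ssh."""
    return ["ssh", host, "svn", "co", repo_url, dest]

cmd = remote_svn_checkout_cmd("me@otherlocation.net",
                              "http://myrepository/svn/project",
                              "/var/www/project")
print(" ".join(cmd))
# To actually run it (requires ssh access to the host):
# subprocess.run(cmd, check=True)
```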
{ "language": "en", "url": "https://stackoverflow.com/questions/159152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Sql 05 express management studio versus standard What are the differences between the free sql express 05 management studio and the licensed version? A: Management Studio Express cannot manage the following: * *SQL Server Analysis Services *Integration Services *Notification Services *Reporting Services *SQL Server Agent *SQL Server 2005 Mobile Edition ( from this page. Look at the Note in the Overview section. ) These are all features that are not supported by SQL Server Express. Also note that the full version of SQL Management Studio is included with SQL Server 2005. A: See the "SQL Server 2005 Features Comparison" at http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx
{ "language": "en", "url": "https://stackoverflow.com/questions/159154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the difference between DTCPing and DTCTester? I've used DTCTester before to diagnose MSDTC problems. However, I just noticed DTCPing seems to do about the same thing. What's the difference between these two? From what I can tell so far, DTCPing needs to run on both client and server machines, whereas DTCTester only needs to run from the client. Are there any other differences? A: Testing DTC settings is very common when installing BizTalk Server, so from the BTS documentation - from http://msdn.microsoft.com/en-us/library/aa561924.aspx Use the DTCTester utility to verify transaction support between two computers if SQL Server is installed on one of the computers. The DTCTester utility uses ODBC to verify transaction support against a SQL Server database. For more information about DTCTester see How to Use DTCTester Tool. Use DTCPing to verify transaction support between two computers if SQL Server is not installed on either computer. The DTCPing tool must be run on both the client and server computer and is a good alternative to the DTCTester utility when SQL Server is not installed on either computer. For more information about DTCPing, see How to troubleshoot MS DTC firewall issues. A: This is not a direct answer to this question, but an important point to note. * *The DTCPing windows application should be open on both servers before you start the test. *As per How To Use DTCTester Tool Create an ODBC data source for your SQL Server through the ODBC utility in Control Panel. References: * *MSDTC problems *MSDTC THROUGH A FIREWALL TO AN SQL CLUSTER WITH RPC *Troubleshooting MSDTC issues with the DTCPing tool
{ "language": "en", "url": "https://stackoverflow.com/questions/159173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you convert pixels to printed inches in JavaScript? I want to resize the font of a SPAN element's style until the SPAN's text is 7.5 inches wide when printed out on paper, but JavaScript only reports the SPAN's clientWidth property in pixels. <span id="test">123456</span> And then: #test { font-size:1.2in; /* adjust this for yourself until printout measures 7.5in wide */ } And then: console.log(document.getElementById('test').clientWidth); I've determined experimentally on one machine that it uses approximately 90 DPI as a conversion factor, because the above code logs approximately 675, at least under Firefox 3. This number is not necessarily the same under different browser, printer, screen, etc. configurations. So, how do I find the DPI the browser is using? What can I call to get back "90" on my system? A: To summarize: This is not a problem you can solve using HTML. Apart from the CSS2 print properties, there is no defined or expected way for browsers to print things. Firstly, a pixel (in CSS) is not necessarily the same size as a pixel (on your screen), so the fact that a certain value works for you doesn't mean it will translate to other setups. Secondly, users can change the text size using features like page zoom, etc. Thirdly, because there is no defined way to lay out web pages for print purposes, each browser does it differently. Just print preview something in firefox vs IE and see the difference. Fourthly, printing brings in a whole slew of other variables, such as the DPI of the printer, the paper size. Additionally, most printer drivers support user-scaling of the output set in the print dialog which the browser never sees. Finally, most likely because printing is not a goal of HTML, the 'print engine' of the web browser is (in all browsers I've seen anyway) disconnected from the rest of it. 
There is no way for your javascript to 'talk' to the print engine, or vice versa, so the "DPI number the browser is using for print previews" is not exposed in any way. I'd recommend a PDF file. A: I've determined experimentally on one machine that it uses approximately 90 DPI as a conversion factor, because the above code logs approximately 675, at least under Firefox 3. 1) Is this the same on every machine/browser? Definitely NOT. Every screen resolution / printer / print settings combo is gonna be a little different. There's really no way to know what the print size will be unless you're using em's instead of pixels. A: * *No *You don't This is actually further complicated by the screen resolution settings as well. Good luck. A: I think this does what you want. But I agree with the other posters, HTML isn't really suited for this sort of thing. Anyway, hope you find this useful. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> <style type="text/css"> #container {width: 7in; border: solid 1px red;} #span {display: table-cell; width: 1px; border: solid 1px blue; font-size: 12px;} </style> <script language="javascript" type="text/javascript"> function resizeText() { var container = document.getElementById("container"); var span = document.getElementById("span"); var containerWidth = container.clientWidth; var spanWidth = span.clientWidth; var nAttempts = 900; var i=1; var font_size = 12; while ( spanWidth < containerWidth && i < nAttempts ) { span.style.fontSize = font_size+"px"; spanWidth = span.clientWidth; font_size++; i++; } } </script> </head> <body> <div id="container"> <span id="span">test</span> </div> <a href="javascript:resizeText();">resize text</a> </body> </html> A: The web is not a good medium for printed materials. 
A: As made clear by the other answers, printing from the web is tricky business. It's unpredictable and sadly not all browsers and configurations react the same. I would point you in this direction however: You can attach a CSS to your document that is targeted specifically for printing like so <link rel="stylesheet" type="text/css" href="print.css" media="print" /> That way you can format the printed output of your page in a separate style sheet and keep your regular stylesheet for displaying on screen. I've done this before with decent results -- although it required quite a bit of tweaking to ensure that the printed document comes out the way you want it. An interesting article on the subject of printing from the web: A List Apart - Going To Print Some info from the W3C about CSS print profile: W3C - CSS Print Profile A: If you are generating content that is meant to look a specific way, you may want to look into a format that is meant to be printed, like PDF. HTML/CSS is meant to be adaptable to different configurations of screen size, resolution, color-depth, etc. It is not meant for saying a box should be exactly 7.5 inches wide. A: Have you tried @media print { #test { width: 7in; }} A: This is a merged answer of what I've learned from the posts of Orion Edwards (especially the link to webkit.org) and Bill. It seems the answer is actually always 96 DPI, although you're free to run the following (admittedly sloppy) code and I would love to hear if you get a different answer: var bob = document.body.appendChild(document.createElement('div')); bob.innerHTML = "<div id='jake' style='width:1in'>j</div>"; alert(document.getElementById('jake').clientWidth); As the webkit.org article says, this number is fairly standard, and isn't affected by your printer or platform, because those use a different DPI. Even if it isn't, using that above code you could find out the answer you need. 
Then you can use that DPI in your JavaScript, limiting the clientWidth like Bill did, combined with CSS that uses inches, to produce something that prints out correctly as long as the user doesn't do any funky scaling like Orion Edwards mentioned. After that point it's up to the user, who may want to do something beyond what the programmer had in mind, such as printing something designed to fit on 8.5x11 onto 11x17 paper. Even then it will still "work right" for what the user wants.
{ "language": "en", "url": "https://stackoverflow.com/questions/159183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Eclipse RCP Toolbar buttons with the Eclipse Look In Eclipse, it's easy to specify buttons for your toolbar using the ActionSets extension point. However, when I need to specify some items programmatically, I can't get the same look. I don't believe that the framework is using native buttons for these, but so far I can't find the right recipe to match the Eclipse look. I wanted to see if anyone has found the right snippet to duplicate this functionality in code.
A: It's difficult to tell from your question, but it sounds like you may be attempting to add a ControlContribution to the toolbar and returning a Button. This would make the button on the toolbar appear like a native button, which seems to be what you are describing. This would look something like this:
IToolBarManager toolBarManager = actionBars.getToolBarManager();
toolBarManager.add(new ControlContribution("Toggle Chart") {
    @Override
    protected Control createControl(Composite parent) {
        Button button = new Button(parent, SWT.PUSH);
        button.addSelectionListener(new SelectionAdapter() {
            @Override
            public void widgetSelected(SelectionEvent e) {
                // Perform action
            }
        });
        // createControl must return the control it creates
        return button;
    }
});
Instead you should add an Action to the toolbar. This will create a button on the toolbar that matches the standard eclipse toolbar buttons. This would look something like this:
Action myAction = new Action("", imageDesc) {
    @Override
    public void run() {
        // Perform action
    }
};
IToolBarManager toolBarManager = actionBars.getToolBarManager();
toolBarManager.add(myAction);
A: Could you perhaps put in an extract of the code you have for adding actions programmatically to the toolbar? I assume you do this in an ApplicationActionBarAdvisor class? There should be no difference in the look of buttons you add declaratively vs. those you add programmatically.
{ "language": "en", "url": "https://stackoverflow.com/questions/159190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: timeout error when on an ad hoc network I am doing an InternetConnect (WinInet) to an FTP server that happens to be running on an iPhone. When the user is on a normal WiFi network it works fine for him. When he has an ad hoc network with his iPhone he gets an ERROR_INTERNET_TIMEOUT. I presume this is some kind of routing problem. I am curious as to why this gets ERROR_INTERNET_TIMEOUT and not ERROR_INTERNET_CANNOT_CONNECT. Most users, if they are blocked by, for example, a firewall, will get ERROR_INTERNET_CANNOT_CONNECT. I don't understand enough about low-level TCP/IP to understand what kind of situation would cause a timeout error instead of a connect error. I'm really more curious about understanding this than I am in actually solving the user's problem. ;-)
Can anyone explain what is happening with the network packets (the more detailed the better)?
edit: note that, as far as I know, the user doesn't have an outgoing firewall enabled, so it's not a firewall issue. I think it's some kind of routing issue. I have seen similar issues when a user is connected to a VPN and their routing is set up incorrectly, so all packets go to their work network instead of the iPhone. I want to know what's going on with the packets in this situation: the socket connects, but at the next step (whatever that is) they can't communicate.
A: Firewalls these days choose not to respond at all to packets they deem suspicious; this is to prevent port scanners from detecting that there is a machine at the IP. So that could be what is happening in your case: the firewall may simply be dropping the packet and causing a timeout rather than a failure-to-connect error.
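To see the two failure modes side by side, here is a small sketch using plain TCP sockets (Python, purely for illustration; none of this is WinInet). A peer that answers the SYN with a RST produces an immediate "connection refused", which is what WinInet surfaces as ERROR_INTERNET_CANNOT_CONNECT; a host, firewall, or bad route that silently drops the SYN produces a timeout, which surfaces as ERROR_INTERNET_TIMEOUT:

```python
import socket

def classify_connect(host, port, timeout=2.0):
    """Attempt a TCP connect and report how it went.

    "refused": the peer answered our SYN with a RST
               (WinInet would report ERROR_INTERNET_CANNOT_CONNECT)
    "timeout": nothing ever answered the SYN, e.g. a firewall or a
               bad route silently dropped it
               (WinInet would report ERROR_INTERNET_TIMEOUT)
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "connected"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "timeout"
    finally:
        s.close()
```

Connecting to a closed port on your own machine normally comes back "refused" right away, because the OS answers with a RST. In the ad hoc scenario described above, the SYN presumably goes out a route where nothing answers (or a middlebox eats it), so nothing ever comes back and the connect stalls until the timer expires. That is exactly the difference between the two WinInet error codes.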
{ "language": "en", "url": "https://stackoverflow.com/questions/159214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I create a bundle of reusable code in Xcode? I am developing an iPhone app and have to parse xml files in order to put them into a database. I will also be using those same xml parsers in my app so users can import their own data. I was wondering how I can extract those xml parsers into a bundle or a library so I can use them both in my iPhone app and in a command line app where I just populate a sqlite3 database. Thanks in advance!
A: Create a static library project, then use the interproject dependency feature of Xcode to build them in the correct order and link the app with the static library. You'll need to have a common build directory set for all the projects for this to work correctly (at least you did around Xcode 3.0; I didn't check whether this is still a problem with 3.1). You can set the build directory from the target's or project's build settings (in the Get Info pane).
To create an interproject dependency:
* Drag the library project into the application project's Files & Groups pane.
* Set up a target dependency in the application target's Get Info pane. Make it dependent on the library's target.
* Drag the library product into the application target's Link With Libraries step. You can find the library product by expanding the library project within the app project's Files & Groups (click the arrow).
Sounds more complicated than it is. It isn't much.
(Small extras: yes, you need a common build folder as indicated in the Xcode Project Management Guide, and the Xcode Build System Guide can help you "get" Xcode's build system, which, at the cost of starting a religious war, I think is one of the most flexible and simple build systems out there.)
{ "language": "en", "url": "https://stackoverflow.com/questions/159221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }