Q: VBScript -- Using error handling
I want to use VBScript to catch errors and log them (i.e. on error "log something"), then resume the next line of the script. For example:

On Error Resume Next
'Do Step 1
'Do Step 2
'Do Step 3

When an error occurs on step 1, I want it to log that error (or perform other custom functions with it), then resume at step 2. Is this possible? And how can I implement it?

EDIT: Can I do something like this?

On Error Resume myErrCatch
'Do step 1
'Do step 2
'Do step 3

myErrCatch:
'log error
Resume Next

A: You can regroup your step function calls in a facade function:

sub facade()
    call step1()
    call step2()
    call step3()
    call step4()
    call step5()
end sub

Then let your error handling be in an upper function that calls the facade:

sub main()
    On Error Resume Next

    call facade()

    If Err.Number <> 0 Then
        ' MsgBox or whatever. You may want to display or log your error there.
        msgbox Err.Description
        Err.Clear
    End If

    On Error Goto 0
end sub

Now, let's suppose step3() raises an error. Since facade() doesn't handle errors (there is no On Error Resume Next in facade()), the error will be returned to main(), and step4() and step5() won't be executed. Your error handling is now refactored into one code block.

A: VBScript has no notion of throwing or catching exceptions, but the runtime provides a global Err object that contains the results of the last operation performed. You have to explicitly check whether the Err.Number property is non-zero after each operation.

On Error Resume Next

DoStep1
If Err.Number <> 0 Then
    WScript.Echo "Error in DoStep1: " & Err.Description
    Err.Clear
End If

DoStep2
If Err.Number <> 0 Then
    WScript.Echo "Error in DoStep2: " & Err.Description
    Err.Clear
End If

'If you no longer want to continue following an error after that block's completed,
'call this.
On Error Goto 0

The "On Error Goto [label]" syntax is supported by Visual Basic and Visual Basic for Applications (VBA), but VBScript doesn't support this language feature, so you have to use On Error Resume Next as described above.

A: Note that On Error Resume Next is not set globally. You can put the unsafe part of your code, e.g., into a function, which will be interrupted immediately if an error occurs, and call this function from a sub containing the preceding OERN statement.

ErrCatch()

Sub ErrCatch()
    Dim Res, CurrentStep

    On Error Resume Next
    Res = UnSafeCode(20, CurrentStep)
    MsgBox "ErrStep " & CurrentStep & vbCrLf & Err.Description
End Sub

Function UnSafeCode(Arg, ErrStep)
    ErrStep = 1
    UnSafeCode = 1 / (Arg - 10)

    ErrStep = 2
    UnSafeCode = 1 / (Arg - 20)

    ErrStep = 3
    UnSafeCode = 1 / (Arg - 30)

    ErrStep = 0
End Function

A: I'm exceptionally new to VBScript, so this may not be considered best practice, or there may be a reason it shouldn't be done this way that I'm not yet aware of, but this is the solution I came up with to trim down the amount of error-logging code in my main code block.

Dim oConn, connStr
Set oConn = Server.CreateObject("ADODB.Connection")
connStr = "Provider=SQLOLEDB;Server=XX;UID=XX;PWD=XX;Database=XX"
On Error Resume Next
oConn.Open connStr
If Err.Number <> 0 Then : ShowError() : End If

Sub ShowError()
    'You could write the error details to the console...
    errDetail = "<script>" & _
        "console.log('Description: " & Err.Description & "');" & _
        "console.log('Error number: " & Err.Number & "');" & _
        "console.log('Error source: " & Err.Source & "');" & _
        "</script>"
    Response.Write(errDetail)

    '...you could display the error info directly in the page...
    Response.Write("Error Description: " & Err.Description)
    Response.Write("Error Source: " & Err.Source)
    Response.Write("Error Number: " & Err.Number)

    '...or you could execute additional code when an error is thrown...
    'Insert error handling code here
    Err.Clear
End Sub

A: What @cid provided is a great answer.
I took the liberty to extend it to the next level by adding a custom throw handler (like in JavaScript). Hope someone finds it useful.

option Explicit

Dim ErrorCodes
Set ErrorCodes = CreateObject("Scripting.Dictionary")
ErrorCodes.Add "100", "a should not be 1"
ErrorCodes.Add "110", "a should not be 2 either."
ErrorCodes.Add "120", "a should not be anything at all."

Sub throw(iNum)
    Err.Clear
    Dim key
    key = CStr(iNum)
    If ErrorCodes.Exists(key) Then
        Err.Description = ErrorCodes(key)
    Else
        Err.Description = "Error description missing."
    End If
    Err.Source = "Dummy stage"
    Err.Raise iNum 'raise a user-defined error
End Sub

Sub facade(a)
    if a = 1 then
        throw 100
    end if
    if a = 2 then
        throw 110
    end if
    throw 120
End Sub

Sub Main
    on error resume next
    facade(3)
    if err.number <> 0 then
        Wscript.Echo Err.Number, Err.Description
    end if
    on error goto 0
End Sub

Main
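The same dictionary-driven pattern translates directly to other languages. Here is a rough Python sketch of the idea, for comparison; the ScriptError class and the main() wrapper are my own stand-ins for VBScript's Err object and the On Error Resume Next / Err.Number check, not part of any VBScript API:

```python
# Dictionary of user-defined error codes, mirroring the Scripting.Dictionary above
# (keys adapted to ints for simplicity).
ERROR_CODES = {
    100: "a should not be 1",
    110: "a should not be 2 either.",
    120: "a should not be anything at all.",
}

class ScriptError(Exception):
    """Stand-in for VBScript's Err object: carries a number and a description."""
    def __init__(self, number):
        self.number = number
        self.description = ERROR_CODES.get(number, "Error description missing.")
        super().__init__(f"{number}: {self.description}")

def throw(number):
    raise ScriptError(number)

def facade(a):
    if a == 1:
        throw(100)
    if a == 2:
        throw(110)
    throw(120)

def main(a):
    # Equivalent of On Error Resume Next plus checking Err.Number afterwards:
    # the error escapes facade() and is handled in one place here.
    try:
        facade(a)
    except ScriptError as err:
        return err.number, err.description
    return 0, ""

print(main(3))  # (120, 'a should not be anything at all.')
```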
{ "language": "en", "url": "https://stackoverflow.com/questions/157747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "94" }
Q: Browser Helper Objects (BHO) in Windows Vista only with admin rights?
For a university project I programmed an Internet Explorer Browser Helper Object to process web document information while browsing. It ran successfully on Windows XP with IE6 and IE7. Now I have the issue that under Windows Vista the same BHO needs administrator rights to run. The browser and BHO run if you start IE as administrator, but if you start it as a normal user it crashes. The BHO is, of course, registered on the system and activated in the browser. What can I do so that a user with non-admin rights can run the registered and activated BHO? Or is maybe something else the reason and I totally missed it? Thank you very much for your help!

A: Not sure if your problem is related to custom actions in your installer, but the following two links should help you.

* Building a BHO with the UAC in mind - http://simonguest.com/blogs/smguest/archive/2006/11/19/Building-Browser-Helper-Objects-using-Managed-Code.aspx (a little over half way down)
* Using the NoImpersonate script - http://blogs.msdn.com/astebner/archive/2007/05/28/2958062.aspx

A: You should use a debugger to determine why the addon is crashing. Chances are good that you're attempting to write to a protected location, and when that fails, your code fails to check for an error result. Using Process Monitor and watching for Access_Denied returns is often helpful, but using a full debugger is the right way to go.
{ "language": "en", "url": "https://stackoverflow.com/questions/157755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I determine the running Mac OS X version programmatically?
I have a program which needs to behave slightly differently on Tiger than on Leopard. Does anybody know of a system call which will allow me to accurately determine which version of Mac OS X I am running? I have found a number of macro definitions to determine the OS of the build machine, but nothing really good to determine the OS of the running machine. Thanks, Joe

A: The API is through the Gestalt Manager. See "Determining OS Version" in the CocoaDev site.

A: In terminal:

system_profiler SPSoftwareDataType

Gives:

Software:

    System Software Overview:

      System Version: Mac OS X 10.5.5 (9F33)
      Kernel Version: Darwin 9.5.0
      Boot Volume: Main
      Boot Mode: Normal
      Computer Name: phoenix
      User Name: Douglas F Shearer (dougal)
      Time since boot: 2 days 16:55

Or:

sw_vers

Gives:

ProductName:    Mac OS X
ProductVersion: 10.5.5
BuildVersion:   9F33

A: Is the OS version really what you want? There may be a more appropriate thing to test for, such as the presence of, or version number of, a particular framework.

A: See this article here. But in short, if you're using Carbon, use the Gestalt() call, and if you're using Cocoa, there is a constant called NSAppKitVersionNumber which you can simply check against. Edit: For Mac OS X 10.8 and above, don't use Gestalt() anymore. See this answer for more details: How do I determine the OS version at runtime in OS X or iOS (without using Gestalt)?

A: Could you just check for the presence of a capability? For instance:

if (NSClassFromString(@"NSKeyedArchiver") != Nil)

or

if ([arrayController respondsToSelector: @selector(selectedIndexes)])

then you know that the operating system does what you need it to do, not that Apple's product marketing group gave it a particular number ;-)

A: Within your program you can use Gestalt. Here is the code I am using for my program to obtain the OS version.

long version = 0;
OSStatus rc0 = Gestalt(gestaltSystemVersion, &version);

if ((rc0 == 0) && (version >= 0x1039)) {
    // will work with version 10.3.9
    // works best with version 10.4.9
    return; // version is good
}

if (rc0) {
    printf("gestalt rc=%i\n", (int)rc0);
} else {
    printf("gestalt version=%08x\n", version);
}

A: respondsToSelector: is almost certainly better than you maintaining a table of what given releases do and do not implement. Be lazy. Let the runtime tell you whether it can do something or not, and fall back to older methods when you need to. Your code will be far less fragile, because you don't have to maintain your own global data that the rest of your code has to keep checking with.

A: Run this in the command line:

system_profiler SPSoftwareDataType | grep Mac
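For scripting scenarios there is also a cross-platform route: Python's platform.mac_ver() returns the OS version string on a Mac and empty strings elsewhere, so you can do a version comparison without Gestalt or parsing sw_vers output. A rough sketch (the helper name and tuple comparison are mine, for illustration):

```python
import platform

def macos_version():
    """Return the Mac OS X version as a tuple of ints, or None when not on a Mac."""
    release, _versioninfo, _machine = platform.mac_ver()
    if not release:
        return None  # mac_ver() yields empty strings on non-Mac systems
    return tuple(int(part) for part in release.split("."))

version = macos_version()
if version is not None and version >= (10, 4):
    # e.g. the Tiger-vs-Leopard distinction from the question
    print("Running Mac OS X", ".".join(map(str, version)))
```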
{ "language": "en", "url": "https://stackoverflow.com/questions/157759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Date/time formats for various countries
Is there any source on the web where I could find date and time formats used in individual countries in the world? I was checking the languages listed in Control Panel in Windows, but there are some countries missing (for example, countries in Africa, etc.). I found some locale tables on the web, but these usually differ from the settings in Windows, so I don't know which version to use. Thank you, Petr

A: The Common Locale Data Repository is an excellent resource for locale data. From the website you can download an XML version of the database, which includes date/time formats, number formats, and lots of other locale-specific data.

A: This webpage shows how to use date and time based on culture settings: http://msdn.microsoft.com/en-us/library/5hh873ya.aspx I'm assuming you're programming something, so this would probably help you create a date-time based on the environment settings. As for using Windows settings vs researched settings, go with Windows settings if you're making something for Windows.

A: This Wikipedia page is the most comprehensive I've found, but it suffers from much the same problem you noted in Control Panel. Maybe you can help by updating it with any information you've found independently?
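To illustrate how much short-date formats vary per country, here is a rough Python sketch with a hand-picked table of patterns. The patterns below are illustrative only; CLDR (mentioned above) is the authoritative source:

```python
from datetime import date

# Illustrative short-date patterns; consult CLDR for authoritative per-locale data.
DATE_FORMATS = {
    "en-US": "%m/%d/%Y",   # 09/30/2008
    "en-GB": "%d/%m/%Y",   # 30/09/2008
    "de-DE": "%d.%m.%Y",   # 30.09.2008
    "ja-JP": "%Y/%m/%d",   # 2008/09/30
}

def format_date(d, locale_tag):
    """Format a date with the (illustrative) short-date pattern for locale_tag."""
    return d.strftime(DATE_FORMATS[locale_tag])

d = date(2008, 9, 30)
for tag in DATE_FORMATS:
    print(tag, format_date(d, tag))
```

The same calendar date renders four different ways, which is exactly why a single canonical data source matters.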
{ "language": "en", "url": "https://stackoverflow.com/questions/157761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Table Column Formatting
I'm trying to format a column in a <table/> using a <col/> element. I can set background-color, width, etc., but can't set the font-weight. Why doesn't it work?

<table>
  <col style="font-weight:bold; background-color:#CCC;">
  <col>
  <tr>
    <td>1</td>
    <td>2</td>
  </tr>
  <tr>
    <td>3</td>
    <td>4</td>
  </tr>
</table>

A: Your best bet is to apply your styling directly to the <td> tags. I've never used the <col> tag, but most browsers let you apply formatting at the <table> and <td>/<th> level, but not at an intermediate level. For example, if you have

<table>
  <tr class="Highlight">
    <td>One</td>
    <td>Two</td>
  </tr>
  <tr>
    <td>A</td>
    <td>B</td>
  </tr>
</table>

then this CSS won't work

tr.Highlight { background: yellow }

but this will

tr.Highlight td { background: yellow }

Also: I assume your code above is just for demonstration purposes and you're not actually going to apply styles inline.

A: As far as I know, you can only format the following using CSS on the <col> element:

* background-color
* border
* width
* visibility

This page has more info. Herb is right - it's better to style the <td>'s directly. What I do is the following:

<style type="text/css">
#mytable tr > td:first-child { color: red; }            /* first column */
#mytable tr > td:first-child + td { color: green; }     /* second column */
#mytable tr > td:first-child + td + td { color: blue; } /* third column */
</style>
</head>
<body>
<table id="mytable">
  <tr>
    <td>text 1</td>
    <td>text 2</td>
    <td>text 3</td>
  </tr>
  <tr>
    <td>text 4</td>
    <td>text 5</td>
    <td>text 6</td>
  </tr>
</table>

This won't work in IE, however.

A: You might have just needed this:

tr td:first-child label { font-weight: bold; }

A: Have you tried applying the style through a CSS class?
The following appears to work:

<style type="text/css">
.xx {
  background: yellow;
  color: red;
  font-weight: bold;
  padding: 0 30px;
  text-align: right;
}
</style>

<table border="1">
  <col width="150" />
  <col width="50" class="xx" />
  <col width="80" />
  <thead>
    <tr>
      <th>1</th>
      <th>2</th>
      <th>3</th>
      <th>4</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td>2</td>
      <td>3</td>
      <td>4</td>
    </tr>
  </tbody>
</table>

Reference for the col element

A: Reading through this as I was attempting to style a table such that the first column would be bold text and the other four columns would be normal text. Using the col tag seemed like the way to go, but while I could set the widths of the columns with the width attribute, the font-weight: bold wouldn't work. Thanks for pointing me in the direction of the solution. By styling all the td elements

td { font-weight: bold; }

and then using an adjacent sibling selector to select columns 2-5 and style them back to normal

td + td { font-weight: normal; }

Voila, all's good :)

A: A col tag must be inside of a colgroup tag. This may be something to do with the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/157770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How can I easily turn a .Net Windows Form app into an Asp.net app using Visual Studio 2005?
I have a pretty basic Windows Forms app in .NET. All the code is C#. I'd like to turn it into an ASP.NET web app. How can I easily do this? I think there's an easy way, since the controls I drag/drop onto the Windows Forms designer are pretty much the same ones that I drag/drop onto the aspx design page. Note: the Windows Forms app doesn't make any network requests or anything... just reads some text files from the local machine.

A: There are two big problems here. First, they might look the same, but they are implemented completely differently - all of the UI work will need to be redone, largely from scratch. You will probably be able to re-use your actual "doing" code, though (i.e. the logic that manipulates the files). Second, define "local machine". For an ASP.NET application, this is usually the server, which might not be what you want. You can do some things client-side via JavaScript, but the sandbox security model will prevent you doing much file IO. I would suggest perhaps looking at Silverlight, which is somewhere between the two - or perhaps just use ClickOnce to deploy your existing WinForms exe to remote clients.

A: You'll likely have issues reading files from the local machine via ASP.NET: for an ASP.NET app, the local machine is the web server, not the computer where the user is sitting. Also, there's a lot more to it than you'd think. Odds are, somewhere in your app you're relying on the fact that a Windows Forms app is inherently stateful, and moving to ASP.NET will be a rude awakening in that respect.

A: The interface is going to have to change, as the controls are different. If you have supporting business classes and other items of that nature, you can copy those over, but otherwise the UI will need to be re-built.

A: ASP.NET and Windows Forms are two completely different models.
Yes, the designers are similar, but the underlying representation of the page/form is different. The major difference is that ASP.NET is stateless, so you have to adjust your method of storing data between operations and push it to the Session object. For local apps that only you will use, my recommendation is to stick with Windows Forms.

A: That would really depend on how the app is designed. If you have all the "business logic" of the app in the Windows Forms, then you will have a difficult time converting it over. If the logic is in its own layer, it will be much easier. Please realize there are a lot of differences between Windows and Web Forms; one of the largest is that web forms are disconnected from the user, and state information is sent with each request. WinForms are certainly more full-featured.

A: Unfortunately, it is not going to be that simple. Your C# code and logic will transfer over easily, but the WinForms UI is completely different from the ASP.NET UI. If you are interested in a "web application" that can be designed using the same kind of non-HTML GUI designer as your existing C# app, look at Microsoft Silverlight. It is designed to be a version of the new Windows Presentation Foundation (WPF, the successor to WinForms) for the web.
{ "language": "en", "url": "https://stackoverflow.com/questions/157773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I prevent Firefox XMLSerializer from capitalizing nodes
I'm creating an XML document in JavaScript on the client side, and then transforming it back to a string to send to the server. Mozilla has a handy method to accomplish this: XMLSerializer().serializeToString(), which I'm using. However, there seems to be a bug in this method: it returns all node names in uppercase and all attribute names in lowercase (regardless of the capitalization I used to create the node). Is there any way to circumvent this behavior and get back the XML string with my original capitalization? More generally, is there any way to create an XML document in Mozilla and return it to a string without having your capitalization overridden?

A: It looks like you are working with an HTML document. Try operating on an XML document instead.

var oDocument = new DOMParser().parseFromString("<root />", "text/xml");
oDocument.documentElement.appendChild(oDocument.createElementNS("http://myns", "x:test"));
alert(new XMLSerializer().serializeToString(oDocument));

or

var oDocument = document.implementation.createDocument("", "", null);
oDocument.appendChild(oDocument.createElementNS("http://myns", "x:test"));
alert(new XMLSerializer().serializeToString(oDocument));

Regards
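The underlying principle - a true XML DOM preserves case, while an HTML DOM normalizes it - can be checked outside the browser as well. A rough Python sketch using the standard-library xml.dom.minidom (the node and attribute names here are illustrative):

```python
from xml.dom.minidom import parseString

# Parse a real XML document (not HTML), then add a mixed-case node and attribute.
doc = parseString("<root/>")
node = doc.createElement("myMixedCaseNode")
node.setAttribute("myAttr", "someValue")
doc.documentElement.appendChild(node)

# Serializing an XML DOM keeps the original capitalization.
serialized = doc.toxml()
print(serialized)
```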
{ "language": "en", "url": "https://stackoverflow.com/questions/157781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I get the MAX row with a GROUP BY in LINQ query?
I am looking for a way in LINQ to match the following SQL query:

Select max(uid) as uid, Serial_Number from Table Group BY Serial_Number

Really looking for some help on this one. The above query gets the max uid of each Serial_Number because of the Group By syntax.

A:

using (DataContext dc = new DataContext())
{
    var q = from t in dc.TableTests
            group t by t.SerialNumber into g
            select new
            {
                SerialNumber = g.Key,
                uid = (from t2 in g select t2.uid).Max()
            };
}

A:

var q = from s in db.Serials
        group s by s.Serial_Number into g
        select new { Serial_Number = g.Key, MaxUid = g.Max(s => s.uid) };

A: This can be done using GroupBy and SelectMany in a LINQ lambda expression:

var groupByMax = list.GroupBy(x => x.item1)
                     .SelectMany(y => y.Where(z => z.item2 == y.Max(i => i.item2)));

A: In method-chain form:

db.Serials.GroupBy(i => i.Serial_Number)
          .Select(g => new { Serial_Number = g.Key, uid = g.Max(row => row.uid) });

A: I've checked DamienG's answer in LINQPad. Instead of

g.Group.Max(s => s.uid)

it should be

g.Max(s => s.uid)

Thank you!

A: The answers are OK if you only require those two fields, but for a more complex object, maybe this approach could be useful:

from x in db.Serials
group x by x.Serial_Number into g
orderby g.Key
select g.OrderByDescending(z => z.uid).FirstOrDefault()

... this will avoid the "select new".

A: Building upon the above, I wanted to get the best result in each group into a list of the same type as the original list:

var bests = from x in origRecords
            group x by x.EventDescriptionGenderView into g
            orderby g.Key
            select g.OrderByDescending(z => z.AgeGrade).FirstOrDefault();

List<MasterRecordResultClaim> records = new List<MasterRecordResultClaim>();
foreach (var bestresult in bests)
{
    records.Add(bestresult);
}

EventDescriptionGenderView is a meld of several fields into a string. This picks the best AgeGrade for each event.
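For comparison, the same group-by-max shape outside LINQ, as a rough Python sketch (the function name and tuple layout are mine; the field names mirror the question's table):

```python
def max_uid_per_serial(rows):
    """rows: iterable of (uid, serial_number) tuples.
    Returns {serial_number: max uid}, equivalent to
    SELECT max(uid), Serial_Number FROM Table GROUP BY Serial_Number."""
    result = {}
    for uid, serial in rows:
        # Keep the largest uid seen so far for each serial number.
        if serial not in result or uid > result[serial]:
            result[serial] = uid
    return result

rows = [(1, "A"), (5, "A"), (3, "B"), (2, "B")]
print(max_uid_per_serial(rows))  # {'A': 5, 'B': 3}
```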
{ "language": "en", "url": "https://stackoverflow.com/questions/157786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: What's the WPF equivalent of WinForms components?
Windows Forms allows you to develop components: non-visual elements that can have a designer. Built-in components include the BackgroundWorker, Timer, and a lot of ADO.NET objects. It's a nice way to provide easy configuration of a complicated object, and it enables designer-assisted data binding. I've been looking at WPF, and it doesn't seem like there's any concept of components. Am I right about this? Is there some method of creating components (or something like a component) that I've missed? I've accepted Bob's answer because after a lot of research I feel like fancy Adorners are probably the only way to do this.

A: Just from my own observations, it seems like Microsoft is trying to move away from having components and similar things in the GUI. I think WPF tries to limit most of what's in the XAML to strictly GUI things. Data binding, I guess, would be the only exception. I know I try to keep most everything else in the code-behind or in separate classes or assemblies. Probably not exactly the answer you wanted, but it's my $0.02.

A: I have the same question. The advantage of a component-like mechanism is that the designer can add it in Blend, configure it in the designer with the Properties editor, and use data binding. What do you think of the solution below? It works.
public class TimerComponent : FrameworkElement
{
    public Timer Timer { get; protected set; }

    public TimerComponent()
    {
        if (!System.ComponentModel.DesignerProperties.GetIsInDesignMode(this))
        {
            Visibility = Visibility.Collapsed;
            Timer = new Timer(OnTimerTick, null, Timeout.Infinite, Timeout.Infinite);
        }
    }

    void OnTimerTick(object ignore)
    {
        Dispatcher.BeginInvoke(new Action(RaiseTickEvent));
    }

    #region DueTime Dependency Property

    public int DueTime
    {
        get { return (int)GetValue(DueTimeProperty); }
        set { SetValue(DueTimeProperty, value); }
    }

    public static readonly DependencyProperty DueTimeProperty =
        DependencyProperty.Register("DueTime", typeof(int), typeof(TimerComponent),
            new UIPropertyMetadata(new PropertyChangedCallback(OnDueTimeChanged)));

    static void OnDueTimeChanged(DependencyObject obj, DependencyPropertyChangedEventArgs e)
    {
        var target = obj as TimerComponent;
        if (target.Timer != null)
        {
            var newDueTime = (int)e.NewValue;
            target.Timer.Change(newDueTime, target.Period);
        }
    }

    #endregion

    #region Period Dependency Property

    public int Period
    {
        get { return (int)GetValue(PeriodProperty); }
        set { SetValue(PeriodProperty, value); }
    }

    public static readonly DependencyProperty PeriodProperty =
        DependencyProperty.Register("Period", typeof(int), typeof(TimerComponent),
            new UIPropertyMetadata(new PropertyChangedCallback(OnPeriodChanged)));

    static void OnPeriodChanged(DependencyObject obj, DependencyPropertyChangedEventArgs e)
    {
        var target = obj as TimerComponent;
        if (target.Timer != null)
        {
            var newPeriod = (int)e.NewValue;
            target.Timer.Change(target.DueTime, newPeriod);
        }
    }

    #endregion

    #region Tick Routed Event

    public static readonly RoutedEvent TickEvent = EventManager.RegisterRoutedEvent(
        "Tick", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(TimerComponent));

    public event RoutedEventHandler Tick
    {
        add { AddHandler(TickEvent, value); }
        remove { RemoveHandler(TickEvent, value); }
    }

    private void RaiseTickEvent()
    {
        RoutedEventArgs newEventArgs = new RoutedEventArgs(TimerComponent.TickEvent);
        RaiseEvent(newEventArgs);
    }

    #endregion
}

And is used as follows.

<StackPanel>
  <lib:TimerComponent Period="{Binding ElementName=textBox1, Path=Text}" Tick="OnTimerTick" />
  <TextBox x:Name="textBox1" Text="1000" />
  <Label x:Name="label1" />
</StackPanel>

A: So far, the only approach I see that makes sense is to make an instance of the class a static resource and configure it from XAML. This works, but it'd be nice if there were something like the WinForms designer component tray that these could live in.

A: You can put whatever you like inside a resource dictionary, including classes that have no relation whatsoever to WPF. The following XAML adds the string "Hello" directly into a window (the actual string, not a control that shows the string); you can use the same method to place anything - including classes you write yourself - into a XAML file.

<Window x:Class="MyApp.Window1"
        xmlns:sys="clr-namespace:System;assembly=mscorlib"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Window.Resources>
    <sys:String x:Key="MyString">Hello</sys:String>
  </Window.Resources>
</Window>
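Stripped of the WPF plumbing, the non-visual "component" idea boils down to an object with configurable timing and a Tick-style callback. A rough Python sketch of that shape, for comparison (the class and method names are mine, not part of any WPF or WinForms API):

```python
import threading

class TickTimer:
    """One-shot timer with a Tick-style callback, configured up front like the
    DueTime/Tick pair above. Illustrative sketch only."""

    def __init__(self, due_time_s, on_tick):
        self.on_tick = on_tick
        self._timer = threading.Timer(due_time_s, self._fire)

    def _fire(self):
        # In WPF this is where the Tick routed event would be raised.
        self.on_tick()

    def start(self):
        self._timer.start()

    def cancel(self):
        self._timer.cancel()

fired = threading.Event()
t = TickTimer(0.05, fired.set)  # subscribe, then start - like wiring Tick in XAML
t.start()
fired.wait(timeout=2)
print("ticked:", fired.is_set())
```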
{ "language": "en", "url": "https://stackoverflow.com/questions/157795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: GB English, or US English?
If you have an API, and you are a UK-based developer with a highly international audience, should your API be setColour() or setColor()? (To take one word as a simple example.) UK-based engineers are often quite defensive about their 'correct' spellings, but it could be argued that US spelling is more 'standard' in the international market. I guess the question is: does it matter? Do developers in other locales struggle with GB spelling, or is it normally quite apparent what things mean? Should it all be US English?

A: I would tend to use US English, as that has become the norm in other APIs. Speaking as an English programmer, I don't have any problem using "color", for example.

A: I'm not a native speaker. In writing, I always try to use en-GB. However, in programming I always use en-US instead of British English, for much the same reason that I don't use German or French for my identifiers.

A: Assuming this is a Java or C# API, it probably doesn't matter, given the pervasiveness of auto-complete functionality in the IDEs. If this is for a dynamic language, or one where modern IDEs aren't the norm, I would go with the American spellings. Of course, I am an American and am therefore obviously biased, but it seems like most of the code I see from developers who aren't native English speakers uses US spellings for their variable names, etc.

A: As an English programmer, I use en-US for my software dev. American English dominates so well in almost every other API that it's much easier to stick to one type of spelling and remove the ambiguity. It's a waste of time searching for a method only to find the spelling is off by one letter due to spelling localisations.

A: I would go with current standards and pick the US English spelling. HTML and CSS are acknowledged standards with the spelling "color"; secondly, if you are working with a framework like .NET, then chances are you already have color available in different namespaces.
The mental tax of having to deal with two spellings would hamper rather than help developers.

Label myLabel.color = setColour();

A: I have trouble with APIs that are not in US English, just because of the spelling differences. I know what the words mean, but the different spelling trips me up. Most of the libraries and frameworks I'm familiar with use the US spellings. Of course, I'm an American, so... US English is my native language.

A: I got another sample: Serialise and Serialize. :) Personally, I don't think it matters much. I've worked on projects in countries that use UK-English spelling, and they used the UK spelling. It still is English, and it doesn't really matter much due to IntelliSense.

A: The majority of the development documentation (just like MSDN) is in American English. So it might be better to stay with the mainstream and use American English in your API if you are targeting an international audience.

A: Generally I would be a stickler for GB spelling, but I think that the code reads a lot better if it is consistent, and I find:

Color lineColor = Color.Red;

to look a lot better than:

Color lineColour = Color.Red;

I guess that ultimately it doesn't really matter.

A: As a Canadian, I run into this all the time. It's very difficult for me to type "color", as my muscle memory keeps reaching for the 'u'. However, I tend to adopt the language of the libraries. Java has the Color class, so I use color.

A: I try to find alternative words if possible. I will let -ize slide. For Colour I could possibly use Hue, Ink, Foreground/Background... If not, as an Englishman, I will use en-GB, because I have some pride left in my country and origins. If it was to be part of a bigger project, however, especially an international one, I would keep the entire project consistent above having a small part be in one language variation and the rest in another.

A: Depends where you see most of your customers. I personally prefer using English-GB (e.g. Colour) in my private code, but I go to Color for externally published applications/API/code!

A: Even though I'm usually very pedantic about correct spelling, as a UK developer I would always go with the American 'color' spelling. In all programming languages I've encountered, it's like this, so for the sake of consistency, using 'color' makes a lot of sense.

A: I'm one of these people whose heart rate and blood pressure rise each time I'm forced to use American English in setup files, etc., due to the fact that the software doesn't give the option for British English, but that's just me :) My personal opinion on this one, however, would be to provide both spellings: give them setColor() and setColour(), write up the code in one of them, and just have the second one pass the parameters through. This way you keep both groups happy; granted, your IntelliSense gets a bit longer, but at least people can't complain about you using the 'wrong' language.

A: I'm also going to have to side with US English, simply to keep things consistent (as others have already noted here). Although I am a native US-English speaker, I have done software projects with both German and Swedish software companies, and in both cases the temptation occasionally would strike my teammates to use German or Swedish text in the code -- usually for comments, but sometimes also for variable or method names. Even though I can speak those languages, it's really jarring on the eyes and makes it harder to bring a new non-speaker into the project. Most European software companies (at least the ones I've worked with) behave the same way -- the code stays in English, simply because that makes the code more international-friendly should another programmer come on board. The internal documentation usually tends to be done in the native language, though.
That said, the distinction here is about two different dialects of English, which isn't quite as extreme as seeing two totally different languages in the same source code file. So I would say, keep the API in US English, but your comments in GB English if it suits you better.

A: I'd say look at how other libraries in your language choose, and follow their convention. The designers of the programming language and its built-in APIs made a choice, and whether the users are international or not, they are used to seeing spellings consistent with this choice. You are not targeting speakers of a different language but users of a programming language. Odds are they've learned quite a few words in the foreign language from the built-in APIs, and they might not be aware there are differences between US English and GB English. Don't confuse them by switching sides of the pond. I use .NET languages primarily, and the .NET Framework uses US English spellings. On this platform, I'd stick with US English. I'm not aware of any languages standardized on GB English, but if yours has done so, then by all means stay consistent with the language.

A: I agree with the "go for American" troupe. I myself prefer en-GB when writing e-mails and such, but American English is pretty much the standard in all programming circles.

A: Even though British English is widely spoken throughout the world, I recommend using American English, which, as other people have said, dominates the market.

A: It comes naturally for me to work in UK English without even thinking about it. However, if you are developing internal procedures, it doesn't really matter. If you are creating APIs that will be used publicly, and your audience is international, why not implement both?

A: Definitely US English.

A: First, I'm in the US. In my current project, it's always "color"; however, the word we can't seem to pick a spelling for is "grey" vs "gray". It's actually gotten quite annoying.
A: I always use en-GB for all my programming. I guess it's due to a heavy influence of British novels. However, might it not be possible to have two different sets of APIs (one for en-US, one for en-GB) which internally call the same function? This might bloat the header files though, so maybe depending upon a preprocessor definition, a conditional compilation? If you're using C++, you could do something like below...

#ifdef ENGB
struct Colour {
    //blahblahblah
};
void SetColour(Colour c);
#else
struct Color {
    //blahblahblah
};
void SetColor(Color c);
#endif

Depending upon whether the client programmer defines ENGB or not, as below

#define ENGB

he could use the APIs in the culture that he prefers. Maybe overkill for such a trivial purpose, but hey, if it seems important, why not! :) A: Selection of language for identifier names has nothing to do with audience and everything to do with the original language in which the framework or API was developed. I don't know very many languages but I cannot think of a single one that uses anything other than US English. The dangers of introducing subtle bugs due to different spellings are too great IMHO. A function override can easily become a pseudo-overload. A config file could become invalid due to difference in spellings. A situation might arise where the same conceptual object has been defined using multiple classes, using both en-US and en-GB. So therefore, whether a piece of code is purely for internal use or intended for external use as well, the spellings used must always match the original language of the platform/framework/compiler/API. A: If all of your programmers are British, use en-gb. If your code will be seen by programmers outside of Britain, then en-us would be a better choice. One minor point, we rely on a translation service to copy our documentation into other languages. We have found we get better translations when using en-us as the source. A: You need to consider your audience.
Who is going to use the code and what are they expecting to see? I work in a company that has offices in both Canada & US. We use Canadian spelling (very similar to British English) when producing documentation and code in Canada and US spelling for what is used in the US. Some things cross borders, but the difference in spelling is rarely an issue. It actually can generate some interesting dialogue when Americans are not aware of different spellings for Canadian and British English. Sometimes they are OK with it, other times they insist on it changing to the "correct" spelling. This also affects date formats (dd/mm/yyyy in Canada and mm/dd/yyyy in US). When there is an impasse, we typically go with the US spelling since the people in Canada are familiar with both variations. A: The main reason I've heard for choosing US over UK English is because the UK audience, when confronted with US spelling, realise it's a US application (or presume it is so), whereas a US audience confronted with UK spelling thinks '... hey, that's wrong.. it's color not colour' But like others have said, standardise. Pick one and stick with it. A: I prefer US English. A: If you look back a few hundred years, you will find the change has nothing at all to do with the US, much as many would think so. The changes stem from European influences, particularly the French. Before that time, the English word "colour" was then actually spelt "color". To try to standardise would be futile as half of Chinese children are learning the pre-European influence and US understanding of English, while the other half are taking it up as it stands today, as the English language. If you think language is a problem, then you should consider the Taiwan Kg which weighs in at 600g. I've yet to find out how they managed that one, but I hope they are never employed as aircraft fuelling personnel!
{ "language": "en", "url": "https://stackoverflow.com/questions/157807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "115" }
Q: Field default value from query in MS Access I have a field on a table in MS Access, tblMyTable.SomeID, and I want to set the default value as a user preference in tblUserPref.DefaultSomeID. It doesn't appear that I can set the default value to use a query in the table definition of tblMyTable. I have a form where records are entered into tblMyTable. I've tried to set the default value of the field on the form, but it doesn't seem to accept a query either. So, as a last resort, I'm trying to do it with VBA. I can query the value that I want in VBA, but I can't figure out which event to attach the code to. I want to run the code whenever a new blank record is opened in the form, before the user starts to type into it. I do not want to run the code when an existing record is opened or edited. However, if the code runs for both new blank records and for existing records, I can probably code around that. So far, all of the events I have tried on the field and on the form itself have not run when I wanted them to. Can anyone suggest which event I should use, and on which object? A: I'm not certain I've understood the problem, but I think you're asking to insert a value in the field that is drawn from a different table, based on some runtime information (such as the user name). In that case, you could use the domain lookup function, DLookup(), and you'd pass it the name of the field you want returned, the name of the table or query you're looking it up from, and the criteria for limiting the result to one row (which would, I assume, depend on values you can gather at runtime). That DLookup() formula could then be set permanently as the default value on the form control, and would not cause the form to be dirtied before you've created a real record.
Of course, I may have completely misinterpreted what you're trying to do, so this may not work, but you seemed to want to look something up in a recordset and use the result as your value for new records, and DLookup() would allow you to do that without any coding at all (as well as having the benefit of not dirtying the record prematurely). A: I don't know how you're determining who the current user is, but I will assume it's something you can call programmatically. In the interest of simplicity, I am just going to use Access' built-in "CurrentUser" method for this example. (User-level security required, otherwise it defaults to "Admin".) Create a public function in a VBA module to return the current user's default value:

Public Function InsertDefaultSomeID() As String
    InsertDefaultSomeID = DLookup("DefaultSomeID", "tblUserPref", _
        "UserID='" & CurrentUser & "'")
End Function

In tblUserPref, you need a [UserID] field and a [DefaultSomeID] field. Define a default for your current user. Then, on your form bound to tblMyTable, open the Properties for the [SomeID] field and set the Default Value property to: =InsertDefaultSomeID() Save your form, log on as a user with a known default, and try inserting a new record. Your default value should be automatically populated. A: You probably want to put that code in the "Before Insert" event for the Form itself (none of the objects on the form). Correction: That won't actually trigger until your user starts entering data - so you just need to make sure that the fields you want to have defaults for come after the first data entry field. You could also check for a new record in the "On Current" event.

Private Sub Form_Current()
    If Me.NewRecord Then
        Me.f2 = "humbug"
    End If
End Sub

The disadvantage with this is the new record is created/marked dirty immediately when you enter it.
So, if you thoughtlessly step through the records, you can end up running off the end and creating several extra records with just default data in them - so you will have to do something to trap that sort of condition (e.g., a required field, etc.) A: You're right. You can't set the control's default value property to a value that isn't known at compile time. That value will be determined at run time. So the solution is to set the control's value property, not the defaultvalue property, during the form's current event. Note, getUserID() is a public function used to determine who the user is.

Private Sub Form_Current()
    On Error GoTo Proc_Err

    Dim rs As DAO.Recordset
    Dim fOpenedRS As Boolean

    If Me.NewRecord = True Then
        Set rs = CurrentDb.OpenRecordset("SELECT DefaultSomeID " _
            & "FROM tblUserPref WHERE UserID = " & getUserID())
        fOpenedRS = True
        rs.MoveFirst
        Me!txtPref.Value = rs!DefaultSomeID
    End If

Proc_Exit:
    If fOpenedRS = True Then
        rs.Close
    End If
    Set rs = Nothing
    Exit Sub

Proc_Err:
    MsgBox Err.Number & vbCrLf & Err.Description
    Err.Clear
    Resume Proc_Exit
End Sub

A: Here's a suggested alternative approach. Rather than explicitly INSERTing the default when the user has not specified an explicit value, instead leave that value as missing (I'd probably model this in a dedicated table and model the missing value by, well, not INSERTing a row, but I know many people aren't averse to having many nullable columns in their tables). Then you can replace the missing value in a query. This may or may not be valid in your application, as I say just another approach to handling missing data :)
{ "language": "en", "url": "https://stackoverflow.com/questions/157812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Looking for MacOS Threaded Networking sample code My code needs to run all networking routines in a separate NSThread. I have got a library, which I pass a callback routine for communication:

my thread code
  library
    my callback (networking)
  library
my thread code

My callback routine must POST some data to an HTTP server (NSURLConnection), wait for the answer (start an NSRunLoop?), then return to the library. The library then processes the data. After the library returns to my thread, I can then post a notification to the main thread which handles drawing and user input. Is there any sample code covering how to use NSURLConnection in an NSThread? A: If you need to block until you've done the work and you're already on a separate thread, you could use +[NSURLConnection sendSynchronousRequest:returningResponse:error:]. It's a bit blunt though, so if you need more control you'll have to switch to an asynchronous NSURLRequest with delegate methods (i.e. callbacks) scheduled in the current NSRunLoop. In that case, one approach might be to let your delegate flag when it's done, and allow the run loop to process events until either the flag is set or a timeout is exceeded. A: Kelvin's link is dead, here's a bit of code that does what you are asking for (meant to be run in a method called by [NSThread detachNewThreadSelector:toTarget:withObject:]). Note that the connection basically starts working as soon as you enter the run loop, and that "terminateRunLoop" is meant to be a BOOL set to NO on start and set to YES when the connection finishes loading or has an error. Why would you want to do this instead of a blocking synchronous request? One reason is that you may want to be able to cancel a long-running connection properly, even if you do not have a lot of them. Also I have seen the UI get hung up a bit if you start having a number of async requests going on in the main run loop.
NSURLConnection *connection = [[NSURLConnection connectionWithRequest:request delegate:self] retain];

while (!terminateRunLoop) {
    // A fresh autorelease pool per iteration; draining the same pool
    // repeatedly and then releasing it again would over-release it.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    BOOL ran = [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                        beforeDate:[NSDate distantFuture]];
    [pool drain];
    if (!ran) {
        break;
    }
}

A: To answer your mini-question "start an NSRunLoop?": I'm not sure I understand, but it sounds like you are saying your pseudocode above is all being executed on a secondary thread (i.e., not the main event processing thread). If that's the case, there probably isn't any point in creating an NSRunLoop, because you can't do any useful work while waiting for the HTTP server to respond. Just let the thread block. A: NSURLConnection can be used synchronously, via its asynchronous delegate methods, in a background thread (e.g. NSThread or NSOperation). However, it does require knowledge of how NSRunLoop works. There is a blog post w/ sample code and an explanation, here: http://stackq.com/blog/?p=56 The sample code downloads an image, but I've used the concept to make sophisticated POST calls to a REST API. -Kelvin A: Any particular reason you're using threads? Unless you're opening a lot of connections, NSRunLoop on the main thread should be good enough. If you need to do blocking work in response, create your thread then.
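The "let your delegate flag when it's done, with a timeout" advice above is not Cocoa-specific. As a rough sketch of the same block-until-callback pattern in Java (class and method names are invented for illustration, not part of any thread's API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a worker thread blocks until an asynchronous
// callback signals completion, or a timeout expires. This plays the
// role of the delegate flag plus run-loop spin described above.
class BlockingFetch {
    private final CountDownLatch done = new CountDownLatch(1);
    private volatile String result;

    // Called by whatever delivers the response (the callback side).
    void onResponse(String body) {
        result = body;
        done.countDown();
    }

    // Called on the worker thread; blocks until the callback fires
    // or the timeout elapses (returns null on timeout).
    String await(long timeoutMillis) throws InterruptedException {
        if (!done.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
            return null;
        }
        return result;
    }
}
```

The worker thread simply calls await() after kicking off the request, which keeps the callback-driven library code and the blocking caller decoupled.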
{ "language": "en", "url": "https://stackoverflow.com/questions/157827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Unfamiliar character in SQL statement This is sort of a SQL newbie question, I think, but here goes. I have a SQL query (SQL Server 2005) that I've put together based on an example user-defined function:

SELECT CASEID, GetNoteText(CASEID)
FROM (
    SELECT CASEID
    FROM ATTACHMENTS
    GROUP BY CASEID
) i
GO

The UDF works great (it concatenates data from multiple rows in a related table, if that matters at all) but I'm confused about the "i" after the FROM clause. The query works fine with the i but fails without it. What is the significance of the "i"? EDIT: As Joel noted below, it's not a keyword A: The i names the subquery, which is required, and is also needed for further joins. You will have to prefix columns in the outer query with the subquery name when there are conflicting column names between joined tables, like:

SELECT c.CASEID, c.CASE_NAME,
    a.COUNT AS ATTACHMENTSCOUNT,
    o.COUNT AS OTHERCOUNT,
    dbo.GetNoteText(c.CASEID)
FROM CASES c
LEFT OUTER JOIN (
    SELECT CASEID, COUNT(*) AS COUNT
    FROM ATTACHMENTS
    GROUP BY CASEID
) a ON a.CASEID = c.CASEID
LEFT OUTER JOIN (
    SELECT CASEID, COUNT(*) AS COUNT
    FROM OTHER
    GROUP BY CASEID
) o ON o.CASEID = c.CASEID

A: The "i" is giving your select statement an effective table name. It could also be written (I think - I'm not an MSSQLServer guy) as "AS i". A: When you use a subquery in the FROM clause, you need to give the query a name. Since the name doesn't really matter to you, something simple like 'i' or 'a' is often chosen. But you could put any name there you wanted - there's no significance to 'i' all by itself, and it's certainly not a keyword. If you have a really complex query, you may need to join your subquery with other queries or tables. In that case the name becomes more important and you should choose something more meaningful. A: As others stated, it's a table name alias for the subquery. Outside the subquery, you could use i.CASEID to reference into the subquery results.
It's not too useful in this example, but when you have multiple subqueries, it is a very important disambiguation tool. Although, I'd choose a better variable name. Even "temp" is better. A: The i names your subquery so that if you have a complex query with numerous subqueries and you need to access the fields you can do so in an unambiguous way. It is good practice to give your subqueries more descriptive names to prevent your own confusion when you start getting into writing longer queries; there is nothing worse than having to scroll back up through a long SQL statement because you have forgotten which i.id is the right one or which table/query c.name is being retrieved from. A: "Derived table" is a technical term for using a subquery in the FROM clause. The SQL Server Books Online syntax shows that table_alias is not optional in this case; "table_alias" is not enclosed in brackets and according to the Transact-SQL Syntax Conventions, things in brackets are optional. The keyword "AS" is optional though since it is enclosed in brackets...

derived_table [ AS ] table_alias [ ( column_alias [ ,...n ] ) ]

FROM (Transact-SQL): http://msdn.microsoft.com/en-us/library/ms177634(SQL.90).aspx Transact-SQL Syntax Conventions: http://msdn.microsoft.com/en-us/library/ms177563(SQL.90).aspx A: The lesson to be learned is to think of the person who will inherit your code. As others have said, if the code had been written like this:

SELECT DT1.CASEID, GetNoteText(DT1.CASEID)
FROM (
    SELECT CASEID
    FROM ATTACHMENTS
    GROUP BY CASEID
) AS DT1 (CASEID);

then there's an increased chance the reader would have figured it out and may even pick up on 'DT1' alluding to a 'derived table'.
{ "language": "en", "url": "https://stackoverflow.com/questions/157832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How much of a resource hog is Oracle Enterprise Manager? I'm using a medical records system built on an Oracle database. Our vendor just told us that we need to shut down Oracle's Enterprise Manager service when we're not using it, because it uses too much of the system's resources. I know I can get actual numbers by checking Sysinternals Process Explorer, but I was hoping that someone can give me info from their personal experience. Do I need to shut down EM when I'm done with it, or is he being overly concerned? A: We do the same thing on our testing and production servers too. I don't have any metrics to hand, but it did make a noticeable improvement in overall database response. A: EM should not be that intrusive. I find that it takes about 10% CPU for less than 2 seconds every 30 seconds with the default install (YMMV), and when the system is under load, it doesn't even seem to do that. When I talk about EM here, I am NOT talking about the load on the oracle.exe process, but instead from the nmesrvc and the perl, cmd and emagent processes it spawns. To see its impact on the database itself requires a bit of an Oracle expert. I find Process Explorer a nice tool to help review this in real time because it shows the process hierarchy from the service parent nmesrvc. Frankly, if you're actually seeing an end user difference when stopping the dbconsole service, then your box is over capacity and you likely need to grow up or out. If you use a different tool to manage and monitor Oracle and other application processes, there's not much need for the dbconsole process to run all the time. To get very specific questions about Oracle answered by some of the top people in the field, check out the Oracle-L mailing list. Response times are amazing and the quality of answers is typically better than you'll find in other places. A: I have found that just running Oracle EM can take a lot of resources depending on what you are asking it to do.
I have found that I have rarely used the out of the box configuration and by removing services I don't need I can reduce the amount of resources EM needs considerably. In general, I run EM on a separate application server, not on my DB server. The real power and value of EM is when running / maintaining / monitoring multiple databases and having EM on its own server means I don't have to worry about it affecting any of the DBs. Everything that EM does, you can do manually and I usually go down this route if just managing one DB. However, this route does require a reasonable level of DBA knowledge. A: The only thing that immediately springs to mind to me is that the Enterprise Manager (for Oracle 9 and pre) was Java based. I guess that would give it the potential for a bit of runaway resource usage, but I have never seen any evidence of that on any of the machines that I have used it on here. A: Oracle's EM lets you configure out much of its overhead. This overhead consists of polling many of the services to report alerts if a threshold is met or to provide graphs of performance. That being said, if you configure these features out, then why run it at all. A: It's a hog, I like to run Oracle on Linux, and turn off the GUI after the initial install (Oracle's installer requires it).
{ "language": "en", "url": "https://stackoverflow.com/questions/157845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: obtaining a requestDispatcher What is the benefit of using the servletContext as opposed the request in order to obtain a requestDispatcher? servletContext.getRequestDispatcher(dispatchPath) and using argRequest.getRequestDispatcher(dispatchPath) A: It's there in the javadocs in black and white http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletRequest.html#getRequestDispatcher(java.lang.String) The difference between this method and ServletContext.getRequestDispatcher(java.lang.String) is that this method can take a relative path. A: When you call getRequestDispatcher from ServletContext, you need to provide an absolute path, but for ServletRequest objects, you need to provide a relative path.
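To make the path rule concrete, here is a small hypothetical sketch of how the two lookups differ. These are simplified stand-ins written for illustration, not the real Servlet API or container internals:

```java
// Sketch of the path rule: a context-style lookup accepts only paths
// starting with "/" (context-relative), while a request-style lookup
// also resolves a relative path against the directory of the current
// request's path. Class and method names are invented for this example.
class DispatchPaths {
    // Context-style lookup: path must be absolute within the web app.
    static String resolveFromContext(String path) {
        if (!path.startsWith("/")) {
            throw new IllegalArgumentException("context path must start with /");
        }
        return path;
    }

    // Request-style lookup: a relative path is resolved against the
    // directory of the current request's path.
    static String resolveFromRequest(String currentRequestPath, String path) {
        if (path.startsWith("/")) {
            return path; // already context-relative
        }
        int slash = currentRequestPath.lastIndexOf('/');
        String dir = (slash >= 0) ? currentRequestPath.substring(0, slash + 1) : "/";
        return dir + path;
    }
}
```

So a servlet handling /app/pages/list.jsp can forward to "detail.jsp" via the request, but would need "/app/pages/detail.jsp" via the context.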
{ "language": "en", "url": "https://stackoverflow.com/questions/157846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CSS Layout, Vis Studio 2005, and AJAX Tab Container In a C# Web app, VS 2005 (I am avoiding 2008 because I find the IDE to be hard to deal with), I am getting into a layout stew. I am moving from absolute positioning toward CSS relative positioning. I'd like to divide the screen into four blocks: top (header band), middle left (a stacked menu), middle right (content - here the AJAX tab container), and bottom (footer band), with all 4 blocks positioned relatively, but the controls in the middle right (content) block positioned absolutely relative to the top left corner of the block. A nice side benefit would be to have the IDE design window show all controls as they actually would be displayed, but I doubt this is possible. The IDE is positioning all controls inside the tab panels relative to the top left of the design window; quite a mess. Right now, my prejudice is that CSS is good for relatively positioning blocks, artwork, text etc, but not good for input forms where it is important to line up lots of labels, text boxes, ddl's, check boxes, etc. At any rate, my CSS is not yet up to the task - does anyone know of a good article, book, blog, etc which discusses CSS as it is implemented in ASP.NET, and which might include an example with an AJAX tab control? Any help would be appreciated. Many thanks Mike Thomas A: So far as I know, there's nothing specific to ASP.NET, and you are right (at least in 2008) about it not using referenced stylesheets. These may be of use to you, however: http://www.positioniseverything.net/ - Position is Everything, an excellent CSS resource. http://www.alistapart.com/topics/code/css/ - A List Apart's CSS section, advice on specific techniques. Essentially, what you want is 4 main divs for your site: header, footer, menu, content. Position each div as you like, and then adjust the positioning of their children, which most browsers will treat as 'position relative to my parent' (unless it's absolute, IIRC).
A: I appreciate this probably isn't the comprehensive answer you wanted but it might help... To get the content of the right block to be absolutely positioned relative to its top left, you just need to give the container element a position: relative; style. This will turn that element into a containing block, causing all absolutely positioned child elements to be positioned against its boundaries. Here's a useful article on the subject: CSS Positioning. One of the issues with ASP.NET controls, and the AJAX Control Toolkit in particular which it sounds like you're using, is getting it to spit out valid markup. It's best to use controls that give you 100% control over markup; sometimes this means creating complex UserControls or adding small bits of nasty scripting to your aspx file. Regarding getting the IDE to render properly, don't even bother. I haven't used the design view in VS for nearly two years, and for good reason: it sucks and is based on IE. No IDE is ever going to give you the full rendering experience that a live test will, so personally I don't bother. I know the newer MS tools have much better support for this, although it's still not perfect or as reliable as a real browser (obviously you want to be using Firefox here!). A: As Jeff says, there is nothing specific to ASP.NET with regards to how CSS is implemented. As for styling your forms with CSS, have a read of these to give you some ideas:

* http://www.sitepoint.com/article/fancy-form-design-css/
* http://www.smashingmagazine.com/2006/11/11/css-based-forms-modern-solutions/
* http://www.quirksmode.org/css/forms.html

A: Many thanks for all answers - there is a lot to read here, but what I've been through so far has gotten me over the current hump. I've found it a lot easier not to use VS 2005 IDE for lining up blocks in favor of Style Master, but I am still looking at other 3rd party tools before I buy. Many thanks, Mike Thomas
{ "language": "en", "url": "https://stackoverflow.com/questions/157850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Do Java listeners need to be removed? (In general) Imagine this sample Java class:

class A {
    void addListener(Listener obj);
    void removeListener(Listener obj);
}

class B {
    private A a;

    B() {
        a = new A();
        a.addListener(new Listener() {
            void listen() {}
        });
    }
}

Do I need to add a finalize method to B to call a.removeListener? Assume that the A instance will be shared with some other objects as well and will outlive the B instance. I am worried that I might be creating a garbage collector problem here. What is the best practice? A: My understanding of the GC is that, until the removeListener method is called, class A will be maintaining a reference to the listener and so it won't be a candidate for GC cleanup (and hence finalize won't be called). A: If you have added B as a listener to A, and A is meant to outlive B, the finalize call on B will never get called because there is an instance of B inside of A, so it will never get garbage collected. You could get around this by storing a reference to B in A as a WeakReference (which is not considered a reference during garbage collection), but it would be better to explicitly deregister B from A when you no longer need it. In general it is advised in Java not to use the finalize method, because you can never be sure when it will be called, and you cannot use it to deregister yourself from another class.
A: A will indeed keep B alive through the anonymous instance. But I wouldn't override finalize to address that; rather, use a static inner class that doesn't keep the B alive. A: In your situation the only garbage collection "problem" is that instances of B won't be garbage collected while there are hard references to the shared instance of A. This is how garbage collection is supposed to work in Java/.NET. Now, if you don't like the fact that instances of B aren't garbage-collected earlier, you need to ask yourself at what point you want them to stop listening to events from A? Once you have the answer, you'll know how to fix the design. A: There is a cycle in the reference graph. A references B and B references A. The garbage collector will detect cycles and see when there are no external references to A and B, and will then collect both. Attempting to use the finaliser here is wrong. If B is being destroyed, the reference to A is also being removed. The statement: "Assume that the A instance will be shared with some other objects as well and will outlive the B instance." is wrong. The only way that will happen is if the listener is explicitly removed from somewhere other than a finalizer. If references to A are passed around, that will imply a reference to B, and B will not be garbage collected because there are external references to the A-B cycle. Further update: If you want to break the cycle and not require B to explicitly remove the listener, you can use a WeakReference. Something like this:

class A {
    void addListener(Listener obj);
    void removeListener(Listener obj);
}

class B {
    private static class InnerListener implements Listener {
        private WeakReference<B> m_owner;
        private WeakReference<A> m_source;

        InnerListener(B owner, A source) {
            m_owner = new WeakReference<B>(owner);
            m_source = new WeakReference<A>(source);
        }

        void listen() {
            // Handling reentrancy on this function left as an exercise.
            B b = m_owner.get();
            if (b == null) {
                if (m_source != null) {
                    A a = m_source.get();
                    if (a != null) {
                        a.removeListener(this);
                        m_source = null;
                    }
                }
                return;
            }
            ...
        }
    }

    private A a;

    B() {
        a = new A();
        a.addListener(new InnerListener(this, a));
    }
}

Could be further generalised if needed across multiple classes. A: A holds a reference to B through the anonymous instance implicitly used by the anonymous type created. This means B won't be freed until removeListener is called, and thus B's finalize won't be called. When A is destroyed, its anonymous reference to B will also be destroyed, opening the way to B being freed. But since B holds a reference to A this never happens. This seems like a design issue - if A calls a listener, why do you need B to also hold a reference to A? Why not pass the A that made the call to the listener, if necessary? A: How can A outlive B?: Example usage of B and A:

public static void main(String[] args) {
    B myB = new B();
    myB = null;
}

Behaviour I'd expect: GC will remove myB, and if the myB instance was the only reference to the A instance, it will be removed too. With all their assigned listeners? Did you maybe mean:

class B {
    private A a;

    B(A a) {
        this.a = a;
        a.addListener(new Listener() {
            void listen() {}
        });
    }
}

With usage:

public static void main(String[] args) {
    A myA = new A();
    B myB = new B(myA);
    myB = null;
}

Because then I would really wonder what happens to that anonymous class.... A: I just found a huge memory leak, so I am going to call the code that created the leak wrong and my fix that does not leak right.
Here is the old code: (This is a common pattern I have seen all over) class Singleton { static Singleton getInstance() {...} void addListener(Listener listener) {...} void removeListener(Listener listener) {...} } class Leaky { Leaky() { // If the singleton changes the widget we need to know so register a listener Singleton singleton = Singleton.getInstance(); singleton.addListener(new Listener() { void handleEvent() { doSomething(); } }); } void doSomething() {...} } // Elsewhere while (1) { Leaky leaky = new Leaky(); // ... do stuff // leaky falls out of scope } Clearly, this is bad. Many Leaky's are being created and never get garbage collected because the listeners keep them alive. Here was my alternative that fixed my memory leak. This works because I only care about the event listener while the object exists. The listener should not keep the object alive. class Singleton { static Singleton getInstance() {...} void addListener(Listener listener) {...} void removeListener(Listener listener) {...} } class NotLeaky { private NotLeakyListener listener; NotLeaky() { // If the singleton changes the widget we need to know so register a listener Singleton singleton = Singleton.getInstance(); listener = new NotLeakyListener(this, singleton); singleton.addListener(listener); } void doSomething() {...} protected void finalize() { try { if (listener != null) listener.dispose(); } finally { super.finalize(); } } private static class NotLeakyListener implements Listener { private WeakReference<NotLeaky> ownerRef; private Singleton eventer; NotLeakyListener(NotLeaky owner, Singleton e) { ownerRef = new WeakReference<NotLeaky>(owner); eventer = e; } void dispose() { if (eventer != null) { eventer.removeListener(this); eventer = null; } } void handleEvent() { NotLeaky owner = ownerRef.get(); if (owner == null) { dispose(); } else { owner.doSomething(); } } } } // Elsewhere while (1) { NotLeaky notleaky = new NotLeaky(); // ... 
do stuff // notleaky falls out of scope } A: When the B is garbage collected it should allow the A to be garbage collected as well, and therefore any references in A as well. You don't need to explicitly remove the references in A. I don't know of any data on whether what you suggest would make the garbage collector run more efficiently, however, or when it's worth the bother, but I'd be interested in seeing it. A: A will indeed keep B from being garbage collected if you are using standard references to store your listeners. Alternatively, when you are maintaining lists of listeners, instead of defining new ArrayList<ListenerType>(); you could do something like new ArrayList<WeakReference<ListenerType>>(); By wrapping your object in a WeakReference you can keep it from prolonging the life of the object. This only works, of course, if you are writing the class that holds the listeners. A: Building on what @Alexander said about removing yourself as a listener: unless there is some compelling reason not to, one thing I've learned from my co-workers is that instead of making an anonymous inner Listener and needing to store it in a variable, make B implement Listener, and then B can remove itself when it needs to with a.removeListener(this).
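The weak-listener pattern above is not Java-specific, and it may help to see it end to end. Below is a hedged sketch in Python (chosen for brevity; EventSource, WeakListener and Owner are hypothetical names invented for this illustration, not from the original code): the listener holds only a weak reference to its owner, so registering it does not keep the owner alive, and a dead listener unhooks itself, exactly like the dispose logic in the NotLeaky answer.

```python
import gc
import weakref

class EventSource:
    """Stands in for the singleton that holds listener references."""
    def __init__(self):
        self.listeners = []

    def add_listener(self, listener):
        self.listeners.append(listener)

    def remove_listener(self, listener):
        self.listeners.remove(listener)

    def fire(self):
        # Iterate over a copy so a listener may remove itself mid-dispatch.
        for listener in list(self.listeners):
            listener.handle_event()

class WeakListener:
    """Holds its owner weakly: registration does not prolong the owner's life."""
    def __init__(self, owner, source):
        self._owner_ref = weakref.ref(owner)
        self._source = source

    def handle_event(self):
        owner = self._owner_ref()
        if owner is None:
            # Owner was collected: unhook ourselves from the source.
            self._source.remove_listener(self)
        else:
            owner.do_something()

class Owner:
    """Stands in for NotLeaky / B."""
    def __init__(self, source):
        self.handled = 0
        source.add_listener(WeakListener(self, source))

    def do_something(self):
        self.handled += 1

source = EventSource()
owner = Owner(source)
source.fire()                 # owner is alive, so the event reaches it
del owner                     # drop the only strong reference
gc.collect()
source.fire()                 # dead listener removes itself
print(len(source.listeners))  # → 0
```

Note that in Java a WeakReference is cleared non-deterministically by the garbage collector; the immediate collection shown here is a CPython refcounting detail, so the shape of the pattern carries over but not the timing.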
{ "language": "en", "url": "https://stackoverflow.com/questions/157856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Life Cycle Tools Suite I am looking to replace the life cycle tools currently used by my development teams. Tools that I'm looking for: * *Version Control *Defect/Issue Tracking *Requirements Tracking *Test Case Management *(potentially) Project Management: Project Status, hours entry I have a new beefy server (Windows 2008 Server) to run all tools on. I'm looking at COTS and Open Source options, but haven't decided so far. Other factors: * *Distributed team (different physical sites) *Some Windows Development, some Linux Development *Software, Firmware, Technical Writing need to be able to use it Recommendations on a good suite that will work together? If Open Source, best approach to run on the Windows 2008 Server? A: Svn/Trac plus a few plugins will probably get you most of the way there for free. If you use the version supplied by VisualSVN (they bundle both Trac and Subversion) it's a nice, easy setup too. http://www.visualsvn.com/server/ http://trac.edgewall.org/ http://trac-hacks.org/ A: Have a look at the tools by Atlassian - http://www.atlassian.com/ We've used some of their products (Jira/Confluence) and they link together well. Not exactly expensive either. As an admin / Wiki gardener they are easy to use and manage, which can sometimes be an important, often overlooked requirement. A: The most common choice for a version control system is Subversion. It has good tool support; most tools work with Subversion out of the box. You have a distributed team, so you might consider a distributed version control system, for example Mercurial or Git. Mercurial has better support on Windows. Tool support is a bit lacking compared to "traditional" version control systems like Subversion. All of the above are open source. For project management/issue tracking/requirements tracking there is the open source Trac, which is a combined issue tracker, project management tool and wiki. Trac works with Subversion, Git and Mercurial.
Atlassian provides the commercial JIRA for issue tracking/project management and Confluence for wiki. JIRA works at least with Subversion. Fog Creek has the Mercurial-based Kiln for version control and FogBugz for issue tracking/project management. Both are commercial, and both are available as hosted and as run-on-your-own-server versions. I have used Trac, which works, but you can expect some tinkering and configuration before it works the way you want it to.
{ "language": "en", "url": "https://stackoverflow.com/questions/157862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Rails test hanging - how can I print the test name before execution? I'm having a test hang in our Rails app and can't figure out which one (since it hangs and doesn't get to the failure report). I found this blog post http://bmorearty.wordpress.com/2008/06/18/find-tests-more-easily-in-your-testlog/ which adds a setup hook to print the test name, but when I try to do the same thing it gives me an error saying wrong number of arguments for setup (1 for 0). Any help at all would be appreciated. A: The printing of the test name is the responsibility of the TestRunner. If you are running your tests from the command line you can specify the -v option to print out the test case names. example: ruby test_Foo.rb -v Loaded suite test_Foo Started test_blah(TestFoo): . test_blee(TestFoo): . Finished in 0.007 seconds. 2 tests, 15 assertions, 0 failures, 0 errors A: If you run tests using rake, it will work: rake test:units TESTOPTS="-v" A: This is what I use, in test_helper.rb class Test::Unit::TestCase # ... def setup_with_naming unless @@named[self.class.name] puts "\n#{self.class.name} " @@named[self.class.name] = true end setup_without_naming end alias_method_chain :setup, :naming unless defined? @@aliased @@aliased = true end The @@aliased variable keeps it from being re-aliased when you run it on multiple files at once; @@named keeps the name from being displayed before every test (just before the first to run).
unless @already_logged_this_test Rails::logger.info "\n\nStarting #{@method_name}\n#{'-' * (9 + @method_name.length)}\n" end @already_logged_this_test = true end edit: if you really don't want to edit your files you can risk reopening TestCase and extending run instead: class Test::Unit::TestCase alias :old_run :run def run log_test old_run end end This should work (I don't have Ruby around to test, though). I give up! (in frustration) I checked the code and Rails actually does magic behind the scenes, which is probably why redefining run doesn't work. The thing is: part of the magic is including ActiveSupport::Callbacks and creating callbacks for setup and teardown. Which means class Test::Unit::TestCase setup :log_test private def log_test if Rails::logger # When I run tests in rake or autotest I see the same log message multiple times per test for some reason. # This guard prevents that. unless @already_logged_this_test Rails::logger.info "\n\nStarting #{@method_name}\n#{'-' * (9 + @method_name.length)}\n" end @already_logged_this_test = true end end end should definitely work, and actually running tests with it does work. Where I am confused is that this is the first thing I tried and it failed with the same error you got. A: In case anyone else has the same problem but with integration tests and a headless web server, the following Ruby code will print out each test filename before running it: Dir["test/integration/**/*.rb"].each do |filename| if filename.include?("_test.rb") p filename system "xvfb-run rake test TEST=#(unknown)" end end A: What test framework are you using? Test/Unit? I would take a look at RSpec, which would provide a little more context for your tests.
You can also get a nice HTML report that looks like this: RSpec Result http://myskitch.com/robbyrussell/rspec_results-20070801-233809.jpg Had you had any failing tests, the specific test would be red and it would expand to the specific lines where the test failed to provide you more visibility on why the test failed and where you should look to address the issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/157873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: ZK ajax framework Can ZK easily be integrated in a struts web application? A: Maybe you can find this interesting: http://www.zkoss.org/smalltalks/zk-sample/zk-sample.html Moreover, you can browse the ZK forum you can find on http://www.zkoss.org/ itself; it is more than easy to find some discussion about that. Hope it can help, luca A: Fundamentally struts is a full-page-post-find-action-update-full-page framework. It was written in the last century and represents one of the most successful frameworks for doing that in the last century. You can get struts to work with ZK. Yet this requires downgrading how you use ZK to be something of the last century. ZK is not a full-page-post framework. To have all the productivity that ZK gives, you have to program using event-driven desktop programming patterns. It is hard to explain just how different that is without looking at code. Yet it is far more productive. This is not instant - you have to unlearn how things were normally done last century to find a better way to do things this century. To see the difference consider exploring this sample application http://java.dzone.com/articles/using-desktop-model-view A: more concretely: http://docs.zkoss.org/wiki/ZK/How-Tos/Integrate-Other-Frameworks#Struts_.2B_Tiles_.2B_JSP_.28.2B_Spring.29
{ "language": "en", "url": "https://stackoverflow.com/questions/157896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Class for URL Querystring Manipulation? I am looking for a well tested class for manipulating URLs in .NET. Specifically I want to be able to add/update querystring values given a URL. I have found various classes on the web that do this but none seem really robust and well tested. I also cannot find anything in the .NET Framework; the Uri class doesn't let me manipulate the parameters in the querystring. There is code to do this in the framework but it's all marked internal. Is there a nice robust class around for working with URLs and querystrings? A: Shouldn't a simple string Dictionary suffice for this? An ordinary ASP.NET query string is just composed of key-value pairs separated by ampersands. A: A long time back we used SecureQueryString 2.0. Since it inherits from NameValuePair (if I remember correctly), it provides the ability to add and remove key values easily. It also provides support for encryption of the query if you want, and the ability to convert a URL to a NameValuePair and vice versa.
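The add-or-update operation the question asks for is small enough to sketch in full. This is a hedged Python illustration of the parse → modify → re-encode cycle (set_query_param is an invented helper name; a .NET version would pair Uri with HttpUtility.ParseQueryString in the same way):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

def set_query_param(url, key, value):
    """Return url with key set to value in the querystring (added or updated)."""
    parts = urlsplit(url)
    params = parse_qs(parts.query, keep_blank_values=True)
    params[key] = [value]  # replace any existing values for this key
    new_query = urlencode(params, doseq=True)
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       new_query, parts.fragment))

print(set_query_param("http://example.com/page?a=1&b=2", "b", "9"))
# → http://example.com/page?a=1&b=9
print(set_query_param("http://example.com/page?a=1", "c", "3"))
# → http://example.com/page?a=1&c=3
```

keep_blank_values=True and doseq=True preserve empty and repeated parameters, two of the edge cases that make hand-rolled string splicing fragile.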
{ "language": "en", "url": "https://stackoverflow.com/questions/157898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I prepend <%= request.getContextPath() %> to all relative URLs inside a jsp page? The subject says it all, almost. How do I automatically fix jsp pages so that relative URLs are mapped to the context path instead of the server root? That is, given for example <link rel="stylesheet" type="text/css" href="/css/style.css" /> how do I set up things in a way that maps the css to my-server/my-context/css/style.css instead of my-server/css/style.css? Is there an automatic way of doing that, other than changing all lines like the above to <link rel="stylesheet" type="text/css" href="<%= request.getContextPath() %>/css/style.css" /> A: Look into the <BASE HREF=""> tag. This is an HTML tag which will mean all links on the page should start with your base URL. For example, if you specified <BASE HREF="http://www.example.com/prefix/"> and then had <a href="link/1.html"> then the link would take you to /prefix/link/1.html. Note that the href must be relative, without a leading slash: a root-relative URL such as /link/1.html ignores the base path and still resolves against the server root. This should also work on <LINK> (stylesheet) tags. A: The better way is to use HttpServletResponse.encodeURL(), which will construct the URL appropriately.
{ "language": "en", "url": "https://stackoverflow.com/questions/157905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: In a .net Exception how to get a stacktrace with argument values I am trying to add an unhandled exception handler in .NET (C#) that should be as helpful for the 'user' as possible. The end users are mostly programmers, so they just need a hint of which object they are manipulating wrong. I'm developing a window similar to the Windows XP error report when an application crashes, but one that immediately gives as much information as possible about the exception thrown. While the stack trace enables me (since I have the source code) to pinpoint the source of the problem, the users don't have it and so they are lost without further information. Needless to say, I have to spend lots of time supporting the tool. There are a few system exceptions, like the KeyNotFoundException thrown by the Dictionary collection, that really bug me since they don't include in the message the key that wasn't found. I can fill my code with tons of try catch blocks but it's rather aggressive and is lots more code to maintain, not to mention a ton more strings that have to end up being localized. Finally the question: Is there any way to obtain (at runtime) the values of the arguments of each function in the call stack trace? That alone could resolve 90% of the support calls. A: I don't think System.Diagnostics.StackFrame supplies argument information (other than the method signature). You could instrument the troublesome calls with trace logging via AOP, or even use its exception interception features to conditionally log without having to litter your code. Have a look around http://www.postsharp.org/. A: Likewise, I've not found anything to derive the parameters automatically at runtime.
Instead, I've used a Visual Studio add-in to generate code that explicitly packages up the parameters, like this: public class ExceptionHandler { public static bool HandleException(Exception ex, IList<Param> parameters) { /* * Log the exception * * Return true to rethrow the original exception, * else false */ } } public class Param { public string Name { get; set; } public object Value { get; set; } } public class MyClass { public void RenderSomeText(int lineNumber, string text, RenderingContext context) { try { /* * Do some work */ throw new ApplicationException("Something bad happened"); } catch (Exception ex) { if (ExceptionHandler.HandleException( ex, new List<Param> { new Param { Name = "lineNumber", Value=lineNumber }, new Param { Name = "text", Value=text }, new Param { Name = "context", Value=context} })) { throw; } } } } EDIT: or alternatively, by making the parameter to HandleException a params array: public static bool HandleException(Exception ex, params Param[] parameters) { ... } ... if (ExceptionHandler.HandleException( ex, new Param { Name = "lineNumber", Value=lineNumber }, new Param { Name = "text", Value=text }, new Param { Name = "context", Value=context} )) { throw; } ... It's a bit of a pain generating the extra code to explicitly pass the parameters to the exception handler, but with the use of an add-in you can at least automate it. A custom attribute can be used to annotate any parameters that you don't want the add-in to pass to the exception handler: public UserToken RegisterUser( string userId, [NoLog] string password ) { } 2ND EDIT: Mind you, I'd completely forgotten about AVICode: http://www.avicode.com/ They use call interception techniques to provide exactly this kind of information, so it must be possible. A: Unfortunately you can't get the actual values of parameters from the callstack except with debugging tools actually attached to the application. 
However by using the StackTrace and StackFrame objects in System.Diagnostics you can walk the call stack and read out all of the methods invoked and the parameter names and types. You would do this like: System.Diagnostics.StackTrace callStack = new System.Diagnostics.StackTrace(); System.Diagnostics.StackFrame frame = null; System.Reflection.MethodBase calledMethod = null; System.Reflection.ParameterInfo [] passedParams = null; for (int x = 0; x < callStack.FrameCount; x++) { frame = callStack.GetFrame(x); calledMethod = frame.GetMethod(); passedParams = calledMethod.GetParameters(); foreach (System.Reflection.ParameterInfo param in passedParams) System.Console.WriteLine(param.ToString()); } If you need actual values then you're going to need to take minidumps and analyse them, I'm afraid. Information on getting dump information can be found at: http://www.debuginfo.com/tools/clrdump.html A: There is a software tool from Red Gate out there that looks very promising. http://www.red-gate.com/products/dotnet-development/smartassembly/ Currently we are collecting our customers' error reports by email and I sometimes struggle with some important data missing (mostly some very basic variables, like the id from the current record so I can reproduce the bug). I haven't tested this tool yet, but my understanding is that it collects the argument values and local variables from the stack. A: If you could do what you are looking for, you would be defeating an integral part of .NET security. The best option in this case is to attach a debugger or profiler (either of them can access those values). The former can be attached at any time; the latter needs to be active before the program starts. A: I don't believe there is a built-in mechanism. Retrieving each frame of the stack trace, while allowing you to determine the method in the trace, only gives you the reflected type information on that method. No parameter information.
I think this is why some exceptions, notably, ArgumentException, et. al. provide a mechanism to specify the value of the argument involved in the exception since there's no easy way to get it. A: Since the end users are developers, you can provide them with a version that enables logging of all the key values/arguments that are passed. And provide a facility so that they can turn on/off logging. A: it is theoretically possible to do what you want by taking advantage of the Portable Executable (PE) file format to get the variable types and offsets, but I ran into a [documentation] wall trying to do this a couple of years ago. Good luck! A: I'm not aware of anything like that, but have wanted it myself. I usually do it myself when throwing exceptions, but if it's not yours to throw, then you may be out with the rest of us. A: I don't know a way to obtain the values of the arguments of each function in the call stack trace, but one solution would be to catch the specific exception (KeyNotFoundException in your example) and re-throw it in a new Exception. That allows you associate any additional information you wish For example: Dim sKey as String = "some-key" Dim sValue as String = String.Empty Try sValue = Dictionary(sKey) Catch KeyEx As KeyNotFoundException Throw New KeyNotFoundException("Class.Function() - Couldn't find [" & sKey & "]", KeyEx) End Try I appreciate your statement about localising the error strings, but if your audience are 'mostly programmers' then convention already dictates a comprehension of English to some degree (rightly or wrongly - but that's another debate!) A: Once you have the exception the two things you are interested in are System.Diagnostics.StackTrace and System.Diagnostics.StackFrame There is an MSDN example here
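For contrast with the CLR behaviour described above, a runtime that keeps locals in its stack frames can hand out argument values at runtime. A hedged Python sketch (Python purely to illustrate what "a stack trace with argument values" would look like; report_stack is an invented helper):

```python
import inspect

def report_stack():
    """Collect (function name, argument values) for each calling frame."""
    frames = []
    for frame_info in inspect.stack()[1:]:
        args = inspect.getargvalues(frame_info.frame)
        values = {name: args.locals[name] for name in args.args}
        frames.append((frame_info.function, values))
    return frames

def inner(key):
    # Imagine this is where a KeyNotFoundException-style error occurs.
    return report_stack()

def outer(user_id, key):
    return inner(key)

trace = outer(42, "missing-key")
print(trace[0])  # → ('inner', {'key': 'missing-key'})
print(trace[1])  # → ('outer', {'user_id': 42, 'key': 'missing-key'})
```

In .NET the equivalent data only exists inside a debugger session or a minidump, which is why the answers above point at clrdump and profiler-based tooling.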
{ "language": "en", "url": "https://stackoverflow.com/questions/157911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Creating File Just for the Sake of a Unit Test This might be an interesting question. I need to test that I can successfully upload and fetch a PDF file. This works for text-based files, but I just wanted to check for PDF. For this unit test to run I need a PDF file. There are a couple of options. I can create a dummy PDF file, store it in some folder, read that file and save the file to the system. But now my unit test is dependent on the PDF file, so anyone who runs the unit test must have the PDF file, which is kinda bad. Another way for me is to create a PDF file. This is not a big deal as I can simply create a dummy file with a .pdf extension, OR I can even use some third-party PDF tool to create the PDF file. Another way also is to embed the PDF document as an embedded resource and then extract that from the assembly. What do you think is the best way to handle this issue? A: Save a PDF file with your tests in a resources directory. Your tests should be as simple as possible, and creating a file is just one more point that could fail. A: I normally add a real file alongside tests that need external content. This way you're testing with a real file, and can easily replace it for different types of content testing. A: I think it's better to deal with the "real" objects as much as possible. Introducing "mock" (in this case it is not the exact term, though) objects can help only if handling the test data set is unfeasible. I don't think that putting a test file in your version control system is a big deal, so better to go with that rather than writing lots of code which may lead to other bugs and more testing. Use a PDF very close to the expected average file, too. A: Adding a pdf file (or a dummy file with a .pdf extension) to the resources is the way to go. You should be able to access it by relative path (e.g. ....\bla\foo.pdf) from your test unit. And do not try to create a valid pdf file just in order to test whether you have read or write access.
The KISS principle applies... A: My concern is that if I place a file in a different directory... let's say resources under unit tests, then don't I need the complete path to the file to access it? I am running my tests manually. Also, when I move to a different machine and place my solution in a folder with a different name, the path to the file gets messed up. Unless there is some way that I can access the project's folder from within my application (there should be).
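One way to sidestep both the checked-in-file dependency and the broken-relative-path worry is for the test to create and remove its own dummy file in a temporary directory. A hedged Python sketch of the idea (the helper names are invented, and the minimal PDF bytes are an assumption: enough for a signature check, not for exercising a real PDF parser; the same idea applies in .NET via Path.GetTempPath):

```python
import os
import tempfile

# Minimal PDF header/trailer: enough for code that only checks the signature.
DUMMY_PDF = b"%PDF-1.4\n%%EOF\n"

def make_dummy_pdf(directory):
    """Write a throwaway .pdf file and return its path."""
    path = os.path.join(directory, "dummy.pdf")
    with open(path, "wb") as fh:
        fh.write(DUMMY_PDF)
    return path

def looks_like_pdf(path):
    """The kind of check an upload/fetch round-trip test might make."""
    with open(path, "rb") as fh:
        return fh.read(5) == b"%PDF-"

with tempfile.TemporaryDirectory() as tmp:
    pdf_path = make_dummy_pdf(tmp)
    print(looks_like_pdf(pdf_path))  # → True
# The directory and its file are gone once the with-block exits.
```

For tests that must exercise a genuine PDF parser, an embedded resource or a fixture located relative to the test file itself (rather than the working directory) remains the safer choice.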
{ "language": "en", "url": "https://stackoverflow.com/questions/157917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Customizing PowerShell Prompt - Equivalent to CMD's $M$P$_$+$G? I've started to "play around" with PowerShell and am trying to get it to "behave". One of the things I'd like to do is to customize the PROMPT to be "similar" to what "$M$P$_$+$G" does in CMD/MS-DOS. A quick rundown of what these do:

Character | Description
$m | The remote name associated with the current drive letter, or the empty string if the current drive is not a network drive.
$p | Current drive and path
$_ | ENTER-LINEFEED
$+ | Zero or more plus sign (+) characters depending upon the depth of the pushd directory stack, one character for each level pushed
$g | > (greater-than sign)

So the final output is something like: \\spma1fp1\JARAVJ$ H:\temp ++> I've been able to add the $M and $_ functionality (and a nifty History feature) to my prompt as follows: function prompt { ## Get the history. Since the history may be either empty, ## a single item or an array, the @() syntax ensures ## that PowerShell treats it as an array $history = @(get-history) ## If there are any items in the history, find out the ## Id of the final one. ## PowerShell defaults the $lastId variable to '0' if this ## code doesn't execute. if($history.Count -gt 0) { $lastItem = $history[$history.Count - 1] $lastId = $lastItem.Id } ## The command that we're currently entering on the prompt ## will be next in the history. Because of that, we'll ## take the last history Id and add one to it. $nextCommand = $lastId + 1 ## Get the current location $currentDirectory = get-location ## Set the Windows Title to the current location $host.ui.RawUI.WindowTitle = "PS: " + $currentDirectory ## And create a prompt that shows the command number, ## and current location "PS:$nextCommand $currentDirectory >" } But the rest is not yet something I've managed to duplicate.... Thanks a lot for the tips that will surely come!
A: This will get you the count of the locations on the pushd stack: $(get-location -Stack).count A: See if this does what you want: function prompt { ## Get the history. Since the history may be either empty, ## a single item or an array, the @() syntax ensures ## that PowerShell treats it as an array $history = @(get-history) ## If there are any items in the history, find out the ## Id of the final one. ## PowerShell defaults the $lastId variable to '0' if this ## code doesn't execute. if($history.Count -gt 0) { $lastItem = $history[$history.Count - 1] $lastId = $lastItem.Id } ## The command that we're currently entering on the prompt ## will be next in the history. Because of that, we'll ## take the last history Id and add one to it. $nextCommand = $lastId + 1 ## Get the current location $currentDirectory = get-location ## Set the Windows Title to the current location $host.ui.RawUI.WindowTitle = "PS: " + $currentDirectory ##pushd info $pushdCount = $(get-location -stack).count $pushPrompt = "" for ($i=0; $i -lt $pushdCount; $i++) { $pushPrompt += "+" } ## And create a prompt that shows the command number, ## and current location "PS:$nextCommand $currentDirectory `n$($pushPrompt)>" } A: Thanks to EBGReens's answer, my "prompt" is now capable of showing the depth of the stack: function prompt { ## Initialize vars $depth_string = "" ## Get the Stack -Pushd count $depth = (get-location -Stack).count ## Create a string that has $depth plus signs $depth_string = "+" * $depth ## Get the history. Since the history may be either empty, ## a single item or an array, the @() syntax ensures ## that PowerShell treats it as an array $history = @(get-history) ## If there are any items in the history, find out the ## Id of the final one. ## PowerShell defaults the $lastId variable to '0' if this ## code doesn't execute. 
if($history.Count -gt 0) { $lastItem = $history[$history.Count - 1] $lastId = $lastItem.Id } ## The command that we're currently entering on the prompt ## will be next in the history. Because of that, we'll ## take the last history Id and add one to it. $nextCommand = $lastId + 1 ## Get the current location $currentDirectory = get-location ## Set the Windows Title to the current location $host.ui.RawUI.WindowTitle = "PS: " + $currentDirectory ## And create a prompt that shows the command number, ## and current location "PS:$nextCommand $currentDirectory `n$($depth_string)>" } A: The following will give you the equivalent of $m. $mydrive = $pwd.Drive.Name + ":"; $networkShare = (gwmi -class "Win32_MappedLogicalDisk" -filter "DeviceID = '$mydrive'"); if ($networkShare -ne $null) { $networkPath = $networkShare.ProviderName } A: Thanks to the tips in: In PowerShell, how can I determine if the current drive is a networked drive or not? In PowerShell, how can I determine the root of a drive (supposing it's a networked drive) I've managed to get it working. My full profile is: function prompt { ## Initialize vars $depth_string = "" ## Get the Stack -Pushd count $depth = (get-location -Stack).count ## Create a string that has $depth plus signs $depth_string = "+" * $depth ## Get the history. Since the history may be either empty, ## a single item or an array, the @() syntax ensures ## that PowerShell treats it as an array $history = @(get-history) ## If there are any items in the history, find out the ## Id of the final one. ## PowerShell defaults the $lastId variable to '0' if this ## code doesn't execute. if($history.Count -gt 0) { $lastItem = $history[$history.Count - 1] $lastId = $lastItem.Id } ## The command that we're currently entering on the prompt ## will be next in the history. Because of that, we'll ## take the last history Id and add one to it. 
$nextCommand = $lastId + 1 ## Get the current location $currentDirectory = get-location ## Set the Windows Title to the current location $host.ui.RawUI.WindowTitle = "PS: " + $currentDirectory ## Get the current location's DRIVE LETTER $drive = (get-item ($currentDirectory)).root.name ## Make sure we're using a path that is not already UNC if ($drive.IndexOf(":") -ne "-1") { $root_dir = (get-wmiobject Win32_LogicalDisk | ? {$_.deviceid -eq $drive.Trim("\") } | % { $_.providername })+" " } else { $root_dir="" } ## And create a prompt that shows the command number, ## and current location "PS:$nextCommand $root_dir$currentDirectory `n$($depth_string)>" }
{ "language": "en", "url": "https://stackoverflow.com/questions/157923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Does LINQ's ExecuteCommand provide protection from SQL injection attacks? I've got a situation where I need to use LINQ's ExecuteCommand method to run an insert. Something like (simplified for purposes of this question): object[] oParams = { Guid.NewGuid(), rec.WebMethodID }; TransLogDataContext.ExecuteCommand ( "INSERT INTO dbo.Transaction_Log (ID, WebMethodID) VALUES ({0}, {1})", oParams); The question is if this is SQL injection proof in the same way parameterized queries are? A: Did some research, and I found this: In my simple testing, it looks like the parameters passed in the ExecuteQuery and ExecuteCommand methods are automatically SQL encoded based on the value being supplied. So if you pass in a string with a ' character, it will automatically SQL escape it to ''. I believe a similar policy is used for other data types like DateTimes, Decimals, etc. http://weblogs.asp.net/scottgu/archive/2007/08/27/linq-to-sql-part-8-executing-custom-sql-expressions.aspx (You have to scroll way down to find it) This seems a little odd to me - most other .NET tools know better than to "SQL escape" anything; they use real query parameters instead. A: LINQ to SQL uses sp_executesql with parameters, which is much safer than concatenating into the ad-hoc query string. It should be as safe against SQL injection as using SqlCommand and its Parameters collection (in fact, that is probably what LINQ to SQL uses internally). Then again, how safe is that?
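The practical difference between binding parameters and splicing strings is easy to demonstrate. A hedged sketch using Python's sqlite3 (an analogue, not LINQ to SQL: the binding mechanics mirror SqlCommand parameters in that the value travels out-of-band and never reaches the SQL parser):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transaction_log (id INTEGER PRIMARY KEY, method TEXT)")

hostile = "x'); DROP TABLE transaction_log; --"

# Parameterized: the driver binds the value out-of-band; the quote is inert.
conn.execute("INSERT INTO transaction_log (method) VALUES (?)", (hostile,))
stored = conn.execute("SELECT method FROM transaction_log").fetchone()[0]
print(stored == hostile)  # → True: stored verbatim, nothing injected

# String concatenation, by contrast, hands attacker text to the SQL parser
# (executescript permits multiple statements, as many database APIs do).
conn.executescript(
    "INSERT INTO transaction_log (method) VALUES ('" + hostile + "')"
)
# The injected DROP TABLE ran, so the table no longer exists:
survived = conn.execute(
    "SELECT name FROM sqlite_master WHERE name = 'transaction_log'"
).fetchone()
print(survived)  # → None
```

Note that sqlite3's plain execute refuses multi-statement strings outright; executescript is used here precisely because it, like many database interfaces, does not.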
{ "language": "en", "url": "https://stackoverflow.com/questions/157924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What's the best way of implementing a thread-safe Dictionary? I was able to implement a thread-safe Dictionary in C# by deriving from IDictionary and defining a private SyncRoot object: public class SafeDictionary<TKey, TValue>: IDictionary<TKey, TValue> { private readonly object syncRoot = new object(); private Dictionary<TKey, TValue> d = new Dictionary<TKey, TValue>(); public object SyncRoot { get { return syncRoot; } } public void Add(TKey key, TValue value) { lock (syncRoot) { d.Add(key, value); } } // more IDictionary members... } I then lock on this SyncRoot object throughout my consumers (multiple threads): Example: lock (m_MySharedDictionary.SyncRoot) { m_MySharedDictionary.Add(...); } I was able to make it work, but this resulted in some ugly code. My question is, is there a better, more elegant way of implementing a thread-safe Dictionary? A: Attempting to synchronize internally will almost certainly be insufficient because it's at too low a level of abstraction. Say you make the Add and ContainsKey operations individually thread-safe as follows: public void Add(TKey key, TValue value) { lock (this.syncRoot) { this.innerDictionary.Add(key, value); } } public bool ContainsKey(TKey key) { lock (this.syncRoot) { return this.innerDictionary.ContainsKey(key); } } Then what happens when you call this supposedly thread-safe bit of code from multiple threads? Will it always work OK? if (!mySafeDictionary.ContainsKey(someKey)) { mySafeDictionary.Add(someKey, someValue); } The simple answer is no. At some point the Add method will throw an exception indicating that the key already exists in the dictionary. How can this be with a thread-safe dictionary, you might ask? Well just because each operation is thread-safe, the combination of two operations is not, as another thread could modify it between your call to ContainsKey and Add. Which means to write this type of scenario correctly you need a lock outside the dictionary, e.g. 
lock (mySafeDictionary) { if (!mySafeDictionary.ContainsKey(someKey)) { mySafeDictionary.Add(someKey, someValue); } } But now, seeing as you're having to write external locking code, you're mixing up internal and external synchronisation, which always leads to problems such as unclear code and deadlocks. So ultimately you're probably better to either: * *Use a normal Dictionary<TKey, TValue> and synchronize externally, enclosing the compound operations on it, or *Write a new thread-safe wrapper with a different interface (i.e. not IDictionary<T>) that combines the operations, such as an AddIfNotContained method, so you never need to combine operations from it. (I tend to go with #1 myself) A: You shouldn't publish your private lock object through a property. The lock object should exist privately for the sole purpose of acting as a rendezvous point. If performance proves to be poor using the standard lock then Wintellect's Power Threading collection of locks can be very useful. A: There are several problems with the implementation method you are describing. * *You shouldn't ever expose your synchronization object. Doing so will open yourself up to a consumer grabbing the object and taking a lock on it, and then you're toast. *You're implementing a non-thread-safe interface with a thread-safe class. IMHO this will cost you down the road. Personally, I've found the best way to implement a thread-safe class is via immutability. It really reduces the number of problems you can run into with thread safety. Check out Eric Lippert's Blog for more details. A: As Peter said, you can encapsulate all of the thread safety inside the class. You will need to be careful with any events you expose or add, making sure that they get invoked outside of any locks.
public class SafeDictionary<TKey, TValue>: IDictionary<TKey, TValue> { private readonly object syncRoot = new object(); private Dictionary<TKey, TValue> d = new Dictionary<TKey, TValue>(); public void Add(TKey key, TValue value) { lock (syncRoot) { d.Add(key, value); } OnItemAdded(EventArgs.Empty); } public event EventHandler ItemAdded; protected virtual void OnItemAdded(EventArgs e) { EventHandler handler = ItemAdded; if (handler != null) handler(this, e); } // more IDictionary members... } Edit: The MSDN docs point out that enumerating is inherently not thread safe. That can be one reason for exposing a synchronization object outside your class. Another way to approach that would be to provide some methods for performing an action on all members and lock around the enumerating of the members. The problem with this is that you don't know if the action passed to that function calls some member of your dictionary (that would result in a deadlock). Exposing the synchronization object allows the consumer to make those decisions and doesn't hide the deadlock inside your class. A: You don't need to lock the SyncRoot property in your consumer objects. The lock you have within the methods of the dictionary is sufficient. To Elaborate: What ends up happening is that your dictionary is locked for a longer period of time than is necessary. What happens in your case is the following: Say thread A acquires the lock on SyncRoot before the call to m_mySharedDictionary.Add. Thread B then attempts to acquire the lock but is blocked. In fact, all other threads are blocked. Thread A is allowed to call into the Add method. At the lock statement within the Add method, thread A is allowed to obtain the lock again because it already owns it. Upon exiting the lock context within the method and then outside the method, thread A has released all locks allowing other threads to continue. 
You can simply allow any consumer to call into the Add method, as the lock statement within your SharedDictionary class Add method will have the same effect. At this point in time, you have redundant locking. You would only lock on SyncRoot outside of one of the dictionary methods if you had to perform two operations on the dictionary object that needed to be guaranteed to occur consecutively. A: The .NET 4.0 class that supports concurrency is named ConcurrentDictionary. A: Just a thought: why not recreate the dictionary on write? If reads vastly outnumber writes, plain locking would synchronize every request, whereas recreating the dictionary lets reads proceed without a lock. Example: private static readonly object Lock = new object(); private static Dictionary<string, string> _dict = new Dictionary<string, string>(); private string Fetch(string key) { lock (Lock) { string returnValue; if (_dict.TryGetValue(key, out returnValue)) return returnValue; returnValue = "find the new value"; _dict = new Dictionary<string, string>(_dict) { { key, returnValue } }; return returnValue; } } public string GetValue(string key) { string returnValue; return _dict.TryGetValue(key, out returnValue) ? returnValue : Fetch(key); } A: Collections And Synchronization
{ "language": "en", "url": "https://stackoverflow.com/questions/157933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "110" }
Q: Hiding a password in a python script (insecure obfuscation only) I have got a python script which is creating an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for this connection. Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)? A: Here is a simple method: * *Create a python module - let's call it peekaboo.py. *In peekaboo.py, include both the password and any code needing that password *Create a compiled version - peekaboo.pyc - by importing this module (via python commandline, etc...). *Now, delete peekaboo.py. *You can now happily import peekaboo relying only on peekaboo.pyc. Since peekaboo.pyc is byte-compiled it is not readable to the casual user. This should be a bit more secure than base64 decoding - although it is vulnerable to a py_to_pyc decompiler. A: Douglas F Shearer's is the generally approved solution in Unix when you need to specify a password for a remote login. You add a --password-from-file option to specify the path and read plaintext from a file. The file can then be in the user's own area protected by the operating system. It also allows different users to automatically pick up their own file. For passwords that the user of the script isn't allowed to know - you can run the script with elevated permission and have the password file owned by that root/admin user.
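The --password-from-file idea above can be sketched in Python. The file path and password here are illustrative; on a real system the file would be created once by an administrator rather than by the script itself:

```python
import os
import stat
import tempfile

def read_password(path):
    """Return the password stored as plaintext in an OS-protected file."""
    with open(path) as f:
        return f.read().strip()

# Demo setup: write a password file readable/writable by its owner only (0600).
path = os.path.join(tempfile.mkdtemp(), "db_password")
with open(path, "w") as f:
    f.write("s3cret\n")
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

print(read_password(path))  # prints "s3cret"
```

The security here comes entirely from the filesystem permissions, not from the script, which is why this approach also works for passwords the script's user isn't allowed to see (the file is then owned by root and the script runs elevated).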
A: for python3 obfuscation using base64 is done differently: import base64 base64.b64encode(b'PasswordStringAsStreamOfBytes') which results in b'UGFzc3dvcmRTdHJpbmdBc1N0cmVhbU9mQnl0ZXM=' note the informal string representation, the actual string is in quotes and decoding back to the original string base64.b64decode(b'UGFzc3dvcmRTdHJpbmdBc1N0cmVhbU9mQnl0ZXM=') b'PasswordStringAsStreamOfBytes' to use this result where string objects are required the bytes object can be translated repr = base64.b64decode(b'UGFzc3dvcmRTdHJpbmdBc1N0cmVhbU9mQnl0ZXM=') secret = repr.decode('utf-8') print(secret) for more information on how python3 handles bytes (and strings accordingly) please see the official documentation. A: A way that I have done this is as follows: At the python shell: >>> from cryptography.fernet import Fernet >>> key = Fernet.generate_key() >>> print(key) b'B8XBLJDiroM3N2nCBuUlzPL06AmfV4XkPJ5OKsPZbC4=' >>> cipher = Fernet(key) >>> password = "thepassword".encode('utf-8') >>> token = cipher.encrypt(password) >>> print(token) b'gAAAAABe_TUP82q1zMR9SZw1LpawRLHjgNLdUOmW31RApwASzeo4qWSZ52ZBYpSrb1kUeXNFoX0tyhe7kWuudNs2Iy7vUwaY7Q==' Then, create a module with the following code: from cryptography.fernet import Fernet # you store the key and the token key = b'B8XBLJDiroM3N2nCBuUlzPL06AmfV4XkPJ5OKsPZbC4=' token = b'gAAAAABe_TUP82q1zMR9SZw1LpawRLHjgNLdUOmW31RApwASzeo4qWSZ52ZBYpSrb1kUeXNFoX0tyhe7kWuudNs2Iy7vUwaY7Q==' # create a cipher and decrypt when you need your password cipher = Fernet(key) mypassword = cipher.decrypt(token).decode('utf-8') Once you've done this, you can either import mypassword directly or you can import the token and cipher to decrypt as needed. Obviously, there are some shortcomings to this approach. If someone has both the token and the key (as they would if they have the script), they can decrypt easily. 
However it does obfuscate, and if you compile the code (with something like Nuitka) at least your password won't appear as plain text in a hex editor. A: This is a pretty common problem. Typically the best you can do is to either A) create some kind of Caesar cipher function to encode/decode (just not rot13), or B) the preferred method: use an encryption key, within reach of your program, to encode/decode the password. You can then use file protection to protect access to the key. Along those lines, if your app runs as a service/daemon (like a webserver) you can put your key into a password-protected keystore with the password input as part of the service startup. It'll take an admin to restart your app, but you will have really good protection for your configuration passwords. A: If you are working on a Unix system, take advantage of the netrc module in the standard Python library. It reads passwords from a separate text file (.netrc), which has the format described here. Here is a small usage example: import netrc # Define which host in the .netrc file to use HOST = 'mailcluster.loopia.se' # Read from the .netrc file in your home directory secrets = netrc.netrc() username, account, password = secrets.authenticators( HOST ) print(username, password) A: Your operating system probably provides facilities for encrypting data securely. For instance, on Windows there is DPAPI (data protection API). Why not ask the user for their credentials the first time you run, then squirrel them away encrypted for subsequent runs? A: Here is my snippet for such a thing. You basically import or copy the function to your code. getCredentials will create the encrypted file if it does not exist and return a dictionary, and updateCredentials will update it.
import os def getCredentials(): import base64 splitter='<PC+,DFS/-SHQ.R' directory='C:\\PCT' if not os.path.exists(directory): os.makedirs(directory) try: with open(directory+'\\Credentials.txt', 'r') as file: cred = file.read() file.close() except: print('I could not file the credentials file. \nSo I dont keep asking you for your email and password everytime you run me, I will be saving an encrypted file at {}.\n'.format(directory)) lanid = base64.b64encode(bytes(input(' LanID: '), encoding='utf-8')).decode('utf-8') email = base64.b64encode(bytes(input(' eMail: '), encoding='utf-8')).decode('utf-8') password = base64.b64encode(bytes(input(' PassW: '), encoding='utf-8')).decode('utf-8') cred = lanid+splitter+email+splitter+password with open(directory+'\\Credentials.txt','w+') as file: file.write(cred) file.close() return {'lanid':base64.b64decode(bytes(cred.split(splitter)[0], encoding='utf-8')).decode('utf-8'), 'email':base64.b64decode(bytes(cred.split(splitter)[1], encoding='utf-8')).decode('utf-8'), 'password':base64.b64decode(bytes(cred.split(splitter)[2], encoding='utf-8')).decode('utf-8')} def updateCredentials(): import base64 splitter='<PC+,DFS/-SHQ.R' directory='C:\\PCT' if not os.path.exists(directory): os.makedirs(directory) print('I will be saving an encrypted file at {}.\n'.format(directory)) lanid = base64.b64encode(bytes(input(' LanID: '), encoding='utf-8')).decode('utf-8') email = base64.b64encode(bytes(input(' eMail: '), encoding='utf-8')).decode('utf-8') password = base64.b64encode(bytes(input(' PassW: '), encoding='utf-8')).decode('utf-8') cred = lanid+splitter+email+splitter+password with open(directory+'\\Credentials.txt','w+') as file: file.write(cred) file.close() cred = getCredentials() updateCredentials() A: How about importing the username and password from a file external to the script? That way even if someone got hold of the script, they wouldn't automatically get the password. 
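The external-file suggestion above can be sketched with a plain Python module kept outside version control. The credentials.py module and its contents are hypothetical; in real use it would already sit next to the script (excluded via .gitignore), and the setup code below merely fakes it so the sketch runs standalone:

```python
import os
import sys
import tempfile

# Demo setup: create a stand-in credentials.py. Normally this file already
# exists on disk and is simply kept out of the repository.
cred_dir = tempfile.mkdtemp()
with open(os.path.join(cred_dir, "credentials.py"), "w") as f:
    f.write('username = "odbc_user"\npassword = "odbc_pass"\n')
sys.path.insert(0, cred_dir)

import credentials  # in the real script this import is the only line you need

conn_str = "UID=%s;PWD=%s" % (credentials.username, credentials.password)
print(conn_str)  # prints "UID=odbc_user;PWD=odbc_pass"
```

Importing keeps the secret out of the main script and its history; combining this with the file-permission approach from the other answers gives the actual protection.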
A: Place the configuration information in an encrypted config file. Query this info in your code using a key. Place this key in a separate file per environment, and don't store it with your code. A: A more homegrown approach, rather than converting authentication details (username/password) to encrypted form. FTPLIB is just an example. "pass.csv" is the CSV file name. Save the password in the CSV like below: user_name user_password (with no column heading). Read the CSV and save it to a list, then use the list elements as authentication details. Full code: import os import ftplib import csv cred_detail = [] os.chdir("Folder where the csv file is stored") for row in csv.reader(open("pass.csv","rb")): cred_detail.append(row) ftp = ftplib.FTP('server_name',cred_detail[0][0],cred_detail[1][0]) A: The best solution, assuming the username and password can't be given at runtime by the user, is probably a separate source file containing only variable initialization for the username and password that is imported into your main code. This file would only need editing when the credentials change. Otherwise, if you're only worried about shoulder surfers with average memories, base64 encoding is probably the easiest solution. ROT13 is just too easy to decode manually, isn't case sensitive and retains too much meaning in its encrypted state. Encode your password and user id outside the python script. Have the script decode at runtime for use. Giving scripts credentials for automated tasks is always a risky proposal. Your script should have its own credentials and the account it uses should have no access other than exactly what is necessary. At least the password should be long and rather random. A: base64 is the way to go for your simple needs.
There is no need to import anything: >>> 'your string'.encode('base64') 'eW91ciBzdHJpbmc=\n' >>> _.decode('base64') 'your string' A: Base64 encoding is in the standard library and will do to stop shoulder surfers: >>> import base64 >>> print(base64.b64encode("password".encode("utf-8"))) cGFzc3dvcmQ= >>> print(base64.b64decode("cGFzc3dvcmQ=").decode("utf-8")) password A: Do you know pit? https://pypi.python.org/pypi/pit (py2 only (version 0.3)) https://github.com/yoshiori/pit (it will work on py3 (current version 0.4)) test.py from pit import Pit config = Pit.get('section-name', {'require': { 'username': 'DEFAULT STRING', 'password': 'DEFAULT STRING', }}) print(config) Run: $ python test.py {'password': 'my-password', 'username': 'my-name'} ~/.pit/default.yml: section-name: password: my-password username: my-name A: If running on Windows, you could consider using win32crypt library. It allows storage and retrieval of protected data (keys, passwords) by the user that is running the script, thus passwords are never stored in clear text or obfuscated format in your code. I am not sure if there is an equivalent implementation for other platforms, so with the strict use of win32crypt your code is not portable. I believe the module can be obtained here: http://timgolden.me.uk/pywin32-docs/win32crypt.html A: You could also consider the possibility of storing the password outside the script, and supplying it at runtime e.g. fred.py import os username = 'fred' password = os.environ.get('PASSWORD', '') print(username, password) which can be run like $ PASSWORD=password123 python fred.py fred password123 Extra layers of "security through obscurity" can be achieved by using base64 (as suggested above), using less obvious names in the code and further distancing the actual password from the code. If the code is in a repository, it is often useful to store secrets outside it, so one could add this to ~/.bashrc (or to a vault, or a launch script, ...) 
export SURNAME=cGFzc3dvcmQxMjM= and change fred.py to import os import base64 name = 'fred' surname = base64.b64decode(os.environ.get('SURNAME', '')).decode('utf-8') print(name, surname) then re-login and $ python fred.py fred password123 A: Why not have a simple xor? Advantages: * *looks like binary data *no one can read it without knowing the key (even if it's a single char) I've gotten to the point where I recognize simple b64 strings for common words, and rot13 as well. Xor would make it much harder. A: There are several ROT13 utilities written in Python on the 'Net -- just google for them. ROT13 encode the string offline, copy it into the source, decode at point of transmission. But this is really weak protection... A: This doesn't precisely answer your question, but it's related. I was going to add it as a comment but wasn't allowed. I've been dealing with this same issue, and we have decided to expose the script to the users using Jenkins. This allows us to store the db credentials in a separate file that is encrypted and secured on a server and not accessible to non-admins. It also allows us a bit of a shortcut to creating a UI, and throttling execution. A: import base64 print(base64.b64encode("password".encode("utf-8"))) print(base64.b64decode(b'cGFzc3dvcmQ=').decode("utf-8"))
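The simple XOR suggested above can be sketched as follows. The key and password are illustrative; because XOR with a repeating key is symmetric, the same function both obfuscates and recovers the data:

```python
import base64

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR every byte of data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y"  # hypothetical key, kept separate from the token
token = base64.b64encode(xor_bytes(b"hunter2", key))  # store this in the script
plain = xor_bytes(base64.b64decode(token), key)       # recover at runtime
print(plain.decode("utf-8"))  # prints "hunter2"
```

Wrapping the XOR output in base64 keeps the stored token printable; like everything else in this thread, it is obfuscation only, not encryption.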
{ "language": "en", "url": "https://stackoverflow.com/questions/157938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: Create ArrayList from array Given an array of type Element[]: Element[] array = {new Element(1), new Element(2), new Element(3)}; How do I convert this array into an object of type ArrayList<Element>? ArrayList<Element> arrayList = ???; A: Given: Element[] array = new Element[] { new Element(1), new Element(2), new Element(3) }; The simplest answer is to do: List<Element> list = Arrays.asList(array); This will work fine. But some caveats: * *The list returned from asList has fixed size. So, if you want to be able to add or remove elements from the returned list in your code, you'll need to wrap it in a new ArrayList. Otherwise you'll get an UnsupportedOperationException. *The list returned from asList() is backed by the original array. If you modify the original array, the list will be modified as well. This may be surprising. A: You probably just need a List, not an ArrayList. In that case you can just do: List<Element> arraylist = Arrays.asList(array); A: Even though there are many perfectly written answers to this question, I will add my inputs. Say you have Element[] array = { new Element(1), new Element(2), new Element(3) }; New ArrayList can be created in the following ways ArrayList<Element> arraylist_1 = new ArrayList<>(Arrays.asList(array)); ArrayList<Element> arraylist_2 = new ArrayList<>( Arrays.asList(new Element[] { new Element(1), new Element(2), new Element(3) })); // Add through a collection ArrayList<Element> arraylist_3 = new ArrayList<>(); Collections.addAll(arraylist_3, array); And they very well support all operations of ArrayList arraylist_1.add(new Element(4)); // or remove(): Success arraylist_2.add(new Element(4)); // or remove(): Success arraylist_3.add(new Element(4)); // or remove(): Success But the following operations returns just a List view of an ArrayList and not actual ArrayList. 
// Returns a List view of the array and not an actual ArrayList List<Element> listView_1 = (List<Element>) Arrays.asList(array); List<Element> listView_2 = Arrays.asList(array); List<Element> listView_3 = Arrays.asList(new Element(1), new Element(2), new Element(3)); Therefore, they will give an error when trying to perform some ArrayList operations: listView_1.add(new Element(4)); // Error listView_2.add(new Element(4)); // Error listView_3.add(new Element(4)); // Error More on the List representation of an array: link. A: The simplest way to do so is to add the following code. Tried and tested. String[] Array1 = {"one","two","three"}; ArrayList<String> s1 = new ArrayList<String>(Arrays.asList(Array1)); A: You can do it in Java 8 as follows: ArrayList<Element> list = (ArrayList<Element>) Arrays.stream(array).collect(Collectors.toList()); A: Another Java 8 solution (I may have missed the answer among the large set. If so, my apologies). This creates an ArrayList (as opposed to a List), i.e. one can delete elements. package org.something.util; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.stream.Collectors; public class Junk { static <T> ArrayList<T> arrToArrayList(T[] arr){ return Arrays.asList(arr) .stream() .collect(Collectors.toCollection(ArrayList::new)); } public static void main(String[] args) { String[] sArr = new String[]{"Hello", "cruel", "world"}; List<String> ret = arrToArrayList(sArr); // Verify one can remove an item and print list to verify so ret.remove(1); ret.stream() .forEach(System.out::println); } } Output is... Hello world A: We can easily convert an array to an ArrayList. We use the Collection interface's addAll() method for the purpose of copying content from one list to another. ArrayList<Element> arr = new ArrayList<>(); arr.addAll(Arrays.asList(asset)); A: Use the following code to convert an element array into an ArrayList.
Element[] array = {new Element(1), new Element(2), new Element(3)}; ArrayList<Element> elementArray = new ArrayList<>(); for(int i=0;i<array.length;i++) { elementArray.add(array[i]); } A: Given Object Array: Element[] array = {new Element(1), new Element(2), new Element(3) , new Element(2)}; Convert Array to List: List<Element> list = Arrays.stream(array).collect(Collectors.toList()); Convert Array to ArrayList ArrayList<Element> arrayList = Arrays.stream(array) .collect(Collectors.toCollection(ArrayList::new)); Convert Array to LinkedList LinkedList<Element> linkedList = Arrays.stream(array) .collect(Collectors.toCollection(LinkedList::new)); Print List: list.forEach(element -> { System.out.println(element.i); }); OUTPUT 1 2 3 2 A: Another update, almost at the end of 2014: you can do it with Java 8 too: ArrayList<Element> arrayList = Stream.of(myArray).collect(Collectors.toCollection(ArrayList::new)); A few characters would be saved if this could be just a List List<Element> list = Stream.of(myArray).collect(Collectors.toList()); A: Everyone has already provided good answers for your problem. Now, from all the suggestions, you need to decide which fits your requirement. There are two types of collections you need to know about: an unmodifiable collection, and a collection which allows you to modify the object later. So, here I will give a short example for the two use cases. * *Immutable collection creation :: When you don't want to modify the collection object after creation List<Element> elementList = Arrays.asList(array); *Mutable collection creation :: When you may want to modify the created collection object after creation. List<Element> elementList = new ArrayList<Element>(Arrays.asList(array)); A: Java 8's Arrays class provides a stream() method which has overloaded versions accepting both primitive arrays and Object arrays.
/**** Converting a Primitive 'int' Array to List ****/ int intArray[] = {1, 2, 3, 4, 5}; List<Integer> integerList1 = Arrays.stream(intArray).boxed().collect(Collectors.toList()); /**** 'IntStream.of' or 'Arrays.stream' Gives The Same Output ****/ List<Integer> integerList2 = IntStream.of(intArray).boxed().collect(Collectors.toList()); /**** Converting an 'Integer' Array to List ****/ Integer integerArray[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; List<Integer> integerList3 = Arrays.stream(integerArray).collect(Collectors.toList()); A: You could also declare the result against the List interface while calling the Arrays class as follows: List<Integer> arraylist = new ArrayList<Integer>(Arrays.asList(array)); Example: Integer[] array = {1}; // autoboxing List<Integer> arraylist = new ArrayList<Integer>(Arrays.asList(array)); This should work like a charm. A: new ArrayList<>(Arrays.asList(array)); A: If you use: new ArrayList<T>(Arrays.asList(myArray)); you may create and fill two lists! Filling a big list twice is exactly what you don't want to do, because it will create another Object[] array each time the capacity needs to be extended. Fortunately the JDK implementation is fast and Arrays.asList(a[]) is very well done. It creates a kind of ArrayList named Arrays.ArrayList where the Object[] data points directly to the array. // in Arrays @SafeVarargs public static <T> List<T> asList(T... a) { return new ArrayList<>(a); } // still in Arrays, creating a private unseen class private static class ArrayList<E> { private final E[] a; ArrayList(E[] array) { a = array; // you point to the previous array } .... } The dangerous side is that if you change the initial array, you change the List! Are you sure you want that? Maybe yes, maybe not.
If not, the most understandable way is to do this: ArrayList<Element> list = new ArrayList<Element>(myArray.length); // you know the initial capacity for (Element element : myArray) { list.add(element); } Or, as @glglgl said, you can create another independent ArrayList with: new ArrayList<T>(Arrays.asList(myArray)); I love to use Collections, Arrays, or Guava. But if they don't fit, or you don't feel like it, just write another inelegant line instead. A: In Java 9 you can use: List<String> list = List.of("Hello", "World", "from", "Java"); List<Integer> list = List.of(1, 2, 3, 4, 5); A: The code below seems a nice way of doing this. new ArrayList<T>(Arrays.asList(myArray)); A: Hi, you can use this line of code, and it's the simplest way: new ArrayList<>(Arrays.asList(myArray)); or in case you use Java 9 you can also use this method: List<String> list = List.of("Hello", "Java"); List<Integer> list = List.of(1, 2, 3); A: For normal-size arrays, the above answers hold good. If you have a huge array and are using Java 8, you can do it using a stream.
Element[] array = {new Element(1), new Element(2), new Element(3)}; List<Element> list = Arrays.stream(array).collect(Collectors.toList()); A: (old thread, but just 2 cents as no one mentions Guava or other libs and some other details) If You Can, Use Guava It's worth pointing out the Guava way, which greatly simplifies these shenanigans: Usage For an Immutable List Use the ImmutableList class and its of() and copyOf() factory methods (elements can't be null): List<String> il = ImmutableList.of("string", "elements"); // from varargs List<String> il = ImmutableList.copyOf(aStringArray); // from array For A Mutable List Use the Lists class and its newArrayList() factory methods: List<String> l1 = Lists.newArrayList(anotherListOrCollection); // from collection List<String> l2 = Lists.newArrayList(aStringArray); // from array List<String> l3 = Lists.newArrayList("or", "string", "elements"); // from varargs Please also note the similar methods for other data structures in other classes, for instance in Sets. Why Guava? The main attraction could be to reduce the clutter due to generics for type-safety, as the use of the Guava factory methods allows the types to be inferred most of the time. However, this argument holds less water since Java 7 arrived with the new diamond operator. But it's not the only reason (and Java 7 isn't everywhere yet): the shorthand syntax is also very handy, and the method initializers, as seen above, allow you to write more expressive code. You do in one Guava call what takes two with the current Java Collections. If You Can't... For an Immutable List Use the JDK's Arrays class and its asList() factory method, wrapped with a Collections.unmodifiableList(): List<String> l1 = Collections.unmodifiableList(Arrays.asList(anArrayOfElements)); List<String> l2 = Collections.unmodifiableList(Arrays.asList("element1", "element2")); Note that the returned type for asList() is a List using a concrete ArrayList implementation, but it is NOT java.util.ArrayList.
It's an inner type, which emulates an ArrayList but actually directly references the passed array and makes it "write through" (modifications are reflected in the array). It forbids modifications through some of the List API's methods by way of simply extending an AbstractList (so adding or removing elements is unsupported); however, it allows calls to set() to override elements. Thus this list isn't truly immutable and a call to asList() should be wrapped with Collections.unmodifiableList(). See the next step if you need a mutable list. For a Mutable List Same as above, but wrapped with an actual java.util.ArrayList: List<String> l1 = new ArrayList<String>(Arrays.asList(array)); // Java 1.5 to 1.6 List<String> l1b = new ArrayList<>(Arrays.asList(array)); // Java 1.7+ List<String> l2 = new ArrayList<String>(Arrays.asList("a", "b")); // Java 1.5 to 1.6 List<String> l2b = new ArrayList<>(Arrays.asList("a", "b")); // Java 1.7+ For Educational Purposes: The Good ol' Manual Way // for Java 1.5+ static <T> List<T> arrayToList(final T[] array) { final List<T> l = new ArrayList<T>(array.length); for (final T s : array) { l.add(s); } return (l); } // for Java < 1.5 (no generics, no compile-time type-safety, boo!) static List arrayToList(final Object[] array) { final List l = new ArrayList(array.length); for (int i = 0; i < array.length; i++) { l.add(array[i]); } return (l); } A: According to the question, the answer using Java 1.7 is: ArrayList<Element> arraylist = new ArrayList<Element>(Arrays.<Element>asList(array)); However, it's always better to use the interface: List<Element> arraylist = Arrays.<Element>asList(array); A: // Guava import com.google.common.collect.Lists; ... List<String> list = Lists.newArrayList(aStringArray); A: In Java there are mainly 3 methods to convert an array to an ArrayList * *Using Arrays.asList() method : Pass the required array to this method and get a List object and pass it as a parameter to the constructor of the ArrayList class.
List<String> list = Arrays.asList(array); System.out.println(list); *Collections.addAll() method - Create a new list before using this method and then add the array elements using this method to the existing list. List<String> list1 = new ArrayList<String>(); Collections.addAll(list1, array); System.out.println(list1); *Iteration method - Create a new list. Iterate the array and add each element to the list. List<String> list2 = new ArrayList<String>(); for(String text:array) { list2.add(text); } System.out.println(list2); You can refer to this document too. A: You can use the following 3 ways to create an ArrayList from an Array. String[] array = {"a", "b", "c", "d", "e"}; //Method 1 List<String> list = Arrays.asList(array); //Method 2 List<String> list1 = new ArrayList<String>(); Collections.addAll(list1, array); //Method 3 List<String> list2 = new ArrayList<String>(); for(String text:array) { list2.add(text); } A: There is one more way that you can use to convert the array into an ArrayList. You can iterate over the array, insert each element into the ArrayList, and return it as an ArrayList. This is shown below.
public static void main(String[] args) { String[] array = {new String("David"), new String("John"), new String("Mike")}; ArrayList<String> theArrayList = convertToArrayList(array); } private static ArrayList<String> convertToArrayList(String[] array) { ArrayList<String> convertedArray = new ArrayList<String>(); for (String element : array) { convertedArray.add(element); } return convertedArray; } A: Since Java 8 there is an easier way to transform: import java.util.Arrays; import java.util.List; import static java.util.stream.Collectors.toList; public static <T> List<T> fromArray(T[] array) { return Arrays.stream(array).collect(toList()); } A: You can convert using different methods * *List<Element> list = Arrays.asList(array); *List<Element> list = new ArrayList<>(); Collections.addAll(list, array); *ArrayList<Element> list = new ArrayList<>(); list.addAll(Arrays.asList(array)); For more detail you can refer to http://javarevisited.blogspot.in/2011/06/converting-array-to-arraylist-in-java.html A: Since this question is pretty old, it surprises me that nobody suggested the simplest form yet: List<Element> arraylist = Arrays.asList(new Element(1), new Element(2), new Element(3)); As of Java 5, Arrays.asList() takes a varargs parameter and you don't have to construct the array explicitly. A: As all said, this will do: new ArrayList<>(Arrays.asList("1","2","3","4")); and a newer way to create a collection is with an ObservableList: a list that allows listeners to track changes when they occur. For Java SE you can try FXCollections.observableArrayList(new Element(1), new Element(2), new Element(3)); that is, according to the Oracle Docs: observableArrayList() Creates a new empty observable list that is backed by an arraylist. observableArrayList(E... items) Creates a new observable array list with items added to it. Update Java 9 Also, in Java 9 it's a little bit easier: List<String> list = List.of("element 1", "element 2", "element 3"); A: You also can do it with stream in Java 8.
List<Element> elements = Arrays.stream(array).collect(Collectors.toList()); A: new ArrayList<T>(Arrays.asList(myArray)); Make sure that myArray is the same type as T. You'll get a compiler error if you try to create a List<Integer> from an array of int, for example. A: You can create an ArrayList using Cactoos (I'm one of the developers): List<String> names = new StickyList<>( "Scott Fitzgerald", "Fyodor Dostoyevsky" ); There is no guarantee that the object will actually be of class ArrayList. If you need that guarantee, do this: ArrayList<String> list = new ArrayList<>( new StickyList<>( "Scott Fitzgerald", "Fyodor Dostoyevsky" ) ); A: the lambda expression that generates a list of type ArrayList<Element> (1) without an unchecked cast (2) without creating a second list (with eg. asList()) ArrayList<Element> list = Stream.of( array ).collect( Collectors.toCollection( ArrayList::new ) ); A: * *If we see the definition of Arrays.asList() method you will get something like this: public static <T> List<T> asList(T... a) //varargs are of T type. So, you might initialize arraylist like this: List<Element> arraylist = Arrays.asList(new Element(1), new Element(2), new Element(3)); Note : each new Element(int args) will be treated as Individual Object and can be passed as a var-args. *There might be another answer for this question too. If you see declaration for java.util.Collections.addAll() method you will get something like this: public static <T> boolean addAll(Collection<? super T> c, T... a); So, this code is also useful to do so Collections.addAll(arraylist, array); A: If the array is of a primitive type, the given answers won't work. But since Java 8 you can use: int[] array = new int[5]; Arrays.stream(array).boxed().collect(Collectors.toList()); A: Another simple way is to add all elements from the array to a new ArrayList using a for-each loop. 
ArrayList<Element> list = new ArrayList<>(); for(Element e : array) list.add(e); A: Another way (although essentially equivalent to the new ArrayList(Arrays.asList(array)) solution performance-wise): Collections.addAll(arraylist, array); A: Java 9 In Java 9, you can use the List.of static factory method in order to create a List literal. Something like the following: List<Element> elements = List.of(new Element(1), new Element(2), new Element(3)); This would return an immutable list containing three elements. If you want a mutable list, pass that list to the ArrayList constructor: new ArrayList<>(List.of(// elements vararg)) JEP 269: Convenience Factory Methods for Collections JEP 269 provides some convenience factory methods for the Java Collections API. These immutable static factory methods are built into the List, Set, and Map interfaces in Java 9 and later. A: With Stream (since Java 16) new ArrayList<>(Arrays.stream(array).toList()); A: I've used the following helper method on occasions when I'm creating a ton of ArrayLists and need terse syntax: import java.util.ArrayList; import java.util.Arrays; class Main { @SafeVarargs public static <T> ArrayList<T> AL(T ...a) { return new ArrayList<T>(Arrays.asList(a)); } public static void main(String[] args) { var al = AL(AL(1, 2, 3, 4), AL(AL(5, 6, 7), AL(8, 9))); System.out.println(al); // => [[1, 2, 3, 4], [[5, 6, 7], [8, 9]]] } } Guava uses the same approach so @SafeVarargs appears to be safe here. See also Java SafeVarargs annotation, does a standard or best practice exist?. A: Element[] array = {new Element(1), new Element(2), new Element(3)}; List<Element> list = List.of(array); or List<Element> list = Arrays.asList(array); both ways we can convert it to a list. A: Use the code below. Note that casting the result of Arrays.asList() straight to ArrayList fails at runtime (it returns a different, fixed-size List implementation), so copy it into a real ArrayList instead: Element[] array = {new Element(1), new Element(2), new Element(3)}; ArrayList<Element> list = new ArrayList<>(Arrays.asList(array));
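The practical difference between the two families of answers above is worth seeing side by side: Arrays.asList() returns a fixed-size view backed by the array, while copying into a new ArrayList yields an independent, resizable list. A minimal sketch of that distinction:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ArrayToList {
    public static void main(String[] args) {
        String[] array = {"a", "b", "c"};

        // Fixed-size view backed by the array: set() works, add() throws.
        List<String> view = Arrays.asList(array);

        // Independent, resizable copy: safe to add/remove elements.
        ArrayList<String> copy = new ArrayList<>(Arrays.asList(array));
        copy.add("d");

        System.out.println(view.size()); // 3
        System.out.println(copy.size()); // 4
        System.out.println(array[0].equals(view.get(0))); // true
    }
}
```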
{ "language": "en", "url": "https://stackoverflow.com/questions/157944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4039" }
Q: What is the latency in an AMD PowerNow state change operation? In low latency trading applications we are very conscious of latency issues. There is some concern that our application may experience increased latency if the server on which it is running changes PowerNow state. Any kernel developers familiar with calling PowerNow changes and how much processor time is used for the operation and what the latency/delay characteristics are like? The same information for Intel SpeedStep would be useful but PowerNow is what we actually use. Thanks! A: The Linux kernel appears to assume an upper bound of a fifth of a millisecond for a PowerNow state change operation to complete. I would have thought a bigger worry than the cost of the state change itself, though, would be that downclocking the CPU will make your application run slower, increasing latency across the board. A: I doubt it has any latency. PowerNow just lowers core frequency and core voltage. I don't know that it will halt the CPU for a short time to do so and then resume processing after the change. AFAIK the change happens on the fly, the processing is not interrupted for that. Thus the bigger problem might be that you rely on a certain speed (e.g. you assume the processor can perform that many operations a second), however when the core frequency is lowered, it will behave like a slower CPU (less operations per second) and the core frequency doesn't jump to max just because the CPU is not 100% idle. It will jump up again, when the CPU thinks it needs more processing power than it currently has. On Linux PowerNow can cause bad issues if you run VMWare with Windows on it. Windows fails to correctly update the internal clock, as it seems to not detect that PowerNow is in effect (I guess because it runs within a virtual machine) and VMWare for Linux fails to handle the situation correctly as well. 
So the Windows clock will fall behind as soon as PowerNow is active and every now and then VMWare detects that and corrects the clock again. So far so well, but applications relying on the Windows clock will see this strange jump and behave rather oddly (e.g. a radio streaming software I know will jump within the MP3 stream and skip a couple of milliseconds every time the clock is resynced). If your application depends strongly on a steady program flow, you may like to disable the PowerNow feature completely. With the Internet radio stream software, that was the only way to solve the skipping issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/157947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I make the apple terminal window auto change colour scheme when I ssh to a specific server When I ssh into a remote production server I would like the colour scheme of my terminal window to change to something bright and scary, preferably red, to warn me that I am touching a live scary server. How can I make it automatically detect that I have ssh'ed somewhere, and if that somewhere is on a specific list, change the colour scheme? I want to update the Scheme of Terminal.app, not work out how I would do this in a pure linux/unix env A: Here's a combined solution based on a couple of existing answers that handles the exit. Also includes a little extra if you don't want to deal with 16 bit color values. This should be put in your ~/.bash_profile # Convert 8 bit r,g,b,a (0-255) to 16 bit r,g,b,a (0-65535) # to set terminal background. # r, g, b, a values default to 255 set_bg () { r=${1:-255} g=${2:-255} b=${3:-255} a=${4:-255} r=$(($r * 256 + $r)) g=$(($g * 256 + $g)) b=$(($b * 256 + $b)) a=$(($a * 256 + $a)) osascript -e "tell application \"Terminal\" to set background color of window 1 to {$r, $g, $b, $a}" } # Set terminal background based on hex rgba values # r,g,b,a default to FF set_bg_from_hex() { r=${1:-FF} g=${2:-FF} b=${3:-FF} a=${4:-FF} set_bg $((16#$r)) $((16#$g)) $((16#$b)) $((16#$a)) } # Wrapping ssh command with extra functionality ssh() { # If prod server of interest, change bg color if ...some check for server list then set_bg_from_hex 6A 05 0C fi # Call original ssh command if command ssh "$@" then # on exit change back to your default set_bg_from_hex 24 34 52 fi } * *set_bg - takes 4 (8 bit) color values *set_bg_from_hex - takes 4 hex values. Most of the color references I use are in hex, so this just makes it easier for me. It could be taken a step further to actually parse #RRGGBB instead of RR GG BB, but it works well for me. *ssh - wrapping the default ssh command with whatever custom logic you want.
The if statement is used to handle the exit to reset the background color. A: Combining answers 1 and 2 have the following: Create ~/bin/ssh file as described in 1 with the following content: #!/bin/sh # https://stackoverflow.com/a/39489571/1024794 log(){ echo "$*" >> /tmp/ssh.log } HOSTNAME=`echo $@ | sed s/.*@//` log HOSTNAME=$HOSTNAME # to avoid changing color for commands like `ssh user@host "some bash script"` # and to avoid changing color for `git push` command: if [ $# -gt 3 ] || [[ "$HOSTNAME" = *"git-receive-pack"* ]]; then /usr/bin/ssh "$@" exit $? fi set_bg () { if [ "$1" != "Basic" ]; then trap on_exit EXIT; fi osascript ~/Dropbox/macCommands/StyleTerm.scpt "$1" } on_exit () { set_bg Basic } case $HOSTNAME in "178.222.333.44 -p 2222") set_bg "Homebrew" ;; "178.222.333.44 -p 22") set_bg "Ocean" ;; "192.168.214.111") set_bg "Novel" ;; *) set_bg "Grass" ;; esac /usr/bin/ssh "$@" Make it executable: chmod +x ~/bin/ssh. File ~/Dropbox/macCommands/StyleTerm.scpt has the following content: #https://superuser.com/a/209920/195425 on run argv tell application "Terminal" to set current settings of selected tab of front window to first settings set whose name is (item 1 of argv) end run Words Basic, Homebrew, Ocean, Novel, Grass are from mac os terminal settings cmd,: A: Put following script in ~/bin/ssh (ensure ~/bin/ is checked before /usr/bin/ in your PATH): #!/bin/sh HOSTNAME=`echo $@ | sed s/.*@//` set_bg () { osascript -e "tell application \"Terminal\" to set background color of window 1 to $1" } on_exit () { set_bg "{0, 0, 0, 50000}" } trap on_exit EXIT case $HOSTNAME in production1|production2|production3) set_bg "{45000, 0, 0, 50000}" ;; *) set_bg "{0, 45000, 0, 50000}" ;; esac /usr/bin/ssh "$@" Remember to make the script executable by running chmod +x ~/bin/ssh The script above extracts host name from line "username@host" (it assumes you login to remote hosts with "ssh user@host"). 
Then depending on host name it either sets a red background (for production servers) or a green background (for all others). As a result all your ssh windows will have a colored background. I assume here your default background is black, so the script reverts the background color back to black when you logout from the remote server (see "trap on_exit"). Please note, however, this script does not track a chain of ssh logins from one host to another. As a result the background will be green in case you login to a testing server first, then login to production from it. A: You can set the $PS1 variable in your .bashrc. red='\e[0;31m' PS1="$\[${red}\]" EDIT: To do this open the Terminal. Then say #touch .bashrc You can then open .bashrc in textEdit or in TextWrangler and add the previous commands. A: A lesser-known feature of Terminal is that you can set the name of a settings profile to a command name and it will select that profile when you create a new terminal via either Shell > New Command… or Shell > New Remote Connection…. For example, duplicate your default profile, name it “ssh” and set its background color to red. Then use New Command… to run ssh host.example.com. It also matches on arguments, so you can have it choose different settings for different remote hosts, for example. A: Set the terminal colours in the server's ~/.bashrc I needed the same thing, something to make me aware that I was on a Staging or Production server and not in my Development environment, which can be very hard to tell, especially when in a Ruby console or something. To accomplish this, I used the setterm command in my server's ~/.bashrc file to inverse the colours of the terminal when connecting and restore the colours when exiting. ~/.bashrc # Inverts console colours so that we know that we are in a remote server. # This is very important to avoid running commands on the server by accident. setterm --inversescreen on # This ensures we restore the console colours after exiting.
function restore_screen_colours { setterm --inversescreen off } trap restore_screen_colours EXIT I then put this in all the servers' ~/.bashrc files so that I know when my terminal is on a remote server or not. Another bonus is that any of your development or devops team get the benefit of this without making it part of the onboarding process. Works great. A: Xterm-compatible Unix terminals have standard escape sequences for setting the background and foreground colors. I'm not sure if Terminal.app shares them; it should. case $HOSTNAME in live1|live2|live3) echo -e '\e]11;1\a' ;; testing1|testing2) echo -e '\e]11;2\a' ;; esac The second number specifies the desired color. 0=default, 1=red, 2=green, etc. So this snippet, when put in a shared .bashrc, will give you a red background on live servers and a green background on testing ones. You should also add something like this to reset the background when you log out. on_exit () { echo -e '\e]11;0\a' } trap on_exit EXIT EDIT: Google turned up a way to set the background color using AppleScript. Obviously, this only works when run on the same machine as Terminal.app. You can work around that with a couple wrapper functions: set_bg_color () { # color values are in '{R, G, B, A}' format, all 16-bit unsigned integers (0-65535) osascript -e "tell application \"Terminal\" to set background color of window 1 to $1" } sshl () { set_bg_color "{45000, 0, 0, 50000}" ssh "$@" set_bg_color "{0, 0, 0, 50000}" } You'd need to remember to run sshl instead of ssh when connecting to a live server. Another option is to write a wrapper function for ssh that scans its arguments for known live hostnames and sets the background accordingly. 
A: Another solution is to set the colors straight in the ssh config file: inside ~/.ssh/config Host Server1 HostName x.x.x.x User ubuntu IdentityFile ~/Desktop/keys/1.pem PermitLocalCommand yes LocalCommand osascript -e "tell application \"Terminal\" to set background color of window 1 to {27655, 0, 0, -16373}" Host Server2 HostName x.x.x.x User ubuntu IdentityFile ~/Desktop/keys/2.pem PermitLocalCommand yes LocalCommand osascript -e "tell application \"Terminal\" to set background color of window 1 to {37655, 0, 0, -16373}" A: Why not just changing the shell prompt whenever you are logged in via SSH? There are usually specific shell variables: SSH_CLIENT, SSH_CONNECTION, SSH_TTY A: You should change the color of username and host machine name. add the following line to your ~/.bash_profile file: export PS1=" \[\033[34m\]\u@\h \[\033[33m\]\w\[\033[31m\]\[\033[00m\] $ " The first part (purple colored) is what you're looking for. Preview: This is my preferred colors. You can customize each part of prompt's color by changing m codes (e.g. 34m) which are ANSI color codes. List of ANSI Color codes: * *Black: 30m *Red: 31m *Green: 32m *Yellow: 33m *Blue: 34m *Purple: 35m *Cyan: 36m *White: 37m
{ "language": "en", "url": "https://stackoverflow.com/questions/157959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Aqua Data Studio - Shortcuts for Autocomplete When the autocomplete listbox/dropdown is displayed in Aqua Data Studio, you have to hit enter in order for the currently highlighted item to complete the identifier. Is there a way that I can hit the tab key to autocomplete instead? This is the default behavior for Visual Studio and I cannot find the keyboard shortcuts editor in Aqua Data Studio. It would also be nice if, while the autocomplete listbox is visible, the Home and End keys would go to the beginning or the end of the line instead of the top or the bottom options of the autocomplete listbox. A: Go to File->Options->Key Mappings. Select your Active KeyMap. Under Keymap Settings, go to General -> Query:Auto Complete and select it. Under Shortcut, click on the Edit button and you can change it to suit your needs. Make sure there are no conflicts. A: Look in the preferences for "keybindings"/"hotkeys".
{ "language": "en", "url": "https://stackoverflow.com/questions/157969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the consensus on "Voice-family" Hacks? I just started working for a pretty large company and my group manages all of their public facing websites. I opened the style sheet for the first time today and have seen over 20 instances of the designers using the voice-family hack to fix an IE bug. (I don't know why they allow graphic designers to write any kind of markup at all) What is the general public opinion of the voice-family hack. Is it worth the time to recommend using IE conditional comments to include custom styles sheets? A: The "voice-family" hack, better known as the Tantek Celik Box Model Hack, is used to hide specific CSS rules from IE4/5 on Windows because of incorrect implementations of the CSS standard in those browsers. It is an attempt to deliver the most correct single stylesheet to all browsers, without resorting to browser sniffing and multiple stylesheets. Ironically this hack is the result of many man-hours (months?) of experimentation and testing to develop a standards-compliant stylesheet that works across older, newer, and future browsers. It is one of several workarounds that been created to make up for the horrible state of browser compliance to the CSS standard. See Jeffrey Zeldman's Designing with Web Standards for an in-depth look at why adhering to standards (as much as possible) is a worthy goal, and why using browser sniffing and multiple stylesheets only causes headaches for the developer: http://www.amazon.com/Designing-Web-Standards-Jeffrey-Zeldman/dp/0321385551/ One example is the arms race to keep up with browser/operating system combinations, not to mention mobile phones and other future devices with browsing capability. The detection code has to be changed with each new combination, and because of the way that many browsers masquerade as Netscape Navigator, detection can become a full time job. 
Another good reference is the Web Standards Project, which has a lot of good information and tutorials on the subject: http://www.webstandards.org/ If you move your coding style towards standards-compliance, you will generally not have to be as concerned about the release of future browsers. Yes, you still have to test against them, but you don't have to write and then test custom stylesheets for each one. A: Hacks of any kind are dangerous as they are prone to have unintended effects in future browsers (lots broke with IE7). The safe ways of filtering CSS are: * *(For IE only) Using conditional comments. These will always work on Microsoft browsers and always be ignored by all other browsers as they are within comments *Feature targeting - using CSS selectors that are only supported by modern browsers to stop older browsers trying to interpret the rules. However, the fact that a browser recognises syntax does not mean it handles it correctly. All you guarantee here is that older browsers won't try to render these rules, not that modern ones will do them correctly. Whenever possible use the subset of CSS supported correctly by all major browsers. This is improving over time as older, buggier browsers drop to ignorable percentages of your users. A: My feeling on hacks like this is that you should avoid them if you can. If it is possible to get the correct rendering across browsers without resorting to such shenanigans, then you should do it the right way. However, sometimes browsers have buggy CSS implementations, and it is necessary to use hacks like this. A: Don't use conditional includes. Use a CSS selector instead, it is much more elegant. You can target classes at individual browsers (and/or versions): .myClass { ... } .ie6 .myClass { ... }
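For reference, since the question discusses the hack without showing it: the canonical form of the voice-family (Tantek Celik) hack circulated at the time looked like the sketch below. IE5.x/Windows mis-parses the escaped string value, stops reading the rule early, and keeps the first width; compliant browsers read through to the second one.

```css
/* Tantek Celik box model hack: IE5.x/Windows stops parsing at the
   escaped voice-family value and keeps width: 400px; standards-
   compliant browsers continue and apply width: 300px. */
div.content {
  width: 400px;          /* for IE5.x/Windows (broken box model) */
  voice-family: "\"}\"";
  voice-family: inherit;
  width: 300px;          /* for standards-compliant browsers */
}
```

The conditional-comment alternative mentioned in the answers keeps the main stylesheet hack-free, e.g. `<!--[if lt IE 7]><link rel="stylesheet" href="ie-fixes.css"><![endif]-->` in the page head (the filename here is a placeholder).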
{ "language": "en", "url": "https://stackoverflow.com/questions/158004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I specify IP Ranges (RegEx) for SkipHosts in AWStats config files? I'm updating some old AWStats config files to filter out some specific IP ranges. Here's the pertinent section of the config file: # Do not include access from clients that match following criteria. # If your log file contains IP addresses in host field, you must enter here # matching IP addresses criteria. # If DNS lookup is already done in your log file, you must enter here hostname # criteria, else enter ip address criteria. # The opposite parameter of "SkipHosts" is "OnlyHosts". # Note: Use space between each value. This parameter is not case sensitive. # Note: You can use regular expression values writing value with REGEX[value]. # Change : Effective for new updates only # Example: "127.0.0.1 REGEX[^192\.168\.] REGEX[^10\.]" # Example: "localhost REGEX[^.*\.localdomain$]" # Default: "" # SkipHosts="" I want to, for example, filter out X.Y.Z.[97-110] I tried this format (Note: Not these IP values, using private range as example): REGEX[^192\.168\.1\.[97-110]] But it causes the following error: CGI Error: The specified CGI application misbehaved by not returning a complete set of HTTP headers. I hate how everything uses a different RegEx syntax. Does anyone have any idea how this one works, and how I can specify a range here? A: Assuming that character classes are supported within REGEX[ ]: SkipHosts = "REGEX[^192\.168\.1\.(9[7-9]|10[0-9]|110)$]" A: The [97-110] in the regex you tried is a character class, not a numeric range: it matches single characters (9, the invalid reversed range 7-1, 1, and 0), which is what messes things up. You can use SkipHosts="REGEX[^192\.168\.1\.(97|98|99|100|101|102|103|104|105|106|107|108|109|110)]" if you're so inclined. A: Does AWStats run if you leave SkipHosts empty? Otherwise, try the commandline utility to check for errors. For example, using Windows: c:\perlpath\perl.exe awstats.pl config=yourconfigfile -update -logfile=yourlogfile That should give more details.
{ "language": "en", "url": "https://stackoverflow.com/questions/158008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to use 'find' to search for files created on a specific date? How do I use the UNIX command find to search for files created on a specific date? A: Use this command to search for files and folders on /home/ add a time period of time according to your needs: find /home/ -ctime time_period Examples of time_period: * *More than 30 days ago: -ctime +30 *Less than 30 days ago: -ctime -30 *Exactly 30 days ago: -ctime 30 A: @Max: is right about the creation time. However, if you want to calculate the elapsed days argument for one of the -atime, -ctime, -mtime parameters, you can use the following expression ELAPSED_DAYS=$(( ( $(date +%s) - $(date -d '2008-09-24' +%s) ) / 60 / 60 / 24 - 1 )) Replace "2008-09-24" with whatever date you want and ELAPSED_DAYS will be set to the number of days between then and today. (Update: subtract one from the result to align with find's date rounding.) So, to find any file modified on September 24th, 2008, the command would be: find . -type f -mtime $(( ( $(date +%s) - $(date -d '2008-09-24' +%s) ) / 60 / 60 / 24 - 1 )) This will work if your version of find doesn't support the -newerXY predicates mentioned in @Arve:'s answer. A: It's two steps but I like to do it this way: First create a file with a particular date/time. In this case, the file is 2008-10-01 at midnight touch -t 0810010000 /tmp/t Now we can find all files that are newer or older than the above file (going by file modified date). You can also use -anewer for accessed and -cnewer file status changed. 
find / -newer /tmp/t find / -not -newer /tmp/t You could also look at files between certain dates by creating two files with touch touch -t 0810010000 /tmp/t1 touch -t 0810011000 /tmp/t2 This will find files between the two dates & times find / -newer /tmp/t1 -and -not -newer /tmp/t2 A: You could do this: find ./ -type f -ls |grep '10 Sep' Example: [root@pbx etc]# find /var/ -type f -ls | grep "Dec 24" 791235 4 -rw-r--r-- 1 root root 29 Dec 24 03:24 /var/lib/prelink/full 798227 288 -rw-r--r-- 1 root root 292323 Dec 24 23:53 /var/log/sa/sar24 797244 320 -rw-r--r-- 1 root root 321300 Dec 24 23:50 /var/log/sa/sa24 A: As pointed out by Max, you can't, but checking files modified or accessed is not all that hard. I wrote a tutorial about this, as late as today. The essence of it is to use -newerXY and ! -newerXY: Example: To find all files modified on the 7th of June, 2007: $ find . -type f -newermt 2007-06-07 ! -newermt 2007-06-08 To find all files accessed on the 29th of September, 2008: $ find . -type f -newerat 2008-09-29 ! -newerat 2008-09-30 Or, files which had their permission changed on the same day: $ find . -type f -newerct 2008-09-29 ! -newerct 2008-09-30 If you don't change permissions on the file, 'c' would normally correspond to the creation date, though. A: With the -atime, -ctime, and -mtime switches to find, you can get close to what you want to achieve. A: You can't. The -c switch tells you when the permissions were last changed, -a tests the most recent access time, and -m tests the modification time. The filesystem used by most flavors of Linux (ext3) doesn't support a "creation time" record. Sorry! A: I found this scriptlet in a script that deletes all files older than 14 days: CNT=0 for i in $(find -type f -ctime +14); do ((CNT = CNT + 1)) echo -n "." >> $PROGRESS rm -f $i done echo deleted $CNT files, done at $(date "+%H:%M:%S") >> $LOG I think a little additional "man find" and looking for the -ctime / -atime etc. parameters will help you here.
A: cp `ls -ltr | grep 'Jun 14' | perl -wne 's/^.*\s+(\S+)$/$1/; print $1 . "\n";'` /some_destination_dir
{ "language": "en", "url": "https://stackoverflow.com/questions/158044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "294" }
Q: Why can't I use Template Toolkit? I am trying to use TemplateToolkit instead of good ole' variable interpolation and my server is giving me a lot of grief. Here are the errors I am getting: *** 'D:\Inetpub\gic\source\extjs_source.plx' error message at: 2008/09/30 15:27:37 failed to create context: failed to create context: failed to load Template/Stash/XS.pm: Couldn't load Template::Stash::XS 2.20: Can't load 'D:/Perl/site/lib/auto/Template/Stash/XS/XS.dll' for module Template::Stash::XS: load_file:The specified procedure could not be found at D:/Perl/lib/DynaLoader.pm line 230. at D:/Perl/site/lib/Template/Stash/XS.pm line 31 BEGIN failed--compilation aborted at D:/Perl/site/lib/Template/Stash/XS.pm line 31. Compilation failed in require at D:/Perl/site/lib/Template/Config.pm line 82. The Platform is Windows Server 2003 and we are using ActiveState perl and PPM for the packages with IIS. A: From what I hear, if Template Toolkit is available for Strawberry Perl, you should definitely look into switching to Strawberry. A: I figured this one out after a long time. Apparently the ActiveState people didn't check much into the package because it requires Template::Stash::XS, but that's not actually available in PPM. To fix this issue just edit the Template/Config.pm and change Template::Stash::XS to Template::Stash.
{ "language": "en", "url": "https://stackoverflow.com/questions/158055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to position one element relative to another with jQuery? I have a hidden DIV which contains a toolbar-like menu. I have a number of DIVs which are enabled to show the menu DIV when the mouse hovers over them. Is there a built-in function which will move the menu DIV to the top right of the active (mouse hover) DIV? I'm looking for something like $(menu).position("topright", targetEl); A: Here is a jQuery function I wrote that helps me position elements. Here is an example usage: $(document).ready(function() { $('#el1').position('#el2', { anchor: ['br', 'tr'], offset: [-5, 5] }); }); The code above aligns the bottom-right of #el1 with the top-right of #el2. ['cc', 'cc'] would center #el1 in #el2. Make sure that #el1 has the css of position: absolute and z-index: 10000 (or some really large number) to keep it on top. The offset option allows you to nudge the coordinates by a specified number of pixels. The source code is below: jQuery.fn.getBox = function() { return { left: $(this).offset().left, top: $(this).offset().top, width: $(this).outerWidth(), height: $(this).outerHeight() }; } jQuery.fn.position = function(target, options) { var anchorOffsets = {t: 0, l: 0, c: 0.5, b: 1, r: 1}; var defaults = { anchor: ['tl', 'tl'], animate: false, offset: [0, 0] }; options = $.extend(defaults, options); var targetBox = $(target).getBox(); var sourceBox = $(this).getBox(); //origin is at the top-left of the target element var left = targetBox.left; var top = targetBox.top; //alignment with respect to source top -= anchorOffsets[options.anchor[0].charAt(0)] * sourceBox.height; left -= anchorOffsets[options.anchor[0].charAt(1)] * sourceBox.width; //alignment with respect to target top += anchorOffsets[options.anchor[1].charAt(0)] * targetBox.height; left += anchorOffsets[options.anchor[1].charAt(1)] * targetBox.width; //add offset to final coordinates left += options.offset[0]; top += options.offset[1]; $(this).css({ left: left + 'px', top: top + 'px' }); } A: tl;dr: 
(try it here) If you have the following HTML: <div id="menu" style="display: none;"> <!-- menu stuff in here --> <ul><li>Menu item</li></ul> </div> <div class="parent">Hover over me to show the menu here</div> then you can use the following JavaScript code: $(".parent").mouseover(function() { // .position() uses position relative to the offset parent, var pos = $(this).position(); // .outerWidth() takes into account border and padding. var width = $(this).outerWidth(); //show the menu directly over the placeholder $("#menu").css({ position: "absolute", top: pos.top + "px", left: (pos.left + width) + "px" }).show(); }); But it doesn't work! This will work as long as the menu and the placeholder have the same offset parent. If they don't, and you don't have nested CSS rules that care where in the DOM the #menu element is, use: $(this).append($("#menu")); just before the line that positions the #menu element. But it still doesn't work! You might have some weird layout that doesn't work with this approach. In that case, just use jQuery.ui's position plugin (as mentioned in an answer below), which handles every conceivable eventuality. Note that you'll have to show() the menu element before calling position({...}); the plugin can't position hidden elements. Update notes 3 years later in 2012: (The original solution is archived here for posterity) So, it turns out that the original method I had here was far from ideal. In particular, it would fail if: * *the menu's offset parent is not the placeholder's offset parent *the placeholder has a border/padding Luckily, jQuery introduced methods (position() and outerWidth()) way back in 1.2.6 that make finding the right values in the latter case here a lot easier. For the former case, appending the menu element to the placeholder works (but will break CSS rules based on nesting). A: Why complicating too much? 
The solution is very simple. CSS: .active-div{ position:relative; } .menu-div{ position:absolute; top:0; right:0; display:none; } jQuery: $(function(){ $(".active-div").hover(function(){ $(".menu-div").prependTo(".active-div").show(); },function(){$(".menu-div").hide(); }); }); It works even if: * *the two divs are placed anywhere else *the browser is re-sized A: You can use the jQuery plugin PositionCalculator That plugin has also included collision handling (flip), so the toolbar-like menu can be placed at a visible position. $(".placeholder").on('mouseover', function() { var $menu = $("#menu").show();// result for hidden element would be incorrect var pos = $.PositionCalculator( { target: this, targetAt: "top right", item: $menu, itemAt: "top left", flip: "both" }).calculate(); $menu.css({ top: parseInt($menu.css('top')) + pos.moveBy.y + "px", left: parseInt($menu.css('left')) + pos.moveBy.x + "px" }); }); for that markup: <ul class="popup" id="menu"> <li>Menu item</li> <li>Menu item</li> <li>Menu item</li> </ul> <div class="placeholder">placeholder 1</div> <div class="placeholder">placeholder 2</div> Here is the fiddle: http://jsfiddle.net/QrrpB/1657/ A: NOTE: This requires jQuery UI (not just jQuery). You can now use: $("#my_div").position({ my: "left top", at: "left bottom", of: this, // or $("#otherdiv") collision: "fit" }); For fast positioning (jQuery UI/Position). You can download jQuery UI here. A: Something like this? $(menu).css("top", targetEl.y + "px"); $(menu).css("left", targetEl.x - widthOfMenu + "px"); A: This works for me: var posPersonTooltip = function(event) { var tPosX = event.pageX - 5; var tPosY = event.pageY + 10; $('#personTooltipContainer').css({top: tPosY, left: tPosX}); }; A: This is what worked for me in the end.
var showMenu = function(el, menu) { //get the position of the placeholder element var pos = $(el).offset(); var eWidth = $(el).outerWidth(); var mWidth = $(menu).outerWidth(); var left = (pos.left + eWidth - mWidth) + "px"; var top = 3+pos.top + "px"; //show the menu directly over the placeholder $(menu).css( { position: 'absolute', zIndex: 5000, left: left, top: top } ); $(menu).hide().fadeIn(); };
{ "language": "en", "url": "https://stackoverflow.com/questions/158070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "360" }
Q: DotNetNuke "keepalive" I've got a couple DNN portals I manage and I need a solution to keep them "alive" during slack traffic times. After a given time of inactivity IIS will unload the DNN application from memory, which will affect load time for the first client request. DNN has the "KeepAlive.aspx" file that I hit with a wget command from a CRON job every 5 minutes. I'm dubious of the effectiveness of this method. Does anyone have any other ideas? A: A good website monitoring service will most likely provide you with a URL to check to see if the site is functioning; that is what the Keepalive URL is for. Have the service check the URL more frequently than 15 minutes and you should be good to go with keeping the site up. There's always a chance the site will go down for some other issue, but the keepalive service should bring it back up if that happens and another user hasn't already hit it. A: In the Global application start event, you could set up a cache item or timer with a timeout of 5 minutes, and in the callback code, ping a simple page that should return HTTP 200 - reset the cache/timer, and repeat. A: If you are looking for a service there are a number of them out there, some free, some not * *Host-Tracker *Pingdom *KeepAliveForever I have used both Host-Tracker and Pingdom before; they are great as they notify you of outages as well
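The second answer's timer-plus-ping idea is language-agnostic. Below is a minimal Python sketch (not the original ASP.NET code; the class name `KeepAlivePinger` is made up for illustration) of a self-rescheduling timer that fires a callback every N seconds — in practice the callback would be an HTTP request to KeepAlive.aspx.

```python
import threading

class KeepAlivePinger:
    """Invoke `ping` every `interval` seconds until stopped.
    In real use, ping might be: lambda: urllib.request.urlopen(keepalive_url)."""

    def __init__(self, interval, ping):
        self.interval = interval
        self.ping = ping
        self._timer = None
        self._stopped = threading.Event()

    def _run(self):
        if self._stopped.is_set():
            return
        self.ping()          # hit the keepalive URL
        self._schedule()     # ...and re-arm the timer, like the cache-callback trick

    def _schedule(self):
        self._timer = threading.Timer(self.interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def start(self):
        self._schedule()

    def stop(self):
        self._stopped.set()
        if self._timer is not None:
            self._timer.cancel()
```

A cron job calling wget achieves the same effect from outside the process; this variant keeps the scheduling inside the application itself.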
{ "language": "en", "url": "https://stackoverflow.com/questions/158091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Echo A Link, Get A Trailing Slash? I've discovered that any time I do the following: echo '<a href="http://" title="bla">huzzah</a>'; I end up with the following being rendered to the browser: <a href="http:///" title="bla">huzzah</a> This is particularly annoying when I link to a file with an extension, as it breaks the link. Any ideas why this is happening and how I can fix it? Update: For those asking about my exact implementation, here it is. In my troubleshooting I've dumbed it down as much as I could, so please don't mind where I concat plain text to plaintext... function print_it($item) { echo '<div class="listItem clearfix">'; echo '<div class="info">'; echo '<span class="title">'; if(isset($item[6])) { echo '<a href="http://" title="">' . 'me' . '</a>'; } echo '</span>'; echo '</div></div>'; } Update: In response to Matt Long, I pasted in your line and it rendered the same. Update: In response to Fire Lancer, I've put back in my original attempt, and will show you both below. echo substr($item[6],13) . '<br>'; echo '<a href="http://' . substr($item[6],13) . '" title="' . $item[0] . '">' . $item[0] . '</a>'; <span class="title">www.edu.gov.on.ca%2Feng%2Ftcu%2Fetlanding.html<br> <a href="http://www.edu.gov.on.ca%2Feng%2Ftcu%2Fetlanding.html" title="Employment Ontario">Employment Ontario</a></span> The reason for the substr'ing is due to the URL being run through rawurlencode() elsewhere, and linking to http%3A%2F%2F makes the page think it is a local/relative link. Update: I pasted the above response without really looking at it. So the HTML is correct when viewing source, but the actual page interprets it with another trailing slash after it. Solution: This was all a result of rawlurlencode(). If I decoded, or skipped the encoding all together, everything worked perfectly. Something about rawurlencode() makes the browser want to stick a trailing slash in there. A: Ive never had that, how ecactly are you echoing the link? All the following should work. 
echo '<a href="http://someothersite.com">Link</a>'; echo '<a href="anotherpage.php">Some page</a>'; echo '<a href="../pageinparentdir.php">Another page</a>'; etc edit, since you added the info. You can't just have http:// as href; even entering that link directly into an HTML page has that effect. eg: html: <a href="http://" title="bla">huzzah</a> link (in FF3): http:/// A: Firefox, especially, shows you the HTML source the way it's seeing it, which is rarely the way you've sent it. Clearly something about your link or its context is making the browser interpret a trailing slash. I wonder if it's a side effect of the URL encoding. If you rawurldecode it, will that help? If there are parts of the URL that need to stay encoded, you could search for the slashes and just put those back. A: The error must be elsewhere. echo writes the string, verbatim. No post-processing is done on any part. The additional slash is therefore added elsewhere in your code (prior to passing the string to echo). A: Do you get the same result if you use double quotes and escape internal double quotes like this? echo "<a href=\"http://\" title=\"bla\">huzzah</a>"; A: If I put that echo command in my PHP code, it outputs "http://" as expected (you can see that in the source of the generated output), but when I then mouse over the link in the resulting page (with IE7), it shows http:///. My guess is that that's browser behaviour, because there can't be a http:// link without a host name or IP address (you can't just access the protocol). A: As some guys pointed out, 'http://' is not a valid link, so your browser adds the extra slash at the end. To see it, try a lynx -dump http://yourdomain/yourfile.php (if you are fortunate enough to have Linux) or telnet from your box to your server on port 80, typing this: GET /path/file.php HTTP/1.0 and look at the result. A: Have you looked into your PHP config settings?
It might be magic_quotes_gpc deciding to escape things for you (I've been bitten several times by that setting, especially when working with AJAX/JSON traffic). Try making sure it is off and echoing again (you might need to edit your php.ini file, or add php_flag magic_quotes_gpc off to an .htaccess file in the directory you are working in, depending on your environment).
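The asker's own solution traced the problem to rawurlencode(). A small sketch — in Python rather than the original PHP, purely for illustration — shows why: fully percent-encoding a URL turns its slashes into %2F, so the browser no longer sees path separators (and the example string matches the one quoted in the question), while decoding restores a usable link target.

```python
from urllib.parse import quote, unquote

raw = "www.edu.gov.on.ca/eng/tcu/etlanding.html"

# Like rawurlencode() with no reserved characters spared:
# every '/' becomes %2F, so the href no longer contains path separators.
encoded = quote(raw, safe="")

# Decoding first (the asker's fix) restores the original path.
decoded = unquote(encoded)
```

This is why skipping the encoding, or decoding before building the href, made the links work.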
{ "language": "en", "url": "https://stackoverflow.com/questions/158104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Math.IEEERemainder returns negative results. Why? The .net framework includes Math.IEEERemainder(x, y) in addition to the standard mod operator. What is this function really doing? I don't understand the negative numbers that this produces. Example: Math.IEEERemainder(0, 2) = 0 Math.IEEERemainder(1, 2) = 1 Math.IEEERemainder(2, 2) = 0 Math.IEEERemainder(3, 2) = -1 A: If you read the example given at System.Math.IEEERemainder's MSDN page, you'll notice that two positive numbers can have a negative remainder. Return Value A number equal to x - (y * Q), where Q is the quotient of x / y rounded to the nearest integer (if x / y falls halfway between two integers, the even integer is returned). So: 3 - (2 * (round(3 / 2))) = -1 /* ... Divide two double-precision floating-point values: 1) The IEEE remainder of 1.797693e+308/2.00 is 0.000000e+000 2) The IEEE remainder of 1.797693e+308/3.00 is -1.000000e+000 Note that two positive numbers can yield a negative remainder. */ Epilogue The actual question could be, "Why do we have two remainder operations?" When dealing with floating point data, you always need to be cognizant of your floating point standard. Since we're in the 21st century, most everything is on IEEE 754 and very few of us worry about, say, VAX F_Float versus IEEE 754. The C# standard states that the remainder operator (Section 7.7.3), when applied to floating point arguments, is analogous to the remainder operator when applied to integer arguments. That is, the same mathematical formula¹ is used (with additional considerations for corner cases associated with floating point representations) in both integer and floating point remainder operations. Therefore, if you are looking to have your remainder operations on floating point numbers conform to your current IEEE 754 rounding modes, it is advisable to use Math.IEEERemainder. However, if your usage is not particularly sensitive to the subtle difference in rounding produced by the C# remainder operator, then continue using the operator. ¹ Given: z = x % y, then z = x - (x / y) * y
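The x - (y * Q) formula from the answer is easy to verify directly. A short sketch — in Python rather than C#, for illustration only — reproduces the questioner's four results; note that Python's built-in round() also rounds halfway cases to the even integer, matching the IEEE 754 rule, and that math.remainder (Python 3.7+) is the library's native IEEE 754 remainder.

```python
import math

def ieee_remainder(x, y):
    # x - (y * Q), where Q = x / y rounded to the nearest integer.
    # Halfway cases go to the even integer, which is exactly what
    # Python's round() does (banker's rounding).
    return x - y * round(x / y)
```

So for (3, 2): Q = round(1.5) = 2 (even), giving 3 - 4 = -1 — a negative remainder from two positive inputs, whereas 3 % 2 is still 1.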
{ "language": "en", "url": "https://stackoverflow.com/questions/158120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Why do SocketChannel writes always complete for the full amount even on non-blocking sockets? Using the Sun Java VM 1.5 or 1.6 on Windows, I connect a non-blocking socket. I then fill a ByteBuffer with a message to output, and attempt to write() to the SocketChannel. I expect the write to complete only partially if the amount to be written is greater than the amount of space in the socket's TCP output buffer (this is what I expect intuitively, it's also pretty much my understanding of the docs), but that's not what happens. The write() always seems to return reporting the full amount written, even if it's several megabytes (the socket's SO_SNDBUF is 8KB, much, much less than my multi-megabyte output message). A problem here is that I can't test the code that handles the case where the output is partially written (registering an interest set of WRITE to a selector and doing a select() to wait until the remainder can be written), as that case never seems to happen. What am I not understanding? A: I managed to reproduce a situation that might be similar to yours. I think, ironically enough, your recipient is consuming the data faster than you're writing it. 
import java.io.InputStream; import java.net.ServerSocket; import java.net.Socket; public class MyServer { public static void main(String[] args) throws Exception { final ServerSocket ss = new ServerSocket(12345); final Socket cs = ss.accept(); System.out.println("Accepted connection"); final InputStream in = cs.getInputStream(); final byte[] tmp = new byte[64 * 1024]; while (in.read(tmp) != -1); Thread.sleep(100000); } } import java.net.InetSocketAddress; import java.nio.ByteBuffer; import java.nio.channels.SocketChannel; public class MyNioClient { public static void main(String[] args) throws Exception { final SocketChannel s = SocketChannel.open(); s.configureBlocking(false); s.connect(new InetSocketAddress("localhost", 12345)); s.finishConnect(); final ByteBuffer buf = ByteBuffer.allocate(128 * 1024); for (int i = 0; i < 10; i++) { System.out.println("to write: " + buf.remaining() + ", written: " + s.write(buf)); buf.position(0); } Thread.sleep(100000); } } If you run the above server and then make the above client attempt to write 10 chunks of 128 kB of data, you'll see that every write operation writes the whole buffer without blocking. However, if you modify the above server not to read anything from the connection, you'll see that only the first write operation on the client will write 128 kB, whereas all subsequent writes will return 0. Output when the server is reading from the connection: to write: 131072, written: 131072 to write: 131072, written: 131072 to write: 131072, written: 131072 ... Output when the server is not reading from the connection: to write: 131072, written: 131072 to write: 131072, written: 0 to write: 131072, written: 0 ... A: I've been working with UDP in Java and have seen some really "interesting" and completely undocumented behavior in the Java NIO stuff in general. The best way to determine what is happening is to look at the source which comes with Java. 
I also would wager rather highly that you might find a better implementation of what you're looking for in any other JVM implementation, such as IBM's, but I can't guarantee that without looking at them myself. A: I'll make a big leap of faith and assume that the underlying network provider for Java is the same as for C...the O/S allocates more than just SO_SNDBUF for every socket. I bet if you put your send code in a for(1,100000) loop, you would eventually get a write that succeeds with a value smaller than requested. A: You really should look at an NIO framework like MINA or Grizzly. I've used MINA with great success in an enterprise chat server. It is also used in the Openfire chat server. Grizzly is used in Sun's JavaEE implementation. A: Where are you sending the data? Keep in mind that the network acts as a buffer that is at least equal in size to your SO_SNDBUF plus the receiver's SO_RCVBUF. Add this to the reading activity by the receiver as mentioned by Alexander and you can get a lot of data soaked up. A: I can't find it documented anywhere, but IIRC[1], send() is guaranteed to either a) send the supplied buffer completely, or b) fail. It will never complete the send partially. [1] I've written multiple Winsock implementations (for Win 3.0, Win 95, Win NT, etc), so this may be Winsock-specific (rather than generic sockets) behavior.
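The accepted explanation — the receiver was draining the buffer as fast as the sender filled it — can be reproduced outside Java. Here is a Python sketch (the function name is made up; this is not the original Java code) of the "server that never reads" case: writes on a non-blocking socket succeed until the kernel send buffer (plus the peer's receive buffer) fills, after which the write blocks would-be data, analogous to SocketChannel.write() returning 0.

```python
import socket

def fill_until_blocked(chunk_size=65536, max_chunks=512):
    """Write to a non-blocking socket whose peer never reads, until the
    OS buffers fill up -- the situation where Java's write() returns 0."""
    sender, receiver = socket.socketpair()   # receiver never calls recv()
    sender.setblocking(False)
    chunk = b"x" * chunk_size
    total = 0
    try:
        for _ in range(max_chunks):
            total += sender.send(chunk)      # may be a partial write
    except BlockingIOError:
        pass  # buffers full; a selector would now wait for writability
    finally:
        sender.close()
        receiver.close()
    return total
```

If the receiver read in a loop instead, every send would complete — which is why the original poster never saw a partial write.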
{ "language": "en", "url": "https://stackoverflow.com/questions/158121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Windows Forms: A modal form that gets opened/closed by the application rather than the user? I have what I believe to be a fairly well structured .NET 3.5 forms application (Unit Tests, Dependency Injection, SoC, the forms simply relay input and display output and don't do any logic, yadda yadda) I am just missing the winforms knowledge for how to get this bit to work. When a connection to the database is lost - a frequent occurrence - I am detecting and handling it and would like a modal form to pop up, blocking use of the application until the connection is re-established. I am not 100% sure how to do that since I am not waiting for user input, rather I am polling the database using a timer. My attempt was to design a form with a label on it and to do this: partial class MySustainedDialog : Form { public MySustainedDialog(string msg) { InitializeComponent(); lbMessage.Text = msg; } public new void Show() { base.ShowDialog(); } public new void Hide() { this.Close(); } } public class MyNoConnectionDialog : INoConnectionDialog { private FakeSustainedDialog _dialog; public void Show() { var w = new BackgroundWorker(); w.DoWork += delegate { _dialog = new MySustainedDialog("Connection Lost"); _dialog.Show(); }; w.RunWorkerAsync(); } public void Hide() { _dialog.Close(); } } This doesn't work since _dialog.Close() is a cross-thread call. I've been able to find information on how to resolve this issue within a windows form but not in a situation like this one where you need to create the form itself. Can someone give me some advice how to achieve what I am trying to do? EDIT: Please note, I only tried Background worker for lack of other ideas because I'm not tremendously familiar with how threading for the UI works so I am completely open to suggestions. I should also note that I do not want to close the form they are working on currently, I just want this to appear on top of it. 
Like an OK/Cancel dialog box but which I can open and close programmatically (and I need control over what it looks like too) A: I'm not sure about the correctness of your overall approach, but to specifically answer your question, try changing the MySustainedDialog Hide() function as follows: public new void Hide() { if (this.InvokeRequired) { this.BeginInvoke((MethodInvoker)delegate { this.Hide(); }); return; } this.Close(); } A: There is no reason to use a background worker to actually launch the new instance of your form; you can simply do it from the UI thread. A: There are two approaches I've taken in similar situations. One is to operate in the main UI thread completely. You can do this by using a Windows.Forms.Timer instance, which will fire in the main UI thread. Upside is the simplicity and complete access to all UI components. Downside is that any blocking calls will have a huge impact on user experience, preventing any user interaction whatsoever. So if you need long-running commands that eventually result in a UI action (for example if checking for the database took, say, several seconds), then you need to go cross-thread. The simplest cross-thread solution from a code perspective is to call the Control.Invoke method from your BackgroundWorker. Invoke lets you "post" work to a control, essentially saying "plz go use your owning thread to run this." A: Might it be simpler to keep all the UI work on the main UI thread rather than using the BackgroundWorker? It's tricky to say without seeing more of your code, but I don't think you should need that. When you create your timer, you can assign its Timer.SynchronizingObject to get it to use the main UI thread. And stick with that? Sorry, can't give a better answer without knowing more about the structure of your program.
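The Control.Invoke idea — "post work to the thread that owns the control" — boils down to a thread-safe work queue drained by the owning thread. A toy sketch in Python (the class and method names are invented for illustration; this is not WinForms code) makes the mechanism visible:

```python
import queue
import threading

class UiDispatcher:
    """Toy stand-in for a UI message loop: work posted from any thread
    runs later on the owning thread, like Control.Invoke/BeginInvoke."""

    def __init__(self):
        self._work = queue.Queue()
        self._owner = threading.current_thread()

    def begin_invoke(self, fn):
        # Safe to call from a worker thread; fn runs later on the owner.
        self._work.put(fn)

    def pump(self, idle_timeout=0.1):
        # Drain posted work on the owning thread. A real message loop
        # would interleave this with normal window messages.
        assert threading.current_thread() is self._owner
        while True:
            try:
                self._work.get(timeout=idle_timeout)()
            except queue.Empty:
                return
```

The InvokeRequired check in the answer above is the same idea in reverse: if the call arrives on the wrong thread, re-post it to the owner and return.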
{ "language": "en", "url": "https://stackoverflow.com/questions/158122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best way to count file downloads on a website It's surprising how difficult it is to find a simple, concise answer to this question: * *I have a file, foo.zip, on my website *What can I do to find out how many people have accessed this file? *I could use Tomcat calls if necessary A: With the answer "The simplest way would probably be instead of linking directly to the file, link to a script which increments a counter and then forwards to the file in question." This is additional: $hit_count = @file_get_contents('count.txt'); $hit_count++; @file_put_contents('count.txt', $hit_count); header('Location: http://www.example.com/download/pics.zip'); // redirect to the real file to be downloaded Here count.txt is a simple plain text file, storing the counter info. You can save it in a database table along with downloadable_filename.ext also. A: Or you could parse the log file if you don't need the data in realtime. grep foo.zip /path/to/access.log | grep 200 | wc -l In reply to comment: The log file also contains bytes downloaded, but as someone else pointed out, this may not reflect the correct count if a user cancels the download on the client side. A: The simplest way would probably be instead of linking directly to the file, link to a script which increments a counter and then forwards to the file in question. A: Use the logs--each GET request for the file is another download (unless the visitor stopped the download partway through for some reason).
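The counter-then-redirect trick in the PHP snippet above is a few lines in any language. Here is a hedged Python equivalent (function name invented for illustration); like the PHP version it does no locking, so concurrent downloads can lose counts — the database-table variant the answer mentions avoids that.

```python
def record_download(counter_path):
    """Increment a plain-text hit counter; the caller would then redirect
    to the real file (mirroring the PHP counter + header('Location: ...'))."""
    try:
        with open(counter_path) as f:
            count = int(f.read().strip() or 0)
    except (FileNotFoundError, ValueError):
        count = 0  # first hit, or a corrupted counter file
    count += 1
    with open(counter_path, "w") as f:
        f.write(str(count))
    return count
```

As the last answer notes, parsing the access log (grep for the file and a 200 status) gives the same number after the fact, without putting a script in front of the download.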
{ "language": "en", "url": "https://stackoverflow.com/questions/158124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Best open-source, cross-platform, compiled, GUI alternative to Visual Basic? I'm about to write a little GUI app that will sit in the system tray, doing a little FTP and ODBC. I'd like to develop in Linux, if possible. What would you recommend? Thanks a bunch! A: I'll probably be down mod but I think that FreePascal is your best bet. Most, if not all, of the functionalities are cross platform and resolved quite nicely. I'm not sure, but I could investigate, but the TTrayIcon is cross-platform and that's about what you need to get your app in the tray. It also has very good core connectivity with the major players on Databases. It's cross-platform in Windows, Linux, MAC OS and even in ARM and other embed environments. The only thing is it's Object Pascal and not VB'ish. A: Gambas is the obvious choice given the way you asked the question. But I don't think that it is probably what you really want. It is the closest thing to VB6 for Linux, though. If you really have to be compiled, Perl is an option (JIT) and is available ubiquitously on Linux. Most Linux apps in this situation, if they require being compiled, would use C/C++ wth the QT or GTK toolkits. But more often on Linux you would see Python or Perl being used. A: I believe jdesktop gives you cross-platform "System Tray" functionality for Java. (Edit: actually the functionality is in core Java, as of 6) And NetBeans is pretty good for developing GUIs, probably not as good as VB but not bad nonetheless. But Java may be overkill for your situation. A: For a "little GUI app" I would recommend Tk, either with Tcl or as Tkinter with Python. Tk is a very high level cross platform (and cross language) GUI toolkit that is very easy to use. Heck, I recommend Tk for large GUI apps too, but that's beside the point. 
If you go with Tcl you also get a really terrific distribution mechanism (tclkit/starkit/starpack) that makes it trivial to create single-file executables, or a two-file platform-specific runtime + platform-agnostic virtual filesystem. Python might give you better ODBC functionality, though that's just a hunch. I've not used ODBC with Tcl or Python. A: I have used several GUI toolkits for cross-platform development; here are my top 4 suggestions in my preferred order: Eclipse RCP - It may be a heavyweight, but it is cross platform, produces native GUI components for each OS, and has many deployment features. wxWidgets - Open source GUI library, can use C++ or Python (wxPython). Tkinter - really fast and easy, lightweight GUI toolkit for Python, cross platform, may be as feature complete as the above options. Java Swing - Good library, but can "look like java" (it doesn't use native GUI components) A: How important is the "sit in the system tray" bit? I don't know of anything that will let you do that in a cross-platform way. A: I still think that wxWidgets is a great cross-platform UI development toolkit that is under active development and has great community support. A: There are several VB alternatives: http://www.realsoftware.com/products/realbasic/ http://www.libertybasic.com/visual-basic.html and what about Delphi? A: I, for my sins, was a VB developer; I shifted to C# and then to C++ with Qt. I think it's going to depend on your skills as a programmer. If you are highly dependent on VB's procedural nature, then stick with BASIC as a language. If you tend to develop in classes and objects with VB, you probably will find Python, C# or Java are good alternatives. Also, when looking cross-platform it is not just the language but also the toolkit you will be using. Qt has been great for me, but there is also wxWidgets and GTK to name a couple. A: Mono by Miguel de Icaza - now owned / sponsored by Novell. It gives you 90% of the .NET framework in Linux.
A: Since you specifically mention Visual Basic, you should check out Gambas. It's not a VB clone, but it's VB like.
{ "language": "en", "url": "https://stackoverflow.com/questions/158129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: When would I use AutoResetEvent and ManualResetEvent instead of Monitor.Wait()/Monitor.Pulse()? They both seem to fulfill the same purpose. When would I choose one over the other? A: You would use a WaitHandle when you want a thread to send or receive a binary signal without the need for a critical section. Monitor.Wait and Monitor.Pulse on the other hand require a critical section. Like most of the synchronization mechanisms in the BCL, there is some overlap in how the two you mentioned can be used. But, do not think for a moment that they fulfill the same purpose. Monitor.Wait and Monitor.Pulse are a much more primitive synchronization mechanism than an MRE or ARE. In fact, you can actually build an MRE or ARE using nothing more than the Monitor class. The most important concept to understand is how the Monitor.Wait and WaitHandle.WaitOne methods differ. Wait and WaitOne will both put the thread in the WaitSleepJoin state, which means the thread becomes idle and only responds to either a Thread.Interrupt or the respective Pulse or Set call. But, and this is a major difference, Wait will leave a critical section and reacquire it in an atomic manner. WaitOne simply cannot do this. It is a difference so fundamental to the way these synchronization mechanisms behave that it defines the scenarios in which they can be used. In most situations you would choose an MRE or ARE. These satisfy most situations where one thread needs to receive a signal from another. However, if you want to create your own signaling mechanism then you would need to use Wait and Pulse. But, again, the .NET BCL has most of the popular signaling mechanisms covered already. The following signaling mechanisms already exist.¹ * *ManualResetEvent (or ManualResetEventSlim) *AutoResetEvent *Semaphore (or SemaphoreSlim) *EventWaitHandle *CountdownEvent *Barrier ¹ An honorable mention goes to the BlockingCollection class. It is not a signaling mechanism per se, but it does have the qualities of a signaling mechanism with the added benefit that you can attach data to the signal. In this case the signal means that an item is available in the collection, and the data associated with that signal is the item itself. A: Use the events when you've got a thread that is waiting on one of or all of a number of events to do something. Use the monitor if you want to restrict access to a data structure by limiting how many threads can access it. Monitors usually protect a resource, whereas events tell you something's happening, like the application shutting down. Also, events can be named (see the OpenExisting method); this allows them to be used for synchronization across different processes. A: In my opinion, it's better to use Monitor if you can. Monitor.Wait and Monitor.Pulse/PulseAll are used for signalling between threads (as are Manual/AutoResetEvent); however, Monitor is quicker and doesn't use a native system resource. Also, apparently Monitor is implemented in user mode and is managed, whereas Manual/AutoResetEvents require switching to kernel mode and p/invoke out to native win32 calls that use a wait handle. There are situations where you would need to use Manual/AutoResetEvent, for example, to signal between processes you can use named events, and I guess to signal native threads in your app. I am just regurgitating what I have read in this excellent article about threading. The whole article is worth reading, however the link takes you to the wait handle section that details the events and monitor wait/pulse. A: This tutorial has detailed descriptions of what you'll need to know: http://www.albahari.com/threading/ In particular, this will cover the XXXResetEvent classes, http://www.albahari.com/threading/part2.aspx and this will cover Wait/Pulse : http://www.albahari.com/threading/part4.aspx#_Wait_and_Pulse
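The claim that an ARE can be built "using nothing more than the Monitor class" is worth seeing concretely. Here is a hedged sketch in Python (not C#; Python's threading.Condition plays the role of a monitor, with wait_for/notify standing in for Monitor.Wait/Monitor.Pulse) of an auto-reset event built on monitor primitives:

```python
import threading

class AutoResetEvent:
    """Auto-reset event built from a monitor-style Condition, illustrating
    that Wait/Pulse are the more primitive building blocks."""

    def __init__(self):
        self._cond = threading.Condition()   # the "monitor"
        self._signaled = False

    def set(self):
        with self._cond:                     # enter the critical section
            self._signaled = True
            self._cond.notify()              # Monitor.Pulse: wake one waiter

    def wait_one(self, timeout=None):
        with self._cond:
            # wait_for atomically releases the lock while waiting and
            # reacquires it on wake-up -- exactly the Monitor.Wait behavior
            # the answer says WaitOne cannot provide.
            if not self._cond.wait_for(lambda: self._signaled, timeout):
                return False
            self._signaled = False           # auto-reset: only one waiter passes
            return True
```

A manual-reset variant is the same code minus the auto-reset line, plus an explicit reset() and notify_all().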
{ "language": "en", "url": "https://stackoverflow.com/questions/158133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: .NET client calling a java webservice -- (how to change the root namespace) Currently we have a java webservice that we are trying to connect to via a .NET client. This is all done over SSL. Are there any well-known gotchas? This seems to be a problem that has come up again and again. What are the most well-known gotchas I should be looking for? The java web service is a SOAP/WSDL. There are no WS-* extensions like WS-Security. Ok, here is the exact problem I am looking to solve: We were given a java webservice to call from a C# client. I've tracked the problem down to the fact that the java webservice is expecting some modified xml that the C# client is not producing. The java webservice is expecting something along these lines: <?xml version="1.0" encoding="UTF-8" ?> <iAttr:MyObject1 xmlns="iAttr" xmlns:iAttr="http://www.foo.com/WS"> <iAttr:MyObject2 xmlns="isum" xmlns:isum="http://www.foo.com/WS"> <iAttr:OrderId>1001027892 </isum:OrderId> The problem is, that the xml/SOAP stuff that my client is generating is like this: <?xml version="1.0" encoding="UTF-8" ?> <iAttr:MyObject1 xmlns="iAttr" xmlns:iAttr="http://www.foo.com/WS"> <MyObject2> <OrderId>1001027892</OrderId> note: the lack of "iAttr" in the C# version. Question: How do I add the attributes programmatically in C# to match what the java WS is expecting? A: I didn't write the service. Here is the weird thing: a java client making the same webservice call works perfectly. However a .NET client making the exact same webservice call breaks. A: Well, if you wrote your service the "right" way, then there shouldn't be any problems, at least not problems of language interop.
{ "language": "en", "url": "https://stackoverflow.com/questions/158139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you restrict the size of a file being uploaded with JavaScript (or Java) without transferring the entire file? Is there a way to validate on the client side browser whether the size of a file being uploaded from a JSP page is over a set size limit without forcing the user to upload the entire file only to find out it was too large? I would like to stay away from any proprietary controls or techniques like Flash or ActiveX if possible. Thanks! A: This isn't a perfect solution, but if you check the Content-Length HTTP header with request.getHeader("Content-Length") then you can choose to not transfer the entire file. By way of explanation, an extremely large file will not be transferred all at once. You'd have to actually open a stream representing that chunk of POST data and read from it for the entire thing to be transfered. On the other hand, if you're worried about denial-of-service attacks, then you can't really trust the Content-Length header, because it can easily be forged. In this case, you should set a limit and stream a transfer of this file, stopping as soon as you've exceeded that limit. A: Suggest you reconsider the Flash decision and take a look at the YUI Uploader, here: http://developer.yahoo.com/yui/uploader/ Among other things, the fileSelect event will tell you the size of the selected file in bytes immediately after it is selected but before it's uploaded, so you'll be able to restrict accordingly. A: With JSP or PHP you won't be able to restrict the file size because your page won't get the request until the upload has already happened. At that point you can decide not to save the file but that might be too late. There are some Java solutions out there, e.g. MyUploader or Hermes. Some even support multiple file uploads and resuming partial uploads, and some also give you the source code. You can also write your own, but it will need to be a signed applet in order to function because it needs to access the local filesystem. 
If you're using Apache as your webserver, you'll need enough RAM in your machine to hold in memory the full size of all files being uploaded at any given time.
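The first answer's two points — check Content-Length up front, but don't trust it because it can be forged — combine into a stream-and-abort pattern on the server. A hedged Python sketch (names like `read_limited` and `UploadTooLarge` are invented for illustration; this is not servlet code) shows the server-side half:

```python
class UploadTooLarge(Exception):
    pass

def read_limited(stream, limit, chunk_size=8192):
    """Stream an upload, aborting as soon as `limit` bytes are exceeded.
    The Content-Length header alone can't be trusted (it is easily forged),
    so the server must stop reading once the cap is actually hit."""
    received = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return bytes(received)
        received += chunk
        if len(received) > limit:
            raise UploadTooLarge("upload exceeds %d bytes" % limit)
```

The declared Content-Length is still useful as a fast pre-check to reject obviously oversized requests before reading any body at all; the streaming cap is the enforcement.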
{ "language": "en", "url": "https://stackoverflow.com/questions/158149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Upgrading SVN 1.4 to 1.5.3 and CC.Net from 1.3 to 1.4 I think this is a multi-part question, so bear with me. Currently all of our developers use the version of Tortise built for SVN 1.4 and our SVN server is running 1.4. Our build server is running CC.Net and is using SVN 1.4. We want to upgrade. I've established that upgrading our clients to 1.5, then our server to 1.5 will work for us. However, the question comes in with CC.Net. Can we just upgrade the install of SVN on our build server to SVN 1.5? Or do we have to upgrade the install of CC.Net too? We'd like to also take this time to upgrade CC.Net, however, we'd like to make sure the SVN upgrade is done first, then come back and do CC.Net. Also adding to this mix is that in some of our projects we maintain a 'tools' folder that may or may not contain the binaries for SVN due to the nAnt scripts we use in those projects. I assume that if we upgrade the CC.Net server install of SVN to 1.5, we'll also need to update all of those projects as the CI server uses the same working directory as the nAnt scripts that get executed. clear as mud? A: Hard to answer as it seems you're asking for a plan for your environment, which I'm not in. However, here's what I'd do: * *Upgrade cc.net (you have a known good starting point, and this is the most likely breaking step. do it without any other variables so it is easier to roll back) *Test & Verify *Upgrade all the svn clients including the binaries in your "tools" folder *Test & Verify *Upgrade the svn server *Test & Verify *Test & Verify A: A little tip that may help you: SVN 1.4 clients can connect to a SVN 1.5 server, and SVN 1.5 clients can connect to a SVN 1.4 server, no problems -- just when you have a version mismatch, some of the newly added SVN features will not be available (but all the normal stuff will still work fine).
{ "language": "en", "url": "https://stackoverflow.com/questions/158150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I save a screenshot directly to a file in Windows? Is there a one-button way to save a screenshot directly to a file in Windows? TheSoftwareJedi accurately answered the above question for Windows 8 and 10. The original extra material below remains for posterity. This is a very important question, as the 316K views show as of 2021. Asked in 2008, this question was closed by SO around 2015 as being off-topic, probably because of the last question below. In Windows XP, one can press Alt-PrintScreen to copy an image of the active window, or Ctrl-PrintScreen to copy an image of the full desktop. This can then be pasted into applications that accept images: Photoshop, Microsoft Word, etc. I'm wondering: Is there a way to save the screenshot directly to a file? Do I really have to open an image program, like Paint.net or Photoshop, simply to paste an image, then save it? A: Might I suggest WinSnap http://www.ntwind.com/software/winsnap/download-free-version.html. It provides an autosave option and captures Alt+PrintScreen and other key combinations to capture the screen, windows, dialogs, etc. A: You can code something pretty simple that will hook PrintScreen and save the capture to a file. Here is something to start with, to capture the screen and save it to a file. You will just need to hook the "Print Screen" key.
using System;
using System.Drawing;
using System.IO;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

public class CaptureScreen
{
    static public void Main(string[] args)
    {
        try
        {
            Bitmap capture = CaptureScreen.GetDesktopImage();
            string file = Path.Combine(Environment.CurrentDirectory, "screen.gif");
            ImageFormat format = ImageFormat.Gif;
            capture.Save(file, format);
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
        }
    }

    public static Bitmap GetDesktopImage()
    {
        // Create a memory DC compatible with the desktop, blit the screen into it,
        // and wrap the resulting HBITMAP in a managed Bitmap.
        WIN32_API.SIZE size;
        IntPtr hDC = WIN32_API.GetDC(WIN32_API.GetDesktopWindow());
        IntPtr hMemDC = WIN32_API.CreateCompatibleDC(hDC);
        size.cx = WIN32_API.GetSystemMetrics(WIN32_API.SM_CXSCREEN);
        size.cy = WIN32_API.GetSystemMetrics(WIN32_API.SM_CYSCREEN);
        m_HBitmap = WIN32_API.CreateCompatibleBitmap(hDC, size.cx, size.cy);
        if (m_HBitmap != IntPtr.Zero)
        {
            IntPtr hOld = (IntPtr)WIN32_API.SelectObject(hMemDC, m_HBitmap);
            WIN32_API.BitBlt(hMemDC, 0, 0, size.cx, size.cy, hDC, 0, 0, WIN32_API.SRCCOPY);
            WIN32_API.SelectObject(hMemDC, hOld);
            WIN32_API.DeleteDC(hMemDC);
            WIN32_API.ReleaseDC(WIN32_API.GetDesktopWindow(), hDC);
            return System.Drawing.Image.FromHbitmap(m_HBitmap);
        }
        return null;
    }

    protected static IntPtr m_HBitmap;
}

public class WIN32_API
{
    public struct SIZE
    {
        public int cx;
        public int cy;
    }

    public const int SRCCOPY = 13369376;
    public const int SM_CXSCREEN = 0;
    public const int SM_CYSCREEN = 1;

    [DllImport("gdi32.dll", EntryPoint="DeleteDC")]
    public static extern IntPtr DeleteDC(IntPtr hDc);

    [DllImport("gdi32.dll", EntryPoint="DeleteObject")]
    public static extern IntPtr DeleteObject(IntPtr hDc);

    [DllImport("gdi32.dll", EntryPoint="BitBlt")]
    public static extern bool BitBlt(IntPtr hdcDest, int xDest, int yDest, int wDest, int hDest, IntPtr hdcSource, int xSrc, int ySrc, int RasterOp);

    [DllImport("gdi32.dll", EntryPoint="CreateCompatibleBitmap")]
    public static extern IntPtr CreateCompatibleBitmap(IntPtr hdc, int nWidth, int nHeight);

    [DllImport("gdi32.dll", EntryPoint="CreateCompatibleDC")]
    public static extern IntPtr CreateCompatibleDC(IntPtr hdc);

    [DllImport("gdi32.dll", EntryPoint="SelectObject")]
    public static extern IntPtr SelectObject(IntPtr hdc, IntPtr bmp);

    [DllImport("user32.dll", EntryPoint="GetDesktopWindow")]
    public static extern IntPtr GetDesktopWindow();

    [DllImport("user32.dll", EntryPoint="GetDC")]
    public static extern IntPtr GetDC(IntPtr ptr);

    [DllImport("user32.dll", EntryPoint="GetSystemMetrics")]
    public static extern int GetSystemMetrics(int abc);

    [DllImport("user32.dll", EntryPoint="GetWindowDC")]
    public static extern IntPtr GetWindowDC(Int32 ptr);

    [DllImport("user32.dll", EntryPoint="ReleaseDC")]
    public static extern IntPtr ReleaseDC(IntPtr hWnd, IntPtr hDc);
}

Update Here is the code to hook the PrintScreen (and other keys) from C#: Hook code A: Dropbox now provides the hook to do this automagically. If you get a free Dropbox account and install the desktop app, when you press PrtScr Dropbox will give you the option of automatically storing all screenshots to your Dropbox folder. A: You need a 3rd-party screen-grab utility for that functionality in XP. I dig Scott Hanselman's extensive blogging about cool tools and usually look there for such a utility -- sure enough, he's blogged about a couple here. A: This will do it in Delphi. Note the use of the BitBlt function, which is a Windows API call, not something specific to Delphi. Edit: Added example usage

function TForm1.GetScreenShot(OnlyActiveWindow: boolean) : TBitmap;
var
  w, h : integer;
  DC : HDC;
  hWin : Cardinal;
  r : TRect;
begin
  //take a screenshot and return it as a TBitmap.
  //if they specify "OnlyActiveWindow", then restrict the screenshot to the
  //currently focused window (same as alt-prtscrn)
  //Otherwise, get a normal screenshot (same as prtscrn)
  Result := TBitmap.Create;
  if OnlyActiveWindow then
  begin
    hWin := GetForegroundWindow;
    dc := GetWindowDC(hWin);
    GetWindowRect(hWin, r);
    w := r.Right - r.Left;
    h := r.Bottom - r.Top;
  end //if active window only
  else
  begin
    hWin := GetDesktopWindow;
    dc := GetDC(hWin);
    w := GetDeviceCaps(DC, HORZRES);
    h := GetDeviceCaps(DC, VERTRES);
  end; //else entire desktop
  try
    Result.Width := w;
    Result.Height := h;
    BitBlt(Result.Canvas.Handle, 0, 0, Result.Width, Result.Height, DC, 0, 0, SRCCOPY);
  finally
    ReleaseDC(hWin, DC);
  end; //try-finally
end;

procedure TForm1.btnSaveScreenshotClick(Sender: TObject);
var
  bmp : TBitmap;
  savdlg : TSaveDialog;
begin
  //take a screenshot, prompt for where to save it
  savdlg := TSaveDialog.Create(Self);
  bmp := GetScreenshot(False);
  try
    if savdlg.Execute then
    begin
      bmp.SaveToFile(savdlg.FileName);
    end;
  finally
    FreeAndNil(bmp);
    FreeAndNil(savdlg);
  end; //try-finally
end;

A: Try this: http://www.screenshot-utility.com/ From their homepage: When you press a hotkey, it captures and saves a snapshot of your screen to a JPG, GIF or BMP file. A: Little-known fact: in most standard Windows (XP) dialogs, you can hit Ctrl+C to get a textual copy of the content of the dialog. Example: open a file in Notepad, hit space, close the window, hit Ctrl+C on the Confirm Exit dialog, cancel, and paste the text of the dialog into Notepad. Unrelated to your direct question, but I thought it would be nice to mention it in this thread. Besides, while you do indeed need third-party software to take the screenshot, you don't need to fire up the big Photoshop for that. Something free and lightweight like IrfanView or XnView can do the job. I use MWSnap to copy arbitrary parts of the screen. I wrote a little AutoHotkey script calling GDI+ functions to do screenshots. Etc.
A: There is no way to save directly to a file without a 3rd-party tool before Windows 8. Here are my personal favorite non-third-party-tool solutions. For Windows 8 and later Win+PrintScreen saves the screenshot into a folder in <user>/Pictures/Screenshots For Windows 7 In Windows 7, just use the Snipping Tool: most easily accessed by pressing Start, then typing "sni" (Enter). Prior versions of Windows I use the following keyboard combination to capture, then save using mspaint. After you do it a couple of times, it only takes 2-3 seconds: * *Alt+PrintScreen *Win+R ("run") *type "mspaint" Enter *Ctrl-V (paste) *Ctrl-S (save) *use file dialog *Alt-F4 (close mspaint) In addition, Cropper is great (and open source). It does rectangle capture to file or clipboard, and is of course free. A: Thanks for all the source code and comments - thanks to that, I finally have an app that I wanted :) I have compiled some of the examples, and both sources and executables can be found here: http://sdaaubckp.svn.sourceforge.net/viewvc/sdaaubckp/xp-take-screenshot/ I use InterceptCaptureScreen.exe - simply run it in a command-prompt terminal, and then press Insert when you want to capture a screenshot (timestamped filenames, PNG, in the same directory as the executable); keys will be captured even if the terminal is not in focus. (I use the Insert key, since it should have an easier time propagating through, say, VNC than PrintScreen - which on my laptop requires that the Fn key also be pressed, and that does not propagate through VNC. Of course, it's easy to change the actual key used in the source code). Hope this helps, Cheers! A: Very old post I realize, but Windows finally realized how inane the process was.
In Windows 8.1 (verified, not working in Windows 7 (tnx @bobobobo)) Windows key + Print Screen saves the screenshot into a folder in <user>/Pictures/Screenshots Source - http://windows.microsoft.com/en-in/windows/take-screen-capture-print-screen#take-screen-capture-print-screen=windows-8 A: Without installing a screenshot autosave utility, yes you do. There are several utilities you can find for doing this, however. For example: http://www.screenshot-utility.com/ A: Of course, you could write a program that monitors the clipboard and displays an annoying Save As dialog for every image in the clipboard ;-). I guess you could even find out whether the last key pressed was PrintScreen, to limit the number of false positives. While I'm thinking about it... you could also google for someone who already did exactly that. EDIT: ...or just wait for someone to post the source here - as just happened :-) A: Snagit... lots of tech folks use that. A: Short of installing a screen-capture program, which I recommend, the best way to do this is the standard Print Screen method; then open Microsoft Office Picture Manager and simply paste the screenshot into the white area of the directory that you desire. It'll create a bitmap that you can edit or save as a different format. A: Thanks to TheSoftwareJedi for providing useful information about the Snipping Tool in Windows 7. Shortcut to open the Snipping Tool: go to Start, type "sni", and you will find "Snipping Tool" in the list. A: Keep Picasa running in the background, and simply press the "Print Screen" key Source A: As far as I know, in XP, yes, you must use some other app to actually save it. Vista comes with the Snipping Tool, which simplifies the process a bit! A: You may want something like this: http://addons.mozilla.org/en-US/firefox/addon/5648 I think there is a version for IE and also with Explorer integration. Pretty good software. A: It turns out that Google Picasa (free) will do this for you now.
If you have it open, when you hit Print Screen it will save the screen shot to a file and load it into Picasa. In my experience, it works great! A: Is this possible: * *Press Alt+PrintScreen *Open a folder *Right-click -> Paste screenshot Example: A benchmark result window is open; take a screenshot. Open C:\Benchmarks. Right-click -> Paste screenshot. A file named screenshot00x.jpg appears, with the text screenshot00x selected. Type Overclock5. That's it. No need to open anything. If you do not type anything, the default name stays.
{ "language": "en", "url": "https://stackoverflow.com/questions/158151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118" }
Q: Design of high volume TCP Client I have a .NET TCP client that sends high volumes of messages to a (.NET async) TCP server. I need to keep sending messages to the server, but I run out of ports on the client due to TIME_WAIT. How can a program continually and reliably send messages without using all of the available ports? Is there a method to keep reusing the same socket? I have looked at Disconnect() and the REUSEADDRESS socket flag but cannot find any good examples of their use. In fact, most sources say not to use Disconnect, as it is for lower-level use (i.e. it only recycles the socket handle). I'm thinking that I need to switch to UDP, or perhaps there is a method using C++ and IOCP? A: You can keep the socket open if your server and client are aware of the format of the data. You're closing the socket so that the server can "see" that the client is "done". If you have some protocol, then the server can "know" when it's finished receiving a block of data. You can look for an end-of-message token of some kind, you can pass in the length of the message and read the rest based on size, etc. There are different ways of doing it. But there's no reason to constantly open and close connections to the server -- that's what's killing you here. A: Can your client just keep the same socket open and send messages in a loop?

open socket connection
while(running)
    send messages over socket
close socket connection

A: TCP tries very hard to prevent congestion in the network. All new TCP connections begin in a "slow start" state, where they send only one packet and wait for an acknowledgement from the other end. If the ACK is received, TCP will send two packets, then four, etc., until it reaches its maximum window size. If you are generating messages at a high data rate, you really want to avoid opening and closing TCP connections. Every time you open a new connection you'll be back in slow start.
If you can keep the socket open, the TCP connection will get past the slow-start state and be able to send data at a much higher rate. To do this, you need to get the server to process more than one message on a connection (which means finding a way to delineate each message). If your server supports HTTP encoding of any sort this would work; make sure to examine any argument or configuration related to "persistent" connections or HTTP 1.1, because that is how HTTP sends multiple requests over a single TCP connection. One option you mentioned is UDP. If you are generating messages at a reasonably high rate, you're likely to lose some of them due to queues being full somewhere along the way. If the messages you are sending need to be reliable, UDP is probably not a good basis. A: When coded like that, it doesn't work. The server only receives the first message. When I open and close the socket, the server works, but I run out of client ports. I'm guessing it's the design of my server that is causing me to have to code the client like that. So how does one code an async server using .NET? I have followed the MSDN examples and numerous examples online. A: Would something along the lines of a message-queuing service benefit your project? That way your client could pass as many messages as it likes to your server, and your server could simply pull those messages from the queue when it can, as fast as it can; if your client is sending more than the server can handle, they'll simply enter the queue and wait to be processed. Some quick Googling turned up this MSDN documentation on building a message-queuing service with C#. A: The basic idea behind writing TCP servers (in any language) is to have one port open to listen for connections, then create new threads or processes to handle new connection requests.
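The "delineate each message" advice above is the crux: with an explicit framing scheme, a single long-lived connection can carry an unlimited stream of messages, so the client never churns through ports into TIME_WAIT. A minimal sketch of length-prefixed framing (in Python rather than the poster's C#, purely for illustration of the idea):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length so the receiver
    # knows where one message ends and the next begins.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    # Loop until exactly n bytes arrive; recv() may return partial data.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

if __name__ == "__main__":
    # Demo: many messages over one connection -- no open/close per message,
    # so no TIME_WAIT pile-up on the client side.
    a, b = socket.socketpair()
    for i in range(3):
        send_msg(a, f"message {i}".encode())
    for i in range(3):
        assert recv_msg(b) == f"message {i}".encode()
    a.close()
    b.close()
```

The equivalent in .NET would read a fixed-size length header from the stream and then exactly that many payload bytes, in a loop, before handing each message off for processing.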
open a server socket   // this uses the port the clients know about
while(running)
    client_socket = server_socket.listen
    fork(new handler_object(client_socket))

Here's a good example in C#.
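The pseudocode above maps almost one-to-one onto real socket APIs. A runnable sketch (Python for brevity; a thread per connection stands in for the fork/handler object -- the server keeps each accepted connection open and echoes until the client disconnects):

```python
import socket
import threading

def handle_client(conn: socket.socket) -> None:
    # One worker per connection: echo data back until the client disconnects.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def serve_once(server: socket.socket) -> None:
    # Accept a single connection and hand it to a worker thread.
    # (A real server would loop on accept() forever.)
    conn, _addr = server.accept()
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    server.listen()
    threading.Thread(target=serve_once, args=(server,), daemon=True).start()

    client = socket.create_connection(server.getsockname())
    client.sendall(b"ping")
    assert client.recv(1024) == b"ping"
    client.close()
    server.close()
```

Because the handler loops on recv() instead of closing after the first read, the client can keep one socket open for its whole lifetime, which is exactly what avoids the TIME_WAIT port exhaustion in the question.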
{ "language": "en", "url": "https://stackoverflow.com/questions/158152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the potential pitfalls of using HTML Frames for templating? We could include IFrames as well. A: With standard frames/framesets: * *Bookmarking can be difficult to accomplish. *The Back button can be broken. *Arrival from search engines could be into an inner frame. *Printing won't work the same across browsers. *Scrollbars could be in non-standard/unexpected places. More here. A: I'd like to rephrase your question: what are the advantages of using HTML frames for templating? A detailed read on the subject: http://www.yourhtmlsource.com/frames/goodorbad.html Quote from that article: So, in conclusion, frames violate too many accepted web standards to be a worthy information delivery system. A well-designed navigation structure using tables or layers is infinitely preferable. Too many people use frames as a way to change their navigation bars by only modifying one page. This is much better accomplished through includes. A well-designed framed page can still be produced, but it happens so rarely that I would advise all coders to stay away from the thought. Tests have also shown that users do not like framed navigation. It might seem sexy to the coder, but if the users don't like it then you're doing a bad job. Frames confuse and irritate people. Avoid them. A: A long list of benefits and drawbacks is provided here. http://fuzzzyblog.blogspot.com/2009/09/disadvantages-of-using-frames.html
{ "language": "en", "url": "https://stackoverflow.com/questions/158155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Formatting numbers with significant figures in C# I have some decimal data that I am pushing into a SharePoint list where it is to be viewed. I'd like to restrict the number of significant figures displayed in the result data based on my knowledge of the specific calculation. Sometimes it'll be 3, so 12345 will become 12300 and 0.012345 will become 0.0123. Occasionally it will be 4 or 5. Is there any convenient way to handle this? A: This might do the trick: double Input1 = 1234567; string Result1 = Convert.ToDouble(String.Format("{0:G3}",Input1)).ToString("R0"); double Input2 = 0.012345; string Result2 = Convert.ToDouble(String.Format("{0:G3}", Input2)).ToString("R6"); Changing the G3 to G4 produces the oddest result though. It appears to round up the significant digits? A: See: RoundToSignificantFigures by "P Daddy". I've combined his method with another one I liked. Rounding to significant figures is a lot easier in TSQL where the rounding method is based on rounding position, not number of decimal places - which is the case with .Net math.round. You could round a number in TSQL to negative places, which would round at whole numbers - so the scaling isn't needed. Also see this other thread. Pyrolistical's method is good. The trailing zeros part of the problem seems like more of a string operation to me, so I included a ToString() extension method which will pad zeros if necessary. 
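Before the full implementations that follow, the core arithmetic is worth seeing in isolation: find the magnitude of the number with log10, then round relative to that position. A minimal sketch (Python for brevity, not the C# answers' code; it deliberately ignores the trailing-zero formatting that the C# versions handle):

```python
import math

def round_sig(x: float, sig: int) -> float:
    # Round x to `sig` significant figures: locate the most significant
    # digit via log10, then round relative to that position.
    if x == 0:
        return 0.0
    ndigits = sig - 1 - math.floor(math.log10(abs(x)))
    return round(x, ndigits)

# The two examples from the question, at 3 significant figures:
assert round_sig(12345, 3) == 12300
assert round_sig(0.012345, 3) == 0.0123
```

Note that a negative `ndigits` rounds at whole-number positions (tens, hundreds, ...), which is the same "rounding position" idea the TSQL comparison below refers to; presenting trailing zeros (e.g. 0.0020 for two significant figures of 0.002) is a separate string-formatting problem.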
using System;
using System.Globalization;

public static class Precision
{
    // 2^-24
    public const float FLOAT_EPSILON = 0.0000000596046448f;

    // 2^-53
    public const double DOUBLE_EPSILON = 0.00000000000000011102230246251565d;

    public static bool AlmostEquals(this double a, double b, double epsilon = DOUBLE_EPSILON)
    {
        // ReSharper disable CompareOfFloatsByEqualityOperator
        if (a == b)
        {
            return true;
        }
        // ReSharper restore CompareOfFloatsByEqualityOperator
        return (System.Math.Abs(a - b) < epsilon);
    }

    public static bool AlmostEquals(this float a, float b, float epsilon = FLOAT_EPSILON)
    {
        // ReSharper disable CompareOfFloatsByEqualityOperator
        if (a == b)
        {
            return true;
        }
        // ReSharper restore CompareOfFloatsByEqualityOperator
        return (System.Math.Abs(a - b) < epsilon);
    }
}

public static class SignificantDigits
{
    public static double Round(this double value, int significantDigits)
    {
        int unneededRoundingPosition;
        return RoundSignificantDigits(value, significantDigits, out unneededRoundingPosition);
    }

    public static string ToString(this double value, int significantDigits)
    {
        // this method will round and then append zeros if needed.
        // i.e. if you round .002 to two significant figures, the resulting number should be .0020.
        var currentInfo = CultureInfo.CurrentCulture.NumberFormat;

        if (double.IsNaN(value))
        {
            return currentInfo.NaNSymbol;
        }

        if (double.IsPositiveInfinity(value))
        {
            return currentInfo.PositiveInfinitySymbol;
        }

        if (double.IsNegativeInfinity(value))
        {
            return currentInfo.NegativeInfinitySymbol;
        }

        int roundingPosition;
        var roundedValue = RoundSignificantDigits(value, significantDigits, out roundingPosition);

        // when rounding causes a cascading round affecting digits of greater significance,
        // need to re-round to get a correct rounding position afterwards
        // this fixes a bug where rounding 9.96 to 2 figures yields 10.0 instead of 10
        RoundSignificantDigits(roundedValue, significantDigits, out roundingPosition);

        if (Math.Abs(roundingPosition) > 9)
        {
            // use exponential notation format
            // ReSharper disable FormatStringProblem
            return string.Format(currentInfo, "{0:E" + (significantDigits - 1) + "}", roundedValue);
            // ReSharper restore FormatStringProblem
        }

        // string.Format is only needed with decimal numbers (whole numbers won't need to be padded with zeros to the right.)
        // ReSharper disable FormatStringProblem
        return roundingPosition > 0
            ? string.Format(currentInfo, "{0:F" + roundingPosition + "}", roundedValue)
            : roundedValue.ToString(currentInfo);
        // ReSharper restore FormatStringProblem
    }

    private static double RoundSignificantDigits(double value, int significantDigits, out int roundingPosition)
    {
        // this method will return a rounded double value at a number of significant figures.
        // the sigFigures parameter must be between 0 and 15, exclusive.
        roundingPosition = 0;

        if (value.AlmostEquals(0d))
        {
            roundingPosition = significantDigits - 1;
            return 0d;
        }

        if (double.IsNaN(value))
        {
            return double.NaN;
        }

        if (double.IsPositiveInfinity(value))
        {
            return double.PositiveInfinity;
        }

        if (double.IsNegativeInfinity(value))
        {
            return double.NegativeInfinity;
        }

        if (significantDigits < 1 || significantDigits > 15)
        {
            throw new ArgumentOutOfRangeException("significantDigits", value, "The significantDigits argument must be between 1 and 15.");
        }

        // The resulting rounding position will be negative for rounding at whole numbers, and positive for decimal places.
        roundingPosition = significantDigits - 1 - (int)(Math.Floor(Math.Log10(Math.Abs(value))));

        // try to use a rounding position directly, if no scale is needed.
        // this is because the scale multiplication after the rounding can introduce error, although
        // this only happens when you're dealing with really tiny numbers, i.e 9.9e-14.
        if (roundingPosition > 0 && roundingPosition < 16)
        {
            return Math.Round(value, roundingPosition, MidpointRounding.AwayFromZero);
        }

        // Shouldn't get here unless we need to scale it.
        // Set the scaling value, for rounding whole numbers or decimals past 15 places
        var scale = Math.Pow(10, Math.Ceiling(Math.Log10(Math.Abs(value))));
        return Math.Round(value / scale, significantDigits, MidpointRounding.AwayFromZero) * scale;
    }
}

A: I ended up snagging some code from http://ostermiller.org/utils/SignificantFigures.java.html. It was in Java, so I did a quick search-and-replace and some ReSharper reformatting to make it build as C#. It seems to work nicely for my significant-figure needs. FWIW, I removed his Javadoc comments to make it more concise here, but the original code is documented quite nicely.
/* * Copyright (C) 2002-2007 Stephen Ostermiller * http://ostermiller.org/contact.pl?regarding=Java+Utilities * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * See COPYING.TXT for details. */ public class SignificantFigures { private String original; private StringBuilder _digits; private int mantissa = -1; private bool sign = true; private bool isZero = false; private bool useScientificNotation = true; public SignificantFigures(String number) { original = number; Parse(original); } public SignificantFigures(double number) { original = Convert.ToString(number); try { Parse(original); } catch (Exception nfe) { _digits = null; } } public bool UseScientificNotation { get { return useScientificNotation; } set { useScientificNotation = value; } } public int GetNumberSignificantFigures() { if (_digits == null) return 0; return _digits.Length; } public SignificantFigures SetLSD(int place) { SetLMSD(place, Int32.MinValue); return this; } public SignificantFigures SetLMSD(int leastPlace, int mostPlace) { if (_digits != null && leastPlace != Int32.MinValue) { int significantFigures = _digits.Length; int current = mantissa - significantFigures + 1; int newLength = significantFigures - leastPlace + current; if (newLength <= 0) { if (mostPlace == Int32.MinValue) { original = "NaN"; _digits = null; } else { newLength = mostPlace - leastPlace + 1; _digits.Length = newLength; mantissa = leastPlace; for (int i = 0; i < newLength; i++) { _digits[i] = '0'; } isZero = true; sign = true; } } else { _digits.Length = newLength; for (int i = 
significantFigures; i < newLength; i++) { _digits[i] = '0'; } } } return this; } public int GetLSD() { if (_digits == null) return Int32.MinValue; return mantissa - _digits.Length + 1; } public int GetMSD() { if (_digits == null) return Int32.MinValue; return mantissa + 1; } public override String ToString() { if (_digits == null) return original; StringBuilder digits = new StringBuilder(this._digits.ToString()); int length = digits.Length; if ((mantissa <= -4 || mantissa >= 7 || (mantissa >= length && digits[digits.Length - 1] == '0') || (isZero && mantissa != 0)) && useScientificNotation) { // use scientific notation. if (length > 1) { digits.Insert(1, '.'); } if (mantissa != 0) { digits.Append("E" + mantissa); } } else if (mantissa <= -1) { digits.Insert(0, "0."); for (int i = mantissa; i < -1; i++) { digits.Insert(2, '0'); } } else if (mantissa + 1 == length) { if (length > 1 && digits[digits.Length - 1] == '0') { digits.Append('.'); } } else if (mantissa < length) { digits.Insert(mantissa + 1, '.'); } else { for (int i = length; i <= mantissa; i++) { digits.Append('0'); } } if (!sign) { digits.Insert(0, '-'); } return digits.ToString(); } public String ToScientificNotation() { if (_digits == null) return original; StringBuilder digits = new StringBuilder(this._digits.ToString()); int length = digits.Length; if (length > 1) { digits.Insert(1, '.'); } if (mantissa != 0) { digits.Append("E" + mantissa); } if (!sign) { digits.Insert(0, '-'); } return digits.ToString(); } private const int INITIAL = 0; private const int LEADZEROS = 1; private const int MIDZEROS = 2; private const int DIGITS = 3; private const int LEADZEROSDOT = 4; private const int DIGITSDOT = 5; private const int MANTISSA = 6; private const int MANTISSADIGIT = 7; private void Parse(String number) { int length = number.Length; _digits = new StringBuilder(length); int state = INITIAL; int mantissaStart = -1; bool foundMantissaDigit = false; // sometimes we don't know if a zero will be // significant 
or not when it is encountered. // keep track of the number of them so that // the all can be made significant if we find // out that they are. int zeroCount = 0; int leadZeroCount = 0; for (int i = 0; i < length; i++) { char c = number[i]; switch (c) { case '.': { switch (state) { case INITIAL: case LEADZEROS: { state = LEADZEROSDOT; } break; case MIDZEROS: { // we now know that these zeros // are more than just trailing place holders. for (int j = 0; j < zeroCount; j++) { _digits.Append('0'); } zeroCount = 0; state = DIGITSDOT; } break; case DIGITS: { state = DIGITSDOT; } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } break; case '+': { switch (state) { case INITIAL: { sign = true; state = LEADZEROS; } break; case MANTISSA: { state = MANTISSADIGIT; } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } break; case '-': { switch (state) { case INITIAL: { sign = false; state = LEADZEROS; } break; case MANTISSA: { state = MANTISSADIGIT; } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } break; case '0': { switch (state) { case INITIAL: case LEADZEROS: { // only significant if number // is all zeros. zeroCount++; leadZeroCount++; state = LEADZEROS; } break; case MIDZEROS: case DIGITS: { // only significant if followed // by a decimal point or nonzero digit. mantissa++; zeroCount++; state = MIDZEROS; } break; case LEADZEROSDOT: { // only significant if number // is all zeros. mantissa--; zeroCount++; state = LEADZEROSDOT; } break; case DIGITSDOT: { // non-leading zeros after // a decimal point are always // significant. 
_digits.Append(c); } break; case MANTISSA: case MANTISSADIGIT: { foundMantissaDigit = true; state = MANTISSADIGIT; } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } break; case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': { switch (state) { case INITIAL: case LEADZEROS: case DIGITS: { zeroCount = 0; _digits.Append(c); mantissa++; state = DIGITS; } break; case MIDZEROS: { // we now know that these zeros // are more than just trailing place holders. for (int j = 0; j < zeroCount; j++) { _digits.Append('0'); } zeroCount = 0; _digits.Append(c); mantissa++; state = DIGITS; } break; case LEADZEROSDOT: case DIGITSDOT: { zeroCount = 0; _digits.Append(c); state = DIGITSDOT; } break; case MANTISSA: case MANTISSADIGIT: { state = MANTISSADIGIT; foundMantissaDigit = true; } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } break; case 'E': case 'e': { switch (state) { case INITIAL: case LEADZEROS: case DIGITS: case LEADZEROSDOT: case DIGITSDOT: { // record the starting point of the mantissa // so we can do a substring to get it back later mantissaStart = i + 1; state = MANTISSA; } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } break; default: { throw new Exception( "Unexpected character '" + c + "' at position " + i ); } } } if (mantissaStart != -1) { // if we had found an 'E' if (!foundMantissaDigit) { // we didn't actually find a mantissa to go with. throw new Exception( "No digits in mantissa." ); } // parse the mantissa. mantissa += Convert.ToInt32(number.Substring(mantissaStart)); } if (_digits.Length == 0) { if (zeroCount > 0) { // if nothing but zeros all zeros are significant. 
for (int j = 0; j < zeroCount; j++) { _digits.Append('0'); } mantissa += leadZeroCount; isZero = true; sign = true; } else { // a hack to catch some cases that we could catch // by adding a ton of extra states. Things like: // "e2" "+e2" "+." "." "+" etc. throw new Exception( "No digits in number." ); } } } public SignificantFigures SetNumberSignificantFigures(int significantFigures) { if (significantFigures <= 0) throw new ArgumentException("Desired number of significant figures must be positive."); if (_digits != null) { int length = _digits.Length; if (length < significantFigures) { // number is not long enough, pad it with zeros. for (int i = length; i < significantFigures; i++) { _digits.Append('0'); } } else if (length > significantFigures) { // number is too long chop some of it off with rounding. bool addOne; // we need to round up if true. char firstInSig = _digits[significantFigures]; if (firstInSig < '5') { // first non-significant digit less than five, round down. addOne = false; } else if (firstInSig == '5') { // first non-significant digit equal to five addOne = false; for (int i = significantFigures + 1; !addOne && i < length; i++) { // if its followed by any non-zero digits, round up. if (_digits[i] != '0') { addOne = true; } } if (!addOne) { // if it was not followed by non-zero digits // if the last significant digit is odd round up // if the last significant digit is even round down addOne = (_digits[significantFigures - 1] & 1) == 1; } } else { // first non-significant digit greater than five, round up. addOne = true; } // loop to add one (and carry a one if added to a nine) // to the last significant digit for (int i = significantFigures - 1; addOne && i >= 0; i--) { char digit = _digits[i]; if (digit < '9') { _digits[i] = (char) (digit + 1); addOne = false; } else { _digits[i] = '0'; } } if (addOne) { // if the number was all nines _digits.Insert(0, '1'); mantissa++; } // chop it to the correct number of figures. 
_digits.Length = significantFigures; } } return this; } public double ToDouble() { return Convert.ToDouble(original); } public static String Format(double number, int significantFigures) { SignificantFigures sf = new SignificantFigures(number); sf.SetNumberSignificantFigures(significantFigures); return sf.ToString(); } } A: I have a shorted answer to calculating significant figures of a number. Here is the code & the test results... using System; using System.Collections.Generic; namespace ConsoleApplicationRound { class Program { static void Main(string[] args) { //char cDecimal = '.'; // for English cultures char cDecimal = ','; // for German cultures List<double> l_dValue = new List<double>(); ushort usSignificants = 5; l_dValue.Add(0); l_dValue.Add(0.000640589); l_dValue.Add(-0.000640589); l_dValue.Add(-123.405009); l_dValue.Add(123.405009); l_dValue.Add(-540); l_dValue.Add(540); l_dValue.Add(-540911); l_dValue.Add(540911); l_dValue.Add(-118.2); l_dValue.Add(118.2); l_dValue.Add(-118.18); l_dValue.Add(118.18); l_dValue.Add(-118.188); l_dValue.Add(118.188); foreach (double d in l_dValue) { Console.WriteLine("d = Maths.Round('" + cDecimal + "', " + d + ", " + usSignificants + ") = " + Maths.Round( cDecimal, d, usSignificants)); } Console.Read(); } } } The Maths class used is as follows: using System; using System.Text; namespace ConsoleApplicationRound { class Maths { /// <summary> /// The word "Window" /// </summary> private static String m_strZeros = "000000000000000000000000000000000"; /// <summary> /// The minus sign /// </summary> public const char m_cDASH = '-'; /// <summary> /// Determines the number of digits before the decimal point /// </summary> /// <param name="cDecimal"> /// Language-specific decimal separator /// </param> /// <param name="strValue"> /// Value to be scrutinised /// </param> /// <returns> /// Nr. 
of digits before the decimal point /// </returns> private static ushort NrOfDigitsBeforeDecimal(char cDecimal, String strValue) { short sDecimalPosition = (short)strValue.IndexOf(cDecimal); ushort usSignificantDigits = 0; if (sDecimalPosition >= 0) { strValue = strValue.Substring(0, sDecimalPosition + 1); } for (ushort us = 0; us < strValue.Length; us++) { if (strValue[us] != m_cDASH) usSignificantDigits++; if (strValue[us] == cDecimal) { usSignificantDigits--; break; } } return usSignificantDigits; } /// <summary> /// Rounds to a fixed number of significant digits /// </summary> /// <param name="d"> /// Number to be rounded /// </param> /// <param name="usSignificants"> /// Requested significant digits /// </param> /// <returns> /// The rounded number /// </returns> public static String Round(char cDecimal, double d, ushort usSignificants) { StringBuilder value = new StringBuilder(Convert.ToString(d)); short sDecimalPosition = (short)value.ToString().IndexOf(cDecimal); ushort usAfterDecimal = 0; ushort usDigitsBeforeDecimalPoint = NrOfDigitsBeforeDecimal(cDecimal, value.ToString()); if (usDigitsBeforeDecimalPoint == 1) { usAfterDecimal = (d == 0) ? usSignificants : (ushort)(value.Length - sDecimalPosition - 2); } else { if (usSignificants >= usDigitsBeforeDecimalPoint) { usAfterDecimal = (ushort)(usSignificants - usDigitsBeforeDecimalPoint); } else { double dPower = Math.Pow(10, usDigitsBeforeDecimalPoint - usSignificants); d = dPower*(long)(d/dPower); } } double dRounded = Math.Round(d, usAfterDecimal); StringBuilder result = new StringBuilder(); result.Append(dRounded); ushort usDigits = (ushort)result.ToString().Replace( Convert.ToString(cDecimal), "").Replace( Convert.ToString(m_cDASH), "").Length; // Add lagging zeros, if necessary: if (usDigits < usSignificants) { if (usAfterDecimal != 0) { if (result.ToString().IndexOf(cDecimal) == -1) { result.Append(cDecimal); } int i = (d == 0) ? 
0 : Math.Min(0, usDigits - usSignificants); result.Append(m_strZeros.Substring(0, usAfterDecimal + i)); } } return result.ToString(); } } } Any answer with a shorter code? A: You can get an elegant bit perfect rounding by using the GetBits method on Decimal and leveraging BigInteger to perform masking. Some utils public static int CountDigits (BigInteger number) => ((int)BigInteger.Log10(number))+1; private static readonly BigInteger[] BigPowers10 = Enumerable.Range(0, 100) .Select(v => BigInteger.Pow(10, v)) .ToArray(); The main function public static decimal RoundToSignificantDigits (this decimal num, short n) { var bits = decimal.GetBits(num); var u0 = unchecked((uint)bits[0]); var u1 = unchecked((uint)bits[1]); var u2 = unchecked((uint)bits[2]); var i = new BigInteger(u0) + (new BigInteger(u1) << 32) + (new BigInteger(u2) << 64); var d = CountDigits(i); var delta = d - n; if (delta < 0) return num; var scale = BigPowers10[delta]; var div = i/scale; var rem = i%scale; var up = rem > scale/2; if (up) div += 1; var shifted = div*scale; bits[0] =unchecked((int)(uint) (shifted & BigUnitMask)); bits[1] =unchecked((int)(uint) (shifted>>32 & BigUnitMask)); bits[2] =unchecked((int)(uint) (shifted>>64 & BigUnitMask)); return new decimal(bits); } test case 0 public void RoundToSignificantDigits() { WMath.RoundToSignificantDigits(0.0012345m, 2).Should().Be(0.0012m); WMath.RoundToSignificantDigits(0.0012645m, 2).Should().Be(0.0013m); WMath.RoundToSignificantDigits(0.040000000000000008, 6).Should().Be(0.04); WMath.RoundToSignificantDigits(0.040000010000000008, 6).Should().Be(0.04); WMath.RoundToSignificantDigits(0.040000100000000008, 6).Should().Be(0.0400001); WMath.RoundToSignificantDigits(0.040000110000000008, 6).Should().Be(0.0400001); WMath.RoundToSignificantDigits(0.20000000000000004, 6).Should().Be(0.2); WMath.RoundToSignificantDigits(0.10000000000000002, 6).Should().Be(0.1); WMath.RoundToSignificantDigits(0.0, 6).Should().Be(0.0); } test case 1 public void 
RoundToSigFigShouldWork()
{
    1.2m.RoundToSignificantDigits(1).Should().Be(1m);
    0.01235668m.RoundToSignificantDigits(3).Should().Be(0.0124m);
    0.01m.RoundToSignificantDigits(3).Should().Be(0.01m);
    1.23456789123456789123456789m.RoundToSignificantDigits(4)
        .Should().Be(1.235m);
    1.23456789123456789123456789m.RoundToSignificantDigits(16)
        .Should().Be(1.234567891234568m);
    1.23456789123456789123456789m.RoundToSignificantDigits(24)
        .Should().Be(1.23456789123456789123457m);
    1.23456789123456789123456789m.RoundToSignificantDigits(27)
        .Should().Be(1.23456789123456789123456789m);
}
A: I found this article doing a quick search on it. Basically it converts the number to a string and walks the characters in that array one at a time, until it reaches the maximum significance. Will this work?
A: The following code doesn't quite meet the spec, since it doesn't try to round anything to the left of the decimal point. But it's simpler than anything else presented here (so far). I was quite surprised that C# doesn't have a built-in method to handle this.
static public string SignificantDigits(double d, int digits = 10)
{
    int magnitude = (d == 0.0) ? 0 : (int)Math.Floor(Math.Log10(Math.Abs(d))) + 1;
    digits -= magnitude;
    if (digits < 0)
        digits = 0;
    string fmt = "f" + digits.ToString();
    return d.ToString(fmt);
}
A: This method is dead simple and works with any number, positive or negative, and only uses a single transcendental function (Log10). The only difference (which may or may not matter) is that it will not round the integer component. This is perfect, however, for currency processing where you know the limits are within certain bounds, because you can use doubles for much faster processing than the dreadfully slow Decimal type.
public static double ToDecimal(this double x, int significantFigures = 15)
{
    // determine # of digits before & after the decimal
    // (Abs/Log10/Ceil/Max/Round here are assumed to be extension-method
    // wrappers over System.Math; they are not part of the BCL)
    int digitsBeforeDecimal = (int)x.Abs().Log10().Ceil().Max(0),
        digitsAfterDecimal = (significantFigures - digitsBeforeDecimal).Max(0);

    // round it off
    return x.Round(digitsAfterDecimal);
}
A: As I remember it, "significant figures" means the number of digits after the dot separator, so 3 significant digits for 0.012345 would be 0.012 and not 0.0123, but that really doesn't matter for the solution. I also understand that you want to "nullify" the last digits to a certain degree if the number is > 1. You write that 12345 would become 12300, but I'm not sure whether you want 123456 to become 1230000 or 123400? My solution does the latter. Instead of calculating the factor you could of course make a small initialized array if you only have a couple of variations.
private static string FormatToSignificantFigures(decimal number, int amount)
{
    if (number > 1)
    {
        int factor = Factor(amount);
        return ((int)(number / factor) * factor).ToString();
    }
    NumberFormatInfo nfi = new CultureInfo("en-US", false).NumberFormat;
    nfi.NumberDecimalDigits = amount;
    return number.ToString("F", nfi);
}

private static int Factor(int x)
{
    return DoCalcFactor(10, x - 1);
}

private static int DoCalcFactor(int x, int y)
{
    if (y == 1) return x;
    return 10 * DoCalcFactor(x, y - 1);
}
Kind regards Carsten
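For comparison, rounding to significant figures is a one-liner in languages with an arbitrary-precision decimal type. Here is a sketch using Java's BigDecimal with MathContext, which counts significant digits rather than decimal places; the class and method names are my own:

```java
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class SigFigs {
    // Round to n significant figures; MathContext counts significant digits,
    // so it handles both sides of the decimal point uniformly.
    public static double round(double value, int figures) {
        if (value == 0.0) return 0.0;                 // zero has no defined magnitude
        return new BigDecimal(Double.toString(value))
                .round(new MathContext(figures, RoundingMode.HALF_UP))
                .doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(round(0.012345, 3));       // 0.0123
        System.out.println(round(12345.0, 3));        // 12300.0
    }
}
```

Going through `Double.toString` keeps the shortest decimal representation of the double, which avoids rounding on binary-representation noise such as 0.040000000000000008.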
Q: Why would you ever implement finalize()? I've been reading through a lot of the rookie Java questions on finalize() and find it kind of bewildering that no one has really made it plain that finalize() is an unreliable way to clean up resources. I saw someone comment that they use it to clean up Connections, which is really scary, since the only way to come as close to a guarantee that a Connection is closed is to implement try (catch) finally. I was not schooled in CS, but I have been programming in Java professionally for close to a decade now and I have never seen anyone implement finalize() in a production system ever. This still doesn't mean that it doesn't have its uses, or that people I've worked with have been doing it right. So my question is, what use cases are there for implementing finalize() that cannot be handled more reliably via another process or syntax within the language? Please provide specific scenarios or your experience; simply repeating a Java textbook or finalize's intended use is not enough, as that is not the intent of this question. A: You shouldn't depend on finalize() to clean up your resources for you. finalize() won't run until the object is garbage collected, if then. It's much better to explicitly free resources when you're done using them. A: A simple rule: never use finalizers. The fact alone that an object has a finalizer (regardless of what code it executes) is enough to cause considerable overhead for garbage collection. From an article by Brian Goetz: Objects with finalizers (those that have a non-trivial finalize() method) have significant overhead compared to objects without finalizers, and should be used sparingly. Finalizeable objects are both slower to allocate and slower to collect. At allocation time, the JVM must register any finalizeable objects with the garbage collector, and (at least in the HotSpot JVM implementation) finalizeable objects must follow a slower allocation path than most other objects.
Similarly, finalizeable objects are slower to collect, too. It takes at least two garbage collection cycles (in the best case) before a finalizeable object can be reclaimed, and the garbage collector has to do extra work to invoke the finalizer. The result is more time spent allocating and collecting objects and more pressure on the garbage collector, because the memory used by unreachable finalizeable objects is retained longer. Combine that with the fact that finalizers are not guaranteed to run in any predictable timeframe, or even at all, and you can see that there are relatively few situations for which finalization is the right tool to use. A: The only time I've used finalize in production code was to implement a check that a given object's resources had been cleaned up, and if not, then log a very vocal message. It didn't actually try and do it itself, it just shouted a lot if it wasn't done properly. Turned out to be quite useful. A: Be careful about what you do in a finalize(). Especially if you are using it for things like calling close() to ensure that resources are cleaned up. We ran into several situations where we had JNI libraries linked in to the running java code, and in any circumstances where we used finalize() to invoke JNI methods, we would get very bad java heap corruption. The corruption was not caused by the underlying JNI code itself, all of the memory traces were fine in the native libraries. It was just the fact that we were calling JNI methods from the finalize() at all. This was with a JDK 1.5 which is still in widespread use. We wouldn't find out that something went wrong until much later, but in the end the culprit was always the finalize() method making use of JNI calls. A: To highlight a point in the above answers: finalizers will be executed on the lone GC thread. I have heard of a major Sun demo where the developers added a small sleep to some finalizers and intentionally brought an otherwise fancy 3D demo to its knees. 
Best to avoid, with possible exception of test-env diagnostics. Eckel's Thinking in Java has a good section on this. A: Hmmm, I once used it to clean up objects that weren't being returned to an existing pool. They were passed around a lot, so it was impossible to tell when they could safely be returned to the pool. The problem was that it introduced a huge penalty during garbage collection that was far greater than any savings from pooling the objects. It was in production for about a month before I ripped out the whole pool, made everything dynamic and was done with it. A: I've been doing Java professionally since 1998, and I've never implemented finalize(). Not once. A: The accepted answer is good, I just wanted to add that there is now a way to have the functionality of finalize without actually using it at all. Look at the "Reference" classes. Weak reference, Phantom Reference & Soft Reference. You can use them to keep a reference to all your objects, but this reference ALONE will not stop GC. The neat thing about this is you can have it call a method when it will be deleted, and this method can be guaranteed to be called. As for finalize: I used finalize once to understand what objects were being freed. You can play some neat games with statics, reference counting and such--but it was only for analysis, but watch out for code like this (not just in finalize, but that's where you are most likely to see it): public void finalize() { ref1 = null; ref2 = null; othercrap = null; } It is a sign that somebody didn't know what they were doing. "Cleaning up" like this is virtually never needed. When the class is GC'd, this is done automatically. If you find code like that in a finalize it's guaranteed that the person who wrote it was confused. If it's elsewhere, it could be that the code is a valid patch to a bad model (a class stays around for a long time and for some reason things it referenced had to be manually freed before the object is GC'd). 
Generally it's because someone forgot to remove a listener or something and can't figure out why their object isn't being GC'd so they just delete things it refers to and shrug their shoulders and walk away. It should never be used to clean things up "Quicker". A: When writing code that will be used by other developers that requires some sort of "cleanup" method to be called to free up resources. Sometimes those other developers forget to call your cleanup (or close, or destroy, or whatever) method. To avoid possible resource leaks you can check in the finalize method to ensure that the method was called and if it wasn't you can call it yourself. Many database drivers do this in their Statement and Connection implementations to provide a little safety against developers who forget to call close on them. A: I'm not sure what you can make of this, but... itsadok@laptop ~/jdk1.6.0_02/src/ $ find . -name "*.java" | xargs grep "void finalize()" | wc -l 41 So I guess the Sun found some cases where (they think) it should be used. A: You could use it as a backstop for an object holding an external resource (socket, file, etc). Implement a close() method and document that it needs to be called. Implement finalize() to do the close() processing if you detect it hasn't been done. Maybe with something dumped to stderr to point out that you're cleaning up after a buggy caller. It provides extra safety in an exceptional/buggy situation. Not every caller is going to do the correct try {} finally {} stuff every time. Unfortunate, but true in most environments. I agree that it's rarely needed. And as commenters point out, it comes with GC overhead. Only use if you need that "belt and suspenders" safety in a long-running app. I see that as of Java 9, Object.finalize() is deprecated! They point us to java.lang.ref.Cleaner and java.lang.ref.PhantomReference as alternatives. 
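Since Java 9, the recommended replacement for this "backstop" pattern is java.lang.ref.Cleaner. A minimal sketch (the Resource/State names are illustrative): the cleanup state must not capture the tracked object itself, or the object could never become unreachable, and clean() runs the action at most once whether it is triggered by close() or by collection.

```java
import java.lang.ref.Cleaner;

public class Resource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();
    static int released = 0;                      // demo bookkeeping only

    // Holds only the state needed for cleanup; deliberately static and
    // holding no reference back to the Resource instance.
    private static final class State implements Runnable {
        @Override public void run() { released++; }   // invoked at most once
    }

    private final Cleaner.Cleanable cleanable = CLEANER.register(this, new State());

    @Override public void close() {
        cleanable.clean();                        // explicit release; the Cleaner is only a backstop
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) { }
        System.out.println(released);             // 1
    }
}
```

The caller who remembers to call close() (or uses try-with-resources) gets deterministic cleanup; the Cleaner covers the buggy caller who forgets, just as the finalize() backstop described above did, but without resurrection risk.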
A: class MyObject { Test main; public MyObject(Test t) { main = t; } protected void finalize() { main.ref = this; // let instance become reachable again System.out.println("This is finalize"); //test finalize run only once } } class Test { MyObject ref; public static void main(String[] args) { Test test = new Test(); test.ref = new MyObject(test); test.ref = null; //MyObject become unreachable,finalize will be invoked System.gc(); if (test.ref != null) System.out.println("MyObject still alive!"); } } ==================================== result: This is finalize MyObject still alive! ===================================== So you may make an unreachable instance reachable in finalize method. A: Edit: Okay, it really doesn't work. I implemented it and thought if it fails sometimes that's ok for me but it did not even call the finalize method a single time. I am not a professional programmer but in my program I have a case that I think to be an example of a good case of using finalize(), that is a cache that writes its content to disk before it is destroyed. Because it is not necessary that it is executed every time on destruction, it does only speed up my program, I hope that it i didn't do it wrong. @Override public void finalize() { try {saveCache();} catch (Exception e) {e.printStackTrace();} } public void saveCache() throws FileNotFoundException, IOException { ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("temp/cache.tmp")); out.writeObject(cache); } A: It can be handy to remove things that have been added to a global/static place (out of need), and need to be removed when the object is removed. 
For instance:
private void addGlobalClickListener() {
    weakAwtEventListener = new WeakAWTEventListener(this);
    Toolkit.getDefaultToolkit().addAWTEventListener(weakAwtEventListener, AWTEvent.MOUSE_EVENT_MASK);
}

@Override
protected void finalize() throws Throwable {
    super.finalize();
    if (weakAwtEventListener != null) {
        Toolkit.getDefaultToolkit().removeAWTEventListener(weakAwtEventListener);
    }
}
A: finalize() is a hint to the JVM that it might be nice to execute your code at an unspecified time. This is good when you want code to mysteriously fail to run. Doing anything significant in finalizers (basically anything except logging) is also good in three situations:
* you want to gamble that other finalized objects will still be in a state that the rest of your program considers valid.
* you want to add lots of checking code to all the methods of all your classes that have a finalizer, to make sure they behave correctly after finalization.
* you want to accidentally resurrect finalized objects, and spend a lot of time trying to figure out why they don't work, and/or why they don't get finalized when they are eventually released.
If you think you need finalize(), sometimes what you really want is a phantom reference (which in the example given could hold a hard reference to a connection used by its referent, and close it after the phantom reference has been queued). This also has the property that it may mysteriously never run, but at least it can't call methods on or resurrect finalized objects. So it's just right for situations where you don't absolutely need to close that connection cleanly, but you'd quite like to, and the clients of your class can't or won't call close themselves (which is actually fair enough - what's the point of having a garbage collector at all if you design interfaces that require a specific action be taken prior to collection? That just puts us back in the days of malloc/free.)
Other times you need the resource you think you're managing to be more robust. For example, why do you need to close that connection? It must ultimately be based on some kind of I/O provided by the system (socket, file, whatever), so why can't you rely on the system to close it for you when the lowest level of resource is gced? If the server at the other end absolutely requires you to close the connection cleanly rather than just dropping the socket, then what's going to happen when someone trips over the power cable of the machine your code is running on, or the intervening network goes out? Disclaimer: I've worked on a JVM implementation in the past. I hate finalizers. A: finalize() can be useful to catch resource leaks. If the resource should be closed but is not write the fact that it wasn't closed to a log file and close it. That way you remove the resource leak and give yourself a way to know that it has happened so you can fix it. I have been programming in Java since 1.0 alpha 3 (1995) and I have yet to override finalize for anything... A: The accepted answer lists that closing a resource during finalize can be done. However this answer shows that at least in java8 with the JIT compiler, you run into unexpected issues where sometimes the finalizer is called even before you finish reading from a stream maintained by your object. So even in that situation calling finalize would not be recommended. A: iirc - you can use finalize method as a means of implementing a pooling mechanism for expensive resources - so they don't get GC's too. 
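The phantom-reference pattern mentioned above can be sketched without depending on GC timing by enqueueing the reference manually; in real code the collector does the enqueueing once the referent is unreachable.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object connection = new Object();          // stands in for some external resource
        PhantomReference<Object> ref = new PhantomReference<>(connection, queue);

        // Normally you would drop 'connection' and let the GC enqueue 'ref';
        // here we enqueue it by hand so the demo is deterministic.
        ref.enqueue();

        // Seeing the reference on the queue is the signal that the referent
        // is (notionally) gone and its underlying resource can be released.
        System.out.println(queue.poll() == ref);   // true
    }
}
```

A real cleanup loop would poll (or remove() with a timeout) on a background thread and release the resource associated with each dequeued reference.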
A: Personally, I almost never used finalize() except in one rare circumstance: I made a custom generic-type collection, and I wrote a custom finalize() method that does the following:
public void finalize() throws Throwable {
    super.finalize();
    if (destructiveFinalize) {
        T item;
        for (int i = 0, l = length(); i < l; i++) {
            item = get(i);
            if (item == null) {
                continue;
            }
            if (item instanceof Window) {
                ((Window) get(i)).dispose();
            }
            if (item instanceof CompleteObject) {
                ((CompleteObject) get(i)).finalize();
            }
            set(i, null);
        }
    }
}
(CompleteObject is an interface I made that lets you specify that you've implemented rarely-implemented Object methods like #finalize(), #hashCode(), and #clone())
So, using a sister #setDestructivelyFinalizes(boolean) method, the program using my collection can (help) guarantee that destroying a reference to this collection also destroys references to its contents and disposes any windows that might keep the JVM alive unintentionally. I considered also stopping any threads, but that opened a whole new can of worms.
A: Resources (File, Socket, Stream, etc.) need to be closed once we are done with them. They generally have a close() method, which we usually call in the finally section of try-catch statements. Sometimes finalize() is also used by a few developers, but IMO that is not a suitable way, as there is no guarantee that finalize will always be called. In Java 7 we got the try-with-resources statement, which can be used like:
try (BufferedReader br = new BufferedReader(new FileReader(path))) {
    // Processing and other logic here.
} catch (Exception e) {
    // log exception
} finally {
    // Just in case we need to do some stuff here.
}
In the above example try-with-resources will automatically close the BufferedReader by invoking its close() method. If we want, we can also implement Closeable in our own classes and use them the same way. IMO it seems more neat and simple to understand.
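Implementing Closeable/AutoCloseable in your own classes, as suggested above, only takes one method; a minimal sketch (the Tracker name and counter are illustrative):

```java
public class Tracker implements AutoCloseable {
    static int open = 0;                         // demo bookkeeping

    public Tracker() { open++; }

    @Override
    public void close() { open--; }              // called automatically by try-with-resources

    public static void main(String[] args) {
        try (Tracker t = new Tracker()) {
            System.out.println(open);            // 1
        }                                        // close() runs here, even on exception
        System.out.println(open);                // 0
    }
}
```

Because close() runs on every exit path from the try block, this gives the deterministic cleanup that finalize() cannot guarantee.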
A: As a side note: An object that overrides finalize() is treated specially by the garbage collector. Usually, an object is immediately destroyed during the collection cycle after the object is no longer in scope. However, finalizable objects are instead moved to a queue, where separate finalization threads will drain the queue and run the finalize() method on each object. Once the finalize() method terminates, the object will at last be ready for garbage collection in the next cycle. Source: finalize() deprecated on java-9
Q: How do you do relative positioning in WPF? How can you relatively position elements in WPF? The standard model is to use layout managers for everything, but what if you want to position elements (on a Canvas, for example) simply based on the position of other elements? For example, you may want one element (say a button) to be attached to the side of another (perhaps a panel) independent of the position or layout of that panel. Anyone that's worked with engineering tools (SolidWorks, AutoCAD, etc.) is familiar with this sort of relative positioning. Forcing everything into layout managers (the different WPF Panels) does not make much sense for certain scenarios, where you don't care that elements are maintained by some parent container and you do not want the other children to be affected by a change in the layout/appearance of each other. Does WPF support this relative positioning model in any way?
A: Instead of putting (as in your example) a button directly on the canvas, you could put a StackPanel on the canvas, horizontally oriented, and put the two buttons in there. Like so:
<Canvas>
    <StackPanel Canvas.Left="100" Canvas.Top="100" Orientation="Horizontal">
        <Button>Button 1</Button>
        <Button>Button 2</Button>
    </StackPanel>
</Canvas>
I think that it's quite flexible when you use more than one layout in a form, and you can create pretty much any configuration you want.
A: Good question. As far as I know, we need a different custom panel to get this feature. Since WPF is based on a visual hierarchy, there is no way to have this sort of flat structure for the elements in the platform. But here is a trick to do this: place your elements in the same position and give them a relative displacement by using RenderTransform.TranslateTransform. This way your TranslateTransform's X and Y will always be relative to the other element.
Q: How to trunc a date to seconds in Oracle This page mentions how to trunc a timestamp to minutes/hours/etc. in Oracle. How would you trunc a timestamp to seconds in the same manner?
A: I am sorry, but all my predecessors seem to be wrong.
select cast(systimestamp as date) from dual
..does not truncate, but rounds to the next second instead. I use a function:
CREATE OR REPLACE FUNCTION TRUNC_TS(TS IN TIMESTAMP) RETURN DATE AS
BEGIN
    RETURN TS;
END;
For example:
SELECT systimestamp
      ,trunc_ts(systimestamp) date_trunc
      ,CAST(systimestamp AS DATE) date_cast
  FROM dual;
Returns:
SYSTIMESTAMP                      DATE_TRUNC           DATE_CAST
21.01.10 15:03:34,567350 +01:00   21.01.2010 15:03:34  21.01.2010 15:03:35
A: Since the precision of DATE is to the second (and no fractions of seconds), there is no need to TRUNC at all. The data type TIMESTAMP allows for fractions of seconds. If you convert it to a DATE the fractional seconds will be removed - e.g.
select cast(systimestamp as date) from dual;
A: On the general topic of truncating Oracle dates, here's the documentation link for the format models that can be used in the date TRUNC() and ROUND() functions: http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/functions242.htm#sthref2718 "Seconds" is not listed because the granularity of the DATE datatype is seconds.
A: I used a function like this (note it needs a RETURN clause to compile):
FUNCTION trunc_sec(p_ts IN TIMESTAMP) RETURN TIMESTAMP IS
BEGIN
    RETURN TO_TIMESTAMP(TO_CHAR(p_ts, 'YYYYMMDDHH24MI'), 'YYYYMMDDHH24MI');
END trunc_sec;
A: TRUNC works down to minutes only; cast to a date, use
to_char(START_TIME, 'YYYYMMDDHH24MISS')
or simply
select to_char(current_timestamp, 'YYYYMMDDHH24MISS') from dual;
https://www.techonthenet.com/oracle/functions/trunc_date.php
A: To truncate a timestamp to seconds you can cast it to a date:
CAST(timestamp AS DATE)
To then perform the TRUNCs in the article:
TRUNC(CAST(timestamp AS DATE), 'YEAR')
A: Something on the order of:
select to_char(current_timestamp, 'SS') from dual;
Q: How do I set up a mock queue using mockrunner to test an xml filter? I'm using the mockrunner package from http://mockrunner.sourceforge.net/ to set up a mock queue for JUnit testing an XML filter, which operates like this:
* sets recognized properties for an ftp server to put and get xml input and a jms queue server that keeps track of jobs. Remotely there waits a server that actually parses the xml once a queue message is received.
* creates a remote directory using ftp and starts a queue connection using mqconnectionfactory to the given address of the queue server.
* once the new queue entry is made in 2), the filter waits for a new queue message to appear signifying the job has been completed by the remote server. The filter then grabs the modified xml file from the ftp and passes it along to the next filter.
The JUnit test I am working on simply needs to emulate this environment by starting a local ftp and mock queue server for the filter to connect to, then waiting for the filter to connect to the queue and put the new xml input file on a local directory via a local ftp server, wait for the queue message and then modify the xml input slightly, put the modified xml in a new directory and post another message to the queue signifying the job has completed.
All of the tutorials I have found on the net have used EJB and JNDI to look up the queue server once it has been made. If possible, I'd like to sidestep that route by just creating a mock queue on my local machine and connecting to it in the simplest manner possible, not using EJB and JNDI.
Thanks in advance!
A: I'm using MockEjb and there are some examples among them, one for using mock queues, so take a look at the info and the example. Hopefully it helps.
A: I'd recommend having a look at using Apache Camel to create your test case.
Then it's really easy to switch your test case to any of the available components, and most importantly Camel comes with some really handy Mock Endpoints which make it easy to test complex routing logic, particularly with asynchronous operations. If you also use Spring, then maybe start by trying out these Spring unit tests with mock endpoints in Camel, which let you inject the mock endpoints to perform assertions on, together with the ProducerTemplate object, to make it really easy to fire your messages for your test case; e.g. see the last example on that page. Start off using simple endpoints like the SEDA endpoint - then when you've got your head around the core spring/mock framework, try using the JMS or FTP endpoints, etc.
Q: Bit Twiddle to perform this conversion Curious if anyone might have some insight into how I would do the following to a binary number:
convert 01+0 -> 10+1 (+ as in regular expressions, one or more)
01 -> 10
10 -> 01
so,
10101000010100011100
01010100101010100010
and to clarify that this isn't a simple inversion:
000000100000000000
000001010000000000
I was thinking regex, but I'm working with binary numbers and want to stay that way. The bit twiddling hacks page hasn't given me any insight either. This clearly has some essence of cellular automata. So, anyone have a few bit operations that can take care of this? (no code is necessary, I know how to do that).
A: Let's say x is your variable. Then you'd have:
unsigned myBitOperation(unsigned x)
{
    return ((x << 1) | (x >> 1)) & (~x);
}
A: Twiddle in C/C++ is ~
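The accepted one-liner can be checked against the question's own examples. Note that the left shift can carry one bit past the original bit width, so this sketch (in Java, where `>>>` gives the logical shift of C's `unsigned >>`) masks the result back to the example's field width; the mask parameter is my own addition.

```java
public class BitTwiddle {
    // Every neighbour of a set bit becomes set, and the set bits themselves
    // clear - exactly the 01 -> 10 / 10 -> 01 boundary behaviour asked for.
    static int transform(int x, int widthMask) {
        return (((x << 1) | (x >>> 1)) & ~x) & widthMask;
    }

    public static void main(String[] args) {
        int in  = 0b1010_1000_0101_0001_1100;               // 20-bit example input
        int out = transform(in, 0xFFFFF);                   // mask to the 20-bit field
        System.out.println(Integer.toBinaryString(out));    // 1010100101010100010
    }
}
```

Without the mask, the leading `10` of the sample would grow a 21st bit instead of producing the `01` shown in the question's expected output.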
{ "language": "en", "url": "https://stackoverflow.com/questions/158209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Project dependencies across multiple Visual Studio versions I have 3 .net projects. Project1.dll is generated by a VS2008 project. Project2.dll is generated by a VS2005 project that references Project1.dll. Project3.dll is generated by a VS2008 project that references both Project1.dll and Project2.dll. Right now, I build Project1.dll, and manually copy it to the place where Project 2 can pick it up. Then I build Project2.dll and manually copy it and Project1.dll to the place where Project 3 can pick them up. Obviously I'm doing something wrong (manual). What is the correct way to keep my projects' references up to date? Updating Project2 to VS2008 and then creating one solution containing all 3 projects is not an option at this time. We have a 3rd party Visual Studio plugin that does not yet work in VS2008. Project2 must stay in VS2005. De-updating Project1 and Project3 to VS2005 and then creating one solution is not an option either. We're relying on C# 3.0 and .net 3.5 features in those projects. A: Probably the best option would be to have a common build folder for all three projects. This can be done in the Project Properties -> Build -> Output path. Then point the references to the output folder. That way anytime you build any of the lower projects, the higher projects would have the latest versions. You can set the path per configuration (Debug, Release) as well, so you won't need to change that for each type of build. A: How about a pre-build event for Project3 that uses a batch file to build Project1, copy it to the Project2 folder, then build Project2 and copy it to the Project3 folder. A: I would recommend sharing the csproj/vbproj files between the solutions. The format of the project files is compatible between the two versions of studio (solution files are not, however), and as long as your VS2008 projects are targeting the 2.0 runtime you should have no trouble compiling them. 
This will allow you to reference the projects, which will take care of dependencies. The only place where this gets hairy is if you have a web project that needs to work between the two versions of studio. In that case there are some modifications to the project files which will point to the correct MSBuild target files. A: We use a build script that handles the dependencies, builds the DLLs and does what you're doing manually. A: A trick I have used in the past is to move everything to 2008. Then I setup a special solution in 2005 for project two and use it to work with the addin. Getting this to work just depends on how bad project two behaves in 2008.
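The common-build-folder answer above corresponds to a small edit in each project file; here is a hedged sketch (the relative path is a hypothetical example, not taken from the question — adjust it to your layout):

```xml
<!-- In each .csproj/.vbproj, point every configuration at one shared folder.
     References between the projects can then use a HintPath into it. -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <OutputPath>..\..\build\Debug\</OutputPath>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <OutputPath>..\..\build\Release\</OutputPath>
</PropertyGroup>
```

This is what the "Project Properties -> Build -> Output path" dialog writes for you; the per-configuration conditions mean Debug and Release builds never clobber each other.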
{ "language": "en", "url": "https://stackoverflow.com/questions/158218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What would cause the current directory of an executing app to change? I have a C# application that includes the following code: string file = "relativePath.txt"; //Time elapses... string contents = File.ReadAllText(file); This works fine, most of the time. The file is read relative to the directory that the app was started from. However, in testing, it has been found that if left alone for about 5 hours, the app will throw a FileNotFoundException saying that "C:\Documents and Settings\Adminstrator\relativePath.txt" could not be found. If the action that reads the file is run right away though, the file is read from the proper location, which we'll call "C:\foo\relativePath.txt" What gives? And, what is the best fix? Resolving the file against Assembly.GetEntryAssembly().Location? A: One spooky place that can change your path is the OpenFileDialog. As a user navigates between folders it's changing your application directory to the one currently being looked at. If the user closes the dialog in a different directory then you will be stuck in that directory. It has a property called RestoreDirectory which causes the dialog to reset the path. But I believe the default is "false". A: If the file is always in a path relative to the executable assembly, then yes, use Assembly.Location. I mostly use Assembly.GetExecutingAssembly if applicable though instead of Assembly.GetEntryAssembly. This means that if you're accessing the file from a DLL, the path will be relative to the DLL path. A: I think the lesson should be don't rely on relative paths, they are prone to error. 
The current directory can be changed by any number of things in your running process like file dialogs (though there is a property to prevent them changing it), so you can never really guarantee where a relative path will lead at all times unless you use the relative path to generate a fixed one from a known path like Application.StartupPath (though beware when launching from Visual Studio) or some other known path. Using relative paths will make your code difficult to maintain as a change in a totally unrelated part of your project could cause another part to fail. A: In System.Environment, you have the SpecialFolder enum, which will help you get standard relative paths. This way at least, the path is gotten internally and handed back to you, so hopefully if the system is changing the path somehow, the code will just handle it. A: if you do something like > cd c:\folder 1 c:\folder 1 > ../folder 2/theApplication.exe The current working directory of the application will be c:\folder 1 . Here is an example program using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; namespace CWD { class Program { static void Main (string[] args) { Console.WriteLine(Application.StartupPath); } } } Build this in Visual Studio then open a command prompt in the debug/bin directory and do bin/debug > CWD.exe then do bin/debug > cd ../../ > bin/debug/CWD.exe you will see the difference in the startup path. In relation to the original question... "if left alone for about 5 hours, the app will throw a FileNotFoundException" Once the application is running, only moving or removing that file from the expected location should cause this error. greg A: If you use an openfiledialog and the remember path property (not sure about the exact name) is true then it will change your current directory I think.
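The hazard the answers describe is language-agnostic; here is a small Python sketch of the same failure mode (a stand-in for the C# app, not the asker's code):

```python
import os
import pathlib
import tempfile

# A relative name resolves against the process-wide current directory,
# which any component (e.g. a file dialog) may silently change.
start_dir = tempfile.mkdtemp()
other_dir = tempfile.mkdtemp()
pathlib.Path(start_dir, "relativePath.txt").write_text("contents")

os.chdir(start_dir)
assert pathlib.Path("relativePath.txt").exists()      # found: cwd is start_dir

os.chdir(other_dir)                                   # something moved the cwd
assert not pathlib.Path("relativePath.txt").exists()  # now "file not found"

# The fix the answers suggest: anchor the path to a known location once.
anchored = pathlib.Path(start_dir) / "relativePath.txt"
assert anchored.read_text() == "contents"
```

Anchoring against a known base (the assembly location in the C# case) makes the lookup immune to later chdir calls.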
{ "language": "en", "url": "https://stackoverflow.com/questions/158219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you compile OpenSSL for x64? After following the instructions in INSTALL.W64 I have two problems: * *The code is still written to the "out32" folder. I need to be able to link to both 32-bit and 64-bit versions of the library on my workstation, so I don't want the 64-bit versions to clobber the 32-bit libs. *The output is still 32-bit! This means that I get "unresolved external symbol" errors when trying to link to the libraries from an x64 app. A: I solved the problem this way, using the 1.0.1c source: Add this block to util/pl/VC-32.pl, just before the $o='\\'; line. if ($debug) { $ssl .= 'd'; $crypto .= 'd'; } Add this block to util/pl/VC-32.pl, just before the if ($debug) line. if ($FLAVOR =~ /WIN64/) { $out_def =~ s/32/64/; $tmp_def =~ s/32/64/; $inc_def =~ s/32/64/; } Then build all varieties: setenv /x86 /release perl Configure VC-WIN32 --prefix=build -DUNICODE -D_UNICODE ms\do_ms nmake -f ms\ntdll.mak setenv /x64 /release perl Configure VC-WIN64A --prefix=build ms\do_win64a.bat nmake -f ms\ntdll.mak setenv /x86 /debug perl Configure debug-VC-WIN32 --prefix=build -DUNICODE -D_UNICODE ms\do_ms move /y ms\libeay32.def ms\libeay32d.def move /y ms\ssleay32.def ms\ssleay32d.def nmake -f ms\ntdll.mak setenv /x64 /debug perl Configure debug-VC-WIN64A --prefix=build ms\do_win64a.bat move /y ms\libeay32.def ms\libeay32d.def move /y ms\ssleay32.def ms\ssleay32d.def nmake -f ms\ntdll.mak A: Use Conan. It is very simple to install and use. You can request the files ready for use, for example for Linux x64 or for use with Visual Studio 2012. Here is a sample instruction: conan install OpenSSL/1.0.2g@lasote/stable -s arch="x86_64" -s build_type="Debug" -s compiler="gcc" -s compiler.version="5.3" -s os="Linux" -o 386="False" -o no_asm="False" -o no_rsa="False" -o no_cast="False" -o no_hmac="False" -o no_sse2="False" -o no_zlib="False" ... 
A: To compile the static libraries (both release and debug), this is what you need to do: * *Install Perl - www.activestate.com *Run the "Visual Studio 2008 x64 Cross Tools Command Prompt" (Note: The regular command prompt WILL NOT WORK.) *Configure with perl Configure VC-WIN64A no-shared no-idea *Run: ms\do_win64a *EDIT ms\nt.mak and change "32" to "64" in the output dirs: # The output directory for everything intersting OUT_D=out64.dbg # The output directory for all the temporary muck TMP_D=tmp64.dbg # The output directory for the header files INC_D=inc64 INCO_D=inc64\openssl *EDIT ms\nt.mak and remove bufferoverflowu.lib from EX_LIBS if you get an error about it. *Run: nmake -f ms\nt.mak *EDIT the ms\do_win64a file and ADD "debug" to all lines, except the "ml64" and the last two lines *Run: ms\do_win64a *Repeat steps 4 and 5 *EDIT the ms\nt.mak file and ADD /Zi to the CFLAG list! *Run: nmake -f ms\nt.mak A: According to the official documentation: "You may be surprised: the 64bit artefacts are indeed output in the out32* sub-directories and bear names ending *32.dll. Fact is the 64 bit compile target is so far an incremental change over the legacy 32bit windows target. Numerous compile flags are still labelled "32" although those do apply to both 32 and 64bit targets." So the first answer is no longer necessary. Instructions can be found here: https://wiki.openssl.org/index.php/Compilation_and_Installation#W64 A: At the time of writing this how-to the most recent version of OpenSSL is 1.1.1a. Environment: * *Windows 10 *MS Visual Studio 2017 Prerequisites: * *Install ActivePerl - Community edition is fine *Install NASM Make sure both Perl and NASM are in PATH environment variable. Compiling x64: * *Open x64 Native Tools Command Prompt *perl Configure VC-WIN64A --prefix=e:\projects\bin\OpenSSL\vc-win64a --openssldir=e:\projects\bin\OpenSSL\SSL *nmake *nmake test *nmake install Step 4 is optional. 
Compiling x86: * *Open x86 Native Tools Command Prompt *perl Configure VC-WIN32 --prefix=e:\projects\bin\OpenSSL\vc-win32 --openssldir=e:\projects\bin\OpenSSL\SSL *nmake *nmake test *nmake install Step 4 is optional. A: If you're building in cygwin, you can use the following script, assume MSDEVPATH has already been set to your Visual Studio dir echo "Building x64 OpenSSL" # save the path of the x86 msdev MSDEVPATH_x86=$MSDEVPATH # and set a new var with x64 one MSDEVPATH_x64=`cygpath -u $MSDEVPATH/bin/x86_amd64` # now set vars with the several lib path for x64 in windows mode LIBPATH_AMD64=`cygpath -w $MSDEVPATH_x86/lib/amd64` LIBPATH_PLATFORM_x64=`cygpath -w $MSDEVPATH_x86/PlatformSDK/lib/x64` # and set the LIB env var that link looks at export LIB="$LIBPATH_AMD64;$LIBPATH_PLATFORM_x64" # the new path for nmake to look for cl, x64 at the start to override any other msdev that was set previously export PATH=$MSDEVPATH_x64:$PATH ./Configure VC-WIN64A zlib-dynamic --prefix=$OUT --with-zlib-include=zlib-$ZLIB_VERSION/include --with-zlib-lib=zlib-$ZLIB_VERSION/x64_lib # do the deed ms/do_win64a.bat $MSDEVPATH_x86/bin/nmake -f ms/ntdll.mak ${1:-install} A: The build instructions have changed since this question was originally asked. The new instructions can be found here. Note that you will need to have perl and NASM installed, and you will need to use the developer command prompt. A: You can also use MSYS+mingw-w64: 1) download and extract msys to C:\msys 2) download and extract mingw-w64 to c:\mingw64 3) run msys postinstall script. When it asks for your mingw installation, point it to C:\mingw64\bin 4) Extract an openssl daily snapshot (1.0.0 release has a bug). In the source dir run configure mingw64 make make check make install 5) openssl is installed to /local/
{ "language": "en", "url": "https://stackoverflow.com/questions/158232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: PHP: Replace umlauts with closest 7-bit ASCII equivalent in an UTF-8 string What I want to do is to remove all accents and umlauts from a string, turning "lärm" into "larm" or "andré" into "andre". What I tried to do was to utf8_decode the string and then use strtr on it, but since my source file is saved as UTF-8 file, I can't enter the ISO-8859-15 characters for all umlauts - the editor inserts the UTF-8 characters. Obviously a solution for this would be to have an include that's an ISO-8859-15 file, but there must be a better way than to have another required include? echo strtr(utf8_decode($input), 'ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ', 'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy'); UPDATE: Maybe I was a bit inaccurate with what I try to do: I do not actually want to remove the umlauts, but to replace them with their closest "one character ASCII" equivalent. A: you can also try this $string = "Fóø Bår"; $transliterator = Transliterator::createFromRules(':: Any-Latin; :: Latin-ASCII; :: NFD; :: [:Nonspacing Mark:] Remove; :: Lower(); :: NFC;', Transliterator::FORWARD); echo $normalized = $transliterator->transliterate($string); but you need to have http://php.net/manual/en/book.intl.php available A: iconv("utf-8","ascii//TRANSLIT",$input); Extended example A: A little trick that doesn't require setting locales or having huge translation tables: function Unaccent($string) { if (strpos($string = htmlentities($string, ENT_QUOTES, 'UTF-8'), '&') !== false) { $string = html_entity_decode(preg_replace('~&([a-z]{1,2})(?:acute|cedil|circ|grave|lig|orn|ring|slash|tilde|uml);~i', '$1', $string), ENT_QUOTES, 'UTF-8'); } return $string; } The only requirement for it to work properly is to save your files in UTF-8 (as you should already). A: Okay, found an obvious solution myself, but it's not the best concerning performance... 
echo strtr(utf8_decode($input), utf8_decode('ŠŒŽšœžŸ¥µÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýÿ'), 'SOZsozYYuAAAAAAACEEEEIIIIDNOOOOOOUUUUYsaaaaaaaceeeeiiiionoooooouuuuyy'); A: If you are using WordPress, you can use the built-in function remove_accents( $string ) https://codex.wordpress.org/Function_Reference/remove_accents However, I noticed a bug: it doesn’t work on a string with a single character. A: For Arabic and Persian users I recommend this way to remove diacritics: $diacritics = array('َ','ِ','ً','ٌ','ٍ','ّ','ْ','ـ'); $search_txt = str_replace($diacritics, '', $search_txt); For typing diacritics on Arabic keyboards you can use these ASCII codes (those codes are ASCII, not Unicode) in Windows editors, typing diacritics directly or holding Alt + (type the code of the diacritic character). These are the codes: ـَ(0243) ـِ(0246) ـُ(0245) ـً(0240) ـٍ(0242) ـٌ(0241) ـْ(0250) ـّ(0248) ـ ـ(0220) A: I found that this one gives the most consistent results in French and German. With the meta tag set to utf-8, I have placed it in a function to return a line from an array of words and it works perfectly. htmlentities ( $line, ENT_SUBSTITUTE , 'utf-8' )
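For comparison, the decompose-and-strip idea that the Transliterator answer spells out (NFD, then remove nonspacing marks) is available in Python's standard library. This is a cross-language sketch, not one of the PHP answers; note that characters with no combining-mark decomposition, such as ß and ø, pass through unchanged, so it is only a partial substitute for iconv's //TRANSLIT:

```python
import unicodedata

def unaccent(s: str) -> str:
    # NFKD splits "é" into "e" + U+0301 (combining acute); dropping the
    # combining marks leaves the closest 7-bit base character.
    decomposed = unicodedata.normalize("NFKD", s)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(unaccent("lärm"), unaccent("andré"))  # larm andre
```

Unlike the hand-maintained strtr tables, this handles any accented letter Unicode can decompose, with no per-character list to keep in sync.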
{ "language": "en", "url": "https://stackoverflow.com/questions/158241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: What is the best free, Ajax.NET (System.Web.Extensions 3.5) compatible Rich Text Box control? I'm looking for a good ASP.NET RichTextBox component that integrates fairly easily with .NET Framework 3.5 Ajax, specifically one that can easily provide its values from inside an UpdatePanel. I got burned by RicherComponents RichTextBox which still does not reference the Framework 3.5. thanks! A: Look at FCKEditor for a free solution. I'm unsure if it's usable inside an update panel, but it's free and open source. http://www.fckeditor.net/ A: If you would consider going with an HTML editor instead of a Rich Text format editor, I recommend the Telerik web editor. It is very flexible and integrates quite solidly with Ajax. A: I googled based on craigmoliver's answer and found this: http://www.webcitation.org/5bFQaq7Wp Basically, it's a solution to allow FCKEditor to work in an update panel, which I will try and post if it works.
{ "language": "en", "url": "https://stackoverflow.com/questions/158256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: WebRequest from localhost to localhost : why is it being denied? My app uses a WebRequest at certain points to get pages from itself. This shouldn't be a problem. It actually works fine on the server, which is a "shared" hosting package with Medium trust. Locally, I use a custom security policy based on Medium trust, which includes the following — copied straight from the default Medium trust policy: <IPermission class="WebPermission" version="1"> <ConnectAccess> <URI uri="$OriginHost$"/> </ConnectAccess> </IPermission> The offending line is in a custom XmlRelativeUrlResolver: public override object GetEntity( System.Uri puriAbsolute, string psRole, System.Type pReturnType ) { return _baseResolver.GetEntity( puriAbsolute, psRole, pReturnType ); } The url being requested is on localhost, in the same application as the requester. Here's the top of the stack trace. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.HttpWebRequest..ctor(Uri uri, ServicePoint servicePoint) at System.Net.HttpRequestCreator.Create(Uri Uri) at System.Net.WebRequest.Create(Uri requestUri, Boolean useUriBase) at System.Net.WebRequest.Create(Uri requestUri) at System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials) at System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials) at System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) at flow.controls.XmlRelativeUrlResolver.GetEntity(Uri puriAbsolute, String psRole, Type pReturnType) in c:\flow\source\controls\DataTransform.cs:line 105 at System.Xml.Xsl.Xslt.XsltLoader.CreateReader(Uri uri, XmlResolver xmlResolver) Anyone see the problem here? @Sijin: Thanks for the suggestion. The url that gets sent to the resolver is based on the request URL, and I confirmed in the debugger that accessing the site at 127.0.0.1 yields the same result. 
A: Does it work if you put 127.0.0.1 instead of localhost? A: My ignorance. I didn't know that the $OriginHost$ token was replaced using the originUrl attribute of the trust level — I thought it just came from the url of the app. I had originally left this attribute blank. <trust level="CustomMedium" originUrl="http://localhost/" /> A: This might not be the solution but when I saw your post I remembered this issue that I ran into about a year ago: http://support.microsoft.com/default.aspx/kb/896861 You receive error 401.1 when you browse a Web site that uses Integrated Authentication and is hosted on IIS 5.1 or IIS 6 We were creating a WebRequest to screen scrape a page and it worked in our production environment because we were not using a loopback host name but on development machines we ended up with access denied (after applying Windows Server 2003 SP2). The one difference here is that this was under integrated authentication which caused it to fail... it worked when the request was anonymous (so that is why I am not sure this is the answer for you).
{ "language": "en", "url": "https://stackoverflow.com/questions/158257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Python module dependency OK, I have two modules, each containing a class; the problem is their classes reference each other. Let's say for example I had a room module and a person module containing CRoom and CPerson. The CRoom class contains information about the room, and a CPerson list of everyone in the room. The CPerson class however sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room. The problem is with the two modules importing each other I just get an import error on whichever is being imported second :( In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header eg: class CPerson;//forward declare class CRoom { std::set<CPerson*> People; ... Is there any way to do this in Python, other than placing both classes in the same module or something like that? edit: added python example showing problem using above classes error: Traceback (most recent call last): File "C:\Projects\python\test\main.py", line 1, in from room import CRoom File "C:\Projects\python\test\room.py", line 1, in from person import CPerson File "C:\Projects\python\test\person.py", line 1, in from room import CRoom ImportError: cannot import name CRoom room.py from person import CPerson class CRoom: def __init__(Self): Self.People = {} Self.NextId = 0 def AddPerson(Self, FirstName, SecondName, Gender): Id = Self.NextId Self.NextId += 1 Person = CPerson(FirstName,SecondName,Gender,Id) Self.People[Id] = Person return Person def FindDoorAndLeave(Self, PersonId): del Self.People[PersonId] person.py from room import CRoom class CPerson: def __init__(Self, Room, FirstName, SecondName, Gender, Id): Self.Room = Room Self.FirstName = FirstName Self.SecondName = SecondName Self.Gender = Gender Self.Id = Id def Leave(Self): Self.Room.FindDoorAndLeave(Self.Id) A: Do you actually need to reference the classes 
at class definition time? ie. class CRoom(object): person = CPerson("a person") Or (more likely), do you just need to use CPerson in the methods of your class (and vice versa). eg: class CRoom(object): def getPerson(self): return CPerson("someone") If the second, there's no problem - as by the time the method gets called rather than defined, the module will be imported. Your sole problem is how to refer to it. Likely you're doing something like: from CRoom import CPerson # or even import * With circularly referencing modules, you can't do this, as at the point one module imports another, the original modules body won't have finished executing, so the namespace will be incomplete. Instead, use qualified references. ie: #croom.py import cperson class CRoom(object): def getPerson(self): return cperson.CPerson("someone") Here, python doesn't need to lookup the attribute on the namespace until the method actually gets called, by which time both modules should have completed their initialisation. A: First, naming your arguments with uppercase letters is confusing. Since Python does not have formal, static type checking, we use the UpperCase to mean a class and lowerCase to mean an argument. Second, we don't bother with CRoom and CPerson. Upper case is sufficient to indicate it's a class. The letter C isn't used. Room. Person. Third, we don't usually put things in One Class Per File format. A file is a Python module, and we more often import an entire module with all the classes and functions. [I'm aware those are habits -- you don't need to break them today, but they do make it hard to read.] Python doesn't use statically defined types like C++. When you define a method function, you don't formally define the data type of the arguments to that function. You merely list some variable names. Hopefully, the client class will provide arguments of the correct type. At run time, when you make a method request, then Python has to be sure the object has the method. NOTE. 
Python doesn't check to see if the object is the right type -- that doesn't matter. It only checks to see if it has the right method. The loop between room.Room and person.Person is a problem. You don't need to include one when defining the other. It's safest to import the entire module. Here's room.py import person class Room( object ): def __init__( self ): self.nextId= 0 self.people= {} def addPerson(self, firstName, secondName, gender): id= self.NextId self.nextId += 1 thePerson = person.Person(firstName,secondName,gender,id) self.people[id] = thePerson return thePerson Works fine as long as Person is eventually defined in the namespace where this is executing. Person does not have to be known when you define the class. Person does not have to be known until runtime when then Person(...) expression is evaluated. Here's person.py import room class Person( object ): def something( self, x, y ): aRoom= room.Room( ) aRoom.addPerson( self.firstName, self.lastName, self.gender ) Your main.py looks like this import room import person r = room.Room( ... ) r.addPerson( "some", "name", "M" ) print r A: No need to import CRoom You don't use CRoom in person.py, so don't import it. Due to dynamic binding, Python doesn't need to "see all class definitions at compile time". If you actually do use CRoom in person.py, then change from room import CRoom to import room and use module-qualified form room.CRoom. See Effbot's Circular Imports for details. Sidenote: you probably have an error in Self.NextId += 1 line. It increments NextId of instance, not NextId of class. To increment class's counter use CRoom.NextId += 1 or Self.__class__.NextId += 1. A: You could just alias the second one. 
import CRoom CPerson = CRoom.CPerson A: @S.Lott if I don't import anything into the room module I get an undefined error instead (I imported it into the main module like you showed) Traceback (most recent call last): File "C:\Projects\python\test\main.py", line 6, in Ben = Room.AddPerson('Ben', 'Blacker', 'Male') File "C:\Projects\python\test\room.py", line 12, in AddPerson Person = CPerson(FirstName,SecondName,Gender,Id) NameError: global name 'CPerson' is not defined Also, the reason they're in different modules is where I encountered the problem to start with: the container class (i.e. the room) is already several hundred lines, so I wanted the items in it (e.g. the people) in a separate file. EDIT: main.py from room import CRoom from person import CPerson Room = CRoom() Ben = Room.AddPerson('Ben', 'Blacker', 'Male') Tom = Room.AddPerson('Tom', 'Smith', 'Male') Ben.Leave()
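The fix the answers converge on — plain `import room` / `import person` plus module-qualified names, so neither class is looked up until call time — can be demonstrated end to end. A hedged sketch (the lower-case naming and the temp-directory scaffolding are mine, not the asker's code):

```python
import os
import sys
import tempfile

# Two modules that genuinely need each other. Each does "import <module>"
# and defers the attribute lookup (room.Room, person.Person) to call time,
# by which point both modules have finished initializing.
SOURCES = {
    "room.py": """\
import person  # safe: person.Person is only *used* inside a method

class Room:
    def __init__(self):
        self.people = {}
        self.next_id = 0

    def add_person(self, first, second, gender):
        pid = self.next_id
        self.next_id += 1
        p = person.Person(self, first, second, gender, pid)
        self.people[pid] = p
        return p

    def find_door_and_leave(self, pid):
        del self.people[pid]
""",
    "person.py": """\
import room  # circular, but fine with module-qualified access

class Person:
    def __init__(self, the_room, first, second, gender, pid):
        self.room = the_room
        self.first = first
        self.second = second
        self.gender = gender
        self.id = pid

    def leave(self):
        self.room.find_door_and_leave(self.id)
""",
}

workdir = tempfile.mkdtemp()
for name, code in SOURCES.items():
    with open(os.path.join(workdir, name), "w") as f:
        f.write(code)
sys.path.insert(0, workdir)

import room  # triggers the room <-> person cycle without an ImportError

r = room.Room()
ben = r.add_person("Ben", "Blacker", "Male")
ben.leave()
print(sorted(r.people))  # []
```

The `from room import CRoom` form fails because it demands the name at import time, while the module body is still half-executed; the qualified form only needs the module object, which already exists in sys.modules.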
{ "language": "en", "url": "https://stackoverflow.com/questions/158268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Can't see subreport in the Report Manager I've been following this tutorial (lesson 6) in order to build and deploy a sample report with an embedded subreport which reads its parameters' values from the parent report. This subreport is embedded in one of the group rows of the report's table, and both share the same datasource. Additionally, detail rows appear collapsed until the user presses the (+) button for each group of data in the table. The report works great when I preview it in the Business Intelligence Development Studio (by the way, SQL 2005 Express edition) but when I deploy it and try to see it in the Report Manager, the subreport is not shown. And, if I press the (+) button, the following message appears: Some parameters or credentials have not been specified Does anybody have the slightest idea of what I am doing wrong? Why does it work perfectly in the Report Viewer embedded in Visual Studio but not in the Report Manager web app? Thanks in advance. A: Does the subreport use the same Data Source as the parent report? If not, be sure to check the data source of the subreport to make sure it is correct. Check in Report Manager, not your local copy. A: I'm beginning to think this could be an issue with the browser. I'm currently using Internet Explorer 8 Beta and I'm also experiencing weird behavior from the Report Manager. I've tried with Google Chrome and Firefox 3 and, although the navigation is not as smooth as I like, the problem seems fixed. A: It happens when you use IE8 as the report browser. I faced the same issue, and when I tested on Chrome it worked fine. A: This may or may not be related, but I had a similar problem some time ago (except that in my case the reports were accessed through a custom web page) and it turned out we had an older version of the report viewer control (the version that came with Sql 2005 RTM). After upgrading to the latest version the issue went away.
{ "language": "en", "url": "https://stackoverflow.com/questions/158277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Working with Silverlight B2 and RC0 I've been reading about the new developer-only RC0 for Silverlight, and the fact that it is supposed to be used only by developers to solve any breaking changes when upgrading from beta 2, so that when the actual S2 is released, migration is smoother. My question is, since you are supposed to uninstall B2 tools and install RC0 in VS2008, is there any way to keep providing support and bugfixing for an existing B2 app, while maintaining an RC0 branch at the same time? Or is the only possible course of action having a VM with VS2008 and B2 used for working with the B2 app until the RC is actually released? A: A VM is really the only solution I've heard of so far for this. Hopefully we won't be in this limbo for too horribly long. One thing you could do to make the VM a bit smaller is if you do your RC0 stuff on the VM, you can use Visual Web Developer Express 2008 SP1 on there instead of the full blown Visual Studio. Unfortunately you can't do the reverse and do the Beta 2 stuff on VWD Express.
{ "language": "en", "url": "https://stackoverflow.com/questions/158278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I install MySQL modules within PHP? I've updated php.ini and moved php_mysql.dll as explained in steps 6 and 8 here. I get this error… Fatal error: Call to undefined function mysql_connect() in C:\inetpub... MySQL doesn't show up in my phpinfo() report. I've updated the c:\Windows\php.ini file from ; Directory in which the loadable extensions (modules) reside. extension_dir = "./" to ; Directory in which the loadable extensions (modules) reside. extension_dir = ".;c:\Windows\System32" Result: no change. I changed the php.ini value of extension_dir thusly: extension_dir = "C:\Windows\System32" Result: much more in the phpinfo() report, but MySQL still isn't working. I copied the file libmysql.dll from folder C:\php to folders C:\Windows\System32 and C:\Windows Result: no change. I stopped and restarted IIS. Result: new, different errors instead! Warning: mysql_connect() [function.mysql-connect]: Access denied for user '...'@'localhost' (using password: YES) in C:\inetpub\... error in query. Fatal error: Call to a member function RecordCount() on a non-object in C:\inetpub\... I found several .php files in the website where I had to set variables: $db_user $db_pass Result: The site works! A: As the others say, these two values in php.ini are crucial. I have the following in my php.ini: note the trailing slash - not sure if it is needed - but it does work. extension_dir = "H:\apps\php\ext\" extension=php_mysql.dll Also it is worth ensuring that you only have one copy of php.ini on your machine - I've had problems with this where I've been editing a php.ini file which php isn't using and getting very frustrated until I realised. Also if php is running as a module within apache you will need to restart the apache server to pick up the changes. Wise to do this in any case if you're not sure. A "php -m" from the cmd prompt will show you the modules that are loaded from the ini file. A: In the php.ini file, check if the extension path configuration is valid. 
A: You will need to enable the extension=php_mysql.dll option in the php.ini as well. Also, make sure that the file is in the extension_dir you set. You can read more about it at: http://us3.php.net/manual/en/install.windows.extensions.php A: On a completely different note, might I suggest WampServer? It should get you up and running with a Apache/PHP/MySQL install in no time. You could even compare the WampServer config files with your own to see where you originally went wrong.
{ "language": "en", "url": "https://stackoverflow.com/questions/158279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is DirectSound the best audio abstraction layer for Windows?

Is DirectSound the best audio abstraction layer for Windows? I'm switching my app from a very bad sound implementation, built to a specific chipset, to an abstraction layer. The app is native WinForms, .NET 3.5. DirectX/DirectSound is the likely choice, but I'm a little concerned about the overhead. Any other options? Or is it silly to even THINK about anything else?

A: DirectSound is not getting the same love from Microsoft today as it got in the past. As far as DirectX is concerned, you may try XAudio2 or XACT instead. Some people love those, others hate them. XAudio2 is more low-level, while XACT is rather high-level. Both are accessible from Microsoft XNA, which is like Managed DirectX, but is actively developed. But you are not restricted to using what DirectX comes with. Try FMod if you want something great. They still have their Shareware/Hobbyist license model and a Freeware license model, in case you don't want to pay some big bucks. Your choice depends on what exactly you want to do with sound.

A: See if SDL looks better.

A: Well, you can try OpenAL instead. OpenAL is to DirectSound(3D) what OpenGL is to Direct3D. The interface is pretty similar to OpenGL; if you don't like that, you'll probably dislike OpenAL, too. Also, I'm not sure if the Windows version of this lib is its own native implementation or just calls DirectSound, and thus might just be a (thin?) wrapper on top of it.

A: DirectSound is pretty good. If you need low latency or good support for sound input and output via multiple soundcards at the same time, you may also want to have a look at ASIO: http://de.wikipedia.org/wiki/Audio_Stream_Input/Output

A: The waveOut... API is still an option. It's tricky to work with from managed code, but you can play multiple sounds at once this way (in XP and Vista, at least). If you just need to play sounds occasionally, System.Media.SoundPlayer is very easy to use.
However, you can't play more than one sound at a time with this component. DirectSound is your only other major alternative. It has a built-in software synthesizer, if that's something you need. EDIT: SDL looks interesting. Thanks, Sijin. A: SharpDX looks interesting. I'm planning on trying it as a replacement for Managed DirectX because of the x86 limitations of the latter.
Q: IE7 detected as IE6 on Vista...Why?

I have two Vista Business machines. I have IE 7 installed on both. On my first machine (Computer1), if I go to this site (http://www.quirksmode.org/js/detect.html), it says I am using "Explorer 6 on Windows". If I use Computer2 with Vista Business and IE7, it says I am using "Explorer 7 on Windows". Here is a screen capture. The same version of IE is on both machines. Anyone have a solution?

A: Computer1:

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) ; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; InfoPath.2; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30618; MS-RTC LM 8; .NET CLR 1.1.4322)

Computer2:

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506; InfoPath.2; .NET CLR 3.5.21022)

There seems to be some garbage in the user agent of Computer1 that repeats the Mozilla/4.0 (compatible...) information with MSIE 6.0 information (and mismatched closing brackets). That said, I ran your user agent through the script provided on the page you linked to and it came back as Explorer 7, so I'm not sure why it is failing on the page itself. Regardless, check your Registry for additional User Agent information that could be removed at [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent] (yes, it resides under '5.0' even if you have Internet Explorer 7). Note that this is the location in Windows XP; I'm assuming it is the same in Windows Vista.

A: Can you post the User Agent of both machines? (You can go to some site that displays the user agent, i.e. this one, at the very bottom.) I assume it's a bug on the Quirksmode site in conjunction with the user agent.

A: Are you using the same version of IE7 on both machines?
If the versions are different then it is possible that the script is not recognising one version for some reason and is just defaulting to IE6 as a lowest common denominator. It is possible that one of the machines may have a version of IE which isn't exactly following the rules to the letter, and the script is having a hard time handling it.

A: Check the registry keys [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Post Platform] and [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\5.0\User Agent\Post Platform]. Some pieces of software will add additional values here, which is fine, unless you specify a user agent string. In that case, most browser detects will fire off and detect the last value they find. Typically, these values will either be in a "User Agent" key or a "Post Platform" key.

A: I found the registry entry:

HKEY_USERS\S-1-5-21-817507923-1393677948-3603797094-1205\Software\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Post Platform

It had the "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" value. After removing that, my browser is recognized as IE 7.

A: This is just a guess, but the first string you posted explicitly has "MSIE 6.0" in the user agent string. If the site is lazy and doesn't properly parse the string, that could override the "MSIE 7.0" earlier in the string, and give you a false result.

A: I found the IE6 registry key. Am I able to delete this without causing problems on my PC?

HKEY_USERS\S-1-5-21-117609710-1647877149-839522115-1003\Software\Microsoft\Windows\CurrentVersion\Internet Settings\User Agent\Post Platform

where I found the following:

Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)

I have IE7 installed and am able to use most Facebook etc. items. It was pointed out to me that I seem to have both versions active and could experience problems if I don't fix this.
I don't want to remove the registry key if that could cause a whole new set of problems! thanks
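The "lazy parsing" guess above is easy to demonstrate: a detector that takes the last "MSIE" token out of the corrupted Computer1 string reports 6.0, while one that stops at the first match reports 7.0. This is only a plain-JavaScript sketch (the helper names are made up, and the quirksmode script itself may work differently):

```javascript
// Sketch: two ways a detector might read "MSIE" tokens out of a UA string.
// detectFirst/detectLast are hypothetical names, not from any real library.
function detectFirst(ua) {
  const m = ua.match(/MSIE (\d+\.\d+)/); // first occurrence wins
  return m ? m[1] : null;
}

function detectLast(ua) {
  const all = [...ua.matchAll(/MSIE (\d+\.\d+)/g)]; // last occurrence wins
  return all.length ? all[all.length - 1][1] : null;
}

// The corrupted UA from Computer1: a second "MSIE 6.0" block is embedded.
const ua1 = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; " +
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1))";

console.log(detectFirst(ua1)); // "7.0"
console.log(detectLast(ua1));  // "6.0"
```

So the same string reads as IE7 or IE6 depending purely on which occurrence the detection script happens to use.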
Q: Tool for translation of Oracle PL/SQL into Postgresql PL/pgSQL

Is there a tool (preferably free) which will translate Oracle's PL/SQL stored procedure language into Postgresql's PL/pgSQL stored procedure language?

A: There is a tool available at http://ora2pg.darold.net/ which can be used to translate Oracle schemas to Postgres schemas, but I'm not sure if it will also translate the stored procedures. But it might provide a place to start.

A: There's also EnterpriseDB, which has quite a bit of Oracle compatibility to help migration from Oracle. The version with Oracle compatibility is not free, but worth a look if you are doing more than just one procedure translation.

A: Having worked on an Oracle to Postgres conversion for quite some time, I can tell you the only way to do it is by hand. There are subtle differences between the two languages that can trip you up. We tried using an automated tool, but it only made the problem worse and we ended up trashing the output.

A: Use ora2pg to translate your schema. For stored procedures:

1. Manually convert all DECODE() to CASE statements, and all old-style Oracle WHERE (+) outer joins to explicit LEFT OUTER JOIN statements. I haven't found a tool to do this.
2. Translate PL/SQL functions into PL/pgSQL (see below).

It would be very nice if someone started a sourceforge project to do this. Hint hint...
Here's what I mean for (2) above:

CREATE OR REPLACE FUNCTION trunc(parmDate DATE, parmFormat VARCHAR)
RETURNS date AS $$
DECLARE
    varPgSqlFormat VARCHAR;
BEGIN
    varPgSqlFormat := lower(parmFormat);
    IF varPgSqlFormat IN ('syyyy', 'yyyy', 'year', 'syear', 'yyy', 'yy', 'y') THEN
        varPgSqlFormat := 'year';
    ELSIF varPgSqlFormat IN ('month', 'mon', 'mm', 'rm') THEN
        varPgSqlFormat := 'month';
    ELSIF varPgSqlFormat IN ('ddd', 'dd', 'j') THEN
        varPgSqlFormat := 'day';
    END IF;
    RETURN DATE_TRUNC(varPgSqlFormat, parmDate);
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION trunc(parmDate DATE)
RETURNS date AS $$
BEGIN
    RETURN DATE_TRUNC('day', parmDate);
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION last_day(in_date date)
RETURNS date AS $$
BEGIN
    RETURN CAST(DATE_TRUNC('month', in_date) + '1 month'::INTERVAL AS DATE) - 1;
END;
$$ LANGUAGE plpgsql;
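The DECODE()-to-CASE rewrite mentioned above is mechanical enough to script for the simple, flat case. Here is a toy sketch of that rewrite in JavaScript - it assumes a flat argument list with no nested parentheses or commas inside string literals, so it is a starting point rather than a real parser:

```javascript
// Toy rewrite of Oracle's DECODE(expr, search, result, ..., default)
// into a standard CASE expression. Assumes no nested commas/parens.
function decodeToCase(sql) {
  const m = sql.match(/^DECODE\((.*)\)$/i);
  if (!m) return sql; // not a DECODE call; pass through unchanged
  const args = m[1].split(",").map(s => s.trim());
  const [expr, ...rest] = args;
  let out = "CASE";
  let i = 0;
  while (i + 1 < rest.length) {                   // search/result pairs
    out += ` WHEN ${expr} = ${rest[i]} THEN ${rest[i + 1]}`;
    i += 2;
  }
  if (i < rest.length) out += ` ELSE ${rest[i]}`; // trailing default, if any
  return out + " END";
}

console.log(decodeToCase("DECODE(status, 1, 'open', 2, 'closed', 'unknown')"));
// CASE WHEN status = 1 THEN 'open' WHEN status = 2 THEN 'closed' ELSE 'unknown' END
```

A real tool would need a proper expression parser (and NULL-equality handling, since DECODE treats NULL = NULL as a match while CASE does not), but the shape of the transformation is exactly this.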
Q: Is there a cross-browser onload event when clicking the back button? For all major browsers (except IE), the JavaScript onload event doesn’t fire when the page loads as a result of a back button operation — it only fires when the page is first loaded. Can someone point me at some sample cross-browser code (Firefox, Opera, Safari, IE, …) that solves this problem? I’m familiar with Firefox’s pageshow event but unfortunately neither Opera nor Safari implement this. A: Some modern browsers (Firefox, Safari, and Opera, but not Chrome) support the special "back/forward" cache (I'll call it bfcache, which is a term invented by Mozilla), involved when the user navigates Back. Unlike the regular (HTTP) cache, it captures the complete state of the page (including the state of JS, DOM). This allows it to re-load the page quicker and exactly as the user left it. The load event is not supposed to fire when the page is loaded from this bfcache. For example, if you created your UI in the "load" handler, and the "load" event was fired once on the initial load, and the second time when the page was re-loaded from the bfcache, the page would end up with duplicate UI elements. This is also why adding the "unload" handler stops the page from being stored in the bfcache (thus making it slower to navigate back to) -- the unload handler could perform clean-up tasks, which could leave the page in unworkable state. For pages that need to know when they're being navigated away/back to, Firefox 1.5+ and the version of Safari with the fix for bug 28758 support special events called "pageshow" and "pagehide". References: * *Webkit: http://webkit.org/blog/516/webkit-page-cache-ii-the-unload-event/ *Firefox: https://developer.mozilla.org/En/Using_Firefox_1.5_caching. *Chrome: https://code.google.com/p/chromium/issues/detail?id=2879 A: I couldn't get the above examples to work. I simply wanted to trigger a refresh of certain modified div areas when coming back to the page via the back button. 
The trick I used was to set a hidden input field (called a "dirty bit") to 1 as soon as the div areas changed from the original. The hidden input field actually retains its value when I click back, so onload I can check for this bit. If it's set, I refresh the page (or just refresh the divs). On the original load, however, the bit is not set, so I don't waste time loading the page twice.

<input type='hidden' id='dirty'>
<script>
$(document).ready(function() {
    if ($('#dirty').val()) {
        // ... reload the page or specific divs only
    }
    // when something modifies a div that needs to be refreshed, set dirty=1
    $('#dirty').val('1');
});
</script>

And it would trigger properly whenever I clicked the back button.

A: I can confirm ckramer that jQuery's ready event works in IE and FireFox. Here's a sample:

<html>
<head>
    <title>Test Page</title>
    <script src="http://code.jquery.com/jquery-latest.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function () {
            var d = new Date();
            $('#test').html("Hi at " + d.toString());
        });
    </script>
</head>
<body>
    <div id="test"></div>
    <div>
        <a href="http://www.google.com">Go!</a>
    </div>
</body>
</html>

A: If I remember rightly, adding an unload() event means the page cannot be cached (in the forward/backward cache), because its state changes/may change when the user navigates away. So it is not safe to restore the last-second state of the page when returning to it by navigating through the history object.

A: I thought this would be for "onunload", not page load, since aren't we talking about firing an event when hitting "Back"? $(document).ready() is for events desired on page load, no matter how you get to that page (i.e. redirect, opening the browser to the URL directly, etc.), not when clicking "Back", unless you're talking about what to fire on the previous page when it loads again.
And I'm not sure the page isn't getting cached, as I've found that JavaScript files still are, even when $(document).ready() is included in them. We've had to hit Ctrl+F5 when editing our scripts that have this event, whenever we revise them and want to test the results in our pages.

$(window).unload(function(){ alert('do unload stuff here'); });

is what you'd want for an onunload event when hitting "Back" and unloading the current page, and it would also fire when a user closes the browser window. This sounded more like what was desired, even if I'm outnumbered by the $(document).ready() responses. Basically the difference is between an event firing on the current page while it's closing, or on the one that loads when clicking "Back" as it's loading. Tested fine in IE 7; can't speak for the other browsers as they aren't allowed where we are. But this might be another option.

A: I ran into a problem where my JS was not executing when the user had clicked back or forward. I first set out to stop the browser from caching, but this didn't seem to be the problem. My JavaScript was set to execute after all of the libraries etc. were loaded. I checked these with the readyStateChange event. After some testing I found out that the readyState of an element in a page where back has been clicked is not 'loaded' but 'complete'. Adding || element.readyState == 'complete' to my conditional statement solved my problems. Just thought I'd share my findings; hopefully they will help someone else.

Edit for completeness: My code looked as follows:

script.onreadystatechange = function(){
    if(script.readyState == 'loaded' || script.readyState == 'complete') {
        // call code to execute here.
    }
};

In the code sample above the script variable was a newly created script element which had been added to the DOM.

A: jQuery's ready event was created for just this sort of issue. You may want to dig into the implementation to see what is going on under the covers.
A: For people who don't want to use the whole jQuery library, I extracted the implementation into separate code. It's only 0.4 KB. You can find the code, together with a German tutorial, in this wiki: http://www.easy-coding.de/wiki/html-ajax-und-co/onload-event-cross-browser-kompatibler-domcontentloaded.html

A: OK, here is a final solution based on ckramer's initial solution and palehorse's example that works in all of the browsers, including Opera. If you set history.navigationMode to 'compatible' then jQuery's ready function will fire on Back button operations in Opera as well as the other major browsers. This page has more information. Example:

history.navigationMode = 'compatible';
$(document).ready(function(){
    alert('test');
});

I tested this in Opera 9.5, IE7, FF3 and Safari and it works in all of them.

A: Bill, I dare answer your question, however I am not 100% sure of my guesses. I think browsers other than IE, when taking the user to a page in history, will not only load the page and its resources from cache but will also restore the entire DOM (read: session) state for it. IE doesn't do DOM restoration (or at least did not), and thus the onload event looks to be necessary for proper page re-initialization there.

A: I tried the solution from Bill using $(document).ready... but at first it did not work. I discovered that if the script is placed after the html section, it will not work. If it is in the head section it will work, but only in IE. The script does not work in Firefox.
[edit(Nickolay): here's why it works that way: webkit.org, developer.mozilla.org. Please read those articles (or my summary in a separate answer below) and consider whether you really need to do this and make your page load slower for your users.]

Can't believe it? Try this:

<body onunload=""><!-- This does the trick -->
<script type="text/javascript">
    alert('first load / reload');
    window.onload = function(){alert('onload')};
</script>
<a href="http://stackoverflow.com">click me, then press the back button</a>
</body>

You will see similar results when using jQuery. You may want to compare to this one without onunload:

<body><!-- Will not reload on back button -->
<script type="text/javascript">
    alert('first load / reload');
    window.onload = function(){alert('onload')};
</script>
<a href="http://stackoverflow.com">click me, then press the back button</a>
</body>

A: OK, I tried this and it works in Firefox 3, Safari 3.1.1, and IE7 but not in Opera 9.52. If you use the example shown below (based on palehorse's example), you get an alert box pop-up when the page first loads. But if you then go to another URL, and then hit the Back button to go back to this page, you don't get an alert box pop-up in Opera (but you do in the other browsers). Anyway, I think this is close enough for now. Thanks everyone!

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>Untitled Document</title>
    <meta http-equiv="expires" content="0">
    <script src="jquery.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready( function(){
            alert('test');
        });
    </script>
</head>
<body>
    <h1>Test of the page load event and the Back button using jQuery</h1>
</body>
</html>

A: The unload event does not work on IE 9. I tried the load event (onload()) instead, and it works on IE 9 and FF5.
Example:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
    <title>Insert title here</title>
    <script type="text/javascript" src="jquery.js"></script>
    <script type="text/javascript">
        jQuery(window).bind("load", function() {
            $("[name=yourName]").val('');
        });
    </script>
</head>
<body>
    <h1>body.jsp</h1>
    <form action="success.jsp">
        <div id="myDiv">
            Your Full Name: <input name="yourName" id="fullName" value="Your Full Name" /><br>
            <br>
            <input type="submit"><br>
        </div>
    </form>
</body>
</html>

A: I have used an HTML template. In this template's custom.js file, there was a function like this:

jQuery(document).ready(function($) {
    $(window).on('load', function() {
        //...
    });
});

But this function was not working when I went back after going to another page. So I tried this, and it worked:

jQuery(document).ready(function($) {
    //...
});

//Window Load Start
window.addEventListener('load', function() {
    jQuery(document).ready(function($) {
        //...
    });
});

Now I have two "ready" functions, but it doesn't give any error and the page works very well. Nevertheless, I should note that it has only been tested on Windows 10 - Opera v53 and Edge v42, no other browsers. Keep this in mind... Note: the jQuery version was 3.3.1 and the migrate version was 3.0.0.
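Tying the bfcache discussion above together: on browsers that support it, the pageshow event fires on both normal loads and back/forward-cache restores, and its persisted flag tells the two apart. Here is a minimal sketch of that branch; the hook names are invented, and the event is simulated so the logic can run outside a browser:

```javascript
// pageshow fires on normal loads (persisted === false) and on
// back/forward-cache restores (persisted === true).
function handlePageShow(event, hooks) {
  if (event.persisted) {
    hooks.onRestore();   // page came back from the bfcache
  } else {
    hooks.onFirstLoad(); // regular load; "load" also fired
  }
}

// In a browser you would wire it up like this:
// window.addEventListener('pageshow', e => handlePageShow(e, myHooks));

// Simulated here so the branch can be exercised outside a browser:
const calls = [];
const hooks = {
  onFirstLoad: () => calls.push("first"),
  onRestore:   () => calls.push("restore"),
};
handlePageShow({ persisted: false }, hooks);
handlePageShow({ persisted: true }, hooks);
console.log(calls); // [ 'first', 'restore' ]
```

The caveat from the thread still applies: at the time these answers were written, Opera and older Safari did not implement pageshow, which is why the workarounds above exist.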
Q: RoR: Creating/Updating: Showing validation errors while preserving previous values

I have a basic model in which I have specified some of the fields to validate the presence of. In the create action in the controller I do the standard:

@obj = SomeObject.new(params[:some_obj])
if @obj.save
  flash[:notice] = "ok"
  redirect...
else
  flash[:error] = @obj.errors.full_messages.collect { |msg| msg + "<br/>" }
  redirect to new form
end

However, when I redirect to the new form, the errors show, but the fields are empty. Is there a way to repopulate the fields with the entered values so the errors can be corrected easily?

A: You render :action => :new rather than redirecting.

A: Capture @obj in the flash hash as well, and then check for it in the new action.

@obj = SomeObject.new(params[:some_obj])
if @obj.save
  flash[:notice] = "ok"
  # success
else
  flash[:error] = @obj.errors.full_messages.collect { |msg| msg + "<br/>" }
  flash[:obj] = @obj
  # redirect to new form
end

In new:

@obj = flash[:obj] || MyClass.new
Q: How would I go about creating a mod_rewrite that redirects to launch.php?i=/the/url/that/they/want?

So if the user types mydomain.com/dashboard, the document the server actually sends them is /launch.php?i=/dashboard. The one caveat is that I would like to leave requests for

* /flags
* /people
* /posters
* /css
* /icons
* /images
* /libraries
* /patterns

alone, and they should request the actual folder. How would I create such a mod_rewrite?

A: This is the .htaccess file for the CakePHP Framework. Please replace the index.php and ?url= to fit your needs.

<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
</IfModule>

The "!-d" tells Apache to leave existing folders alone and "!-f" to leave existing files alone; everything else is channelled through index.php. As suggested in a comment, be aware that if it's not working it could be because mod_rewrite is not enabled - you'll not get an error stating that fact, you'll probably just get an HTTP 404.
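The !-d and !-f conditions are what give you the carve-outs asked for (/flags, /people, and so on): anything that maps to a real directory or file is served as-is, and everything else is funneled to launch.php. A small simulation of that decision logic (plain JavaScript; the set of existing paths stands in for the filesystem and is made up here) shows the effect:

```javascript
// Simulates the RewriteCond !-d / !-f check: real paths pass through,
// everything else is rewritten to launch.php?i=<path>.
function rewrite(path, existingPaths) {
  return existingPaths.has(path) ? path : "/launch.php?i=" + path;
}

const existingPaths = new Set([
  "/flags", "/people", "/posters", "/css",
  "/icons", "/images", "/libraries", "/patterns",
]);

console.log(rewrite("/dashboard", existingPaths)); // /launch.php?i=/dashboard
console.log(rewrite("/flags", existingPaths));     // /flags (served as-is)
```

Because the conditions test the filesystem rather than a hard-coded list, new folders are exempted automatically without touching the .htaccess file.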
Q: http/AJAX (GWT) vs Eclipse gui for thin client deployment

I am starting a project for which we will have a thin client, sending requests and getting responses from a server. We are still in the planning stages, so we have a choice to settle on either an Eclipse based GUI (Eclipse plugin) or using GWT as a frontend for the application. I am not very familiar with Eclipse as a GUI (nor with GWT) but do know 'normal' Java. What would be the main benefits and drawbacks of either approach?

Edit: Addressing the questions posed:

* The project, if Eclipse based, would be using the core Eclipse GUI (no coding tools, just bare bones) and the GUI would be packaged with it.
* I have been looking at GWT and so far it seems the best choice, but I still have some research to do.
* Communication method is a variant of CORBA (in-house libraries)

A: Coming from someone who has just as much experience as you do (I haven't developed any Eclipse based plugins or anything with GWT), this is purely an opinion from another set of eyes on your problem. Purely from the standpoint of this application being served from a thin client, I would think GWT would fit the bill for this situation a bit better. It would certainly be a bit lighter and would not require the overhead that an Eclipse plugin would. I also think this would make deploying updates a lot easier.

A: If you are thinking of using Eclipse to build a standalone client or a plugin that's just added to an existing Eclipse install, how are you planning to communicate with your server? Our team tried building an Eclipse Rich Client Platform application and having that communicate with a J2EE EJB-based middle tier over RMI, and that worked pretty well, except for when we got to security and couldn't use any of the standard J2EE security patterns to create a login on the Eclipse client that would authenticate against the server. This seems to be a known issue in Eclipse circles, but I haven't seen anything that's a good solution for it.
GWT seems pretty advanced for what it is, and there are several IDEs that have added tooling for working with it, but I have no first-hand experience developing with it. Everything that I have seen in terms of demos and examples makes it look really powerful and easy to use. So my basic point is: Eclipse is an exciting platform, but you will face difficulties which you might have to solve yourself. GWT seems to be an easier alternative for now.
Q: Is there a way to run a method/class only on Tomcat/Wildfly/Glassfish startup?

I need to remove temp files on Tomcat startup; the path to the folder which contains the temp files is in applicationContext.xml. Is there a way to run a method/class only on Tomcat startup?

A: You could write a ServletContextListener which calls your method from the contextInitialized() method. You attach the listener to your webapp in web.xml, e.g.

<listener>
    <listener-class>my.Listener</listener-class>
</listener>

and

package my;

public class Listener implements javax.servlet.ServletContextListener {
    public void contextInitialized(ServletContextEvent event) {
        MyOtherClass.callMe();
    }

    public void contextDestroyed(ServletContextEvent event) {
        // nothing to clean up
    }
}

Strictly speaking, this is only run once on webapp startup, rather than Tomcat startup, but that may amount to the same thing.

A: I'm sure there must be a better way to do it as part of the container's lifecycle (edit: Hank has the answer - I was wondering why he was suggesting a SessionListener before I answered), but you could create a Servlet which has no other purpose than to perform one-time actions when the server is started:

<servlet>
    <description>Does stuff on container startup</description>
    <display-name>StartupServlet</display-name>
    <servlet-name>StartupServlet</servlet-name>
    <servlet-class>com.foo.bar.servlets.StartupServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

A: You can also use (starting with Servlet 3) an annotated approach (no need to add anything to web.xml):

@WebListener
public class InitializeListener implements ServletContextListener {
    @Override
    public final void contextInitialized(final ServletContextEvent sce) {
    }

    @Override
    public final void contextDestroyed(final ServletContextEvent sce) {
    }
}
Q: Is there a way to make SQL Management Studio never generate USE [database-name] in scripts? Is there a way to turn this 'feature' off? A: Awesome, I just found it: Tools -> Options -> Sql Server Object Explorer -> General Scripting Options Script USE <database> -> False A: Tools -> Options -> Sql Server Object Explorer -> Scripting -> Script USE <database> (under the General scripting options heading). That's in SQL Server 2008 Management Studio, I'm told it's there in 2005, too.
Q: Remote clients can't open XLS file using ASP.NET/ADO

I'm trying to do the following:

* User goes to a web page, uploads an XLS file
* Use ADO.NET to open the XLS file using a JET engine connection to the locally uploaded file on the web server

This all works fine locally (my machine as the client and the web server) - and in fact is working on the customer's web server with remote clients - but is not working when trying to test internally using a remote client. The error I get is:

TIME: [10/1/2008 11:15:28 AM]
SEVERITY: EXCEPTION
PROGRAM: Microsoft JET Database Engine
EXCEPTION: Unspecified error
STACK TRACE:
at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.OleDb.OleDbConnection.Open()

The code generating the error is:

OleDbConnection l_DbConnection;
OleDbDataAdapter l_DbCommand;
DataSet l_dataSet = new DataSet();

l_DbConnection = new OleDbConnection("provider=Microsoft.Jet.OLEDB.4.0; data source=\"" + l_importFileName + "\";Extended Properties=Excel 8.0;");
l_DbCommand = new OleDbDataAdapter("select * from [Sheet1$]", l_DbConnection);

//try using provider to read file
try
{
    l_DbConnection.Open();
}

The call to "Open" is raising the exception above. The site is using impersonation and all calls are made as the user logged in on the client.
What I've done so far to try and get this working:

Followed the steps at http://support.microsoft.com/kb/251254/ and assigned permissions on the TMP/TEMP environment variable directory to the user I am using to test (also assigned permissions to ASPNET and then to "Everyone" as a blanket "is this permissions related?" test).

Ensured that the file is being uploaded and that the XLS file itself has inherited the directory permissions that allow the user full access to the file. I also gave this directory permissions for "Everyone" just in case - that also didn't help.

I haven't had to change any environment variables and have, therefore, not restarted after making these changes - but I shouldn't have to for Windows folder/file permissions to take effect. At this point I'm at a total loss.

A: OK, figured it out - it turns out that even with IIS using impersonation and the TMP/TEMP environment variables being set to C:\WINDOWS\Temp, the ASP.NET process is still running under the ASPNET account, and each individual user needed permissions to the Documents and Settings\ASPNET\Local Settings\Temp folder. The other way around this would probably be to create a new app pool and have that app pool run as a user with permissions to the right folder, rather than ASPNET.

A: Go to the directory \Documents and Settings\"machineName"\ASPNET\Local Settings\Temp and give read and write rights to the user "Everyone". Then it will work fine. Moreover, you have to set "" in the web.config file.
Q: In PowerShell, how can I determine if the current drive is a networked drive or not?

I need to know, from within PowerShell, if the current drive is a mapped drive or not. Unfortunately, Get-PSDrive is not working "as expected":

PS:24 H:\temp >get-psdrive h

Name       Provider      Root             CurrentLocation
----       --------      ----             ---------------
H          FileSystem    H:\              temp

but in MS-DOS, "net use" shows that H: is really a mapped network drive:

New connections will be remembered.

Status       Local     Remote                  Network
-------------------------------------------------------------------------------
OK           H:        \\spma1fp1\JARAVJ$      Microsoft Windows Network

The command completed successfully.

What I want to do is to get the root of the drive and show it in the prompt (see: Customizing PowerShell Prompt - Equivalent to CMD's $M$P$_$+$G?)

A: A slightly more compact variation on the accepted answer:

[System.IO.DriveInfo]("C")

A: Use the .NET framework:

PS H:\> $x = new-object system.io.driveinfo("h:\")
PS H:\> $x.drivetype
Network

A: Try WMI:

Get-WmiObject -Query "Select ProviderName From Win32_LogicalDisk Where DeviceID='H:'"

A: An alternative way to use WMI:

get-wmiobject Win32_LogicalDisk | ? {$_.deviceid -eq "s:"} | % {$_.providername}

Get all network drives with:

get-wmiobject Win32_LogicalDisk | ? {$_.drivetype -eq 4} | % {$_.providername}

A: The most reliable way is to use WMI:

get-wmiobject win32_volume | ? { $_.DriveType -eq 4 } | % { get-psdrive $_.DriveLetter[0] }

The DriveType is an enum with the following values:

0 - Unknown
1 - No Root Directory
2 - Removable Disk
3 - Local Disk
4 - Network Drive
5 - Compact Disk
6 - RAM Disk

Here's a link to a blog post I did on the subject.

A: Take this a step further, as shown below:

([System.IO.DriveInfo]("C")).DriveType

Note this only works for the local system. Use WMI for remote computers.
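The DriveType values listed above are easy to mislabel from memory, so here is the same mapping written out as a lookup - sketched in JavaScript purely to keep the table checkable; in PowerShell you would compare the WMI value directly, as the answers above do:

```javascript
// Win32_LogicalDisk / Win32_Volume DriveType values, per the list above.
const DRIVE_TYPES = {
  0: "Unknown",
  1: "No Root Directory",
  2: "Removable Disk",
  3: "Local Disk",
  4: "Network Drive",
  5: "Compact Disk",
  6: "RAM Disk",
};

// A mapped drive is the one case the question cares about: DriveType 4.
function isNetworkDrive(driveType) {
  return driveType === 4;
}

console.log(DRIVE_TYPES[4], isNetworkDrive(4)); // Network Drive true
```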
Q: Possible to detect the *type of mobile device* via javascript or HTTP Headers? I've got a request from a customer to automatically detect the type of mobile device (not the browser, the type. ex: Moto Q, Blackjack II, etc.) and automatically select the device from a drop down with a list of supported devices. So far I've found that the HTTP Headers (submitted by mobile IE) contain information such as * *Resolution *UA-CPU (I've seen ARM from WM 2003 and x86 from WM5) *User Agent (which basically just says Windows CE) The only thing I can think of right now is possibly using a combination of the resolution/cpu and making a "best guess". Any thoughts? A: You may want to have a look at WURFL, here: http://wurfl.sourceforge.net/. From the site: So... What is WURFL? The WURFL is an XML configuration file which contains information about capabilities and features of many mobile devices. The main scope of the file is to collect as much information as we can about all the existing mobile devices that access WAP pages so that developers will be able to build better applications and better services for the users. A: What exactly does the customer mean by "supported"? Surely it means that the phone in question supports the web application and its inner functionality - wouldn't it be better then to forget device detection and simply focus on detecting those capabilities required for the app to function properly? For example, if my mobile website requires Ajax to work then instead of listing all the devices which are said to "support Ajax" I could do some simple object detection to find out for myself. Device detection, just like browser detection, is unreliable. Yes, it's possible but I wouldn't recommend it... on a project I've done we used the User Agent string to detect various devices. The indexOf JavaScript method came in handy! :) A: Another fast and easy solution is Apache Mobile Filter: http://www.apachemobilefilter.org
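The indexOf approach mentioned in the last answer can be sketched roughly as follows. This is an illustration, not the project's actual code: the device strings and matching order here are invented assumptions.

```javascript
// Hedged sketch of user-agent substring matching via indexOf.
// The device names below are illustrative, not an authoritative list.
function detectDevice(userAgent) {
  var knownDevices = ["Moto Q", "BlackJack", "Windows CE"];
  for (var i = 0; i < knownDevices.length; i++) {
    if (userAgent.indexOf(knownDevices[i]) !== -1) {
      return knownDevices[i];
    }
  }
  return null; // fall back to a "best guess" or manual selection
}
```

As the answer warns, this kind of detection is brittle; detecting the capabilities the app actually needs is usually more robust.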
Q: Error 1053: the service did not respond to the start or control request in a timely fashion I have recently inherited a couple of applications that run as windows services, and I am having problems providing a gui (accessible from a context menu in system tray) with both of them. The reason why we need a gui for a windows service is in order to be able to re-configure the behaviour of the windows service(s) without resorting to stopping/re-starting. My code works fine in debug mode, and I get the context menu come up, and everything behaves correctly etc. When I install the service via "installutil" using a named account (i.e., not Local System Account), the service runs fine, but doesn't display the icon in the system tray (I know this is normal behavior because I don't have the "interact with desktop" option). Here is the problem though - when I choose the "LocalSystemAccount" option, and check the "interact with desktop" option, the service takes AGES to start up for no obvious reason, and I just keep getting Could not start the ... service on Local Computer. Error 1053: the service did not respond to the start or control request in a timely fashion. Incidentally, I increased the windows service timeout from the default 30 seconds to 2 minutes via a registry hack (see http://support.microsoft.com/kb/824344, search for TimeoutPeriod in section 3), however the service start up still times out. My first question is - why might the "Local System Account" login take SOOOOO MUCH LONGER than when the service logs in with the non-LocalSystemAccount, causing the windows service time-out? What could the difference be between these two to cause such different behavior at start up?
Secondly - taking a step back, all I'm trying to achieve is simply a windows service that provides a gui for configuration - I'd be quite happy to run using the non-Local System Account (with named user/pwd), if I could get the service to interact with the desktop (that is, have a context menu available from the system tray). Is this possible, and if so how? Any pointers to the above questions would be appreciated! A: After fighting this message for days, a friend told me that you MUST use the Release build. When I InstallUtil the Debug build, it gives this message. The Release build starts fine. A: Don't do heavy work inside your service class's OnStart method; the OS expects a service to start within a short amount of time. Run your method on a separate thread instead: protected override void OnStart(string[] args) { Thread t = new Thread(new ThreadStart(MethodName)); // e.g. t.Start(); } A: I'm shooting blind here, but I've very often found that long delays in service startups are directly or indirectly caused by network function timeouts, often when attempting to contact a domain controller when looking up account SIDs - which happens very often indirectly via GetMachineAccountSid() whether you realize it or not, since that function is called by the RPC subsystem. For an example on how to debug in such situations, see The Case of the Process Startup Delays on Mark Russinovich's blog. A: If you are using Debug code as below in your service the problem may arise. #if(!DEBUG) ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[] { new EmailService() }; ServiceBase.Run(ServicesToRun); #else //directly call the function you need to run #endif To fix this, remove the #if condition when you build your windows service, because it doesn't work as-is. Use a command-line argument for debug mode instead, as below. if (args != null && args.Length > 0) { _isDebug = args[0].ToLower().Contains("debug"); } A: In my case the problem was a missing version of the .NET Framework.
My service used <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> </startup> But the .NET Framework version on the server was 4.0, so changing 4.5 to 4.0 fixed the problem: <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" /> </startup> A: If you continue down the road of trying to make your service interact with the user's desktop directly, you'll lose: even under the best of circumstances (i.e. "before Vista"), this is extremely tricky. Windows internally manages several window stations, each with their own desktop. The window station assigned to services running under a given account is completely different from the window station of the logged-on interactive user. Cross-window station access has always been frowned upon, as it's a security risk, but whereas previous Windows versions allowed some exceptions, these have been mostly eliminated in Vista and later operating systems. The most likely reason your service is hanging on startup is that it's trying to interact with a nonexistent desktop (or assumes Explorer is running inside the system user session, which also isn't the case), or waiting for input from an invisible desktop. The only reliable fix for these issues is to eliminate all UI code from your service, and move it to a separate executable that runs inside the interactive user session (the executable can be started using the global Startup group, for example). Communication between your UI code and your service can be implemented using any RPC mechanism: Named Pipes work particularly well for this purpose. If your communications needs are minimal, using application-defined Service Control Manager commands might also do the trick. It will take some effort to achieve this separation between UI and service code: however, it's the only way to make things work reliably, and will serve you well in the future.
ADDENDUM, April 2010: Since this question remains pretty popular, here's a way to fix another common scenario that causes "service did not respond..." errors, involving .NET services that don't attempt any funny stuff like interacting with the desktop, but do use Authenticode signed assemblies: disable the verification of the Authenticode signature at load time in order to create Publisher evidence, by adding the following elements to your .exe.config file: <configuration> <runtime> <generatePublisherEvidence enabled="false"/> </runtime> </configuration> Publisher evidence is a little-used Code Access Security (CAS) feature: only in the unlikely event that your service actually relies on the PublisherMembershipCondition will disabling it cause issues. In all other cases, it will make the permanent or intermittent startup failures go away, by no longer requiring the runtime to do expensive certificate checks (including revocation list lookups). A: Copy the release DLL, or get the dll from Release mode rather than Debug mode, and paste it into the installation folder; it should work. A: I was running into a similar problem with a Service I was writing. It worked fine, then one day I started getting the timeout on Start errors. It happened in one &/or both Release and Debug depending on what was going on. I had instantiated an EventLogger from System.Diagnostics, but whatever error I was seeing must have been happening before the Logger was able to write... If you are not aware of where to look up the EventLogs, in VS you can go to your machine under the Server Explorer. I started poking around in some of the other EventLogs besides those for my Service. Under Application - .NETRuntime I found the Error logs pertinent to the error on startup. Basically, there were some exceptions in my service's constructor (one turned out to be an exception in the EventLog instance setup - which explained why I could not see any logs in my Service EventLog).
On a previous build apparently there had been other errors (which had caused me to make the changes leading to the error in the EventLog set up). Long story short - the reason for the timeout may be due to various exceptions/errors, but using the Runtime EventLogs may just help you figure out what is going on (especially in the instances where one build works but another doesn't). Hope this helps! A: Try running your exe file directly once. I had the same problem, but when I ran it directly by double-clicking the exe file, I got a message about the .NET Framework version, because I had released the service project against a framework version that wasn't installed on the target machine. A: I faced this problem because of a missing framework on the box running my service. The box had .NET 4.0 and the service was written on top of .NET 4.5. I installed the following download on the box, restarted, and the service started up fine: http://www.microsoft.com/en-us/download/details.aspx?id=30653 A: Install the debug build of the service and attach the debugger to the service to see what's happening. A: I want to echo mdb's comments here. Don't go down this path. Your service is not supposed to have a UI... "No user interaction" is like the defining feature of a service. If you need to configure your service, write another application that edits the same configuration that the service reads on startup. But make it a distinct tool -- when you want to start the service, you start the service. When you want to configure it, you run the configuration tool. Now, if you need realtime monitoring of the service, then that's a little trickier (and certainly something I've wished for with services). Now you're talking about having to use interprocess communications and other headaches. Worst of all, if you need user interaction, then you have a real disconnect here, because services don't interact with the user. In your shoes I would step back and ask why does this need to be a service?
And why does it need user interaction? These two requirements are pretty incompatible, and that should raise alarms. A: I had this problem and it drove me nuts for two days… If your problem is similar to mine: I have "User settings" in my windows service, so the service can do self-maintenance, without stopping and starting the service. Well, the problem is with the "user settings", where the config file for these settings is saved in a folder under the user-profile of the user who is running the windows service, under the service-exe file version. This folder for some reason was corrupted. I deleted the folder and the service started working happily again as usual… A: I had this problem; it took about a day to fix. For me the problem was that my code skipped the "main content" and effectively ran a couple of lines then finished. And this caused the error for me. It is a C# console application which installs a Windows Service; as soon as it tried to run it with the ServiceController (sc.Run()) it would give this error for me. After I fixed the code to go to the main content, it would run the intended code: ServiceBase.Run(new ServiceHost()); Then it stopped showing up. As lots of people have already said, the error could be anything, and the solutions people provide may or may not solve it. If they don't solve it (like the Release instead of Debug, adding generatePublisherEvidence=false into your config, etc), then chances are that the problem is with your own code. Try and get your code to run without using sc.Run() (i.e. make the code run that sc.Run() would have executed).
It will throw the exception of a missing assembly or reference. Hopefully this will resolve this error. A: Took me hours - I should have checked the event viewer: get_AppSettings() was failing. A change in the app config caused the problem. A: To debug the startup of your service, add the following to the top of the OnStart() method of your service: while(!System.Diagnostics.Debugger.IsAttached) Thread.Sleep(100); This will stall the service until you manually attach the Visual Studio Debugger using Debug -> Attach to Process... Note: In general, if you need a user to interact with your service, it is better to split the GUI components into a separate Windows application that runs when the user logs in. You then use something like named pipes or some other form of IPC to establish communication between the GUI app and your service. This is in fact the only way that this is possible in Windows Vista. A: Adding 127.0.0.1 crl.microsoft.com to the "Hosts" file solved our issue. A: My issue was that the target framework mentioned in the windows service config was <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6"/> </startup> and the server on which I tried to install the windows service did not support this .NET version. Changing that resolved the issue. A: I had a similar issue; steps I followed: * *Put a Debugger.Launch() in the windows service constructor *Followed step by step to see where it got stuck My issue wasn't due to any error. I had a BlockingCollection.GetConsumingEnumerable() in the way that caused the windows service to wait. A: I had this problem too. I made it work by changing the Log On account to Local System Account. In my project I had it set up to run as the Local Service account. So when I installed it, by default it was using Local Service. I'm using .net 2.0 and VS 2005. So installing .net 1.1 SP1 wouldn't have helped.
A: Both Local System Account and Local Service would not work for me; I then set it to Network Service and this worked fine. A: In my case, I had this trouble due to a genuine error. Before the service constructor is called, a static constructor initializing a member variable was failing: private static OracleCommand cmd; static SchedTasks() { try { cmd = new OracleCommand("select * from change_notification"); } catch (Exception e) { Log(e.Message); // "The provider is not compatible with the version of Oracle client" } } By adding a try-catch block I found the exception was occurring because of a wrong Oracle version. Installing the correct database solved the problem. A: I also faced a similar problem and found that there was an issue loading an assembly. I was receiving this error immediately when trying to start the service. To quickly debug the issue, try running the service executable from a command prompt using ProcDump http://technet.microsoft.com/en-us/sysinternals/dd996900. It should provide a sufficient hint about the exact error. http://bytes.com/topic/net/answers/637227-1053-error-trying-start-my-net-windows-service helped me quite a bit. A: This worked for me. Basically make sure the Log On user is set to the right one. However it depends how the account infrastructure is set up. In my example it's using AD account user credentials. In the Start menu search box, search for 'Services' - in Services find the required service - right-click it and select the Log On tab - select 'This account' and enter the required content/credentials - OK it and start the service as usual A: In case you have a windows form used for testing, ensure that the startup object is still the service and not the windows form A: We have Log4Net configured to log to a database table. The table had grown so large that the service was timing out trying to log messages. A: Open the services window as administrator, then try to start the service. That worked for me. A: * *Build project in Release Mode.
*Copy all Release folder files to the source path. *Execute the Windows service using a command prompt window with administrative access. *Never delete files from the source path. At least this works for me. A: The Release build did not work for me; however, I looked through my event viewer and Application log and saw that the Windows Service was throwing a security exception when it was trying to create an event log. I fixed this by adding the event source manually with administrative access. I followed this guide from Microsoft: * *Open the registry editor, run --> regedit *Locate the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Application *Right-click the Application subkey, point to New, and then click Key. *Type the event source name used in your windows service as the key name. *Close Registry Editor. A: In my case it was a permission for a user account in AD. After setting it correctly, it worked perfectly. A: I faced the same issue and tried all the above ways, but the issue remained the same. Then I tried the solution below and it worked for me. If you have tried all the other ways, you can also try this one; maybe it will resolve your issue: * *Go to the folder: ~\bin\Debug, *Take a backup of all the files and after that remove them all, *Re-build the application and then deploy the fresh files on the server, *Now try to run the service. A: In our case we checked the event viewer and got an error stating that machine.config was not well-formed XML. We investigated and found someone had put a double angle bracket (<<) in one of the elements; removing one resolved our problem. A: My issue was that the service program could not find third-party dlls. Adding the path to the dlls to the system PATH solved this.
Q: How do I build a WPF application where I can drag and drop a user control between windows? I'm building a simple Todo List application where I want to be able to have multiple lists floating around my desktop that I can label and manage tasks in. The relevant UIElements in my app are: Window1 (Window) TodoList (User Control) TodoStackCard (User Control) Window1 looks like this: <Window x:Class="TaskHole.App.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:t="clr-namespace:TaskHole.App.Controls" xmlns:tcc="clr-namespace:TaskHole.CustomControls" Title="Window1" Width="500" Height="500" Background="Transparent" WindowStyle="None" AllowsTransparency="True" > <Canvas Name="maincanvas" Width="500" Height="500" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"> <ResizeGrip SizeChanged="ResizeGrip_SizeChanged" /> <t:TodoList Canvas.Top="0" Canvas.Left="0" MinWidth="30" Width="50" Height="500" x:Name="todoList" TaskHover="todoList_TaskHover" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"/> </Canvas> </Window> TodoList looks like this: <UserControl x:Class="TaskHole.App.Controls.TodoList" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:t="clr-namespace:TaskHole.App.Controls" xmlns:tcc="clr-namespace:TaskHole.CustomControls" Background="Transparent"> <StackPanel VerticalAlignment="Bottom" HorizontalAlignment="Stretch" MinWidth="1" Grid.Row="2" Height="Auto" AllowDrop="True"> <ItemsControl Name="todolist" ItemsSource="{Binding}"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <VirtualizingStackPanel Name="stackPanel" VerticalAlignment="Bottom"> </VirtualizingStackPanel> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ItemsControl.ItemTemplate> <DataTemplate> <t:TodoStackCard x:Name="card" TaskHover="card_TaskHover" Orientation="Vertical" VerticalContentAlignment="Top" /> </DataTemplate> 
</ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> </UserControl> I have multiple instances of these windows, and I want to be able to drag any of the controls between the windows. I have tried using a Thumb control and, while this works, it only allows me to drag a control around the containing canvas. How do I mimic the behaviour of, say, Windows Explorer, where I can drag a file outside of the application and onto another application, all the while seeing a ghosted representation of the file under the cursor? Can I accomplish this purely in C# and WPF? If so/if not, how? A: You have to call DoDragDrop to initialize the Drag And Drop framework. Jaime Rodriguez provides a guide to Drag and Drop here A: Just as an FYI, there's a big difference between "dragging controls" around and doing what Explorer does, which is Drag and Drop, specifically with files. That's what you'll want to look up: how to do drag and drop from a WPF app to something else. You'll need something that creates a Data Object (IDataObject) or whatever they call that in WPF world, and then you need to call DoDragDrop (again, or whatever is analogous to this in WPF) to start the dragging. Doing what Explorer does is also possible, but I suspect you need to make some lower-level calls to accomplish this. Take a look at http://www.codeproject.com/KB/wtl/wtl4mfc10.aspx to see the stuff you need to look for. WPF may in fact wrap all this up, but if it doesn't, these are some of the things you need to look into, especially IDragSourceHelper.
Q: Clear the JavaScript sent to Firebug console I want to clear the Firebug console of the JavaScript already sent. Does something like console.clear() exist and work? A: If you want to see all the available methods under console: for(var i in console) { console.log(i); } A: Just a call to the clear function from the console works. However, I can't seem to clear the console from javascript code, but that seems to make sense (you will lose information). So in Chrome's console, just type: clear(); A: console.clear(); works for me A: You can type clear(); in the Firebug command line. I don't think there's a way to do it from a web page though. A: _FirebugCommandLine.clear(); will clear the console. A: console.clear() is part of the Firebug API. I fixed the documentation page: http://getfirebug.com/wiki/index.php/Console_API#console.clear.28.29 jjb
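Since the answers disagree on where console.clear() is available, one defensive option is to guard the call. This is a minimal sketch, assuming nothing about which console implementation (Firebug, Chrome, or none) is present:

```javascript
// Guarded call: only invoke console.clear() if it actually exists.
function safeClear() {
  if (typeof console !== "undefined" && typeof console.clear === "function") {
    console.clear();
    return true;
  }
  return false; // no console, or no clear() method available
}
```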
Q: Performance Testing MSMQ Server Has anyone done any sort of performance tests against MSMQ? We have a solution in a prod environment where errors are added to an MSMQ queue for distribution to databases or event monitors. We need to test the capacity of this system but are not sure how to start. Anyone know any tools or have any tips? A: Try overloading it with a test program and see where it balks/fails [analogous to "destructive testing" in materials engineering]. A: QueueExplorer has a "Mass send" option which can be used to send a bunch of messages to a queue, with or without a delay between them. I know it's not a fully automated stress test, but running it from a few instances or even a few machines could generate significant stress load. Disclaimer: I'm the author of QueueExplorer. A: Yeah, that's what I was thinking; I was hoping for a more public tool.
Q: Open an ANSI file and Save as a Unicode file using Delphi For some reason, lately the *.UDL files on many of my client systems are no longer compatible as they were once saved as ANSI files, which is no longer compatible with the expected UNICODE file format. The end result is an error dialog which states "the file is not a valid compound file". What is the easiest way to programmatically open these files and save as a unicode file? I know I can do this by opening each one in notepad and then saving as the same file but with the "unicode" selected in the encoding section of the save as dialog, but I need to do this in the program to cut down on support calls. This problem is very easy to duplicate, just create a *.txt file in a directory, rename it to *.UDL, then edit it using the microsoft editor. Then open it in notepad and save the file as an ANSI encoded file. Try to open the udl from the udl editor and it will tell you it's corrupt. Then save it (using notepad) as a Unicode encoded file and it will open again properly. A: Ok, using delphi 2009, I was able to come up with the following code which appears to work, but is it the proper way of doing this conversion? var sl : TStrings; FileName : string; begin FileName := fServerDir+'configuration\hdconfig4.udl'; sl := TStringList.Create; try sl.LoadFromFile(FileName, TEncoding.Default); sl.SaveToFile(FileName, TEncoding.Unicode); finally sl.Free; end; end; A: This is very simple to do with my TGpTextFile unit. I'll put together a short sample and post it here. It should also be very simple with the new Delphi 2009 - are you maybe using it? EDIT: This is how you can do it using my stuff in pre-2009 Delphis.
var strAnsi : TGpTextFile; strUnicode: TGpTextFile; begin strAnsi := TGpTextFile.Create('c:\0\test.udl'); try strAnsi.Reset; // you can also specify non-default 8-bit codepage here strUnicode := TGpTextFile.Create('c:\0\test-out.udl'); try strUnicode.Rewrite([cfUnicode]); while not strAnsi.Eof do strUnicode.Writeln(strAnsi.Readln); finally FreeAndNil(strUnicode); end; finally FreeAndNil(strAnsi); end; end; License: The code fragment above belongs to public domain. Use it anyway you like.
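For comparison, the same conversion is easy to express outside Delphi. Here is a rough Python sketch (not part of the original answers): "cp1252" stands in for the system ANSI codepage, and Python's "utf-16" codec writes the BOM that the UDL editor expects.

```python
# Read a file using an ANSI codepage and rewrite it as UTF-16 with a BOM.
from pathlib import Path

def ansi_to_unicode(path, ansi_codepage="cp1252"):
    p = Path(path)
    text = p.read_text(encoding=ansi_codepage)
    p.write_text(text, encoding="utf-16")  # "utf-16" prepends a BOM
    return text
```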
Q: What happens first? .htaccess or php code? If I use mod_rewrite to control all my 301 redirects, does this happen before my page is served? So if I also have a bunch of redirect rules in a php script that runs on my page, will the .htaccess kick in first? A: Yes, the .htaccess file is parsed before your script is served. A: The .htaccess will kick in first. If you look at the Apache request cycle: PHP is a response handler. mod_rewrite runs at URI translation, except for rewrite rules in .htaccess and <Directory> or <Location> blocks, which run in the fixup phase. This is because Apache doesn't know which directory it's in (and thus which <Directory> or .htaccess to read) until after URI translation. In response to gabriel1836's question about the image, I grabbed it from the second slide of this presentation but it's originally from the book Writing Apache Modules in Perl and C, which I highly recommend. A: .htaccess happens first. A: htaccess is controlled by the webserver. This file will be taken into account before your PHP file. For example, you could restrict access to a particular folder with your htaccess file. So it has to be handled before your PHP. Hope this helps. A: The .htaccess is processed by Apache before the php script execution. (Imagine if the php script were executed and then the .htaccess made a redirection to another page...) A: When a request is made to the URI affected by the .htaccess file, then Apache will handle any rewrite rules before any of your PHP code executes. A: You can always test this with the following command: wget -S --spider http://yourdomain.com With this command you see who is responding to your request. As all the others mentioned, .htaccess is first. A: So basically, the .htaccess loads the relevant PHP code or files according to the rules specified in the .htaccess, meaning the .htaccess is run first.
Q: Citrix - how to keep smartclient apps from re-downloading every time they are launched Our company uses Citrix to remote into a terminal server for remote users to launch smart client apps within a virtual window on their machine. The problem is that smartclient apps are being downloaded each time the user launches them even though the version on the remote Citrix server has not changed. This is due to the user's profile being purged each time they close their Citrix session. Is there any way to avoid this and still continue to purge the user's profile? Not purging the profile leads to wasted space on the Citrix servers and corrupt profile issues. A: I can't speak to details on Citrix servers. However, with ClickOnce you have no say over where an application is installed. It's installed under the user profile, no ifs, ands, or buts. One of the major goals with ClickOnce was improved security, and installing apps to the profile makes that easier. So, if you're clearing the profile, you're stuck. However, couldn't you just deploy the app to the Citrix server without ClickOnce? Most .Net apps can just be xcopy deployed, so it seems it would be pretty easy to write a few batch files to copy the latest deployment to your Citrix server and skip ClickOnce altogether. A: The way to do this in the Citrix environment is to use the Citrix URL Content redirection feature (in Feature Release 2) to redirect the ClickOnce URL to the local machine (http://xxx.xxx/myapplication.application). This will cause the browser window to open on the local machine and not on the Citrix machine. Once this happens, ClickOnce takes over and installs on the local user's machine, instead of inside Citrix. Executing locally will still give you all the normal ClickOnce benefits. You don't want to install inside Citrix due to the problems in codeConcussion's answer. Plus, ClickOnce doesn't support mandatory or temporary profiles, which is probably what the user has inside Citrix.
Q: How can I make a program start up automatically in OSX? I have a little program that I want to make open automatically when my mac is started up. Because this program accepts command line arguments, it's not as simple as just going to System Prefs/Accounts/Login items and adding it there... From google, I read that I can create a .profile file in my user's home folder, and that will execute whatever I put in it... So I have a .profile file in ~ like this: -rw-r--r--@ 1 matt staff 27 27 Sep 13:36 .profile That contains this... /Applications/mousefix 3.5 But it doesn't execute on startup! If I enter "/Applications/mousefix 3.5" manually into the terminal, it does work. Any ideas? A: You can use Lingon to help construct a plist file for launchd. A: The most general way of launching things on startup on MacOS is using launchd. You can create a plist file to tell it to launch your program on startup, which can include arguments. A: From here and into the future, look into launchd for what you want to do. All other methods have been deprecated or are now unsupported. This is probably a bit more heavy-weight than what you want, though. It could also be a problem with your version of the bash shell not correctly executing your .profile. Try putting the command into .bashrc in your home directory, and see if that helps. A: You can use AppleScript, which can run terminal commands, then have that AppleScript launched at startup. A: The .profile and .bash_profile only come into play when you open a new shell (i.e. opening Terminal or entering through SSH). Also, I believe if bash detects .bash_profile it won't look for .profile. If you want it to start upon login, I would look at the other suggestions about launchd. A: You could always write a wrapper script that runs it with the arguments you want. A: Thanks all. The launchd solution is pretty cool; yes, it's heavyweight for such a simple thing, but it's good to know, and as a developer I'm happy to tinker about :)
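For the specific example in the question, a launchd job might look something like the sketch below. The label is a made-up placeholder; the ProgramArguments are the command and argument from the question. It would typically be saved as a .plist file in ~/Library/LaunchAgents:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label; use your own reverse-DNS identifier -->
    <key>Label</key>
    <string>com.example.mousefix</string>
    <!-- The program and its command-line argument, one string each -->
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/mousefix</string>
        <string>3.5</string>
    </array>
    <!-- Run the job once when the agent is loaded (i.e. at login) -->
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```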
{ "language": "en", "url": "https://stackoverflow.com/questions/158388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Primary Key versus Unique Constraint? I'm currently designing a brand new database. In school, we always learned to put a primary key in each table. I read a lot of articles/discussions/newsgroup posts saying that it's better to use a unique constraint (aka unique index for some dbs) instead of a PK. What's your point of view? A: It would be a very rare denormalization that would make you want to have a table without a primary key. Primary keys have unique constraints automatically just by their nature as the PK. A unique constraint would be used when you want to guarantee uniqueness in a column in ADDITION to the primary key. The rule of always having a PK is a good one. http://msdn.microsoft.com/en-us/library/ms191166.aspx A: You should always have a primary key. However I suspect your question is just worded a bit misleadingly, and you actually mean to ask if the primary key should always be an automatically generated number (also known as a surrogate key), or some unique field which is actually meaningful data (also known as a natural key), like SSN for people, ISBN for books and so on. This question is an age-old religious war in the DB field. My take is that natural keys are preferable if they indeed are unique and never change. However, you should be careful; even something seemingly stable like a person's SSN may change under certain circumstances. A: A Primary Key is really just a candidate key that does not allow for NULL. As such, in SQL terms - it's no different than any other unique key. However, for our non-theoretical RDBMS's, you should have a Primary Key - I've never heard it argued otherwise. If that Primary Key is a surrogate key, then you should also have unique constraints on the natural key(s). The important bit to walk away with is that you should have unique constraints on all the candidate (whether natural or surrogate) keys. You should then pick the one that is easiest to reference in a Foreign Key to be your Primary Key*. 
You should also have a clustered index*. This could be your Primary Key, or a natural key - but it's not required to be either. You should pick your clustered index based on query usage of the table. When in doubt, the Primary Key is not a bad first choice. *Though it's technically only required to refer to a unique key in a foreign key relationship, it's accepted standard practice to greatly favor the primary key. In fact, I wouldn't be surprised if some RDBMS only allow primary key references. *Edit: It's been pointed out that Oracle's terms "clustered table" and "clustered index" are different from SQL Server's. The equivalent of what I'm speaking of in Oracle-ese is an index-organized table, and it is recommended for OLTP tables - which, I think, would be the main focus of SO questions. I assume if you're responsible for a large OLAP data warehouse, you should already have your own opinions on database design and optimization. A: Unless the table is a temporary table to stage the data while you work on it, you always want to put a primary key on the table, and here's why: 1 - a unique constraint can allow nulls but a primary key never allows nulls. If you run a query with a join on columns with null values, you eliminate those rows from the resulting data set because null is not equal to null. This is how even big companies can make accounting errors and have to restate their profits. Their queries didn't show certain rows that should have been included in the total because there were null values in some of the columns of their unique index. Shoulda used a primary key. 2 - a unique index will automatically be placed on the primary key, so you don't have to create one. 3 - most database engines will automatically put a clustered index on the primary key, making queries faster because the rows are stored contiguously in the data blocks. (This can be altered to place the clustered index on a different index if that would speed up the queries.) 
If a table doesn't have a clustered index, the rows won't be stored contiguously in the data blocks, making the queries slower because the read/write head has to travel all over the disk to pick up the data. 4 - many front end development environments require a primary key in order to update the table or make deletions. A: Can you provide references to these articles? I see no reason to change the tried and true methods. After all, Primary Keys are a fundamental design feature of relational databases. Using UNIQUE to serve the same purpose sounds really hackish to me. What is their rationale? Edit: My attention just got drawn back to this old answer. Perhaps the discussion that you read regarding PK vs. UNIQUE dealt with people making something a PK for the sole purpose of enforcing uniqueness on it. The answer to this is, If it IS a key, then make it key, otherwise make it UNIQUE. A: Primary keys should be used in situations where you will be establishing relationships from this table to other tables that will reference this value. However, depending on the nature of the table and the data that you're thinking of applying the unique constraint to, you may be able to use that particular field as a natural primary key rather than having to establish a surrogate key. Of course, surrogate vs natural keys are a whole other discussion. :) Unique keys can be used if there will be no relationship established between this table and other tables. For example, a table that contains a list of valid email addresses that will be compared against before inserting a new user record or some such. Or unique keys can be used when you have values in a table that has a primary key but must also be absolutely unique. For example, if you have a users table that has a user name. You wouldn't want to use the user name as the primary key, but it must also be unique in order for it to be used for log in purposes. 
A: We need to make a distinction here between logical constructs and physical constructs, and similarly between theory and practice. To begin with: from a theoretical perspective, if you don't have a primary key, you don't have a table. It's just that simple. So, your question isn't whether your table should have a primary key (of course it should) but how you label it within your RDBMS. At the physical level, most RDBMSs implement the Primary Key constraint as a Unique Index. If your chosen RDBMS is one of these, there's probably not much practical difference, between designating a column as a Primary Key and simply putting a unique constraint on the column. However: one of these options captures your intent, and the other doesn't. So, the decision is a no-brainer. Furthermore, some RDBMSs make additional features available if Primary Keys are properly labelled, such as diagramming, and semi-automated foreign-key-constraint support. Anyone who tells you to use Unique Constraints instead of Primary Keys as a general rule should provide a pretty damned good reason. A: A primary key is just a candidate key (unique constraint) singled out for special treatment (automatic creation of indexes, etc). I expect that the folks who argue against them see no reason to treat one key differently than another. That's where I stand. [Edit] Apparently I can't comment even on my own answer without 50 points. @chris: I don't think there's any harm. "Primary Key" is really just syntactic sugar. I use them all the time, but I certainly don't think they're required. A unique key is required, yes, but not necessarily a Primary Key. A: the thing is that a primary key can be one or more columns which uniquely identify a single record of a table, where a Unique Constraint is just a constraint on a field which allows only a single instance of any given data element in a table. 
PERSONALLY, I use either GUIDs or auto-incrementing BIGINTs (Identity Insert for SQL SERVER) for unique keys utilized for cross-referencing amongst my tables. Then I'll use other data to allow the user to select specific records. For example, I'll have a list of employees, and have a GUID attached to every record that I use behind the scenes, but when the user selects an employee, they're selecting them based off of the following fields: LastName + FirstName + EmployeeNumber. My primary key in this scenario is LastName + FirstName + EmployeeNumber, while the unique key is the associated GUID. A: posts saying that it's better to use unique constraint (aka unique index for some db) instead of PK I guess that the only point here is the same old discussion "natural vs surrogate keys", because unique indexes and PKs are the same thing. Translating: posts saying that it's better to use natural key instead of surrogate key A: I usually use both a PK and a UNIQUE KEY. Because even if you don't denote a PK in your schema, one is always generated for you internally. It's true both for SQL Server 2005 and MySQL 5. But I don't use the PK column in my SQL. It is for management purposes, like DELETEing some erroneous rows or finding out gaps between PK values if it's set to AUTO INCREMENT. And, it makes sense to have a PK as numbers, not a set of columns or char arrays. A: I've written a lot on this subject: if you read anything of mine, be clear that I was probably referring specifically to Jet a.k.a. MS Access. In Jet, the tables are physically ordered on the PRIMARY KEY using a non-maintained clustered index (is clustered on compact). If the table has no PK but does have candidate keys defined using UNIQUE constraints on NOT NULL columns, then the engine will pick one for the clustered index (if your table has no clustered index then it is called a heap, arguably not a table at all!). How does the engine pick a candidate key? Can it pick one which includes nullable columns? 
I really don't know. The point is that in Jet the only explicit way of specifying the clustered index to the engine is to use PRIMARY KEY. There are of course other uses for the PK in Jet e.g. it will be used as the key if one is omitted from a FOREIGN KEY declaration in SQL DDL but again why not be explicit. The trouble with Jet is that most people who create tables are unaware of or unconcerned about clustered indexes. In fact, most users (I wager) put an autoincrement Autonumber column on every table and define the PRIMARY KEY solely on this column while failing to put any unique constraints on the natural key and candidate keys (whether an autoincrement column can actually be regarded as a key without exposing it to end users is another discussion in itself). I won't go into detail about clustered indexes here but suffice to say that IMO a sole autoincrement column is rarely the ideal choice. Whatever your SQL engine, the choice of PRIMARY KEY is arbitrary and engine specific. Usually the engine will apply special meaning to the PK, therefore you should find out what it is and use it to your advantage. I encourage people to use NOT NULL UNIQUE constraints in the hope they will give greater consideration to all candidate keys, especially when they have chosen to use 'autonumber' columns which (should) have no meaning in the data model. But I'd rather folk choose one well-considered key and use PRIMARY KEY rather than putting it on the autoincrement column out of habit. Should all tables have a PK? I say yes because doing otherwise means at the very least you are missing out on a slight advantage the engine affords the PK and at worst you have no data integrity. BTW Chris OC makes a good point here about temporal tables, which require sequenced primary keys (lowercase) which cannot be implemented via simple PRIMARY KEY constraints (SQL key words in uppercase). A: PRIMARY KEY 1. Null: It doesn't allow Null values. 
Because of this we refer to PRIMARY KEY = UNIQUE KEY + Not Null CONSTRAINT. 2. INDEX: By default it adds a clustered index. 3. LIMIT: A table can have only one PRIMARY KEY (on one or more columns). UNIQUE KEY 1. Null: Allows Null values, but only one Null value. 2. INDEX: By default it adds a UNIQUE non-clustered index. 3. LIMIT: A table can have more than one UNIQUE KEY (on one or more columns). A: If you plan on using LINQ-to-SQL, your tables will require Primary Keys if you plan on performing updates, and they will require a timestamp column if you plan on working in a disconnected environment (such as passing an object through a WCF service application). If you like .NET, PKs and FKs are your friends. A: I submit that you may need both. Primary keys by nature need to be unique and not nullable. They are often surrogate keys, as integers create faster joins than character fields and especially than multiple-field character joins. However, as these are often autogenerated, they do not guarantee uniqueness of the data record excluding the id itself. If your table has a natural key that should be unique, you should have a unique index on it to prevent data entry of duplicates. This is a basic data integrity requirement. Edited to add: It is also a real problem that real-world data often does not have a natural key that truly guarantees uniqueness in a normalized table structure, especially if the database is people-centered. Names, even name, address and phone number combined (think father and son in the same medical practice), are not necessarily unique. A: I was thinking of this problem myself. If you are using unique, you will hurt 2NF (second normal form). According to this, every non-PK attribute has to depend on the PK. The pair of attributes in this unique constraint are to be considered as part of the PK. Sorry for replying to this 7 years later but I didn't want to start a new discussion.
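The interplay between a surrogate PRIMARY KEY and a UNIQUE constraint on the natural key - the setup several answers above recommend - can be sketched quickly (SQLite is used here purely for illustration; the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE employees (
        employee_id INTEGER PRIMARY KEY,   -- surrogate key: unique, not null
        ssn         TEXT NOT NULL UNIQUE,  -- natural candidate key
        name        TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO employees (ssn, name) VALUES ('123-45-6789', 'Alice')")

# Without the UNIQUE constraint this row would be accepted, since its
# surrogate primary key differs; the natural-key constraint rejects it.
try:
    conn.execute("INSERT INTO employees (ssn, name) VALUES ('123-45-6789', 'Bob')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # rejected: UNIQUE constraint failed: employees.ssn
```

Which column to cite in foreign keys is then a separate choice; the point is that both constraints are declared, as the answers argue.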
{ "language": "en", "url": "https://stackoverflow.com/questions/158392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: What is the best way to separate UI (designer/editor) logic from the Package framework (like Visual Studio Package) I want to separate concerns here. Create and embed all the UI logic for the Custom XML designer, object model, validations, etc. into a separate assembly. Then the Package framework should only register the designer information and ask for a UI Service and everything works magically. This way I don't need to play with the Package framework (Visual Studio Package) assembly when I need to modify the UI designer. This question also applies to anything where you have to separate the UI logic from the Skeleton framework that loads it up, like a plugin. I have several choices: a ServiceProvider model, a plugin model, or maybe others. Any samples, suggestions for patterns, links are welcome. Update 1: What I am looking for is a thought such as - "Does Prism (Composite WPF) fit the bill? Has anyone worked on a project/application which does the separation of concerns just like I mentioned above? etc" (I am still looking out for answers) A: I've created a VSPackage that loads an editor. The Editor sits in a separate assembly and implements an interface that I defined. The VSPackage works with the interface, so any changes I make to the editor (and its assembly) do not affect the VSPackage as long as I don't change the interface. A: What you're asking about seems very much like the separation of concerns that the MVC pattern tries to enforce. ASP.NET MVC is already out there with a preview 5. It's mainly for web but I think they are planning on using it also for WinForms, but I'm not sure. A: I prefer the Model View Presenter pattern
{ "language": "en", "url": "https://stackoverflow.com/questions/158420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Selecting Office 2003/2007 COM Object (Correct One) in Winforms Application We are creating a Windows Form application (C# or VB.NET) that needs to reference an Office 2003 or Office 2007 COM object, depending on the version of office installed. What is the best way to handle this scenario and reference the correct COM object at runtime? A: Unless you want to use any of the newly added objects and methods of the Office 2007 object model, it is fine to build referencing the Office 2003 PIAs, just make sure the correct version of the PIAs is deployed on the target system: Another way around this problem is to remove the dependency on the later PIAs. Because of the high degree of backwards compatibility in Office, you can safely assume that if your add-in works on Office 2003 (with the Office 2003 PIAs), then it should also work on Office 2007 (with the Office 2007 PIAs). (from Add-ins for Multiple Office Versions without PIAs by Andrew Whitechapel) Otherwise I would recommend you the following blog articles by Andrew Whitechapel: Can you build one add-in for multiple versions of Office? (See the BIG warning that this is not officially supported by Microsoft). Another option where you do not need the PIAs (this makes deployment a lot easier) would be to use ComImport together with late binding. This is however slower than using the interop assemblies, but if the automation code is not on the fast path this might be a good solution. You'll find an explanation how to implement this in the same blog post: Add-ins for Multiple Office Versions without PIAs A: Would the Primary Interop assemblies for Office not help with this? I don't know for sure as I haven't had to use them in earnest, but I think they would.
{ "language": "en", "url": "https://stackoverflow.com/questions/158428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL Server Reporting Services 2005 - How to Handle Empty Reports I was wondering if it is possible to not attach the Excel sheet if it is empty, and maybe write a different comment in the email if empty. When I go to report delivery options, there's no such configuration. Edit: I'm running SQL Server Reporting Services 2005. Some possible workarounds as mentioned below: MSDN: Reporting Services Extensions NoRows and NoRowsMessage properties I should look into these things. A: I believe the answer is no, at least not out of the box. It shouldn't be difficult to write your own delivery extension given the printing delivery extension sample included in RS. A: Yeah, I don't think that is possible. You could use the "NoRows" property of your table to display a message when no data is returned, but that wouldn't prevent the report from being attached. But at least when they opened the Excel file it could print out your custom message instead of an empty document. A: Found this somewhere else... I have a clean solution to this problem; the only downside is that a system administrator must create and maintain the schedule. Try these steps:
*Create a subscription for the report with all the required recipients.
*Set the subscription to run weekly on yesterday's day (ie if today is Tuesday, select Monday) with the schedule starting on today's date and stopping on today's date. Essentially, this schedule will never run.
*Open the newly created job in SQL Management Studio, go to the steps and copy the line of SQL (it will look something like this: EXEC ReportServer.dbo.AddEvent @EventType='TimedSubscription', @EventData='1c2d9808-aa22-4597-6191-f152d7503fff')
*Create your own job in SQL with the actual schedule and use something like: IF EXISTS(SELECT your test criteria...) BEGIN EXEC ReportServer.dbo.AddEvent @EventType=... etc. 
END A: I have had success with using a Data-Driven Subscription and a table containing my subscribers, with the data-driven subscription query looking like this: SELECT * FROM REPORT_SUBSCRIBERS WHERE EXISTS (SELECT QUERY_FROM_YOUR_REPORT) In the delivery settings, the recipient is the data column containing my email addresses. If the inner query returns no rows, then no emails will be sent. For your purposes, you can take advantage of the "Include Report" and "Comment" delivery settings. I imagine that a data-driven subscription query like this will work for you: SELECT 'person1@domain.com; person2@domain.com' AS RECIPIENTS, CASE WHEN EXISTS (REPORT_QUERY) THEN 'TRUE' ELSE 'FALSE' END AS INCLUDE_REPORT, CASE WHEN EXISTS (REPORT_QUERY) THEN 'The report is attached' ELSE 'There was no data in this report' END AS COMMENT Then use those columns in the appropriate fields when configuring the delivery settings for the subscription.
{ "language": "en", "url": "https://stackoverflow.com/questions/158431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Problems running ASP.NET application with IIS I have written a small ASP.NET application. It runs fine when running it with the small IIS installation that comes with Visual Studio 2005, but not when trying with IIS. I created the virtual directory in IIS where the application is located (done it through both IIS and VS 2005), but it does not work. In the beginning I thought it might be caused by the web.config file, but after a few tests, I think that the problem lies with IIS (not certain about it). Some of the errors that I get are Unable to start debugging on the web server. The underlying connection was closed: An unexpected error occurred on a receive. Click help for more information A: Have you run aspnet_regiis? Here's an overview site for setup of different IIS versions that should help if there are other questions/issues A: Try reinstalling aspnet_regiis.exe. If you are using .NET framework 4.0 and a 64 bit system, go to Run, type cmd and the Command Prompt will be up, then type %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -i If you are using a 32 bit system, go to Run, type cmd and then type %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i A: The site won't load at all or you can't debug remotely? A: Some thoughts:
*Make sure you've got debugging enabled in your web.config, if you're trying to debug. Otherwise, build in 'Release' mode.
*Make sure your project is set as an application, and is running as the correct version of .NET
*More description of your setup and the error message would be useful.
A: Make sure that customErrors is set to Off in the web.config. That should show the actual exception. Make sure that your virtual directory in IIS is set to the correct version of the .NET framework. Look at the properties in the virtual directory and see that the correct default documents are the ones that you are using in your dev project. 
Also look at the url headers for the website in IIS. A: On some of our servers we have both versions of the .NET framework. In IIS I typically have to set what version the virtual directory should be using. This can cause problems running it on the server. A: First make sure you have installed and configured IIS server. To check whether IIS server is installed: Run->inetmgr and press enter. To learn how to install and configure IIS server check the following link: http://chalaki.com/install-iis6-windows-xp-professional-sp3-setup-run-csharp-cgi/425/ To develop a website using Visual Web Developer with IIS instead of the default ASP.NET Development Server, in the new website window under "Location" click on "Browse" to see the different server options including IIS Server. The user can select the server as IIS server instead of "File System"; then the "Location" option will be "HTTP" instead of "File System". In Visual Web Developer 2008 under Properties->Start Option->Server, the "Use local IIS server" option is not shown, even though IIS server was installed and configured successfully. The only options shown are "use default server" and "Use custom server with base URL". So in Visual Web Developer 2008, to run on IIS server (if IIS server is installed), you need to do the following: New Website -> Under Locations click on "Browse" -> Click on "Local IIS" and then select the "IIS Virtual Directory" (the IIS virtual directory is the directory created by the user while configuring IIS server) -> Open. While running/debugging, the server which you selected while creating the website will be used to open the website; that is, while creating the website if you selected "IIS Server" then the website will be opened through IIS server. 
One more thing: while installing Visual Web Developer 2008 and IIS server, if you installed IIS server after installing Visual Studio then you need to do the following before creating a new website: Run -> cmd, press enter, then enter the following command -> C:\WINDOWS\Microsoft.NET\Framework\Version#\aspnet_regiis -i and press Enter; after 3 seconds you get the message "ASP.net was installed successfully". (*Version# will be v2.0.50727 in most cases)
{ "language": "en", "url": "https://stackoverflow.com/questions/158436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: IE7 form not prompted for remember password when submitted through javascript I have a website where we use Javascript to submit the login form. On Firefox it prompts the user to remember their password, when they login, but on IE7 it doesn't. After doing some research it looks like the user is only prompted in IE7 when the form is submitted via a Submit control. I've created some sample html to prove this is the case. <html> <head> <title>test autocomplete</title> <script type="text/javascript"> function submitForm() { return document.forms[0].submit(); } </script> </head> <body> <form method="GET" action="test_autocomplete.html"> <input type="text" id="username" name="username"> <br> <input type="password" id="password" name="password"/> <br> <a href="javascript:submitForm();">Submit</a> <br> <input type="submit"/> </form> </body> </html> The href link doesn't get the prompt but the submit button will in IE7. Both work in Firefox. I can't get the style of my site to look the same with a submit button, Does anyone know how to get the remember password prompt to show up when submitting via Javascript? A: Why not try hooking the form submission this way? <html> <head> <title>test autocomplete</title> <script type="text/javascript"> function submitForm() { return true; } </script> </head> <body> <form method="GET" action="test_autocomplete.html" onsubmit="return submitForm();"> <input type="text" id="username" name="username"> <br> <input type="password" id="password" name="password"/> <br> <a href="#" onclick="document.getElementById('FORMBUTTON').click();">Submit</a> <br> <input id="FORMBUTTON" type="submit"/> </form> </body> </html> That way your function will be called whether the link is clicked or the submit button is pushed (or the enter key is pressed) and you can cancel the submission by returning false. This may affect the way IE7 interprets the form's submission. 
Edit: I would recommend always hooking form submission this way rather than calling submit() on the form object. If you call submit() then it will not trigger the form object's onsubmit. A: Did you try putting a url in the href and attaching a click event handler that submits the form and returns false, so that the url does not get navigated to? Alternatively, a hidden submit button triggered via javascript? 
{ "language": "en", "url": "https://stackoverflow.com/questions/158438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Daemon logging in Linux So I have a daemon running on a Linux system, and I want to have a record of its activities: a log. The question is, what is the "best" way to accomplish this? My first idea is to simply open a file and write to it. FILE* log = fopen("logfile.log", "w"); /* daemon works...needs to write to log */ fprintf(log, "foo%s\n", (char*)bar); /* ...all done, close the file */ fclose(log); Is there anything inherently wrong with logging this way? Is there a better way, such as some framework built into Linux? A: I spit a lot of daemon messages out to daemon.info and daemon.debug when I am unit testing. A line in your syslog.conf can stick those messages in whatever file you want. http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/040/4036/4036s1.html has a better explanation of the C API than the man page, imo. A: This is probably going to be a one-horse race, but yes, the syslog facility which exists in most if not all Un*x derivatives is the preferred way to go. There is nothing wrong with logging to a file, but it does leave a number of tasks on your shoulders:
*is there a file system at your logging location to save the file
*what about buffering (for performance) vs flushing (to get logs written before a system crash)
*if your daemon runs for a long time, what do you do about the ever-growing log file.
Syslog takes care of all this, and more, for you. The API is similar to the printf clan so you should have no problems adapting your code. A: As stated above you should look into syslog. But if you want to write your own logging code I'd advise you to use the "a" (write append) mode of fopen. A few drawbacks of writing your own logging code are: log rotation handling, locking (if you have multiple threads), and synchronization (do you want to wait for the logs to be written to disk?). One of the drawbacks of syslog is that the application doesn't know if the logs have been written to disk (they might have been lost). 
A: Syslog is a good option, but you may wish to consider looking at log4c. The log4[something] frameworks work well in their Java and Perl implementations, and allow you to - from a configuration file - choose to log to either syslog, console, flat files, or user-defined log writers. You can define specific log contexts for each of your modules, and have each context log at a different level as defined by your configuration. (trace, debug, info, warn, error, critical), and have your daemon re-read that configuration file on the fly by trapping a signal, allowing you to manipulate log levels on a running server. A: If you use threading and you use logging as a debugging tool, you will want to look for a logging library that uses some sort of thread-safe, but unlocked ring buffers. One buffer per thread, with a global lock only when strictly needed. This avoids logging causing serious slowdowns in your software and it avoids creating heisenbugs which change when you add debug logging. If it has a high-speed compressed binary log format that doesn't waste time with format operations during logging and some nice log parsing and display tools, that is a bonus. I'd provide a reference to some good code for this but I don't have one myself. I just want one. :) A: One other advantage of syslog in larger (or more security-conscious) installations: The syslog daemon can be configured to send the logs to another server for recording there instead of (or in addition to) the local filesystem. It's much more convenient to have all the logs for your server farm in one place rather than having to read them separately on each machine, especially when you're trying to correlate events on one server with those on another. And when one gets cracked, you can't trust its logs any more... but if the log server stayed secure, you know nothing will have been deleted from its logs, so any record of the intrusion will be intact. 
A: Unix has had for a long while a special logging framework called syslog. Type man 3 syslog in your shell and you'll get the help for the C interface to it. An example:
#include <stdio.h>
#include <unistd.h>
#include <syslog.h>
int main(void)
{
    openlog("slog", LOG_PID|LOG_CONS, LOG_USER);
    syslog(LOG_INFO, "A different kind of Hello world ... ");
    closelog();
    return 0;
}
A: Our embedded system doesn't have syslog so the daemons I write do debugging to a file using the "a" open mode similar to how you've described it. I have a function that opens a log file, spits out the message and then closes the file (I only do this when something unexpected happens). However, I also had to write code to handle log rotation as other commenters have mentioned, which consists of 'tail -c 65536 logfile > logfiletmp && mv logfiletmp logfile'. It's pretty rough and maybe should be called "log frontal truncations", but it stops our small RAM-disk-based filesystem from filling up with log files. A: There are a lot of potential issues: for example, if the disk is full, do you want your daemon to fail? Also, you will be overwriting your file every time. Often a circular file is used so that you have space allocated on the machine for your file, but you can keep enough history to be useful without taking up too much space. There are tools like log4c that can help you. If your code is C++, then you might consider log4cxx in the Apache project (apt-get install liblog4cxx9-dev on ubuntu/debian), but it looks like you are using C. A: So far nobody has mentioned the Boost log library, which has a nice and easy way to redirect your log messages to files, a syslog sink, or even the Windows event log.
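The syslog.conf routing mentioned in the first answer is a one-line rule per destination; a hypothetical entry that collects the daemon facility's messages (at priority debug and above) into a dedicated file might look like:

```
# /etc/syslog.conf fragment -- facility.priority <Tab> action
daemon.debug                                    /var/log/mydaemon.log
```

The file path is invented; whitespace between the selector and the action is traditionally a tab, and syslogd must be signalled (typically with SIGHUP) to re-read its configuration.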
{ "language": "en", "url": "https://stackoverflow.com/questions/158457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: Search file in directory using complex pattern I am looking for a C# library for getting files or directories from a directory using a complex pattern like the one used in Ant:

* dir1/dir2/**/SVN/* --> Matches all files in SVN directories that are located anywhere in the directory tree under dir1/dir2
* **/test/** --> Matches all files that have a test element in their path, including test as a filename.
* ...

Do I need to code it myself? Extract what I want from NAnt? Or does this library exist and my google skill sucks? Directory.GetFiles(String path, String searchPattern) doesn't handle directory patterns, and neither does NDepend.Helpers.FileDirectoryPath (it's a great library for path manipulation, by the way).

A: Coding it yourself wouldn't be that hard. Just use a correctly formulated regular expression together with the System.IO methods to build the full path.

A: Are you comfortable with defining "*" as "anything but slash" and "**" as "anything at all"? If so, the regex conversion seems straightforward:

* -> [^\/]*
** -> .*

Then it's a matter of recursively enumerating all files, and checking if their paths match the regex.
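The conversion suggested in the last answer can be sketched in Python. It is illustrative only: it applies the two rules literally, so it does not handle Ant's zero-directory case for '**' (where dir1/**/SVN would also match dir1/SVN).

```python
import re

def ant_pattern_to_regex(pattern):
    """Translate an Ant-style pattern using the two rules from the answer:
    '**' -> '.*' (anything at all), '*' -> '[^/]*' (anything but slash)."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":
            parts.append(".*")
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    # Anchor both ends so the whole path must match.
    return re.compile("^" + "".join(parts) + "$")

matcher = ant_pattern_to_regex("dir1/dir2/**/SVN/*")
print(bool(matcher.match("dir1/dir2/a/b/SVN/entries")))  # True
print(bool(matcher.match("dir1/other/SVN/entries")))     # False
```

From there it is just a recursive walk (os.walk in Python, Directory.EnumerateFiles in C#) filtering each relative path through the compiled regex.
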
{ "language": "en", "url": "https://stackoverflow.com/questions/158460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to obtain longitude and latitude for a street address programmatically (and legally) Supposedly, it is possible to get this from Google Maps or some such service. (US addresses only is not good enough.)

A: You could also try the OpenStreetMap NameFinder, which contains open source, wiki-like street data for (potentially) the entire world. NameFinder will cease to exist at the end of August, but Nominatim is its replacement.

A: The term you're looking for is geocoding, and yes, Google does provide this service.

* New V3 API: http://code.google.com/apis/maps/documentation/geocoding/
* Old V2 API: http://code.google.com/apis/maps/documentation/services.html#Geocoding

A: Google requires you to show a Google map with their data, with a max of 2.5k (was 25k) HTTP requests per day. Useful, but you have to watch usage. They do have http://groups.google.com/group/Google-Maps-API/web/resources-non-google-geocoders (Google has since removed this. If you see a duplicate or cache, I'll link to that.) ...in which I found GeoNames, which has both a downloadable db, a free web service and a commercial web service.

A: Google's terms of service will let you use their geocoding API for free if your website is in turn free for consumers to use. If not, you will have to get a license for the Enterprise Maps.

A: For use with Drupal and PHP (and easily modified):

function get_lat_long($address) {
  $res = drupal_http_request('http://maps.googleapis.com/maps/api/geocode/json?address=' . $address . '&sensor=false');
  return json_decode($res->data)->results[0]->geometry->location;
}

A: In addition to the aforementioned Google geocoding web service, there is also a competing service provided by Yahoo. In a recent project where geocoding is done with user interaction, I included support for both. The reason is I have found that, especially outside the U.S., their handling of more obscure locations varies widely. Sometimes Google will have the best answer, sometimes Yahoo.
One gotcha to be aware of: if Google really thinks they don't know where your place is, they will return a 602 error indicating failure. Yahoo, on the other hand, if it can peel out a city/province/state/etc. out of your bad address, will return the location of the center of that town. So you do have to pay attention to the results you get to see if they are really what you want. There are ancillary fields in some results that tell you about this: Yahoo calls this field "precision" and Google calls it "accuracy".

A: You can have a look at the Google Maps API docs here to get a start on this: http://code.google.com/apis/maps/documentation/services.html#Geocoding It also seems to be something that you can do for international addresses using Live Maps: http://virtualearth.spaces.live.com/blog/cns!2BBC66E99FDCDB98!1588.entry

A: You can also do this with Microsoft's MapPoint Web Services. Here's a blog post that explains how: http://www.codestrider.com/BlogRead.aspx?b=b5e8e275-cd18-4c24-b321-0da26e01bec5

A: R code to get the latitude and longitude of a street address:

# CODE TO GET THE LATITUDE AND LONGITUDE OF A STREET ADDRESS WITH GOOGLE API
addr <- '6th Main Rd, New Thippasandra, Bengaluru, Karnataka' # set your address here
url = paste('http://maps.google.com/maps/api/geocode/xml?address=', addr, '&sensor=false', sep='') # construct the URL
doc = xmlTreeParse(url)
root = xmlRoot(doc)
lat = xmlValue(root[['result']][['geometry']][['location']][['lat']])
long = xmlValue(root[['result']][['geometry']][['location']][['lng']])

lat
[1] "12.9725020"
long
[1] "77.6510688"

A: If you want to do this in Python:

import json, urllib, urllib2
address = "Your address, New York, NY"
encodedAddress = urllib.quote_plus(address)
data = urllib2.urlopen("http://maps.googleapis.com/maps/api/geocode/json?address=" + encodedAddress + '&sensor=false').read()
location = json.loads(data)['results'][0]['geometry']['location']
lat = location['lat']
lng = location['lng']
print lat, lng

Note
that Google does seem to throttle requests if it sees more than a certain amount, so you do want to use an API key in your HTTP request.

A: If you want to do this without relying on a service, then you download the TIGER Shapefiles from the US Census. You look up the street you're interested in, which will have several segments. Each segment will have a start address and an end address, and you interpolate along the segment to find where on the segment your house number lies. This will provide you with a lon/lat pair. Keep in mind, however, that online services employ a great deal of address checking and correction, which you'd have to duplicate as well to get good results. Also note that as nice as free data is, it's not perfect - the latest streets aren't in there (they might be in the data Google uses), and the streets may be off their real location by some amount due to survey inaccuracies. But for 98% of geocoding needs it works perfectly, is free, and you control everything, so you're reducing dependencies in your app. OpenStreetMap has the aim of mapping everything in the world; though they aren't quite there, it's worth keeping tabs on, as they provide their data under a CC license. However, many (most?) other countries are only mapped by governments or services for which you need to pay a fee. If you don't need to geocode very much data, then using Google, Yahoo, or some of the other free worldwide mapping services may be enough. If you have to geocode a lot of data, then you will be best served by leasing map data from a major provider, such as TeleAtlas. -Adam

A: I had a batch of 100,000 records to be geocoded and ran into Google API's limit (and since it was for an internal enterprise app, we had to upgrade to their premium service, which is $10K plus). So, I used this instead: http://geoservices.tamu.edu/Services/Geocode/BatchProcess/ -- they also have an API.
(the total cost was around ~$200)

A: You can try this in JavaScript for a city like Kohat:

var geocoder = new google.maps.Geocoder();
var address = "kohat";
geocoder.geocode({ 'address': address }, function(results, status) {
    var latitude = results[0].geometry.location.lat();
    var longitude = results[0].geometry.location.lng();
    alert(latitude + " and " + longitude);
});

A: In Python, using the geopy package (PyPI), you can get the latitude, longitude, zip code, etc. Here is working sample code:

from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="your-app-id")
location = geolocator.geocode("Your required address ")
if location:
    print('\n Nominatim ADDRESS :', location.address)
    print('\n Nominatim LATLANG :', (location.latitude, location.longitude))
    print('\n Nominatim FULL RESPONSE :', location.raw)
else:
    print('Cannot Find')

With Nominatim, some addresses don't work, so I also tried MapQuest, which returns them correctly. MapQuest provides a free plan of 15,000 transactions/month, which is enough for me. Sample code:

import geocoder
g = geocoder.mapquest("Your required address ", key='your-api-key')
for result in g:
    # print(result.address, result.latlng)
    print('\n mapquest ADDRESS :', result.address, result.city, result.state, result.country)
    print('\n mapquest LATLANG :', result.latlng)
    print('\n mapquest FULL RESPONSE :', result.raw)

Hope it helps.

A: I know this is an old question, but Google keeps changing the way to get latitude and longitude on a regular basis.
HTML code:

<form>
  <input type="text" name="address" id="address" style="width:100%;">
  <input type="button" onclick="return getLatLong()" value="Get Lat Long" />
</form>
<div id="latlong">
  <p>Latitude: <input size="20" type="text" id="latbox" name="lat" ></p>
  <p>Longitude: <input size="20" type="text" id="lngbox" name="lng" ></p>
</div>

JavaScript code:

<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap" async defer></script>
<script>
function getLatLong() {
    var address = document.getElementById("address").value;
    var geocoder = new google.maps.Geocoder();
    geocoder.geocode({ 'address': address }, function(results, status) {
        if (status == google.maps.GeocoderStatus.OK) {
            var latitude = results[0].geometry.location.lat();
            document.getElementById("latbox").value = latitude;
            var longitude = results[0].geometry.location.lng();
            document.getElementById("lngbox").value = longitude;
        }
    });
}
</script>
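The TIGER-segment interpolation described in an earlier answer can be sketched in Python. The segment data here (house-number range and endpoint coordinates) is invented for the example; real TIGER records supply these fields for each street segment.

```python
def interpolate_address(house_number, start_number, end_number,
                        start_latlon, end_latlon):
    """Linearly interpolate a house number's position along a street segment.

    start_number/end_number are the house numbers at the two ends of the
    segment; start_latlon/end_latlon are (lat, lon) tuples for those ends.
    """
    if end_number == start_number:
        return start_latlon
    # Fraction of the way along the segment where this house number falls.
    t = (house_number - start_number) / (end_number - start_number)
    lat = start_latlon[0] + t * (end_latlon[0] - start_latlon[0])
    lon = start_latlon[1] + t * (end_latlon[1] - start_latlon[1])
    return (lat, lon)

# Hypothetical segment: house numbers 100-198, running between two points.
print(interpolate_address(149, 100, 198, (40.7500, -73.9900), (40.7510, -73.9880)))
```

Real geocoders also normalize the input address and pick the correct segment (odd/even side of the street) before this step, which is where most of the work actually lies.
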
{ "language": "en", "url": "https://stackoverflow.com/questions/158474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Programmatically recognize text from scans in a PDF File I have a PDF file, which contains data that we need to import into a database. The files seem to be PDF scans of printed alphanumeric text. Looks like 10 pt. Times New Roman. Are there any tools or components that will allow me to recognize and parse this text?

A: You can't extract scanned text from a PDF. You need OCR software. The good news is there are a few open source applications you can try, and the OCR route will most likely be easier than using a PDF library to extract text. Check out Tesseract and GOCR.

A: I have posted about parsing PDFs in one of my blogs. Hit this link: http://devpinoy.org/blogs/marl/archive/2008/03/04/pdf-to-text-using-open-source-library-pdfbox-another-sample-for-grade-1-pupils.aspx

Edit: Link no longer works. Below quoted from http://web.archive.org/web/20130507084207/http://devpinoy.org/blogs/marl/archive/2008/03/04/pdf-to-text-using-open-source-library-pdfbox-another-sample-for-grade-1-pupils.aspx

Well, the following is based on popular examples available on the web. What this does is "read" the PDF file and output it as text in the rich text box control in the form. The PDFBox for .NET library can be downloaded from SourceForge. You need to add references to IKVM.GNU.Classpath & PDFBox-0.7.3. Also, FontBox-0.1.0-dev.dll and PDFBox-0.7.3.dll need to be added to the bin folder of your application. For some reason I can't recall (maybe it's from one of the tutorials), I also added IKVM.GNU.Classpath.dll to the bin. On a side note, I just got my copy of "Head First C#" (on Keith's suggestion) from Amazon. The book is cool! It is really written for beginners. This edition covers VS2008 and the framework 3.5. Here you go...
/* Marlon Ribunal
 * Convert PDF To Text
 * *******************/

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;
using System.Drawing.Printing;
using System.IO;
using System.Text;
using System.ComponentModel.Design;
using System.ComponentModel;
using org.pdfbox.pdmodel;
using org.pdfbox.util;

namespace MarlonRibunal.iPdfToText
{
    public partial class MainForm : Form
    {
        public MainForm()
        {
            InitializeComponent();
        }

        void Button1Click(object sender, EventArgs e)
        {
            PDDocument doc = PDDocument.load("C:\\pdftoText\\myPdfTest.pdf");
            PDFTextStripper stripper = new PDFTextStripper();
            richTextBox1.Text = (stripper.getText(doc));
        }
    }
}

A: I've used pdftohtml to successfully strip tables out of PDF into CSV. It's based on Xpdf, which is a more general-purpose tool that includes pdftotext. I just wrap it as a Process.Start call from C#. If you're looking for something a little more DIY, there's the iTextSharp library - a port of Java's iText - and PDFBox (yes, it says Java - but they have a .NET version by way of IKVM.NET). Here are some CodeProject articles on using iTextSharp and PDFBox from C#. And, if you're really a masochist, you could call into Adobe's PDF IFilter with COM interop. The IFilter spec is pretty simple, but I would guess that the interop overhead would be significant.

Edit: After re-reading the question and subsequent answers, it's become clear that the OP is dealing with images in his PDF. In that case, you'll need to extract the images (the PDF libraries above are able to do that fairly easily) and run them through an OCR engine. I've used MODI interactively before, with decent results.
It's COM, so calling it from C# via interop is also doable and pretty simple:

' lifted from http://en.wikipedia.org/wiki/Microsoft_Office_Document_Imaging
Dim inputFile As String = "C:\test\multipage.tif"
Dim strRecText As String = ""
Dim Doc1 As MODI.Document
Doc1 = New MODI.Document
Doc1.Create(inputFile)
Doc1.OCR() ' this will ocr all pages of a multi-page tiff file
Doc1.Save() ' this will save the deskewed reoriented images, and the OCR text, back to the inputFile
For imageCounter As Integer = 0 To (Doc1.Images.Count - 1) ' work your way through each page of results
    strRecText &= Doc1.Images(imageCounter).Layout.Text ' this puts the ocr results into a string
Next
File.AppendAllText("C:\test\testmodi.txt", strRecText) ' write the OCR file out to disk
Doc1.Close() ' clean up
Doc1 = Nothing

Others like Tesseract, but I have no direct experience with it. I've heard both good and bad things about it, so I imagine it greatly depends on your source quality.

A: At a company I used to work for, we used the ActivePDF Toolkit with some success: http://www.activepdf.com/products/serverproducts/toolkit/index.cfm I think you'd need at least the Standard or Pro version, but they have trials so you can see if it'll do what you want it to.

A: A quick Google search shows this promising result: http://www.pdftron.com/net/index.html

A: If the PDF is a scan of printed text, it will be hard (it involves image processing, character recognition, etc.) to do it yourself. PDFs will generally store the scanned documents as JPEGs internally. You are better off using a third-party tool (an OCR tool) that does this.

A: You can use a module like Perl's PDF to extract the text. And use another tool to import the pertinent info into the database. I am sure there are PDF components for .NET, but I have not tried any, so I don't know what is good.

A: I've recently found ReportLab for Python.

A: If I get it right, sheebz is asking how to extract PDF fields and load the data into a database.
Have you looked at iTextSharp? - http://sourceforge.net/projects/itextsharp/

A: Based on Mark Brackett's answer, I created a NuGet package to wrap pdftotext. It's open source, targeting .NET Standard 1.6 and .NET Framework 4.5. Usage:

using XpdfNet;

var pdfHelper = new XpdfHelper();
string content = pdfHelper.ToText("./pathToFile.pdf");
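The "wrap pdftotext in a Process.Start call" idea from an earlier answer looks like this in Python with subprocess. This is a sketch: it assumes the pdftotext binary from Xpdf/Poppler is on the PATH, and it only extracts embedded text - scanned images still need an OCR engine such as Tesseract first.

```python
import subprocess

def pdftotext_cmd(pdf_path, txt_path="-"):
    # '-layout' preserves the physical column layout; an output path of '-'
    # asks pdftotext to write the extracted text to stdout.
    return ["pdftotext", "-layout", pdf_path, txt_path]

def extract_text(pdf_path):
    # Raises CalledProcessError if pdftotext is missing or the PDF is bad.
    result = subprocess.run(
        pdftotext_cmd(pdf_path), capture_output=True, text=True, check=True
    )
    return result.stdout
```

Building the argument list separately keeps the command testable and avoids shell quoting problems with paths that contain spaces.
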
{ "language": "en", "url": "https://stackoverflow.com/questions/158479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Inject data members to an object This question is based on another question of mine (thankfully answered). So if in a model I have this:

def self.find_extended
  person = Person.find(:first)
  complete_name = person.firstname + ', ' + person.lastname
  return person
end

How can I inject the complete name into the person object, so that in my controller/view I can access it by person.complete_name? Thank you for your time, Silviu

A: I think the best way to do this is to create a complete_name attribute in your Person class:

def complete_name
  firstname + ', ' + lastname
end

A: If you are going to be iterating over a lot of records, then using an interpolated string will be more memory-efficient.

def complete_name
  "#{firstname}, #{lastname}"
end

Using String#+ to concatenate strings creates String objects at each step. In other words, if firstname is 'John' and lastname is 'Doe', then each of these strings will exist in memory and need to be garbage-collected at some point: 'John', 'Doe', 'John, ', and finally 'John, Doe'. Not to mention that there are three method invocations instead of one string interpolation, which is more efficiently implemented in C. If you use the #{} notation, then you avoid creating the 'John, ' string. It doesn't matter when dealing with one or two records, but in large datasets used in all sorts of methods it can add up quickly.

A: You could define:

attr_accessor :complete_name

in the person model and then just do:

person.complete_name = person.firstname + ', ' + person.lastname

A: Also, another quick note: you don't need that return statement in Ruby. The value of the last statement in your method will be returned.
{ "language": "en", "url": "https://stackoverflow.com/questions/158482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: C# network login How do I perform a network login, to access a shared drive for instance, programmatically in C#? The same can be achieved by either attempting to open a share through Explorer, or by the net use shell command.

A: A P/Invoke call to WNetAddConnection2 will do the trick. Look here for more info.

[DllImport("mpr.dll")]
public static extern int WNetAddConnection2A
(
    [MarshalAs(UnmanagedType.LPArray)] NETRESOURCEA[] lpNetResource,
    [MarshalAs(UnmanagedType.LPStr)] string lpPassword,
    [MarshalAs(UnmanagedType.LPStr)] string UserName,
    int dwFlags
);

A: You'll need to use Windows identity impersonation. Take a look at these links:

http://blogs.msdn.com/shawnfa/archive/2005/03/21/400088.aspx
http://blogs.msdn.com/saurabhkv/archive/2008/05/29/windowsidentity-impersonation-using-c-code.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/158492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to use JQuery UI datepicker with bgIframe on IE 6 I am trying to use the jQuery UI datepicker (latest stable version 1.5.2) on an IE6 website. But I am having the usual problems with combo boxes (selects) on IE6, where they float above other controls. I have tried adding the bgIframe plugin after declaring the datepicker, with no luck. My guess is that the .ui-datepicker-div to which I am attaching the bgIframe doesn't exist until the calendar is shown. I am wondering if I can put the .bgIframe() command directly into the datepicker .js file and, if so, where? (The similar control by Kelvin Luck uses this approach.) Current code:

$(".DateItem").datepicker({
    showOn: "button",
    ... etc ...
});
$(".ui-datepicker-div").bgIframe();

A: This should be taken care of for you by default. The iframe gets included by default in IE6 in the datepicker, and the style for it, called ui-datepicker-cover, handles the transparency. The only time this isn't the case is in the old themeroller code, where the style wasn't in there.

A: I worried very much about this problem, too. The solution is the following:

$(".DateItem").datepicker({
    showOn: "button",
    beforeShow: function() {
        $('#ui-datepicker-div').bgiframe();
    },
    ... etc ...
});

A: I have noted Marc's comment that the ui-datepicker-cover style should handle this. In my case the right and bottom edges of the calendar would still show drop-downs through them. It looks like the size of the iframe is initially being set by the following lines of code

if ($.browser.msie && parseInt($.browser.version, 10) < 7) // fix IE < 7 select problems
    $('iframe.ui-datepicker-cover').css({ width: inst.dpDiv.width() + 4, height: inst.dpDiv.height() + 4 });

in the postProcess function. This size is then reset each time the date is changed by the line

inst.dpDiv.empty().append(this._generateHTML(inst)).
find('iframe.ui-datepicker-cover').
css({ width: dims.width, height: dims.height });

My simplistic solution was to remove these two sets of code and fix the size of the cover style in the .css file:

//if ($.browser.msie && parseInt($.browser.version, 10) < 7) // fix IE < 7 select problems
//    $('iframe.ui-datepicker-cover').css({ width: inst.dpDiv.width() + 4, height: inst.dpDiv.height() + 4 });

inst.dpDiv.empty().append(this._generateHTML(inst))//. <=== note the // before the .
//find('iframe.ui-datepicker-cover').
//css({ width: dims.width, height: dims.height });

In the css file, set the width of .ui-datepicker-cover to 220px and the height to 200px.

Steve

A: I had something like this, and to use the bgIframe plugin I just put the bgiframe() function inside the datepicker's onBeforeShow() method. Check it:

$('#date').DatePicker({
    format: 'Y/m/d',
    date: $('#date').val(),
    current: $('#date').val(),
    position: 'r',
    onBeforeShow: function() {
        $('#date').DatePickerSetDate($('#date').val(), true);
        $('.datepickerContainer').bgiframe();
    },
    onChange: function(formated, dates) {
        $('#date').val(formated);
        $('#date').DatePickerHide();
    }
});
{ "language": "en", "url": "https://stackoverflow.com/questions/158502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }