Q: Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on

I have a scenario (Windows Forms, C#, .NET):

* There is a main form which hosts a user control.
* The user control does some heavy data operation, such that if I directly call the UserControl_Load method, the UI becomes unresponsive for the duration of the load method's execution.
* To overcome this I load the data on a different thread (trying to change the existing code as little as I can).
* I used a background worker thread which loads the data and, when done, notifies the application that it has finished its work.
* Now comes the real problem. All the UI (the main form and its child user controls) was created on the primary main thread. In the Load method of the user control, I fetch data based on the values of some control (like a textbox) on the user control.

The pseudocode would look like this:

CODE 1

```csharp
UserContrl1_LoadDataMethod()
{
    if (textbox1.text == "MyName") // This gives the exception
    {
        // Load data corresponding to "MyName".
        // Populate a global variable List<string> which will be bound to a grid at some later stage.
    }
}
```

The exception it gave was:

> Cross-thread operation not valid: Control accessed from a thread other than the thread it was created on.

To learn more about this I did some googling, and a suggestion came up to use the following code:

CODE 2

```csharp
UserContrl1_LoadDataMethod()
{
    if (InvokeRequired) // Line #1
    {
        this.Invoke(new MethodInvoker(UserContrl1_LoadDataMethod));
        return;
    }

    if (textbox1.text == "MyName") // Now it won't give an exception
    {
        // Load data corresponding to "MyName".
        // Populate a global variable List<string> which will be bound to a grid at some later stage.
    }
}
```

But it still seems that I've come back to square one: the application again becomes unresponsive. It seems to be due to the execution of the if condition at Line #1. The loading task is again done by the parent thread and not the third thread that I spawned.
I don't know whether I have perceived this right or wrong. How do I resolve this, and what is the effect of executing the Line #1 if block?

The situation is this: I want to load data into a global variable based on the value of a control. I don't want to change the value of a control from the child thread, and I'm never going to do that. I only access the value so that the corresponding data can be fetched from the database.

A: You need to look at the BackgroundWorker example: http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx

Especially note how it interacts with the UI layer. Based on your posting, this seems to answer your issues.

A: You only want to use Invoke or BeginInvoke for the bare minimum piece of work required to change the UI. Your "heavy" method should execute on another thread (e.g. via BackgroundWorker), using Control.Invoke/Control.BeginInvoke just to update the UI. That way your UI thread will be free to handle UI events etc.

See my threading article for a WinForms example, although the article was written before BackgroundWorker arrived on the scene and I'm afraid I haven't updated it in that respect. BackgroundWorker merely simplifies the callback a bit.
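The division of labour described in the answer above (do the heavy work on a background thread, marshal only the small finished result back to the UI thread) is not specific to WinForms. As a language-neutral illustration, here is a minimal sketch of the same pattern using Python's standard threading and queue modules, with the queue standing in for the message loop that Control.BeginInvoke posts to (all names here are made up for the example):

```python
import queue
import threading

def heavy_load(name, results):
    """Simulates the slow data load; runs entirely off the 'UI' thread."""
    data = ["%s-row-%d" % (name, i) for i in range(3)]
    # Only the small finished result is posted back, the analogue of
    # handing a short delegate to Control.BeginInvoke.
    results.put(data)

def main():
    results = queue.Queue()
    worker = threading.Thread(target=heavy_load, args=("MyName", results))
    worker.start()
    # The 'UI' thread is free to keep pumping events; here it just
    # blocks briefly to collect the posted result.
    data = results.get(timeout=5)
    worker.join()
    return data

if __name__ == "__main__":
    print(main())
```

The point mirrors the advice above: the expensive part never touches UI state, and the only thing crossing the thread boundary is the cheap, already-computed result.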
A: Here is an alternative way, if the object you are working with doesn't have InvokeRequired. This is useful if you are working with the main form in a class other than the main form, with an object that is in the main form but doesn't have InvokeRequired:

```csharp
delegate void updateMainFormObject(FormObjectType objectWithoutInvoke, string text);

private void updateFormObjectType(FormObjectType objectWithoutInvoke, string text)
{
    MainForm.Invoke(new updateMainFormObject(UpdateObject), objectWithoutInvoke, text);
}

public void UpdateObject(ToolStripStatusLabel objectWithoutInvoke, string text)
{
    objectWithoutInvoke.Text = text;
}
```

It works the same as above, but it is a different approach if you don't have an object with InvokeRequired but do have access to the MainForm.

A: I found a need for this while programming an iOS-Phone MonoTouch app controller in a Visual Studio WinForms prototype project outside of Xamarin Studio. Preferring to program in VS over Xamarin Studio as much as possible, I wanted the controller to be completely decoupled from the phone framework. This way, implementing this for other frameworks like Android and Windows Phone would be much easier for future uses.

I wanted a solution where the GUI could respond to events without the burden of dealing with the cross-thread switching code behind every button click. Basically, let the class controller handle that to keep the client code simple. You could possibly have many events on the GUI, whereas handling them in one place in the class would be cleaner. I am not a multithreading expert; let me know if this is flawed.
```csharp
public partial class Form1 : Form
{
    private ExampleController.MyController controller;

    public Form1()
    {
        InitializeComponent();
        controller = new ExampleController.MyController((ISynchronizeInvoke) this);
        controller.Finished += controller_Finished;
    }

    void controller_Finished(string returnValue)
    {
        label1.Text = returnValue;
    }

    private void button1_Click(object sender, EventArgs e)
    {
        controller.SubmitTask("Do It");
    }
}
```

The GUI form is unaware that the controller is running asynchronous tasks.

```csharp
public delegate void FinishedTasksHandler(string returnValue);

public class MyController
{
    private ISynchronizeInvoke _syn;

    public MyController(ISynchronizeInvoke syn)
    {
        _syn = syn;
    }

    public event FinishedTasksHandler Finished;

    public void SubmitTask(string someValue)
    {
        System.Threading.ThreadPool.QueueUserWorkItem(state => submitTask(someValue));
    }

    private void submitTask(string someValue)
    {
        someValue = someValue + " " + DateTime.Now.ToString();
        System.Threading.Thread.Sleep(5000);
        // Finished(someValue); // This causes a cross-threading error if called like this.
        if (Finished != null)
        {
            if (_syn.InvokeRequired)
            {
                _syn.Invoke(Finished, new object[] { someValue });
            }
            else
            {
                Finished(someValue);
            }
        }
    }
}
```

A: A simple and reusable way to work around this problem.

Extension method:

```csharp
public static class FormExts
{
    public static void LoadOnUI(this Form frm, Action action)
    {
        if (frm.InvokeRequired)
            frm.Invoke(action);
        else
            action.Invoke();
    }
}
```

Sample usage:

```csharp
private void OnAnyEvent(object sender, EventArgs args)
{
    this.LoadOnUI(() =>
    {
        label1.Text = "";
        button1.Text = "";
    });
}
```

A: I know it's too late now. However, even today, if you are having trouble accessing controls across threads, this is the shortest answer to date :P

```csharp
Invoke(new Action(() =>
{
    label1.Text = "WooHoo!!!";
}));
```

This is how I access any form control from a thread.
A: For example, to get the text from a control on the UI thread:

```vb
Private Delegate Function GetControlTextInvoker(ByVal ctl As Control) As String

Private Function GetControlText(ByVal ctl As Control) As String
    Dim text As String

    If ctl.InvokeRequired Then
        text = CStr(ctl.Invoke(
            New GetControlTextInvoker(AddressOf GetControlText), ctl))
    Else
        text = ctl.Text
    End If

    Return text
End Function
```

A: Along the same lines as previous answers, but a very short addition that allows you to use all Control properties without getting a cross-thread invocation exception.

Helper method:

```csharp
/// <summary>
/// Helper method to determine if invoke is required; if so, it will rerun the method on the correct thread.
/// If not, it does nothing.
/// </summary>
/// <param name="c">Control that might require invoking</param>
/// <param name="a">Action to perform on the control's thread if so.</param>
/// <returns>true if invoke was required</returns>
public bool ControlInvokeRequired(Control c, Action a)
{
    if (c.InvokeRequired)
        c.Invoke(new MethodInvoker(delegate { a(); }));
    else
        return false;

    return true;
}
```

Sample usage:

```csharp
// Usage on a textbox
public void UpdateTextBox1(String text)
{
    // Check if invoke is required; if so, return, as this method will be re-called on the correct thread
    if (ControlInvokeRequired(textBox1, () => UpdateTextBox1(text))) return;

    textBox1.Text = text;
}

// Or any control
public void UpdateControl(Color c, String s)
{
    // Check if invoke is required; if so, return, as this method will be re-called on the correct thread
    if (ControlInvokeRequired(myControl, () => UpdateControl(c, s))) return;

    myControl.Text = s;
    myControl.BackColor = c;
}
```

A:

```csharp
this.Invoke(new MethodInvoker(delegate
{
    // your code here;
}));
```

A: As per Prerak K's update comment (since deleted): I guess I have not presented the question properly. The situation is this: I want to load data into a global variable based on the value of a control. I don't want to change the value of a control from the child thread. I'm not going to do it ever from a child thread.
So only accessing the value so that the corresponding data can be fetched from the database.

The solution you want should then look like:

```csharp
UserContrl1_LOadDataMethod()
{
    string name = "";
    if (textbox1.InvokeRequired)
    {
        textbox1.Invoke(new MethodInvoker(delegate { name = textbox1.text; }));
    }
    if (name == "MyName")
    {
        // do whatever
    }
}
```

Do your serious processing in the separate thread before you attempt to switch back to the control's thread. For example:

```csharp
UserContrl1_LOadDataMethod()
{
    if (textbox1.text == "MyName") // <<====== Now it won't give the exception
    {
        // Load data corresponding to "MyName"
        // Populate a global variable List<string> which will be
        // bound to the grid at some later stage
        if (InvokeRequired)
        {
            // after we've done all the processing,
            this.Invoke(new MethodInvoker(delegate
            {
                // load the control with the appropriate data
            }));
            return;
        }
    }
}
```

A: I had this problem with the FileSystemWatcher and found that the following code solved the problem:

```csharp
fsw.SynchronizingObject = this;
```

The control then uses the current form object to deal with the events, and will therefore be on the same thread.

A: Same question: how-to-update-the-gui-from-another-thread-in-c

Two ways:

* Return the value in e.Result and use it to set your textbox value in the backgroundWorker_RunWorkerCompleted event.
* Declare some variable to hold these kinds of values in a separate class (which will work as a data holder). Create a static instance of this class and you can access it from any thread.
Example:

```csharp
public class data_holder_for_controls
{
    // It will hold the value for your label
    public string status = string.Empty;
}

class Demo
{
    public static data_holder_for_controls d1 = new data_holder_for_controls();

    static void Main(string[] args)
    {
        ThreadStart ts = new ThreadStart(perform_logic);
        Thread t1 = new Thread(ts);
        t1.Start();
        t1.Join();
        // your_label.Text = d1.status; --- can access it from any thread
    }

    public static void perform_logic()
    {
        // Put some code here in this function
        for (int i = 0; i < 10; i++)
        {
            // statements here
        }
        // Set the result in the status variable
        d1.status = "Task done";
    }
}
```

A: I find the check-and-invoke code which needs to be littered within all methods related to forms to be way too verbose and unneeded. Here's a simple extension method which lets you do away with it completely:

```csharp
public static class Extensions
{
    public static void Invoke<TControlType>(this TControlType control, Action<TControlType> del)
        where TControlType : Control
    {
        if (control.InvokeRequired)
            control.Invoke(new Action(() => del(control)));
        else
            del(control);
    }
}
```

And then you can simply do this:

```csharp
textbox1.Invoke(t => t.Text = "A");
```

No more messing around; simple.

A: Threading Model in UI

Please read the Threading Model in UI applications (the old VB link is here) in order to understand the basic concepts. The link navigates to a page that describes the WPF threading model; however, Windows Forms uses the same idea.

The UI thread:

* There is only one thread (the UI thread) that is allowed to access System.Windows.Forms.Control and its subclasses' members.
* An attempt to access a member of System.Windows.Forms.Control from a thread other than the UI thread will cause a cross-thread exception.
* Since there is only one thread, all UI operations are queued as work items into that thread.
* If there is no work for the UI thread, then there are idle gaps that can be used for non-UI-related computing.
* In order to use the mentioned gaps, use the System.Windows.Forms.Control.Invoke or System.Windows.Forms.Control.BeginInvoke methods.

BeginInvoke and Invoke methods:

* The computing overhead of the method being invoked should be small, as should the overhead of event handler methods, because the UI thread is used there, the same thread that is responsible for handling user input. This applies regardless of whether it is System.Windows.Forms.Control.Invoke or System.Windows.Forms.Control.BeginInvoke.
* To perform computationally expensive operations, always use a separate thread. Since .NET 2.0, BackgroundWorker has been dedicated to performing computationally expensive operations in Windows Forms. However, in new solutions you should use the async-await pattern, as described here.
* Use the System.Windows.Forms.Control.Invoke or System.Windows.Forms.Control.BeginInvoke methods only to update a user interface. If you use them for heavy computations, your application will block.

Invoke:

* System.Windows.Forms.Control.Invoke causes the separate thread to wait till the invoked method is completed.

BeginInvoke:

* System.Windows.Forms.Control.BeginInvoke doesn't cause the separate thread to wait till the invoked method is completed.

Code solution:

Read the answers on the question How to update the GUI from another thread in C#?. For C# 5.0 and .NET 4.5 the recommended solution is here.

A: Simply use this:

```csharp
this.Invoke((MethodInvoker)delegate
{
    YourControl.Property = value; // runs thread-safe
});
```

A: Controls in .NET are not generally thread-safe. That means you shouldn't access a control from a thread other than the one where it lives. To get around this, you need to invoke the control, which is what your second sample is attempting.

However, in your case, all you've done is pass the long-running method back to the main thread. Of course, that's not really what you want to do. You need to rethink this a little, so that all you're doing on the main thread is setting a quick property here and there.
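The Invoke/BeginInvoke distinction described above (the caller waits versus does not wait for the marshalled call to complete) can be sketched without WinForms using a one-thread executor from Python's concurrent.futures, where the single worker thread plays the role of the UI thread and future.result() plays the role of Invoke's wait. All names here are illustrative, not part of any WinForms API:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# A one-thread pool stands in for the single UI thread: submitted
# callables run serially on that one thread, like queued window messages.
ui_thread = ThreadPoolExecutor(max_workers=1)

def update_label(text):
    return "label set to %r on %s" % (text, threading.current_thread().name)

# 'Invoke' style: submit and block until the marshalled call completes.
sync_result = ui_thread.submit(update_label, "done").result()

# 'BeginInvoke' style: submit and carry on; the caller is not blocked here.
future = ui_thread.submit(update_label, "later")
# ... the caller keeps doing other work ...
async_result = future.result()  # collect the outcome later, if needed

ui_thread.shutdown()
```

Because both calls run on the same single pool thread, the sketch also demonstrates the warning above: a long-running callable submitted this way would freeze the "UI", which is exactly why invoked work must stay small.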
A: The cleanest (and proper) solution for UI cross-threading issues is to use SynchronizationContext; see the article Synchronizing calls to the UI in a multi-threaded application, which explains it very nicely.

A: Here is the simplest (in my opinion) way to modify objects from another thread:

```csharp
using System.Threading.Tasks;
using System.Threading;

namespace TESTE
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Action<string> DelegateTeste_ModifyText = THREAD_MOD;
            Invoke(DelegateTeste_ModifyText, "MODIFY BY THREAD");
        }

        private void THREAD_MOD(string teste)
        {
            textBox1.Text = teste;
        }
    }
}
```

A: A new look using async/await and callbacks. You only need one line of code if you keep the extension method in your project.

```csharp
/// <summary>
/// A new way to use Tasks for asynchronous calls
/// </summary>
public class Example
{
    /// <summary>
    /// No more delegates, background workers etc. Just one line of code, as shown below.
    /// Note it is dependent on the XTask class shown next.
    /// </summary>
    public async void ExampleMethod()
    {
        // Still on the GUI/original thread here
        // Do your updates before the next line of code
        await XTask.RunAsync(() =>
        {
            // Running an asynchronous task here
            // Cannot update the GUI thread here, but can do lots of work
        });
        // Can update the GUI/original thread on this line
    }
}

/// <summary>
/// A class containing extension methods for the Task class
/// Put this file in a folder named Extensions
/// Use a prefix of X for the class it extends
/// </summary>
public static class XTask
{
    /// <summary>
    /// RunAsync is an extension method that encapsulates Task.Run using a callback
    /// </summary>
    /// <param name="Code">The caller is called back on the new Task (on a different thread)</param>
    /// <returns></returns>
    public async static Task RunAsync(Action Code)
    {
        await Task.Run(() =>
        {
            Code();
        });
        return;
    }
}
```

You can add other things to the extension method, such as wrapping it in a try/catch statement, allowing the caller to tell it what type to return after completion, or an exception callback to the caller.

Adding try/catch, automatic exception logging and a callback:

```csharp
/// <summary>
/// Run Async
/// </summary>
/// <typeparam name="T">The type to return</typeparam>
/// <param name="Code">The callback to the code</param>
/// <param name="Error">The handled and logged exception if one occurs</param>
/// <returns>The type expected as a completed task</returns>
public async static Task<T> RunAsync<T>(Func<string, T> Code, Action<Exception> Error)
{
    var done = await Task<T>.Run(() =>
    {
        T result = default(T);
        try
        {
            result = Code("Code Here");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Unhandled Exception: " + ex.Message);
            Console.WriteLine(ex.StackTrace);
            Error(ex);
        }
        return result;
    });
    return done;
}

public async void HowToUse()
{
    // We now inject the type we want the async routine to return!
    var result = await RunAsync<bool>((code) =>
    {
        // Write code here; all exceptions are logged via the wrapped try/catch.
        // Return what is needed
        return someBoolValue;
    }, error =>
    {
        // Exceptions are already handled, but are sent back here for further processing
    });

    if (result)
    {
        // We can now process the result, because the code above awaited completion before
        // moving to this statement
    }
}
```

A: This is not the recommended way to solve this error, but you can suppress it quickly and it will do the job. I prefer this for prototypes or demos. Add

```csharp
CheckForIllegalCrossThreadCalls = false;
```

in the Form1() constructor.

A:

```csharp
Action y; // declared inside the class

label1.Invoke(y = () => label1.Text = "text");
```

A: There are two options for cross-thread operations: the Control.InvokeRequired property, and the SynchronizationContext Post method. Control.InvokeRequired is only useful when working with controls inherited from the Control class, while SynchronizationContext can be used anywhere. Some useful information is at the following links:

Cross Thread Update UI | .Net
Cross Thread Update UI using SynchronizationContext | .Net
{ "language": "en", "url": "https://stackoverflow.com/questions/142003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "662" }
Q: Server-based syntax highlighting in Management Studio

Why are some keywords highlighted blue and some gray in SQL Server Management Studio? And why does the UNION keyword highlight as gray when connected to a SQL Server 2000 database, but blue when connected to a SQL Server 2005 database?

A: They are reserved words. We have a table called Order in our production DB (before I started!), which is annoying.

Edit: Sorry, misread you. Blue = keyword, gray = operator. A full list of colours is at http://www.informit.com/guides/content.aspx?g=sqlserver&seqNum=177, about halfway down.
{ "language": "en", "url": "https://stackoverflow.com/questions/142005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Force a Samba process to close a file

Is there a way to force a Samba process to close a given file without killing it?

Samba opens a process for each client connection, and sometimes I see it hold open files far longer than needed. Usually I just kill the process, and the (Windows) client will reopen it the next time it accesses the share; but sometimes it's actively reading another file for a long time, and I'd like to just 'kill' one file, and not the whole connection.

edit: I've tried 'net rpc file close <fileid>', but it doesn't seem to work. Anybody know why?

edit: This is the best mention I've found of something similar. It seems to be a problem in the Win32 client, something that Microsoft servers have a workaround for, but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why. I'm accepting LuckyLindy's answer, even if it didn't solve the problem, because it's the only useful procedure in this case.

A: This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):

* See which computer is using the file (i.e. lsof | grep -i <file_name>)
* Try to open that file from the offending computer, or see if a process is hiding in Task Manager that we can close
* If no luck, have the user exit any important network programs
* Kill the user's Samba process from Linux (i.e. kill -9 <pid>)

I wish there was a better way!

A: I am creating a new answer, since my first answer really just contained more questions and was not a whole lot of help. After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check out the Samba Bug Report website and create a new bug. This is the simplest way to get someone to suggest ideas as to how to possibly fix it, and to have developers look at the issue.
LuckyLindy left a comment on my previous answer saying that this is the way it has been for 5 years now. Well, the project is open source; the best way to fix something that is wrong is by reporting it and/or providing patches.

I have also found one mailing list entry, Samba Open files, where they suggest adding posix locking = no to the configuration file. As long as you don't also have the files handed out over NFS, not locking the file should be okay; that is, if the file being held is locked.

If you wanted to, you could write a program that uses ptrace to attach to the Samba process and goes through and unlocks and closes all the files. However, be aware that this might possibly leave Samba in an unknown state, which can be more dangerous. The workaround that I have already mentioned is to periodically restart Samba. I know it is not a solution, but it might work temporarily.

A: This is probably answered here: How to close a file descriptor from another process in unix systems

At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file winds up not being looked at until the file you want to close is done being read.

A: If there isn't an explicit option in Samba, it would be impossible to externally close an open file descriptor with standard Unix interfaces.

A: Generally speaking, you can't meddle with a process's file descriptors from the outside. Yet as root you can of course do that, as seen in that Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article - I wouldn't recommend doing that on a production system though...

A: The better question in this case would be why? Why do you want to close a file early? What purpose does it ultimately serve? What are you attempting to accomplish?

A: Samba provides commands for viewing open files and closing them.
To list all open files:

```
net rpc file -U ADadmin%password
```

Replace ADadmin and password with the credentials of a Windows AD domain admin. This gives you a file id, the username of who has it open, the lock status, and the filename. You'll frequently want to filter the results by piping them through grep.

Once you've found a file you want to close, copy its file id number and use this command:

```
net rpc file close fileid -U ADadmin%password
```

A: I needed to accomplish something like this so that I could easily unmount devices I happened to be sharing. I wrote this quick bash script:

```bash
#!/bin/bash

PIDS_TO_CLOSE=$(smbstatus -L | tail -n-3 | grep "$1" | cut -d' ' -f1 - | sort -u | sed '/^$/d')

for PID in $PIDS_TO_CLOSE; do
    kill $PID
done
```

It takes a single argument, the path to close:

```
smbclose /media/drive
```

Any path that matches that argument (by grep) is closed, so you should be pretty specific with it. (Only files open through Samba are affected.) Obviously, you need root to close files opened by other users, but it works fine for files you have open. Note that, as with any other force-closing of a file, data corruption can occur. As long as the files are inactive, it should be fine though.

It's pretty ugly, but for my use case (closing whole mount points) it works well enough.
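The lsof step from the accepted answer can also be done without lsof by reading /proc directly, which is sometimes handy for scripting a check of which files a given smbd process holds open. A minimal Python sketch, assuming a Linux-style /proc layout (not portable to other Unixes, and the function name is made up for the example):

```python
import os

def open_files(pid):
    """Return {fd: target path} for a process by reading /proc/<pid>/fd (Linux only)."""
    fd_dir = "/proc/%d/fd" % pid
    files = {}
    for entry in os.listdir(fd_dir):
        try:
            files[int(entry)] = os.readlink(os.path.join(fd_dir, entry))
        except OSError:
            continue  # descriptor closed between listdir() and readlink()
    return files

if __name__ == "__main__":
    # Inspecting our own process; for smbd you would need root and its pid.
    for fd, target in sorted(open_files(os.getpid()).items()):
        print(fd, target)
```

Note that this only tells you what is open; actually forcing the descriptor closed still runs into the problems discussed in the other answers.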
{ "language": "en", "url": "https://stackoverflow.com/questions/142007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can XPath do a foreign key lookup across two subtrees of an XML?

Say I have the following XML...

```xml
<root>
  <base>
    <tent key="1" color="red"/>
    <tent key="2" color="yellow"/>
    <tent key="3" color="blue"/>
  </base>
  <bucket>
    <tent key="1"/>
    <tent key="3"/>
  </bucket>
</root>
```

...what would the XPath be that returns that the "bucket" contains "red" and "blue"?

A: If you're using XSLT, I'd recommend setting up a key:

```xml
<xsl:key name="tents" match="base/tent" use="@key" />
```

You can then get the <tent> within <base> with a particular key using:

```
key('tents', $id)
```

Then you can do:

```
key('tents', /root/bucket/tent/@key)/@color
```

or, if $bucket is a particular <bucket> element:

```
key('tents', $bucket/tent/@key)/@color
```

A: I think this will work:

```
/root/base/tent[/root/bucket/tent/@key = @key]/@color
```

A: It's not pretty. As with any lookup, you need to use current():

```
/root/bucket[/root/base/tent[@key = current()/tent/@key]/@color = 'blue'
    or /root/base/tent[@key = current()/tent/@key]/@color = 'red']
```

A: JeniT has the appropriate response/code listed here. You need to create the key before you walk the XML document, then perform matches against that key.
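JeniT's xsl:key is essentially a lookup table from @key to the matching <tent> in <base>. For readers working outside XSLT, the same join can be sketched with Python's standard xml.etree.ElementTree by building that lookup table as a dict (the function name is just for the example):

```python
import xml.etree.ElementTree as ET

XML = """
<root>
  <base>
    <tent key="1" color="red"/>
    <tent key="2" color="yellow"/>
    <tent key="3" color="blue"/>
  </base>
  <bucket>
    <tent key="1"/>
    <tent key="3"/>
  </bucket>
</root>
"""

def bucket_colors(xml_text):
    root = ET.fromstring(xml_text)
    # Equivalent of <xsl:key name="tents" match="base/tent" use="@key"/>:
    colors_by_key = {t.get("key"): t.get("color") for t in root.findall("base/tent")}
    # Equivalent of key('tents', bucket/tent/@key)/@color:
    return [colors_by_key[t.get("key")] for t in root.findall("bucket/tent")]

if __name__ == "__main__":
    print(bucket_colors(XML))  # ['red', 'blue']
```

The dict does exactly what the key does in XSLT: one pass to index base/tent by @key, then a constant-time lookup per bucket/tent instead of a nested scan.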
{ "language": "en", "url": "https://stackoverflow.com/questions/142010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are the recommended learning materials for SSIS?

Okay, you don't need to be a guru, but if you happen to have a good working knowledge of SSIS and you used some tutorials around the web to get there, then please share them. I have been trying to find some solid material (screencasts maybe), but I am having a hard time. Any solid links would be appreciated, and I will add them to this question in an aggregated format at the end. Thank you.

So far we have:

* http://blogs.conchango.com/jamiethomson
* http://sqlis.com

A: http://blogs.conchango.com/jamiethomson/ - a very, very good place to start.

A: I would recommend an excellent series of articles by Marcin Policht. There are about 50 articles at the moment, and each focuses on a different aspect of SSIS. They are pretty detailed, and I found them to be an excellent source of information on the subject of SSIS.

A: Another great resource besides Conchango (great blog btw!) is http://www.sqlis.com/

I also found these two books to be very helpful:

* Microsoft SQL Server 2005 Integration Services
* Expert SQL Server 2005 Integration Services

I read them in the order that they are listed above.

A: These links are mainly components, but they have good information resources also:

* http://www.sqlbi.com/ - Some great SSIS components for data warehousing BI
* http://www.konesans.com/products.aspx - Some more useful components

A: The current Jamie Thomson blog is located here, where he continues to write about ETL stuff: http://sqlblog.com/blogs/jamie_thomson/

A: SSIS tutorials for the beginner: http://msdn.microsoft.com/en-us/library/ms169917.aspx

BI Monkey has some good examples: http://www.bimonkey.com/

SSIS team blog: http://blogs.msdn.com/b/mattm/
{ "language": "en", "url": "https://stackoverflow.com/questions/142015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: C/C++ Structure offset

I'm looking for a piece of code that can tell me the offset of a field within a structure without allocating an instance of the structure.

IE: given

```c
struct mstct {
    int myfield;
    int myfield2;
};
```

I could write:

```c
mstct thing;
printf("offset %lu\n", (unsigned long)(&thing.myfield2 - &thing));
```

And get "offset 4" for the output. How can I do it without that mstct thing declaration/allocating one? I know that &<struct> does not always point at the first byte of the first field of the structure; I can account for that later.

A: Right, use the offsetof macro, which (at least with GNU CC) is available to both C and C++ code:

```c
offsetof(struct mstct, myfield2)
```

A: How about the standard offsetof() macro (in stddef.h)?

Edit: for people who might not have the offsetof() macro available for some reason, you can get the effect using something like:

```c
#define OFFSETOF(type, field) ((unsigned long) &(((type *) 0)->field))
```

A:

```c
printf("offset: %d\n", &((mstct*)0)->myfield2);
```
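As a quick sanity check on what offsetof() (or the cast-from-null macro) reports, Python's ctypes follows the platform's C struct layout rules and exposes each field's offset without compiling anything. A small cross-check sketch, assuming a platform where int is 4 bytes:

```python
import ctypes

class Mstct(ctypes.Structure):
    # Mirrors: struct mstct { int myfield; int myfield2; };
    _fields_ = [("myfield", ctypes.c_int),
                ("myfield2", ctypes.c_int)]

if __name__ == "__main__":
    # The same values offsetof(struct mstct, ...) reports on this platform,
    # e.g. 0 and 4 where sizeof(int) == 4.
    print(Mstct.myfield.offset, Mstct.myfield2.offset)
```

This is handy for spot-checking layout assumptions (including padding on larger structs) from a scripting environment, though for production C code offsetof() in stddef.h remains the right tool.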
{ "language": "en", "url": "https://stackoverflow.com/questions/142016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Is classic ASP still an alternative to other languages for new projects?

There are a lot of websites still using classic ASP instead of ASP.NET, but that is not the question ("never change a running project"). The question is whether it is still a first choice as a base for a new web project, or whether it would be worth switching to ASP.NET. Would you recommend that a classic ASP programmer switch over to another language? There hasn't been a single update to classic ASP since its first release, but a lot of companies are still using it for new projects. Deservedly?

A: While I would personally never willingly choose to create another ASP project over an ASP.NET project, the single biggest reason to do so is "skillset". I'd definitely recommend an ASP developer pick up ASP.NET, but if there is a project needed "now", go with what you know. Then learn ASP.NET before you have another project. :)

ASP.NET has a number of improvements over ASP, but we (the collective former classic ASP developer community) created a number of good applications using classic ASP.

A: In my mind, there is absolutely no reason to use classic ASP compared to ASP.NET WebForms or ASP.NET MVC, unless you need to integrate with existing classic ASP applications, since some things (notably session state) are not compatible across app boundaries, leading to creative workarounds (web services running on your localhost... yuck).

A: Coming from a classic ASP background, I had the same questions. Three or four years ago I took the route of moving towards ASP.NET/VB. I can tell you that the carry-over from ASP/VBScript to ASP.NET/VB is little to none. I was actually quite frustrated with the whole .NET platform for the first few months (more like the first year!) and kept rolling back to classic ASP. In the long run, I ended up starting from scratch and picked up ASP.NET/C#. Oddly enough, I felt that the syntax of C# was more natural, even though my background was in VBScript!
For regular web development, ASP.NET is like using a sledgehammer when a simple pin hammer will do. However, the sheer power behind the .NET platform makes it invaluable in an enterprise environment, where your web is often blurred with your other applications.

Given what I know now, I would have likely made the move to PHP. Not only is the programming style similar, but PHP really is dedicated to the web, whereas it is quite easy to get lost in the mass of information the .NET platform provides. And the rate at which new .NET technologies have been coming out in the recent past can be, and has become, overwhelming.

To directly answer your question: if you are staying in the realm of web development, then I'd recommend taking a hard look at PHP for your new projects.

A: I'd be very hard pressed to recommend using "classic" ASP for a new project, but, as with any new project, it should always be about choosing the tool that's best for the job, rather than using "Tool X" just because it's:

* newer
* better
* the "latest thing"

If, for example, "Company X" (a small company with 20 employees) needed a new web application for their intranet for logging holiday/leave requests, and the intranet server was an ageing NT4 box, classic ASP would be the way to go. I would recommend that they upgrade to a newer machine that could handle a supported server OS such as Win2k3, but it may well be the case that they simply don't have the budget/need.

A: For existing projects, switching to another language is not an option in my opinion, until you have to make some radical changes or additions. Reprogramming is time-consuming, and your customer will not normally pay for it.

PHP is a nice web language in my eyes, no question. But I wouldn't use it for very large projects, because it is not pre-compiled, which gives a good speed-up (my experience). But I left PHP development a few years ago; maybe there have been some good improvements since then.
Also, I wouldn't run PHP on IIS, nor would I run Apache on a Windows server. So when your whole server equipment is based on Windows, you would have to set up a new server with Linux/Apache/PHP - more costs for your company that the customers will not pay for. I agree with most answers: there is no good reason to stay with classic ASP for new projects forever, and plans should be made to change over to another language. We still program most new projects with classic ASP at the moment because we have a lot of self-made libraries to use with our CMS etc., and we would have to rewrite them with .NET/C#. Also, some new coding conventions have to be established (e.g. how to make a navigation, folder structure, ...), so we are working alongside on a sample project in .NET, and after finishing it we will only make small changes to existing projects until we have a chance to redeem the rewrite, at least partially, with another assignment from the customer. It's a slow process, but I believe it has to be done sooner or later. (And I'm a big fan of the .NET Framework, too! :-) ) A: You might consider looking at some of the differences between classic ASP and ASP.NET. Having had to maintain both in the past, I can tell you that there are numerous pleasures present in developing in .NET vs. classic ASP. Transitioning to any web-trendy language (PHP, ASP.NET, Ruby, Python) is going to be worth it, if for nothing more than to realize where classic ASP is lacking. A: I think it's high time to switch over to ASP.NET. The better object-oriented nature of ASP.NET will definitely help you reduce code management nightmares. A: For any remaining ASP holdouts, I'd actually recommend jumping ship to PHP. It's a lot more like ASP than ASP.NET, and there's no shortage of new work in it. That being said, I greatly prefer ASP.NET (both MVC and WebForms) myself - but I left ASP development about 7 years ago.
;) A: Like another poster mentioned, the skill set of the staff would be the deciding factor. If it's a classic ASP shop and something has to be done ASAP (what doesn't, right?), then it might be hard to convince management that there's a need for .NET, especially if it impacts the timeline. This is where adding in some .NET pages for one-off projects comes in handy, since it lets the dev team become familiar with the language and decide when to switch from classic ASP to .NET. Going forward, it's important to remember that while classic ASP still runs and runs well, it's not going anywhere and you can't count on any updates/changes to the language/tools going forward. That being said, from my experience, I've found that jQuery/Ajax/DOM scripting gives classic ASP pages a new shot of life and adds some of the "fancy/cool" stuff that my clients want to see on their sites. A: I wouldn't write a new app/website in classic ASP. Why? A number of reasons: 1) No classic ASP bugs are fixed any longer by MS; eventually the support will cease to exist - it has to. 2) .NET is much faster performance-wise. 3) .NET has a lot of useful extensions (AJAX for example). 4) Skillset - when thinking of a technology you have to be sure that you can find someone to maintain it easily in the future. .NET has been around for a while and it's tested, so it's safe (and recommended) to switch over, for new projects for sure. A: Well, I honestly wouldn't... With ASP.NET you can take advantage of the .NET Framework and object-oriented programming... That alone is good enough reason for me to use ASP.NET instead of classic ASP... A: Our team has twice been asked to significantly "upgrade" a classic ASP site and in both cases, it was such a nightmare that we converted/re-wrote it in ASP.NET.
I know the "Don't rewrite what's working" mantra, but knowing that we or someone else would have to continue to maintain the codebase and seeing how horrible the ASP code was to maintain, we decided to make a clean break. For that reason alone, I see nothing to recommend ever writing anything else in classic ASP. If ASP.Net is not an option, I'd go with PHP or Ruby.
{ "language": "en", "url": "https://stackoverflow.com/questions/142041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Create xml-stylesheet PI with Rails XMLBuilder I want to attach an XSLT stylesheet to an XML document that I build with XMLBuilder. This is done with a Processing Instruction that looks like <?xml-stylesheet type='text/xsl' href='/stylesheets/style.xslt' ?> Normally, I'd use the instruct! method, but :xml-stylesheet is not a valid Ruby symbol. XMLBuilder has a solution for this case for elements using the tag! method, but I don't see the equivalent for Processing Instructions. Any ideas? A: You do it like this: xm.instruct! 'xml-stylesheet', {:href=>'/stylesheets/style.xslt', :type=>'text/xsl'} Just add that line right after xm.instruct! :xml, {:encoding=>"your_encoding_type"} and before the rest of your document output code and you should be good to go. A: I'm not sure this will solve your problem since I don't know the instruct! method of that object, but :'xml-stylesheet' is a valid Ruby symbol. A: If using the atom_feed helper, you can pass this in the instruct option: atom_feed(instruct: { 'xml-stylesheet' => {type: 'text/xsl', href: 'styles.xml'} }) do |feed| feed.title "My Atom Feed" # entries... end Which results in (showing only first 3 lines): <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="styles.xml"?> <feed xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom">
{ "language": "en", "url": "https://stackoverflow.com/questions/142042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Relative path for xsl:import or xsl:include I am trying to use VBScript to do an XSLT transform on an XML object. The XSL file I'm translating includes the <xsl:import href="script.xsl"/> directive. If I use the absolute URL (http://localhost/mysite/script.xsl), it imports the style sheet fine; however, if I use the relative path (script.xsl) it reports "resource not found." I need to be able to port this amongst a set of machines, so I need to be able to use the relative URI. Any suggestions? Notes: * *VBScript file is at http://localhost/myscript.asp *first XSL file is at http://localhost/mysite/styles.xsl *second XSL file is at http://localhost/mysite/script.xsl *using the relative path mysite/script.xsl also does not work Addendum: Thanks, everyone, for your answers. The more I dig into the code that is doing this, the stranger it is. myscript.asp is a rather unusual compilation of code. What happens is styles.xsl is included in the HTML output of myscript.asp as an XML chunk (<xml src=...>) and then that chunk is loaded as a stylesheet, using VBScript, on the client side. This stylesheet is then used to transform an XML chunk that is retrieved via XMLHTTP. So the problem is the context of styles.xsl is the HTML on the client side and has no relation to where script.xsl is. A: The current directory for xsl:import, xsl:include, and the document() function is the directory containing the transform that uses them. So the xsl:import directive that you've said you're using ought to be working. The only thing I can think of that might affect this: if you use a relative path, the file's being read directly from the file system, while if you use an absolute URI, it's being retrieved from the web server. Is it possible that there's some security setting that's preventing scripts from reading files in this directory? A: @Jon I think you are very close... but shouldn't it be... <xsl:import href="/mysite/script.xsl"/> ...with a leading slash? 
A: I would tackle this by running Sysinternals Process Monitor. With this tool running, you can actually see what files your script tries to open, even if they don't exist. A: Is it possible that the "current directory" for purposes of the relative path might be the location of your ASP page, not your XSL file? In other words, if you haven't already, you might try: <xsl:import href="mysite/script.xsl"/> A: I often run into this problem because there is a custom URI resolver being used by a library I can't see (or don't know about because I didn't read pertinent documentation.) I can't remember if this is spec or not but in the Saxon/java world, the custom URI resolver gets first crack at trying to resolve URI's for include/import statements as well as the document() function. If it can't resolve the URI, a default URI resolver gives it a try, which usually never misses when then URI is absolute. So, it's probably something in the ASP engine that is using a context driven URI resolver based on the app context. A: First Attempt: I tried including script.xsl as another xml chunk and changing the import statement in every way I could imagine but without success. Final solution: Since the absolute url for includeing script.xsl worked from the beginning, my final solution was to convert style.xsl to style.asp with the correct doctype. In this file I was then able to retrieve the server name, protocol and path and echo them into the right place in the import statement using asp. Then, when this file got included in mysscript.asp, it had the correct absolute url for the server. This is a bit of a hack but the only way I found to solve this rather convoluted situation. A: You need a variable that defines the approot, or webroot when loading JS, Image or CSS files. <xsl:import href="{$approot}/somedir/script.xsl"/> or if you have the value in the XML, <xsl:import href="{/root/@approot}/somedir/script.xsl"/>
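As an aside for anyone hitting the same problem from .NET code rather than classic ASP/MSXML, the underlying rule is the same one the answers above describe: relative xsl:import/xsl:include hrefs resolve against the stylesheet's own base URI, so a stylesheet loaded without one (e.g. from an in-memory string, like the client-side XML chunk in the addendum) has nothing to resolve against. A hedged C# sketch of the two cases (paths are illustrative, not from the question):

```csharp
using System.IO;
using System.Xml;
using System.Xml.Xsl;

class TransformExample
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();

        // Case 1: loading from a file path gives the stylesheet a base URI,
        // so a relative <xsl:import href="script.xsl"/> resolves against
        // the stylesheet's own directory.
        xslt.Load(@"C:\inetpub\mysite\styles.xsl");

        // Case 2: loading from an in-memory string has no base URI by
        // default, so relative imports fail unless one is supplied
        // explicitly via the baseUri argument of XmlReader.Create.
        string xslText = File.ReadAllText(@"C:\inetpub\mysite\styles.xsl");
        using (var reader = XmlReader.Create(new StringReader(xslText),
                   new XmlReaderSettings(),
                   @"C:\inetpub\mysite\styles.xsl"))  // explicit base URI
        {
            xslt.Load(reader, XsltSettings.Default, new XmlUrlResolver());
        }

        xslt.Transform(@"C:\inetpub\mysite\data.xml",
                       @"C:\inetpub\mysite\out.html");
    }
}
```

The second form is the .NET analogue of the accepted workaround: if the stylesheet's own location is lost (as with the client-side chunk), you have to feed the resolver an absolute base yourself.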
{ "language": "en", "url": "https://stackoverflow.com/questions/142058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Which database has the best support for replication I have a fairly good feel for what MySQL replication can do. I'm wondering what other databases support replication, and how they compare to MySQL and others? Some questions I would have are: * *Is replication built in, or an add-on/plugin? *How does the replication work (high-level)? MySQL provides statement-based replication (and row-based replication in 5.1). I'm interested in how other databases compare. What gets shipped over the wire? How do changes get applied to the replicas? *Is it easy to check consistency between master and slaves? *How easy is it to get a failed replica back in sync with the master? *Performance? One thing I hate about MySQL replication is that it's single-threaded, and replicas often have trouble keeping up, since the master can be running many updates in parallel, but the replicas have to run them serially. Are there any gotchas like this in other databases? *Any other interesting features... A: MySQL's replication is weak inasmuch as one needs to sacrifice other functionality to get full master/master support (due to the restriction on supported backends). PostgreSQL's replication is weak inasmuch as only master/standby is supported built-in (using log shipping); more powerful solutions (such as Slony or Londiste) require add-on functionality. Archive log segments are shipped over the wire, which are the same records used to make sure that a standalone database is in working, consistent state on unclean startup. This is what I'm using presently, and we have resynchronization (and setup, and other functionality) fully automated. None of these approaches are fully synchronous. More complete support will be built in as of PostgreSQL 8.5. 
Log shipping does not allow databases to come out of synchronization, so there is no need for processes to test the synchronized status; bringing the two databases back into sync involves setting the backup flag on the master, rsyncing to the slave (with the database still running; this is safe), and unsetting the backup flag (and restarting the slave process) with the archive logs generated during the backup process available; my shop has this process (like all other administration tasks) automated. Performance is a nonissue, since the master has to replay the log segments internally anyhow in addition to doing other work; thus, the slaves will always be under less load than the master. Oracle's RAC (which isn't replication, properly speaking, as there's only one storage backend -- but you have multiple frontends sharing the load, and can build redundancy into that shared storage backend itself, so it's worthy of mention here) is a multi-master approach far more comprehensive than other solutions, but is extremely expensive. Database contents aren't "shipped over the wire"; instead, they're stored to the shared backend, which all the systems involved can access. Because there is only one backend, the systems cannot come out of sync. Continuent offers a third-party solution which does fully synchronous statement-level replication with support for all three of the above databases; however, the commercially supported version of their product isn't particularly cheap (though vastly less expensive). Last time I administered it, Continuent's solution required manual intervention for bringing a cluster back into sync. A: I have some experience with MS-SQL 2005 (publisher) and SQLEXPRESS (subscribers) with overseas merge replication. Here are my comments: 1 - Is replication built in, or an add-on/plugin? Built in 2 - How does the replication work (high-level)?
There are different ways to replicate, from snapshot (giving static data at the subscriber level) to transactional replication (each INSERT/DELETE/UPDATE instruction is executed on all servers). Merge replication replicates only final changes (successive UPDATEs on the same record will be made at once during replication). 3 - Is it easy to check consistency between master and slaves? Something I have never done... 4 - How easy is it to get a failed replica back in sync with the master? The basic resync process is just a double-click one... But if you have 4 GB of data to reinitialize over a 64 Kb connection, it will be a long process unless you customize it. 5 - Performance? Well... You will of course have a bottleneck somewhere, be it your connection performance, volume of data, or finally your server performance. In my configuration, users only write to subscribers, which all replicate with the main database = publisher. This server is then never solicited by final users, and its CPU is strictly dedicated to data replication (to multiple servers) and backup. Subscribers are dedicated to clients and one replication (to publisher), which gives a very interesting result in terms of data availability for final users. Replications between publisher and subscribers can be launched together. 6 - Any other interesting features... It is possible, with some anticipation, to keep on developing the database without even stopping the replication process... Tables (in an indirect way), fields and rules can be added and replicated to your subscribers. Configurations with a main publisher and multiple subscribers can be VERY cheap (when compared to some others...), as you can use the free SQLEXPRESS on the subscriber's side, even when running merge or transactional replications. A: Try Sybase SQL Anywhere. A: Just adding to the options with SQL Server (especially SQL 2008, which has Change Tracking features now). Something to consider is the Sync Framework from Microsoft.
There are a few options there, from the basic hub-and-spoke architecture which is great if you have a single central server and sometimes-connected clients, right through to peer-to-peer sync which gives you the ability to do much more advanced syncing with multiple 'master' databases. The reason you might want to consider this instead of traditional replication is that you have a lot more control from code; for example, you can get events during the sync progress for Update/Update, Update/Delete, Delete/Update, Insert/Insert conflicts and decide how to resolve them based on business logic, and if needed store the loser of the conflict's data somewhere for manual or automatic processing. Have a look at this guide to help you decide what's possible with the different methods of replication and/or sync. For the keen programmers the Sync Framework is open enough that you can have the clients connect via WCF to your WCF service, which can abstract any back-end data store (I hear some people are experimenting with using Oracle as the back-end). My team has just gone live with a large project that involves multiple SQL Express databases syncing subsets of data from a central SQL Server database via WAN and Internet (slow dial-up connection in some cases) with great success. A: MS SQL 2005 Standard Edition and above have excellent replication capabilities and tools. Take a look at: http://msdn.microsoft.com/en-us/library/ms151198(SQL.90).aspx It's pretty capable. You can even use SQL Server Express as a read-only subscriber. A: There are a lot of different things which databases CALL replication. Not all of them actually involve replication, and those which do work in vastly different ways. Some databases support several different types. MySQL supports asynchronous replication, which is very good for some things. However, there are weaknesses. Statement-based replication is not the same as what most (any?) other databases do, and doesn't always result in the expected behaviour.
Row-based replication is only supported by a non-production-ready version (but is more consistent with how other databases do it). Each database has its own take on replication, some involving other tools plugging in. A: A bit off-topic, but you might want to check Maatkit for tools to help with MySQL replication. A: All the main commercial databases have decent replication - but some are more decent than others. IBM Informix Dynamic Server (version 11 and later) is particularly good. It actually has two systems - one for high availability (HDR - high-availability data replication) and the other for distributing data (ER - enterprise replication). And the Mach 11 features (RSS - remote standalone secondary, and SDS - shared disk secondary) are excellent too, doubly so in 11.50 where you can write to either the primary or secondary of an HDR pair. (Full disclosure: I work on Informix software.) A: I haven't tried it myself, but you might also want to look into OpenBaseSQL, which seems to have some simple-to-use replication built-in. A: Another way to go is to run in a virtualized environment. I thought the data in this blog article was interesting: http://chucksblog.typepad.com/chucks_blog/2008/09/enterprise-apps.html It's from an EMC executive, so obviously it's not independent, but the experiment should be reproducible. Here's the data specific to Oracle: http://oraclestorageguy.typepad.com/oraclestorageguy/2008/09/to-rac-or-not-to-rac-reprise.html Edit: If you run virtualized, then there are ways to make anything replicate: http://chucksblog.typepad.com/chucks_blog/2008/05/vmwares-srm-cha.html
{ "language": "en", "url": "https://stackoverflow.com/questions/142068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Symbian S60 Code Editor I am looking for an app that will let me type in code ON my cellphone. I don't need to compile or anything, and it's not to program for the cellphone. Just something nice to have when an idea pops into my head. Am I completely overlooking a simple code editor for Symbian S60v3 phones? I am looking for something similar to CEdit, which is for Windows Mobile. A: I've used pyEdit on my S60v2 phone; it looks like it's supported under v3 as well. It depends on the Python runtime, so you'll need to install that first. A: Try YEdit or LightNotepad A: Try ped-s60: http://code.google.com/p/ped-s60/downloads/list A: You can use Visual Studio: http://wiki.forum.nokia.com/index.php/Using_Visual_Studio_6.0_with_S60_3rd_Edition A: You can download the very basic edition of Carbide C++ for free from Nokia. This is built on the Eclipse platform and is very good.
{ "language": "en", "url": "https://stackoverflow.com/questions/142071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is there use for the Scroll Lock button anymore? The scroll lock button seems to be a reminder of the good old green terminal days. Does anyone still use it? Should the 101-button keyboard become the 100-button keyboard? A: Microsoft Excel uses Scroll Lock to allow you to scroll the spreadsheet around with the arrow keys without changing the active/selected cell -- in line with the Scroll Lock key's original intent. A: In Microsoft Excel, Scroll Lock allows you to scroll a spreadsheet with the arrow keys without moving the active cell pointer from the currently highlighted cell. In Quattro Pro, another spreadsheet program, Scroll Lock works in a similar manner, although in contrast to Excel it's not possible to scroll the active cell pointer completely off the screen. Other programs use Scroll Lock for special functions. A: Scroll lock (at least the LED for it, anyways) is used along with the caps lock and num lock LEDs to provide diagnostic error codes when troubleshooting hardware issues on Dell laptops. This is quite useful when troubleshooting display problems that might prevent diagnostic messages from being read off the screen. A: In Excel, if you turn on scroll lock, using the arrow keys scrolls the spreadsheet instead of changing the cell the cursor is in. A: It looks mostly dead: http://en.wikipedia.org/wiki/Scroll_lock I don't remember the last time I used it... A: In many KVM situations, double-hitting scroll lock will bring up the machine selection screen. A: I use it all the time on Unix terminals. It is quite handy when something catches my eye when I'm tailing a log file. A: If you happen to have some legacy MS-DOS games around, it might be useful ;-) A: One of the visualization plugins for Winamp (a media player) uses scroll lock to prevent the visualization from rotating visualizations every x seconds. A: In Synergy you can lock your mouse to the current screen.
Very helpful when you want to share a mouse with a PC on the other side of the room. A: Well, that's yet another key to be used with AutoHotkey... (or other keyboard shortcut managers). A: Since it is connected to the LED, some clever folks have added code to make it flash (e.g. scroll lock on/off/on/off) at a given interval as a notification for new email messages (or similar). This can be "cool" if you have a laptop that is closed (or the screen is sleeping)... you just see if the light is blinking. A: I use it all the time, with the purpose for which it was made. For a variety of things.
{ "language": "en", "url": "https://stackoverflow.com/questions/142076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: C# Lambda Expressions or Delegates as Properties or Arguments I'm looking to create a ValidationRule class that validates properties on an entity type object. I'd really like to set the name of the property to inspect, and then give the class a delegate or a lambda expression that will be evaluated at runtime when the object runs its IsValid() method. Does anyone have a snippet of something like this, or any ideas on how to pass an anonymous method as an argument or property? Also, I'm not sure if I'm explaining what I'm trying to accomplish, so please ask questions if I'm not being clear. A: Really, what you want to use is Func<T,bool> where T is the type of the item you want to validate. Then you would do something like this: validator.AddValidation(item => (item.HasEnoughInformation() || item.IsEmpty())); You could store them in a List<Func<T,bool>>. A: class ValidationRule { public delegate bool Validator(); private Validator _v; public ValidationRule(Validator v) { _v = v; } public Validator Validator { get { return _v; } set { _v = value; } } public bool IsValid { get { return _v(); } } } var alwaysPasses = new ValidationRule(() => true); var alwaysFails = new ValidationRule(() => false); var textBoxHasText = new ValidationRule(() => textBox1.Text.Length > 0); That should get you started. But, really, inheritance is far more appropriate here. The problem is simply that the Validator doesn't have access to any state that it doesn't close over; this means that it isn't as reusable as, say, ValidationRules that contain their own state. Compare the following class to the previous definition of textBoxHasText.
interface IValidationRule { bool IsValid { get; } } class BoxHasText : IValidationRule { TextBox _c; public BoxHasText(TextBox c) { _c = c; } public bool IsValid { get { return _c.Text.Length > 0; } } } A: Well, simply, if you have an Entity class, and you want to use lambda expressions on that Entity to determine if something is valid (returning a boolean), you could use a Func<T, bool>. So, given an Entity: class Entity { public string MyProperty { get; set; } } You could define a ValidationRule class for that like this: class ValidationRule<T> where T : Entity { private Func<T, bool> _rule; public ValidationRule(Func<T, bool> rule) { _rule = rule; } public bool IsValid(T entity) { return _rule(entity); } } Then you could use it like this: var myEntity = new Entity() { MyProperty = "Hello World" }; var rule = new ValidationRule<Entity>(entity => entity.MyProperty == "Hello World"); var valid = rule.IsValid(myEntity); Of course, that's just one possible solution. If you remove the generic constraint above ("where T : Entity"), you could make this a generic rules engine that could be used with any POCO. You wouldn't have to derive a class for every type of usage you need. So if I wanted to use this same class on a TextBox, I could use the following (after removing the generic constraint): var rule = new ValidationRule<TextBox>(tb => tb.Text.Length > 0); rule.IsValid(myTextBox); It's pretty flexible this way. Using lambda expressions and generics together is very powerful. Instead of accepting a Func<T, bool> or an Action<T>, you could accept an Expression<Func<T, bool>> or Expression<Action<T>> and have direct access to the expression tree to automatically investigate things like the name of a method or property, what type of expression it is, etc. And people using your class would not have to change a single line of code.
A: something like: class ValidationRule { private Func<bool> validation; public ValidationRule(Func<bool> validation) { this.validation = validation; } public bool IsValid() { return validation(); } } would be more C# 3 style but is compiled to the same code as @Frank Krueger's answer. This is what you asked for, but doesn't feel right. Is there a good reason why the entity can't be extended to perform validation? A: Would a rule definition syntax like this one work for you? public static void Valid(Address address, IScope scope) { scope.Validate(() => address.Street1, StringIs.Limited(10, 256)); scope.Validate(() => address.Street2, StringIs.Limited(256)); scope.Validate(() => address.Country, Is.NotDefault); scope.Validate(() => address.Zip, StringIs.Limited(10)); switch (address.Country) { case Country.USA: scope.Validate(() => address.Zip, StringIs.Limited(5, 10)); break; case Country.France: break; case Country.Russia: scope.Validate(() => address.Zip, StringIs.Limited(6, 6)); break; default: scope.Validate(() => address.Zip, StringIs.Limited(1, 64)); break; } Check out DDD and Rule driven UI Validation in .NET for more information
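Pulling the suggestions in this thread together - the Func<T, bool> delegates from the first answer, stored in a list, plus the generic wrapper from the later one - a small rules-collection validator might look like the following sketch (type and member names are illustrative, not from any particular answer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A sketch of a reusable validator that stores its rules as
// Func<T, bool> delegates, as suggested in the answers above.
class Validator<T>
{
    private readonly List<Func<T, bool>> _rules = new List<Func<T, bool>>();

    public void AddRule(Func<T, bool> rule)
    {
        _rules.Add(rule);
    }

    // An entity is valid only if every registered rule passes.
    public bool IsValid(T entity)
    {
        return _rules.All(rule => rule(entity));
    }
}

class Entity
{
    public string MyProperty { get; set; }
}

class Program
{
    static void Main()
    {
        var validator = new Validator<Entity>();
        validator.AddRule(e => e.MyProperty != null);
        validator.AddRule(e => e.MyProperty.Length > 0);

        Console.WriteLine(validator.IsValid(new Entity { MyProperty = "x" })); // True
        Console.WriteLine(validator.IsValid(new Entity { MyProperty = "" }));  // False
    }
}
```

Because each rule closes over nothing but its parameter, the same Validator<T> instance can be reused across entities, which addresses the state-capture concern raised in the second answer.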
{ "language": "en", "url": "https://stackoverflow.com/questions/142090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Can I use OpenId with the ASP MembershipProvider? I have an ASP.NET 2.0 website that is currently using a custom MembershipProvider and the standard login control. I would like to replace the login control with the one from DotNetOpenId. I override ValidateUser, which checks the username and password, but I shouldn't need to implement this when using OpenId. Is it possible to use OpenId and still have the membership provider available to me so that I can still use it to access the currently logged-in user? Or is it the case that there is no longer any need for the provider model? A: There is no inbuilt provider available. But you can always implement your own provider. Or you can check out this one available on CodePlex. A: One web project template found at http://code.google.com/p/dotnet-membership-provider/ has a sample membership provider class that works with dotnetopenid, although you should probably do a review of it before using it in production... the last time I checked the source code it needed a bit of work. A: This is the premier .NET OpenID library, by Andrew Arnott, MSFT employee: http://code.google.com/p/dotnetopenid/ Not sure about integration with Membership.
{ "language": "en", "url": "https://stackoverflow.com/questions/142101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: VBScript Excel Formatting .xlsx files Basically I want to know how to set center alignment for a cell using VBScript... I've been googling it and can't seem to find anything that helps. A: Set excel = CreateObject("Excel.Application") excel.Workbooks.Add() ' create blank workbook Set workbook = excel.Workbooks(1) ' set A1 to be centered. workbook.Sheets(1).Cells(1,1).HorizontalAlignment = -4108 ' xlCenter constant. workbook.SaveAs("C:\NewFile.xls") excel.Quit() set excel = nothing 'If the script errors, it'll give you an orphaned excel process, so be warned. Save that as a .vbs and run it using the command prompt or double clicking. A: There are many ways to select a cell or a range of cells, but the following will work for a single cell. 'Select a Cell Range Range("D4").Select 'Set the horizontal and vertical alignment With Selection .HorizontalAlignment = xlCenter .VerticalAlignment = xlBottom End With The HorizontalAlignment options are xlLeft, xlRight, and xlCenter
{ "language": "en", "url": "https://stackoverflow.com/questions/142110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Best (free) way to store data? How about updates to the file system? I have an idea for how to solve this problem, but I wanted to know if there's something easier and more extensible to my problem. The program I'm working on has two basic forms of data: images, and the information associated with those images. The information associated with the images has been previously stored in a JET database of extreme simplicity (four tables) which turned out to be both slow and incomplete in the stored fields. We're moving to a new implementation of data storage. Given the simplicity of the data structures involved, I was thinking that a database was overkill. Each image will have information of it's own (capture parameters), will be part of a group of images which are interrelated (taken in the same thirty minute period, say), and then part of a larger group altogether (taken of the same person). Right now, I'm storing people in a dictionary with a unique identifier. Each person then has a List of the different groups of pictures, and each picture group has a List of pictures. All of these classes are serializable, and I'm just serializing and deserializing the dictionary. Fairly straightforward stuff. Images are stored separately, so that the dictionary doesn't become astronomical in size. The problem is: what happens when I need to add new information fields? Is there an easy way to setup these data structures to account for potential future revisions? In the past, the way I'd handle this in C was to create a serializable struct with lots of empty bytes (at least a k) for future extensibility, with one of the bytes in the struct indicating the version. Then, when the program read the struct, it would know which deserialization to use based on a massive switch statement (and old versions could read new data, because extraneous data would just go into fields which are ignored). Does such a scheme exist in C#? 
Like, if I have a class that's a group of String and Int objects, and then I add another String object to the struct, how can I deserialize an object from disk, and then add the string to it? Do I need to resign myself to having multiple versions of the data classes, and a factory which takes a deserialization stream and handles deserialization based on some version information stored in a base class? Or is a class like Dictionary ideal for storing this kind of information, as it will deserialize all the fields on disk automatically, and if there are new fields added in, I can just catch exceptions and substitute in blank Strings and Ints for those values? If I go with the dictionary approach, is there a speed hit associated with file read/writes as well as parameter retrieval times? I figure that if there's just fields in a class, then field retrieval is instant, but in a dictionary, there's some small overhead associated with that class. Thanks! A: SQLite is what you want. It's a fast, embeddable, single-file database that has bindings to most languages. With regards to extensibility, you can store your models with default attributes, and then have a separate table for attribute extensions for future changes. A year or two down the road, if the code is still in use, you'll be happy that 1) other developers won't have to learn a customized code structure to maintain the code, 2) you can export, view, and modify the data with standard database tools (there's an ODBC driver for SQLite files and various query tools), and 3) you'll be able to scale up to a database with minimal code changes. A: Just a wee word of warning: SQLite, Protocol Buffers, mmap et al... all very good, but you should prototype and test each implementation and make sure that you're not going to hit the same perf issues or different bottlenecks. The simplest option may be just to upsize to SQL (Express) (you may be surprised at the perf gain) and fix whatever's missing from the present database design.
Then if perf is still an issue, start investigating these other technologies. A: There's a database schema, for which I can't remember the name, that can handle this sort of situation. You basically have two tables. One table stores the variable name, and the other stores the variable value. If you want to group the variables, then add a third table that will have a one-to-many relationship with the variable name table. This setup has the advantage of letting you keep adding different variables without having to keep changing your database schema. Saved my bacon quite a few times when dealing with departments that change their mind frequently (like Marketing). The only drawback is that the variable value table will need to store the actual value as a string column (varchar or nvarchar actually). Then you have to deal with the hassle of converting the values back to their native representations. I currently maintain something like this. The variable table currently has around 800 million rows. It's still fairly fast, as I can still retrieve certain variations of values in under one second. A: My brain is fried at the moment, so I'm not sure I can advise for or against a database, but if you're looking for version-agnostic serialization, you'd be a fool to not at least check into Protocol Buffers. Here's a quick list of implementations I know about for C#/.NET:
* protobuf-net
* Proto#
* jskeet's dotnet-protobufs
A: I'm no C# programmer but I like the mmap() call and saw there is a project doing such a thing for C#. See Mmap. Structured files perform very well if tailored for a specific application but are difficult to manage and are hardly a reusable code resource. A better solution is a virtual memory-like implementation.
* Up to 4 gigabytes of information can be managed.
* Space can be optimized to the real data size.
* All the data can be viewed as a single array and accessed with read/write operations.
* No need to define a structure before storing; just use and store.
* Can be cached. Is highly reusable.
A: So go with SQLite for the following reasons:

1. You don't need to read/write the entire database from disk every time
2. Much easier to add to even if you don't leave enough placeholders at the beginning
3. Easier to search based on anything you want
4. Easier to change data in ways beyond what the application was designed for

Problems with the Dictionary approach:

1. Unless you made a smart dictionary you need to read/write the entire database every time (unless you carefully design the data structure it will be very hard to maintain backwards compatibility)
   a) if you did not leave enough placeholders, bye bye
2. It appears as if you'd have to linearly search through all the photos in order to search on one of the Capture Attributes
3. Can a picture be in more than one group? Can a picture be under more than one person? Can two people be in the same group? With dictionaries these things can get hairy....

With a database table, if you get a new attribute you can just say Alter Table Picture Add Attribute DataType. Then as long as you don't make a rule saying the attribute has to have a value, you can still load and save older versions. At the same time the newer versions can use the new attributes. Also you don't need to save the picture in the database. You could just store the path to the picture in the database. Then when the app needs the picture, just load it from a disk file. This keeps the database size smaller. Also the extra seek time to get the disk file will most likely be insignificant compared to the time to load the image. Probably your table should be Picture(PictureID, GroupID?, File Path, Capture Parameter 1, Capture Parameter 2, etc..) If you want more flexibility you could make a table CaptureParameter(PictureID, ParameterName, ParameterValue) ...
I would advise against this because it is a lot less efficient than just putting them in one table (not to mention the queries to retrieve/search the Capture Parameters would be more complicated).

Person(PersonID, Any Person Attributes like Name/Etc.)
Group(GroupID, Group Name, PersonID?)
PersonGroup?(PersonID, GroupID)
PictureGroup?(GroupID, PictureID)
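Incidentally, the CaptureParameter(PictureID, ParameterName, ParameterValue) table suggested above is the same idea as the two-table name/value schema described in an earlier answer (commonly known as entity-attribute-value). A rough sketch of both variants, with illustrative names and column types that are not taken from the question:

```sql
-- the "one wide table" variant: fixed columns, extended later with ALTER TABLE
CREATE TABLE Picture (
    PictureID         INT PRIMARY KEY,
    GroupID           INT NULL,
    FilePath          NVARCHAR(260) NOT NULL,  -- image itself stays on disk
    CaptureParameter1 NVARCHAR(100) NULL,
    CaptureParameter2 NVARCHAR(100) NULL
);
-- adding a new attribute later:
-- ALTER TABLE Picture ADD CaptureParameter3 NVARCHAR(100) NULL;

-- the flexible name/value variant: one row per (picture, parameter)
CREATE TABLE CaptureParameter (
    PictureID      INT           NOT NULL REFERENCES Picture (PictureID),
    ParameterName  NVARCHAR(50)  NOT NULL,
    ParameterValue NVARCHAR(100) NOT NULL,   -- everything stored as a string
    PRIMARY KEY (PictureID, ParameterName)
);
```

As the answers warn, the name/value form trades query simplicity and type safety for flexibility: every value comes back as a string that the application must convert to its native type.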
{ "language": "en", "url": "https://stackoverflow.com/questions/142114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Unit Tests for comparing text files in NUnit I have a class that processes 2 xml files and produces a text file. I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
* For input A and B, generate the output.
* Compare the contents of the generated file to the contents of the expected output
* When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests. Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions? Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?

class ReportGenerator
{
    string Generate(string inputPathA, string inputPathB)
    {
        //do stuff
    }
}

[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();
                //this works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    //etc...
}

Update: I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
A: As for the multiple tests with different data, use the NUnit RowTest extension:

using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.xml")]
[Row("y1.xml", "y2.xml", "y-expected.xml")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}

A: You are probably asking about testing against "gold" data. I don't know if there is a specific term for this kind of testing accepted world-wide, but this is how we do it. Create a base fixture class. It basically has "void DoTest(string fileName)", which will read the specified file into memory, execute the abstract transformation method "string Transform(string text)", then read fileName.gold from the same place and compare the transformed text with what was expected. If the content differs, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results. Then, you will have specific test fixtures, where you implement the Transform method which does the right job, and then have tests which look like this:

[Test]
public void TestX()
{
    DoTest("X");
}

[Test]
public void TestY()
{
    DoTest("Y");
}

The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations like ignoring tests, communicating tests to colleagues and so on. It is not a big deal to create a snippet which will create a test for you in a second; you will spend much more time preparing data.
Then you will also need some test data and a way your base fixture will find it; be sure to set up rules about it for the project. If a test fails, dump the actual output to a file near the gold one, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become "gold". A: Rather than call .AreEqual you could parse the two input streams yourself, keep a count of line and column and compare the contents. As soon as you find a difference, you can generate a message like...

Line 32 Column 12 - Found 'x' when 'y' was expected

You could optionally enhance that by displaying multiple lines of output:

Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests

Note, as a rule, I'd generally only generate through my code one of the two streams you have. The other I'd grab from a test/text file, having verified by eye or other method that the data contained is correct! A: I would probably write a single unit test that contains a loop. Inside the loop, I'd read 2 xml files and a diff file, and then diff the xml files (without writing it to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number. Then, you can write new tests just by adding new text files. A: I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different. PS: But in reality it was always enough for me to just do a simple read of the whole file into a string and compare the two strings. For the reporting it is enough to see that the test failed.
Then when I do the debugging I usually diff the files using Araxis Merge to see where exactly I have issues.
{ "language": "en", "url": "https://stackoverflow.com/questions/142121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are the key differences between ASP.NET webforms and MVC I know what MVC is and I work in webforms but I don't know how MVC will be that much different. I guess the code-behind model will be different. So will it be like webforms minus the code behind and instead having it in a controller? I see there are other related posts but I don't think they address this. A: The image says it all. Update: Adding the original link for completeness. http://forums.asp.net/t/1528396.aspx?MVC+vs+Web+Forms A: The video tutorials here help describe the differences. A: There is so much that can be said about your question. MVC allows for clean separation of concerns, testability, and test-driven development (TDD). It supports clean RESTful URLs and is very extensible... meaning you could swap out the view engine, the routing mechanism, and many other things that you might not like out of the box. For additional information I would suggest reading Dino Esposito's blog post entitled An Architectural View of the ASP.NET MVC Framework. In this post he compares many differences between the classic code-behind approach and MVC. A: For starters, MVC does not use the <asp:control> controls, in preference for good old standard <input>'s and the like. Thus, you don't attach "events" to a control that get executed in a code-behind like you would in ASP. It relies on the standard http POST to do that. It does not use the viewstate object. It allows for more intelligent url mapping, though now that the Routing namespace has been spun off, I wonder if it can be used for WebForms? It is much easier to automate testing of web parts. It allows for much easier separation of UI logic from the "backend" components. A: Asp.Net Web Forms:
* Asp.Net Web Forms follows a traditional event-driven development model.
* Asp.Net Web Forms has server controls.
Asp.Net MVC model:
* Asp.Net MVC is lightweight and follows an MVC (Model, View, and Controller) pattern-based development model.
* Asp.Net MVC does not support view state.
See more..
{ "language": "en", "url": "https://stackoverflow.com/questions/142132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: SQL - Query to get server's IP address Is there a query in SQL Server 2005 I can use to get the server's IP or name? A: It's in the @@SERVERNAME variable; SELECT @@SERVERNAME; A: select @@servername A: you can use a command line query and execute it in mssql: exec xp_cmdshell 'ipconfig' A: Please use this query: SELECT CONNECTIONPROPERTY('local_net_address') AS [IP] A: You can get the [hostname]\[instancename] by: SELECT @@SERVERNAME; To get only the hostname when you have hostname\instance name format: SELECT LEFT(ltrim(rtrim(@@ServerName)), Charindex('\', ltrim(rtrim(@@ServerName))) -1) Alternatively as @GilM pointed out: SELECT SERVERPROPERTY('MachineName') You can get the actual IP address using this:

create Procedure sp_get_ip_address (@ip varchar(40) out)
as
begin
    Declare @ipLine varchar(200)
    Declare @pos int
    set nocount on
    set @ip = NULL
    Create table #temp (ipLine varchar(200))
    Insert #temp exec master..xp_cmdshell 'ipconfig'
    select @ipLine = ipLine from #temp where upper(ipLine) like '%IP ADDRESS%'
    if (isnull(@ipLine, '***') != '***')
    begin
        set @pos = CharIndex(':', @ipLine, 1);
        set @ip = rtrim(ltrim(substring(@ipLine, @pos + 1, len(@ipLine) - @pos)))
    end
    drop table #temp
    set nocount off
end
go

declare @ip varchar(40)
exec sp_get_ip_address @ip out
print @ip

Source of the SQL script. A: The server might have multiple IP addresses that it is listening on. If your connection has the VIEW SERVER STATE server permission granted to it, you can run this query to get the address you have connected to SQL Server:

SELECT dec.local_net_address
FROM sys.dm_exec_connections AS dec
WHERE dec.session_id = @@SPID;

This solution does not require you to shell out to the OS via xp_cmdshell, which is a technique that should be disabled (or at least strictly secured) on a production server. It may require you to grant VIEW SERVER STATE to the appropriate login, but that is a far smaller security risk than running xp_cmdshell.
The technique mentioned by GilM for the server name is the preferred one: SELECT SERVERPROPERTY(N'MachineName'); A: A simpler way to get the machine name without the \InstanceName is: SELECT SERVERPROPERTY('MachineName') A: I know this is an old post, but perhaps this solution can be useful when you want to retrieve the IP address and TCP port from a Shared Memory connection (e.g. from a script run in SSMS locally on the server). The key is to open a secondary connection to your SQL Server using OPENROWSET, in which you specify 'tcp:' in your connection string. The rest of the code is merely building dynamic SQL to get around OPENROWSET's limitation of not being able to take variables as its parameters.

DECLARE @ip_address varchar(15)
DECLARE @tcp_port int
DECLARE @connectionstring nvarchar(max)
DECLARE @parm_definition nvarchar(max)
DECLARE @command nvarchar(max)

SET @connectionstring = N'Server=tcp:' + @@SERVERNAME + ';Trusted_Connection=yes;'
SET @parm_definition = N'@ip_address_OUT varchar(15) OUTPUT, @tcp_port_OUT int OUTPUT';
SET @command = N'SELECT @ip_address_OUT = a.local_net_address,
                        @tcp_port_OUT = a.local_tcp_port
                 FROM OPENROWSET(''SQLNCLI'', ''' + @connectionstring + ''',
                      ''SELECT local_net_address, local_tcp_port
                        FROM sys.dm_exec_connections
                        WHERE session_id = @@spid'') as a'

EXEC SP_executeSQL @command, @parm_definition,
     @ip_address_OUT = @ip_address OUTPUT,
     @tcp_port_OUT = @tcp_port OUTPUT;

SELECT @ip_address, @tcp_port

A: SELECT CONNECTIONPROPERTY('net_transport') AS net_transport,
       CONNECTIONPROPERTY('protocol_type') AS protocol_type,
       CONNECTIONPROPERTY('auth_scheme') AS auth_scheme,
       CONNECTIONPROPERTY('local_net_address') AS local_net_address,
       CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port,
       CONNECTIONPROPERTY('client_net_address') AS client_net_address

The code here will give you the IP address; this will work for a remote client request to SQL 2008 and newer.
If you have Shared Memory connections allowed, then running the above on the server itself will give you:
* "Shared Memory" as the value for 'net_transport', and
* NULL for 'local_net_address', and
* '<local machine>' will be shown in 'client_net_address'.
'client_net_address' is the address of the computer that the request originated from, whereas 'local_net_address' would be the SQL server (thus NULL over Shared Memory connections), and the address you would give to someone if they can't use the server's NetBios name or FQDN for some reason. I advise strongly against using this answer. Enabling the shell out is a very bad idea on a production SQL Server. A: Most solutions for getting the IP address via t-sql fall into these two camps:
* Run ipconfig.exe via xp_cmdshell and parse the output
* Query DMV sys.dm_exec_connections
I'm not a fan of option #1. Enabling xp_cmdshell has security drawbacks, and there's lots of parsing involved anyway. That's cumbersome. Option #2 is elegant. And it's a pure t-sql solution, which I almost always prefer. Here are two sample queries for option #2:

SELECT c.local_net_address
FROM sys.dm_exec_connections AS c
WHERE c.session_id = @@SPID;

SELECT TOP(1) c.local_net_address
FROM sys.dm_exec_connections AS c
WHERE c.local_net_address IS NOT NULL;

Sometimes, neither of the above queries works, though. Query #1 returns NULL if you're connected over Shared Memory (logged in and running SSMS on the SQL host). Query #2 may return nothing if there are no connections using a non-Shared Memory protocol. This scenario is likely when connected to a newly installed SQL instance. The solution? Force a connection over TCP/IP. To do this, create a new connection in SSMS and use the "tcp:" prefix with the server name. Then re-run either query and you'll get the IP address. A: --Try this script, it works for my needs. Reformat to read it.
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBios') as 'Is_Current_Owner'
    ,SERVERPROPERTY('MachineName') as 'MachineName'
    ,case when @@ServiceName = Right(@@Servername, len(@@ServiceName))
          then @@Servername
          else @@servername + ' \ ' + @@Servicename
     end as '@@Servername \ Servicename'
    ,CONNECTIONPROPERTY('net_transport') AS net_transport
    ,CONNECTIONPROPERTY('local_tcp_port') AS local_tcp_port
    ,dec.local_tcp_port
    ,CONNECTIONPROPERTY('local_net_address') AS local_net_address
    ,dec.local_net_address as 'dec.local_net_address'
FROM sys.dm_exec_connections AS dec
WHERE dec.session_id = @@SPID;

A: It is possible to use the host_name() function: select HOST_NAME()
{ "language": "en", "url": "https://stackoverflow.com/questions/142142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121" }
Q: Actionscript 2 functions I'm an experienced programmer but just starting out with Flash/Actionscript. I'm working on a project that for certain reasons requires me to use Actionscript 2 rather than 3. When I run the following (I just put it in frame one of a new flash project), the output is a 3 rather than a 1? I need it to be a 1. Why does the scope of the 'ii' variable continue between loops?

var fs:Array = new Array();
for (var i = 0; i < 3; i++){
    var ii = i + 1;
    fs[i] = function(){
        trace(ii);
    }
}
fs[0]();

A: Unfortunately, AS2 is not that kind of language; it doesn't have that kind of closure. Functions aren't exactly first-class citizens in AS2, and one of the results of that is that a function doesn't retain its own scope; it has to be associated with some scope when it's called (usually the same scope where the function itself is defined, unless you use a function's call or apply methods). Then when the function is executed, the scope of variables inside it is just the scope of wherever it happened to be called - in your case, the scope outside your loop. This is also why you can do things like this:

function foo() { trace( this.value ); }
objA = { value:"A" };
objB = { value:"B" };
foo.apply( objA ); // A
foo.apply( objB ); // B
objA.foo = foo;
objB.foo = foo;
objA.foo(); // A
objB.foo(); // B

If you're used to true OO languages that looks very strange, and the reason is that AS2 is ultimately a prototyped language. Everything that looks object-oriented is just a coincidence. ;D A: Unfortunately Actionscript 2.0 does not have strong scoping... especially on the timeline.

var fs:Array = new Array();
for (var i = 0; i < 3; i++){
    var ii = i + 1;
    fs[i] = function(){
        trace(ii);
    }
}
fs[0]();
trace("out of scope: " + ii + "... but still works");

A: I came up with a kind of strange solution to my own problem:

var fs:Array = new Array();
for (var i = 0; i < 3; i++){
    var ii = i + 1;
    f = function(j){
        return function(){
            trace(j);
        };
    };
    fs[i] = f(ii);
}
fs[0](); //1
fs[1](); //2
fs[2](); //3
{ "language": "en", "url": "https://stackoverflow.com/questions/142147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I locate resources on the classpath in java? Specifically stuff that ends in .hbm.xml How do I locate resources on the classpath in java? Specifically stuff that ends in .hbm.xml. My goal is to get a List of all resources on the classpath that end with ".hbm.xml". A: You have to get a classloader, and test whether it's a URLClassLoader. If so, downcast and get its URLs. From there, open each as a JarFile and look at its entries. Apply a regex to each entry and see if it's one that interests you. Clearly, this isn't fast. It's best to be given a name to be looked up in the classpath, perhaps listed in a standard file name in the META-INF directory of each classpath element, similar to the technique used by the ServiceProvider facility. Note that you can list all files with a given name on the classpath. A: Method findClasses from our ClassLoaderUtil might be a good starting point to adapt to your needs.

public class ClassLoaderUtil {

    /**
     * Recursive method used to find all classes in a given path (directory or zip file url).
     * Directories are searched recursively. (zip files are
     * Adapted from http://snippets.dzone.com/posts/show/4831 and extended to support use of JAR files
     *
     * @param path The base directory or url from which to search.
     * @param packageName The package name for classes found inside the base directory
     * @param regex an optional class name pattern. e.g. .*Test
     * @return The classes
     */
    private static TreeSet<String> findClasses(String path, String packageName, Pattern regex) throws Exception {
        TreeSet<String> classes = new TreeSet<String>();
        if (path.startsWith("file:") && path.contains("!")) {
            String[] split = path.split("!");
            URL jar = new URL(split[0]);
            ZipInputStream zip = new ZipInputStream(jar.openStream());
            ZipEntry entry;
            while ((entry = zip.getNextEntry()) != null) {
                if (entry.getName().endsWith(".class")) {
                    String className = entry.getName().replaceAll("[$].*", "").replaceAll("[.]class", "").replace('/', '.');
                    if (className.startsWith(packageName) && (regex == null || regex.matcher(className).matches()))
                        classes.add(className);
                }
            }
        }
        File dir = new File(path);
        if (!dir.exists()) {
            return classes;
        }
        File[] files = dir.listFiles();
        for (File file : files) {
            if (file.isDirectory()) {
                assert !file.getName().contains(".");
                classes.addAll(findClasses(file.getAbsolutePath(), packageName + "." + file.getName(), regex));
            } else if (file.getName().endsWith(".class")) {
                String className = packageName + '.' + file.getName().substring(0, file.getName().length() - 6);
                if (regex == null || regex.matcher(className).matches())
                    classes.add(className);
            }
        }
        return classes;
    }

    public static <T> List<T> instances(Class<? extends T>[] classList) {
        List<T> tList = new LinkedList<T>();
        for (Class<? extends T> tClass : classList) {
            try {
                // Only try to instantiate real classes.
                if (!Modifier.isAbstract(tClass.getModifiers()) && !Modifier.isInterface(tClass.getModifiers())) {
                    tList.add(tClass.newInstance());
                }
            } catch (Throwable t) {
                throw new RuntimeException(t.getMessage(), t);
            }
        }
        return tList;
    }

    public static Class[] findByPackage(String packageName, Class isAssignableFrom) {
        Class[] clazzes = getClassesInPackage(packageName, null);
        if (isAssignableFrom == null) {
            return clazzes;
        } else {
            List<Class> filteredList = new ArrayList<Class>();
            for (Class clazz : clazzes) {
                if (isAssignableFrom.isAssignableFrom(clazz))
                    filteredList.add(clazz);
            }
            return filteredList.toArray(new Class[0]);
        }
    }

    /**
     * Scans all classes accessible from the context class loader which belong to the given package and subpackages.
     * Adapted from http://snippets.dzone.com/posts/show/4831 and extended to support use of JAR files
     *
     * @param packageName The base package
     * @param regexFilter an optional class name pattern.
     * @return The classes
     */
    public static Class[] getClassesInPackage(String packageName, String regexFilter) {
        Pattern regex = null;
        if (regexFilter != null)
            regex = Pattern.compile(regexFilter);
        try {
            ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
            assert classLoader != null;
            String path = packageName.replace('.', '/');
            Enumeration<URL> resources = classLoader.getResources(path);
            List<String> dirs = new ArrayList<String>();
            while (resources.hasMoreElements()) {
                URL resource = resources.nextElement();
                dirs.add(resource.getFile());
            }
            TreeSet<String> classes = new TreeSet<String>();
            for (String directory : dirs) {
                classes.addAll(findClasses(directory, packageName, regex));
            }
            ArrayList<Class> classList = new ArrayList<Class>();
            for (String clazz : classes) {
                classList.add(Class.forName(clazz));
            }
            return classList.toArray(new Class[classes.size()]);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}

A: MyClass.class.getClassLoader().getResourceAsStream("Person.hbm.xml") is one way to look for it.
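To get at the original goal (listing every resource ending in ".hbm.xml" rather than every class), here is a rough sketch in the same spirit as the class-scanning code above. It walks the entries listed in java.class.path directly (directories and jars), so it will miss resources reachable only through custom classloaders; the class and method names are mine, not from any library:

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ResourceScanner {

    // Walk every entry on the given classpath string (directories and .jar files)
    // and collect resource names that end with the given suffix, e.g. ".hbm.xml".
    public static List<String> findResources(String classPath, String suffix) throws IOException {
        List<String> found = new ArrayList<String>();
        for (String entry : classPath.split(File.pathSeparator)) {
            File f = new File(entry);
            if (f.isDirectory()) {
                collectFromDir(f, "", suffix, found);
            } else if (f.isFile() && entry.endsWith(".jar")) {
                ZipFile zip = new ZipFile(f);
                try {
                    Enumeration<? extends ZipEntry> entries = zip.entries();
                    while (entries.hasMoreElements()) {
                        String name = entries.nextElement().getName();
                        if (name.endsWith(suffix)) {
                            found.add(name);
                        }
                    }
                } finally {
                    zip.close();
                }
            }
        }
        return found;
    }

    // Recurse into a directory, building '/'-separated resource names.
    private static void collectFromDir(File dir, String prefix, String suffix, List<String> out) {
        File[] files = dir.listFiles();
        if (files == null) {
            return;
        }
        for (File file : files) {
            if (file.isDirectory()) {
                collectFromDir(file, prefix + file.getName() + "/", suffix, out);
            } else if (file.getName().endsWith(suffix)) {
                out.add(prefix + file.getName());
            }
        }
    }

    public static void main(String[] args) throws IOException {
        for (String hit : findResources(System.getProperty("java.class.path"), ".hbm.xml")) {
            System.out.println(hit);
        }
    }
}
```

With Hibernate in mind, each hit could then be fed to something like Configuration.addResource(...). Again, ResourceScanner and findResources are illustrative names, and jars nested inside jars or custom URL-based loaders are not handled by this sketch.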
{ "language": "en", "url": "https://stackoverflow.com/questions/142151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Which object should I mock? I am writing a repository. Fetching objects is done through a DAO. Creating and updating objects is done through a Request object, which is given to a RequestHandler object (a la Command pattern). I didn't write the DAO, Request, or RequestHandler, so I can't modify them. I'm trying to write a test for this repository. I have mocked out both the DAO and RequestHandler. My goal is to have the mocked RequestHandler simply add the new or updated object to the mocked DAO. This will create the illusion that I'm talking to the DB. This way, I don't have to mock the repository for all the classes that call this repository. The problem is that the Request object is this gob of string blobs and various alphanumeric codes. I'm pretty sure XML is involved too. It's sort of a mess. Another developer is writing the code to create the Request object based on the objects being stored. And since RequestHandler takes in Requests and not the object I'm storing, it can't update the mocked DAO. So the question is: do I mock the Request too, or should I wait for the other guy, who is kind of slow, to finish his code before I write the test? Or screw it and mock out the entire repository when testing the classes that call the repository? BTW, I say "mock" not in the NMock sense, but rather like faking the DB with an in-memory collection. A: To test the repository I would suggest that you use test doubles for all of the lower layer objects. To test the classes that depend on the repository I would suggest that you use test doubles for the repository. In both cases I mean test doubles created by some mocking library (fakes where that works for the test, stubs where you need to return something to the object under test and mocks if you really have to). If you are creating an implementation of the DAO using in-memory collections to functionally replace the database in a demo or test system, that is different from unit testing the upper layers.
I have done something similar so that I can give prototypes to people and concentrate on business objects not the physical model. That isn't for unit testing though. A: You may of may not be creating a web application, but you can have a look at the NerdDinner application which uses Repository. It is a free PDF that explains how to create an application using ASP.NET MVC and can be found here: Professional ASP.NET MVC 2.0
{ "language": "en", "url": "https://stackoverflow.com/questions/142179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there an embeddable Webkit component for Windows / C# development? I've seen a few COM controls which wrap the Gecko rendering engine (GeckoFX, as well as the control shipped by Mozilla - mozctlx.dll). Is there a wrapper for Webkit that can be included in a .NET Winform application? A: There's a WebKit-Sharp component on Mono's Subversion Server. I can't find any web-viewable documentation on it, and I'm not even sure if it's WinForms or GTK# (can't grab the source from here to check at the moment), but it's probably your best bet, either way. I think this component is a CLI wrapper around WebKit for Ubuntu, so this wrapper most likely does not work on win32. Try another variant: the awesomium project, a wrapper around the Google "Chromium" project, which uses WebKit. Awesomium also has features like showing interactive web pages on 3D objects under WPF. A: There is OpenWebKitSharp, a fork of WebKit.NET 0.5 and very advanced. Details: http://code.google.com/p/open-webkit-sharp/ A: I was able to do this using CefSharp (which uses the Chromium browser). Here are a couple posts that show this in action:
* Running Chrome inside O2
* Video: Installing and running CefSharp (C# Chrome with WPF Browser)
* Video: Running Chrome Natively in O2 and VisualStudio
A: I've just released a pre-alpha version of CefSharp, my .Net bindings for the Chromium Embedded Framework. Check it out and give me your thoughts: https://github.com/chillitom/CefSharp (binary libs and example available on the downloads page) update: Released a new version, includes the ability to bind C# objects into the DOM and more. update 2: no longer alpha, the lib is used in real-world projects including Facebook Messenger for Windows, Rdio's Windows client and GitHub for Windows
Warning: not maintained anymore; the last commits are from early 2013. A: The Windows version of Qt 4 includes both WebKit and classes to create ActiveX components. It probably isn't an ideal solution if you aren't already using Qt, though. A: There's a WebKit-Sharp component on Mono's GitHub repository. I can't find any web-viewable documentation on it, and I'm not even sure if it's WinForms or GTK# (can't grab the source from here to check at the moment), but it's probably your best bet, either way. A: Berkelium is a C++ tool for making Chrome embeddable. AwesomiumDotNet is a wrapper around both Berkelium and Awesomium. BTW, the link here to Awesomium appears to be more current. A: Try this one: http://code.google.com/p/geckofx/ (hope it isn't a dupe), or this one, which I think is better: http://webkitdotnet.sourceforge.net/
{ "language": "en", "url": "https://stackoverflow.com/questions/142184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Can I Install Visual Studio 2008 Express with VS 2005? I wonder if they can work well together... A: The simple answer is yes - I have both installed on the machine I'm replying to this question from. :=) A: Yes, VS'08 and VS'05 will work nicely when installed on the same machine. Now, if only they made the .NET 2.0 support in VS'08 use the same solution/project file version number as VS'05, so you could easily move back and forth between VS versions with the same project without modification. A: I have both running on my machine and all seems to be fine after 2 weeks of use... A: Yes, they work well together. Refer to Installing Visual Studio Versions Side-by-Side on MSDN for more information. You may need to install VS2005 before VS2008 though, or your file associations may end up not working correctly.
{ "language": "en", "url": "https://stackoverflow.com/questions/142197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Where do you store your misc project settings? Some projects have miscellaneous settings such as: "AllowPayments", "ShowSideBar", "SectionTitle". Really, things that don't necessarily fit into other objects. How do you guys store these kinds of values? ApplicationSettings? Flat file? Database table? How do you access them? Static object with properties? DB call? Would either of these change if you were in a load-balanced environment where you would have to synchronize the files across multiple servers? Environment ASP.NET 2.0 A: App.Config, or a custom XML configuration file and config service. Key-value pair mappings keep things very simple. A: For me it depends on the context of the setting. If it relates to the data and the domain, I store it in the database; if it relates to the application, I store it in the web.config. A: Since you didn't say which environment you use: In .NET applications, I use the ApplicationSettings system from Visual Studio. This way you can configure the settings with default values in the designer, and a strongly-typed class to access the values is generated. I usually add a second ApplicationSettings element with the name Persistent in addition to the default Settings, with anything the user configures going in the Settings object and anything I just save (e.g. window position) in the Persistent object. This goes for desktop applications.
{ "language": "en", "url": "https://stackoverflow.com/questions/142204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .NET unable to connect to Oracle DB using Oracle proxy user I am setting up a test version of my website against a new schema. I am trying to connect using the proxy connection and am getting the following error:

ORA-28150: proxy not authorized to connect as client

My connect string has the following form:

Data Source=Instance; User Id=user; Proxy User Id=prxy_usr; Proxy Password=prxy_pass; Min Pool Size = 0; Connection Timeout = 30

Do you have any idea what might be wrong? A: EddieAwad's answer was correct, but here is the specific code to run:

ALTER USER username GRANT CONNECT THROUGH proxyUserName;

The THROUGH keyword is the part I couldn't find in the documentation. A: According to the docs: Grant the proxy user permission to perform actions on behalf of the client by using the ALTER USER ... GRANT CONNECT command. A: Here is the ALTER USER documentation. You will find the CONNECT THROUGH clause there, as well as some proxy user examples.
{ "language": "en", "url": "https://stackoverflow.com/questions/142211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Under what circumstances are dynamic languages not appropriate? What factors indicate that a project's solution should not be coded in a dynamic language? A: Familiarity and willingness of the programmers to work with the language. Your dynamic language is probably my static language. A: System level development is a key group of software that typically shouldn't be in dynamic languages. (drivers, kernel level stuff, etc). Basically anything that needs to have every ounce of performance or low level hardware access, should be in a lower level language. Another indicator is if it is highly number crunching, like scientific data number crunching. That is, if it needs to run fast and do number crunching. I think a common theme is processor intensive problems... in which case you will easily see the performance differences, and you will find that the dynamic language just can't give you the power to use the hardware effectively. That said, if you are doing processor intensive work and you don't mind the hit in performance, then you could still potentially use a dynamic language. Update: Note that for number crunching, I mean really long running number crunching in scientific arena where the process is running for hours or days... in this case a 2x performance gain is GINORMOUS... if it is on a much smaller scale, then dynamic languages could still be of use. A: To a large degree, programming language is a style choice. Use the language you want to use and you'll be maximally productive and happy. If for some reason that's not possible, then hopefully your ultimate decision will be based on something meaningful, like a platform you have to run against or real, empirical performance numbers, rather than someone else's arbitrary style choice. A: Video card device drivers A: Speed is typically the primary answer. Though this is becoming less of an issue these days. A: when speed is crucial. 
Dynamic languages are getting faster, but are still not close to the performance of a compiled language. A: Interop is absolutely possible with dynamic languages. (Remember classic Visual Basic, which has "late binding"?) It requires the COM component to be compiled with some extras, though, to help callers bind by name. I don't think that number crunching has to be statically compiled; most often it is a matter of how you solve the problem. Matlab is a good example made for number crunching, and it has a non-compiled language. Matlab, however, has a very specific runtime for numbers and matrices. A: I believe you should always opt for a statically-typed language where possible. I'm not saying C# or Java have good static type systems, but C# is getting close. Good type inference is the key, because it will give you the benefits seen in dynamic languages while still giving you the security and features of statically-typed ones. Problem solved - no more flamewars. A: System-level code for embedded systems. A possible problem is that dynamic languages sometimes hide the performance implications of a single easy-looking statement. Like, say, this Perl statement: @contents = <FILE>; If FILE is a few megabytes, then that is one resource-consuming statement - you might exhaust your heap, or cause a watchdog timeout, or generally slow down the response of the embedded system. If you want to "program closer to the metal", you probably want to be using a statically typed and "middle-level" language. A: How about interop? Is it possible to call a COM component from Ruby or Python?
{ "language": "en", "url": "https://stackoverflow.com/questions/142223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there a dynamic language based .NET build tool? I might be starting a new project in .NET soon, and would prefer not to use an XML-based build tool like NAnt or MSBuild. I like Rake, but I'm thinking it may be better to have something based on a .NET dynamic language (such as Boo), similar in spirit to gant or BuildR for Java. Is there such a thing for .NET? If not, what do you recommend? A: There is always the Boo Build System, or "Boobs" for short (yes, it's a silly name), which looks very similar to Rake. Ayende has written about this previously in Introducing the Boobs Build System and shows a nice example of the syntax. Boo is written in C# and has a really nifty compiler that can be modified at runtime for doing all sorts of domain-specific language (DSL) tricks. A: You should really check out FinalBuilder. I evaluated it quite a bit last year and really liked it, although in the end we deployed TFS 2008 and so we're using TeamBuild to get a lot of the integration goodness. But really, FinalBuilder had tons of prebuilt build activities, great support for lots of environments and tools, and a nice IDE for designing it all. A: You could try FluentBuild. For my money, using UppercuT (which uses NAnt in the back end) is an optimal solution because of everything that it can do for me without much work to set it up. http://code.google.com/p/uppercut/ Some good explanations here: UppercuT A: You should try NUBuild. I use it on a regular basis and I work with around 75 projects that I need to build with every code change/release. NUBuild is extremely fast, easy to set up (you do it only once) and gives you the power of a complete build server at your fingertips by letting you do 'local builds'. It also has lots of other advanced features and functionality.
You can find more detail on the project site (on CodePlex): http://nubuild.codeplex.com/ A: I haven't heard of anything like that, but maybe you could port Rake to IronRuby, and then extend it to understand building C#/VB.NET and running other .NET tools. A: Using a non-industry-standard build system is something you should only really do if the industry-standard build systems don't do something you need. Is there some functionality that NAnt/MSBuild don't provide that you expect to need? A: Since you mention it, I just got started with IronRuby and Rake on a current project. Because I don't want my team to have to install MRI, I decided to go with an xcopy deployment of IronRuby that I had preloaded with Rake. Not sure if this is exactly what you're after, but check out my blog post on the early findings: http://dylandoesdigits.blogspot.com/2009/11/rake-for-net-projects.html I think it meets your requirements: .NET-based, built on the dynamic language runtime, no XML. As a current MSBuild angle-bracket co-sufferer, I'm pretty happy with how little work it's been thus far.
{ "language": "en", "url": "https://stackoverflow.com/questions/142225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Integrating External Sources in a Build Where I work I have several projects in separate repositories. Each project shares a lot of common code that I want to pull out into a distinct project. I'm thinking of naming the project with the following scheme: Company.Department.Common or Company.Department.Utility. I've constructed a project for this and it has an automated build script (as does every other project). My question is how I should refactor my existing projects to depend on this new project. One method that is quite basic, but not exactly what I am looking for, is to simply build my Utility project, copy the DLL to the lib folder in my consuming project, and check that DLL in with the consuming project. I personally feel that method is quite poor. I would rather have a reference to my Utility project and perform an svn update and build of the Utility project before the build of the consuming project. FYI, the kind of code that's in the Utility project includes logging facilities, BDD unit-testing classes, IoC facilities, and common Company.Department-focused classes. Hope my question isn't too vague, but with some answers I may be able to sharpen the focus on exactly what I would like to do. Lastly, this is for .NET projects, using NAnt as the build script and svn for code versioning. A: Greg is right in that you will probably want to use the svn:external feature. I created a step-by-step guide on how to do this on Windows with TortoiseSVN. I found it quite confusing the first couple of times I used it. I created the guide so that I can look it up, because I don't do it all the time. Using svn:externals with Windows A: Have you checked out the svn:externals feature? This allows you to make a different repository appear as a subdirectory of a higher-level repository. If I understand what you're trying to do, this might help.
{ "language": "en", "url": "https://stackoverflow.com/questions/142237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How would you store and query hours of operation? We're building an app that stores "hours of operation" for various businesses. What is the easiest way to represent this data so you can easily check if an item is open? Some options:
* Segment out blocks (every 15 minutes) that you can mark "open/closed". Checking involves seeing if the "open" bit is set for the desired time (a bit like a train schedule).
* Storing a list of time ranges (11am-2pm, 5-7pm, etc.) and checking whether the current time falls in any specified range (this is what our brain does when parsing the strings above).
Does anyone have experience in storing and querying timetable information and any advice to give? (There are all sorts of crazy corner cases like "closed the first Tuesday of the month", but we'll leave that for another day.) A: Store each contiguous block of time as a start time and a duration; this makes it easier to check when the hours cross date boundaries. If you're certain that hours of operation will never cross date boundaries (i.e. there will never be an open-all-night sale or a 72-hour marathon event) then start/end times will suffice. A: The most flexible solution might be to use the bitset approach. There are 168 hours in a week, so there are 672 15-minute periods. That's only 84 bytes worth of space, which should be tolerable. A: I'd use a table like this:

BusinessID | weekDay | OpenTime | CloseTime
-------------------------------------------
1          | 1       | 9        | 13
1          | 2       | 5        | 18
1          | 3       | 5        | 18
1          | 4       | 5        | 18
1          | 5       | 5        | 18
1          | 6       | 5        | 18
1          | 7       | 5        | 18

Here we have a business that has regular hours of 5 to 18 (6 PM), but shorter hours on Sunday.
A query for "is it open" would be (pseudo-SQL):

SELECT @isOpen = CAST(
    (SELECT 1 FROM tblHours
     WHERE BusinessId = @id
       AND weekDay = @Day
       AND CONVERT(CurrentTime to 24 hour) BETWEEN OpenTime AND CloseTime)
    AS BIT);

If you need to store edge cases, then just have 365 entries, one per day... it's really not that much in the grand scheme of things; place an index on the day and businessId columns. Don't forget to store the business's timezone in a separate table (normalize!), and transform between your time and it before making these comparisons. A: OK, I'll throw in on this for what it's worth. I need to handle quite a few things:
* Fast / performant queries
* Any increment of time: 9:01 PM, 12:14, etc.
* International (?) - not sure if this is an issue even with timezones, at least in my case, but someone more versed here feel free to chime in
* Open - close spanning into the next day (open at noon, close at 2:00 AM)
* Multiple timespans per day
* Ability to override specific days (holidays, whatever)
* Ability for overrides to be recurring
* Ability to query for any point in time and get the businesses open (now, a future time, a past time)
* Ability to easily exclude businesses closing soon (filter out businesses closing in 30 minutes; you don't want to make your users 'that guy' who shows up 5 minutes before closing in the food/beverage industry)
I like a lot of the approaches presented and I'm borrowing from a few of them. In my website/project I need to take into consideration that I may have millions of businesses, and a few of the approaches here don't seem to scale well to me personally. Here's what I propose for an algorithm and structure. We have to make some concrete assumptions that hold across the globe, anywhere, any time: there are 7 days in a week; there are 1440 minutes in one day; and there are a finite number of possible permutations of open/closed minutes.
Not concrete but decent assumptions: many permutations of open/closed minutes will be shared across businesses, reducing the total permutations actually stored. There was a time in my life I could easily calculate the actual possible combinations for this approach, but if someone could assist / thinks it would be useful, that would be great. I propose 3 tables. Before you stop reading, consider that in the real world 2 of these tables will be small enough to cache neatly. This approach isn't going to be for everyone either, due to the sheer complexity of code required to map a UI to the data model and back again if needed. Your mileage and needs may vary. This is an attempt at a reasonable 'enterprise'-level solution, whatever that means.

HoursOfOperations table:

ID | OPEN (minute of day) | CLOSE (minute of day)
1  | 540                  | 1020     (example: 9 AM - 5 PM)
2  | 545                  | 1021     (example: edge case 9:05 AM - 5:01 PM (weirdos))
etc.

HoursOfOperations doesn't care about days, just open, close and uniqueness. There can be only a single entry per open/close combination. Now, depending on your environment, either this entire table can be cached, or it could be cached for the current hour of the day, etc. At any rate, you shouldn't need to query this table for every operation. Depending on your storage solution, I envision every column in this table indexed for performance. As time progresses, this table sees exponentially fewer INSERTs. Really though, dealing with this table should mostly be an in-process operation (RAM).

Business2HoursMap

Note: in my example I'm storing "Day" as a bit-flag field/column. This is largely due to my needs and the advancement of LINQ / Flags enums in C#. There's nothing stopping you from expanding this to 7 bit fields. Both approaches should be relatively similar in both storage logic and query approach.
Another note: I'm not entering into a semantics argument on "every table needs a PK ID column"; please find another forum for that.

BusinessID | HoursID | Day (or, if you prefer, split into: BIT Monday, BIT Tuesday, ...)
1          | 1       | 1111111   (this business is open 9-5 every day of the week)
2          | 2       | 1111110   (this business is open 9:05 - 5:01 M-Sat (Monday = day 1))

The reason this is easy to query is that we can always determine quite easily the MOTD (minute of the day) that we're after. If I want to know what's open at 5 PM tomorrow, I grab all HoursOfOperations IDs WHERE Close >= 1020. Unless I'm looking for a time range, Open becomes insignificant. If you don't want to show businesses closing in the next half-hour, just adjust your incoming time accordingly (search for 5:30 PM (1050), not 5:00 PM (1020)). The second query would naturally be 'give me all businesses with HoursID IN (1, 2, 3, 4, 5)', etc. This should probably raise a red flag, as there are limitations to this approach. However, if someone can answer the actual permutations question above, we may be able to pull the red flag down. Consider that we only need the possible permutations on one side of the equation at a time, either open or close. Considering we've got our first table cached, that's a quick operation. The second operation is querying this potentially large-row table, but we're searching very small (SMALLINT), hopefully indexed, columns. Now, you may be seeing the complexity on the code side of things. I'm targeting mostly bars in my particular project, so it's very safe to assume that I will have a considerable number of businesses with hours such as "11:00 AM - 2:00 AM (the next day)". That would indeed be 2 entries in both the HoursOfOperations table and the Business2HoursMap table. E.g. a bar that is open from 11:00 AM - 2:00 AM will have 2 references to the HoursOfOperations table: 660 - 1440 (11:00 AM - midnight) and 0 - 120 (midnight - 2:00 AM).
Those references would be reflected into the actual days in the Business2HoursMap table as 2 entries in our simplistic case: 1 entry = all days with Hours reference #1, another = all days with reference #2. Hope that makes sense; it's been a long day.

Overriding on special days / holidays / whatever. Overrides are, by nature, date-based, not day-of-week based. I think this is where some of the approaches try to shove the proverbial round peg into a square hole. We need another table:

HoursID | BusinessID | Day | Month | Year
1       | 2          | 1   | 1     | NULL

This can certainly get more complex if you needed something like "on every second Tuesday, this company goes fishing for 4 hours". However, what this will allow us to do quite easily is: 1 - overrides, 2 - reasonable recurring overrides. E.g. if Year IS NULL, then every year on New Year's Day this weirdo bar is open from 9:00 AM to 5:00 PM, keeping in line with our data examples above. If Year were set, it's only for 2013. If Month is NULL, it's every first day of the month. Again, this won't handle every scheduling scenario with NULL columns alone, but theoretically you could handle just about anything by relying on a long sequence of absolute dates if needed. Again, I would cache this table on a rolling-day basis. I just can't realistically see the rows for this table in a single-day snapshot being very large, at least for my needs. I would check this table first as it is, well, an override, and it saves a query against the much larger Business2HoursMap table on the storage side. Interesting problem. I'm really surprised this is the first time I've needed to think this through. As always, very keen on different insights, approaches or flaws in my approach.
Then pick the solution that best fits your situation (if it's liable to change I'd definitely go for the timespans). You could store them as a timespan, and use segments in your application. That way you have the easy input using blocks, while keeping the flexibility to change in your datastore. A: To add to what Johnathan Holland said, I would allow for multiple entries for the same day. I would also allow for decimal time, or another column for minutes. Why? Many restaurants and other businesses around the world have lunch and/or afternoon breaks. Also, many restaurants (two that I know of near my house) close at odd times that aren't 15-minute increments: one closes at 9:40 PM on Sundays, and one closes at 1:40 AM. There is also the issue of holiday hours, such as stores closing early on Thanksgiving Day, for example, so you need a calendar-based override. Perhaps what can be done is a date/time open, date/time close, such as this:

businessID | datetime              | type
==========================================
1          | 10/1/2008 10:30:00 AM | 1
1          | 10/1/2008 02:45:00 PM | 0
1          | 10/1/2008 05:15:00 PM | 1
1          | 10/2/2008 02:00:00 AM | 0
1          | 10/2/2008 10:30:00 AM | 1
etc.

(type: 1 being open and 0 closed.) And have all the days in the coming year or two precalculated in advance. Note that you would only have 3 columns: int, date/time, bit, so the data consumption should be minimal. This will also allow you to modify specific dates for odd hours on special days, as they become known. It also takes care of crossing over midnight, as well as 12/24-hour conversions. It is also timezone-agnostic. If you store start time and duration, when you calculate the end time, is your machine going to give you the TZ-adjusted time? Is that what you want? More code. As far as querying for open/closed status: query the date/time in question:

select top 1 type from thehours
where datetimefield <= somedatetime and businessID = somebusinessid
order by datetimefield desc

then look at "type".
If it is 1, it's open; if 0, it's closed. PS: I was in retail for 10 years, so I am familiar with small-business crazy-hours problems. A: The segment blocks are better; just make sure you give the user an easy way to set them. Click and drag is good. Any other system (like ranges) is going to be really annoying when you cross the midnight boundary. As for how you store them, in C++ bitfields would probably be best. In most other languages an array might be better (lots of wasted space, but it would run faster and be easier to comprehend). A: I would think a little about those edge cases right now, because they are going to inform whether you have a base configuration plus overlay, or complete static storage of opening times, or whatever. There are so many exceptions - and on a regular basis (like snow days, and irregular holidays like Easter and Good Friday) - that if this is expected to be a reliable representation of reality (as opposed to a good guess), you'll need to address it pretty soon in the architecture. A: How about something like this:

Store Hours table:
Business_id (int)
Start_Time (time)
End_Time (time)
Condition (varchar/string)
Open (bit)

'Condition' is a lambda expression (text for a 'where' clause). Build the query dynamically. So for a particular business you select all of the open/close times:

Let Query1 = select count(open) from store_hours
             where @t between start_time and end_time and open = true
             and business_id = @id and (.. dynamically built expression)

Let Query2 = select count(closed) from store_hours
             where @t between start_time and end_time and open = false
             and business_id = @id and (.. dynamically built expression)

So in the end you want something like:

select cast(Query1 as bit) & ~cast(Query2 as bit)

If the result of the last query is 1 then the store is open at time t; otherwise it is closed. Now you just need a friendly interface that can generate your where clauses (lambda expressions) for you.
The only other corner case that I can think of is what happens if a store is open from, say, 7am to 2am on one date but closes at 11pm on the following date. Your system should be able to handle that as well, by smartly splitting up the times between the two days. A: There is surely no need to conserve memory here, but perhaps a need for clean and comprehensible code. "Bit twiddling" is not, IMHO, the way to go. We need a set container here, which holds any number of unique items and can determine quickly and easily whether an item is a member or not. The setup requires care, but in routine use a single line of simply understood code determines whether you are open or closed.
Concept: assign an index number to every 15-minute block, starting at, say, midnight Sunday.
Initialize: insert into a set the index number of every 15-minute block during which you are open (assuming you are open fewer hours than you are closed).
Use: subtract midnight of the previous Sunday from the time of interest, in minutes, and divide by 15. If this number is present in the set, you are open.
{ "language": "en", "url": "https://stackoverflow.com/questions/142239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Explicit code parallelism in C++ Out-of-order execution in CPUs means that a CPU can reorder instructions to gain better performance, and it means the CPU has to do some very nifty bookkeeping and such. There are other processor approaches too, such as hyper-threading. Some fancy compilers understand the (un)interrelatedness of instructions to a limited extent, and will automatically interleave instruction flows (probably over a longer window than the CPU sees) to better utilise the processor. Deliberate compile-time interleaving of floating-point and integer instructions is another example of this. Now I have a highly-parallel task. And I typically have an ageing single-core x86 processor without hyper-threading. Is there a straightforward way to get the body of my 'for' loop for this highly-parallel task interleaved so that two (or more) iterations are done together? (This is slightly different from 'loop unwinding' as I understand it.) My task is a 'virtual machine' running through a set of instructions, which I'll really simplify for illustration as:

void run(int num)
{
    for(int n=0; n<num; n++)
    {
        vm_t data(n);
        for(int i=0; i<data.len(); i++)
        {
            data.insn(i).parse();
            data.insn(i).eval();
        }
    }
}

So the execution trail might look like this:

data(1) insn(0) parse
data(1) insn(0) eval
data(1) insn(1) parse
...
data(2) insn(1) eval
data(2) insn(2) parse
data(2) insn(2) eval

Now, what I'd like is to be able to do two (or more) iterations explicitly in parallel:

data(1) insn(0) parse
data(2) insn(0) parse \ processor can do OOO as these two flow in
data(1) insn(0) eval  /
data(2) insn(0) eval  \ OOO opportunity here too
data(1) insn(1) parse /
data(2) insn(1) parse

I know from profiling (e.g. using Callgrind with --simulate-cache=yes) that parsing is about random memory accesses (cache misses) and eval is about doing ops in registers and then writing results back. Each step is several thousand instructions long.
So if I can intermingle the two steps for two iterations at once, the processor will hopefully have something to do whilst the cache misses of the parse step are occurring... Is there some C++ template madness to get this kind of explicit parallelism generated? Of course I can do the interleaving - and even staggering - myself in code, but it makes for much less readable code. And if I really want unreadable, I can go so far as assembler! But surely there is some pattern for this kind of thing? A: Given optimizing compilers and pipelined processors, I would suggest you just write clear, readable code. A: Your best plan may be to look into OpenMP. It basically allows you to insert "pragmas" into your code which tell the compiler how it can split work between processors. A: Hyperthreading is a much higher-level system than instruction reordering. It makes the processor look like two processors to the operating system, so you'd need to use an actual threading library to take advantage of it. The same thing naturally applies to multicore processors. If you don't want to use low-level threading libraries and instead want to use a task-based parallel system (and it sounds like that's what you're after) I'd suggest looking at OpenMP or Intel's Threading Building Blocks. TBB is a library, so it can be used with any modern C++ compiler. OpenMP is a set of compiler extensions, so you need a compiler that supports it. GCC/G++ does from version 4.2 onwards. Recent versions of the Intel and Microsoft compilers also support it. I don't know about any others, though. EDIT: One other note. Using a system like TBB or OpenMP will scale the processing as much as possible - that is, if you have 100 objects to work on, they'll get split about 50/50 in a two-core system, 25/25/25/25 in a four-core system, etc.
A: Modern processors like the Core 2 have an enormous instruction reorder buffer on the order of nearly 100 instructions; even if the compiler is rather dumb the CPU can still make up for it. The main issue would be if the code used a lot of registers, in which case the register pressure could force the code to be executed in sequence even if theoretically it could be done in parallel. A: There is no support for parallel execution in the current C++ standard. This will change for the next version of the standard, due out next year or so. However, I don't see what you are trying to accomplish. Are you referring to one single-core processor, or multiple processors or cores? If you have only one core, you should do whatever gets the fewest cache misses, which means whatever approach uses the smallest memory working set. This would probably be either doing all the parsing followed by all the evaluation, or doing the parsing and evaluation alternately. If you have two cores, and want to use them efficiently, you're going to have to either use a particularly smart compiler or language extensions. Is there one particular operating system you're developing for, or should this be for multiple systems? A: It sounds like you ran into the same problem chip designers face: Executing a single instruction takes a lot of effort, but it involves a bunch of different steps that can be strung together in an execution pipeline. (It is easier to execute things in parallel when you can build them out of separate blocks of hardware.) The most obvious way is to split each task into different threads. You might want to create a single thread to execute each instruction to completion, or create one thread for each of your two execution steps and pass data between them. In either case, you'll have to be very careful with how you share data between threads and make sure to handle the case where one instruction affects the result of the following instruction. 
Even though you only have one core and only one thread can be running at any given time, your operating system should be able to schedule compute-intense threads while other threads are waiting for their cache misses. (A few hours of your time would probably pay for a single very fast computer, but if you're trying to deploy it widely on cheap hardware it might make sense to consider the problem the way you're looking at it. Regardless, it's an interesting problem to consider.) A: Take a look at cilk. It's an extension to ANSI C that has some nice constructs for writing parallelized code in C. However, since it's an extension of C, it has very limited compiler support, and can be tricky to work with. A: This answer was written assuming the question does not contain the part "And I typically have an ageing single-core x86 processor without hyper-threading.". I hope it might help other people who want to parallelize highly-parallel tasks, but target dual/multicore CPUs. As already posted in another answer, OpenMP is a portable way to do this. However my experience is OpenMP overhead is quite high and it is very easy to beat it by rolling a DIY (Do It Yourself) implementation. Hopefully OpenMP will improve over time, but as it is now, I would not recommend using it for anything other than prototyping. Given the nature of your task, what you want to do is most likely data-based parallelism, which in my experience is quite easy - the programming style can be very similar to single-core code, because you know what other threads are doing, which makes maintaining thread safety a lot easier - an approach which worked for me: avoid dependencies and call only thread-safe functions from the loop. To create a DIY OpenMP parallel loop you need to: * *as a preparation create a serial for loop template and change your code to use functors to implement the loop bodies.
This can be tedious, as you need to pass all references across the functor object *create a virtual JobItem interface for the functor, and inherit your functors from this interface *create a thread function which is able to process individual JobItem objects *create a thread pool using this thread function *experiment with various synchronization primitives to see which works best for you. While a semaphore is very easy to use, its overhead is quite significant and if your loop body is very short, you do not want to pay this overhead for each loop iteration. What worked great for me was a combination of a manual-reset event + atomic (interlocked) counter as a much faster alternative. *experiment with various JobItem scheduling strategies. If you have a long enough loop, it is better if each thread picks up multiple successive JobItems at a time. This reduces the synchronization overhead and at the same time it makes the threads more cache-friendly. You may also want to do this in some dynamic way, reducing the length of the scheduled sequence as you are exhausting your tasks, or letting individual threads steal items from other threads' schedules.
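The DIY recipe above (a JobItem interface, a pool of worker threads, and batched scheduling) is language-neutral, so here is a compressed sketch of its shape in Python rather than C++. This is an illustration only: the class and function names are invented, and Python's GIL means real CPU-bound speedup would need processes or a C++ port - the point here is the structure, not the performance.

```python
import queue
import threading

class JobItem:
    """The virtual interface from step 2 of the recipe above."""
    def run(self):
        raise NotImplementedError

class SquareJob(JobItem):
    """Hypothetical functor: squares one element of a shared list in place."""
    def __init__(self, data, i):
        self.data, self.i = data, i
    def run(self):
        self.data[self.i] = self.data[self.i] ** 2

def worker(jobs):
    """Thread function processing JobItems (step 3); batches cut sync overhead (step 6)."""
    while True:
        batch = jobs.get()
        if batch is None:          # poison pill: shut this worker down
            return
        for job in batch:
            job.run()
        jobs.task_done()

def parallel_for(job_items, n_threads=4, chunk=8):
    jobs = queue.Queue()
    pool = [threading.Thread(target=worker, args=(jobs,)) for _ in range(n_threads)]
    for t in pool:
        t.start()
    # Schedule several successive items per pickup, as the last step advises.
    for i in range(0, len(job_items), chunk):
        jobs.put(job_items[i:i + chunk])
    jobs.join()                    # wait until every batch is done
    for _ in pool:
        jobs.put(None)
    for t in pool:
        t.join()

data = list(range(10))
parallel_for([SquareJob(data, i) for i in range(10)])
print(data)  # squares of 0..9
```

The queue here plays the role of the semaphore/event + interlocked counter discussed above; a C++ version would swap it for whichever primitive benchmarked fastest.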
{ "language": "en", "url": "https://stackoverflow.com/questions/142240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best way to pick a random row from a table in MySQL? Possible Duplicate: quick selection of a random row from a large table in mysql I have seen random rows pulled using queries like this, which are quite inefficient for large data sets. SELECT id FROM table ORDER BY RAND() LIMIT 1 I have also seen various other RDBMS-specific solutions that don't work with MySQL. The best thing I can think of doing off-hand is using two queries and doing something like this. * *Get the number of rows in the table. MyISAM tables store the row count so this is very fast. *Calculate a random number between 0 and rowcount - 1. *Select a row ordered by primary key, with a LIMIT randnum, 1 Here's the SQL: SELECT COUNT(*) FROM table; SELECT id FROM table ORDER BY id LIMIT randnum, 1; Does anyone have a better idea?
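The count-then-offset approach in the question is easy to try outside MySQL; the sketch below uses Python's built-in sqlite3 purely to demonstrate the logic (table and column names are invented for the demo, and SQLite's `LIMIT 1 OFFSET n` corresponds to MySQL's `LIMIT n, 1`):

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 101)])

# Step 1: get the row count (cheap on MyISAM, a scan elsewhere).
(rowcount,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()

# Step 2: pick a random offset in [0, rowcount - 1].
randnum = random.randrange(rowcount)

# Step 3: fetch exactly one row at that offset, ordered by primary key.
(row_id,) = conn.execute(
    "SELECT id FROM t ORDER BY id LIMIT 1 OFFSET ?", (randnum,)).fetchone()
print(row_id)
```

Note the caveat implicit in the question: a large offset still makes the engine walk past `randnum` rows, so this is fast for small offsets but not free.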
{ "language": "en", "url": "https://stackoverflow.com/questions/142242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Test if a floating point number is an integer This code works (C# 3) double d; if(d == (double)(int)d) ...; * *Is there a better way to do this? *For extraneous reasons I want to avoid the double cast, so what nice ways exist other than this? (even if they aren't as good) Note: Several people pointed out the (important) point that == is often problematic regarding floating point. In this case I expect values in the range of 0 to a few hundred and they are supposed to be integers (non-ints are errors), so those points "shouldn't" be an issue for me. A: This would work I think: if (d % 1 == 0) { //... } A: If your double is the result of another calculation, you probably want something like: d == Math.Floor(d + 0.00001); That way, if there's been a slight rounding error, it'll still match. A: I cannot answer the C#-specific part of the question, but I must point out you are probably missing a generic problem with floating point numbers. Generally, integerness is not well defined on floats. For the same reason that equality is not well defined on floats. Floating point calculations normally include both rounding and representation errors. For example, 1.1 + 0.6 != 1.7. Yup, that's just the way floating point numbers work. Here, 1.1 + 0.6 - 1.7 == 2.2204460492503131e-16. Strictly speaking, the closest thing to equality comparison you can do with floats is comparing them up to a chosen precision. If this is not sufficient, you must work with a decimal number representation, with a floating point number representation with built-in error range, or with symbolic computations. A: d == Math.Floor(d) does the same thing in other words. NB: Hopefully you're aware that you have to be very careful when doing this kind of thing; floats/doubles will very easily accumulate minuscule errors that make exact comparisons (like this one) fail for no obvious reason. A: If you are just going to convert it, Mike F / Khoth's answer is good, but doesn't quite answer your question.
If you are going to actually test, and it's actually important, I recommend you implement something that includes a margin of error. For instance, if you are considering money and you want to test for even dollar amounts, you might say (following Khoth's pattern): if( Math.Abs(d - Math.Floor(d + 0.001)) < 0.001) In other words, take the absolute value of the difference between the value and its integer representation and ensure that it's small. A: A simple test such as 'x == floor(x)' is mathematically assured to work correctly, for any fixed-precision FP number. All legal fixed-precision FP encodings represent distinct real numbers, and so for every integer x, there is at most one fixed-precision FP encoding that matches it exactly. Therefore, for every integer x that CAN be represented in such a way, we have x == floor(x) necessarily, since floor(x) by definition returns the largest FP number y such that y <= x and y represents an integer; so floor(x) must return x. A: You don't need the extra (double) in there. This works: if (d == (int)d) { //... } A: Use Math.Truncate() A: This will let you choose what precision you're looking for, plus or minus half a tick, to account for floating point drift. The comparison is integral also which is nice. static void Main(string[] args) { const int precision = 10000; foreach (var d in new[] { 2, 2.9, 2.001, 1.999, 1.99999999, 2.00000001 }) { if ((int) (d*precision + .5)%precision == 0) { Console.WriteLine("{0} is an int", d); } } } and the output is 2 is an int 1.99999999 is an int 2.00000001 is an int A: Something like this double d = 4.0; int i = 4; bool equal = d.CompareTo(i) == 0; // true A: Could you use this bool IsInt(double x) { try { int y = Int16.Parse(x.ToString()); return true; } catch { return false; } } A: To handle the precision of the double... Math.Abs(d - Math.Floor(d)) <= double.Epsilon Consider the following case where a value less than double.Epsilon fails to compare as zero.
// number of possible rounds const int rounds = 1; // precision causes rounding up to double.Epsilon double d = double.Epsilon*.75; // due to the rounding this comparison fails Console.WriteLine(d == Math.Floor(d)); // this comparison succeeds by accounting for the rounding Console.WriteLine(Math.Abs(d - Math.Floor(d)) <= rounds*double.Epsilon); // The difference is double.Epsilon, 4.940656458412465E-324 Console.WriteLine(Math.Abs(d - Math.Floor(d)).ToString("E15"));
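The two flavors of test discussed in the answers - the naive exact comparison and the margin-of-error variant - are easy to reproduce outside C#, since IEEE-754 doubles behave the same everywhere. A quick sketch in Python (function names are mine; the epsilon of 1e-9 is an arbitrary illustrative tolerance, not a recommendation):

```python
import math

def is_int_naive(d):
    # Same idea as C#'s d == Math.Floor(d) or d == (int)d.
    return d == math.floor(d)

def is_int_tolerant(d, eps=1e-9):
    # Margin-of-error variant, as several answers above suggest.
    return abs(d - round(d)) <= eps

print(is_int_naive(4.0))                    # True: exactly representable
print(1.1 + 0.6 == 1.7)                     # False: sum is 1.7000000000000002
print(is_int_naive(2.0000000000000004))     # False: naive test rejects near-integers
print(is_int_tolerant(2.0000000000000004))  # True: tolerant test accepts them
```

This illustrates the trade-off the answers debate: the exact test never misfires on values that really are integers, while the tolerant test forgives accumulated rounding error at the cost of accepting values that are merely close.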
{ "language": "en", "url": "https://stackoverflow.com/questions/142252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Mangling __FILE__ and __LINE__ in code for quoting? Is there a way to get the C/C++ preprocessor or a template or such to mangle/hash the __FILE__ and __LINE__ and perhaps some other external input like a build-number into a single short number that can be quoted in logs or error messages? (The intention would be to be able to reverse it (to a list of candidates if it's lossy) when needed when a customer quotes it in a bug report.) A: You will have to use a function to perform the hashing and create a code from __LINE__ and __FILE__ as the C preprocessor is not able to do such complex tasks. Anyway, you can take inspiration from this article to see if a different solution can be better suited to your situation. A: Well, if you're displaying the message to the user yourself (as opposed to having a crash address or function be displayed by the system), there's nothing to keep you from displaying exactly what you want. For example: typedef union ErrorCode { struct { unsigned int file: 15; unsigned int line: 12; /* Better than 5 bits, still not great. Thanks commenters!! */ unsigned int build: 5; } bits; unsigned int code; } ErrorCode; unsigned int buildErrorCodes(const char *file, int line, int build) { ErrorCode code; code.bits.line=line & ((1<<12) - 1); code.bits.build=build & ((1<< 5) - 1); code.bits.file=some_hash_function(file) & ((1<<15) - 1); return code.code; } You'd use that as buildErrorCodes(__FILE__, __LINE__, BUILD_CODE) and output it in hex. It wouldn't be very hard to decode... (Edited -- the commenters are correct, I must have been nuts to specify 5 bits for the line number. Modulo 4096, however, lines with error messages aren't likely to collide. 5 bits for build is still fine - modulo 32 means that only 32 builds can be outstanding AND have the error still happen at the same line.) A: Well... you could use something like: ((*(int*)__FILE__ & 0xFFFF0000) | version << 8 | __LINE__ ) It wouldn't be perfectly unique, but it might work for what you want.
Could change those ORs to +, which might work better for some things. Naturally, if you can actually create a hashcode, you'll probably want to do that. A: I needed serial values in a project of mine and got them by making a template that specialized on __LINE__ and __FILE__ and resulted in an int as well as generating (as compile-time output to stdout) a template specialization for its inputs that resulted in the line number of that template. These were collected the first time through the compiler and then dumped into a code file and the program was compiled again. That time each location that the template was used got a different number. (done in D so it might not be possible in C++) template Serial(char[] file, int line) { pragma(msg, "template Serial(char[] file : \"~file~"\", int line : "~line.stringof~")" "{const int Serial = __LINE__;"); const int Serial = -1; } A: A simpler solution would be to keep a global static "error location" variable. #ifdef DEBUG #define trace_here(version) printf("[%d]%s:%d {%d}\n", version, __FILE__, __LINE__, errloc++); #else #define trace_here(version) printf("{%lu}\n", version<<16|errloc++); #endif Or without the printf.. Just increment the errloc every time you cross a tracepoint. Then you can correlate the value to the file/line/version spit out by your debug builds pretty easily. You'd need to include version or build number, because those error locations could change with any build. Doesn't work well if you can't reproduce the code paths. A: __FILE__ is a pointer into the constants segment of your program. If you output the difference between that and some other constant you should get a result that's independent of any relocation, etc: extern const char g_DebugAnchor; #define FILE_STR_OFFSET (__FILE__ - &g_DebugAnchor) You can then report that, or combine it in some way with the line number, etc. The middle bits of FILE_STR_OFFSET are likely the most interesting.
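The bit-field scheme from the first answer - a lossy file hash in the high bits, line and build in the low bits - can be prototyped in a few lines. The sketch below uses Python and a deliberately toy hash just to show the packing and the reverse lookup against a candidate list; the 15/12/5 split matches the union above, and all names are illustrative:

```python
def toy_hash(name):
    # Stand-in for some_hash_function in the answer; a real one would be stronger.
    return sum(map(ord, name)) % (1 << 15)

def pack_code(filename, line, build):
    # 15 bits of file hash | 12 bits of line | 5 bits of build = 32 bits total.
    return (toy_hash(filename) << 17) | ((line % (1 << 12)) << 5) | (build % (1 << 5))

def unpack_code(code, known_files):
    fhash = code >> 17
    line = (code >> 5) & ((1 << 12) - 1)
    build = code & ((1 << 5) - 1)
    # The file hash is lossy, so reverse it to a list of candidates,
    # exactly as the question anticipates.
    candidates = [f for f in known_files if toy_hash(f) == fhash]
    return candidates, line, build

files = ["main.cpp", "net.cpp", "ui.cpp"]
code = pack_code("net.cpp", 1042, 7)
print(unpack_code(code, files))  # (['net.cpp'], 1042, 7)
```

The same arithmetic drops straight into the C union shown earlier; the candidate-list decoder is what support staff would run against the source tree when a customer quotes the hex code.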
{ "language": "en", "url": "https://stackoverflow.com/questions/142261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Standard way to detect mobile browsers in a web application based on the HTTP request We are beginning to go down the path of mobile browser support for an enterprise e-commerce webapp (Java/Servlet based). Of course there are many decisions to be made, but it seems to me the cornerstone is to be able to reliably detect mobile browsers, and make decisions on the content to be returned accordingly. Is there a standard way to make this determination (quickly) based on the HTTP request, and ideally glean more information about the given browser and device making the request (screen size, HTML capabilities, etc.)? I would also appreciate any supplemental information that would be of use from someone who has gone down this path of taking an existing large-scale enterprise webapp and architecting out mobile browser support from the development side. [edit] I certainly understand the request header and the information about a database of standard user agents is a great help. For those talking about 'other' request header properties, if you could include similar standardized name / resource of values that would be a big help. [edit] Several users have proposed solutions that involve a call over the wire to some web service that will do the detection. While I'm sure this works, it is not a good solution for an enterprise e-commerce site for two reasons: 1) speed. A call over the wire for every page request to a third party would have huge performance implications. 2) dependency/legal. We'd tie our website response time and key functionality to their service, which is horrible for legal and risk reasons. A: You can use Modernizr to detect browser abilities A: While you could detect a mobile browser through its user agent, the browser war on the PC platform has shown that sniffing user agents isn't really such a good thing to do.
What ideally should be done is that specific styles should be applied based on media type or that a different answer should be sent based on a header other than the user agent - such as the Accept header, which tells which kinds of content the browser prefers. Right now it might be enough to code a site that works with the iPhone and with Opera through browser sniffing - but Google's Android is coming any minute now and there are many other mobile phones that will have browser functionality close to the iPhone's in the near future, and it would be a waste to develop a mobile website that didn't support those devices as well as possible from scratch. A: After days of searching for the right way of detecting a mobile device I've decided to keep it simple [ stupid ] and I shall put a 'Mobile device site' button on my index page.... it's only one click away!! A: Wouldn't the standard way be to check the user agent? Here's a database of user agents you can use to detect mobile browsers. A: This article (and its follow-up) seems nice. A: Detect Mobile Browsers - snippets in various programming languages.
A: The following lightweight Apache configuration does a pretty good job and remembers the user's preference if they prefer the PC version <VirtualHost (your-address-binding)> (your-virtual-host-configuration) RewriteEngine On RewriteCond %{QUERY_STRING} !ui=pc RewriteCond %{HTTP_COOKIE} !ui=pc RewriteCond %{HTTP_USER_AGENT} "^.*(iphone|ipod|ipad|android|symbian|nokia|blackberry| rim |opera mini|opera mobi|windows ce|windows phone|up\.browser|netfront|palm-|palm os|pre\/|palmsource|avantogo|webos|hiptop|iris|kddi|kindle|lg-|lge|mot-|motorola|nintendo ds|nitro|playstation portable|samsung|sanyo|sprint|sonyericsson|symbian).*$" [NC,OR] RewriteCond %{HTTP_USER_AGENT} "^(alcatel|audiovox|bird|coral|cricket|docomo|edl|huawei|htc|gt-|lava|lct|lg|lynx|mobile|lenovo|maui|micromax|mot|myphone|nec|nexian|nook|pantech|pg|polaris|ppc|sch|sec|spice|tianyu|ustarcom|utstarcom|videocon|vodafone|winwap|zte).*$" [NC] RewriteRule /(.*) http://bemoko.com/$1 [L] RewriteCond %{QUERY_STRING} "ui=pc" RewriteRule ^/ - [CO=ui:pc:(your-cookie-domain):86400:/] RewriteCond %{QUERY_STRING} "ui=default" RewriteRule ^/ - [CO=ui:default:(your-cookie-domain):86400:/] </VirtualHost> More background on this @ http://bemoko.com/training.team/help/team/pc-to-mobile-redirect A: @David's answer mentioned using WURFL -- which is probably your best option. Be forewarned, however, the success rate is usually around 60% (from my and others' experience). With carriers changing UAs constantly and the number of device profiles that exist (60,000+ ?), there's no bulletproof way to get all the right data you want. Just a bit of warning before relying heavily on a device DB. I would try to keep the user's options open by allowing them to change session options in case I've guessed wrong.
A: I propose a free detection system which is based on uaprof and user agent: http://www.mobilemultimedia.be UAprof should be the primary key for detection when it's available as there are usually multiple user agents for the same uaprof. If you want to manage this on your own, you should then go for WURFL because you can download the entire database and manage it locally by yourself. A: When I had a similar need recently, I found this code that uses HTTP_X_WAP_PROFILE, HTTP_ACCEPT, and HTTP_USER_AGENT to identify a browser as mobile or non-mobile. It's PHP but could be converted fairly easily into whatever you need (I implemented it in VBScript for classic ASP). Ironically, it turned out that I didn't end up using the code because we decided to provide specific URLs for mobile and non-mobile users, but it certainly worked when I was testing it ... A: You will get most of the information like browser, device, accepted languages, accepted formats etc. from the request header. The user agent mentioned above is part of the request header. A: OK, here is a very simple answer - how about letting the user decide? On your login to your app, provide a link to the mobile site. On the mobile site, provide a link "back to the main site" - try www.fazolis.com on your mobile device - they do a good job of this. Then, on the link to the mobile site from the browser site, register their "vote" and their user agent. You can build your own reliable list of YOUR clients who want the mobile site. Use this married to specs on screen size for these mobile devices, and you can build some pretty good logic for a satisfactory user experience. I would NEVER post out to a network source for something as elementary as this. Oh and on your "mobile site" - if you write your app semantically well, then you should be able to present a single site for both mobile and browser vs. having to write two separate page sets.
Just something to think about - this is worth the extra thought and effort to save time later. A: I can't see it posted on here, but another option I am looking into currently is www.detectmobilebrowser.com A: The easiest way is to create an array of strings associated with mobile browsers. Most mobile user agents contain a word like mobile, mini, nokia, java ME, android, iphone, mobile OS, etc. If any is matched against the user agent, using PHP's strpos, print a mobile button on top of the page. Leave the user to choose. I love the full site cos my mobile browser gives me the same experience, except that I need to zoom or scroll most of the time. A: You will have to check the user agent string with a previously defined list, like this one A: You can use a web service to detect mobile browsing, like handsetdetection.com. A: The fact is that just relying on the user agent is not good enough to detect mobile browsers. Sure, years ago you could search it for certain strings and guess that it was a Nokia or something, but now there are so many phones out there, and so many that pretend to be things that they are not, that something more sophisticated is needed. I found a great site which is based on the same solution that MTV use for all their mobile web sites. It is REALLY good as it has a device-independent markup language but more importantly they offer a web service call for isMobileDevice(). Just look in the manual, then 'how it works'. I've been using it for my customers' sites and have yet to find a mobile browser that it doesn't detect accurately. Totally blinding!
A: Just ran across Device and feature detection on the mobile web with these contents: * *Using device and feature detection to improve user experience on the mobile web *Introduction to device detection *Approaches to mobile site design * *Do nothing *Providing a generic mobile site *Designing with mobile and adaptation in mind *Content adaptation and device grouping strategies * *Device grouping *Content adaptation *Minimising the need for adaptation in the first place *Common approaches to device detection * *Server-side adaptation *Client-side adaptation *Server-side User Agent (UA) and header lookup *Server-side UA string combined with device database lookup *Server-side User Agent Profiles (UAProf) detection *Detection based on JavaScript technology *CSS media types *CSS media queries *Additional best practices * *Redirection + manual link *Landing page + manual link *Downloadable sample page A: You can use WURFL APIs to detect device type http://wurfl.sourceforge.net/wurfl_schema.php or Modernizr to detect browser abilities
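Several answers above reduce server-side detection to matching the User-Agent header against a keyword list. A minimal sketch of that idea (the keyword list here is abridged and purely illustrative - production lists such as WURFL's run to tens of thousands of entries, which is exactly the maintenance burden the answers warn about):

```python
import re

# Abridged, illustrative keyword list; real detection needs a maintained database.
MOBILE_RE = re.compile(
    r"(iphone|ipod|ipad|android|blackberry|symbian|nokia|opera mini|"
    r"windows ce|windows phone|webos|kindle|mobile)",
    re.IGNORECASE)

def is_mobile(user_agent: str) -> bool:
    """Crude server-side check on the User-Agent request header."""
    return bool(MOBILE_RE.search(user_agent or ""))

print(is_mobile("Mozilla/5.0 (iPhone; CPU iPhone OS 3_0 like Mac OS X)"))  # True
print(is_mobile("Mozilla/5.0 (Windows NT 5.1) Firefox/3.0"))               # False
```

In a servlet you would feed this the value of `request.getHeader("User-Agent")`; per the question's constraints it runs in-process with no call over the wire, at the cost of the list going stale.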
{ "language": "en", "url": "https://stackoverflow.com/questions/142273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: How do I tell if a UIView is visible and on screen? If I have a UIView (or UIView subclass) that is visible, how can I tell if it's currently being shown on the screen (as opposed to, for example, being in a section of a scroll view that is currently off-screen)? To maybe give you a better idea of what I mean, UITableView has a couple of methods for determining the set of currently visible cells. I'm looking for some code that can make a similar determination for any given UIView. A: Here's what I used to check which UIViews were visible in a UIScrollView: for(UIView* view in scrollView.subviews) { if([view isKindOfClass:[SomeView class]]) { // the parent view of scrollView (which basically matches the application frame) CGRect f = self.view.frame; // adjust our frame to match the scroll view's content offset f.origin.y = _scrollView.contentOffset.y; CGRect r = [self.view convertRect:view.frame toView:self.view]; if(CGRectIntersectsRect(f, r)) { // view is visible } } } A: Not tried any of this yet. But CGRectIntersectsRect(), -[UIView convertRect:to(from)View] and -[UIScrollView contentOffset] seem to be your basic building blocks here. A: If you are primarily worried about releasing an object that is not in the view hierarchy, you could test to see if it has a superview, as in: if (myView.superview){ //do something with myView because you can assume it is on the screen } else { //myView is not in the view hierarchy } A: I recently had to check whether my view was onscreen. This worked for me: CGRect viewFrame = self.view.frame; CGRect appFrame = [[UIScreen mainScreen] applicationFrame]; // We may have received messages while this tableview is offscreen if (CGRectIntersectsRect(viewFrame, appFrame)) { // Do work here }
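The CGRectIntersectsRect test at the heart of these answers is plain interval arithmetic: two rectangles intersect exactly when they overlap on both axes. A quick sketch of the same check outside UIKit (coordinates only; the Rect type and the sample screen size are inventions for the demo):

```python
from collections import namedtuple

Rect = namedtuple("Rect", "x y w h")

def rects_intersect(a: Rect, b: Rect) -> bool:
    # Mirrors CGRectIntersectsRect: strict overlap on both axes,
    # so rectangles that merely touch edges do not count.
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h)

screen = Rect(0, 0, 320, 480)
print(rects_intersect(screen, Rect(10, 400, 100, 100)))  # True: partly on screen
print(rects_intersect(screen, Rect(0, 500, 100, 100)))   # False: scrolled off the bottom
```

The subtlety the answers handle with convertRect:toView: is that both rectangles must be expressed in the same coordinate system before this comparison means anything.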
{ "language": "en", "url": "https://stackoverflow.com/questions/142282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Sending the same but modified object over ObjectOutputStream I have the following code that shows either a bug or a misunderstanding on my part. I send the same list, modified between writes, over an ObjectOutputStream: once as [0] and once as [1]. But when I read it, I get [0] twice. I think this is caused by the fact that I am sending over the same object and ObjectOutputStream must be caching them somehow. Is this working as it should, or should I file a bug? import java.io.*; import java.net.*; import java.util.*; public class OOS { public static void main(String[] args) throws Exception { Thread t1 = new Thread(new Runnable() { public void run() { try { ServerSocket ss = new ServerSocket(12344); Socket s= ss.accept(); ObjectOutputStream oos = new ObjectOutputStream(s.getOutputStream()); List<Integer> same = new ArrayList<Integer>(); same.add(0); oos.writeObject(same); same.clear(); same.add(1); oos.writeObject(same); } catch(Exception e) { e.printStackTrace(); } } }); t1.start(); Socket s = new Socket("localhost", 12344); ObjectInputStream ois = new ObjectInputStream(s.getInputStream()); // outputs [0] as expected System.out.println(ois.readObject()); // outputs [0], but expected [1] System.out.println(ois.readObject()); System.exit(0); } } A: Max is correct, but you can also use: public void writeUnshared(Object obj); See comment below for caveat A: The stream has a reference graph, so an object which is sent twice will not give two objects on the other end, you will only get one. And sending the same object twice separately will give you the same instance twice (each with the same data - which is what you're seeing). See the reset() method if you want to reset the graph.
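For what it's worth, the same gotcha exists in Python's pickle module, which makes it easy to reproduce outside Java: a Pickler instance keeps a memo of objects it has already written, just like ObjectOutputStream's reference graph, and its clear_memo() plays the role of reset(). A small sketch of the identical behavior:

```python
import io
import pickle

buf = io.BytesIO()
pickler = pickle.Pickler(buf)

same = [0]
pickler.dump(same)   # first write: full encoding of [0]
same[0] = 1
pickler.dump(same)   # second write: only a back-reference into the memo

buf.seek(0)
unpickler = pickle.Unpickler(buf)
first = unpickler.load()
second = unpickler.load()
print(first, second)    # [0] [0] -- the second read resolves to the cached object
print(first is second)  # True: one instance on the receiving end

# Calling pickler.clear_memo() between the two dumps (the analogue of
# ObjectOutputStream.reset()) would make the second write a fresh copy.
```

So in both languages this is documented behavior, not a bug: the stream deduplicates by object identity, and you must reset the stream (or write unshared) to transmit updated state of the same object.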
{ "language": "en", "url": "https://stackoverflow.com/questions/142317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Can gmail's random-signatures-from-an-RSS-feed be used for truly dynamic signatures? This is a new gmail labs feature that lets you specify an RSS feed to grab random quotes from to append to your email signature. I'd like to use that to generate signatures programmatically based on parameters I pass in, the current time, etc. (For example, I have a script in pine that appends the current probabilities of McCain and Obama winning, fetched from intrade's API. See below.) But it seems gmail caches the contents of the URL you specify. Any way to control that or anyone know how often gmail looks at the URL? ADDED: Here's the program I'm using to test this. This file lives at http://kibotzer.com/sigs.php. The no-cache header idea, taken from here -- http://mapki.com/wiki/Dynamic_XML -- seems to not help. <?php header("Expires: Mon, 26 Jul 1997 05:00:00 GMT"); // Date in the past header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); // HTTP/1.1 header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); // HTTP/1.0 header("Pragma: no-cache"); //XML Header header("content-type:text/xml"); ?> <!DOCTYPE rss PUBLIC "-//Netscape Communications//DTD RSS 0.91//EN" "http://my.netscape.com/publish/formats/rss-0.91.dtd"> <rss version="0.91"> <channel> <title>Dynamic Signatures</title> <link>http://kibotzer.com</link> <description>Blah blah</description> <language>en-us</language> <pubDate>26 Sep 2008 02:15:01 -0000</pubDate> <webMaster>dreeves@kibotzer.com</webMaster> <managingEditor>dreeves@kibotzer.com (Daniel Reeves)</managingEditor> <lastBuildDate>26 Sep 2008 02:15:01 -0000</lastBuildDate> <image> <title>Kibotzer Logo</title> <url>http://kibotzer.com/logos/kibo-logo-1.gif</url> <link>http://kibotzer.com/</link> <width>120</width> <height>60</height> <description>Kibotzer</description> </image> <item> <title> Dynamic Signature 1 (<?php echo gmdate("H:i:s"); ?>) </title> <link>http://kibotzer.com</link> 
<description>This is the description for Signature 1 (<?php echo gmdate("H:i:s"); ?>) </description> </item> <item> <title> Dynamic Signature 2 (<?php echo gmdate("H:i:s"); ?>) </title> <link>http://kibotzer.com</link> <description>This is the description for Signature 2 (<?php echo gmdate("H:i:s"); ?>) </description> </item> </channel> </rss> -- http://ai.eecs.umich.edu/people/dreeves - - search://"Daniel Reeves" Latest probabilities from intrade... 42.1% McCain becomes president (last trade 18:07 FRI) 57.0% Obama becomes president (last trade 18:34 FRI) 17.6% US recession in 2008 (last trade 16:24 FRI) 16.1% Overt air strike against Iran in '08 (last trade 17:39 FRI) A: You might be able to do something on the clientside, take a look at this greasemonkey script which randomly adds a signature. Since it's under your control, and not google's, you can control if it caches or not. A: Try setting the Cache-Control: no-cache and Pragma: no-cache HTTP headers. If Google's signature code honors either of these headers then you'll be in luck.
{ "language": "en", "url": "https://stackoverflow.com/questions/142319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Setting up our new Dev server: what is the easiest way to assign multiple IP addresses? I'm setting up our new Dev server; what is the easiest way to assign multiple IP addresses to a Windows 2008 Server network adapter? I'm setting up our development machine, running IIS 7, and want to have the range between 192.168.1.200 - .254 available when I'm setting up a new website in IIS 7. A: > netsh interface ipv4 add address "Local Area Connection" 192.168.1.201 255.255.255.0 Wrap in a cmd.exe "for" loop to add multiple IPs. EDIT: (from Brian) "Local Area Connection" above is a placeholder, make sure you use the actual network adapter name on your system. A: The complete CMD.EXE loop: FOR /L %b IN (200,1,254) DO netsh interface ip add address "your_adapter" 192.168.1.%b 255.255.255.0 In the code above, replace "your_adapter" with the actual interface name (usually "Local Area Connection"). In addition, the netmask at the end is an assumption of /24 or Class C subnet; substitute the correct netmask. A: Network Connections -> Local Area Network Connection Properties -> TCP/IP Properties -> Advanced -> IP Settings -> Add Button.
{ "language": "en", "url": "https://stackoverflow.com/questions/142320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What URL do I post to for Live Search SOAP service? It's possible I am just really, really thick. However, looking over the SDK for Live Search (MSN Search) that uses SOAP doesn't tell me what URL the service is at?? I can download SDKs for C# or VB which probably encapsulate it, but that doesn't help me (I am using Ruby). http://search.live.com/developer/ A: The URL you need is: http://soap.search.live.com/webservices.asmx Additional info on various tools you could use to discover endpoints from WSDL: If you have VS, you can discover the endpoint by adding a Web Service Reference to a C# console project and then opening the app.config file and looking for the <endpoint> element. To add the Web Service Reference for the Live Search web service, point the wizard to the WSDL at http://soap.search.live.com/webservices.asmx?wsdl. Alternatively, you can use the svcutil.exe tool from .Net 3.0 to generate a C# client wrapper and a .config file from the WSDL. Again, you are interested in the <endpoint> from the generated config.
{ "language": "en", "url": "https://stackoverflow.com/questions/142326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Resources for high performance computing in C++ I'm looking for books or online resources that go into detail on programming techniques for high performance computing using C++. A: Even though not FOSS, the Intel IPP and MKL libraries can really save you a lot of time (both in development and at runtime) if you need to perform any of the operations supported by these libraries (e.g. signal processing, image processing, matrix math). Of course, it depends on your platform whether you can benefit from them. (No: I don't work for Intel, but a happy customer of theirs I am.) A: The Trilinos suite of libraries and packages offer a broad range of middleware libraries for HPC including sparse, iterative linear solvers; nonlinear solvers; eigen solvers; ODE & DAE integrators including sensitivity analysis; optimization (both invasive and black box); finite element interfaces; mesh interfaces; preconditioners; etc. All of these packages are designed using fairly modern C++ techniques (there are Python APIs as well as some C and Fortran). They're used in very large-scale parallel (5000+ CPU) simulations of exceptional consequence (nuclear weapon design) with great success. These packages offer a great suite of capabilities that are much higher level than BLAS, etc. A: The first thing might be reading about MPI (Message Passing Interface) which is the de facto standard in HPC node interconnects. A: Despite being 14+ years old, the pioneering work of Expression Templates is still regarded as some of the most exceptional C++ work in years. Fast, efficient, safe... I've used the techniques and they're really remarkable. Edit: In case the above link remains broken, here's an alternate reference for Expression Templates. This DDJ article cites the original work of Veldhuizen. A: Check out the Eigen Vector/Matrix library. The API is very elegant, and the resulting programs are blazing fast (due to explicit vectorization for SSE2 architectures).
A: Practically all HPC code I've heard of is either for solving systems of linear equations or FFTs. Here's some links to start you off, at least on the libraries used: * *BLAS - standard set of routines for linear algebra - stuff like matrix multiplication *LAPACK - standard set of higher level linear algebra routines - stuff like LU decomposition *ATLAS - Optimized BLAS implementation *FFTW - Optimized FFT implementation *PBLAS - BLAS for distributed processors *SCALAPACK - distributed LAPACK implementation *MPI - Communications library for distributed systems *PETSc - Scalable nonlinear and linear solvers (user-extensible, interface to much of the above) A: Take a look at The ADAPTIVE Communication Environment (ACE). It's a library of templates and objects for high performance applications in C++. It has great cross-platform primitives for threading, networking, etc. A: No matter what you write, and how much you design for performance from the beginning, chances are pretty good it will benefit from performance tuning. Usually the bigger the program, the more it will benefit. THIS is a simple, effective way to do that tuning. It is based on "deep sampling", a technique that gives accuracy of diagnosis while de-emphasizing measurement. You could also look at http://en.wikipedia.org/wiki/Performance_analysis#Simple_manual_technique A: High Scalability - Building bigger, faster, more reliable websites. http://highscalability.com/ And also: http://www.ddj.com/hpc-high-performance-computing/
{ "language": "en", "url": "https://stackoverflow.com/questions/142331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I remotely debug a stored procedure within the same domain? What are the steps needed to successfully be able to remotely debug a stored procedure (SQL Server 2005) from within VS 2005? Both client and server machines are on the same domain. I have never done this, so step by step instructions would greatly be appreciated. A: Great Question! If I'm not mistaken, I don't think debugging is possible inside of SQL Management Studio anymore (as it was back in the SQL Server 2000, Enterprise Studio days). Instructions to Remote Debug MS SQL Stored Procedures within Visual Studio 2005 * *Launch Visual Studio (If you're running from Vista, Run As Administrator) *Within Visual Studio 2005 click View->Server Explorer, which you'll notice brings up a panel with a Data Connections element. *Right click on Data Connections and select Add Connection *Ensure the Data Source is set to SqlClient. *Fill out the Server connection information, filling in the database name where the stored procedure that you wish to debug lives. *Once a successful connection is made you'll notice that a tree for the database is populated, giving you the list of Tables, Views, Stored Procedures, Functions, etc. *Expand Stored Procedures, find the one you wish to debug, right click on it and select Step Into Stored Procedure. *If the stored procedure has parameters, a dialog will come up and you can specify what those parameters are. *At this point, depending on your firewall settings and what not, you may be prompted to make modifications to your firewall to allow for the necessary ports to be opened up. However, Visual Studio seems to handle this for you. *Once completed, Visual Studio should place you at the beginning of the stored procedure so you can start the act of debugging! Happy Debugging! A: SQL Specifically http://msdn.microsoft.com/en-us/library/s4sszxst(VS.71).aspx VS2005 in general http://msdn.microsoft.com/en-us/library/y7f5zaaa(VS.71).aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/142339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you version your projects? I understand that Microsoft uses this template when versioning their products: Major.Minor.Build.Revision. Major is changed when the "developers" want to show that there is a big change in the software and backward compatibility cannot be assumed. Maybe a major rewrite of the code is done. Minor number represents a significant enhancement with the intention of backward compatibility. Build number is a small change, for example a recompilation of the same source. Revision is used to fix a security hole and should be fully interchangeable. Both Build and Revision are optional. This information is based on MSDN Version Class. How do you version your projects and why do you version them this way? A: We generally do major.minor[.maintenance[.build]] where I work, but it seems to vary a little per project. Major/minor the same as you mentioned. maintenance would be incremented for small (bug) fixes and build for each time the build server runs. A: I personally like to use a scheme that focuses on the level of backwards compatibility that users of the project/product can expect: Before 1.0: * *0.0.1 = First release *0.-.X = Backwards compatible update *0.X.0 = Backwards incompatible update After 1.0: * *-.-.X = Update without interface changes *-.X.0 = Update with backwards compatible interface additions *X.0.0 = Backwards incompatible update Using compatibility as the central point in the version number makes it easier for users, especially if the product is a library, to judge whether they can expect a smooth and safe upgrade. A: I often see Xyz where X is the year after release number and yz is the month of the year. I.e. 201 is January, 2 years after release. I.e. when product launches in May, its first release number is 105. Release in February next year is 202.
A: We usually version our projects based on the current release date, YYYY.MM.DD.*, and we let the build number generate automatically, so for example, if we had a release today it would be 2008.9.26.BUILD. A: I use major.minor.point.revision, where point is a bugfix-only release and revision is the repository revision. It's easy and works well. A: I just do Major.minor. Since I'm a single developer (with occasional help) working on a web app, most people couldn't care less about the minor fixes that I make. So I just iterate up the minor versions as I put in new features and major version numbers when I make some whopper of a change/upgrade. Otherwise, I just ignore the small fixes as far as version numbers go (though I do have Subversion revision numbers if I need to refer back for myself). A: I work on a lot of smaller projects and I have personally found this useful. PatchNumber.DateMonthYear This is for small web based tools where the users can see when the last update was and how often it has been updated. PatchNumber is the number of releases that have been done and the rest is used to show the users when this was published. A: Major.minor.patch.build with patch being the hotfix or patch release. If you can get QA to buy in and are on SVN, you could use the svn HEAD revision as the build number. In that way, each build describes where it came from in terms of source control and what's in the build. This does mean that you'll have builds that go up with gaps (1.0.0.1, 1.0.0.34....) A: Major.Minor.BugFix.SVNRevision e.g: 3.5.2.31578 * *The SVN Revision gives you the very exact piece of code sent to the customer. You are absolutely sure if that bugfix was there or not. *It also helps finding the proper PDB in the event you have an application error. Just match the SVN Revisions on your build server, copy the PDB to the EXE location, open the debugger and you got the crash stack trace. A: I just have a number. First release is 001.
Second release's third beta is 002b3, and so on. This is just for personal stuff mind, I don't actually have anything 'released' at the moment, so this is all theory. A: I started using a pseudo-similar format as Ubuntu: Y.MMDD This helps for a few reasons: * *it's easier to check for version requirements: if (version < 8.0901) die/exit/etc.; *it can be auto-generated in your build process On that 2nd point (ruby & rake): def serial(t) t = Time.now.utc if not t.instance_of?(Time) t.strftime("%Y").to_i - 2000 + t.strftime("0.%m%d").to_f end serial(Time.now) #=> 8.0926 serial(Time.now.utc) #=> 8.0927 NOTE: t.strftime("%Y.%m%d").to_f - 2000 runs into floating point inaccuracies: 8.09269999999992 A: I used to like the Nantucket way of versioning their Clipper compiler in the 80's: Clipper Winter 1984 Clipper Summer 1985 Clipper Winter 1985 Clipper Autumn 1986 Clipper Summer 1987 Oh and overlays.... [gets teary eyed]
{ "language": "en", "url": "https://stackoverflow.com/questions/142340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Most efficient way to get default constructor of a Type What is the most efficient way to get the default constructor (i.e. instance constructor with no parameters) of a System.Type? I was thinking something along the lines of the code below, but it seems like there should be a simpler, more efficient way to do it. Type type = typeof(FooBar); BindingFlags flags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance; type.GetConstructors(flags) .Where(constructor => constructor.GetParameters().Length == 0) .First(); A: If you have the generic type parameter, then Jeff Bridgman's answer is the best one. If you only have a Type object representing the type you want to construct, you could use Activator.CreateInstance(Type) like Alex Lyman suggested, but I have been told it is slow (I haven't profiled it personally though). However, if you find yourself constructing these objects very frequently, there is a more elegant approach using dynamically compiled Linq Expressions: using System; using System.Linq.Expressions; public static class TypeHelper { public static Func<object> CreateDefaultConstructor(Type type) { NewExpression newExp = Expression.New(type); // Create a new lambda expression with the NewExpression as the body. var lambda = Expression.Lambda<Func<object>>(newExp); // Compile our new lambda expression. return lambda.Compile(); } } Just call the delegate returned to you. You should cache this delegate, because constantly recompiling Linq expressions can be expensive, but if you cache the delegate and reuse it each time, it can be very fast! I personally use a static lookup dictionary indexed by type. This function comes in handy when you are dealing with serialized objects where you may only know the Type information. NOTE: This can fail if the type is not constructable or does not have a default constructor! A: If you actually need the ConstructorInfo object, then see Curt Hagenlocher's answer.
On the other hand, if you're really just trying to create an object at run-time from a System.Type, see System.Activator.CreateInstance -- it's not just future-proofed (Activator handles more details than ConstructorInfo.Invoke), it's also much less ugly. A: If you only want to get the default constructor to instantiate the class, and are getting the type as a generic type parameter to a function, you can do the following: T NewItUp<T>() where T : new() { return new T(); } A: type.GetConstructor(Type.EmptyTypes) A: You would want to try FormatterServices.GetUninitializedObject(Type); this one is better than Activator.CreateInstance. However, this method doesn't call the object's constructor, so if you are setting initial values there, this won't work. Check MSDN for this: http://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatterservices.getuninitializedobject.aspx There is another way here: http://www.ozcandegirmenci.com/post/2008/02/Create-object-instances-Faster-than-Reflection.aspx However, this one fails if the object has parameterized constructors. Hope this helps
{ "language": "en", "url": "https://stackoverflow.com/questions/142356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: What are the best JVM settings for Eclipse? What are the best JVM settings you have found for running Eclipse? A: If you are using Linux + Sun JDK/JRE 32bits, change the "-vm" to: -vm [your_jdk_folder]/jre/lib/i386/client/libjvm.so If you are using Linux + Sun JDK/JRE 64bits, change the "-vm" to: -vm [your_jdk_folder]/jre/lib/amd64/server/libjvm.so That's working fine for me on Ubuntu 8.10 and 9.04 A: If you're going with jdk6 update 14, I'd suggest using using the G1 garbage collector which seems to help performance. To do so, remove these settings: -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing and replace them with these: -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC A: Eclipse Galileo 3.5 and 3.5.1 settings Currently (November 2009), I am testing with jdk6 update 17 the following configuration set of options (with Galileo -- eclipse 3.5.x, see below for 3.4 or above for Helios 3.6.x): (of course, adapt the relative paths present in this eclipse.ini to the correct paths for your setup) Note: for eclipse3.5, replace startup and launcher.library lines by: -startup plugins/org.eclipse.equinox.launcher_1.0.200.v20090520.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.0.200.v20090519 eclipse.ini 3.5.1 -data ../../workspace -showlocation -showsplash org.eclipse.platform --launcher.XXMaxPermSize 384m -startup plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.0.200.v20090519 -vm ../../../../program files/Java/jdk1.6.0_17/jre/bin/client/jvm.dll -vmargs -Dosgi.requiredJavaVersion=1.5 -Xms128m -Xmx384m -Xss4m -XX:PermSize=128m -XX:MaxPermSize=384m -XX:CompileThreshold=5 -XX:MaxGCPauseMillis=10 -XX:MaxHeapFreeRatio=70 -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -Dcom.sun.management.jmxremote -Dorg.eclipse.equinox.p2.reconciler.dropins.directory=C:/jv/eclipse/mydropins See also my original answer 
above for more information. Changes (from July 2009) * *refers to the launcher and not the framework *shared plugins: org.eclipse.equinox.p2.reconciler.dropins.directory option. *Galileo supports fully relative paths for workspace or VM (avoid having to modify those from one eclipse installation to another, if, of course, your JVM and workspace stay the same) Before, those relative paths kept being rewritten into absolute ones when eclipse launched itself... *You can also copy the JRE directory of a Java JDK installation inside your eclipse directory Caveats There was a bug with ignored breakpoints actually related to the JDK. Do use JDK6u16 or more recent for launching eclipse (You can then define as many JDKs as you want to compile with within eclipse: it is not because you launch an eclipse with JDK6 that you will have to compile with that same JDK). Max Note the usage of: --launcher.XXMaxPermSize 384m -vmargs -XX:MaxPermSize=128m As documented in the Eclipse Wiki, Eclipse 3.3 supports a new argument to the launcher: --launcher.XXMaxPermSize. If the VM being used is a Sun VM and there is not already a -XX:MaxPermSize= VM argument, then the launcher will automatically add -XX:MaxPermSize=256m to the list of VM arguments being used. The 3.3 launcher is only capable of identifying Sun VMs on Windows. As detailed in this entry: Not all vms accept the -XX:MaxPermSize argument, which is why it is passed in this manner. There may (or may not) exist problems with identifying sun vms. Note: Eclipse 3.3.1 has a bug where the launcher cannot detect a Sun VM, and therefore does not use the correct PermGen size. It seems this may have been a known bug on Mac OS X for 3.3.0 as well. If you are using either of these platform combinations, add the -XX flag to the eclipse.ini as described above. Notes: * *the "384m" line translates to the "=384m" part of the VM argument; if the VM is case sensitive on the "m", then so is this argument. *the "--launcher."
prefix, this specifies that the argument is consumed by the launcher itself and was added to launcher-specific arguments to avoid name collisions with application arguments. (Other examples are --launcher.library, --launcher.suppressErrors) The -vmargs -XX:MaxPermSize=384m part is the argument passed directly to the VM, bypassing the launcher entirely, and no check on the VM vendor is used. A: You can also try running with JRockit. It's a JVM optimized for servers, but many long running client applications, like IDEs, run very well on JRockit. Eclipse is no exception. JRockit doesn't have a perm-space so you don't need to configure it. It's possible to set a pause time target (ms) to avoid long gc pauses stalling the UI. -showsplash org.eclipse.platform -vm C:\jrmc-3.1.2-1.6.0\bin\javaw.exe -vmargs -XgcPrio:deterministic -XpauseTarget:20 I usually don't bother setting -Xmx and -Xms and let JRockit grow the heap as it sees fit. If you launch your Eclipse application with JRockit you can also monitor, profile and find memory leaks in your application using the JRockit Mission Control tools suite. You download the plugins from this update site. Note: this only works for Eclipse 3.3 and Eclipse 3.4 A: Eclipse Ganymede 3.4.2 settings For more recent settings, see Eclipse Galileo 3.5 settings above. JDK The best JVM setting always, in my opinion, includes the latest JDK you can find (so for now, jdk1.6.0_b07 up to b16, except b14 and b15) eclipse.ini Even with those pretty low memory settings, I can run large java projects (along with a web server) on my old (2002) desktop with 2Go RAM.
-showlocation -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256M -framework plugins\org.eclipse.osgi_3.4.2.R34x_v20080826-1230.jar -vm jdk1.6.0_10\jre\bin\client\jvm.dll -vmargs -Dosgi.requiredJavaVersion=1.5 -Xms128m -Xmx384m -Xss2m -XX:PermSize=128m -XX:MaxPermSize=128m -XX:MaxGCPauseMillis=10 -XX:MaxHeapFreeRatio=70 -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing -XX:CompileThreshold=5 -Dcom.sun.management.jmxremote See GKelly's SO answer and Piotr Gabryanczyk's blog entry for more details about the new options. Monitoring You can also consider launching: C:\[jdk1.6.0_0x path]\bin\jconsole.exe As said in a previous question about memory consumption. A: Here's my own setting for Eclipse running on an i7 2630M, 16GB RAM laptop; this setting has been in use for a week without a single crash, and Eclipse 3.7 is running smoothly. -startup plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.100.v20110502 -product org.eclipse.epp.package.jee.product --launcher.defaultAction openFile --launcher.XXMaxPermSize 256M -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile -vmargs -Dosgi.requiredJavaVersion=1.5 -Xms1024m -Xmx4096m -XX:MaxPermSize=256m Calculations: For Win 7 x64 * *Xms = Physical Memory / 16 *Xmx = Physical Memory / 4 *MaxPermSize = Same as default value, which is 256m A: -startup ../../../plugins/org.eclipse.equinox.launcher_1.2.0.v20110502.jar --launcher.library ../../../plugins/org.eclipse.equinox.launcher.cocoa.macosx_1.1.100.v20110502 -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile -vmargs -Xms128m -Xmx512m -XX:MaxPermSize=256m -Xdock:icon=../Resources/Eclipse.icns -XstartOnFirstThread -Dorg.eclipse.swt.internal.carbon.smallFonts -Dcom.sun.management.jmxremote -Declipse.p2.unsignedPolicy=allow And these settings have worked like a charm for me.
I am running OS X 10.6, Eclipse 3.7 Indigo, JDK 1.6.0_24 A: My own settings (Java 1.7, modify for 1.6): -vm C:/Program Files (x86)/Java/jdk1.7.0/bin -startup plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar --launcher.library plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.100.v20100628 -showsplash org.eclipse.platform --launcher.XXMaxPermSize 256m --launcher.defaultAction openFile -vmargs -server -Dosgi.requiredJavaVersion=1.7 -Xmn100m -Xss1m -XgcPrio:deterministic -XpauseTarget:20 -XX:PermSize=400M -XX:MaxPermSize=500M -XX:CompileThreshold=10 -XX:MaxGCPauseMillis=10 -XX:MaxHeapFreeRatio=70 -XX:+UnlockExperimentalVMOptions -XX:+DoEscapeAnalysis -XX:+UseG1GC -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -Xms512m -Xmx512m A: It is that time of year again: "eclipse.ini take 3" the settings strike back! Eclipse Helios 3.6 and 3.6.x settings After settings for Eclipse Ganymede 3.4.x and Eclipse Galileo 3.5.x, here is an in-depth look at an "optimized" eclipse.ini settings file for Eclipse Helios 3.6.x: * *based on runtime options, *and using the Sun-Oracle JVM 1.6u21 b7, released July 27th (some Sun proprietary options may be involved). (by "optimized", I mean able to run a full-fledged Eclipse on our crappy workstation at work, some old P4 from 2002 with 2Go RAM and XPSp3. But I have also tested those same settings on Windows7) Eclipse.ini WARNING: for non-windows platforms, use the Sun proprietary option -XX:MaxPermSize instead of the Eclipse proprietary option --launcher.XXMaxPermSize. That is: Unless you are using the latest jdk6u21 build 7. See the Oracle section below.
-data ../../workspace -showlocation -showsplash org.eclipse.platform --launcher.defaultAction openFile -vm C:/Prog/Java/jdk1.6.0_21/jre/bin/server/jvm.dll -vmargs -Dosgi.requiredJavaVersion=1.6 -Declipse.p2.unsignedPolicy=allow -Xms128m -Xmx384m -Xss4m -XX:PermSize=128m -XX:MaxPermSize=384m -XX:CompileThreshold=5 -XX:MaxGCPauseMillis=10 -XX:MaxHeapFreeRatio=70 -XX:+CMSIncrementalPacing -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:+UseFastAccessorMethods -Dcom.sun.management.jmxremote -Dorg.eclipse.equinox.p2.reconciler.dropins.directory=C:/Prog/Java/eclipse_addons Note: Adapt the p2.reconciler.dropins.directory to an external directory of your choice. See this SO answer. The idea is to be able to drop new plugins in a directory independently from any Eclipse installation. The following sections detail what is in this eclipse.ini file. The dreaded Oracle JVM 1.6u21 (pre build 7) and Eclipse crashes Andrew Niefer did alert me to this situation, and wrote a blog post about a non-standard vm argument (-XX:MaxPermSize) that can cause vms from other vendors to not start at all. But the eclipse version of that option (--launcher.XXMaxPermSize) is not working with the new JDK (6u21, unless you are using the 6u21 build 7, see below). The final solution is on the Eclipse Wiki, and for Helios on Windows with 6u21 pre build 7 only: * *download the fixed eclipse_1308.dll (July 16th, 2010) *and place it into (eclipse_home)/plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.0.v20100503 That's it. No setting to tweak here (again, only for Helios on Windows with a 6u21 pre build 7). For non-Windows platforms, you need to revert to the Sun proprietary option -XX:MaxPermSize. The issue is based on a regression: JVM identification fails due to Oracle rebranding in java.exe, and triggered bug 319514 on Eclipse. Andrew took care of Bug 320005 - [launcher] --launcher.XXMaxPermSize: isSunVM should return true for Oracle, but that will be only for Helios 3.6.1.
Francis Upton, another Eclipse committer, reflects on the whole situation. Update u21b7, July 27th: Oracle have regressed the change for the next Java 6 release and won't implement it again until JDK 7. If you use jdk6u21 build 7, you can revert to the --launcher.XXMaxPermSize (eclipse option) instead of -XX:MaxPermSize (the non-standard option). The auto-detection happening in the C launcher shim eclipse.exe will still look for the "Sun Microsystems" string, but with 6u21b7, it will now work - again. For now, I still keep the -XX:MaxPermSize version (because I have no idea when everybody will launch eclipse with the right JDK). Implicit `-startup` and `--launcher.library` Contrary to the previous settings, the exact path for those modules is not set anymore, which is convenient since it can vary between different Eclipse 3.6.x releases: * *startup: If not specified, the executable will look in the plugins directory for the org.eclipse.equinox.launcher bundle with the highest version. *launcher.library: If not specified, the executable looks in the plugins directory for the appropriate org.eclipse.equinox.launcher.[platform] fragment with the highest version and uses the shared library named eclipse_* inside. Use JDK6 The JDK6 is now explicitly required to launch Eclipse: -Dosgi.requiredJavaVersion=1.6 This SO question reports a positive impact for development on Mac OS. +UnlockExperimentalVMOptions The following options are part of some of the experimental options of the Sun JVM. -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:+UseFastAccessorMethods They have been reported in this blog post to potentially speed up Eclipse. See all the JVM options here and also in the official Java Hotspot options page. Note: the detailed list of those options reports that UseFastAccessorMethods might be active by default. See also "Update your JVM": As a reminder, G1 is the new garbage collector in preparation for the JDK 7, but already used in the version 6 release from u17.
Opening files in Eclipse from the command line See the blog post from Andrew Niefer reporting this new option: --launcher.defaultAction openFile This tells the launcher that if it is called with a command line that only contains arguments that don't start with "-", then those arguments should be treated as if they followed "--launcher.openFile". eclipse myFile.txt This is the kind of command line the launcher will receive on windows when you double click a file that is associated with eclipse, or you select files and choose "Open With" or "Send To" Eclipse. Relative paths will be resolved first against the current working directory, and second against the eclipse program directory. See bug 301033 for reference. Originally bug 4922 (October 2001, fixed 9 years later). p2 and the Unsigned Dialog Prompt If you are tired of this dialog box during the installation of your many plugins, add in your eclipse.ini: -Declipse.p2.unsignedPolicy=allow See this blog post from Chris Aniszczyk, and the bug report 235526. I do want to say that security research supports the fact that less prompts are better. People ignore things that pop up in the flow of something they want to get done. For 3.6, we should not pop up warnings in the middle of the flow - no matter how much we simplify, people will just ignore them. Instead, we should collect all the problems, do not install those bundles with problems, and instead bring the user back to a point in the workflow where they can fixup - add trust, configure security policy more loosely, etc. This is called 'safe staging'. Additional options Those options are not directly in the eclipse.ini above, but can come in handy if needed.
The `user.home` issue on Windows7 When eclipse starts, it will read its keystore file (where passwords are kept), a file located in user.home. If for some reason that user.home doesn't resolve itself properly to a full-fledged path, Eclipse won't start. Initially raised in this SO question, if you experience this, you need to redefine the keystore file to an explicit path (no more user.home to resolve at the start). Add in your eclipse.ini: -eclipse.keyring C:\eclipse\keyring.txt This has been tracked by bug 300577; it has been solved in this other SO question. Debug mode Wait, there's more than one setting file in Eclipse. If you add to your eclipse.ini the option: -debug, you enable debug mode and Eclipse will look for another setting file: a .options file where you can specify some OSGI options. And that is great when you are adding new plugins through the dropins folder. Add in your .options file the following settings, as described in this blog post "Dropins diagnosis": org.eclipse.equinox.p2.core/debug=true org.eclipse.equinox.p2.core/reconciler=true P2 will inform you what bundles were found in the dropins/ folder, what request was generated, and what is the plan of installation. Maybe it is not a detailed explanation of what actually happened, and what went wrong, but it should give you strong information about where to start: * *was your bundle in the plan? *Was it an installation problem (P2 fault) *or maybe it is just not optimal to include your feature? That comes from Bug 264924 - [reconciler] No diagnosis of dropins problems, which finally solves issues like the following: Unzip eclipse-SDK-3.5M5-win32.zip to ..../eclipse Unzip mdt-ocl-SDK-1.3.0M5.zip to ..../eclipse/dropins/mdt-ocl-SDK-1.3.0M5 This is a problematic configuration since OCL depends on EMF which is missing. 3.5M5 provides no diagnosis of this problem. Start eclipse. No obvious problems. Nothing in Error Log. * *Help / About / Plugin details shows org.eclipse.ocl.doc, but not org.eclipse.ocl.
*Help / About / Configuration details has no (diagnostic) mention of org.eclipse.ocl. *Help / Installation / Information Installed Software has no mention of org.eclipse.ocl. Where are the nice error markers? Manifest Classpath See this blog post: * *In Galileo (aka Eclipse 3.5), JDT started resolving manifest classpath in libraries added to project’s build path. This worked whether the library was added to project’s build path directly or via a classpath container, such as the user library facility provided by JDT or one implemented by a third party. *In Helios, this behavior was changed to exclude classpath containers from manifest classpath resolution. That means some of your projects might no longer compile in Helios. If you want to revert to Galileo behavior, add: -DresolveReferencedLibrariesForContainers=true See bug 305037, bug 313965 and bug 313890 for references. IPV4 stack This SO question mentions a potential fix when you cannot access plugin update sites: -Djava.net.preferIPv4Stack=true Mentioned here just in case it could help in your configuration. JVM1.7x64 potential optimizations This article reports: For the record, the very fastest options I have found so far for my bench test with the 1.7 x64 JVM on Windows are: -Xincgc -XX:-DontCompileHugeMethods -XX:MaxInlineSize=1024 -XX:FreqInlineSize=1024 But I am still working on it... A: Eclipse likes lots of RAM. Use at least -Xmx512M. More if available. A: If you're like me and had problems with the current Oracle release of 1.6, then you might want to update your JDK or set -XX:MaxPermSize.
More information is available here: http://java.dzone.com/articles/latest-java-update-fixes A: Eclipse Indigo 3.7.2 settings (64 bit linux) Settings for Sun/Oracle java version "1.6.0_31" and Eclipse 3.7 running on x86-64 Linux: -nosplash -vmargs -Xincgc -Xss500k -Dosgi.requiredJavaVersion=1.6 -Xms64m -Xmx200m -XX:NewSize=8m -XX:PermSize=80m -XX:MaxPermSize=150m -XX:MaxPermHeapExpansion=10m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -XX:+CMSConcurrentMTEnabled -XX:ConcGCThreads=2 -XX:ParallelGCThreads=2 -XX:+CMSIncrementalPacing -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=5 -XX:GCTimeRatio=49 -XX:MaxGCPauseMillis=20 -XX:GCPauseIntervalMillis=1000 -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+DoEscapeAnalysis -XX:+UseCompressedOops -XX:+AggressiveOpts -XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses Note that this uses only 200 MB for the heap and 150 MB for the non-heap. If you're using huge plugins, you might want to increase both the "-Xmx200m" and "-XX:MaxPermSize=150m" limits. The primary optimization target for these flags has been to minimize latency in all cases, and as a secondary optimization target minimize the memory usage. A: -XX:+UseParallelGC - that's the most awesome option ever!!! A: -showlocation To make it easier to have eclipse running twice and know which workspace you're dealing with. Eclipse 3.6 adds a preferences option to specify what to show for the Workspace name (shown in the window title), which works much better than -showlocation for three reasons: * *You do not need to restart eclipse for it to take effect. *You can choose a short code. *It appears first, before the perspective and application name.
A: -vm C:\Program Files\Java\jdk1.6.0_07\jre\bin\client\jvm.dll To specify which java version you are using, and use the dll instead of launching a javaw process A: Here's what I use (though I have them in the shortcut instead of the settings file): eclipse.exe -showlocation -vm "C:\Java\jdk1.6.0_07\bin\javaw.exe" -vmargs -Xms256M -Xmx768M -XX:+UseParallelGC -XX:MaxPermSize=128M
{ "language": "en", "url": "https://stackoverflow.com/questions/142357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "315" }
Q: Getting a boost::shared_ptr for this I am making extensive use of boost::shared_ptr in my code. In fact, most of the objects that are allocated on the heap are held by a shared_ptr. Unfortunately this means that I can't pass this into any function that takes a shared_ptr. Consider this code: void bar(boost::shared_ptr<Foo> pFoo) { ... } void Foo::someFunction() { bar(this); } There are two problems here. First, this won't compile because the T* constructor for shared_ptr is explicit. Second, if I force it to build with bar(boost::shared_ptr<Foo>(this)) I will have created a second shared pointer to my object that will eventually lead to a double-delete. This brings me to my question: Is there any standard pattern for getting a copy of the existing shared pointer you know exists from inside a method on one of those objects? Is using intrusive reference counting my only option here? A: Are you really making more shared copies of pFoo inside bar? If you aren't doing anything crazy inside, just do this: void bar(Foo &foo) { // ... } A: With C++11, shared_ptr and enable_shared_from_this are now in the standard library. The latter is, as the name suggests, for this case exactly. http://en.cppreference.com/w/cpp/memory/shared_ptr http://en.cppreference.com/w/cpp/memory/enable_shared_from_this Example based on that in the links above: struct Good: std::enable_shared_from_this<Good>{ std::shared_ptr<Good> getptr() { return shared_from_this(); } }; use: std::shared_ptr<Good> gp1(new Good); std::shared_ptr<Good> gp2 = gp1->getptr(); std::cout << "gp2.use_count() = " << gp2.use_count() << '\n'; A: The function accepting a pointer wants to do one of two behaviors: * *Own the object being passed in, and delete it when it goes out of scope. In this case, you can just accept X* and immediately wrap a scoped_ptr around that object (in the function body). This will work to accept "this" or, in general, any heap-allocated object.
*Share a pointer (don't own it) to the object being passed in. In this case you do not want to use a scoped_ptr at all, since you don't want to delete the object at the end of your function. In this case, what you theoretically want is a shared_ptr (I've seen it called a linked_ptr elsewhere). The boost library has a version of shared_ptr, and this is also recommended in Scott Meyers' Effective C++ book (item 18 in the 3rd edition). Edit: Oops I slightly misread the question, and I now see this answer is not exactly addressing the question. I'll leave it up anyway, in case this might be helpful for anyone working on similar code. A: Just use a raw pointer for your function parameter instead of the shared_ptr. The purpose of a smart pointer is to control the lifetime of the object, but the object lifetime is already guaranteed by C++ scoping rules: it will exist for at least as long as the end of your function. That is, the calling code can't possibly delete the object before your function returns; thus the safety of a "dumb" pointer is guaranteed, as long as you don't try to delete the object inside your function. The only time you need to pass a shared_ptr into a function is when you want to pass ownership of the object to the function, or want the function to make a copy of the pointer. A: boost has a solution for this use case, check enable_shared_from_this A: You can derive from enable_shared_from_this and then you can use "shared_from_this()" instead of "this" to spawn a shared pointer to your own self object. Example in the link: #include <boost/enable_shared_from_this.hpp> class Y: public boost::enable_shared_from_this<Y> { public: shared_ptr<Y> f() { return shared_from_this(); } }; int main() { shared_ptr<Y> p(new Y); shared_ptr<Y> q = p->f(); assert(p == q); assert(!(p < q || q < p)); // p and q must share ownership } It's a good idea when spawning threads from a member function to boost::bind to a shared_from_this() instead of this.
It will ensure that the object is not released.
{ "language": "en", "url": "https://stackoverflow.com/questions/142391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: What's the best way to format 24 hour time in XSLT 1.0? I've had a hard time finding good ways of taking a time format and easily determining if it's valid, then producing a resulting element that has some formatting, using XSLT 1.0. Given the following xml: <root> <srcTime>2300</srcTime> </root> It would be great to produce the resulting xml: <root> <dstTime>23:00</dstTime> </root> However, if the source xml contains an invalid 24 hour time format, the resulting dstTime element should be blank. For example, when the invalid source xml is the following: <root> <srcTime>NOON</srcTime> </root> The resulting xml should be: <root> <dstTime></dstTime> </root> The question is, what's the best XSLT 1.0 fragment that could be written to produce the desired results? The hope would be to keep it quite simple and not have to parse every piece of the time (i.e. pattern matching would be sweet if possible). A: There aren't any regular expressions in XSLT 1.0, so I'm afraid that pattern matching isn't going to be possible. I'm not clear if <srcTime>23:00</srcTime> is supposed to be legal or not?
If it is, try: <dstTime> <xsl:if test="string-length(srcTime) = 4 or string-length(srcTime) = 5"> <xsl:variable name="hour" select="substring(srcTime, 1, 2)" /> <xsl:if test="$hour >= 0 and 24 > $hour"> <xsl:variable name="minute"> <xsl:choose> <xsl:when test="string-length(srcTime) = 5 and substring(srcTime, 3, 1) = ':'"> <xsl:value-of select="substring(srcTime, 4, 2)" /> </xsl:when> <xsl:when test="string-length(srcTime) = 4"> <xsl:value-of select="substring(srcTime, 3, 2)" /> </xsl:when> </xsl:choose> </xsl:variable> <xsl:if test="$minute >= 0 and 60 > $minute"> <xsl:value-of select="concat($hour, ':', $minute)" /> </xsl:if> </xsl:if> </xsl:if> </dstTime> If it isn't, and four digits is the only thing that's legal then: <dstTime> <xsl:if test="string-length(srcTime) = 4"> <xsl:variable name="hour" select="substring(srcTime, 1, 2)" /> <xsl:if test="$hour >= 0 and 24 > $hour"> <xsl:variable name="minute" select="substring(srcTime, 3, 2)" /> <xsl:if test="$minute >= 0 and 60 > $minute"> <xsl:value-of select="concat($hour, ':', $minute)" /> </xsl:if> </xsl:if> </xsl:if> </dstTime> A: XSLT 1.0 does not have any standard support for date/time manipulation. You must write a simple parsing and formatting function. That's not going to be simple, and that's not going to be pretty. XSLT is really designed for tree transformations. This sort of text node manipulations are best done outside of XSLT. A: Depending on the actual xslt processor used you could be able to do desired operations in custom extension function (which you would have to make yourself). Xalan has good support for extension functions, you can write them not only in Java but also in JavaScript or other languages supported by Apache BSF. Microsoft's XSLT engine supports custom extensions as well, as described in .NET Framework Developer's Guide, Extending XSLT Style Sheets A: Have a look at: http://www.exslt.org/ specifically the "dates and times" section. 
I haven't dug deep into it, but it looks like it may be what you're looking for. A: Even the exslt.org time() function won't help you here, because it expects its input to be in the proper format (xs:dateTime or xs:time). This is something that is best fixed outside of XSLT. I say this as someone who routinely uses XSLT to do things it wasn't really designed for and manages to get things working. It was really not designed to parse strings. The ideal solution is to fix whatever is producing the XML document so that it formats times using the international standard conveniently established just for that purpose, using the principle that you shouldn't persist or transmit crap data if you can avoid doing so. But if that's not possible, you should either fix the data before passing it to XSLT or fix it after generating the transform's output. A: And to complete the list, there is also the Date/Time processing module, part of the XSLT Standard Library by Steve Ball.
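Several of the answers recommend fixing the data before it ever reaches XSLT. As a purely illustrative sketch of that pre-processing step (hypothetical, not taken from any answer), a regular expression makes the HHMM validation trivial in, say, Python:

```python
# Illustrative pre-processing step (hypothetical -- not from any answer):
# validate and reformat a 4-digit 24-hour "HHMM" string before it ever
# reaches the XSLT stage.
import re

# Hours 00-23, minutes 00-59.
_TIME_RE = re.compile(r"^(?:[01][0-9]|2[0-3])[0-5][0-9]$")

def format_time(src: str) -> str:
    """Return 'HH:MM' for a valid 24-hour HHMM time, else '' (blank dstTime)."""
    if _TIME_RE.match(src):
        return f"{src[:2]}:{src[2:]}"
    return ""
```

Anything failing the pattern -- "NOON", "2400", "2360" -- simply maps to the empty string, which matches the blank-dstTime behaviour the question asks for.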
{ "language": "en", "url": "https://stackoverflow.com/questions/142400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to stop Eclipse 3.4 losing reference to the JRE in projects Sometimes when using eclipse it loses references to the JRE. i.e. It cannot find classes like Object or ArrayList. Some projects report a problem while others don't, and they both use the same JRE. I have found that if you switch the installed JRE to another and then back again to the one you want to use, it will then work again. Is there a better way to stop it doing this? EDIT: Reloading Eclipse doesn't solve the problem A: The JRE reference in your project is stored using the name you give it in the Installed JREs preference page. Change the name and you break the reference. Just pick names you can keep reusing when switching JREs, or select the workspace default as the JRE for the project. A: I may have a resolution for this. Eclipse was losing the JRE references on many of my Java projects almost daily, and restarting or starting with -clean wasn't helping. I realised that it is clearly a classloader issue of some kind, so what I did was to open the ".classpath" file of each project in the editor and manually move the JRE reference classpathentry line to be the first entry in the file, in the hope that it would load the JRE before any other classes which might be affecting its ability to load successfully. Since doing this, the problem has not reoccurred. I think the files starting with a "." are hidden by filter in the package explorer on a default eclipse install, so you may need to disable the ".* Resources" filter to be able to open the ".classpath" file. A: It happened to me, but after a reloading of Eclipse all continued working well! A: Personally, I would chalk this up to bugs in eclipse. Check and make sure the source zip is installed with your JRE installation in eclipse. I know your pain. Eclipse is fantastic, but it still has some minor bugs. A: I've had the same experience. Only in Ganymede. Always the same project.
Deleting the project (but not the source, of course) and re-creating the project fixes it temporarily. Seems to be happy for a week or two and then it happens again. A: Running with the -clean flag when starting Eclipse will remove temporary junk and make Eclipse run better overall. I've had varying success with this, and it's a lot easier to implement than recreating the project or reinstalling eclipse. Give it a shot and see what happens. Even though this is not for 3.4, it still applies. http://www.eclipsezone.com/eclipse/forums/t61566.html
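For reference, the ".classpath" reordering described in the thread might look like this -- a hypothetical, minimal file (your entries will differ), with the JRE container line moved up to be the first classpathentry:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
	<!-- JRE container entry moved to be the first entry -->
	<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
	<classpathentry kind="src" path="src"/>
	<classpathentry kind="output" path="bin"/>
</classpath>
```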
{ "language": "en", "url": "https://stackoverflow.com/questions/142404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to start Unit and Functional testing of a Ruby Rails website? I am testing a Ruby Rails website and wanted to get started with Unit and Functional testing. A: Cucumber and RSpec are worth a look. They encourage testing in a behaviour-driven, example-based style. RSpec is a library for unit-level testing: describe "hello_world" do it "should say hello to the world" do # RSpec comes with its own mock-object framework built in, # though it lets you use others if you prefer world = mock("World", :population => 6e9) world.should_receive(:hello) hello_world(world) end end It has special support for Rails (e.g. it can test models, views and controllers in isolation) and can replace the testing mechanisms built in to Rails. Cucumber (formerly known as the RSpec Story Runner) lets you write high-level acceptance tests in (fairly) plain English that you could show to (and agree with) a customer, then run them: Story: Commenting on articles As a visitor to the blog I want to post comments on articles So that I can have my 15 minutes of fame Scenario: Post a new comment Given I am viewing an article When I add a comment "Me too!" And I fill in the CAPTCHA correctly Then I should see a comment "Me too!" A: My recommendation is (seriously) just bypass the built in rails unit/functional testing stuff, and go straight for RSpec. The built in rails stuff uses the Test::Unit framework which ships with ruby, and which is more or less a straight port of JUnit/NUnit/AnyOtherUnit. I found these frameworks all rather tedious and annoying, leading to general apathy about writing unit tests, which is obviously not what you're trying to achieve here. RSpec is a different beast, being centered around describing what your code should do, rather than asserting what it already does. It will change how you view testing, and you'll have a heck of a lot more fun doing it. If I sound like a bit of a fanboy, it's only because I really believe RSpec is that good.
I went from being annoyed and tired of unit/functional testing, to a staunch believer in it, pretty much solely because of rspec. A: It sounds like you've already written your application, so I'm not sure you'll get a huge bonus from using RSpec over Test::Unit. Anyhow regardless of which one you choose, you'll quickly run into another issue: managing fixtures and mocks (i.e. your test "data"). So take a look at Shoulda and Factory Girl. A: You can also test out the web interface with a Firefox plug-in such as http://selenium.openqa.org/ It will record clicks and text entry and then plays back and will check the page for the correct display elements. A: Even if the app is already written, I would recommend using RSpec over Test::Unit for the simple fact that no application is ever finished. You're going to want to add features and refactor code. Getting the right test habit early on will help make these alterations less painful
{ "language": "en", "url": "https://stackoverflow.com/questions/142407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Disable caching of errors when using Apache as a proxy When you use Apache proxying (using either ProxyPass or RewriteRule), if the destination returns an error (500 series status) then Apache won't proxy any more requests for 30 seconds. I know there's a way to disable this by setting that value to 0 seconds, but I can't remember how. I think it involves a semicolon and some options but I can't seem to find that detail at apache.org. In a development environment, you'd want this value to be 0, so you can fix the error and reload the page immediately. A: You should use a setting like this (source: the Apache mod_proxy docs): ProxyPass /mirror/foo/ http://backend.example.com/ retry=0
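For context, a hypothetical fuller stanza showing where the option sits -- retry=0 tells mod_proxy to retry a worker that returned an error immediately, instead of waiting out the worker's error-state timeout:

```apache
# Hypothetical reverse-proxy config: retry=0 disables the worker
# error-state timeout so a fixed backend is retried immediately.
ProxyPass        /mirror/foo/ http://backend.example.com/ retry=0
ProxyPassReverse /mirror/foo/ http://backend.example.com/
```

ProxyPassReverse is included only because the two directives usually travel together; the retry behaviour itself is controlled entirely by the ProxyPass worker parameter.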
{ "language": "en", "url": "https://stackoverflow.com/questions/142409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Is there a way to determine the signature of a Lua function? Recently, Lee Baldwin showed how to write a generic, variable argument memoize function. I thought it would be better to return a simpler function where only one parameter is required. Here is my total bogus attempt: local function memoize(f) local cache = {} if select('#', ...) == 1 then return function (x) if cache[x] then return cache[x] else local y = f(x) cache[x] = y return y end end else return function (...) local al = varg_tostring(...) if cache[al] then return cache[al] else local y = f(...) cache[al] = y return y end end end end Obviously, select('#', ...) fails in this context and wouldn't really do what I want anyway. Is there any way to tell inside memoize how many arguments f expects? "No" is a fine answer if you know for sure. It's not a big deal to use two separate memoize functions. A: I guess you could go into the debug info and determine this from the source-code, but basically it's a "no", sorry. A: Yes, for Lua functions but not C functions. It's a bit torturous and a little sketchy. debug.getlocal works on called functions so you have to call the function in question. It doesn't show any hint of ... unless the call passes enough parameters. The code below tries 20 parameters. debug.sethook with the "call" event gives an opportunity to intercept the function before it runs any code. This algorithm works with Lua 5.2. 
Older versions would be similar but not the same: assert(_VERSION=="Lua 5.2", "Must be compatible with Lua 5.2") A little helper iterator (could be inlined for efficiency): local function getlocals(l) local i = 0 local direction = 1 return function () i = i + direction local k,v = debug.getlocal(l,i) if (direction == 1 and (k == nil or k.sub(k,1,1) == '(')) then i = -1 direction = -1 k,v = debug.getlocal(l,i) end return k,v end end Returns the signature (but could return a parameter count and usesVarargs, instead): local function dumpsig(f) assert(type(f) == 'function', "bad argument #1 to 'dumpsig' (function expected)") local p = {} pcall (function() local oldhook local hook = function(event, line) for k,v in getlocals(3) do if k == "(*vararg)" then table.insert(p,"...") break end table.insert(p,k) end debug.sethook(oldhook) error('aborting the call') end oldhook = debug.sethook(hook, "c") -- To test for vararg must pass a least one vararg parameter f(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20) end) return "function("..table.concat(p,",")..")" end A: I'm pretty sure you can't do that in Lua.
{ "language": "en", "url": "https://stackoverflow.com/questions/142417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Java: Enum parameter in method I have a method, let's say: private static String drawCellValue( int maxCellLength, String cellValue, String align) { } and as you can notice, I have a parameter called align. Inside this method I'm going to have some if condition on whether the value is a 'left' or 'right'. Setting the parameter as String, obviously I can pass any string value.. I would like to know if it's possible to have an Enum value as a method parameter, and if so, how? Just in case someone thinks about this; I thought about using a Boolean value but I don't really fancy it. First, how to associate true/false with left/right? (Ok, I can use comments but I still find it dirty) and secondly, I might decide to add a new value, like 'justify', so if I have more than 2 possible values, Boolean type is definitely not possible to use. Any ideas? A: This should do it: private enum Alignment { LEFT, RIGHT }; String drawCellValue (int maxCellLength, String cellValue, Alignment align){ if (align == Alignment.LEFT) { //Process it... } } A: I like this a lot better. It reduces the if/switch; just do: private enum Alignment { LEFT, RIGHT; void process() { //Process it... } }; String drawCellValue (int maxCellLength, String cellValue, Alignment align){ align.process(); } Of course, it can be: String process(...) { //Process it... } A: Even cooler: with enums you can use switch: switch (align) { case LEFT: { // do stuff break; } case RIGHT: { // do stuff break; } default: { //added TOP_RIGHT but forgot about it? throw new IllegalArgumentException("Can't yet handle " + align); } } Enums are cool because the output of the exception will be the name of the enum value, rather than some arbitrary int value. A: Sure, you could use an enum. Would something like the following work?
enum Alignment { LEFT, RIGHT } private static String drawCellValue(int maxCellLength, String cellValue, Alignment alignment) { } If you wanted to use a boolean, you could rename the align parameter to something like alignLeft. I agree that this implementation is not as clean, but if you don't anticipate a lot of changes and this is not a public interface, it might be a good choice. A: You could also reuse SwingConstants.{LEFT,RIGHT}. They are not enums, but they do already exist and are used in many places. A: I am not too sure I would go and use an enum as a full-fledged class - this is an object oriented language, and one of the most basic tenets of object orientation is that a class should do one thing and do it well. An enum is doing a pretty good job at being an enum, and a class is doing a good job as a class. Mixing the two I have a feeling will get you into trouble - for example, you can't pass an instance of an enum as a parameter to a method, primarily because you can't create an instance of an enum. So, even though you might be able to call enum.process(), it does not mean that you should. A: You can use an enum in said parameters like this: public enum Alignment { LEFT, RIGHT } private static String drawCellValue( int maxCellLength, String cellValue, Alignment align) {} then you can use either a switch or if statement to actually do something with said parameter. switch(align) { case LEFT: //something case RIGHT: //something default: //something } if(align == Alignment.RIGHT) { /*code*/}
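Pulling the answers together, here is one hypothetical way the question's drawCellValue could use the enum. The padding logic is invented for illustration; only the method signature and the enum come from the thread above.

```java
// Hypothetical sketch: the padding logic is invented for illustration;
// only the signature and the Alignment enum come from the thread above.
public class CellDrawer {

    public enum Alignment { LEFT, RIGHT }

    /** Pads (or truncates) cellValue to exactly maxCellLength characters,
     *  anchoring the text to the side named by align. */
    public static String drawCellValue(int maxCellLength, String cellValue,
                                       Alignment align) {
        if (cellValue.length() >= maxCellLength) {
            return cellValue.substring(0, maxCellLength);
        }
        StringBuilder padding = new StringBuilder();
        for (int i = cellValue.length(); i < maxCellLength; i++) {
            padding.append(' ');
        }
        return (align == Alignment.LEFT) ? cellValue + padding
                                         : padding + cellValue;
    }
}
```

A call such as drawCellValue(10, name, Alignment.RIGHT) documents the intent at the call site in a way a bare boolean never could, and adding a JUSTIFY constant later is a one-line change to the enum.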
{ "language": "en", "url": "https://stackoverflow.com/questions/142420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: App to Change Ldap Password for a JIRA/SVN server I'm setting up a server to offer JIRA and SVN. I figure, I'll use LDAP to keep the identity management simple. So, before I write one.... is there a good app out there to let users change their ldap password? I want something that lets a user authenticate with ldap and update their password. A form with username, old password, new password and verification would be enough. I can write my own, but it seems silly to do so if there's already a good app out there that handles this.... Thanks for the help. A: I guess you could try LDAP Self Service Portal. We had the same need, but finally used the Account Manager plugin for Trac, the collaborative environment we are using. A: Thanks. I broke down and wrote my own. I used the google web toolkit. It was pretty trivial.
{ "language": "en", "url": "https://stackoverflow.com/questions/142431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you establish context and call a WebSphere EJB from the Sun JRE (not IBM) Is there a way to call an EJB that is served through WebSphere (iiop://host:port/ejbName) from a vanilla JRE (like Sun)? A lot of people have been telling me that this type of architecture relies on a homogeneous environment. Thoughts? A: Yes, this is possible. You have to create something called a thin client. It has limitations on JNDI lookups due to not being part of the container environment, so fully qualified names have to be used. Just search for "thin client ibm ejb" on google. Unfortunately, I don't have the link to the appropriate libraries (for WAS 6) here, they are at work. A: Although it’s possible, I wouldn’t recommend it because you’re asking for trouble using RMI-IIOP in a heterogeneous environment. My approach would be to expose the EJB as a web service and consume it at the client.
{ "language": "en", "url": "https://stackoverflow.com/questions/142445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I call a Win32 Function in PowerShell 1.0 using P/Invoke? There are many scenarios where it would be useful to call a Win32 function or some other DLL from a PowerShell script. Given the following function signature: bool MyFunction( char* buffer, int* bufferSize ) I hear there is something that makes this easier in PowerShell CTP 2, but I'm curious how this is best done in PowerShell 1.0. The fact that the function needing to be called is using pointers could affect the solution (yet I don't really know). So the question is what's the best way to write a PowerShell script that can call an exported Win32 function like the one above? Remember, this is for PowerShell 1.0. A: To call unmanaged code from PowerShell, use the Invoke-Win32 function created by Lee Holmes. You can find the source here. There you can see an example of how to call a function that has pointers, but a more trivial usage would be: PS C:\> Invoke-Win32 "msvcrt.dll" ([Int32]) "puts" ([String]) "Test" Test 0 A: There isn't any mechanism in PowerShell 1.0 to directly call Win32 APIs. You could of course write a C# or VB.NET helper class to do this for you and call that from PowerShell. Update: Take a look at - http://blogs.msdn.com/powershell/archive/2006/04/25/583236.aspx http://www.leeholmes.com/blog/ManagingINIFilesWithPowerShell.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/142452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: WMI calls from WebService So I have made a webservice that interfaces with a set of data contained in a WMI namespace. It works fine when I run it with the ASP.NET built-in development web server, and returns the data as requested. However when I publish it to an IIS 6 server (win 2003), the webservice no longer allows me to execute the WMI method calls. However it does still let me read from it. Instead it gives me: System.Management.ManagementException: Access denied at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode) at System.Management.ManagementObject.InvokeMethod(String methodName, ManagementBaseObject inParameters, InvokeMethodOptions options) at System.Management.ManagementObject.InvokeMethod(String methodName, Object[] args) I have tried to fix this (and yes I know this is a bad practice but I just wanted to see if it would work) by adding the "Everyone" group to that WMI namespace's security settings and giving them full permissions (which includes execute). Then resetting IIS. However I still get this error. Anyone got any ideas? A: Running with IIS as a 'proper' user account should work. The 'everyone' group doesn't mean 'absolutely everyone' -- it means 'every authenticated user'. If you can't authenticate you are still not part of everyone. If you are going after a WMI resource which requires network rights then it will still fail. Other than that, maybe accessing WMI requires a user right that the default account IIS is running as doesn't have. A: Well, technically, Everyone and "Authenticated Users" are different. Everyone includes the "guest" account and "guests" group, null and anonymous connections. Everyone is everyone. "Authenticated Users" is anyone who's presented credentials. Slightly subtle, but important. If guest is disabled, then I believe they are for all practical purposes identical, although Everyone might include "null" and "anonymous" sessions.
{ "language": "en", "url": "https://stackoverflow.com/questions/142453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are the main differences between the popular web frameworks? There are lots of web application frameworks available these days, for pretty much every language out there. In your experience, what are their strengths, weaknesses, and unique features? Assuming the luxury of choice, what factors would lead you to consider one over another other? I'm most interested in people's direct experience with one or more frameworks, rather than an exhaustive comparison of everything out there. Hopefully the SO community has programmers who have good and bad experiences with things like Rails, ASP.NET, Django, TurboGears, or JSF. It would also be great to hear if anyone is using one of the less mainstream frameworks like Seaside or Weblocks. Programming language is an obvious difference, but a Java vs Ruby flamewar won't be much fun, and most of these frameworks seem to be at least as much an investment in technology, tools and complexity as their language of choice; so I'm more interested in things like: * *Development speed and convenience *Barriers to entry - both in terms of developer training, and of infrastructure needed *Lock-in - how much code could you keep if you had to switch frameworks? *Flexibility - does the framework dictate your architecture or design? (Whether that would be a good or bad thing is probably best left to a separate discussion.) *Performance, scalability, and stability - obviously depending on the developers! A: Django vs Struts. Development speed and convenience. Django - up and running in the time required to build the model (in Python), define the Admin mappings (2-3 lines of code per model class) and create HTML templates to work with the default master-detail views. Struts - have to define a database in SQL, then define ORM mappings in iBatis. Then define, test and build various application components, using action classes and JSP template pages. Oh, and I need to define EJB's to move data from application to JSP's. 
It's all got to compile and I've got to work through numerous details just to get something that fits the compile rules. Barriers to entry - both in terms of developer training, and of infrastructure needed Constant across all frameworks and languages. This is pretty much a don't care item. No language or framework is inherently easy to train. All web frameworks have similar infrastructure requirements. Lock-in - how much code could you keep if you had to switch frameworks? This doesn't make a lot of sense. If you switch from Tomcat to any of the Tomcat derivatives, you can preserve a lot of Java code. Otherwise, you generally don't preserve much code when you switch framework. Flexibility - does the framework dictate your architecture or design? (Whether that would be a good or bad thing is probably best left to a separate discussion.) Actually, that's not a separate discussion. That's the point. Frameworks dictate your architecture -- and that's a good thing. Indeed, the framework is code you don't have to write, test, debug or support. It's a good thing that your application is confined by the framework to a proven, workable structure. Performance, scalability, and stability - obviously depending on the developers! Performance is language (not framework). It's design. To an extent, its also implementation configuration. Scalability is framework (not language). It's design and configuration. Stability is across the board: OS, language, framework, design, programming, QA and implementation configuration. A: I am going to briefly address each area for three popular Python frameworks. This is only based on my personal experiences and observations. Development speed and convenience For TurboGears, Pylons, and Django, development speed is roughly equal. Being modern frameworks, it's easy to get started on a new site and start throwing together pages. 
Python is famously fast to develop and debug and I would put any Python framework as having a shorter development time than any other setup I've worked with (including PHP, Perl, Embedded Perl, and C#/ASP.Net). Barriers to entry - developer training and infrastructure If you know Python and are willing to watch a 20 minute video tutorial, you can create a fairly complete wiki-type site from scratch. Or you can walk through a social-bookmarking site tutorial in 30 minutes (including installation). These are TurboGears examples but the other two frameworks have nearly identical tutorials as well. The test/development infrastructure that comes out of the box with these frameworks is generally enough to complete most sites. At any point, you can swap out components to meet your production environment requirements. For example, SQLite is fine for setting up your models and loading test data, but you will want to install MySQL (for example) before going live or storing large amounts of data. In all cases, the requirements are very low and dictated entirely by your scalability requirements and not any peculiarities of the framework. If you are comfortable with a certain template language or ORM, it will probably plug right in. Lock-in This is a generalized problem across all frameworks. When you select a language, you limit your code-reuse options. When you select a templater, you are again locked in (although that's easier to change, in general, than other things). The same goes for your ORM, database, and so on. There is nothing these frameworks do specifically that will help or hinder lock-in. Flexibility It's all about MVC with these three frameworks. As you said, that's a very different discussion! Performance, scalability, and stability Well, if you write good code, your site will perform well! Again, this is a problem across all frameworks addressed by different development techniques and is probably way outside the scope of this answer. 
A: This is an incredibly subjective question.. and that's a tag you ought to add to your question. As several comments have already suggested, you've already specified a pretty good guide; what are you actually asking? There's a billion opinions about this sort of thing and definitely no right answer! Personally, I started using .html, moved onto php, tried ruby (hated it), discovered Python / DJango.. and have been happy ever since. That's a very unique path to take though (probably) so your mileage may vary :)
{ "language": "en", "url": "https://stackoverflow.com/questions/142470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: WM_GETMINMAXINFO, the ptMaxSize not having any effect In handling a WM_GETMINMAXINFO message, I attempt to alter the parameter MINMAXINFO structure by changing the ptMaxSize. It doesn't seem to have any effect. When I receive the WM_SIZE message, I always get the same value, no matter whether I increase or decrease the ptMaxSize in the WM_GETMINMAXINFO. A: Are you sure your window is maximized? As per http://msdn.microsoft.com/en-us/library/ms632605(VS.85).aspx, MINMAXINFO::ptMaxSize controls the maximum size of the window when maximized. If you want to control the maximum tracking size of your window (the maximum size when the window is normal), you need to modify MINMAXINFO::ptMaxTrackSize. A: Make sure you are handling the WM_GETMINMAXINFO message in the window procedure of the main application. The message only makes sense when handled by the main frame window and will have no effect if the message is handled by one of the child window procedures. A: A window must have the WS_THICKFRAME or WS_CAPTION style to receive WM_GETMINMAXINFO. This is basically all you need to know.
{ "language": "en", "url": "https://stackoverflow.com/questions/142478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Auto-generation of .NET unit tests Is there such a thing as unit test generation? If so... ...does it work well? ...What are the auto generation solutions that are available for .NET? ...are there examples of using a technology like this? ...is this only good for certain types of applications, or could it be used to replace all manually written unit testing? A: Take a look at Pex. It's a Microsoft Research project. From the website: Pex generates Unit Tests from hand-written Parameterized Unit Tests through Automated Exploratory Testing based on Dynamic Symbolic Execution. UPDATE for 2019: As mentioned in the comments, Pex is now called IntelliTest and is a feature of Visual Studio Enterprise Edition. It supports emitting tests in MSTest, MSTest V2, NUnit, and xUnit format and it is extensible so you can use it with other unit test frameworks. But be aware of the following caveats: * *Supports only C# code that targets the .NET Framework. *Does not support x64 configurations. *Available in Visual Studio Enterprise Edition only A: I believe there's no point in Unit test generation, as far as TDD goes. You only make unit tests so that you're sure that you (as a developer) are on track w/ regards to design and specs. Once you start generating tests automatically, it loses that purpose. Sure it would probably mean 100% code coverage, but that coverage would be senseless and empty. Automated unit tests also mean that your strategy is test-after, which is the opposite of TDD's test-before tenet. Again, TDD is not about tests. That being said I believe MSTest does have an automatic unit-test generation tool -- I was able to use one with VS2005. A: Updated for 2017: Unit Test Boilerplate Generator works for VS 2015-2017 and is being maintained. Seems to work as advertised. A: Parasoft .TEST includes test-generation functionality. It uses the NUnit framework for test description and assertion evaluation. 
It is possible to prepare a regression test suite by automatically generating scenarios (constructing inputs and calling the tested method) and creating assertions based on the current code base's behavior. Later, after the code base under test evolves, the assertions indicate regressions or can easily be recorded again. A: I created ErrorUnit. It generates MSTest or NUnit unit tests from your paused Visual Studio session or your error logs; so far it mocks class variables, method parameters, and EF data access. See http://ErrorUnit.com No Unit Test generator can do everything. Unit Tests are classically separated into three parts: Arrange, Act, and Assert; the Arrange portion is the largest part of a unit test, and it sets up all the preconditions to a test, mocking all the data that is going to be acted upon in the test. The Act portion of a Unit Test is usually one line. It activates the portion of code being tested, passing in that data. Finally, the Assert portion of the test takes the Act portion's results and verifies that it met expectations (can be zero lines when just making sure there is no error). Unit Test generators generally can only do the Arrange and Act portions on unit test creation; however, unit test generators generally do not write Assert portions as only you know what is correct and what is incorrect for your purposes. So some manual entry/extending of Unit Tests is necessary for completeness. A: I agree with Jon. Certain types of testing, like automated fuzz testing, definitely benefit from automated generation. While you can use the facilities of a unit testing framework to accomplish this, this doesn't accomplish the goals associated with good unit test coverage. A: Selenium generates unit tests from user commands on a web page, pretty nifty. 
A: I know this thread is old but for the sake of all developers, there is a good NUnit test generator extension: https://marketplace.visualstudio.com/items?itemName=NUnitDevelopers.TestGeneratorNUnitextension EDIT : Please use xUnit (supported by Microsoft): https://github.com/xunit/xunit Autogenerated test : https://marketplace.visualstudio.com/items?itemName=YowkoTsai.xUnitnetTestGenerator Good dev A: I've used NStub to stub out tests for my classes. It works fairly well. A: I've used tools to generate test cases. I think it works well for higher-level, end-user oriented testing. Stuff that's part of User Acceptance Testing, more so than pure unit testing. I use the unit test tools for this acceptance testing. It works well. See Tooling to Build Test Cases. A: There is a commercial product called AgitarOne (www.agitar.com) that automatically generates JUnit test classes. I haven't used it so can't comment on how useful it is, but if I was doing a Java project at the moment I would be looking at it. I don't know of a .net equivalent (Agitar did once announce a .net version but AFAIK it never materialised).
{ "language": "en", "url": "https://stackoverflow.com/questions/142481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Eclipse: Dependency Management What are some methods of utilising Eclipse for Dependency Management? A: A simpler way to go is the Maven Eclipse plugin (as opposed to a Maven plugin for Eclipse). It's simply a maven plugin that generates the .project and .classpath file based on the contents of the pom, you just run mvn eclipse:eclipse and you're done. It uses a classpath variable in Eclipse to locate the local maven repo. I personally prefer this approach most of the time because you have more control over when the maven plugin updates are done. It's also one less Eclipse plugin to deal with. The GUI features of the m2eclipse plugin in the latest version is pretty nice, though. There's also an alternative to the m2eclipse plugin called Q4E, now called Eclipse IAM. A: I really like the The Maven Integration for Eclipse (m2eclipse, Eclipse m2e). I use it purely for the dependency management feature. It's great not having to go out and download a bunch of new jars new each time I set up a project. A: Another option is ivy. Ivy has eclipse integration as well. A comparison of maven and ivy can be found here: http://ant.apache.org/ivy/m2comparison.html
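To make the Maven workflow above concrete, here is a minimal sketch of the relevant pom.xml fragment; the dependency shown (JUnit) is just an illustrative placeholder, not something from the question. After editing the dependencies, running mvn eclipse:eclipse regenerates the .project and .classpath files for Eclipse.

```xml
<!-- hypothetical dependency section of a pom.xml; after changing it,
     run "mvn eclipse:eclipse" to regenerate the Eclipse .classpath -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
  </dependency>
</dependencies>
```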
{ "language": "en", "url": "https://stackoverflow.com/questions/142504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How much access do you give BA/PM's? Where I work we have a little bit of a dilemma... I work on a small team developing an application for internal use. We recently just received a new PM to the project. She would like to have access to our database and our source code (stored in svn). Our previous PM did not see a need, nor want, to have access to any of the things "in our sandbox". Having said that, what is the proper amount of access to give a PM/BA? Is there a security breach of some kind with this? If you agree that the PM should have access to one or both, what kind of access? I have thought on this for a bit and at first I did not want the PM/BA in my sandbox, but I have since gone the other way, thinking: what harm could it do? Am I incorrect? In the end, is this a battle worth fighting? A: Give her access. Make her check it out of source control and track her like anyone else. If she changes anything you'll have a history. If she makes suggestions about the implementation, it might help. If she's a bully and starts ranting about the source code well... she probably would have found a way to do that anyway. A: It really depends on how much the PM knows about programming. Some PMs I work for I would feel completely comfortable giving them full access to SVN, read and commit privileges. Other PMs I would trust them with read privileges, although I don't think they would know what to do with the code when they saw it. A: You will probably want to give read-only access. As managers tend to keep everything in their hands, they might change the code as they see fit, breaking your procedures for review/testing etc. Giving read-only access would satisfy them if they only want to see what is being done/who is doing what. A: I've never heard of this being considered a problem or a security issue. In fact, after reading the question, I have some serious questions about what your last PM was doing! 
By all means, embrace the fact that you have an interested manager and give her at least read access so she can do a check out and see what it is her developers are working with. A: Give her full access, if she wants it. She's supposed to manage the project you are developing, and to do that efficiently, she might need to be able to look at any part of the project. Of course, there's always the danger that she might do something stupid or malicious. If you have auditing of any changes as part of your process, you'll be able to find out if she messes anything up. A: It depends a lot on what specific responsibilities the PM has on the project. Will she be helping users with usage problems and troubleshooting? Helping with testing? Is there any reason why being able to explore the data would help her do her job? I think read-only access to the db and no-commit access to the source isn't likely to be harmful, and if it makes her feel like she's more a part of the team and gets her engaged with the project, then it's all for the best. And it certainly won't do anything for your rapport with her if you refuse and she goes over your head and gets access anyway. A: To address the concern about security, make sure you get a sign off from a higher-level manager if you're going to give the PM any access at all. If anything does go wrong, at least you can show you were following company policies (or were exempted from them by someone higher up). As for access, the PM has no business making changes to the code so be firm on no write access. Even read access should not be necessary unless they're actually doing code reviews or something that would require them to see the code.
{ "language": "en", "url": "https://stackoverflow.com/questions/142507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I check OS with a preprocessor directive? I need my code to do different things based on the operating system on which it gets compiled. I'm looking for something like this: #ifdef OSisWindows // do Windows-specific stuff #else // do Unix-specific stuff #endif Is there a way to do this? Is there a better way to do the same thing? A: Microsoft C/C++ compiler (MSVC) Predefined Macros can be found here I think you are looking for: * *_WIN32 - Defined as 1 when the compilation target is 32-bit ARM, 64-bit ARM, x86, or x64. Otherwise, undefined *_WIN64 - Defined as 1 when the compilation target is 64-bit ARM or x64. Otherwise, undefined. GCC compiler predefined macros can be found here I think you are looking for: * *__GNUC__ *__GNUC_MINOR__ *__GNUC_PATCHLEVEL__ Search the documentation for your compiler's predefined macros. A: show GCC defines on Windows: gcc -dM -E - <NUL: on Linux: gcc -dM -E - </dev/null Predefined macros in MinGW: WIN32 _WIN32 __WIN32 __WIN32__ __MINGW32__ WINNT __WINNT __WINNT__ _X86_ i386 __i386 on UNIXes: unix __unix__ __unix A: In most cases it is better to check whether a given functionality is present or not. For example: if the function pipe() exists or not. A: #ifdef _WIN32 // do something for windows like include <windows.h> #elif defined __unix__ // do something for unix like include <unistd.h> #elif defined __APPLE__ // do something for mac #endif A: On MinGW, the _WIN32 define check isn't working. Here's a solution: #if defined(_WIN32) || defined(__CYGWIN__) // Windows (x86 or x64) // ... #elif defined(__linux__) // Linux // ... #elif defined(__APPLE__) && defined(__MACH__) // Mac OS // ... #elif defined(unix) || defined(__unix__) || defined(__unix) // Unix like OS // ... #else #error Unknown environment! #endif For more information please see: https://sourceforge.net/p/predef/wiki/OperatingSystems/ A: Based on nadeausoftware and Lambda Fairy's answer. 
#include <stdio.h> /** * Determination of the platform (operating system) * Fully supported only on GNU GCC/G++, partially on Clang/LLVM */ #if defined(_WIN32) #define PLATFORM_NAME "windows" // Windows #elif defined(_WIN64) #define PLATFORM_NAME "windows" // Windows #elif defined(__CYGWIN__) && !defined(_WIN32) #define PLATFORM_NAME "windows" // Windows (Cygwin POSIX under Microsoft Windows) #elif defined(__ANDROID__) #define PLATFORM_NAME "android" // Android (implies Linux, so it must come first) #elif defined(__linux__) #define PLATFORM_NAME "linux" // Debian, Ubuntu, Gentoo, Fedora, openSUSE, RedHat, Centos and other #elif defined(__unix__) || !defined(__APPLE__) && defined(__MACH__) #include <sys/param.h> #if defined(BSD) #define PLATFORM_NAME "bsd" // FreeBSD, NetBSD, OpenBSD, DragonFly BSD #endif #elif defined(__hpux) #define PLATFORM_NAME "hp-ux" // HP-UX #elif defined(_AIX) #define PLATFORM_NAME "aix" // IBM AIX #elif defined(__APPLE__) && defined(__MACH__) // Apple OSX and iOS (Darwin) #include <TargetConditionals.h> #if TARGET_IPHONE_SIMULATOR == 1 #define PLATFORM_NAME "ios" // Apple iOS #elif TARGET_OS_IPHONE == 1 #define PLATFORM_NAME "ios" // Apple iOS #elif TARGET_OS_MAC == 1 #define PLATFORM_NAME "osx" // Apple OSX #endif #elif defined(__sun) && defined(__SVR4) #define PLATFORM_NAME "solaris" // Oracle Solaris, Open Indiana #else #define PLATFORM_NAME NULL #endif // Return the name of the platform, if determined, otherwise an empty string const char *get_platform_name() { return (PLATFORM_NAME == NULL) ? "" : PLATFORM_NAME; } int main(int argc, char *argv[]) { puts(get_platform_name()); return 0; } Tested with GCC and clang on: * *Debian 8 *Windows (MinGW) *Windows (Cygwin) A: The Predefined Macros for OS site has a very complete list of checks. 
Here are a few of them, with links to where they're found: Windows _WIN32   Both 32 bit and 64 bit _WIN64   64 bit only __CYGWIN__ Unix (Linux, *BSD, but not Mac OS X) See this related question on some of the pitfalls of using this check. unix __unix __unix__ Mac OS X __APPLE__ Also used for classic __MACH__ Both are defined; checking for either should work. Linux __linux__ linux Obsolete (not POSIX compliant) __linux Obsolete (not POSIX compliant) FreeBSD __FreeBSD__ Android __ANDROID__ A: There is no standard macro that is set according to C standard. Some C compilers will set one on some platforms (e.g. Apple's patched GCC sets a macro to indicate that it is compiling on an Apple system and for the Darwin platform). Your platform and/or your C compiler might set something as well, but there is no general way. Like hayalci said, it's best to have these macros set in your build process somehow. It is easy to define a macro with most compilers without modifying the code. You can simply pass -D MACRO to GCC, i.e. gcc -D Windows gcc -D UNIX And in your code: #if defined(Windows) // do some cool Windows stuff #elif defined(UNIX) // do some cool Unix stuff #else # error Unsupported operating system #endif A: Sorry for the external reference, but I think it is suited to your question: C/C++ tip: How to detect the operating system type using compiler predefined macros A: You can use Boost.Predef which contains various predefined macros for the target platform including the OS (BOOST_OS_*). Yes boost is often thought as a C++ library, but this one is a preprocessor header that works with C as well! This library defines a set of compiler, architecture, operating system, library, and other version numbers from the information it can gather of C, C++, Objective C, and Objective C++ predefined macros or those defined in generally available headers. 
The idea for this library grew out of a proposal to extend the Boost Config library to provide more, and consistent, information than the feature definitions it supports. What follows is an edited version of that brief proposal. For example #include <boost/predef.h> // or just include the necessary header // #include <boost/predef/os.h> #if BOOST_OS_WINDOWS #elif BOOST_OS_ANDROID #elif BOOST_OS_LINUX #elif BOOST_OS_BSD #elif BOOST_OS_AIX #elif BOOST_OS_HAIKU ... #endif The full list can be found in BOOST_OS operating system macros Demo on Godbolt See also How to get platform IDs from boost? A: Use #define OSsymbol and #ifdef OSsymbol where OSsymbol is a #define'able symbol identifying your target OS. Typically you would include a central header file defining the selected OS symbol and use OS-specific include and library directories to compile and build. You did not specify your development environment, but I'm pretty sure your compiler provides global defines for common platforms and OSes. See also http://en.wikibooks.org/wiki/C_Programming/Preprocessor A: Just to sum it all up, here are a bunch of helpful links. * *GCC Common Predefined Macros *SourceForge predefined Operating Systems *MSDN Predefined Macros *The Much-Linked NaudeaSoftware Page *Wikipedia!!! *SourceForge's "Overview of pre-defined compiler macros for standards, compilers, operating systems, and hardware architectures." *FreeBSD's "Differentiating Operating Systems" *All kinds of predefined macros *libportable A: I did not find a Haiku definition here. To be complete, the Haiku OS definition is simply __HAIKU__ A: Some compilers will generate #defines that can help you with this. Read the compiler documentation to determine what they are. MSVC defines _WIN32, GCC has some you can see with touch foo.h; gcc -dM -E foo.h A: You can use pre-processor directives such as #error to check at compile time; you don't need to run this program at all, simply compile it. 
#if defined(_WIN32) || defined(_WIN64) || defined(__WINDOWS__) #error Windows_OS #elif defined(__linux__) #error Linux_OS #elif defined(__APPLE__) && defined(__MACH__) #error Mach_OS #elif defined(unix) || defined(__unix__) || defined(__unix) #error Unix_OS #else #error Unknown_OS #endif #include <stdio.h> int main(void) { return 0; } A: I wrote a small library to get the operating system you are on, it can be installed using clib (the C package manager), so it is really simple to use it as a dependency for your projects. 
{ "language": "en", "url": "https://stackoverflow.com/questions/142508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "292" }
Q: Tools to help with editing a remote web application I often have nothing more than FTP access to a server on which the application is placed. What I usually use now is the "Keeping remote directory up to date" feature of WinSCP. Files of the local copy (in theory at least) keep being uploaded to a remote server as soon as they get saved and then all I need is to refresh a page in a browser to see the result (sometimes clearing session variables beforehand). WinSCP's bugginess and FTP protocol deficiencies aside, I feel this may be a somewhat primitive approach and perhaps there are better ways to get a task like that done. A: I have a similar situation. I used to use Dreamweaver for web development but have switched to other tools that do not have the file sync features of Dreamweaver. I have recently discovered BeyondCompare, which is a diff/merge tool that works really well for comparing local and remote directory trees. It is highly configurable and has a sync mode as well. Very nice. A: If you are using Maven to automate your build I would recommend you use the Cargo plugin to deploy your application to the server. If you cannot use your container's deployer you can still use Maven (with a bit more work) to deploy your web application using SCP, SFTP, FTP or even WebDAV via Wagon.
{ "language": "en", "url": "https://stackoverflow.com/questions/142519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: .dbc --> .csv I have a little utility that converts .dbc files to .csv files; trouble is, somewhere in the conversion some data is lost/destroyed/whatever. I input a.dbc into the converter, it produces a.csv. I delete a.dbc, and then run a.csv back through the converter, and I come back with a "slightly" different .dbc file than I had started with. Does anyone know any better way of converting these files? Without loss of information.. I open both files in HexCMP (compares two hex files, shows you the differences) and the differences are totally random throughout the file. A: Sounds like this is nothing more than a buggy utility. If you convert the same .dbc file to a .csv file twice in a row, do you get the exact same .csv file? If you run the .csv through twice do you get the same .dbc file out both times? That would at least tell you which side of the conversion the bugs are in. A: Do you have access to FoxPro to export the file as a CSV directly from FoxPro without using the utility? That would allow you to compare the CSV file created from FoxPro versus your utility to try and narrow down where the problem is.
{ "language": "en", "url": "https://stackoverflow.com/questions/142523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Highlight text inside of a textarea Is it possible to highlight text inside of a textarea using javascript? Either changing the background of just a portion of the text area or making a portion of the text selected? A: Easy script I wrote for this: Jsfiddle OPTIONS: * *Optional char counter. *Highlight several patterns with different colors. *Regex. *Collect matches to other containers. *Easy styling: fonts color and face, backgrounds, border radius and lineheight. *Ctrl+Shift for direction change. //include function like in the fiddle! //CREATE ELEMENT: create_bind_textarea_highlight({ eleId:"wrap_all_highlighter", width:400, height:110, padding:5, background:'white', backgroundControls:'#585858', radius:5, fontFamilly:'Arial', fontSize:13, lineHeight:18, counterlettres:true, counterFont:'red', matchpatterns:[["(#[0-9A-Za-z]{0,})","$1"],["(@[0-9A-Za-z]{0,})","$1"]], hightlightsColor:['#00d2ff','#FFBF00'], objectsCopy:["copy_hashes","copy_at"] //PRESS Ctrl + SHIFT for direction swip! }); //HTML EXAMPLE: <div id="wrap_all_highlighter" placer='1'></div> <div id='copy_hashes'></div><!--Optional--> <div id='copy_at'></div><!--Optional--> Have Fun! A: Try this piece of code I wrote this morning, it will highlight a defined set of words: <html> <head> <title></title> <!-- Load jQuery --> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script> <!-- The javascript containing the plugin and the code to init the plugin --> <script type="text/javascript"> $(function() { // let's init the plugin, that we called "highlight". // We will highlight the words "hello" and "world", // and set the input area to a width and height of 500 and 250 respectively. 
$("#container").highlight({ words: ["hello","world"], width: 500, height: 250 }); }); // the plugin that would do the trick (function($){ $.fn.extend({ highlight: function() { // the main class var pluginClass = function() {}; // init the class // Bootloader pluginClass.prototype.__init = function (element) { try { this.element = element; } catch (err) { this.error(err); } }; // centralized error handler pluginClass.prototype.error = function (e) { // manage error and exceptions here //console.info("error!",e); }; // Centralized routing function pluginClass.prototype.execute = function (fn, options) { try { options = $.extend({},options); if (typeof(this[fn]) == "function") { var output = this[fn].apply(this, [options]); } else { this.error("undefined_function"); } } catch (err) { this.error(err); } }; // ********************** // Plugin Class starts here // ********************** // init the component pluginClass.prototype.init = function (options) { try { // the element's reference ( $("#container") ) is stored into "this.element" var scope = this; this.options = options; // just find the different elements we'll need this.highlighterContainer = this.element.find('#highlighterContainer'); this.inputContainer = this.element.find('#inputContainer'); this.textarea = this.inputContainer.find('textarea'); this.highlighter = this.highlighterContainer.find('#highlighter'); // apply the css this.element.css('position','relative'); // place both the highlight container and the textarea container // on the same coordonate to superpose them. this.highlighterContainer.css({ 'position': 'absolute', 'left': '0', 'top': '0', 'border': '1px dashed #ff0000', 'width': this.options.width, 'height': this.options.height, 'cursor': 'text' }); this.inputContainer.css({ 'position': 'absolute', 'left': '0', 'top': '0', 'border': '1px solid #000000' }); // now let's make sure the highlit div and the textarea will superpose, // by applying the same font size and stuffs. 
// the highlighter must have a white text so it will be invisible this.highlighter.css({ 'padding': '7px', 'color': '#eeeeee', 'background-color': '#ffffff', 'margin': '0px', 'font-size': '11px', 'font-family': '"lucida grande",tahoma,verdana,arial,sans-serif' }); // the textarea must have a transparent background so we can see the highlight div behind it this.textarea.css({ 'background-color': 'transparent', 'padding': '5px', 'margin': '0px', 'font-size': '11px', 'width': this.options.width, 'height': this.options.height, 'font-family': '"lucida grande",tahoma,verdana,arial,sans-serif' }); // apply the hooks this.highlighterContainer.bind('click', function() { scope.textarea.focus(); }); this.textarea.bind('keyup', function() { // when we type in the textarea, // we want the text to be processed and re-injected into the div behind it. scope.applyText($(this).val()); }); } catch (err) { this.error(err); } return true; }; pluginClass.prototype.applyText = function (text) { try { var scope = this; // parse the text: // replace all the line braks by <br/>, and all the double spaces by the html version &nbsp; text = this.replaceAll(text,'\n','<br/>'); text = this.replaceAll(text,' ','&nbsp;&nbsp;'); // replace the words by a highlighted version of the words for (var i=0;i<this.options.words.length;i++) { text = this.replaceAll(text,this.options.words[i],'<span style="background-color: #D8DFEA;">'+this.options.words[i]+'</span>'); } // re-inject the processed text into the div this.highlighter.html(text); } catch (err) { this.error(err); } return true; }; // "replace all" function pluginClass.prototype.replaceAll = function(txt, replace, with_this) { return txt.replace(new RegExp(replace, 'g'),with_this); } // don't worry about this part, it's just the required code for the plugin to hadle the methods and stuffs. Not relevant here. 
//********************** // process var fn; var options; if (arguments.length == 0) { fn = "init"; options = {}; } else if (arguments.length == 1 && typeof(arguments[0]) == 'object') { fn = "init"; options = $.extend({},arguments[0]); } else { fn = arguments[0]; options = $.extend({},arguments[1]); } $.each(this, function(idx, item) { // if the component is not yet existing, create it. if ($(item).data('highlightPlugin') == null) { $(item).data('highlightPlugin', new pluginClass()); $(item).data('highlightPlugin').__init($(item)); } $(item).data('highlightPlugin').execute(fn, options); }); return this; } }); })(jQuery); </script> </head> <body> <div id="container"> <div id="highlighterContainer"> <div id="highlighter"> </div> </div> <div id="inputContainer"> <textarea cols="30" rows="10"> </textarea> </div> </div> </body> </html> This was written for another post (http://facebook.stackoverflow.com/questions/7497824/how-to-highlight-friends-name-in-facebook-status-update-box-textarea/7597420#7597420), but it seems to be what you're searching for. A: Improved version from above, also works with Regex and more TextArea fields: <html> <head> <title></title> <!-- Load jQuery --> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script> <!-- The javascript xontaining the plugin and the code to init the plugin --> <script type="text/javascript"> $(function() { // let's init the plugin, that we called "highlight". // We will highlight the words "hello" and "world", // and set the input area to a widht and height of 500 and 250 respectively. 
$("#container0").highlight({ words: [["hello","hello"],["world","world"],["(\\[b])(.+?)(\\[/b])","$1$2$3"]], width: 500, height: 125, count:0 }); $("#container1").highlight({ words: [["hello","hello"],["world","world"],["(\\[b])(.+?)(\\[/b])","$1$2$3"]], width: 500, height: 125, count: 1 }); }); // the plugin that would do the trick (function($){ $.fn.extend({ highlight: function() { // the main class var pluginClass = function() {}; // init the class // Bootloader pluginClass.prototype.__init = function (element) { try { this.element = element; } catch (err) { this.error(err); } }; // centralized error handler pluginClass.prototype.error = function (e) { // manage error and exceptions here //console.info("error!",e); }; // Centralized routing function pluginClass.prototype.execute = function (fn, options) { try { options = $.extend({},options); if (typeof(this[fn]) == "function") { var output = this[fn].apply(this, [options]); } else { this.error("undefined_function"); } } catch (err) { this.error(err); } }; // ********************** // Plugin Class starts here // ********************** // init the component pluginClass.prototype.init = function (options) { try { // the element's reference ( $("#container") ) is stored into "this.element" var scope = this; this.options = options; // just find the different elements we'll need this.highlighterContainer = this.element.find('#highlighterContainer'+this.options.count); this.inputContainer = this.element.find('#inputContainer'+this.options.count); this.textarea = this.inputContainer.find('textarea'); this.highlighter = this.highlighterContainer.find('#highlighter'+this.options.count); // apply the css this.element.css({'position':'relative', 'overflow':'auto', 'background':'none repeat scroll 0 0 #FFFFFF', 'height':this.options.height+2, 'width':this.options.width+19, 'border':'1px solid' }); // place both the highlight container and the textarea container // on the same coordonate to superpose them. 
this.highlighterContainer.css({ 'position': 'absolute', 'left': '0', 'top': '0', 'border': '1px dashed #ff0000', 'width': this.options.width, 'height': this.options.height, 'cursor': 'text', 'z-index': '1' }); this.inputContainer.css({ 'position': 'absolute', 'left': '0', 'top': '0', 'border': '0px solid #000000', 'z-index': '2', 'background': 'none repeat scroll 0 0 transparent' }); // now let's make sure the highlit div and the textarea will superpose, // by applying the same font size and stuffs. // the highlighter must have a white text so it will be invisible var isWebKit = navigator.userAgent.indexOf("WebKit") > -1, isOpera = navigator.userAgent.indexOf("Opera") > -1, isIE /*@cc_on = true @*/, isIE6 = isIE && !window.XMLHttpRequest; // Despite the variable name, this means if IE lower than v7 if (isIE || isOpera){ var padding = '6px 5px'; } else { var padding = '5px 6px'; } this.highlighter.css({ 'padding': padding, 'color': '#eeeeee', 'background-color': '#ffffff', 'margin': '0px', 'font-size': '11px' , 'line-height': '12px' , 'font-family': '"lucida grande",tahoma,verdana,arial,sans-serif' }); // the textarea must have a transparent background so we can see the highlight div behind it this.textarea.css({ 'background-color': 'transparent', 'padding': '5px', 'margin': '0px', 'width': this.options.width, 'height': this.options.height, 'font-size': '11px', 'line-height': '12px' , 'font-family': '"lucida grande",tahoma,verdana,arial,sans-serif', 'overflow': 'hidden', 'border': '0px solid #000000' }); // apply the hooks this.highlighterContainer.bind('click', function() { scope.textarea.focus(); }); this.textarea.bind('keyup', function() { // when we type in the textarea, // we want the text to be processed and re-injected into the div behind it. 
scope.applyText($(this).val()); }); scope.applyText(this.textarea.val()); } catch (err) { this.error(err) } return true; }; pluginClass.prototype.applyText = function (text) { try { var scope = this; // parse the text: // replace all the line breaks by <br/>, and all the double spaces by the html version &nbsp; text = this.replaceAll(text,'\n','<br/>'); text = this.replaceAll(text,' ','&nbsp;&nbsp;'); text = this.replaceAll(text,' ','&nbsp;'); // replace the words by a highlighted version of the words for (var i=0;i<this.options.words.length;i++) { text = this.replaceAll(text,this.options.words[i][0],'<span style="background-color: #D8DFEA;">'+this.options.words[i][1]+'</span>'); //text = this.replaceAll(text,'(\\[b])(.+?)(\\[/b])','<span style="font-weight:bold;background-color: #D8DFEA;">$1$2$3</span>'); } // re-inject the processed text into the div this.highlighter.html(text); if (this.highlighter[0].clientHeight > this.options.height) { // document.getElementById("highlighter0") this.textarea[0].style.height=this.highlighter[0].clientHeight +19+"px"; } else { this.textarea[0].style.height=this.options.height+"px"; } } catch (err) { this.error(err); } return true; }; // "replace all" function pluginClass.prototype.replaceAll = function(txt, replace, with_this) { return txt.replace(new RegExp(replace, 'g'),with_this); } // don't worry about this part, it's just the required code for the plugin to handle the methods and stuff. Not relevant here. //********************** // process var fn; var options; if (arguments.length == 0) { fn = "init"; options = {}; } else if (arguments.length == 1 && typeof(arguments[0]) == 'object') { fn = "init"; options = $.extend({},arguments[0]); } else { fn = arguments[0]; options = $.extend({},arguments[1]); } $.each(this, function(idx, item) { // if the component is not yet existing, create it.
if ($(item).data('highlightPlugin') == null) { $(item).data('highlightPlugin', new pluginClass()); $(item).data('highlightPlugin').__init($(item)); } $(item).data('highlightPlugin').execute(fn, options); }); return this; } }); })(jQuery); </script> </head> <body> <div id="container0"> <div id="highlighterContainer0"> <div id="highlighter0"></div> </div> <div id="inputContainer0"> <textarea id="text1" cols="30" rows="15">hello world</textarea> </div> </div> <h1> haus </h1> <div id="container1"> <div id="highlighterContainer1"> <div id="highlighter1"></div> </div> <div id="inputContainer1"> <textarea cols="30" rows="15">hipp hipp hurra, [b]ich hab es jetzt![/b]</textarea> </div> </div> </body>
{ "language": "en", "url": "https://stackoverflow.com/questions/142527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: How to make a cross-module variable? The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it? The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it. A: Global variables are usually a bad idea, but you can do this by assigning to __builtins__: __builtins__.foo = 'something' print foo Also, modules themselves are variables that you can access from any module. So if you define a module called my_globals.py: # my_globals.py foo = 'something' Then you can use that from anywhere as well: import my_globals print my_globals.foo Using modules rather than modifying __builtins__ is generally a cleaner way to do globals of this sort. A: You can already do this with module-level variables. Modules are the same no matter what module they're being imported from. So you can make the variable a module-level variable in whatever module it makes sense to put it in, and access it or assign to it from other modules. It would be better to call a function to set the variable's value, or to make it a property of some singleton object. That way if you end up needing to run some code when the variable's changed, you can do so without breaking your module's external interface. It's not usually a great way to do things — using globals seldom is — but I think this is the cleanest way to do it. A: I wanted to post an answer that there is a case where the variable won't be found. Cyclical imports may break the module behavior. For example: first.py import second var = 1 second.py import first print(first.var) # will throw an error because the order of execution happens before var gets declared. 
main.py import first In this example it should be obvious, but in a large code-base, this can be really confusing. A: I believe that there are plenty of circumstances in which it does make sense and it simplifies programming to have some globals that are known across several (tightly coupled) modules. In this spirit, I would like to elaborate a bit on the idea of having a module of globals which is imported by those modules which need to reference them. When there is only one such module, I name it "g". In it, I assign default values for every variable I intend to treat as global. In each module that uses any of them, I do not use "from g import var", as this only results in a local variable which is initialized from g only at the time of the import. I make most references in the form g.var, and the "g." serves as a constant reminder that I am dealing with a variable that is potentially accessible to other modules. If the value of such a global variable is to be used frequently in some function in a module, then that function can make a local copy: var = g.var. However, it is important to realize that assignments to var are local, and global g.var cannot be updated without referencing g.var explicitly in an assignment. Note that you can also have multiple such globals modules shared by different subsets of your modules to keep things a little more tightly controlled. The reason I use short names for my globals modules is to avoid cluttering up the code too much with occurrences of them. With only a little experience, they become mnemonic enough with only 1 or 2 characters. It is still possible to make an assignment to, say, g.x when x was not already defined in g, and a different module can then access g.x. However, even though the interpreter permits it, this approach is not so transparent, and I do avoid it. There is still the possibility of accidentally creating a new variable in g as a result of a typo in the variable name for an assignment.
Sometimes an examination of dir(g) is useful to discover any surprise names that may have arisen by such accident. A: Define a module ( call it "globalbaz" ) and have the variables defined inside it. All the modules using this "pseudoglobal" should import the "globalbaz" module, and refer to it using "globalbaz.var_name" This works regardless of the place of the change, you can change the variable before or after the import. The imported module will use the latest value. (I tested this in a toy example) For clarification, globalbaz.py looks just like this: var_name = "my_useful_string" A: I wondered if it would be possible to avoid some of the disadvantages of using global variables (see e.g. http://wiki.c2.com/?GlobalVariablesAreBad) by using a class namespace rather than a global/module namespace to pass values of variables. The following code indicates that the two methods are essentially identical. There is a slight advantage in using class namespaces as explained below. The following code fragments also show that attributes or variables may be dynamically created and deleted in both global/module namespaces and class namespaces. wall.py # Note no definition of global variables class router: """ Empty class """ I call this module 'wall' since it is used to bounce variables off of. It will act as a space to temporarily define global variables and class-wide attributes of the empty class 'router'. source.py import wall def sourcefn(): msg = 'Hello world!' wall.msg = msg wall.router.msg = msg This module imports wall and defines a single function sourcefn which defines a message and emits it by two different mechanisms, one via globals and one via the router class. Note that the variables wall.msg and wall.router.msg are defined here for the first time in their respective namespaces.
dest.py import wall def destfn(): if hasattr(wall, 'msg'): print 'global: ' + wall.msg del wall.msg else: print 'global: ' + 'no message' if hasattr(wall.router, 'msg'): print 'router: ' + wall.router.msg del wall.router.msg else: print 'router: ' + 'no message' This module defines a function destfn which uses the two different mechanisms to receive the messages emitted by source. It allows for the possibility that the variable 'msg' may not exist. destfn also deletes the variables once they have been displayed. main.py import source, dest source.sourcefn() dest.destfn() # variables deleted after this call dest.destfn() This module calls the previously defined functions in sequence. After the first call to dest.destfn the variables wall.msg and wall.router.msg no longer exist. The output from the program is: global: Hello world! router: Hello world! global: no message router: no message The above code fragments show that the module/global and the class/class variable mechanisms are essentially identical. If a lot of variables are to be shared, namespace pollution can be managed either by using several wall-type modules, e.g. wall1, wall2 etc. or by defining several router-type classes in a single file. The latter is slightly tidier, so perhaps represents a marginal advantage for use of the class-variable mechanism. A: If you need a global cross-module variable maybe just simple global module-level variable will suffice. a.py: var = 1 b.py: import a print a.var import c print a.var c.py: import a a.var = 2 Test: $ python b.py # -> 1 2 Real-world example: Django's global_settings.py (though in Django apps settings are used by importing the object django.conf.settings). A: I don't endorse this solution in any way, shape or form. But if you add a variable to the __builtin__ module, it will be accessible as if a global from any other module that includes __builtin__ -- which is all of them, by default. 
a.py contains print foo b.py contains import __builtin__ __builtin__.foo = 1 import a The result is that "1" is printed. Edit: The __builtin__ module is available as the local symbol __builtins__ -- that's the reason for the discrepancy between two of these answers. Also note that __builtin__ has been renamed to builtins in python3. A: You can pass the globals of one module to onother: In Module A: import module_b my_var=2 module_b.do_something_with_my_globals(globals()) print my_var In Module B: def do_something_with_my_globals(glob): # glob is simply a dict. glob["my_var"]=3 A: This sounds like modifying the __builtin__ name space. To do it: import __builtin__ __builtin__.foo = 'some-value' Do not use the __builtins__ directly (notice the extra "s") - apparently this can be a dictionary or a module. Thanks to ΤΖΩΤΖΙΟΥ for pointing this out, more can be found here. Now foo is available for use everywhere. I don't recommend doing this generally, but the use of this is up to the programmer. Assigning to it must be done as above, just setting foo = 'some-other-value' will only set it in the current namespace. A: I use this for a couple built-in primitive functions that I felt were really missing. One example is a find function that has the same usage semantics as filter, map, reduce. def builtin_find(f, x, d=None): for i in x: if f(i): return i return d import __builtin__ __builtin__.find = builtin_find Once this is run (for instance, by importing near your entry point) all your modules can use find() as though, obviously, it was built in. find(lambda i: i < 0, [1, 3, 0, -5, -10]) # Yields -5, the first negative. Note: You can do this, of course, with filter and another line to test for zero length, or with reduce in one sort of weird line, but I always felt it was weird. 
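Since the rename to builtins noted above, the same trick is spelled slightly differently on Python 3. A minimal sketch (answer is a made-up name used only for illustration):

```python
import builtins

# Attach a name to the builtins module; it is now resolvable as a bare
# name in every module of the process, just like len or print.
builtins.answer = 42

# Bare-name lookup falls through globals to builtins and finds it.
print(answer)  # 42
```

As with the Python 2 version, this is global to the whole interpreter, so it should be used sparingly.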
A: I could achieve cross-module modifiable (or mutable) variables by using a dictionary: # in myapp.__init__ Timeouts = {} # cross-module global mutable variables for testing purposes Timeouts['WAIT_APP_UP_IN_SECONDS'] = 60 # in myapp.mod1 from myapp import Timeouts def wait_app_up(project_name, port): # wait for app until Timeouts['WAIT_APP_UP_IN_SECONDS'] # ... # in myapp.test.test_mod1 from myapp import Timeouts def test_wait_app_up_fail(self): timeout_bak = Timeouts['WAIT_APP_UP_IN_SECONDS'] Timeouts['WAIT_APP_UP_IN_SECONDS'] = 3 with self.assertRaises(hlp.TimeoutException) as cm: wait_app_up(PROJECT_NAME, PROJECT_PORT) self.assertEqual("Timeout while waiting for App to start", str(cm.exception)) Timeouts['WAIT_APP_UP_IN_SECONDS'] = timeout_bak When launching test_wait_app_up_fail, the actual timeout duration is 3 seconds.
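The from-import pitfall called out in the "g" module answer above can be demonstrated in a few lines. This sketch builds the globals module at runtime instead of as a separate g.py file, so it is self-contained; the names g and var are taken from that answer:

```python
import sys
import types

# Stand-in for a real g.py file: a module object registered under the name "g".
g = types.ModuleType("g")
g.var = 1
sys.modules["g"] = g

from g import var  # binds a local name to the *current* value of g.var
import g           # binds a live reference to the module object itself

g.var = 2
print(var)    # 1 - the from-import copy never sees the update
print(g.var)  # 2 - attribute access reads the module's current value
```

This is why the answer recommends always writing g.var rather than importing the name directly.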
{ "language": "en", "url": "https://stackoverflow.com/questions/142545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "129" }
Q: Deployment of war file on Tomcat Is there a way to deploy a given war file on Tomcat server? I want to do this without using the web interface. A: There are several ways to deploy a Tomcat webapp: * *Dropping into $CATALINA_HOME/webapps, as was already mentioned. *Using your build scripts to deploy automatically via the manager interface (that comes with Tomcat). Here are the two ways * *for Maven: use the tomcat plugin. You don't need to include it in pom.xml, just issue the goal mvn tomcat:deploy, the plugin is included in Maven 2. This assumes several defaults explained in the documentation, you can configure the behaviour in the pom.xml. There are other goals that let you deploy as an exploded archive etc. *for Ant: something like this: <property name="manager.url" value="http://localhost:8080/manager"/> <property name="manager.username" value="manager"/> <property name="manager.password" value="foobar"/> <!-- Task definitions --> <taskdef name="deploy" classname="org.apache.catalina.ant.DeployTask"/> <taskdef name="list" classname="org.apache.catalina.ant.ListTask"/> <taskdef name="reload" classname="org.apache.catalina.ant.ReloadTask"/> <taskdef name="undeploy" classname="org.apache.catalina.ant.UndeployTask"/> <!-- goals --> <target name="install" depends="compile" description="Install application to servlet container"> <deploy url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}" localWar="file://${build.home}"/> </target> <target name="list" description="List installed applications on servlet container"> <list url="${manager.url}" username="${manager.username}" password="${manager.password}"/> </target> <target name="reload" depends="compile" description="Reload application on servlet container"> <reload url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> <target name="remove" description="Remove application on servlet container"> <undeploy 
url="${manager.url}" username="${manager.username}" password="${manager.password}" path="${app.path}"/> </target> All of those will require you to have a Tomcat user configuration. It lives $CATALINA_BASE/conf/tomcat-users.xml, but since you know already how to use the web interface, I assume you know how to configure the users and passwords. A: you can edit the conf/server.xml and add an entry like this pointing to your war directory <Context path="/strutsDisplayTag" reloadable="true" docBase="C:\work\learn\jsp\strutsDisplayTag" workDir="C:\work\learn\jsp\strutsDisplayTag\work" /> ELSE you can copy your .WAR file to the webapps directory of tomcat. A: We never use the web interface, don't like it. The wars are dropped in the webapps and server.xml edited as necessary. You need to bounce it if you edit the server.xml, but the war file should be picked up automagically. We generally delete the directory expanded from the war first so there is no confusion from where the components came. A: Just copy the war file into the $TOMCAT_HOME/webapps/ directory. Tomcat will deploy the war file by automatically exploding it. FYI - If you want you can make updates directly to the exploded directory, which is useful for development. A: The Tomcat Client Deployer Package looks to be what you need to deploy to a remote server from the command line. From the page: This is a package which can be used to validate, compile, compress to .WAR, and deploy web applications to production or development Tomcat servers. It should be noted that this feature uses the Tomcat Manager and as such the target Tomcat server should be running. A: You can also try this command-line script for managing tomcat called tomcat-manager. It requires Python, and talks to the manager application included with tomcat via HTTP. 
You can do stuff from a *nix shell like: $ tomcat-manager --user=admin --password=newenglandclamchowder \ > http://localhost:8080/manager/ stop /myapp and: $ tomcat-manager --user=admin --password=newenglandclamchowder \ > http://localhost:8080/manager deploy /myapp ~/src/myapp/myapp.war
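The manager application that these tools talk to can also be driven with plain curl if you prefer no extra dependencies. This is only a sketch against a hypothetical server - the host, the admin:secret credentials, and the /manager/text path are assumptions (older Tomcat versions expose the same commands under /manager instead of /manager/text, and Tomcat 7+ requires the user to have the manager-script role in tomcat-users.xml):

```shell
# list the currently deployed applications
curl -u admin:secret "http://localhost:8080/manager/text/list"

# deploy a war by uploading it with HTTP PUT
curl -u admin:secret -T myapp.war \
  "http://localhost:8080/manager/text/deploy?path=/myapp&update=true"

# undeploy it again
curl -u admin:secret "http://localhost:8080/manager/text/undeploy?path=/myapp"
```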
{ "language": "en", "url": "https://stackoverflow.com/questions/142548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Do you have to restart apache to make re-write rules in the .htaccess take effect? I have pushed my .htaccess files to the production servers, but they don't work. Would a restart be the next step, or should I check something else? A: No: Apache allows for decentralized management of configuration via special files placed inside the web tree. The special files are usually called .htaccess, but any name can be specified in the AccessFileName directive... Since .htaccess files are read on every request, changes made in these files take immediate effect... A: Only if you have not added the mod_rewrite module to Apache. You only need to restart Apache if you change any Apache ".conf" files. A: I have the same issue and it seems PiedPiper's post about AllowOverride was most helpful. Check your httpd.conf file for "AllowOverride" and make sure it is set to All. A: In the case of .htaccess a restart is not required. If it is not working, probable reasons include: * *AllowOverride may not be set; the user can set it inside httpd.conf or might have to contact the server admin. *Check the file name of .htaccess: it should be .htaccess, not htaccess.txt (see here for a guide on how to create one). *Try a simple directive such as Options -Indexes or a deny-all rule to see whether it is working or not. *Clear the browser cache every time if you have redirect rules or similar; if a previous redirect is cached, it appears as if things are not working. A: From the apache documentation: Most commonly, the problem is that AllowOverride is not set such that your configuration directives are being honored. Make sure that you don't have an AllowOverride None in effect for the file scope in question. A good test for this is to put garbage in your .htaccess file and reload. If a server error is not generated, then you almost certainly have AllowOverride None in effect. A: A restart is not required for changes to .htaccess. Something else is wrong.
Make sure your .htaccess includes the statement RewriteEngine on which is required even if it's also present in httpd.conf. Also check that .htaccess is readable by the httpd process. Check the error_log - it will tell you of any errors in .htaccess if it's being used. Putting an intentional syntax error in .htaccess is a good check to make sure the file is being used -- you should get a 500 error on any page in the same directory. Lastly, you can enable a rewrite log using commands like the following in your httpd.conf: RewriteLog "logs/rewritelog" RewriteLogLevel 7 The log file thus generated will give you the gory detail of which rewrite rules matched and how they were handled. A: What's in your .htaccess? RewriteRules? Check that mod_rewrite is installed and enabled. Other stuff? Try setting AllowOverride to 'all' on that directory.
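Putting the suggestions above together, a minimal test .htaccess might look like this (the rule itself is only a placeholder; it assumes mod_rewrite is loaded and AllowOverride permits FileInfo):

```apache
# Must appear in every .htaccess that uses rewrite rules,
# even if RewriteEngine is also enabled in httpd.conf
RewriteEngine on

# Placeholder rule: requests for /old-page are rewritten to /new-page
RewriteRule ^old-page$ /new-page [L]
```

If even a deliberate syntax error in this file does not produce a 500 error, the file is not being read at all (usually an AllowOverride problem).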
{ "language": "en", "url": "https://stackoverflow.com/questions/142559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: Looking for doc on why IE "yellow bar" shows when opening a HTML file that contains JavaScript I have a site, from which you can download an HTML file. This HTML file contains a form with hidden fields, which is right away posted back to the site using JavaScript. This is a way of allowing users to download to their own machine data that they edit on the site. On some machines, you get an IE "yellow bar" when trying to open the file you saved. The "yellow bar" in IE is warning that the HTML is trying to run an ActiveX control (which it is not, there is only JavaScript doing a submit() on a form). However if you receive the exact same HTML file by email, save it, and open it, you don't have this problem. (It looks like IE is putting some more constraint on what can be done in an HTML file you saved from a web site.) My question is: where can I find documentation on this IE security mechanism, and possibly how can I get around it? Alex A: The yellow bar is because your page is executing in the Local Machine security zone in IE. On different machines, the Local Machine security zone might be configured in different ways, so you can see the yellow bar on some machines and not see it on other machines. To learn more about the IE's URL Security Zones, you can start reading here: http://msdn.microsoft.com/en-us/library/ms537183.aspx A: Look here for details on the MOTW - Mark Of The Web. If you add this to your locally served pages, IE will not show the yellow bar. http://msdn.microsoft.com/en-us/library/ms537628(VS.85).aspx A: I am not sure about any specific document, but if you open the properties for the file in Windows Explorer, on the General tab, is the file blocked? If so, click Unblock, try again, and see if you get the same issue. This is typical security for files downloaded from the internet. Other than that I am afraid I don't know what else to suggest.
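For reference, the Mark of the Web mentioned above is just an HTML comment placed near the top of the saved page; the number in parentheses is the character count of the URL string that follows (a sketch - about:internet is the generic marker for the Internet zone):

```html
<!-- saved from url=(0014)about:internet -->
```

With this comment present, IE applies Internet-zone rules rather than Local Machine-zone lockdown to the locally opened file, which is what suppresses the yellow bar.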
A: I don't 100% follow what your JavaScript is submitting to, but if you're submitting back to the original site from the downloaded copy you'll have a problem using JavaScript, as all browsers treat cross-domain JavaScript as a security violation. JavaScript isn't allowed to read or write to any site not on the current domain. A: As Franci had said, it is because you are in the local machine security context, and this allows scripts to create objects and execute code that could do harm to your PC. For example, you can create a File System Object and perform tasks that an untrusted page generally shouldn't perform, because it could be malicious in nature. A: Have you tried changing the file name from yourname.html to yourname.hta to see if the security problem goes away? More on HTML Applications (.HTA files): http://msdn.microsoft.com/en-us/library/ms536496%28VS.85%29.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/142573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to revive C++ skills I was a C++ developer (mostly ATL/COM stuff) until, as many of us, I switched to C# in 2001. I haven't done much C++ programming since then. Do you have any tips on how to revive my C++ skills? What has changed in C++ in the last years? Are there good books, articles or blogs covering the language? The problem is that most material I could find either targets people who are new to the language or those with a lot of experience. Which C++ libraries are popular these days? I guess I will need to read up on the STL because I didn't use it much. What else? Boost? ATL? WTL? A: I personally find that syntax is where I mostly need to catch up when I wander back to a language I haven't used in a long time, but the concepts and what the language is about stay the same in memory. Assuming it's the same with you, I would say it's a good idea to revisit the texts you remember having been useful to you while learning C++. I would recommend Thinking in C++ for getting up to speed on the syntax fast. The STL would be really useful, yes. That's one thing I have found commonly appreciated by all mature C++ programmers. It would also be useful to know the libraries that Boost provides. The changes to the C++ world depend on the changes your favourite compiler has decided to implement. Since you mentioned ATL/COM I assume it would be VC++. The changes to MFC would be support for Windows Forms (2005 VC++) and Vista-compliant UIs and ribbon support(?) (2008 VC++). VC++ now supports managed C++ - I'm sure you know what that is, coming from the C# world - and 2008 adds support for a managed STL too. VC++ is trying to be more standards compliant and is making some progress in that area. They have introduced lots of secure functions that deprecate the old standard ones like strcpy, and the compilers will also give warnings if you use the old functions.
VC++ 2005 also has something called function attributes, which it uses to describe the parameters so that it can do more checking on the inputs you pass in and statically flag a warning if it sees something amiss. Useful, I would say, though our shop has not progressed to using the 2005 compiler. MSDN has the list of breaking changes for each release. Oh, and support for Windows 95, Windows 98, Windows Millennium Edition, and Windows NT 4.0 has been removed from the 2005 version of VC++. Additionally, the core libraries you required till now (CRT, ATL, MFC etc.) now support a new deployment model which makes them shared side-by-side assemblies and requires a manifest. This link should get you going - http://msdn.microsoft.com/en-us/library/y8bt6w34.aspx 2008 adds even more, like TR1 recommendations, a more optimizing compiler, parallel compilation (/MP), support for new processor architectures etc. OpenMP support has also been enhanced in one of these versions, if I remember correctly. Again, refer to MSDN - that's the authentic source for all the answers. Good luck. A: Definitely read the latest edition of "Effective C++" by Scott Meyers. I would also recommend "C++ Gotchas: Avoiding Common Problems in Coding and Design" by Stephen C. Dewhurst. A: To sharpen your C++ skills I'd suggest going over some of your old C++ code if you still have access to it. Revisiting it will hopefully trigger those parts of your brain that have lain dormant after switching to C# :) For libraries STL is good, then Boost. I don't think there is too much new stuff going on with ATL/WTL from what you would have known back in 2001. A: Just start a project. The libraries you use will depend on your project, but you should certainly read up on the STL. If you haven't used C++ for a long time you might need to learn more about templates. A: Pick up one of the C++ unit test frameworks out there (I suggest Google C++ Testing Framework, aka gtest). Pick a small project that you can start from scratch and try some TDD.
The TDD will encourage you to make small steps and to reflect on your code. Also, as you build your suite of unit tests, it gives you a base from which you can experiment with different techniques. A: For a start, I'd say try writing code that will work on both a Mac and Windows or Linux and Windows. This will force you to write code that is much more portable than the type of C++ code you can get away with on Visual C++ - there are a lot of finer points that are very different when you go cross platform. I'd suggest staying away from libraries for now if you can - perfect your ANSI C++ game first. I'd also suggest reading up on C++0x - the next standard is due soon and it would help you more to work towards that. To that end, brush up on the STL (the concepts behind it, not the implementation so much) and templates. If you'd like to try BOOST, go ahead, but you can generally get by without using it. The reason I stayed away from it mostly is because of the way templates are used to do what is needed - a lot of which will become much easier once the new standard is introduced. UPDATE: Once you're comfortable with the STL and start needing to do things that require a lot of code with the STL or are just plain tricky, then head over to BOOST. Buy a book on BOOST and read it and understand it well. A: Rewrite some of your C# stuff using C++. A: Take some old piece of code and add to it. This won't get you back on top of the latest C++ trends but it will get your feet wet. At my job I had to add some features to a C++ ActiveX control and I hadn't touched C++ in years and years and had never done it professionally. Figuring out how to do it again was actually pretty damn cool. A: I was in a similar situation: switched from C++ to C# in 2005 and then switched back to C++ in 2007. I can't say the C++ universe really changed in those 2 years. The most crucial thing was to regain my memory-management instincts, but that can only be done by practicing.
Now that you have both C++ and .NET under your belt you might want to study C++/CLI a bit (the new incarnation of the late "Managed C++"). As for books, read everything with "Meyers" and "Sutter" on the cover. A: Boost - though it, and other libraries, were around back then, it's only relatively recently that it has taken off in a big way. Google for the TR1 and C++0x standards too. You should definitely read up on the STL because (IMHO) it's the thing that makes C++ special. ATL is as good as a dead technology (don't get me wrong, I liked it and still use it somewhat, but it's not fashionable in the MS world anymore). Something like Qt is probably more new and cool for C++ developers, and has the advantage of getting you into all the new Linux and web development that'll be increasingly popular over the next few years. However, once you start looking at the things you can do, I think it'll all come back quite quickly.
{ "language": "en", "url": "https://stackoverflow.com/questions/142602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: TraceRoute and Ping in C# Does anyone have C# code handy for doing a ping and traceroute to a target computer? I am looking for a pure code solution, not what I'm doing now, which is invoking the ping.exe and tracert.exe program and parsing the output. I would like something more robust. A: For the ping part, take a look at the Ping class on MSDN. A: Given that I had to write a TraceRoute class today I figured I might as well share the source code. using System.Collections.Generic; using System.Net.NetworkInformation; using System.Text; using System.Net; namespace Answer { public class TraceRoute { private const string Data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; public static IEnumerable<IPAddress> GetTraceRoute(string hostNameOrAddress) { return GetTraceRoute(hostNameOrAddress, 1); } private static IEnumerable<IPAddress> GetTraceRoute(string hostNameOrAddress, int ttl) { Ping pinger = new Ping(); PingOptions pingerOptions = new PingOptions(ttl, true); int timeout = 10000; byte[] buffer = Encoding.ASCII.GetBytes(Data); PingReply reply = default(PingReply); reply = pinger.Send(hostNameOrAddress, timeout, buffer, pingerOptions); List<IPAddress> result = new List<IPAddress>(); if (reply.Status == IPStatus.Success) { result.Add(reply.Address); } else if (reply.Status == IPStatus.TtlExpired || reply.Status == IPStatus.TimedOut) { //add the currently returned address if an address was found with this TTL if (reply.Status == IPStatus.TtlExpired) result.Add(reply.Address); //recurse to get the next address... 
IEnumerable<IPAddress> tempResult = default(IEnumerable<IPAddress>); tempResult = GetTraceRoute(hostNameOrAddress, ttl + 1); result.AddRange(tempResult); } else { //failure } return result; } } } And a VB version for anyone that wants/needs it Public Class TraceRoute Private Const Data As String = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" Public Shared Function GetTraceRoute(ByVal hostNameOrAddress As String) As IEnumerable(Of IPAddress) Return GetTraceRoute(hostNameOrAddress, 1) End Function Private Shared Function GetTraceRoute(ByVal hostNameOrAddress As String, ByVal ttl As Integer) As IEnumerable(Of IPAddress) Dim pinger As Ping = New Ping Dim pingerOptions As PingOptions = New PingOptions(ttl, True) Dim timeout As Integer = 10000 Dim buffer() As Byte = Encoding.ASCII.GetBytes(Data) Dim reply As PingReply reply = pinger.Send(hostNameOrAddress, timeout, buffer, pingerOptions) Dim result As List(Of IPAddress) = New List(Of IPAddress) If reply.Status = IPStatus.Success Then result.Add(reply.Address) ElseIf reply.Status = IPStatus.TtlExpired Then 'add the currently returned address result.Add(reply.Address) 'recurse to get the next address... Dim tempResult As IEnumerable(Of IPAddress) tempResult = GetTraceRoute(hostNameOrAddress, ttl + 1) result.AddRange(tempResult) Else 'failure End If Return result End Function End Class A: This implementation is simple, lazy (properly enumerable) and it will not go on searching forever (maxTTL) like some of the other answers. public static IEnumerable<IPAddress> GetTraceRoute(string hostname) { // following are similar to the defaults in the "traceroute" unix command. 
const int timeout = 10000; const int maxTTL = 30; const int bufferSize = 32; byte[] buffer = new byte[bufferSize]; new Random().NextBytes(buffer); using (var pinger = new Ping()) { for (int ttl = 1; ttl <= maxTTL; ttl++) { PingOptions options = new PingOptions(ttl, true); PingReply reply = pinger.Send(hostname, timeout, buffer, options); // we've found a route at this ttl if (reply.Status == IPStatus.Success || reply.Status == IPStatus.TtlExpired) yield return reply.Address; // if we reach a status other than expired or timed out, we're done searching or there has been an error if (reply.Status != IPStatus.TtlExpired && reply.Status != IPStatus.TimedOut) break; } } } A: This is the most efficient way I could think of. Please vote it up if you like it so others can benefit. using System; using System.Collections.Generic; using System.Net.NetworkInformation; namespace NetRouteAnalysis { class Program { static void Main(string[] args) { var route = TraceRoute.GetTraceRoute("8.8.8.8"); foreach (var step in route) { Console.WriteLine($"{step.Address,-20} {step.Status,-20} \t{step.RoundtripTime} ms"); } } } public static class TraceRoute { public static IEnumerable<PingReply> GetTraceRoute(string hostnameOrIp) { // Initial variables var limit = 1000; var buffer = new byte[32]; var pingOpts = new PingOptions(1, true); var ping = new Ping(); // Result holder. PingReply result = null; do { result = ping.Send(hostnameOrIp, 4000, buffer, pingOpts); pingOpts = new PingOptions(pingOpts.Ttl + 1, pingOpts.DontFragment); if (result.Status != IPStatus.TimedOut) { yield return result; } } while (result.Status != IPStatus.Success && pingOpts.Ttl < limit); } } } A: Although the Base Class Library includes Ping, the BCL does not include any tracert functionality.
However, a quick search reveals two open-source attempts, the first in C# the second in C++: * *http://www.codeproject.com/KB/IP/tracert.aspx *http://www.codeguru.com/Cpp/I-N/network/basicnetworkoperations/article.php/c5457/ A: Ping: We can use the Ping class built into the .NET Framework. Instantiate a Ping and subscribe to the PingCompleted event: Ping pingSender = new Ping(); pingSender.PingCompleted += PingCompletedCallback; Add code to configure and action the ping, e.g.: string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; byte[] buffer = Encoding.ASCII.GetBytes(data); string who = "www.google.com"; AutoResetEvent waiter = new AutoResetEvent(false); int timeout = 12000; PingOptions options = new PingOptions(64, true); pingSender.SendAsync(who, timeout, buffer, options, waiter); Add a PingCompletedEventHandler: public static void PingCompletedCallback(object sender, PingCompletedEventArgs e) { ... Do stuff here } Code-dump of a full working example, based on MSDN's example: public static void Main(string[] args) { string who = "www.google.com"; AutoResetEvent waiter = new AutoResetEvent(false); Ping pingSender = new Ping(); // When the PingCompleted event is raised, // the PingCompletedCallback method is called. pingSender.PingCompleted += PingCompletedCallback; // Create a buffer of 32 bytes of data to be transmitted. string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; byte[] buffer = Encoding.ASCII.GetBytes(data); // Wait 12 seconds for a reply. int timeout = 12000; // Set options for transmission: // The data can go through 64 gateways or routers // before it is destroyed, and the data packet // cannot be fragmented. PingOptions options = new PingOptions(64, true); Console.WriteLine("Time to live: {0}", options.Ttl); Console.WriteLine("Don't fragment: {0}", options.DontFragment); // Send the ping asynchronously. // Use the waiter as the user token. // When the callback completes, it can wake up this thread. 
pingSender.SendAsync(who, timeout, buffer, options, waiter); // Prevent this example application from ending. // A real application should do something useful // when possible. waiter.WaitOne(); Console.WriteLine("Ping example completed."); } public static void PingCompletedCallback(object sender, PingCompletedEventArgs e) { // If the operation was canceled, display a message to the user. if (e.Cancelled) { Console.WriteLine("Ping canceled."); // Let the main thread resume. // UserToken is the AutoResetEvent object that the main thread // is waiting for. ((AutoResetEvent)e.UserState).Set(); return; } // If an error occurred, display the exception to the user. if (e.Error != null) { Console.WriteLine("Ping failed:"); Console.WriteLine(e.Error.ToString()); // Let the main thread resume. ((AutoResetEvent)e.UserState).Set(); return; } Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}"); // Let the main thread resume. ((AutoResetEvent)e.UserState).Set(); } A: As an improvement to Scott's code answer above, I found that his solution doesn't work if the route tapers off into nothing before reaching the destination - it never returns. A better solution with at least a partial route could be this (which I've tested and it works well). You can change the '20' in the for loop to something bigger or smaller or try to detect if it's taking too long if you want to control the number of iterations some other way. Full credit to Scott for the original code - thanks. using System.Collections.Generic; using System.Net.NetworkInformation; using System.Text; using System.Net; ...
public static void TraceRoute(string hostNameOrAddress) { for (int i = 1; i < 20; i++) { IPAddress ip = GetTraceRoute(hostNameOrAddress, i); if(ip == null) { break; } Console.WriteLine(ip.ToString()); } } private static IPAddress GetTraceRoute(string hostNameOrAddress, int ttl) { const string Data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"; Ping pinger = new Ping(); PingOptions pingerOptions = new PingOptions(ttl, true); int timeout = 10000; byte[] buffer = Encoding.ASCII.GetBytes(Data); PingReply reply = default(PingReply); reply = pinger.Send(hostNameOrAddress, timeout, buffer, pingerOptions); List<IPAddress> result = new List<IPAddress>(); if (reply.Status == IPStatus.Success || reply.Status == IPStatus.TtlExpired) { return reply.Address; } else { return null; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/142614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: ModalPopupExtender OK Button click event not firing? I have a Button inside an UpdatePanel. The button is being used as the OK button for a ModalPopupExtender. For some reason, the button click event is not firing. Any ideas? Am I missing something? <asp:updatepanel id="UpdatePanel1" runat="server"> <ContentTemplate> <cc1:ModalPopupExtender ID="ModalDialog" runat="server" TargetControlID="OpenDialogLinkButton" PopupControlID="ModalDialogPanel" OkControlID="ModalOKButton" BackgroundCssClass="ModalBackground"> </cc1:ModalPopupExtender> <asp:Panel ID="ModalDialogPanel" CssClass="ModalPopup" runat="server"> ... <asp:Button ID="ModalOKButton" runat="server" Text="OK" onclick="ModalOKButton_Click" /> </asp:Panel> </ContentTemplate> </asp:updatepanel> A: It could also be that the button needs to have CausesValidation="false". That worked for me. A: I was just searching for a solution for this :) It appears that you can't have OkControlID assigned to a control if you want that control to fire an event; just removing this property got everything working again.
my code (working): <asp:Panel ID="pnlResetPanelsView" CssClass="modalPopup" runat="server" Style="display:none;"> <h2> Warning</h2> <p> Do you really want to reset the panels to the default view?</p> <div style="text-align: center;"> <asp:Button ID="btnResetPanelsViewOK" Width="60" runat="server" Text="Yes" CssClass="buttonSuperOfficeLayout" OnClick="btnResetPanelsViewOK_Click" />&nbsp; <asp:Button ID="btnResetPanelsViewCancel" Width="60" runat="server" Text="No" CssClass="buttonSuperOfficeLayout" /> </div> </asp:Panel> <ajax:ModalPopupExtender ID="mpeResetPanelsView" runat="server" TargetControlID="btnResetView" PopupControlID="pnlResetPanelsView" BackgroundCssClass="modalBackground" DropShadow="true" CancelControlID="btnResetPanelsViewCancel" /> A: Aspx <ajax:ModalPopupExtender runat="server" ID="modalPop" PopupControlID="pnlpopup" TargetControlID="btnGo" BackgroundCssClass="modalBackground" DropShadow="true" CancelControlID="btnCancel" X="470" Y="300" /> //Codebehind protected void OkButton_Clicked(object sender, EventArgs e) { modalPop.Hide(); //Do something in codebehind } And don't set the OK button as OkControlID. A: Put into the Button-Control the Attribute "UseSubmitBehavior=false". A: None of the previous answers worked for me. I called the postback of the button on the OnOkScript event. <div> <cc1:ModalPopupExtender PopupControlID="Panel1" ID="ModalPopupExtender1" runat="server" TargetControlID="LinkButton1" OkControlID="Ok" OnOkScript="__doPostBack('Ok','')"> </cc1:ModalPopupExtender> <asp:LinkButton ID="LinkButton1" runat="server">LinkButton</asp:LinkButton> </div> <asp:Panel ID="Panel1" runat="server"> <asp:Button ID="Ok" runat="server" Text="Ok" onclick="Ok_Click" /> </asp:Panel> A: I often use a blank label as the TargetControlID. ex. <asp:Label ID="lblghost" runat="server" Text="" /> I've seen two things that cause the click event not fire: 1. you have to remove the OKControlID (as others have mentioned) 2. 
If you are using field validators you should add CausesValidation="false" on the button. Both scenarios behaved the same way for me. A: It appears that a button that is used as the OK or CANCEL button for a ModalPopupExtender cannot have a click event. I tested this out by removing the OkControlID="ModalOKButton" from the ModalPopupExtender tag, and the button click fires. I'll need to figure out another way to send the data to the server. A: I've found a way to validate a modalpopup without a postback. In the ModalPopupExtender I set the OnOkScript to a function e.g ValidateBeforePostBack(), then in the function I call Page_ClientValidate for the validation group I want, do a check and if it fails, keep the modalpopup showing. If it passes, I call __doPostBack. function ValidateBeforePostBack(){ Page_ClientValidate('MyValidationGroupName'); if (Page_IsValid) { __doPostBack('',''); } else { $find('mpeBehaviourID').show(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/142633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Weird MSC 8.0 error: "The value of ESP was not properly saved across a function call..." We recently attempted to break apart some of our Visual Studio projects into libraries, and everything seemed to compile and build fine in a test project with one of the library projects as a dependency. However, attempting to run the application gave us the following nasty run-time error message: Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function pointer declared with a different calling convention. We have never even specified calling conventions (__cdecl etc.) for our functions, leaving all the compiler switches on the default. I checked and the project settings are consistent for calling convention across the library and test projects. Update: One of our devs changed the "Basic Runtime Checks" project setting from "Both (/RTC1, equiv. to /RTCsu)" to "Default" and the run-time error vanished, leaving the program running apparently correctly. I do not trust this at all. Was this a proper solution, or a dangerous hack? A: I saw this error when the code tried to call a function on an object that was not of the expected type. So, class hierarchy: Parent with children: Child1 and Child2 Child1* pMyChild = 0; ... pMyChild = pSomeClass->GetTheObj();// This call actually returned a Child2 object pMyChild->SomeFunction(); // "...value of ESP..." error occurs here A: This debug error means that the stack pointer register is not returned to its original value after the function call, i.e. that the number of pushes before the function call was not followed by an equal number of pops after the call. There are 2 reasons for this that I know (both with dynamically loaded libraries). #1 is what VC++ is describing in the error message, but I don't think this is the most common cause of the error (see #2).
1) Mismatched calling conventions: The caller and the callee do not have a proper agreement on who is going to do what. For example, if you're calling a DLL function that is _stdcall, but you for some reason have it declared as a _cdecl (default in VC++) in your call. This would happen a lot if you're using different languages in different modules etc. You would have to inspect the declaration of the offending function, and make sure it is not declared twice, and differently. 2) Mismatched types: The caller and the callee are not compiled with the same types. For example, a common header defines the types in the API and has recently changed, and one module was recompiled, but the other was not--i.e. some types may have a different size in the caller and in the callee. In that case, the caller pushes the arguments of one size, but the callee (if you're using _stdcall where the callee cleans the stack) pops a different size. Thus, ESP is not returned to the correct value. (Of course, these arguments, and others below them, would seem garbled in the called function, but sometimes you can survive that without a visible crash.) If you have access to all the code, simply recompile it. A: I was getting a similar error for AutoIt APIs which I was calling from a VC++ program. typedef long (*AU3_RunFn)(LPCWSTR, LPCWSTR); However, when I changed the declaration to include WINAPI, as suggested earlier in the thread, the problem vanished. Code without any error looks like: typedef long (WINAPI *AU3_RunFn)(LPCWSTR, LPCWSTR); AU3_RunFn _AU3_RunFn; HINSTANCE hInstLibrary = LoadLibrary("AutoItX3.dll"); if (hInstLibrary) { _AU3_RunFn = (AU3_RunFn)GetProcAddress(hInstLibrary, "AU3_WinActivate"); if (_AU3_RunFn) _AU3_RunFn(L"Untitled - Notepad",L""); FreeLibrary(hInstLibrary); }
The function had this signature: LONG WINAPI myFunc( time_t, SYSTEMTIME*, BOOL* ); The problem was that time_t's size is 32 bits in pre-2005 versions, but 64 bits since VS2005 (where it is defined as _time64_t). The call of the function expects a 32 bit variable but gets a 64 bit variable when called from VC >= 2005. As parameters of functions are passed via the stack when using the WINAPI calling convention, this corrupts the stack and generates the above-mentioned error message ("Run-Time Check Failure #0 ..."). To fix this, it is possible to #define _USE_32BIT_TIME_T before including the header file of the DLL or -- better -- change the signature of the function in the header file depending on the VS version (pre-2005 versions don't know _time32_t!): #if _MSC_VER >= 1400 LONG WINAPI myFunc( _time32_t, SYSTEMTIME*, BOOL* ); #else LONG WINAPI myFunc( time_t, SYSTEMTIME*, BOOL* ); #endif Note that you need to use _time32_t instead of time_t in the calling program, of course. A: It's worth pointing out that this can also be a Visual Studio bug. I got this issue on VS2017, Win10 x64. At first it made sense, since I was doing weird things casting this to a derived type and wrapping it in a lambda. However, I reverted the code to a previous commit and still got the error, even though it wasn't there before. I tried restarting and then rebuilding the project, and then the error went away. A: I read this in another forum: I was having the same problem, but I just FIXED it. I was getting the same error from the following code: HMODULE hPowerFunctions = LoadLibrary("Powrprof.dll"); typedef bool (*tSetSuspendStateSig)(BOOL, BOOL, BOOL); tSetSuspendStateSig SetSuspendState = (tSetSuspendStateSig)GetProcAddress(hPowerFunctions, "SetSuspendState"); result = SetSuspendState(false, false, false); <---- This line was where the error popped up. After some investigation, I changed one of the lines to: typedef bool (WINAPI *tSetSuspendStateSig)(BOOL, BOOL, BOOL); which solved the problem.
If you take a look in the header file where SetSuspendState is found (powrprof.h, part of the SDK), you will see the function prototype is defined as: BOOLEAN WINAPI SetSuspendState(BOOLEAN, BOOLEAN, BOOLEAN); So you guys are having a similar problem. When you are calling a given function from a .dll, its signature is probably off. (In my case it was the missing WINAPI keyword). Hope that helps any future people! :-) Cheers. A: I was having this exact same error after moving functions to a dll and dynamically loading the dll with LoadLibrary and GetProcAddress. I had declared extern "C" for the function in the dll because of the decoration. So that changed the calling convention to __cdecl as well. I was declaring function pointers to be __stdcall in the loading code. Once I changed the function pointer from __stdcall to __cdecl in the loading code the runtime error went away. A: Silencing the check is not the right solution. You have to figure out what is messed up with your calling conventions. There are quite a few ways to change the calling convention of a function without explicitly specifying it. extern "C" will do it, STDMETHODIMP/IFACEMETHODIMP will also do it, other macros might do it as well. I believe if you run your program under WinDBG (http://www.microsoft.com/whdc/devtools/debugging/default.mspx), the runtime should break at the point where you hit that problem. You can look at the call stack and figure out which function has the problem and then look at its definition and the declaration that the caller uses. A: Are you creating static libs or DLLs? If DLLs, how are the exports defined; how are the import libraries created? Are the prototypes for the functions in the libs exactly the same as the function declarations where the functions are defined? A: Do you have any typedef'd function prototypes (e.g. int (*fn)(int a, int b))? If you do, you might have gotten the prototype wrong.
The ESP error occurs on the call of a function (can you tell which one in the debugger?) that has a mismatch in the parameters - i.e. the stack has not been restored to the state it was in when you called the function. You can also get this if you're loading C++ functions that need to be declared extern "C" - C uses cdecl, C++ uses stdcall calling convention by default (IIRC). Put some extern "C" wrappers around the imported function prototypes and you may fix it. If you can run it in the debugger, you'll see the function immediately. If not, you can set DrWtsn32 to create a minidump that you can load into windbg to see the callstack at the time of the error (you'll need symbols or a mapfile to see the function names though). A: Another case where ESP can get messed up is with an inadvertent buffer overflow, usually through mistaken use of pointers to work past the boundary of an array. Say you have some C function that looks like int a, b[2]; Writing to b[3] will probably change a, and anywhere past that is likely to hose the saved ESP on the stack. A: You would get this error if the function is invoked with a calling convention other than the one it is compiled to. Visual Studio uses a default calling convention setting that's declared in the project's options. Check if this value is the same in the original project settings and in the new libraries. An over-ambitious dev could have set this to _stdcall/pascal in the original since it reduces the code size compared to the default cdecl. So the base process would be using this setting and the new libraries get the default cdecl, which causes the problem. Since you have said that you do not use any special calling conventions this seems to be a good probability. Also do a diff on the headers to see if the declarations / files that the process sees are the same ones that the libraries are compiled with. PS: Making the warning go away is BAAAD. The underlying error still persists.
A: This happened to me when accessing a COM object (Visual Studio 2010). I passed the GUID for a different interface A in my call to QueryInterface, but then I cast the retrieved pointer as interface B. This resulted in making a function call with an entirely different signature, which accounts for the stack (and ESP) being messed up. Passing the GUID for interface B fixed the problem. A: In my MFC C++ app I am experiencing the same problem as reported in Weird MSC 8.0 error: "The value of ESP was not properly saved across a function call…". The posting has over 42K views and 16 answers/comments, none of which blamed the compiler as the problem. At least in my case I can show that the VS2015 compiler is at fault. My dev and test setup is the following: I have 3 PCs all of which run Win10 version 10.0.10586. All are compiling with VS2015, but here is the difference. Two of the VS2015s have Update 2 while the other has Update 3 applied. The PC with Update 3 works, but the other two with Update 2 fail with the same error as reported in the posting above. My MFC C++ app code is exactly the same on all three PCs. Conclusion: at least in my case for my app the compiler version (Update 2) contained a bug that broke my code. My app makes heavy use of std::packaged_task so I expect the problem was in that fairly new compiler code. A: ESP is the stack pointer. So according to the compiler, your stack pointer is getting messed up. It is hard to say how (or if) this could be happening without seeing some code. What is the smallest code segment you can get to reproduce this? A: If you're using any callback functions with the Windows API, they must be declared using CALLBACK and/or WINAPI. That will apply appropriate decorations to make the compiler generate code that cleans the stack correctly. For example, on Microsoft's compiler it adds __stdcall.
Windows has always used the __stdcall convention as it leads to (slightly) smaller code, with the cleanup happening in the called function rather than at every call site. It's not compatible with varargs functions, though (because only the caller knows how many arguments they pushed). A: Here's a stripped-down C++ program that produces that error. Compiling it with Microsoft Visual Studio 2003 produces the above-mentioned error. #include "stdafx.h" char* blah(char *a){ char p[1]; strcat(p, a); return (char*)p; } int main(){ std::cout << blah("a"); std::cin.get(); } ERROR: "Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention." A: I had this same problem here at work. I was updating some very old code that was calling a FARPROC function pointer. If you don't know, FARPROCs are function pointers with ZERO type safety. It's the C equivalent of a typedef'd function pointer, without the compiler type checking. So for instance, say you have a function that takes 3 parameters. You point a FARPROC to it, and then call it with 4 parameters instead of 3. The extra parameter pushed extra garbage onto the stack, and when it pops off, ESP is now different than when it started. So I solved it by removing the extra parameter from the invocation of the FARPROC function call. A: Not the best answer but I just recompiled my code from scratch (rebuild in VS) and then the problem went away.
{ "language": "en", "url": "https://stackoverflow.com/questions/142644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Relational camp and "real-world" database development More than a decade has passed since the first publication of Date's and Darwen's "The Third Manifesto" in 1995. What is the place of the relational school of thought in today's database world? Is there any evidence that the Manifesto's ideas changed mainstream software development and data management practices? Have they catalyzed the creation of new data management products? Are these products commercially successful? A: Object oriented databases are an oxymoron. The more you try to make an OO database, the more you end up in the relational world anyway. In my opinion they're just hype. Note that ORMs are NOT OO databases. Neither are Datasets. I've heard that argument before so I'm saying it just to make things clear. A: I've seen many discussions over the years about how OODs are supposed to overtake Relational Databases "anytime soon"; that the Relational model is the way of the past; that inertia from the huge installed base (ehm... legacy) is what holds back progress on OODs. "It's just a matter of time before a 'good-enough' implementation finally comes out and succeeds at dethroning RDBMSs". I'm not an expert by any stretch; but I've thought about this many times, and I've come to believe that these views completely miss the point. In most "real world" scenarios, the big thing, the only thing that matters is the data. Programming techniques, tools and languages change; technology evolves. Corporate "Voice Response Systems" become the rage, then all but vanish behind the shadow of "The Web". Applications come and go; some of them good, some not so much; some critical, some merely convenient; some last 3 months, some last 3 decades. But at the end of the day, the information that feeds all these applications is the heart of the system and must survive the swings of fashion. The data stays. So, the core of the "System" must evolve around that one goal: keep and protect the data.
Think about it: SQL databases in particular give us a free-standing, (mostly) standardized repository with a decades-old proven record, and can be accessed anytime with what is, far from obsolete, essentially a functional language! That's a pretty good place to trust for your most valued component. Any approach that puts the priority in the programming tool, environment, or the application at the expense of saving the data in an obscured store -- anything that binds the application technology too closely to the data -- is likely going to fall by the wayside. Not to say that I believe everything in the world must go into a SQL table. OOD-like solutions have a place too, and a lot of potential. But you need to look in places where the "application" is king, and the "data" is secondary: games, one-off applications and tools, systems that hold non-critical data or data that has no long-term value past the life expectancy of the application. In particular, I think that systems that have a limited useful life (a few years at most) are prime candidates for OOD-like technologies. On the other hand, when working on anything that must one day "hand over" the data to its successor, I would be very leery of anything other than a good old RDBMS. To put it in a sound bite, it's never been about the "application"; it's always been about the "data". A: We are seeing some ways that object modelling is coming into the light for managing our data. We have Linq and NHibernate, which allow us to map data in the database to objects in our code. However, I think that we are still a long way from having a real object oriented database. I'm not sure we ever will. While working with "objects" has its advantages, treating data as sets with the relational data model has a lot of advantages.
A: So far the OODBMSs that have come out don't seem to have as wide an adoption as some had hoped, and the reason is simple: OODBMSs only address the concerns of OOP developers, but not everyone else, which includes DBAs, analysts, MIS professionals, and a huge number of developers who are not object-oriented, but are instead "data-driven". That being said, a vast amount of enterprise data remains in RDBMSs, in the same way that a vast amount of enterprise data also remains in COBOL/CICS-powered systems. As for facts, you can Google for hours to look for statistics but you won't find any. All you'd find are adoption statistics for Oracle vs. MS SQL Server vs. MySQL/PostgreSQL/other open-source RDBMSs, while OODBMSs like db4o are largely ignored. A: In business data processing, the relational model is firmly entrenched, and cannot be removed. It is central and is often highly overused for inappropriate things. Folks will use (abuse) a relational database as a reliable message queue because -- well -- they see every problem as a database problem. The relational model is the backbone of many (almost all) commercial products for every business process. It's hard to find anything that's not fundamentally relational. Indeed, in many cases, the products are closely aligned with the database: Oracle's Financials, Microsoft's Dynamics accounting, etc. For the foreseeable future, relational data stores will be the primary storage for business data processing. Currently, relational databases go without saying. Everyone asks "which database engine" so they can weigh in on the Oracle vs. IBM vs. Microsoft vs. MySQL debate. No one asks "what will the data model be? Object or Relational?" ORM will continue to make inroads. Object-Oriented programming will continue to grow, leading to more and more ORM. Breaking out of this box is hard -- nearly impossible -- because business IT is focused on stability, not innovation. Their goal is almost always "Keep The Lights On".
That means refusing change until it is forced on them by a vendor going out of business or ending support for a product. A: I've always dealt with data sets too large to seriously consider the classic "object" model of rendering the data as a class with data elements containing all the information and methods to access / manipulate them. I have, however, found a simple compromise model with .NET datasets. Since they "self buffer" to disk, they are great for dealing with chunks of data that may or may not fit in memory - so I use them for my "object collections". Then all the classes that comprise the "business" objects for the application simply have a reference to the record in the data set that contains their information, and all the methods for the class simply parse from the record set. This works for queries returning anywhere from 1 result to a million - and the class model is very easy to replicate - since basically all of the class internal data variables are just fields on the recordset.
{ "language": "en", "url": "https://stackoverflow.com/questions/142652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: DocumentRoot in .htaccess is causing an error I have DocumentRoot /var/www/test in my .htaccess file. This is causing the Apache server to give me a 500 internal server error. The error log file shows: [alert] [client 127.0.0.1] /var/www/.htaccess: DocumentRoot not allowed here AllowOverride All is set in my conf file. Any idea why this is happening? A: The DocumentRoot directive cannot appear in a .htaccess file. Put it in httpd.conf instead. A: DocumentRoot should be set in your VirtualHost directive, not your .htaccess file. Any specific reason you put DocumentRoot in your .htaccess file? A: You don't mention your Apache version, but the docs for 2.0 say that DocumentRoot is only valid in the virtual host or server config. According to the docs it should not be used in a .htaccess file: http://httpd.apache.org/docs/2.0/mod/core.html#documentroot
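For reference, DocumentRoot normally lives in the server config, typically inside a VirtualHost block. A minimal sketch matching the paths in the question (the ServerName is illustrative) might look like:

```apache
# httpd.conf (or an included vhost file) -- not .htaccess
<VirtualHost *:80>
    # hypothetical hostname
    ServerName example.test
    DocumentRoot /var/www/test

    <Directory /var/www/test>
        # lets .htaccess files under the docroot override what they legally can
        AllowOverride All
    </Directory>
</VirtualHost>
```

After editing the server config, an Apache reload/restart is required; .htaccess changes, by contrast, take effect immediately.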
{ "language": "en", "url": "https://stackoverflow.com/questions/142653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to most efficiently handle large numbers of file descriptors? There appear to be several options available to programs that handle large numbers of socket connections (such as web services, p2p systems, etc.):

1. Spawn a separate thread to handle I/O for each socket.
2. Use the select system call to multiplex the I/O into a single thread.
3. Use the poll system call to multiplex the I/O (replacing select).
4. Use the epoll system calls to avoid having to repeatedly send socket fds through the user/system boundary.
5. Spawn a number of I/O threads that each multiplex a relatively small subset of the total number of connections using the poll API.
6. As per #5, except using the epoll API to create a separate epoll object for each independent I/O thread.

On a multicore CPU I would expect that #5 or #6 would have the best performance, but I don't have any hard data backing this up. Searching the web turned up this page describing the experiences of the author testing approaches #2, #3 and #4 above. Unfortunately this web page appears to be around 7 years old, with no obvious recent updates to be found. So my question is: which of these approaches have people found to be most efficient, and/or is there another approach that works better than any of those listed above? References to real-life graphs, whitepapers and/or web-available writeups will be appreciated. A: Speaking from my experience with running large IRC servers, we used to use select() and poll() (because epoll()/kqueue() weren't available). At around about 700 simultaneous clients, the server would be using 100% of a CPU (the IRC server wasn't multithreaded). However, interestingly the server would still perform well. At around 4,000 clients, the server would start to lag. The reason for this was that at around 700ish clients, when we'd get back to select() there would be one client available for processing. The for() loops scanning to find out which client it was would be eating up most of the CPU.
As we got more clients, we'd start getting more and more clients needing processing in each call to select(), so we'd become more efficient. Moving to epoll()/kqueue(), similarly spec'd machines would trivially deal with 10,000 clients, and some (admittedly more powerful machines, but still machines that would be considered tiny by today's standards) have held 30,000 clients without breaking a sweat. Experiments I've seen with SIGIO seem to suggest it works well for applications where latency is extremely important, where there are only a few active clients doing very little individual work. I'd recommend using epoll()/kqueue() over select()/poll() in almost any situation. I've not experimented with splitting clients between threads. To be honest, I've never found a service that needed more optimisation work done on the front-end client processing to justify the experimentation with threads. A: From my experience, you'll have the best perf with #6. I also recommend you look into libevent to deal with abstracting some of these details away. At the very least, you'll be able to see some of their benchmarks. Also, about how many sockets are you talking about? Your approach probably doesn't matter too much until you start getting at least a few hundred sockets. A: I have spent the last 2 years working on that specific issue (for the G-WAN web server, which comes with MANY benchmarks and charts exposing all this). The model that works best under Linux is epoll with one event queue (and, for heavy processing, several worker threads). If you have little processing (low processing latency) then using one thread will be faster than using several threads. The reason for this is that epoll does not scale on multi-core CPUs (using several concurrent epoll queues for connection I/O in the same user-mode application will just slow down your server).
I did not look seriously at epoll's code in the kernel (I have only focussed on user mode so far), but my guess is that the epoll implementation in the kernel is crippled by locks. This is why using several threads quickly hits the wall. It goes without saying that such a poor state of things should not last if Linux wants to keep its position as one of the best-performing kernels. A: I use epoll() extensively, and it performs well. I routinely have thousands of sockets active, and test with up to 131,072 sockets. And epoll() can always handle it. I use multiple threads, each of which polls on a subset of sockets. This complicates the code, but takes full advantage of multi-core CPUs.
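To make the select()-vs-epoll() point above concrete, here is a minimal single-queue sketch in C (Linux-only; the pipe stands in for a client socket, and epoll_demo is just an illustrative helper name, not part of any library):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/epoll.h>

/* Hypothetical demo helper: watch one fd with a single epoll queue
   (approach #4), deliver one message through it, and report how many
   descriptors came back ready. */
static int epoll_demo(char *out, size_t outlen) {
    int pipefd[2];
    if (pipe(pipefd) != 0)
        return -1;

    int epfd = epoll_create1(0);          /* one event queue */
    struct epoll_event ev = {0};
    ev.events = EPOLLIN;
    ev.data.fd = pipefd[0];
    epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);

    write(pipefd[1], "hello", 5);         /* simulate a peer sending data */

    /* Only ready descriptors are returned, so there is no O(n) scan
       over every watched fd -- the scan that ate the CPU in the
       select()-based IRC servers described above. */
    struct epoll_event ready[16];
    int n = epoll_wait(epfd, ready, 16, 1000);
    for (int i = 0; i < n; i++) {
        ssize_t len = read(ready[i].data.fd, out, outlen - 1);
        if (len > 0)
            out[len] = '\0';
    }

    close(epfd);
    close(pipefd[0]);
    close(pipefd[1]);
    return n;
}
```

A real server would register its listening and client sockets and loop on epoll_wait forever; the point of the sketch is that the kernel hands back only the ready descriptors.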
{ "language": "en", "url": "https://stackoverflow.com/questions/142677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Calling Reporting Services RDL in a Java application I have developed my reports in SQL Reporting Services and deployed them to my server. I need to display the reports in my Java application page. I want to know: is there any control (like the .NET ReportViewer control) to display this? Thanks balaweblog A: I don't know about a control, but if you just need to display reports and not provide much interaction you could use the report server web service and call its Render method. This would allow you to execute and return the report output in a number of formats. So you could have Java code accepting parameters which you then pass to the Render method, and you get back a byte array of a PDF that displays the report. Render method... http://msdn.microsoft.com/en-us/library/aa258532(SQL.80).aspx Reporting Services Webservice... http://msdn.microsoft.com/en-us/library/aa274396(SQL.80).aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/142690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unit testing in Flex Are there any unit testing solutions for Flex or ActionScript 3? If so, what are their features? Any UI testing abilities? Functional testing? Any pointers, examples, libraries or tools that you can share? A: FlexUnit is pretty awesome - http://opensource.adobe.com/wiki/display/flexunit/FlexUnit Also ASUnit - http://asunit.org They are both pretty similar and have both taken quite a bit from frameworks like JUnit. FlexMonkey (http://code.google.com/p/flexmonkey/), although I haven't used it myself, seems to do UI unit testing. A: I can confirm that FlexMonkey indeed does UI unit testing. It provides record/playback of UI interactions and generates FlexUnit test cases. Check it out at http://flexmonkey.googlecode.com A: I just found fluint, and it is a great unit testing library, better than both FlexUnit and ASUnit IMHO. It handles asynchronous testing really nicely. A: I would recommend FlexUnit, too... and you could also have a look at Visual FlexUnit. A few days ago I found the RIATest tool, but I haven't tried it yet. A: I'd recommend fluint simply due to it having a more active developer base (and its improved support for testing asynchronous code). Also, if you are after mocking/stubs there is asmock (a dynamic mocking framework) and mock-as3 (a static mocking framework). A: For asynchronous unit testing dpUint is pretty useful. However FlexUnit is the way to go if you wish to integrate unit testing with a Maven build. Asynchronous testing (e.g. Cairngorm events) can also be done with FlexUnit, but it is not as elegant as with dpUint. A: Try mockito for flex http://bitbucket.org/loomis/mockito-flex
{ "language": "en", "url": "https://stackoverflow.com/questions/142692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can I write software that does bank account transfers? You know those websites that let you type in your checking account number and the routing number, and then they can transfer money to and from your account? How does that work? Any good services or APIs for doing that? Any gotchas? A: If you want to be able to initiate transfers of funds between accounts in different financial institutions (using account and routing numbers), you need to find a payment processing company that offers ACH (http://en.wikipedia.org/wiki/Automated_Clearing_House) transfer services. Usually these companies are subsidiaries of financial institutions that already have access to ACH. For example, one such company is ACH Direct (http://www.achdirect.com/). I can't vouch for its services or reliability though; I am just giving it here as an example of what type of company you need to search for. Of course, technically, you could try to connect to ACH directly. However, to do this, you need to follow the rules and regulations of NACHA (http://en.wikipedia.org/wiki/NACHA-The_Electronic_Payments_Association) when writing your software and pass rigorous certification. It's quite a big investment, so unless you are backed by a couple of billion dollars, I wouldn't advise attempting this. A: The banks do have APIs for doing this, but only approved people/companies are allowed to interface with these systems. Because it actually involves transferring money around, the security requirements are pretty high in terms of how you handle the account numbers on your system. Many sites that offer this feature for buying goods actually use a third-party system to handle the actual money transfer into their account. This lowers the amount of trouble to implement the API, as well as putting the burden of security on the third party handling the money transfers.
If you are serious about setting up a system where you can accept bank account numbers and exchange funds, you should contact your bank and see what the actual requirements for implementing such a system are. Each bank has its own system, along with its own rates for these transactions. Some third parties I'm aware of are:

* Moneris
* Cactus
* Beanstream

I'm in Canada, although I think Moneris and Cactus operate in the US. I think Beanstream doesn't. Again, you can talk to your bank, and they can probably get you in touch with a third party who will help you with the transactions. A: You can do this with a Moneris US eSELECTplus merchant account - you just need to have Automated Clearing House (ACH) enabled on your merchant account (unfortunately there is no equivalent to ACH currently available in Canada). Here's an example of what a debit transaction looks like in the Moneris US PHP API:

<?php
require "../mpgClasses.php";

/************************ Request Variables **********************************/

$store_id = 'monusqa002'; // account credentials
$api_token = 'qatoken';

/************************ Transaction Object *********************************/

$txnArray = array(
    'type'     => 'us_ach_debit',
    'order_id' => 'ach-' . date("dmy-G:i:s"),
    'cust_id'  => 'my cust id',
    'amount'   => '1.00'
);

$achTemplate = array(
    'sec'             => 'ppd',
    'cust_first_name' => 'Bob',
    'cust_last_name'  => 'Smith',
    'cust_address1'   => '101 Main St',
    'cust_address2'   => 'Apt 102',
    'cust_city'       => 'Chicago',
    'cust_state'      => 'IL',
    'cust_zip'        => '123456',
    'routing_num'     => '490000018',
    'account_num'     => '23456',
    'check_num'       => '100',
    'account_type'    => 'savings'
);

$mpgAchInfo = new mpgAchInfo($achTemplate);
$mpgTxn = new mpgTransaction($txnArray);
$mpgTxn->setAchInfo($mpgAchInfo);
$mpgRequest = new mpgRequest($mpgTxn);
$mpgHttpPost = new mpgHttpsPost($store_id, $api_token, $mpgRequest);

/************************ Response Object ************************************/

$mpgResponse = $mpgHttpPost->getMpgResponse();

print("\nCardType = " . $mpgResponse->getCardType());
print("\nTransAmount = " . $mpgResponse->getTransAmount());
print("\nTxnNumber = " . $mpgResponse->getTxnNumber());
print("\nReceiptId = " . $mpgResponse->getReceiptId());
print("\nTransType = " . $mpgResponse->getTransType());
print("\nReferenceNum = " . $mpgResponse->getReferenceNum());
print("\nResponseCode = " . $mpgResponse->getResponseCode());
print("\nMessage = " . $mpgResponse->getMessage());
print("\nAuthCode = " . $mpgResponse->getAuthCode());
print("\nComplete = " . $mpgResponse->getComplete());
print("\nTransDate = " . $mpgResponse->getTransDate());
print("\nTransTime = " . $mpgResponse->getTransTime());
print("\nTicket = " . $mpgResponse->getTicket());
print("\nTimedOut = " . $mpgResponse->getTimedOut());
?>

The API files and integration guides for Moneris USA are available at: http://developer.moneris.com (free registration required) Moneris USA - ACH: http://www.monerisusa.com/payment-processing-services/ach-direct-debit.aspx A: Stripe Connect allows you to transfer money to bank accounts and to accept payments through one unified API. As of December 2015 they provide more thorough documentation and in general seem to be a more popular option among developers than most of the companies mentioned in other answers. See https://stripe.com/docs/connect for more info. A: Paypal has a fairly accessible API you can use within your program to accomplish some of this. A: Pretty straightforward way of doing ACH transfers - https://www.dwolla.com/white-label Depending on what you want your application to do, you'll need different functionality. If you want to pay (credit) bank accounts, it's pretty straightforward. Here are the steps:

1. Create a member
2. Create a funding source
3. Create a transfer

If you want to debit and credit bank accounts it gets a little more complex. Here are the steps:

1. Create a member
2. Get a funding source authorization
3. Create a transfer

The only reason the authorization is a little harder is that you have to go through a two-deposit method or a verification flow of some type. This gets a lot easier with Dwolla.js - https://www.dwolla.com/dwollajs-bank-verification
{ "language": "en", "url": "https://stackoverflow.com/questions/142693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: High-level languages for out-of-the-box GUI desktop application programming After I discontinued programming in C++ upon moving into web authoring, I was spoilt by PHP's high-level constructs like hash tables and its dynamic, weak typing. I remembered the angst of C/C++ pointers and the maze of low-level Win32 API handles and message loops, and that prevented me from utilizing environments like Code::Blocks for desktop applications. I am also not very fond of the bulky, statically-typed C#/.NET environment. Any other ideas? A: wxPython A: Python has great GUI toolkits. A: Delphi. Without question. http://www.codegear.com/delphi You'll have to put up with strong typing, though. C# isn't a bad language and the .NET framework certainly has some interesting features, but WinForms can be sluggish, making it less suitable (at least to me) for desktop GUI applications. I also don't like the hefty runtime requirement. A: Tcl/Tk is an old-school solution, but you can get a GUI up and running with surprisingly little code. The runtime can be embedded, so you can distribute a self-contained executable in a single file that contains your code, the runtime, and resource files. The runtime runs on Unix/Windows/Mac, so it's easy to generate binaries for whatever platforms you need. However, many people find it hard to wrap their heads around Tcl... A: I have worked a lot with Flex and WPF (C#). Though you don't like C# very much... I would say that Flex is very much like C#, but without all the strongly-typed code. I have about 13 years of PHP programming under my belt, and I would say that moving to Flex application development (this includes AIR desktop applications) was one of the most fluid transitions I have made. Especially if you like working with any kind of JavaScript. Anyway, Flex, Flex, Flex... oh yeah, and AIR :) Please let me know if you need more help with this, or a better breakdown.
A: I third Python if all you want is fast, easy, pain-free development. Or, if you want to get back to C++ (because some of us just love the pain), try using Boost and Qt; you'll be much happier than back in the old days with the Win32 API. A: You might use Lua with wxLua or the lightweight IUP libraries, both being portable. For quick/small prototype/throwaway scripts, I also use AutoHotkey: the language is quite awful for seasoned programmers (newbies seem to like it...), but its high-level GUI is easy and fast to use. And it is rather small and can be "compiled" to a standalone exe.
{ "language": "en", "url": "https://stackoverflow.com/questions/142695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Why does a VB.Net function that returns string only actually return a single character? I'm calling a function that returns a string, but it's only actually returning the first character of the string it's supposed to be returning. Here's a sample piece of code to recreate the issue I'm experiencing: Public Function GetSomeStringValue(Value as Integer) As String ... Code Goes here Return Some_Multicharacter_string End Function The function call looks like: SomeStringValue = GetSomeStringValue(Value) Why is this not returning the entire string? A: Note: this answer was originally written by the OP, Kibbee, as a self-answer. However, it was written in the body of the question, not as an actual separate answer. Since the OP has refused repeated requests by other users, including a moderator, to repost in accordance with site rules, I'm reposting it myself. After trying a hundred different things, refactoring my code, stepping through the code in the debugger many times, and even having a co-worker look into the problem, I finally, in a flash of genius, discovered the answer. At some point when I was refactoring the code, I changed the function to get rid of the Value parameter, leaving it as follows: Public Function GetSomeStringValue() As String ... Code Goes here Return Some_Multicharacter_String End Function However, I neglected to remove the parameter that I was passing in when calling the function: SomeStringValue = GetSomeStringValue(Value) The compiler didn't complain because it interpreted what I was doing as calling the function without brackets, which is a legacy feature from the VB6 days. Then, the Value parameter transformed into the array index of the string (aka character array) that was returned from the function. 
So I removed the parameter, and everything worked fine: SomeStringValue = GetSomeStringValue() I'm posting this so that other people will recognize the problem when/if they ever encounter it, and are able to solve it much more quickly than I did. It took quite a while for me to solve, and I hope I can save others some time.
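The same foot-gun can be sketched in a C-family language, where indexing a function's return value also compiles without complaint (names here are hypothetical stand-ins for the VB code in the question):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the parameterless VB function after refactoring. */
static const char *get_some_string_value(void) {
    return "MyName";                         /* the full multi-character string */
}

/* What the VB compiler silently did: since parentheses are optional on
   VB calls, GetSomeStringValue(Value) parsed as "call the function, then
   index the returned string with Value" -- loosely this: */
static char get_some_string_value_at(int value) {
    return get_some_string_value()[value];   /* one character, not the string */
}
```

Calling the indexed form with 0 yields only 'M', which is exactly the single-character symptom described above.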
{ "language": "en", "url": "https://stackoverflow.com/questions/142697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: What do the 'Delimiter' and 'InheritsFromParent' attributes mean in .vsprops files? I can't seem to find any useful documentation from Microsoft about how one would use the Delimiter and InheritsFromParent attributes in the UserMacro element when defining user macros in .vsprops property sheet files for Visual Studio. Here's a sample usage:

<UserMacro Name="INCLUDEPATH" Value="$(VCROOT)\Inc" InheritsFromParent="TRUE" Delimiter=";"/>

From the above example, I'm guessing that "inherit" really means "a) if the definition is non-empty then append the delimiter, and b) append the new definition", whereas the non-inherit behavior would be to simply replace any current macro definition. Does anyone know for sure? Even better, does anyone have any suggested source of alternative documentation for Visual Studio .vsprops files and macros? NOTE: this is not the same as the InheritedPropertySheets attribute of the VisualStudioPropertySheet element, for example:

<VisualStudioPropertySheet ... InheritedPropertySheets=".\my.vsprops">

In this case "inherit" basically means "include". A: [Answering my own question] InheritsFromParent means prepend. To verify this, I did an experiment that reveals how user macros work in Visual Studio 2008. Here's the setup:

* Project p.vcproj includes the property sheet file d.vsprops ('d' for derived) using the InheritedPropertySheets tag.
* d.vsprops includes the property sheet file b.vsprops ('b' for base).
* p.vcproj also defines a Pre-Build Event which dumps the environment.
* Both .vsprops files contain user macro definitions.

b.vsprops

...
<UserMacro Name="NOENV" Value="B"/>
<UserMacro Name="OVERRIDE" Value="B" PerformEnvironmentSet="true"/>
<UserMacro Name="PREPEND" Value="B" PerformEnvironmentSet="true"/>
...

d.vsprops

...
<VisualStudioPropertySheet ...
InheritedPropertySheets=".\b.vsprops">
<UserMacro Name="ENV" Value="$(NOENV)" PerformEnvironmentSet="true"/>
<UserMacro Name="OVERRIDE" Value="D" PerformEnvironmentSet="true"/>
<UserMacro Name="PREPEND" Value="D" InheritsFromParent="true" Delimiter="+" PerformEnvironmentSet="true"/>
...

p.vcproj

...
<Configuration ... InheritedPropertySheets=".\d.vsprops">
<Tool Name="VCPreBuildEventTool" CommandLine="set | sort"/>
...

build output

...
ENV=B
OVERRIDE=D
PREPEND=D+B
...

From these results we can conclude the following:

* PerformEnvironmentSet="true" is necessary for user macros to be defined in the environment used for build events. Proof: NOENV is not shown in the build output.
* User macros are always inherited from included property sheets, regardless of PerformEnvironmentSet or InheritsFromParent. Proof: in b.vsprops, NOENV is not set in the environment, and in d.vsprops it is used without need of InheritsFromParent.
* Simple redefinition of a user macro overrides any previous definition. Proof: OVERRIDE is set to D although it was earlier defined as B.
* Redefinition of a user macro with InheritsFromParent="true" prepends the new definition to any previous definition, separated by the specified Delimiter. Proof: PREPEND is set to D+B (not D or B+D).
Here are some additional resources I found explaining Visual Studio .vsprops files and related topics; they are from a few years back but still helpful:

understanding the VC project system part I: files and tools
understanding the VC project system part II: configurations and the project property pages dialog
understanding the VC project system part III: macros, environment variables and sharing
understanding the VC project system part IV: properties and property inheritance
understanding the VC project system part V: building, tools and dependencies
understanding the VC project system part VI: custom build steps and build events
understanding the VC project system part VII: "makefile" projects and (re-)using environments

A: There's documentation on the UI version of this here. A lot of the XML files seem somewhat undocumented, often just giving a schema file. Your guess as to how they function is pretty much right. A: It is not the whole story.

* Delimiters are not inherited. Only the list of items they delimit is inherited: the same user macro can have different delimiters in different property sheets, but only the last encountered delimiter is used. (I write "last encountered" because at project level, we cannot specify a delimiter, and what gets used there is the last property sheet that specified inheritance for that macro.)
* Delimiters work only if made of a single character. A delimiter longer than one character may have its first and/or last character stripped in some cases, in a mistaken attempt to "join" the list of values.
* $(Inherit) appears to work inside user macros. Like for aggregate properties, it works as a placeholder for the parent's values, and it can appear multiple times. When no $(Inherit) is found, it is implied at the beginning if the inheritance flag is set.
* $(NoInherit) also appears to work in user macros (makes VC behave as if the checkbox was unticked).
* User macros (and some built-ins) appear to work when used for constructing a property sheet's path (VC's own project converter uses that feature). The value taken by user macros in this situation is not always intuitive, though, especially if the macro gets redefined in other included property sheets.
* In general, what gets "inherited" or concatenated are formulae and not values (i.e. you cannot use a user macro to take a snapshot of the local value of (say) $(IntDir) in a property sheet and hope to "bubble up" that value through inheritance, because what gets inherited is actually the formula "$(IntDir)", whose value will eventually be resolved at the project/config/file level).
* A property sheet already loaded is ignored (this seems to avoid having the same property sheet's user macros aggregated twice).
* Both "/" and "\" appear to work in property sheet paths (and in most places where VS expects a path).
* A property sheet path starting with "/" (after macros have been resolved) is assumed to be in "./", where '.' is the location of the calling sheet/project. Same if the path does not start with "./", "../" or "drive:/" (dunno about UNC).
{ "language": "en", "url": "https://stackoverflow.com/questions/142708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can I implement a cooperative multi-tasking system in VxWorks? A legacy embedded system is implemented using a cooperative multi-tasking scheduler. The system essentially works along the following lines:

* Task A does work.
* When Task A is done, it yields the processor.
* Task B gets the processor and does work.
* Task B yields...
* Task n yields.
* Task A gets scheduled and does work.

One big circular queue: A -> B -> C -> ... -> n -> A

We are porting the system to a new platform and want to minimize system redesign. Is there a way to implement that type of cooperative multi-tasking in VxWorks? A: While VxWorks is a priority-based OS, it is possible to implement this type of cooperative multi-tasking. Simply put all the tasks at the same priority. In your code, where you do your yield, simply insert a 'taskDelay(0);'. Note that you have to make sure the kernel time slicing is disabled (kernelTimeSlice(0)). All tasks at the same priority are in a queue. When a task yields, it gets put at the end of the queue. This would implement the type of algorithm described. A: I once worked on a relatively large embedded product which did this. Time slicing was disabled and threads would explicitly taskDelay when they wanted to allow another thread to run. I have to conclude: disabling VxWorks slicing leads to madness. Avoid it, if it is within your power to do so. Because tasks were entirely non-preemptive (and interrupt handlers were only allowed to enqueue a message for a regular task to consume), the system had dispensed with any sort of locking for any of its data structures. Tasks were expected to only release the scheduler to another task if all data structures were consistent. Over time the original programmers moved on and were replaced by fresh developers to maintain and extend the product. As it grew more features, the system as a whole became less responsive.
When faced with a task which took too long, the new developers would take the straightforward solution: insert taskDelay in the middle. Sometimes this was fine, and sometimes it wasn't... Disabling task slicing effectively makes every task in your system a dependency of every other task. If you have more than three tasks, or you even think you might eventually have more than three tasks, you really need to construct the system to allow for it. A: This isn't specific to VxWorks, but the system you have described is a variant of Round Robin Scheduling (I'm assuming you are using priority queues; otherwise it is just Round Robin Scheduling). The wiki article provides a bit of background, and then you could go from there. Good luck. A: What you describe is essentially:

void scheduler()
{
    while (1)
    {
        int st = microseconds();
        a();
        b();
        c();
        sleep(microseconds() - st);
    }
}

However, if you don't already have a scheduler, now is a good time to implement one. In the simplest case, each entry point can either multiply inherit from a Task class or implement a Task interface (depending on the language). A: You can make all the tasks the same priority and use taskDelay(0), or you can use taskLock and taskUnlock inside your low-priority tasks where you need non-preemptive behavior.
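The run-to-completion round-robin the answers describe can be sketched in plain C (task names are hypothetical, and "yielding" is modelled as returning to the scheduler loop rather than calling taskDelay):

```c
#include <assert.h>
#include <string.h>

#define NTASKS 3

typedef void (*task_fn)(void);

static char trace[64];                    /* records the order tasks ran in */

/* Hypothetical tasks: each does a chunk of work, then "yields" by
   simply returning to the scheduler loop. */
static void task_a(void) { strcat(trace, "A"); }
static void task_b(void) { strcat(trace, "B"); }
static void task_c(void) { strcat(trace, "C"); }

/* Cooperative round-robin over a fixed ring of tasks. In VxWorks, with
   kernelTimeSlice(0) and all tasks at equal priority, taskDelay(0) has
   roughly the same effect as returning here: the caller goes to the
   tail of the ready queue and the next task in the ring runs. */
static void run_ring(int rounds) {
    task_fn ring[NTASKS] = { task_a, task_b, task_c };
    trace[0] = '\0';
    for (int r = 0; r < rounds; r++)
        for (int i = 0; i < NTASKS; i++)
            ring[i]();                    /* run task to its yield point */
}
```

Two rounds of the ring produce the A -> B -> C -> A -> B -> C ordering from the question.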
{ "language": "en", "url": "https://stackoverflow.com/questions/142710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Coding for high reliability/availability/security - what standards do I read? I've heard that the automotive industry has something called MISRA C. What are the relevant standards for other high reliability/availability/security industries, such as:

* Space
* Aircraft
* Banking/financial
* Automotive
* Medical
* Defense/Military
* ???

-Adam A: Check out the Goddard Space Flight Center and its coding standards. One of the C standards, which I've adopted in my own code, is that headers must be self-contained, and they provide a simple way to enforce that -- a module's header must be the first file included in the module, so if the file is not self-contained, it won't compile. A: If you're asking specifically about coding, the MISRA guidelines present some ways of avoiding common mistakes in C. However, there's a lot more to good software than coding. The "bible" of the aviation industry for software development is DO-178B. It tells you what questions need to be addressed in the various design phases and how the answers should be documented. It's an ENORMOUS amount of paperwork, but if you're trying to keep planes in the air, you want the weakest point to be the human (pilot), not the software. A: For programming high reliability systems in Ada, there is ISO/IEC TR 15942: "Information technology — Programming languages — Guide for the use of the Ada programming language in high integrity systems": Introduction As a society, we are increasingly reliant upon high integrity systems: for safety systems (such as fly-by-wire aircraft), for security systems (to protect digital information) or for financial systems (e.g., cash dispensers). As the complexity of these systems grows, so do the demands for improved techniques for the production of the software components of the system. These high integrity systems must be shown to be fully predictable in operation and have all the properties required of them.
This can only be achieved by analysing the software, in addition to the use of conventional dynamic testing. There is, currently, no mainstream high level language where all programs in that language are guaranteed to be predictable and analysable. Therefore for any choice of implementation language it is essential to control the ways that the language is used by the application. The Ada language [ARM] is designed with specific mechanisms for controlling the use of certain aspects of the language. Furthermore:

* The semantics of Ada programs are well-defined, even in error situations. Specifically, the effect of a program can be predicted from the language definition with few implementation dependencies or interactions between language features.
* The strong typing within the language can be used to reduce the scope (and cost) of analysis to verify key properties.
* The Ada language has been successfully used on many high integrity applications. This demonstrates that validated Ada compilers have the quality required for such applications.
* Guidance can be provided to facilitate the use of the language and to encourage the development of tools for further verification.

Ada is therefore ideally suited for implementing high integrity software, and this document provides guidance in the controls that are required on the use of Ada to ensure that programs are predictable and analysable. A: You may find it instructive to look at some of the requirements of Carrier Grade Linux. While they (as the name suggests!) are specifying Linux requirements, they are doing so for use in the high availability segment of telecommunications equipment. A: The FDA has General Principles of Software Validation, Design Control Guidance For Medical Device Manufacturers, Guidance for Industry, FDA Reviewers and Compliance on Off-The-Shelf Software Use in Medical Devices, etc.
A: NIST provides a whole slew of related documents, you can dive in and peruse their work - but there is a lot of it, and it's all quite verbose, so I don't have a specific one to point you at. If you want to be more specific with your needs, I might be able to narrow it down a bit... In addition, Carnegie Mellon is pretty much the definitive source when it comes to development processes for reliability; their standards are easy enough to find, but also quite verbose. Also, specific industries often have their own standards, depending also on the country. For instance, credit card industry - PCI-DSS; Banking industry in EU - Basel II; Medical - HIPAA (though that's pretty high-level); anything US government related, various NIST docs; etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/142722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do you access a property of a bean for reading in a Spring XML config file? I want to do something like the following in Spring:
<beans>
...
<bean id="bean1" ... />
<bean id="bean2">
    <property name="propName" value="bean1.foo" />
...
I would think that this would access the getFoo() method of bean1 and call the setPropName() method of bean2, but this doesn't seem to work.
A: What I understood:
*You have a bean (bean1) with a property called "foo"
*You have another bean (bean2) with a property named "propName", which also has to have the same "foo" as bean1.
Why not do this:
<beans>
...
<bean id="foo" class="foopackage.foo"/>
<bean id="bean1" class="foopackage.bean1">
    <property name="foo" ref="foo"/>
</bean>
<bean id="bean2" class="foopackage.bean2">
    <property name="propName" ref="foo"/>
</bean>
....
</beans>
Doing this, your bean2 is not coupled to bean1 as in your example. You can change bean1 and bean2 without affecting each other. If you REALLY need to do the injection you proposed, you can use:
<util:property-path id="propName" path="bean1.foo"/>
A: You need to use PropertyPathFactoryBean:
<bean id="bean2" depends-on="bean1">
    <property name="propName">
        <bean class="org.springframework.beans.factory.config.PropertyPathFactoryBean">
            <property name="targetBeanName" value="bean1"/>
            <property name="propertyPath" value="foo"/>
        </bean>
    </property>
</bean>
A: I think you have to inject bean1 and then get foo manually, because of a timing issue: when does the framework resolve the value of the target bean? You could create a pointer bean and configure that.
class SpringRef {
    private String targetProperty;
    private Object targetBean;

    //getters/setters

    public Object getValue() {
        //resolve the value of the targetProperty on targetBean.
    }
}
Commons BeanUtils should be helpful.
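One note on the `<util:property-path>` approach mentioned above: it only works if the `util` schema namespace is declared on the `<beans>` root. A minimal sketch (class names are hypothetical, and schema versions may need adjusting for your Spring release):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <bean id="bean1" class="foopackage.bean1"/>

    <!-- Exposes bean1.foo (i.e. the result of bean1.getFoo())
         as a bean named "propName" -->
    <util:property-path id="propName" path="bean1.foo"/>

    <bean id="bean2" class="foopackage.bean2">
        <property name="propName" ref="propName"/>
    </bean>
</beans>
```

Under the covers this is just shorthand for the PropertyPathFactoryBean configuration shown in the other answer.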
{ "language": "en", "url": "https://stackoverflow.com/questions/142740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What software would you recommend for image enhancement prior to OCR (Optical Character Recognition)? We are currently researching ways of enhancing image quality prior to submission to OCR. The OCR engine we are currently utilizing is the Scansoft API from Nuance (v15). We were researching the Lead Tools but have since decided to look elsewhere. The licensing costs associated with Lead Tools are just too great. To start with we are looking for simple image enhancement features such as: deskewing, despeckling, line removal, punch hole removal, sharpening, etc. We are running a mix of .NET and Java software, but a Java solution would be preferred. A: Kofax is good for pre-processing, but for the types of cleanup you are talking about it may be overkill unless the images are really bad. Unless your specialty is in image processing, I'd recommend working with a provider that does the image cleanup and the OCR so you can focus on the value you actually add. We license the OCR development kit from ABBYY (the ABBYY SDK) and have found it to be superb for both image processing and OCR. The API is quite extensive, and the sample apps, help and support have been beyond impressive. I definitely recommend taking a look. A: Disclaimer: I work for Atalasoft. We have those functions and run-time royalty-free licensing for .NET. http://www.atalasoft.com/products/dotimage/ We also have OCR components, including a .NET wrapper for ABBYY, Tesseract and others, and Searchable PDF generation (image on top of text in a PDF). A: Not sure if this would be quite up to the standards that you guys would need, but perhaps you should look at some of the Paint.Net APIs. I don't know how easy it would be to extract their image processing algorithms for use in your project, but I believe they do some of the things you are looking for. Plus it is an open source project with an MIT License, so it should be pretty friendly for business use.
A: Research about KOFAX VRS at KOFAX.com A: Maybe JMagick, it is an open source Java interface of ImageMagick. It is implemented in the form of a thin Java Native Interface (JNI) layer into the ImageMagick API. It's licensed under the LGPL so it shouldn't be a problem license wise. http://sourceforge.net/projects/jmagick/ A: I would suggest Intel for its zero-cost runtime licensing. A: Depends on the number and quality of the original images. Managed code and imaging tool kits will work but it's not always the best solution if you haved several million images to process. For small batches and tight budgets, I agree with the previous posters that projects like Aforge, Paint.NET, and other open source computer vision libraries will do the trick. Of course, you are on your own if the results are not improving... At least this let's you put everything you need under one application for a low cost. If you are processing several hundred thousand images a month, then I would suggest you divide up the process into smaller workflow step and tweak each one until your cost per image gets as close to zero as you can. You will find that the OCR results rise quickly at first and then level off sooner than you expected. (I'm not a big fan of OCR but it has its place) I use commercial Windows product from Recogniform to process and clean up the images prior to OCR in a batch mode using scripts adjusted for various kinds of images. If an image fails QC or is rejected by the OCR engine, it is "repaired" by hand using a custom .NET application built with Atalasoft's toolkit. Batch process everything and only touch what fails.
{ "language": "en", "url": "https://stackoverflow.com/questions/142743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to get CreateProcess/CreateProcessW to execute a process in a path > MAX_PATH characters I'm trying to get either CreateProcess or CreateProcessW to execute a process with a name < MAX_PATH characters but in a path that's greater than MAX_PATH characters. According to the docs at: http://msdn.microsoft.com/en-us/library/ms682425.aspx, I need to make sure lpApplicationName isn't NULL and then lpCommandLine can be up to 32,768 characters. I tried that, but I get ERROR_PATH_NOT_FOUND. I changed to CreateProcessW, but still get the same error. When I prefix lpApplicationName with \\?\ as described in http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx when calling CreateProcessW I get a different error that makes me think I'm a bit closer: ERROR_SXS_CANT_GEN_ACTCTX. My call to CreateProcessW is:
CreateProcessW(w_argv0, arg_string, NULL, NULL, 0, 0, NULL, NULL, &si, &pi);
where w_argv0 is \\?\<long absolute path>\foo.exe. arg_string contains "<long absolute path>\foo.exe" foo
si is set as follows:
memset(&si, 0, sizeof(si));
si.cb = sizeof(si);
si.dwFlags = STARTF_USESHOWWINDOW;
si.wShowWindow = SW_HIDE;
and pi is empty, as in:
memset(&pi, 0, sizeof(pi));
I looked in the system event log and there's a new entry each time I try this with event id 59, source SideBySide: Generate Activation Context failed for .Manifest. Reference error message: The operation completed successfully. The file I'm trying to execute runs fine in a path < MAX_PATH characters. To clarify, no one component of <long absolute path> is greater than MAX_PATH characters. The name of the executable itself certainly isn't, even with .manifest on the end. But, the entire path together is greater than MAX_PATH characters long. I get the same error whether I embed its manifest or not. The manifest is named foo.exe.manifest and lives in the same directory as the executable when it's not embedded.
It contains: <?xml version='1.0' encoding='UTF-8' standalone='yes'?> <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.DebugCRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> Anyone know how to get this to work? Possibly: * *some other way to call CreateProcess or CreateProcessW to execute a process in a path > MAX_PATH characters *something I can do in the manifest file I'm building with Visual Studio 2005 on XP SP2 and running native. Thanks for your help. A: Embedding the manifest and using GetShortPathNameW did it for me. One or the other on their own wasn't enough. Before calling CreateProcessW using the \\?-prefixed absolute path name of the process to execute as the first argument, I check: wchar_t *w_argv0; wchar_t *w_short_argv0; ... if (wcslen(w_argv0) >= MAX_PATH) { num_chars = GetShortPathNameW(w_argv0,NULL,0); if (num_chars == 0) { syslog(LOG_ERR,"GetShortPathName(%S) to get size failed (%d)", w_argv0,GetLastError()); /* ** Might as well keep going and try with the long name */ } else { w_short_argv0 = malloc(num_chars * sizeof(wchar_t)); memset(w_short_argv0,0,num_chars * sizeof(wchar_t)); if (GetShortPathNameW(w_argv0,w_short_argv0,num_chars) == 0) { syslog(LOG_ERR,"GetShortPathName(%S) failed (%d)",w_argv0, GetLastError()); free(w_short_argv0); w_short_argv0 = NULL; } else { syslog(LOG_DEBUG,"using short name %S for %S",w_short_argv0, w_argv0); } } } and then call CreateProcessW(w_short_argv0 ? w_short_argv0 : w_argv0...); remembering to free(w_short_argv0); afterwards. This may not solve every case, but it lets me spawn more child processes than I could before. A: I don't see any reference in the CreateProcess documentation saying that the '\\?\' syntax is valid for the module name. 
The page on "Naming a File or Directory" also does not state that CreateProcess supports it, while functions such as CreateFile link to the "Naming a File" page. I do see that you can't use a module name longer than MAX_PATH in lpCommandLine, which suggests that CreateProcess does not support extra long filenames. The error message also suggests that an error is occurring while attempting to append ".manifest" to your application path (that is, the length is now exceeding MAX_PATH). GetShortPathName() may be of some help here, though it does not guarantee to return a name less than MAX_PATH (it does explicitly state that '\\?\' syntax is valid, though). Otherwise, you could try adjusting the PATH environment variable and passing it to CreateProcess() in lpEnvironment. Or you could use SetCurrentDirectory() and pass only the executable name.
{ "language": "en", "url": "https://stackoverflow.com/questions/142750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Automatically set list item permission, after new item is created We have a SharePoint Team Site (WSS 3.0 not MOSS), that includes a Tasks list to record every task related to a project. Here's the scenario. Users:
*Supervisor1
*TeamMember1
*TeamMember2
*TeamMember3
How do we set the permission settings so that
*Every user (supervisor and team members) can see any task.
*Supervisors can edit any task
*Team members can only edit their own tasks (tasks that were assigned to them, or created by them)
I was unable to achieve the intended results using standard WSS permission settings, without resorting to manual permission settings on each item in the list. I'm imagining that the automatic solution has to be accomplished using some sort of workflow or trigger. A: You do not need any workflow or event handlers (you can still use them for your purpose, but they will slow down performance if you have a lot of items). Go to Settings --> List Settings and click on Advanced Settings. Under Item-level Permissions, in Read access select "All items", and in the same place in Edit access select "Only their own". In the permissions, give list members a Contributor role. For the supervisor you can give a higher permission level (I think Designer will work), or simply give him Full Control on the list. A: You can set permissions by going to your List, click the Settings dropdown. Under Permissions and Management, click "Permissions for this List". Click Actions and select Edit Permissions. Select the User/Group you want the permission to be changed for, then click Actions & select Edit User Permissions. HTH! A: Create a class that inherits from SPItemEventReceiver and override the ItemAdded method, setting your custom permissions in the overridden method using the API.
http://blogs.msdn.com/brianwilson/archive/2007/03/05/part-1-event-handlers-everything-you-need-to-know-about-microsoft-office-sharepoint-portal-server-moss-event-handlers.aspx A: Yes, you would have to write an event handler or workflow that will run upon item creation, which would look at these column values and set the item-level permissions accordingly. A: I recommend checking this solution: SharePoint Column/View Permission by SharePointBoost ($199). Through this you can set read-only permission for the people you want on all the items. Your requirement "Every user (supervisor and team members) can see any task" is solved! Also you can set edit permission for Supervisors. Second trouble solved! As Ali said, advanced permissions > item-level permissions can fulfill your last requirement. A: It seems that you need a workflow to automatically assign permissions based on the user roles or [Assign To] fields. Try the third-party tool Permission Workflow; it may help you solve the issues.
{ "language": "en", "url": "https://stackoverflow.com/questions/142756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a DataContext in LINQ-to-Entities (NOT Linq-to-SQL)? I recently asked a question about tracing Linq-to-Entities I think that one of the answers was not right, as they refer to using the DataContext. Is there a DataContext for LINQ-to-Entities? If so, how do I get it? A: Apparently, LinqToEntities uses an ObjectContext instead of DataContext. It is hilarious that the object team made a DataContext and the data team made an ObjectContext (and on to DataQuery vs ObjectQuery, etc.) "Naming is Hard!" Update, for .net 4 with EF4.1, you might also be interested in DbContext when working with LinqToEntities. See also. A: LINQ to Entities uses ObjectContext, not DataContext. Here is a short description of EF: LINQ to Entities, the ObjectContext Class, and the Entity Data Model LINQ to Entities queries use the Object Services infrastructure. The ObjectContext class is the primary class for interacting with an EDM as CLR objects. The developer constructs an ObjectQuery instance through the ObjectContext. The generic ObjectQuery class represents a query that returns an instance or collection of typed entities. Entity objects returned by ObjectQuery are tracked by the Object Context and can be updated by using the SaveChanges method. It doesn't even work the same way as the DataContext in LINQ to SQL. While it is true that they both manage the connection and track changes, yet they differ in how they model the data structures and relationships. I would give the poster of that wrong answer some slack, though, because LINQ to SQL does make reference to "entities", and someone not familiar with EF could very well still be thinking they know what you are talking about. For example: LINQ to SQL and the DataContext Class The DataContext is the source of all entities mapped over a database connection. 
It tracks changes that you made to all retrieved entities and maintains an "identity cache" that guarantees that entities retrieved more than one time are represented by using the same object instance. It can be confusing. A: I think you might be referring to the ADO.NET Entity Data Model (.edmx file - comparable to a .dbml file). In VS it is seen in Add Item->ADO.NET Entity Data Model A: There are a lot of these arbitrary syntax differences. E.g. SubmitChanges (L2S) and SaveChanges (L2E). However, that would be just the tip of the differences between the two technologies.
{ "language": "en", "url": "https://stackoverflow.com/questions/142762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I upgrade python 2.5.2 to python 2.6rc2 on ubuntu linux 8.04? I'd like to know how to upgrade the default Python installation (2.5.2) supplied with Ubuntu 8.04 to Python 2.6rc2. I'd like to make 2.6 the default Python version on the system and migrate all the other useful Python libraries installed on 2.5.2 to Python 2.6rc2. Please let me know how I can achieve this. Thanks Dirk A: I have the same issue, and apparently pre-built binaries can be found here:
# Python 2.6
deb http://ppa.launchpad.net/doko/ubuntu intrepid main
deb-src http://ppa.launchpad.net/doko/ubuntu intrepid main
Download the package, and unzip it to a directory run the following commands (waiting for each to finish as you do so) ./configure make sudo make install There, you have it installed. It's better to wait for it to be packaged first, espescially as Python is used in a lot of ubuntu internals, so may break your system horribly A: It would not be wise to change the default version of Python, i.e. what you get when you type "python" into a shell. However, you can have multiple versions of python installed. The trick is to make sure that the program named "python" on the path is the system supplied version. If you want to run your install of Python 2.6 you'd then type python2.6 into a shell to start it. Download the package and unzip it, then run: ./configure make sudo make install ls -l /usr/local/bin You should see a python and a python2.6 file, both created on the day you ran make install; delete the python file. Then when python is launched the standard system Python version from /usr/bin will be run, and when python2.6 is run you get your shiny new python 2.6rc2. Python displays the version when it starts an interactive interpreter.
{ "language": "en", "url": "https://stackoverflow.com/questions/142764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the difference between the Project and SVN workingDirectory config Blocks in CruiseControl.NET What is the difference between the Project and SVN workingDirectory Config Blocks in CruiseControl.NET? I setup Subversion and now I'm working on CruiseControl.NET and noticed there are two workingDirectory blocks in the config files. I've looked through their google groups and documentation and maybe I missed something but I did not see a clear example of how they are used during the build process. The partial config below is taken from their Project file example page http://confluence.public.thoughtworks.org/display/CCNET/Configuring+the+Server <cruisecontrol> <queue name="Q1" duplicates="ApplyForceBuildsReplace"/> <project name="MyProject" queue="Q1" queuePriority="1"> <webURL>http://mybuildserver/ccnet/</webURL> <workingDirectory>C:\Integration\MyProject\WorkingDirectory</workingDirectory> <artifactDirectory>C:\Integration\MyProject\Artifacts</artifactDirectory> <modificationDelaySeconds>10</modificationDelaySeconds> <triggers> <intervalTrigger seconds="60" name="continuous" /> </triggers> <sourcecontrol type="cvs"> <executable>c:\putty\cvswithplinkrsh.bat</executable> <workingDirectory>c:\fromcvs\myrepo</workingDirectory> <cvsroot>:ext:mycvsserver:/cvsroot/myrepo</cvsroot> </sourcecontrol> </project> </cruisecontrol> A: I think the project working directory is used as the root folder for all commands in the CruiseControl block. So if I have a Nant task/script with relative folders, it will be appended to this root folder for actual execution. The Working Directory for the project (this is used by other blocks). Relative paths are relative to a directory called the project Name in the directory where the CruiseControl.NET server was launched from. The Working Directory is meant to contain the checked out version of the project under integration. The SourceControl working directory is where your SVN or CVS will check out files, when invoked. 
So this would be the 'Src' subdirectory under your project folder for instance. The folder that the source has been checked out into. Quoted Sources: * *http://confluence.public.thoughtworks.org/display/CCNET/Project+Configuration+Block *http://confluence.public.thoughtworks.org/display/CCNET/CVS+Source+Control+Block A: See Project Configuration Block and Subversion Source Control Block. The Project working directory is for the project as a whole, The Source Control working directory designates where the source will be checked out to. This may be different (if so, likely a sub-directory) of your project working directory.
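Since the question asks about Subversion while the sample config uses CVS, here is a sketch of how the two working directories might relate in an svn-based project. Paths and the repository URL are hypothetical:

```xml
<project name="MyProject">
  <!-- Project working directory: the root for the build as a whole -->
  <workingDirectory>C:\Integration\MyProject\WorkingDirectory</workingDirectory>
  <sourcecontrol type="svn">
    <trunkUrl>svn://mysvnserver/myrepo/trunk</trunkUrl>
    <!-- Source control working directory: where svn checks the source
         out to, typically a subdirectory of the project working
         directory -->
    <workingDirectory>C:\Integration\MyProject\WorkingDirectory\Src</workingDirectory>
  </sourcecontrol>
</project>
```

Build tasks in the project then run relative to the project working directory, while the svn block checks out into its own subdirectory.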
{ "language": "en", "url": "https://stackoverflow.com/questions/142772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Visual Studio 2008 Express MFC Support As may be known by many, the Express versions of Visual Studio 2008 don't include support for MFC and a few other packages required to compile a lot of windows programs. So, here's my problem: I have the full version of Visual Studio 2005. I've been using this to compile a project that a friend of mine was working on, so that I could test it out for him and continue to track bugs and things. Recently, he upgraded that project to VS 2008, which I don't have. So, I downloaded the express version in the hopes that I could simply compile with that, but no luck, it complains about headers missing left and right. It seems to me that since I already have the full version of VS 2005, I'm bound to have at least some (perhaps older) version of the files in question that his project needs to compile against. Is there a way I can convince VS 2008 to also look in 2005's directories for include files and library files to compile against? Furthermore, is this a bad idea? I would really prefer not to go out and purchase VS 2008 full, as I'll never use it myself. (2005 does the job fine for me at the moment, and I tend to prefer GCC anyway.) Thanks A: You can use the VC++ compiler directly from the command line, or just create a new project w/ the source in Visual Studio 2005. Unless he is using some functionality provided in the new versions of MFC/ATL in 2008/2008sp1, you should be able to compile the project just fine. See ("Create Project from Existing Source") in Visual Studio 2005. It is unfortunate that they don't include these libraries with the Express Editions. A: Use the vcvars*.bat script(s) from Visual Studio 2005. See this blogpost from VC++ Blog to see how. You will use the old compilers, but the build system from Visual Studio 2008. A: You can go into Tools>Options>Projects and Solutions>VC++ Directories and alter the Include, Library, and Source (and Reference maybe?) folders to use VC++ 2005's folders. 
I'd guess you just replace $(VCInstallDir) with a hardcoded VS 2005 path. I'd record the original values before doing this. However, have you just tried using the OLD 2005 sln and vcproj files? Keep using 2005 on your end and 2008 on his. Keep two sets of these files for each IDE. Any issues are going to be with the library mismatch - which you're not avoiding by using 2008's tools with 2005's libraries. A: The simple way to deal with this would be to revert the solution and project files back to their visual studio 2005 state from source control(you are using source control right?). At this point you can compile the project as long as your friend does not use any of the mfc 9 only functions. A: The first thing I would try is loading this up in VS 2005 by just modifying the version of the .sln and the .vcproj files. In the .vcproj change the version from 9.00 to 8.00 and in the .sln change the format version from 10.00 to 9.00. If you don't have fancy stuff in the project you have a high chance of just being able to use it like this. Also this would avoid having to update 2 project and solution files. A: On this website it is shown how MFC code can be compiled with the Visual Studio Express versions: link A: Just for the record, I've done that(by modifying the include directories and library directories from inside the IDE) and it's working pretty well, I have MFC, ATL, everything. A: I've found this explanation. http://www.codeproject.com/Articles/30439/How-to-compile-MFC-code-in-Visual-C-Express
{ "language": "en", "url": "https://stackoverflow.com/questions/142781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is a "callback" in C and how are they implemented? From the reading that I have done, Core Audio relies heavily on callbacks (and C++, but that's another story). I understand the concept (sort of) of setting up a function that is called by another function repeatedly to accomplish a task. I just don't understand how they get set up and how they actually work. Any examples would be appreciated. A: Callbacks in C are usually implemented using function pointers and an associated data pointer. You pass your function on_event() and data pointers to a framework function watch_events() (for example). When an event happens, your function is called with your data and some event-specific data. Callbacks are also used in GUI programming. The GTK+ tutorial has a nice section on the theory of signals and callbacks. A: A simple callback program. Hope it answers your question.
#include <stdio.h>

typedef void (*call_back) (int, int);

void test_call_back(int a, int b)
{
    printf("In call back function, a:%d \t b:%d \n", a, b);
}

void call_callback_func(call_back back)
{
    int a = 5;
    int b = 7;

    back(a, b);
}

int main(int argc, char *argv[])
{
    call_back back;

    back = test_call_back;
    call_callback_func(back);

    return 0;
}
A: There is no "callback" in C - not more than any other generic programming concept. They're implemented using function pointers. Here's an example:
void populate_array(int *array, size_t arraySize, int (*getNextValue)(void))
{
    for (size_t i=0; i<arraySize; i++)
        array[i] = getNextValue();
}

int getNextRandomValue(void)
{
    return rand();
}

int main(void)
{
    int myarray[10];
    populate_array(myarray, 10, getNextRandomValue);
    ...
}
Here, the populate_array function takes a function pointer as its third parameter, and calls it to get the values to populate the array with.
We've written the callback getNextRandomValue, which returns a random-ish value, and passed a pointer to it to populate_array. populate_array will call our callback function 10 times and assign the returned values to the elements in the given array. A: This wikipedia article has an example in C. A good example is that new modules written to augment the Apache Web server register with the main apache process by passing them function pointers so those functions are called back to process web page requests. A: It is a lot easier to understand an idea through an example. What has been said about callback functions in C so far are great answers, but probably the biggest benefit of using the feature is to keep the code clean and uncluttered. Example The following C code implements quick sorting. The most interesting line in the code below is this one, where we can see the callback function in action:
qsort(arr,N,sizeof(int),compare_s2b);
compare_s2b is the name of the comparison function that qsort() calls. This keeps qsort() uncluttered (hence easier to maintain). You just call a function by name from inside another function (of course, the function prototype declaration, at the least, must precede it before it can be called from another function).
The Complete Code #include <stdio.h> #include <stdlib.h> int arr[]={56,90,45,1234,12,3,7,18}; //function prototype declaration int compare_s2b(const void *a,const void *b); int compare_b2s(const void *a,const void *b); //arranges the array number from the smallest to the biggest int compare_s2b(const void* a, const void* b) { const int* p=(const int*)a; const int* q=(const int*)b; return *p-*q; } //arranges the array number from the biggest to the smallest int compare_b2s(const void* a, const void* b) { const int* p=(const int*)a; const int* q=(const int*)b; return *q-*p; } int main() { printf("Before sorting\n\n"); int N=sizeof(arr)/sizeof(int); for(int i=0;i<N;i++) { printf("%d\t",arr[i]); } printf("\n"); qsort(arr,N,sizeof(int),compare_s2b); printf("\nSorted small to big\n\n"); for(int j=0;j<N;j++) { printf("%d\t",arr[j]); } qsort(arr,N,sizeof(int),compare_b2s); printf("\nSorted big to small\n\n"); for(int j=0;j<N;j++) { printf("%d\t",arr[j]); } exit(0); } A: Here is an example of callbacks in C. Let's say you want to write some code that allows registering callbacks to be called when some event occurs. First define the type of function used for the callback: typedef void (*event_cb_t)(const struct event *evt, void *userdata); Now, define a function that is used to register a callback: int event_cb_register(event_cb_t cb, void *userdata); This is what code would look like that registers a callback: static void my_event_cb(const struct event *evt, void *data) { /* do stuff and things with the event */ } ... event_cb_register(my_event_cb, &my_custom_data); ... In the internals of the event dispatcher, the callback may be stored in a struct that looks something like this: struct event_cb { event_cb_t cb; void *data; }; This is what the code looks like that executes a callback. struct event_cb *callback; ... 
/* Get the event_cb that you want to execute */ callback->cb(event, callback->data); A: A callback function in C is the equivalent of a function parameter / variable assigned to be used within another function.Wiki Example In the code below, #include <stdio.h> #include <stdlib.h> /* The calling function takes a single callback as a parameter. */ void PrintTwoNumbers(int (*numberSource)(void)) { printf("%d and %d\n", numberSource(), numberSource()); } /* A possible callback */ int overNineThousand(void) { return (rand() % 1000) + 9001; } /* Another possible callback. */ int meaningOfLife(void) { return 42; } /* Here we call PrintTwoNumbers() with three different callbacks. */ int main(void) { PrintTwoNumbers(&rand); PrintTwoNumbers(&overNineThousand); PrintTwoNumbers(&meaningOfLife); return 0; } The function (*numberSource) inside the function call PrintTwoNumbers is a function to "call back" / execute from inside PrintTwoNumbers as dictated by the code as it runs. So if you had something like a pthread function you could assign another function to run inside the loop from its instantiation. A: A callback in C is a function that is provided to another function to "call back to" at some point when the other function is doing its task. There are two ways that a callback is used: synchronous callback and asynchronous callback. A synchronous callback is provided to another function which is going to do some task and then return to the caller with the task completed. An asynchronous callback is provided to another function which is going to start a task and then return to the caller with the task possibly not completed. Synchronous callback A synchronous callback is typically used to provide a delegate to another function to which the other function delegates some step of the task. Classic examples of this delegation are the functions bsearch() and qsort() from the C Standard Library. 
Both of these functions take a callback which is used during the task the function is performing, so that the type of the data being searched, in the case of bsearch(), or sorted, in the case of qsort(), does not need to be known by the function being used. For example, here is a small sample program with bsearch() using different comparison functions, demonstrating synchronous callbacks. By allowing us to delegate the data comparison to a callback function, the bsearch() function allows us to decide at run time what kind of comparison we want to use. This is synchronous because when the bsearch() function returns, the task is complete. Note that bsearch() and qsort() expect comparators that take const void * arguments, so each comparator converts those pointers back to the element type internally.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    int iValue;
    int kValue;
    char label[6];
} MyData;

int cmpMyData_iValue (const void *a, const void *b)
{
    const MyData *item1 = a;
    const MyData *item2 = b;
    if (item1->iValue < item2->iValue) return -1;
    if (item1->iValue > item2->iValue) return 1;
    return 0;
}

int cmpMyData_kValue (const void *a, const void *b)
{
    const MyData *item1 = a;
    const MyData *item2 = b;
    if (item1->kValue < item2->kValue) return -1;
    if (item1->kValue > item2->kValue) return 1;
    return 0;
}

int cmpMyData_label (const void *a, const void *b)
{
    const MyData *item1 = a;
    const MyData *item2 = b;
    return strcmp (item1->label, item2->label);
}

void bsearch_results (MyData *srch, MyData *found)
{
    if (found) {
        printf ("found - iValue = %d, kValue = %d, label = %s\n", found->iValue, found->kValue, found->label);
    } else {
        printf ("item not found, iValue = %d, kValue = %d, label = %s\n", srch->iValue, srch->kValue, srch->label);
    }
}

int main ()
{
    MyData dataList[256] = {0};
    {
        int i;
        for (i = 0; i < 20; i++) {
            dataList[i].iValue = i + 100;
            dataList[i].kValue = i + 1000;
            sprintf (dataList[i].label, "%2.2d", i + 10);
        }
    }
    // ...
some code then we do a search { MyData srchItem = { 105, 1018, "13"}; MyData *foundItem = bsearch (&srchItem, dataList, 20, sizeof(MyData), cmpMyData_iValue ); bsearch_results (&srchItem, foundItem); foundItem = bsearch (&srchItem, dataList, 20, sizeof(MyData), cmpMyData_kValue ); bsearch_results (&srchItem, foundItem); foundItem = bsearch (&srchItem, dataList, 20, sizeof(MyData), cmpMyData_label ); bsearch_results (&srchItem, foundItem); } } Asynchronous callback An asynchronous callback is different in that when the called function to which we provide a callback returns, the task may not be completed. This type of callback is often used with asynchronous I/O in which an I/O operation is started and then when it is completed, the callback is invoked. In the following program we create a socket to listen for TCP connection requests and when a request is received, the function doing the listening then invokes the callback function provided. This simple application can be exercised by running it in one window while using the telnet utility or a web browser to attempt to connect in another window. I lifted most of the WinSock code from the example Microsoft provides with the accept() function at https://msdn.microsoft.com/en-us/library/windows/desktop/ms737526(v=vs.85).aspx This application starts a listen() on the local host, 127.0.0.1, using port 8282 so you could use either telnet 127.0.0.1 8282 or http://127.0.0.1:8282/. This sample application was created as a console application with Visual Studio 2017 Community Edition and it is using the Microsoft WinSock version of sockets. For a Linux application the WinSock functions would need to be replaced with the Linux alternatives and the Windows threads library would use pthreads instead. 
#include <stdio.h> #include <winsock2.h> #include <stdlib.h> #include <string.h> #include <Windows.h> // Need to link with Ws2_32.lib #pragma comment(lib, "Ws2_32.lib") // function for the thread we are going to start up with _beginthreadex(). // this function/thread will create a listen server waiting for a TCP // connection request to come into the designated port. // _stdcall modifier required by _beginthreadex(). int _stdcall ioThread(void (*pOutput)()) { //---------------------- // Initialize Winsock. WSADATA wsaData; int iResult = WSAStartup(MAKEWORD(2, 2), &wsaData); if (iResult != NO_ERROR) { printf("WSAStartup failed with error: %ld\n", iResult); return 1; } //---------------------- // Create a SOCKET for listening for // incoming connection requests. SOCKET ListenSocket; ListenSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); if (ListenSocket == INVALID_SOCKET) { wprintf(L"socket failed with error: %ld\n", WSAGetLastError()); WSACleanup(); return 1; } //---------------------- // The sockaddr_in structure specifies the address family, // IP address, and port for the socket that is being bound. struct sockaddr_in service; service.sin_family = AF_INET; service.sin_addr.s_addr = inet_addr("127.0.0.1"); service.sin_port = htons(8282); if (bind(ListenSocket, (SOCKADDR *)& service, sizeof(service)) == SOCKET_ERROR) { printf("bind failed with error: %ld\n", WSAGetLastError()); closesocket(ListenSocket); WSACleanup(); return 1; } //---------------------- // Listen for incoming connection requests. // on the created socket if (listen(ListenSocket, 1) == SOCKET_ERROR) { printf("listen failed with error: %ld\n", WSAGetLastError()); closesocket(ListenSocket); WSACleanup(); return 1; } //---------------------- // Create a SOCKET for accepting incoming requests. SOCKET AcceptSocket; printf("Waiting for client to connect...\n"); //---------------------- // Accept the connection. 
AcceptSocket = accept(ListenSocket, NULL, NULL); if (AcceptSocket == INVALID_SOCKET) { printf("accept failed with error: %ld\n", WSAGetLastError()); closesocket(ListenSocket); WSACleanup(); return 1; } else pOutput (); // we have a connection request so do the callback // No longer need server socket closesocket(ListenSocket); WSACleanup(); return 0; } // our callback which is invoked whenever a connection is made. void printOut(void) { printf("connection received.\n"); } #include <process.h> int main() { // start up our listen server and provide a callback _beginthreadex(NULL, 0, ioThread, printOut, 0, NULL); // do other things while waiting for a connection. In this case // just sleep for a while. Sleep(30000); } A: Usually this can be done by using a function pointer, that is a special variable that points to the memory location of a function. You can then use this to call the function with specific arguments. So there will probably be a function that sets the callback function. This will accept a function pointer and then store that address somewhere where it can be used. After that when the specified event is triggered, it will call that function.
{ "language": "en", "url": "https://stackoverflow.com/questions/142789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "178" }
Q: Network Multithreading I'm programming an online game for two reasons: one, to familiarize myself with server/client requests in a realtime environment (as opposed to something like a typical web browser, which is not realtime), and two, to actually get my hands wet in that area, so I can proceed to actually properly design one. Anywho, I'm doing this in C++, and I've been using winsock to handle my basic, basic network tests. I obviously want to use a framelimiter and have 3D going and all of that at some point, and my main issue is that when I do a send() or recv(), the program kindly idles there and waits for a response. That would lead to maybe 8 fps on even the best internet connection. So the obvious solution to me is to take the networking code out of the main process and start it up in its own thread. Ideally, I would call a "send" in my main process which would pass the networking thread a pointer to the message, and then periodically (every frame) check to see if the networking thread had received the reply, or timed out, or what have you. In a perfect world, I would actually have 2 or more networking threads running simultaneously, so that I could say run a chat window and do a background download of a piece of armor and still allow the player to run around all at once. The bulk of my problem is that this is a new thing to me. I understand the concept of threading, but I can see some serious issues, like what happens if two threads try to read/write the same memory address at the same time, etc. I know that there are already methods in place to handle this sort of thing, so I'm looking for suggestions on the best way to implement something like this. Basically, I need thread A to be able to start a process in thread B by sending a chunk of data, poll thread B's status, and then receive the reply, also as a chunk of data, ideally without any major crashing going on.
^_^ I'll worry about what that data actually contains and how to handle dropped packets, etc. later; I just need to get that happening first. Thanks for any help/advice. PS: Just thought about this, may make the question simpler. Is there a way to use the Windows event handling system to my advantage? Like, would it be possible to have thread A initialize data somewhere, then trigger an event in thread B to have it pick up the data, and vice versa for thread B to tell thread A it was done? That would probably solve a lot of my problems, since I don't really need both threads to be able to work on the data at the same time, more of a baton pass really. I just don't know if this is possible between two different threads. (I know one thread can create its own messages for the event handler.) A: The easiest thing for you to do would be to simply invoke the Windows API QueueUserWorkItem. All you have to specify is the function that the thread will execute and the input passed to it. A thread pool will be automatically created for you and the jobs executed in it. New threads will be created as and when required. http://msdn.microsoft.com/en-us/library/ms684957(VS.85).aspx More Control You could have more detailed control using another set of APIs which can again manage the thread pool for you - http://msdn.microsoft.com/en-us/library/ms686980(VS.85).aspx Do it yourself If you want to control all aspects of your thread creation and the pool management, you would have to create the threads yourself and decide how they should end, how many to create, etc. (_beginthreadex is the API you should be using to create threads; if you use MFC, you should use the AfxBeginThread function). Send jobs to worker threads - I/O completion ports In this case, you would also have to worry about how to communicate your jobs - I would recommend I/O completion ports to do that. It is the most scalable notification mechanism that I currently know of for this purpose.
It has the additional advantage that it is implemented in the kernel, so you avoid all kinds of deadlock situations you would encounter if you decide to hand-roll something yourself. This article will show you how with code samples - http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx Communicate Back - Windows Messages You could use Windows messages to communicate the status back to your parent thread since it is doing the message wait anyway. Use the PostMessage function to do this (and check for errors). PS: You could also allocate the data that needs to be sent out on a dedicated pointer, and then the worker thread could take care of deleting it after sending it out. That way you avoid the return pointer traffic too. A: BlodBath's suggestion of non-blocking sockets is potentially the right approach. If you're trying to avoid using a multithreaded approach, then you could investigate the use of setting up overlapped I/O on your sockets. They will not block when you do a transmit or receive, but have the added bonus of giving you the option of waiting for multiple events within your single event loop. When your transmit has finished, you will receive an event. (see this for some details) This is not incompatible with a multithreaded approach, so there's the option of changing your mind later. ;-) As for the design of your multithreaded app, the best thing to do is to work out all of the external activities that you want to be alerted to. For example, so far in your question you've listed network transmits, network receives, and user activity. Depending on the number of concurrent connections you're going to be dealing with, you'll probably find it conceptually simpler to have a thread per socket (assuming small numbers of sockets), where each thread is responsible for all of the processing for that socket. Then you can implement some form of messaging system between your threads as RC suggested.
Arrange your system so that when a message is sent to a particular thread, an event is also signalled. Your threads can then sleep waiting for one of those events (as well as any other stimulus - like socket events, user events, etc.). You're quite right that you need to be careful of situations where more than one thread is trying to access the same piece of memory. Mutexes and semaphores are the things to use there. Also be aware of the limitations that your GUI has when it comes to multithreading. Some discussion on the subject can be found in this question. But the abbreviated version is that most (and Windows is one of these) GUIs don't allow multiple threads to perform GUI operations simultaneously. To get around this problem you can make use of the message pump in your application, by sending custom messages to your GUI thread to get it to perform GUI operations. A: I suggest looking into non-blocking sockets for the quick fix. Using non-blocking sockets, send() and recv() do not block, and using the select() function you can get any waiting data every frame. A: See it as a producer-consumer problem: when receiving, your network communication thread is the producer whereas the UI thread is the consumer. When sending, it's just the opposite. Implement a simple buffer class which gives you methods like push and pop (pop should be blocking for the network thread and non-blocking for the UI thread). Rather than using the Windows event system, I would prefer something that is more portable, for example Boost condition variables. A: I don't code games, but I've used a system similar to what pukku suggested. It lends itself nicely to doing things like having the buffer prioritize your messages to be processed if you have such a need. I think of them as mailboxes per thread. You want to send a packet? Have the ProcessThread create a "thread message" with the payload to go on the wire and "send" it to the NetworkThread (i.e.
push it on the NetworkThread's queue/mailbox and signal the condition variable of the NetworkThread so he'll wake up and pull it off). When the NetworkThread receives the response, package it up in a thread message and send it back to the ProcessThread in the same manner. Difference is the ProcessThread won't be blocked on a condition variable, just polling on mailbox.empty( ) when you want to check for the response. You may want to push and pop directly, but a more convenient way for larger projects is to implement a toThreadName, fromThreadName scheme in a ThreadMsg base class, and a Post Office that threads register their Mailbox with. The PostOffice then has a send(ThreadMsg*); function that gets/pushes the messages to the appropriate Mailbox based on the to and from. Mailbox (the buffer/queue class) contains the ThreadMsg* = receiveMessage(), basically popping it off the underlying queue. Depending on your needs, you could have ThreadMsg contain a virtual function process(..) that could be overridden accordingly in derived classes, or just have an ordinary ThreadMessage class with a to, from members and a getPayload( ) function to get back the raw data and deal with it directly in the ProcessThread. Hope this helps. A: Some topics you might be interested in: * *mutex: A mutex allows you to lock access to specific resources for one thread only *semaphore: A way to determine how many users a certain resource still has (=how many threads are accessing it) and a way for threads to access a resource. A mutex is a special case of a semaphore. *critical section: a mutex-protected piece of code (street with only one lane) that can only be travelled by one thread at a time. 
*message queue: a way of distributing messages in a centralized queue *inter-process communication (IPC) - a way for threads and processes to communicate with each other through named pipes, shared memory and many other means (it's more of a concept than a specific technique) All topics in bold print can be easily looked up on a search engine.
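A minimal illustration of two of these primitives working together - a mutex guarding a shared counter, and a message queue used for a "done" handoff - sketched in Python, whose standard library maps one-to-one onto the concepts (threading.Lock for the mutex, queue.Queue for the message queue):

```python
import threading
import queue

# Shared state guarded by a mutex, plus a queue the workers report into.
counter = 0
lock = threading.Lock()
mailbox = queue.Queue()

def worker():
    global counter
    for _ in range(10000):
        with lock:              # critical section: one thread at a time
            counter += 1
    mailbox.put("done")         # message-queue handoff to the main thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for _ in threads:
    mailbox.get()               # block until each worker reports in
for t in threads:
    t.join()
print(counter)  # 40000 -- no lost updates, thanks to the lock
```

Without the lock, the read-modify-write on counter could interleave between threads and lose updates; with it, the final count is deterministic.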
{ "language": "en", "url": "https://stackoverflow.com/questions/142804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Does Python have a bitfield type? I need a compact representation of an array of booleans. Does Python have a built-in bitfield type, or will I need to find an alternate solution? A: Represent each of your values as a power of two:

testA = 2**0
testB = 2**1
testC = 2**2

Then to set a value true:

table = table | testB

To set a value false:

table = table & (~testC)

To test for a value:

bitfield_length = 0xff
if ((table & testB & bitfield_length) != 0):
    print("Field B set")

Dig a little deeper into hexadecimal representation if this doesn't make sense to you. This is basically how you keep track of your boolean flags in an embedded C application as well (if you have limited memory). A: I use the binary bit-wise operators ~, &, |, ^, >>, and <<. They work really well and are implemented directly in the underlying C, which is usually directly on the underlying hardware. A: The BitVector package may be what you need. It's not built in to my python installation, but easy to track down on the python site. https://pypi.python.org/pypi/BitVector for the current version. A: If you mainly want to be able to name your bit fields and easily manipulate them, e.g. to work with flags represented as single bits in a communications protocol, then you can use the standard Structure and Union features of ctypes, as described at How Do I Properly Declare a ctype Structure + Union in Python? - Stack Overflow For example, to work with the 4 least-significant bits of a byte individually, just name them from least to most significant in a LittleEndianStructure. You use a union to provide access to the same data as a byte or int so you can move the data in or out of the communication protocol.
In this case that is done via the flags.asbyte field:

import ctypes
c_uint8 = ctypes.c_uint8

class Flags_bits(ctypes.LittleEndianStructure):
    _fields_ = [
        ("logout", c_uint8, 1),
        ("userswitch", c_uint8, 1),
        ("suspend", c_uint8, 1),
        ("idle", c_uint8, 1),
    ]

class Flags(ctypes.Union):
    _fields_ = [("b", Flags_bits), ("asbyte", c_uint8)]

flags = Flags()
flags.asbyte = 0xc
print(flags.b.idle)
print(flags.b.suspend)
print(flags.b.userswitch)
print(flags.b.logout)

The four bits (which I've printed here starting with the most significant, which seems more natural when printing) are 1, 1, 0, 0, i.e. 0xc in binary. A: NumPy has an array interface module that you can use to make a bitfield. A: Bitarray was the best answer I found, when I recently had a similar need. It's a C extension (so much faster than BitVector, which is pure python) and stores its data in an actual bitfield (so it's eight times more memory efficient than a numpy boolean array, which appears to use a byte per element.) A: If your bitfield is short, you can probably use the struct module. Otherwise I'd recommend some sort of a wrapper around the array module. Also, the ctypes module does contain bitfields, but I've never used it myself. Caveat emptor. A: You should take a look at the bitstring module, which has recently reached version 2.0. The binary data is compactly stored as a byte array and can be easily created, modified and analysed. You can create BitString objects from binary, octal, hex, integers (big or little endian), strings, bytes, floats, files and more.

a = BitString('0xed44')
b = BitString('0b11010010')
c = BitString(int=100, length=14)
d = BitString('uintle:16=55, 0b110, 0o34')
e = BitString(bytes='hello')
f = pack('<2H, bin:3', 5, 17, '001')

You can then analyse and modify them with simple functions or slice notation - no need to worry about bit masks etc.
a.prepend('0b110') if '0b11' in b: c.reverse() g = a.join([b, d, e]) g.replace('0b101', '0x3400ee1') if g[14]: del g[14:17] else: g[55:58] = 'uint:11=33, int:9=-1' There is also a concept of a bit position, so that you can treat it like a file or stream if that's useful to you. Properties are used to give different interpretations of the bit data. w = g.read(10).uint x, y, z = g.readlist('int:4, int:4, hex:32') if g.peek(8) == '0x00': g.pos += 10 Plus there's support for the standard bit-wise binary operators, packing, unpacking, endianness and more. The latest version is for Python 2.7 and 3.x, and although it's pure Python it is reasonably well optimised in terms of memory and speed. A: If you want to use ints (or long ints) to represent as arrays of bools (or as sets of integers), take a look at http://sourceforge.net/projects/pybitop/files/ It provides insert/extract of bitfields into long ints; finding the most-significant, or least-significant '1' bit; counting all the 1's; bit-reversal; stuff like that which is all possible in pure python but much faster in C. A: I needed a minimal, memory efficient bitfield with no external dependencies, here it is: import math class Bitfield: def __init__(self, size): self.bytes = bytearray(math.ceil(size / 8)) def __getitem__(self, idx): return self.bytes[idx // 8] >> (idx % 8) & 1 def __setitem__(self, idx, value): mask = 1 << (idx % 8) if value: self.bytes[idx // 8] |= mask else: self.bytes[idx // 8] &= ~mask Use: # if size is not a multiple of 8, actual size will be the next multiple of 8 bf = Bitfield(1000) bf[432] # 0 bf[432] = 1 bf[432] # 1 A: For mostly-consecutive bits there's the https://pypi.org/project/range_set/ module which is API compatible to Python's built-in set. As the name implies, it stores the bits as begin/end pairs. 
A: I had to deal with some control words / flags in a communication protocol, and my focus was that the editor gives me suggestions of the flag names and jumps to the definition of the flags with "F3". The code below satisfies these requirements (the solution with ctypes by @nealmcb unfortunately is not supported by the PyCharm indexer today). Suggestions welcome:

"""
The following bit-manipulation methods are written to take a tuple as input,
which is provided by the Bitfield class. The construct looks weird, however
the call to setBit() looks OK and the editor (PyCharm) suggests all possible
bit names. I did not find a more elegant solution that calls the setBit()
function and needs only one argument.
Example call: setBit( STW1.bm01NoOff2() )
"""
def setBit(TupleBitField_BitMask):
    # word = word | bit_mask
    TupleBitField_BitMask[0].word = TupleBitField_BitMask[0].word | TupleBitField_BitMask[1]

def isBit(TupleBitField_BitMask):
    # (word & bit_mask) != 0
    return (TupleBitField_BitMask[0].word & TupleBitField_BitMask[1]) != 0

def clrBit(TupleBitField_BitMask):
    # word = word & (~ BitMask)
    TupleBitField_BitMask[0].word = TupleBitField_BitMask[0].word & (~ TupleBitField_BitMask[1])

def toggleBit(TupleBitField_BitMask):
    # word = word ^ BitMask
    TupleBitField_BitMask[0].word = TupleBitField_BitMask[0].word ^ TupleBitField_BitMask[1]

"""
Create a Bitfield type for each control word of the application (e.g. 16-bit
length). Assign a name for each bit in order that the editor (e.g. PyCharm)
suggests the names from outside. The bits are defined as methods that return
the corresponding bit mask, so the bit masks are read-only and will not be
corrupted by chance. The return of each "bit" function is a tuple
(handle to bitfield, bit_mask), so they can be sent as arguments to the single
bit-manipulation functions above: isBit(), setBit(), clrBit(), toggleBit().
The complete word of the Bitfield is accessed from outside by xxx.word.
Examples: STW1 = STW1Type(0x1234) # instanciates and inits the bitfield STW1, STW1.word = 0x1234 setBit(STW1.bm00() ) # set the bit with the name bm00(), e.g. bm00 = bitmask 0x0001 print("STW1.word =", hex(STW1.word)) """ class STW1Type(): # assign names to the bit masks for each bit (these names will be suggested by PyCharm) # tip: copy the application's manual description here def __init__(self, word): # word = initial value, e.g. 0x0000 self.word = word # define all bits here and copy the description of each bit from the application manual. Then you can jump # to this explanation with "F3" # return the handle to the bitfield and the BitMask of the bit. def bm00NoOff1_MeansON(self): # 0001 0/1= ON (edge)(pulses can be enabled) # 0 = OFF1 (braking with ramp-function generator, then pulse suppression & ready for switching on) return self, 0x0001 def bm01NoOff2(self): # 0002 1 = No OFF2 (enable is possible) # 0 = OFF2 (immediate pulse suppression and switching on inhibited) return self, 0x0002 def bm02NoOff3(self): # 0004 1 = No OFF3 (enable possible) # 0 = OFF3 (braking with the OFF3 ramp p1135, then pulse suppression and switching on inhibited) return self, 0x0004 def bm03EnableOperation(self): # 0008 1 = Enable operation (pulses can be enabled) # 0 = Inhibit operation (suppress pulses) return self, 0x0008 def bm04RampGenEnable(self): # 0010 1 = Hochlaufgeber freigeben (the ramp-function generator can be enabled) # 0 = Inhibit ramp-function generator (set the ramp-function generator output to zero) return self, 0x0010 def b05RampGenContinue(self): # 0020 1 = Continue ramp-function generator # 0 = Freeze ramp-function generator (freeze the ramp-function generator output) return self, 0x0020 def b06RampGenEnable(self): # 0040 1 = Enable speed setpoint; Drehzahlsollwert freigeben # 0 = Inhibit setpoint; Drehzahlsollwert sperren (set the ramp-function generator input to zero) return self, 0x0040 def b07AcknowledgeFaults(self): # 0080 0/1= 1. Acknowledge faults; 1. 
Quittieren Störung return self, 0x0080 def b08Reserved(self): # 0100 Reserved return self, 0x0100 def b09Reserved(self): # 0200 Reserved return self, 0x0200 def b10ControlByPLC(self): # 0400 1 = Control by PLC; Führung durch PLC return self, 0x0400 def b11SetpointInversion(self): # 0800 1 = Setpoint inversion; Sollwert Invertierung return self, 0x0800 def b12Reserved(self): # 1000 Reserved return self, 0x1000 def b13MotorPotiSPRaise(self): # 2000 1 = Motorized potentiometer setpoint raise; (Motorpotenziometer Sollwert höher) return self, 0x2000 def b14MotorPotiSPLower(self): # 4000 1 = Motorized potentiometer setpoint lower; (Motorpotenziometer Sollwert tiefer) return self, 0x4000 def b15Reserved(self): # 8000 Reserved return self, 0x8000 """ test the constrution and methods """ STW1 = STW1Type(0xffff) print("STW1.word =", hex(STW1.word)) clrBit(STW1.bm00NoOff1_MeansON()) print("STW1.word =", hex(STW1.word)) STW1.word = 0x1234 print("STW1.word =", hex(STW1.word)) setBit( STW1.bm00NoOff1_MeansON() ) print("STW1.word =", hex(STW1.word)) clrBit( STW1.bm00NoOff1_MeansON() ) print("STW1.word =", hex(STW1.word)) toggleBit(STW1.bm03EnableOperation()) print("STW1.word =", hex(STW1.word)) toggleBit(STW1.bm03EnableOperation()) print("STW1.word =", hex(STW1.word)) print("STW1.bm00ON =", isBit(STW1.bm00NoOff1_MeansON() ) ) print("STW1.bm04 =", isBit(STW1.bm04RampGenEnable() ) ) It prints out: STW1.word = 0xffff STW1.word = 0xfffe STW1.word = 0x1234 STW1.word = 0x1235 STW1.word = 0x1234 STW1.word = 0x123c STW1.word = 0x1234 STW1.bm00ON = False STW1.bm04 = True
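One more option worth noting (an addition, not part of the original answers): since Python 3.6 the standard library's enum.IntFlag provides named, combinable bit flags, which covers the editor-friendly naming this answer is after. A small sketch with invented flag names:

```python
from enum import IntFlag

class ControlWord(IntFlag):      # flag names invented for illustration
    NO_OFF1 = 0x0001
    NO_OFF2 = 0x0002
    ENABLE_OPERATION = 0x0008

word = ControlWord.NO_OFF1 | ControlWord.ENABLE_OPERATION
print(ControlWord.NO_OFF1 in word)   # True  -- test a bit by name
word &= ~ControlWord.NO_OFF1         # clear a bit
print(ControlWord.NO_OFF1 in word)   # False
print(hex(word))                     # 0x8
```

Editors index the enum members normally, so name completion and go-to-definition work without any tuple plumbing.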
{ "language": "en", "url": "https://stackoverflow.com/questions/142812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: How do I convert a 12-bit integer to a hexadecimal string in C#? I want to convert a number between 0 and 4095 (12 bits) to its 3-character hexadecimal string representation in C#. Example: 2748 to "ABC" A: try 2748.ToString("X") A: If you want exactly 3 characters and are sure the number is in range, use: i.ToString("X3") If you aren't sure if the number is in range, this will give you more than 3 digits. You could do something like: (i % 0x1000).ToString("X3") Use a lower case "x3" if you want lower-case letters. A: Note: This assumes that you're using a custom, 12-bit representation. If you're just using an int/uint, then Muxa's solution is the best. Every four bits correspond to one hexadecimal digit. Therefore, just map the lowest four bits to a hex digit, then shift the input right by four (>> 4), and repeat. A: The easy C solution may be adaptable:

char hexCharacters[17] = "0123456789ABCDEF";

void toHex(char * outputString, long input)
{
    outputString[0] = hexCharacters[(input >> 8) & 0x0F];
    outputString[1] = hexCharacters[(input >> 4) & 0x0F];
    outputString[2] = hexCharacters[input & 0x0F];
}

You could also do it in a loop, but this is pretty straightforward, and a loop has pretty high overhead for only three conversions. I expect C# has a library function of some sort for this sort of thing, though. You could even use sprintf in C, and I'm sure C# has an analog to this functionality. -Adam
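The mask-then-format idea is the same in most languages. As a hedged cross-language illustration, here it is in Python, where the "03X" format specifier plays the role of C#'s "X3":

```python
def to_hex3(n):
    """Render the low 12 bits of n as exactly three uppercase hex digits."""
    return format(n & 0xFFF, "03X")   # mask to 12 bits, zero-pad to width 3

print(to_hex3(2748))    # ABC -- the example from the question
print(to_hex3(5))       # 005 -- zero-padded to three digits
print(to_hex3(0x1ABC))  # ABC -- out-of-range input is masked to 12 bits
```

Masking with & 0xFFF mirrors the (i % 0x1000) trick above: both guarantee the value fits in three hex digits.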
{ "language": "en", "url": "https://stackoverflow.com/questions/142813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to get started with game programming on the Zune My zune just updated to 3.0 (didn't even realize they were releasing something new!) and the update came with two games, but the Zune marketplace does not have games. Where do I go to get started, and what are the capabilities of the Zune in terms of games/apps? A: Well, first, you must download the Microsoft XNA 3.0 CTP. Read the documentation, which will explain the capabilities. But, from memory: * *No hardware accelerated 3d (obviously, you can create a software 3d engine and then render the result to a 2d sprite, but... Don't expect much in terms of performance ;)) *No XACT, you must use a new sound API A: Just an update but note that XNA 3.0 has been released. It requires some flavor of Visual Studio 2008. I downloaded it and coded & deployed "hello world" to my Zune in no time at all. Very easy. A: You should check out the blog of Rob Miles. He has a few chapters of his book on his site. Great place to start. A: I was hoping someone here would have better resources, but as this seems to be a new area of development, here's one resource that appears to give all the steps for a newbie to get started (too many assume you already have Visual studio, etc). I'm really interested in a better in-depth overview of the capabilities as well, though. -Adam
{ "language": "en", "url": "https://stackoverflow.com/questions/142816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Design Time viewing for User Control events I've created a WinForms control that inherits from System.Windows.Forms.UserControl...I've got some custom events on the control that I would like the consumer of my control to be able to see. I'm unable to actually get my events to show up in the Events tab of the Properties window during design time. This means the only way to assign the events is to programmatically write myUserControl.MyCustomEvent += new MyUserControl.MyCustomEventHandler(EventHandlerFunction); this is fine for me I guess but when someone else comes to use my UserControl they are not going to know that these events exist (unless they read the library doco...yeah right). I know the event will show up using Intellisense but it would be great if it could show in the properties window too. A: Make sure your events are exposed as public. For example... [Browsable(true)] public event EventHandler MyCustomEvent; A: A solution using a delegate. For example, I used this for a custom ListView which handles an item-added event. Declare your delegate: public delegate void ItemAddedHandler(object sender, ItemEventArgs e); then declare the event which uses the delegate: [Browsable(true)] public event ItemAddedHandler ItemAdded; Note: ItemEventArgs is a custom EventArgs. Hope it helps; works fine for me
{ "language": "en", "url": "https://stackoverflow.com/questions/142820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Is there a way to indefinitely pause a thread? I've been working on a web crawling .NET app in my free time, and one of the features of this app that I wanted to include was a pause button to pause a specific thread. I'm relatively new to multi-threading and I haven't been able to figure out a currently supported way to pause a thread indefinitely. I can't remember the exact class/method, but I know there is a way to do this that has been flagged as obsolete by the .NET framework. Is there any good general-purpose way to indefinitely pause a worker thread in C#/.NET? I haven't had a lot of time lately to work on this app, and the last time I touched it was in the .NET 2.0 framework. I'm open to any new features (if any) that exist in the .NET 3.5 framework, but I'd like to know of a solution that also works in the 2.0 framework, since that's what I use at work and it would be good to know just in case. A: Never, ever use Thread.Suspend. The major problem with it is that 99% of the time you can't know what that thread is doing when you suspend it. If that thread holds a lock, you make it easier to get into a deadlock situation, etc. Keep in mind that code you are calling may be acquiring/releasing locks behind the scenes. Win32 has a similar API: SuspendThread and ResumeThread. The following docs for SuspendThread give a nice summary of the dangers of the API: http://msdn.microsoft.com/en-us/library/ms686345(VS.85).aspx This function is primarily designed for use by debuggers. It is not intended to be used for thread synchronization. Calling SuspendThread on a thread that owns a synchronization object, such as a mutex or critical section, can lead to a deadlock if the calling thread tries to obtain a synchronization object owned by a suspended thread. To avoid this situation, a thread within an application that is not a debugger should signal the other thread to suspend itself.
The target thread must be designed to watch for this signal and respond appropriately. The proper way to suspend a thread indefinitely is to use a ManualResetEvent. The thread is most likely looping, performing some work. The easiest way to suspend the thread is to have the thread "check" the event each iteration, like so:

    while (true)
    {
        _suspendEvent.WaitOne(Timeout.Infinite);

        // Do some work...
    }

You specify an infinite timeout so when the event is not signaled, the thread will block indefinitely, until the event is signaled, at which point the thread will resume where it left off. You would create the event like so:

    ManualResetEvent _suspendEvent = new ManualResetEvent(true);

The true parameter tells the event to start out in the signaled state. When you want to pause the thread, you do the following:

    _suspendEvent.Reset();

And to resume the thread:

    _suspendEvent.Set();

You can use a similar mechanism to signal the thread to exit and wait on both events, detecting which event was signaled. Just for fun I'll provide a complete example:

    public class Worker
    {
        ManualResetEvent _shutdownEvent = new ManualResetEvent(false);
        ManualResetEvent _pauseEvent = new ManualResetEvent(true);
        Thread _thread;

        public Worker() { }

        public void Start()
        {
            _thread = new Thread(DoWork);
            _thread.Start();
        }

        public void Pause()
        {
            _pauseEvent.Reset();
        }

        public void Resume()
        {
            _pauseEvent.Set();
        }

        public void Stop()
        {
            // Signal the shutdown event
            _shutdownEvent.Set();

            // Make sure to resume any paused threads
            _pauseEvent.Set();

            // Wait for the thread to exit
            _thread.Join();
        }

        public void DoWork()
        {
            while (true)
            {
                _pauseEvent.WaitOne(Timeout.Infinite);

                if (_shutdownEvent.WaitOne(0))
                    break;

                // Do the work here..
            }
        }
    }

A: If there are no synchronization requirements:

    Thread.Sleep(Timeout.Infinite);

A: The Threading in C# ebook summarises Thread.Suspend and Thread.Resume thusly: The deprecated Suspend and Resume methods have two modes – dangerous and useless!
The book recommends using a synchronization construct such as an AutoResetEvent or Monitor.Wait to perform thread suspending and resuming. A: I just implemented a LoopingThread class which loops an action passed to the constructor. It is based on Brannon's post. I've put some other stuff into that, like WaitForPause(), WaitForStop(), and a PauseBetween property that indicates the time to wait before the next loop iteration. I also decided to change the while-loop to a do-while loop. This will give us deterministic behavior for a successive Start() and Pause(). By deterministic I mean that the action is executed at least once after a Start() command. In Brannon's implementation this might not be the case. I omitted some things to keep to the root of the matter, such as a "check if the thread was already started" guard and the IDisposable pattern.

    public class LoopingThread
    {
        private readonly Action _loopedAction;
        private readonly AutoResetEvent _pauseEvent;
        private readonly AutoResetEvent _resumeEvent;
        private readonly AutoResetEvent _stopEvent;
        private readonly AutoResetEvent _waitEvent;
        private readonly Thread _thread;

        public LoopingThread(Action loopedAction)
        {
            _loopedAction = loopedAction;
            _thread = new Thread(Loop);
            _pauseEvent = new AutoResetEvent(false);
            _resumeEvent = new AutoResetEvent(false);
            _stopEvent = new AutoResetEvent(false);
            _waitEvent = new AutoResetEvent(false);
        }

        public void Start()
        {
            _thread.Start();
        }

        public void Pause(int timeout = 0)
        {
            _pauseEvent.Set();
            _waitEvent.WaitOne(timeout);
        }

        public void Resume()
        {
            _resumeEvent.Set();
        }

        public void Stop(int timeout = 0)
        {
            _stopEvent.Set();
            _resumeEvent.Set();
            _thread.Join(timeout);
        }

        public void WaitForPause()
        {
            Pause(Timeout.Infinite);
        }

        public void WaitForStop()
        {
            Stop(Timeout.Infinite);
        }

        public int PauseBetween { get; set; }

        private void Loop()
        {
            do
            {
                _loopedAction();

                if (_pauseEvent.WaitOne(PauseBetween))
                {
                    _waitEvent.Set();
                    _resumeEvent.WaitOne(Timeout.Infinite);
                }
            } while (!_stopEvent.WaitOne(0));
        }
    }

A: Besides the suggestions above, I'd like to add one tip. In some cases, using BackgroundWorker can simplify your code (especially when you use anonymous methods to define DoWork and its other event handlers). A: In line with what the others said - don't do it. What you really want to do is to "pause work", and let your threads roam free. Can you give us some more details about the thread(s) you want to suspend? If you didn't start the thread, you definitely shouldn't even consider suspending it - it's not yours. If it is your thread, then I suggest instead of suspending it, you just have it sit, waiting for more work to do. Brannon has some excellent suggestions for this option in his response. Alternatively, just let it end, and spin up a new one when you need it. A: Suspend() and Resume() may be deprecated; however, they are in no way useless. If, for example, you have a thread doing lengthy work altering data, and the user wishes to stop it, he clicks a button. Of course, you need to ask for confirmation, but at the same time you do not want that thread to continue altering data if the user decides that he really wants to abort. Suspending the thread while waiting for the user to click the Yes or No button on the confirmation dialog is the only way to prevent it from altering the data before you signal the designated abort event that will allow it to stop. Events may be nice for simple threads having one loop, but complicated threads with complex processing are another issue. Certainly, Suspend() must never be used for synchronizing, since that is not its purpose. Just my opinion.
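To make the BackgroundWorker tip above concrete, here is a minimal sketch. It is hosted in a console program for simplicity, so the UI-thread marshalling that BackgroundWorker normally performs in WinForms is not visible here; the ManualResetEvent exists only to keep the demo alive until the worker finishes, and the "heavy" computation is a stand-in:

```csharp
using System;
using System.ComponentModel;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        var done = new ManualResetEvent(false);
        var worker = new BackgroundWorker();

        // DoWork runs on a thread-pool thread -- no UI access allowed here.
        worker.DoWork += delegate(object sender, DoWorkEventArgs e)
        {
            e.Result = 21 * 2; // stand-in for the "heavy" computation
        };

        // In a WinForms app this event is raised back on the UI thread,
        // so it is the safe place to update controls with the result.
        worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
        {
            Console.WriteLine("Result: " + e.Result); // prints "Result: 42"
            done.Set();
        };

        worker.RunWorkerAsync();
        done.WaitOne(); // only so the console demo doesn't exit early
    }
}
```

Anonymous methods keep the two handlers next to the code that starts the worker, which is what makes this pattern read well for one-off background jobs.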
{ "language": "en", "url": "https://stackoverflow.com/questions/142826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: What are your "hard rules" about commenting your code? I have seen the other questions but I am still not satisfied with the way this subject is covered. I would like to extract a distilled list of things to check on comments at a code inspection. I am sure people will say things that will just cancel each other. But hey, maybe we can build a list for each camp. For those who don't comment at all the list will just be very short :) A: Documentation is like sex; when it's good, it's very, very good, and when it's bad, it's better than nothing A: Write readable code that is self-explanatory as much as possible. Add comments whenever you have to write code that is too complex to understand at a glance. Also add comments to describe the business purpose behind code that you write, to make it easier to maintain/refactor it in the future. A: The comments you write can be revealing about the quality of your code. Countless times I've removed comments in my code to replace them with better, clearer code. For this I follow a couple of anti-commenting rules: * *If your comment merely explains a line of code, you should either let that line of code speak for itself or split it up into simpler components. *If your comment explains a block of code within a function, you should probably be explaining a new function instead. Those are really the same rule repeated for two different contexts. The other, more normal rules I follow are: * *When using a dynamically-typed language, document the expectations that important functions make about their arguments, as well as the expectations callers can make about the return values. Important functions are those that will ever have non-local callers. *When your logic is dictated by the behavior of another component, it's good to document what your understanding and expectations of that component are.
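A small before/after sketch of the second anti-commenting rule above (a comment explaining a block of code becomes a new, well-named function). The OrdersAboveThreshold name and the orders/threshold scenario are invented for illustration:

```csharp
using System;
using System.Collections.Generic;

public static class Report
{
    // Before, this logic lived inside a bigger function behind a comment:
    //   // keep only orders above the reporting threshold
    //   var big = new List<int>(); foreach (...) { ... }
    //
    // After: the block is a function whose name makes the comment redundant.
    public static List<int> OrdersAboveThreshold(List<int> orders, int threshold)
    {
        var result = new List<int>();
        foreach (int order in orders)
            if (order > threshold)
                result.Add(order);
        return result;
    }

    public static void Main()
    {
        var filtered = OrdersAboveThreshold(new List<int> { 5, 50, 500 }, 100);
        Console.WriteLine(filtered.Count); // prints "1"
    }
}
```

The call site now reads like the deleted comment did, which is the point of the rule.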
A: When implementing an RFC or other protocol specification, comment state machines / event handlers / etc with the section of the spec they correspond to. Make sure to list the version or date of the spec, in case it is revised later. A: I have one simple rule about commenting: Your code should tell the story of what you are doing; your comments should tell the story of why you are doing it. This way, I make sure that whoever inherits my code will be able to understand the intent behind the code. A: I usually comment a method before I write it. I'll write a line or two of comments for each step I need to take within the function, and then I write the code between the comments. When I'm done, the code is already commented. The great part about that is that it's commented before I write the code, so there are not unreasonable assumptions about previous knowledge in the comments; I, myself, knew nothing about my code when I wrote them. This means that they tend to be easy to understand, as they should be. A: There are no hard rules - hard rules lead to dogma and people generally follow dogma when they're not smart enough to think for themselves. The guidelines I follow: 1/ Comments tell what is being done, code tells how it's being done - don't duplicate your effort. 2/ Comments should refer to blocks of code, not each line. That includes comments that explain whole files, whole functions or just a complicated snippet of code. 3/ If I think I'd come back in a year and not understand the code/comment combination then my comments aren't good enough yet. A: A great rule for comments: if you're reading through code trying to figure something out, and a comment somewhere would have given you the answer, put it there when you know the answer. Only spend that time investigating once. Eventually you will know as you write the places that you need to leave guidance, and the places that are sufficiently obvious to stand alone. 
Until then, you'll spend time trawling through your code trying to figure out why you did something :) A: * *I comment public or protected functions with meta-comments, and usually hit the private functions if I remember. *I comment why any sufficiently complex code block exists (judgment call). The why is the important part. *I comment if I write code that I think is not optimal but I leave it in because I cannot figure out a smarter way or I know I will be refactoring later. *I comment to remind myself or others of missing functionality or upcoming requirements not yet present in the code (TODO, etc.). *I comment to explain complex business rules related to a class or chunk of code. I have been known to write several paragraphs to make sure the next guy/gal knows why I wrote a hundred-line class. A: If a comment is out of date (does not match the code), delete it or update it. Never leave an inaccurate comment in place. A: I document every class, every function, every variable within a class. Simple DocBlocks are the way forward. I'll generally write these docblocks more for automated API documentation than anything else... For example, the first section of one of my PHP classes:

    /**
     * Class to clean variables
     *
     * @package Majyk
     * @author Martin Meredith <martin@sourceguru.net>
     * @licence GPL (v2 or later)
     * @copyright Copyright (c) 2008 Martin Meredith <martin@sourceguru.net>
     * @version 0.1
     */
    class Majyk_Filter
    {
        /**
         * Class Constants for Cleaning Types
         */
        const Integer = 1;
        const PositiveInteger = 2;
        const String = 3;
        const NoHTML = 4;
        const DBEscapeString = 5;
        const NotNegativeInteger = 6;

        /**
         * Do the cleaning
         *
         * @param integer Type of Cleaning (as defined by constants)
         * @param mixed Value to be cleaned
         *
         * @return mixed Cleaned Variable
         */

But then, I'll also sometimes document significant code (from my init.php):

    // Register the Auto-Loader
    spl_autoload_register("majyk_autoload");

    // Add an Exception Handler.
    set_exception_handler(array('Majyk_ExceptionHandler', 'handle_exception'));

    // Turn Errors into Exceptions
    set_error_handler(array('Majyk_ExceptionHandler', 'error_to_exception'), E_ALL);

    // Add the generic Auto-Loader to the auto-loader stack
    spl_autoload_register("spl_autoload");

And, if it's not self-explanatory why something does something in a certain way, I'll comment that. A: The only guaranteed place I leave comments: TODO sections. The best place to keep track of things that need reworking is right there in the code. A: I create a comment block at the beginning of my code, listing the purpose of the program, the date it was created, any license/copyright info (like GPL), and the version history. I often comment my imports if it's not obvious why they are being imported, especially if the overall program doesn't appear to need the imports. I add a docstring to each class, method, or function, describing what the purpose of that block is and any additional information I think is necessary. I usually have a demarcation line for sections that are related, e.g. widget creation, variables, etc. Since I use SPE for my programming environment, it automatically highlights these sections, making navigation easier. I add TODO comments as reminders while I'm coding. It's a good way to remind myself to refactor the code once it's verified to work correctly. Finally, I comment individual lines that may need some clarification or otherwise need some metadata for myself in the future or other programmers. Personally, I hate looking at code and trying to figure out what it's supposed to do. If someone could just write a simple sentence to explain it, life is easier. Self-documenting code is a misnomer, in my book. A: I focus on the why. Because the what is often easily readable. TODOs are also great; they save a lot of time. And I document interfaces (for example, file formats).
A: A really important thing to check for when you are checking header documentation (or whatever you call the block preceding the method declaration) is that directives and caveats are easy to spot. Directives are any "do" or "don't do" instructions that affect the client: don't call from the UI thread, don't use in performance critical code, call X before Y, release return value after use, etc. Caveats are anything that could be a nasty surprise: remaining action items, known assumptions and limitations, etc. When you focus on a method that you are writing and inspecting, you'll see everything. When a programmer is using your method and thirty others in an hour, you can't count on a thorough read. I can send you research data on that if you're interested. A: Pre-ambles only; state a class's Single Responsibility, any notes or comments, and change log. As for methods, if any method needs substantial commenting, it is time to refactor. A: When you're writing comments, stop, reflect and ask yourself if you can change the code so that the comments aren't needed. Could you change some variable, class or method names to make things clearer? Would some asserts or other error checks codify your intentions or expectations? Could you split some long sections of code into clearly named methods or functions? Comments are often a reflection of our inability to write (a-hem, code) clearly. It's not always easy to write clearly with computer languages but take some time to try... because code never lies. P.S. The fact that you use quotes around "hard rules" is telling. Rules that aren't enforced aren't "hard rules" and the only rules that are enforced are in code. A: I add 1 comment to a block of code that summarizes what I am doing. This helps people who are looking for specific functionality or section of code. I comment any complex algorithm, or process, that can't be figured out at first glance. I sign my code. A: In my opinion, TODO/TBD/FIXME etc. 
are ok to have in code which is currently being worked on, but when you see code which hasn't been touched in 5 years and is full of them, you realize that it's a pretty lousy way of making sure that things get fixed. In short, TODO notes in comments tend to stay there. Better to use a bugtracker if you have things which need to be fixed at some point. Hudson (CI server) has a great plugin which scans for TODOs and notes how many there are in your code. You can even set thresholds causing the build to be classified as unstable if there are too many of them. My favorite rule-of-thumb regarding comments is: if the code and the comments disagree, then both are likely incorrect. A: We wrote an article on comments (actually, I've done several) here: http://agileinaflash.blogspot.com/2009/04/rules-for-commenting.html It's really simple: Comments are written to tell you what the code cannot. This results in a simple process:

- Write any comment you want at first.
- Improve the code so that the comment becomes redundant.
- Delete the now-redundant comment.
- Only commit code that has no redundant comments.

A: I'm writing a Medium article in which I will present this rule: when you commit changes to a repository, each comment must be one of these three types: * *A license header at the top *A documentation comment (e.g., Javadoc), or *A TODO comment. The last type should not be permanent. Either the thing gets done and the TODO comment is deleted, or we decide the task is not necessary and the TODO comment gets deleted.
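Several answers in this thread converge on the same rule: code tells what, comments tell why. A tiny invented example (the promo, the figures, and the Pricing class are all hypothetical) of a why-comment plus a tracked TODO:

```csharp
using System;

public static class Pricing
{
    public static decimal Discounted(decimal price, int quantity)
    {
        // Why: the (fictional) 2008 bulk promo gives 10% off orders of 10+,
        // capped at $50 so resellers can't drain the margin.
        // The code below already says *what*; this comment records *why*.
        if (quantity < 10)
            return price;

        decimal discount = price * 0.10m;
        if (discount > 50m)
            discount = 50m; // TODO: remove the cap when the promo ends

        return price - discount;
    }

    public static void Main()
    {
        Console.WriteLine(Pricing.Discounted(100m, 12));  // 10% off applies
        Console.WriteLine(Pricing.Discounted(1000m, 12)); // cap kicks in
    }
}
```

Delete the why-comment and the magic numbers become archaeology; delete the code's inline restatement of *what* and nothing is lost, which is the asymmetry the rule is pointing at.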
{ "language": "en", "url": "https://stackoverflow.com/questions/142830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }