Q: Why do I get an error after closing my Windows Forms application?

When I run my Visual Studio Windows Forms application by pressing F5 (debug mode) and then click the close button (which calls Application.Exit()), after a few seconds I get an error that says: "Cannot access a disposed object. Object name: 'SampleForm'."

A bit of background: I have another thread that runs every x seconds. My guess is that when I close the application, since it is still in debug mode, the other thread is still running; it tries to access something, but since I closed the application the form is disposed. Is this correct? Do I have to kill the background thread before I call Application.Exit()?

Update: Now when I call thread.Abort() before the call to Application.Exit(), the application closes completely. Before, even after I clicked the close button, the debugger was still running (i.e. the stop button was still enabled), so it must have been because the thread was still active.

A: Mark your thread as a background thread, and it will stop running as soon as you close the window.

A: I think the debug vs. release distinction is a red herring. In release mode, you're just not getting the dialog box with the "cannot access a disposed object" error.

A: Set the thread to run in background mode: Thread.IsBackground = true. The default is foreground mode, which keeps the process alive (even though the UI has closed). Scroll down to "Foreground and Background Threads" here for more info.

A: Yes, you need to stop the thread first. This really has nothing to do with debug mode, though; it has to do with basic threading.

EDIT: Per your update, you should not be aborting the thread. The thread should be signaled and exit on its own. I am not sure what your thread or code looks like, but something like:

    do
    {
        // Crazy threading stuff here
    } while (_running);

A: Make sure that the other thread is set as a background thread.
Also, in your exit path call otherThread.Join() before Application.Exit().

A: Yes, you definitely need to stop the threads you spawn. In this case, check whether the UI object you are accessing is disposed, and if so, end the current thread. Another possibility would simply be to keep track of your threads and stop them on exit. A third possibility would be to look into the ThreadPool and BackgroundWorker areas, to see if they handle any sort of thread lifecycle management like that.
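The same "mark it background, signal it, and join it" advice applies outside .NET too. As an illustration only, here is a small Python sketch (all names hypothetical) where daemon=True plays the role of Thread.IsBackground = true, and an Event lets the loop exit on its own instead of being aborted:

```python
import threading
import time

class Worker:
    """Background worker that exits when signaled, instead of being aborted."""

    def __init__(self):
        self._stop = threading.Event()
        self.ticks = 0
        # daemon=True is the analogue of Thread.IsBackground = true in .NET:
        # a daemon thread does not keep the process alive on its own.
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.ticks += 1           # periodic work goes here
            self._stop.wait(0.01)     # sleep, but wake immediately if signaled

    def start(self):
        self._thread.start()

    def shutdown(self):
        self._stop.set()              # signal the loop to finish...
        self._thread.join()           # ...and wait for it, like otherThread.Join()

w = Worker()
w.start()
time.sleep(0.05)
w.shutdown()
assert not w._thread.is_alive()
```

The key point is the shutdown order: signal first, then join, so the worker finishes its current iteration cleanly before the UI tears down.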
{ "language": "en", "url": "https://stackoverflow.com/questions/162962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SWT Link flickers with gradient background

I'm developing an Eclipse plugin that uses an SWT interface. I need to display text, and within that text there need to be links. The only two widgets I've found that allow clickable links in text are Link and Browser. Browser, however, is overkill for my needs, and I couldn't properly customize its look. That leaves only the Link widget. The problem is that I need the Link widget to inherit a gradient from the Composite it is in. It does this correctly, except that when it is resized or scrolled the Link component flickers. The Link is the only component in which I have seen this effect. In an attempt to fix this I've tried manipulating other components into acting as clickable links, but I haven't found a good solution yet. Is there any way to fix the flickering on the Link, or is there a different component that supports links? Thanks, Brian

A: Have you tried passing SWT.NO_BACKGROUND to your Link widget? It might get a little strange... and you may have to do a little more work to get the GUI drawing properly, but that would be my first guess. Other than that, here's my quick-n'-dirty implementation of a link inside of a StyledText. You will need to fill in code for changing the cursor (if that's something you want), as well as come up with a good text-to-link mapping scheme. The only thing is I'm not sure if StyledText will inherit your background... give it a shot.
    public class StyledTextExample {
        public static void main(String[] args) {
            // create the widget's shell
            Shell shell = new Shell();
            shell.setLayout(new FillLayout());
            shell.setSize(200, 100);
            Display display = shell.getDisplay();

            // create the styled text widget
            final StyledText widget = new StyledText(shell, SWT.NONE);
            String text = "This is the StyledText widget.";
            widget.setText(text);
            widget.setEditable(false);

            final StyleRange hyperlinkStyle = new StyleRange();
            String linkWord = "StyledText";
            hyperlinkStyle.start = text.indexOf(linkWord);
            hyperlinkStyle.length = linkWord.length();
            hyperlinkStyle.fontStyle = SWT.BOLD;
            hyperlinkStyle.foreground = display.getSystemColor(SWT.COLOR_BLUE);
            widget.setStyleRange(hyperlinkStyle);

            widget.addMouseListener(new MouseAdapter() {
                public void mouseUp(MouseEvent arg0) {
                    Point clickPoint = new Point(arg0.x, arg0.y);
                    try {
                        int offset = widget.getOffsetAtLocation(clickPoint);
                        if (widget.getStyleRangeAtOffset(offset) != null) {
                            System.out.println("link");
                        }
                    } catch (IllegalArgumentException e) {
                        // ignore; clicked outside the text range.
                    }
                }
            });

            shell.open();
            while (!shell.isDisposed())
                if (!display.readAndDispatch())
                    display.sleep();
        }
    }

A: After spending the day working on this, I came up with a workaround. I created a Composite for the text area. Each word that isn't part of a URL got its own Label, while for links each letter got its own Label. The Labels for the URL characters then got a listener to launch a browser. This method provided the Link functionality, handled resizing properly, and has no flicker.
{ "language": "en", "url": "https://stackoverflow.com/questions/162969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Eclipse RCP Splash Screen

I'm involved in a project that is attempting to use the Eclipse RCP splash screen to gather user credentials, language, etc. If this screen loses focus, it is not available (under Windows at least) through the Alt-Tab functionality, and can only be found by minimizing all other windows and uncovering it. Is there any way of having this screen allow itself to be activated in this way? They're avoiding creating an intermediate screen, for reasons unknown at this point.

A: I think it might be time to examine those unknown reasons. Even Eclipse doesn't use the splash screen in this way. If it needs to prompt for information, it opens a new dialog to ask for it. Good luck.

[Edit] I stand corrected. This thread seems to have a solution to this. Good luck, I'm no SWT/RCP guru.

A: See this page. From one of the comments: "The splash screen window is created natively with the extended window style WS_EX_TOOLWINDOW which makes it not appear in the task bar. This corresponds to the SWT constant SWT.TOOL." I don't know if it's possible to change the window style after it is created on Windows. You can always drop down to JNI if that's necessary.

A: Create your own implementation of AbstractSplashHandler. When creating the shell, don't use the SWT.TOOL style. The shell will then be accessible through the Windows task bar.
{ "language": "en", "url": "https://stackoverflow.com/questions/162985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Including array index in XML serialization

I have a class that looks like this:

    public class SomeClass
    {
        public SomeChildClass[] childArray;
    }

which the XmlSerializer will output as:

    <SomeClass>
      <SomeChildClass> ... </SomeChildClass>
      <SomeChildClass> ... </SomeChildClass>
    </SomeClass>

But I want the XML to look like this:

    <SomeClass>
      <SomeChildClass index="1"> ... </SomeChildClass>
      <SomeChildClass index="2"> ... </SomeChildClass>
    </SomeClass>

where the index attribute is equal to the item's position in the array. I could add an index property to SomeChildClass marked with the XmlAttribute attribute, but then I would have to remember to loop through the array and set that value before I serialize my object. Is there some attribute I can add, or some other way to automatically generate the index attribute for me?

A: The best approach would be to do what you said and add a property to SomeChildClass like this:

    [XmlAttribute("Index")]
    public int Order { get; set; }

Then, however you are adding these items to your array, make sure that this property gets set. Then when you serialize... presto!

A: You may need to look into implementing System.Xml.Serialization.IXmlSerializable to accomplish this.

A: You can check the XmlAttributeOverrides class.
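The "compute the index at serialization time" idea can be shown end-to-end in a few lines. This is a hedged Python sketch using xml.etree (the serialize helper is invented for illustration; element and attribute names mirror the question):

```python
import xml.etree.ElementTree as ET

def serialize(children):
    """Write each child element with a 1-based index attribute,
    derived from its position at serialization time rather than
    stored on the objects themselves."""
    root = ET.Element("SomeClass")
    for i, text in enumerate(children, start=1):
        child = ET.SubElement(root, "SomeChildClass", index=str(i))
        child.text = text
    return ET.tostring(root, encoding="unicode")

xml = serialize(["a", "b"])
assert 'index="1"' in xml and 'index="2"' in xml
```

Because the index is computed inside the serializer, there is nothing to "remember to set" before serializing — the attribute can never get out of sync with the array order.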
{ "language": "en", "url": "https://stackoverflow.com/questions/162986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Dynamically Load Embedded Resource Report Using Microsoft.Reporting.WinForms

How does one dynamically load a new report from an embedded resource? I have created a reporting project that contains a report as an embedded resource. I added a second report file and use the following code to switch reports:

    this.reportViewer1.LocalReport.ReportEmbeddedResource = "ReportsApplication2.Report2.rdlc";
    this.reportViewer1.LocalReport.Refresh();
    this.reportViewer1.RefreshReport();

When this code executes, the original report remains visible in the report viewer. I have also tried using LocalReport.LoadReportDefinition but had the same result.

A: The answer: you have to call <ReportViewer>.Reset() prior to changing the value of ReportEmbeddedResource or calling LoadReportDefinition. After you do so, you'll also have to call <ReportViewer>.LocalReport.DataSources.Add( ... ) to re-establish the data sources.

A: A better way to reference your reports is by using the default value of ReportEmbeddedResource; don't hard-code it, just change the name of the report:

    // choose which report to load
    string reportEmbeddedResource = this.orderReportViewer.LocalReport.ReportEmbeddedResource;
    // remove the extension .rdlc
    reportEmbeddedResource = reportEmbeddedResource.Remove(reportEmbeddedResource.LastIndexOf('.'));
    // remove the name of the current report, e.g. .invoice.rdlc
    reportEmbeddedResource = reportEmbeddedResource.Remove(reportEmbeddedResource.LastIndexOf('.'));
    // clear the current ReportEmbeddedResource
    this.orderReportViewer.Reset();
    if (_retailReceip)
    {
        this.orderReportViewer.LocalReport.ReportEmbeddedResource = reportEmbeddedResource + ".PrintReceipt.rdlc";
    }
    else
    {
        this.orderReportViewer.LocalReport.ReportEmbeddedResource = reportEmbeddedResource + ".PrintOrder.rdlc";
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/162989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Selecting a proper toolkit for a 2D simulation project in Java

I am looking for a toolkit that will allow me to design widgets containing 2D graphics for an elevator simulation in Java. Once created, those widgets will be integrated with the SWT, Swing, or QtJambi framework.

Background information: I am developing an elevator simulator for fun. My main goal is to increase my knowledge of Java, the Eclipse IDE, and, more importantly, concurrency. It is certainly fun, and I enjoyed implementing the state machine pattern. Anyway, I am at a point where I would like to see the elevator on the screen and not limit myself to logging its operations on the console. I will probably choose SWT, Swing, or QtJambi for the UI controls, but I am wondering what to do with the graphical part of the simulation.

A: You can use an SWT canvas (or Swing canvas, or OpenGL canvas via JOGL, ...) and set it up as an Observer of your simulation; whenever the simulation state changes, you can redraw the new state.

A: You could get some abstract graphics out of a graph visualisation tool such as JGraph. You could use this to visualise which state your elevator is in. However, I'm not sure how flexible these sorts of graph visualisation tools are and whether you can add your own graphics and animations.

A: Are you sure you actually want to be using widgets? Would using Graphics2D and friends and your own abstractions not be a better fit?
{ "language": "en", "url": "https://stackoverflow.com/questions/162991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: The RunInstaller attribute in a WMI provider assembly

I am creating a decoupled WMI provider in a class library. Everything I have read points towards including something along these lines:

    [System.ComponentModel.RunInstaller(true)]
    public class MyApplicationManagementInstaller : DefaultManagementInstaller { }

I gather the purpose of this installation step is that the Windows WMI infrastructure needs to be aware of the structure of my WMI provider before it is used. My question is: when is this "installer" run? MSDN says that the installer will be invoked "during installation of an assembly", but I am not sure what that means or when it would happen in the context of a class library containing a WMI provider. I was under the impression that this was an automated replacement for manually running InstallUtil.exe against the assembly containing the WMI provider, but changes I make to the provider are not recognised by the Windows WMI infrastructure unless I manually run InstallUtil from the command prompt. I can do this on my own machine during development, but if an application using the provider is deployed to other machines, what then? It seems that this RunInstaller / DefaultManagementInstaller combination is not working properly. Correct?

A: As I understand it, DefaultManagementInstaller is run by InstallUtil.exe; if you don't include it, the class is not installed in WMI. Maybe it is possible to create a 'setup project' or 'installer project' that runs it, but I'm not sure because I don't use Visual Studio.

[edit] For remote installation, an option could be to use InstallUtil with the /MOF option to generate a MOF file for the assembly, and use mofcomp to move it into WMI.
A: I use something like this to call InstallUtil programmatically:

    public static void Run( Type type )
    {
        // Register WMI stuff
        var installArgs = new[]
        {
            string.Format( "//logfile={0}", @"c:\Temp\sample.InstallLog" ),
            "//LogToConsole=false",
            "//ShowCallStack",
            type.Assembly.Location,
        };
        ManagedInstallerClass.InstallHelper( installArgs );
    }

Call this from your Main() method. -dave

A: Thanks Uros. It does look like all that RunInstaller and DefaultManagementInstaller do is enable you to run InstallUtil successfully against the assembly. This is strange, because I'm almost certain that I didn't know about InstallUtil at the point where I'd compiled and played with my first WMI provider. I will look into using the MOF file, and for my own use I can just run the InstallUtil command line as a post-build event in VS.
{ "language": "en", "url": "https://stackoverflow.com/questions/162993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MySQL Query: LIMITing a JOIN

Say I have two tables I want to join.

Categories:

    id   name
    ----------
    1    Cars
    2    Games
    3    Pencils

And items:

    id   categoryid   itemname
    ---------------------------
    1    1            Ford
    2    1            BMW
    3    1            VW
    4    2            Tetris
    5    2            Pong
    6    3            Foobar Pencil Factory

I want a query that returns the category and the first (and only the first) itemname:

    category.id   category.name   item.id   item.itemname
    -----------------------------------------------------
    1             Cars            1         Ford
    2             Games           4         Tetris
    3             Pencils         6         Foobar Pencil Factory

And is there a way I could get random results like:

    category.id   category.name   item.id   item.itemname
    -----------------------------------------------------
    1             Cars            3         VW
    2             Games           5         Pong
    3             Pencils         6         Foobar Pencil Factory

Thanks!

A: Just done a quick test. This seems to work:

    mysql> select * from categories c, items i
        -> where i.categoryid = c.id
        -> group by c.id;
    +------+---------+------+------------+----------------+
    | id   | name    | id   | categoryid | name           |
    +------+---------+------+------------+----------------+
    |    1 | Cars    |    1 |          1 | Ford           |
    |    2 | Games   |    4 |          2 | Tetris         |
    |    3 | Pencils |    6 |          3 | Pencil Factory |
    +------+---------+------+------------+----------------+
    3 rows in set (0.00 sec)

I think this would fulfil your first question. Not sure about the second one; I think that needs an inner query with ORDER BY RAND() or something like that!
A: MySQL lets you have columns that are not included in the grouping or an aggregate, in which case they get arbitrary values:

    select category.id, category.name, itemid, itemname
    from category
    inner join (select items.categoryid, items.id as itemid, items.itemname
                from items
                group by categoryid) as firstitems
        on category.id = firstitems.categoryid

Or, for minimums:

    select category.id, category.name, itemid, itemname
    from category
    inner join (select items.categoryid, min(items.id) as itemid, items.itemname
                from items
                group by items.categoryid) as firstitems
        on category.id = firstitems.categoryid

A: MySQL does let you include non-aggregated columns, and there is no guarantee of determinism, but in my experience I nearly always get the first values. So usually (but not guaranteed) this will give you the first:

    select * from categories c, items i
    where i.categoryid = c.id
    group by c.id;

If you want it guaranteed, you will need to do something like:

    select categories.id, categories.name, items.id, items.itemname
    from categories
    inner join items
        on items.categoryid = categories.id
        and items.id = (select min(items2.id)
                        from items as items2
                        where items2.categoryid = categories.id)

If you want random answers, you will have to change the subquery a little bit:

    select categories.id, categories.name, items.id, items.itemname
    from categories
    inner join items
        on items.categoryid = categories.id
        and items.id = (select items2.id
                        from items as items2
                        where items2.categoryid = categories.id
                        order by rand()
                        limit 1)
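The guaranteed min(id)-per-group join can be demonstrated end-to-end. Here is a sketch using Python's bundled sqlite3 with the sample data from the question; the correlated-subquery shape is standard SQL, so the same query works in MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE categories (id INTEGER, name TEXT);
    CREATE TABLE items (id INTEGER, categoryid INTEGER, itemname TEXT);
    INSERT INTO categories VALUES (1,'Cars'),(2,'Games'),(3,'Pencils');
    INSERT INTO items VALUES
        (1,1,'Ford'),(2,1,'BMW'),(3,1,'VW'),
        (4,2,'Tetris'),(5,2,'Pong'),(6,3,'Foobar Pencil Factory');
""")
# Deterministic "first item per category": join each category to the item
# whose id is the minimum for that category, instead of relying on
# non-standard GROUP BY column selection.
rows = conn.execute("""
    SELECT c.id, c.name, i.id, i.itemname
    FROM categories c
    JOIN items i
      ON i.categoryid = c.id
     AND i.id = (SELECT MIN(i2.id) FROM items i2 WHERE i2.categoryid = c.id)
    ORDER BY c.id
""").fetchall()
# rows matches the expected result table in the question.
```

Swapping MIN(i2.id) for an ORDER BY RAND() LIMIT 1 subquery (in MySQL) gives the random variant.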
{ "language": "en", "url": "https://stackoverflow.com/questions/163004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: urllib2 file name

If I open a file using urllib2, like so:

    remotefile = urllib2.urlopen('http://example.com/somefile.zip')

is there an easy way to get the file name, other than parsing the original URL?

EDIT: changed openfile to urlopen... not sure how that happened.

EDIT2: I ended up using:

    filename = url.split('/')[-1].split('#')[0].split('?')[0]

Unless I'm mistaken, this should strip out all potential queries as well.

A: I think that "the file name" isn't a very well defined concept when it comes to HTTP transfers. The server might (but is not required to) provide one as a "Content-Disposition" header; you can try to get that with remotefile.headers['Content-Disposition']. If this fails, you probably have to parse the URI yourself.

A: Just saw this. I normally do:

    filename = url.split("?")[0].split("/")[-1]

A: Did you mean urllib2.urlopen? You could potentially lift the intended filename if the server was sending a Content-Disposition header by checking remotefile.info()['Content-Disposition'], but as it is I think you'll just have to parse the URL. You could use urlparse.urlsplit, but if you have any URLs like the second example, you'll end up having to pull the file name out yourself anyway:

    >>> urlparse.urlsplit('http://example.com/somefile.zip')
    ('http', 'example.com', '/somefile.zip', '', '')
    >>> urlparse.urlsplit('http://example.com/somedir/somefile.zip')
    ('http', 'example.com', '/somedir/somefile.zip', '', '')

Might as well just do this:

    >>> 'http://example.com/somefile.zip'.split('/')[-1]
    'somefile.zip'
    >>> 'http://example.com/somedir/somefile.zip'.split('/')[-1]
    'somefile.zip'

A: Using urlsplit is the safest option:

    url = 'http://example.com/somefile.zip'
    urlparse.urlsplit(url).path.split('/')[-1]

A: Do you mean urllib2.urlopen? There is no function called openfile in the urllib2 module.
Anyway, use the urllib2.urlparse functions:

    >>> from urllib2 import urlparse
    >>> print urlparse.urlsplit('http://example.com/somefile.zip')
    ('http', 'example.com', '/somefile.zip', '', '')

Voila.

A: The os.path.basename function works not only for file paths, but also for URLs, so you don't have to manually parse the URL yourself. Also, it's important to note that you should use result.url instead of the original url in order to follow redirect responses:

    import os
    import urllib2
    result = urllib2.urlopen(url)
    real_url = urllib2.urlparse.urlparse(result.url)
    filename = os.path.basename(real_url.path)

A: You could also combine both of the two best-rated answers: using urllib2.urlparse.urlsplit() to get the path part of the URL, and then os.path.basename for the actual file name. Full code would be:

    >>> remotefile = urllib2.urlopen(url)
    >>> try:
    ...     filename = remotefile.info()['Content-Disposition']
    ... except KeyError:
    ...     filename = os.path.basename(urllib2.urlparse.urlsplit(url).path)

A: If you only want the file name itself, assuming that there are no query variables at the end, like http://example.com/somedir/somefile.zip?foo=bar, then you can use os.path.basename for this:

    [user@host]$ python
    Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04)
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import os
    >>> os.path.basename("http://example.com/somefile.zip")
    'somefile.zip'
    >>> os.path.basename("http://example.com/somedir/somefile.zip")
    'somefile.zip'
    >>> os.path.basename("http://example.com/somedir/somefile.zip?foo=bar")
    'somefile.zip?foo=bar'

Some other posters mentioned using urlparse, which will work, but you'd still need to strip the leading directory from the file name. If you use os.path.basename() then you don't have to worry about that, since it returns only the final part of the URL or file path.

A: I guess it depends what you mean by parsing. There is no way to get the filename without parsing the URL, i.e.
the remote server doesn't give you a filename. However, you don't have to do much yourself; there's the urlparse module:

    In [9]: urlparse.urlparse('http://example.com/somefile.zip')
    Out[9]: ('http', 'example.com', '/somefile.zip', '', '', '')

A: Not that I know of, but you can parse it easily enough like this:

    url = 'http://example.com/somefile.zip'
    print url.split('/')[-1]

A:

    import os, urllib2
    resp = urllib2.urlopen('http://www.example.com/index.html')
    my_url = resp.geturl()
    os.path.split(my_url)[1]  # 'index.html'

This is not openfile, but maybe it still helps :)

A: Using requests, but you can do it easily with urllib(2):

    import requests
    from urllib import unquote
    from urlparse import urlparse

    sample = requests.get(url)
    if sample.status_code == 200:
        # has_key does not work here, and this helps avoid problems with names
        if filename == False:
            if 'content-disposition' in sample.headers.keys():
                filename = sample.headers['content-disposition'].split('filename=')[-1].replace('"', '').replace(';', '')
            else:
                filename = urlparse(sample.url).query.split('/')[-1].split('=')[-1].split('&')[-1]
                if not filename:
                    if url.split('/')[-1] != '':
                        filename = sample.url.split('/')[-1].split('=')[-1].split('&')[-1]
        filename = unquote(filename)

A: You probably can use a simple regular expression here.
Something like:

    In [26]: import re
    In [27]: pat = re.compile('.+[\/\?#=]([\w-]+\.[\w-]+(?:\.[\w-]+)?$)')
    In [28]: test_set
    ['http://www.google.com/a341.tar.gz',
     'http://www.google.com/a341.gz',
     'http://www.google.com/asdasd/aadssd.gz',
     'http://www.google.com/asdasd?aadssd.gz',
     'http://www.google.com/asdasd#blah.gz',
     'http://www.google.com/asdasd?filename=xxxbl.gz']
    In [30]: for url in test_set:
       ....:     match = pat.match(url)
       ....:     if match and match.groups():
       ....:         print(match.groups()[0])
       ....:
    a341.tar.gz
    a341.gz
    aadssd.gz
    aadssd.gz
    blah.gz
    xxxbl.gz

A: Using PurePosixPath, which is not operating-system-dependent and handles URLs gracefully, is the pythonic solution:

    >>> from pathlib import PurePosixPath
    >>> path = PurePosixPath('http://example.com/somefile.zip')
    >>> path.name
    'somefile.zip'
    >>> path = PurePosixPath('http://example.com/nested/somefile.zip')
    >>> path.name
    'somefile.zip'

Notice how there is no network traffic here or anything (i.e. those URLs don't go anywhere); it is just using standard parsing rules.
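For readers on Python 3, the urlsplit-plus-basename approach above translates directly; this sketch (the helper name filename_from_url is made up) also percent-decodes the result, which the split-based one-liners skip:

```python
import os
from urllib.parse import urlsplit, unquote

def filename_from_url(url):
    """Derive a filename from the URL path alone (ignores any
    Content-Disposition header the server might send).  Parsing the
    path first drops the query string and fragment for free."""
    path = urlsplit(url).path
    return unquote(os.path.basename(path))

assert filename_from_url("http://example.com/somefile.zip") == "somefile.zip"
assert filename_from_url("http://example.com/somedir/somefile.zip?foo=bar") == "somefile.zip"
assert filename_from_url("http://example.com/some%20file.zip#sec") == "some file.zip"
```

Unlike os.path.basename applied to the raw URL, this never leaves `?foo=bar` or `#sec` glued to the name.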
{ "language": "en", "url": "https://stackoverflow.com/questions/163009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Where can I find free content hosting?

Is there any free hosting for JavaScript? Recently Google has been hosting jQuery, etc., and Yahoo hosts its YUI, which is great, but it'd be even better if there was a service that could host user scripts and things like that. Any ideas?

A: You can turn Google App Engine into your own CDN, which will definitely give you the effect you are looking for: http://www.coderjournal.com/2008/06/turn-google-app-engine-into-a-content-delivery-network-cdn/

A: I found another great free JavaScript file hosting: www.yourjavascript.com. They have a nice feature to make the file accessible for specific domains.
{ "language": "en", "url": "https://stackoverflow.com/questions/163021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: High resolution timer in .NET

I'd like to do some basic profiling of my code, but found that DateTime.Now in C# only has a resolution of about 16 ms. There must be better timekeeping constructs that I haven't yet found.

A: For the highest-resolution performance counters you can use the underlying Win32 performance counters. Add the following P/Invoke sigs:

    [System.Runtime.InteropServices.DllImport("Kernel32.dll")]
    public static extern bool QueryPerformanceCounter(out long perfcount);

    [System.Runtime.InteropServices.DllImport("Kernel32.dll")]
    public static extern bool QueryPerformanceFrequency(out long freq);

And call them using:

    #region Query Performance Counter
    /// <summary>
    /// Gets the current 'Ticks' on the performance counter
    /// </summary>
    /// <returns>Long indicating the number of ticks on the performance counter</returns>
    public static long QueryPerformanceCounter()
    {
        long perfcount;
        QueryPerformanceCounter(out perfcount);
        return perfcount;
    }
    #endregion

    #region Query Performance Frequency
    /// <summary>
    /// Gets the number of performance counter ticks that occur every second
    /// </summary>
    /// <returns>The number of performance counter ticks that occur every second</returns>
    public static long QueryPerformanceFrequency()
    {
        long freq;
        QueryPerformanceFrequency(out freq);
        return freq;
    }
    #endregion

Dump it all into a simple class and you're ready to go.
Example (assuming a class name of PerformanceCounter):

    long startCount = PerformanceCounter.QueryPerformanceCounter();
    // DoStuff();
    long stopCount = PerformanceCounter.QueryPerformanceCounter();
    long elapsedCount = stopCount - startCount;
    double elapsedSeconds = (double)elapsedCount / PerformanceCounter.QueryPerformanceFrequency();
    MessageBox.Show(String.Format("Took {0} Seconds", Math.Round(elapsedSeconds, 6).ToString()));

A: Here is a sample bit of code to time an operation:

    Dim sw As New Stopwatch()
    sw.Start()
    'Insert code to time
    sw.Stop()
    Dim ms As Long = sw.ElapsedMilliseconds
    Console.WriteLine("Total Seconds Elapsed: " & ms / 1000)

EDIT: And the neat thing is that it can resume as well:

    Stopwatch sw = new Stopwatch();
    foreach (MyStuff stuff in _listOfMyStuff)
    {
        sw.Start();
        stuff.DoCoolCalculation();
        sw.Stop();
    }
    Console.WriteLine("Total calculation time: {0}", sw.Elapsed);

The System.Diagnostics.Stopwatch class will use a high-resolution counter if one is available on your system.

A: The System.Diagnostics.Stopwatch class is awesome for profiling. Here is a link to Vance Morrison's code timer blog if you don't want to write your own measurement functions.

A: You could call down to the high-resolution performance counter in Windows. The function name is QueryPerformanceCounter in kernel32.dll. Syntax for importing into C#:

    [DllImport("Kernel32.dll")]
    private static extern bool QueryPerformanceCounter(out long lpPerformanceCount);

Syntax for the Windows call:

    BOOL QueryPerformanceCounter(
        LARGE_INTEGER *lpPerformanceCount
    );

QueryPerformanceCounter @ MSDN
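Other runtimes expose the same high-resolution counter directly; for instance, Python's time.perf_counter() wraps the platform's best monotonic clock (QueryPerformanceCounter on Windows). A small sketch, with time_it being a hypothetical helper name:

```python
import time

def time_it(fn, *args):
    """Return (result, elapsed_seconds) measured with the
    highest-resolution monotonic clock the platform offers."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = time_it(sum, range(1000))
assert result == 499500 and elapsed >= 0.0

# The advertised resolution is typically far below DateTime.Now's ~16 ms:
assert time.get_clock_info("perf_counter").resolution < 0.016
```

Like Stopwatch, perf_counter is monotonic, so elapsed differences are never negative even if the wall clock is adjusted mid-measurement.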
{ "language": "en", "url": "https://stackoverflow.com/questions/163022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: How can I detect if I'm compiling for a 64-bit architecture in C++?

In a C++ function I need the compiler to choose a different block if it is compiling for a 64-bit architecture. I know a way to do it for MSVC++ and g++, so I'll post it as an answer. However, I would like to know if there is a better way (more elegant, and one that would work for all compilers / all 64-bit architectures). If there is not a better way, what other predefined macros should I look for in order to be compatible with other compilers/architectures?

A: Why are you choosing one block over the other? If your decision is based on the size of a pointer, use sizeof(void*) == 8. If your decision is based on the size of an integer, use sizeof(int) == 8. My point is that the name of the architecture itself should rarely make any difference. You check only what you need to check, for the purposes of what you are going to do. Your question does not cover very clearly what the purpose of the check is. What you are asking is akin to trying to determine whether DirectX is installed by querying the version of Windows. You have more portable and generic tools at your disposal.

A: Raymond covers this.

A: An architecture-independent way to detect 32-bit and 64-bit builds in C and C++ looks like this:

    // C
    #include <stdint.h>
    // C++
    #include <cstdint>

    #if INTPTR_MAX == INT64_MAX
    // 64-bit
    #elif INTPTR_MAX == INT32_MAX
    // 32-bit
    #else
    #error Unknown pointer size or missing size macros!
    #endif

A: This works for MSVC++ and g++:

    #if defined(_M_X64) || defined(__amd64__)
    // code...
    #endif

A: If you're compiling for the Windows platform, you should use:

    #ifdef _WIN64

The MSVC compiler defines that for both x64 and IA-64 platforms (you don't want to cut out that market, do you?). I'm not sure if gcc does the same, but it should if it doesn't. An alternative is

    #ifdef WIN64

which has a subtle difference. WIN64 (without the leading underscore) is defined by the SDK (or the build configuration).
Since this is defined by the SDK/build config, it should work just as well with gcc.

A:

    #ifdef _LP64

works on both platforms.

A: Here's a good overview for Mac OS X: http://developer.apple.com/documentation/Darwin/Conceptual/64bitPorting

A: If you're using Windows, you're probably better off reading the "PROCESSOR_ARCHITECTURE" environment variable from the registry, because sizeof(PVOID) will equal 4 for a 32-bit process running on a 64-bit operating system (aka WOW64):

    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                     _T("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"),
                     0, KEY_READ, &hKey) == ERROR_SUCCESS)
    {
        LPSTR szArch = new CHAR[100];
        ZeroMemory(szArch, 100);
        if (RegQueryValueEx(hKey, _T("PROCESSOR_ARCHITECTURE"), NULL, NULL,
                            (LPBYTE)szArch, &dwSize) == ERROR_SUCCESS)
        {
            if (strcmp(szArch, "AMD64") == 0)
                this->nArchitecture = 64;
            else
                this->nArchitecture = 32;
        }
        else
        {
            this->nArchitecture = (sizeof(PVOID) == 4 ? 32 : 64);
        }
        RegCloseKey(hKey);
    }
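The "check the pointer size, not the architecture name" advice has analogues in other languages too. As an illustration, a Python process can report its own pointer width (the constant name POINTER_BITS is invented for this sketch):

```python
import struct
import sys

# Pointer size of the running interpreter, analogous to sizeof(void*):
POINTER_BITS = struct.calcsize("P") * 8
assert POINTER_BITS in (32, 64)

# Equivalent check via sys.maxsize, which is 2**31 - 1 on 32-bit builds
# and 2**63 - 1 on 64-bit builds:
assert (sys.maxsize > 2**32) == (POINTER_BITS == 64)
```

As with the WOW64 caveat above, this reports the width of the *process*, not of the operating system it happens to be running on.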
{ "language": "en", "url": "https://stackoverflow.com/questions/163058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Coupling, Cohesion and the Law of Demeter

The Law of Demeter indicates that you should only speak to objects that you know about directly. That is, do not perform method chaining to talk to other objects. When you do so, you are establishing improper linkages with the intermediary objects, inappropriately coupling your code to other code. That's bad.

The solution would be for the class you do know about to expose simple wrappers that delegate the responsibility to the object it has the relationship with. That's good.

But that seems to result in the class having low cohesion. No longer is it responsible for precisely what it does; it also carries delegates that, in a sense, make the code less cohesive by duplicating portions of the interface of its related object. That's bad.

Does it really result in lower cohesion? Is it the lesser of two evils? Is this one of those gray areas of development where you can debate where the line is, or are there strong, principled ways of making a decision about where to draw the line, and what criteria can you use to make that decision?

A: I think you may have misunderstood what cohesion means. A class that is implemented in terms of several other classes does not necessarily have low cohesion, as long as it represents a clear concept and has a clear purpose. For example, you may have a class Person, which is implemented in terms of the classes Date (for date of birth), Address, and Education (a list of schools the person went to). You may provide wrappers in Person for getting the year of birth, the last school the person went to, or the state where he lives, to avoid exposing the fact that Person is implemented in terms of those other classes. This would reduce coupling, but it would make Person no less cohesive.

A: Grady Booch in "Object Oriented Analysis and Design": "The idea of cohesion also comes from structured design.
Simply stated, cohesion measures the degree of connectivity among the elements of a single module (and for object-oriented design, a single class or object). The least desirable form of cohesion is coincidental cohesion, in which entirely unrelated abstractions are thrown into the same class or module. For example, consider a class comprising the abstractions of dogs and spacecraft, whose behaviors are quite unrelated. The most desirable form of cohesion is functional cohesion, in which the elements of a class or module all work together to provide some well-bounded behavior. Thus, the class Dog is functionally cohesive if its semantics embrace the behavior of a dog, the whole dog, and nothing but the dog." Substitute Dog with Customer in the above and it might be a bit clearer. So the goal is really just to aim for functional cohesion and to move away from coincidental cohesion as much as possible. Depending on your abstractions, this may be simple or could require some refactoring. Note that cohesion applies just as much to a "module" as to a single class, i.e. a group of classes working together. So in this case the Customer and Order classes still have decent cohesion because they have this strong relationship: customers create orders, orders belong to customers. Martin Fowler says he'd be more comfortable calling it the "Suggestion of Demeter" (see the article Mocks aren't stubs): "Mockist testers do talk more about avoiding 'train wrecks' - method chains in the style of getThis().getThat().getTheOther(). Avoiding method chains is also known as following the Law of Demeter. While method chains are a smell, the opposite problem of middle men objects bloated with forwarding methods is also a smell. 
(I've always felt I'd be more comfortable with the Law of Demeter if it were called the Suggestion of Demeter.)" That sums up nicely where I'm coming from: it is perfectly acceptable and often necessary to have a lower level of cohesion than strict adherence to the "law" might require. Avoid coincidental cohesion and aim for functional cohesion, but don't get hung up on tweaking where needed to fit in more naturally with your design abstraction. A: It's a grey area. These principles are meant to help you in your work; if you find you're working for them (i.e. they're getting in your way and/or you find they over-complicate your code) then you're conforming too hard and you need to back off. Make them work for you, don't work for them. A: If you are violating the Law of Demeter by having int price = customer.getOrder().getPrice(); the solution is not to create a getOrderPrice() and transform the code into int price = customer.getOrderPrice(); but instead to note that this is a code smell and make the relevant changes that hopefully both increase cohesion and lower coupling. Unfortunately there is no simple refactoring here that always applies, but you should probably apply tell don't ask A: I don't know if this actually lowers cohesion. Aggregation/composition are all about a class utilising other classes to meet the contract it exposes through its public methods. The class does not need to duplicate the interface of its related objects. It's actually hiding any knowledge about these aggregated classes from the method caller. To obey the Law of Demeter in the case of multiple levels of class dependency, you just need to apply aggregation/composition and good encapsulation at each level. In other words, each class has one or more dependencies on other classes; however, these are only ever dependencies on the referenced class and not on any objects returned from properties/methods. 
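The customer.getOrder().getPrice() discussion above can be sketched concretely. Below is a minimal Java sketch of the "tell, don't ask" alternative; the class and method names (Customer, Order, canAfford) are invented for illustration, not taken from any answer's code:

```java
// Hypothetical classes illustrating "tell, don't ask" as an alternative
// to chaining: the question is answered where the data lives.
class Order {
    private final double price;
    Order(double price) { this.price = price; }
    double getPrice() { return price; }
}

class Customer {
    private final Order order;
    Customer(Order order) { this.order = order; }

    // Chained version (violates the Law of Demeter):
    //   double price = customer.getOrder().getPrice();
    Order getOrder() { return order; }

    // "Tell, don't ask": Customer answers the question itself, so callers
    // never navigate through the Order object.
    boolean canAfford(double budget) { return order.getPrice() <= budget; }
}

public class Demo {
    public static void main(String[] args) {
        Customer customer = new Customer(new Order(40.0));
        System.out.println(customer.canAfford(50.0)); // true
        System.out.println(customer.canAfford(30.0)); // false
    }
}
```

Note that canAfford carries a responsibility that plausibly belongs to Customer, rather than simply mirroring Order's interface with a getOrderPrice() forwarder, which is the "middle man" smell Fowler mentions.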
A: In the situations where there seems to be a tradeoff between coupling and cohesion, I'd probably ask myself "if somebody else had already written this logic, and I were looking for a bug in it, where would I look first?", and write the code that way.
{ "language": "en", "url": "https://stackoverflow.com/questions/163071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70" }
Q: Exporting from SQLite to SQL Server Is there a tool to migrate an SQLite database to SQL Server (both the structure and data)? A: The SQLite .dump command will output the entire contents of the database as an ASCII text file. This file is in standard SQL format, so it can be imported into any SQL database. More details on this page: sqlite3 A: sqlite-manager, a Firefox add-on, allows you to export an SQLite database as a SQL script. Data Base>Export Database>Export to file (Correction: a Firefox 35 bug requires fixing the extension code as indicated on the following web page: How to fix your optional sqlite manager module to work) Command line: sqlite3 DB_name .dump > DB_name.sql exports the SQLite database as a SQL script. From: http://doc.ubuntu-fr.org/sqlite. A: SQLite does have a .dump option to run at the command line. Though I prefer to use the SQLite Database Browser application for managing SQLite databases. You can export the structure and contents to a .sql file that can be read by just about anything. File > Export > Database to SQL file. A: I know that this is an old thread, but I think that this solution should also be here. * *Install the ODBC driver for SQLite *Run odbcad32 for x64 or C:\Windows\SysWOW64\odbcad32.exe for x86 *Create a SYSTEM DSN, where you select SQLite3 ODBC Driver *Then fill in the form, where Database Name is the file path to the SQLite database Then, in SQL Server, run as sysadmin USE [master] GO EXEC sp_addlinkedserver @server = 'OldSQLite', -- connection name @srvproduct = '', -- Can be blank but not NULL @provider = 'MSDASQL', @datasrc = 'SQLiteDNSName' -- name of the system DSN connection GO Then you can run your queries as a normal user, e.g. SELECT * INTO SQLServerDATA FROM openquery(SQLiteDNSName, 'select * from SQLiteData') or you can use something like this for larger tables. A: An idea is to do something like this: - View the schema in SQLite and get the CREATE TABLE command. 
- Execute it, after parsing the SQL, in SQL Server - Walk the data, creating an INSERT statement for each row (parsing the SQL too) This code is beta, because it does not detect data types and does not use @parameters or a command object, but it runs. (You need to add a reference to, and install, System.Data.SQLite.) C#: Insert this code (or what is necessary) at the head of the .cs file using System; using System.Collections.Generic; using System.Text; using System.Data; using System.Data.SqlClient; using System.Data.SQLite; using System.Threading; using System.Text.RegularExpressions; using System.IO; using log4net; using System.Net; public static Boolean SqLite2SqlServer(string sqlitePath, string connStringSqlServer) { String SqlInsert; int i; try { string sql = "select * from sqlite_master where type = 'table' and name like 'YouTable in SQL'"; string password = null; string sql2run; string tabla; string sqliteConnString = CreateSQLiteConnectionString(sqlitePath, password); //sqliteConnString = "data source=C:\\pro\\testconverter\\Origen\\FACTUNETWEB.DB;page size=4096;useutf16encoding=True"; using (SQLiteConnection sqconn = new SQLiteConnection(sqliteConnString)) { sqconn.Open(); SQLiteCommand command = new SQLiteCommand(sql, sqconn); SQLiteDataReader reader = command.ExecuteReader(); SqlConnection conn = new SqlConnection(connStringSqlServer); conn.Open(); while (reader.Read()) { //Console.WriteLine("Name: " + reader["name"] + "\tScore: " + reader["score"]); sql2run = "" + reader["sql"]; tabla = "" + reader["name"]; /* sql2run = "Drop table " + tabla; SqlCommand cmd = new SqlCommand(sql2run, conn); cmd.ExecuteNonQuery(); */ sql2run = sql2run.Replace("COLLATE NOCASE", ""); sql2run = sql2run.Replace(" NUM", " TEXT"); SqlCommand cmd2 = new SqlCommand(sql2run, conn); cmd2.ExecuteNonQuery(); // insert the data 
string sqlCmd = "Select * From " + tabla; SQLiteCommand cmd = new SQLiteCommand(sqlCmd, sqconn); SQLiteDataReader rs = cmd.ExecuteReader(); String valor = ""; String Valores = ""; String Campos = ""; String Campo = ""; while (rs.Read()) { SqlInsert = "INSERT INTO " + tabla; Campos = ""; Valores = ""; for ( i = 0; i < rs.FieldCount ; i++) { //valor = "" + rs.GetString(i); //valor = "" + rs.GetName(i); Campo = "" + rs.GetName(i); valor = "" + rs.GetValue(i); if (Valores != "") { Valores = Valores + ','; Campos = Campos + ','; } Valores = Valores + "'" + valor + "'"; Campos = Campos + Campo; } SqlInsert = SqlInsert + "(" + Campos + ") Values (" + Valores + ")"; SqlCommand cmdInsert = new SqlCommand(SqlInsert, conn); cmdInsert.ExecuteNonQuery(); } } } return true; } //END TRY catch (Exception ex) { _log.Error("unexpected exception", ex); throw; } // catch } A: For Android. adb root adb shell cd /data/com.xxx.package/databases/ sqlite3 db_name .dump >dump.sql
{ "language": "en", "url": "https://stackoverflow.com/questions/163079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Setting default values (conditional assignment) In Ruby you can easily set a default value for a variable x ||= "default" The above statement will set the value of x to "default" if x is nil or false. Is there a similar shortcut in PHP or do I have to use the longer form: $x = (isset($x)) ? $x : "default"; Are there any easier ways to handle this in PHP? A: isset($x) or $x = 'default'; A: As of PHP 5.3 you can use the ternary operator while omitting the middle argument: $x = $x ?: 'default'; A: As of PHP 7.4 you can write: $x ??= "default"; This works as long as $x is null. Other "falsy" values don't count as "not set". A: I wrap it in a function (note that default is a reserved word in PHP, so the function needs a different name): function default_value($value, $default) { return $value ? $value : $default; } // then use it like: $x = default_value($x, 'default'); Some people may not like it, but it keeps your code cleaner if you're doing a crazy function call. A: As of PHP 7.0, you can also use the null coalesce operator // PHP version < 7.0, using a standard ternary $x = (isset($_GET['y'])) ? $_GET['y'] : 'not set'; // PHP version >= 7.0 $x = $_GET['y'] ?? 'not set'; A: I think your longer form is already the shortcut in PHP... and I wouldn't use it, because it is not easy to read. One note: in the Symfony framework most of the "get" methods have a second parameter to define a default value...
{ "language": "en", "url": "https://stackoverflow.com/questions/163092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: How do I shrink the transaction log on MS SQL 2000 databases? I have several databases where the transaction log (.LDF) is many times larger than the database file (.MDF). What can I do to automatically shrink these or keep them from getting so large? A: That should do the job use master go dump transaction <YourDBName> with no_log go use <YourDBName> go DBCC SHRINKFILE (<YourDBNameLogFileName>, 100) -- where 100 is the size you may want to shrink it to in MB, change it to your needs go -- then you can call to check that all went fine dbcc checkdb(<YourDBName>) A word of warning You would only really use it on a test/development database where you do not need a proper backup strategy, as dumping the log will result in losing transaction history. In live systems you should use the solution suggested by Cade Roux A: Backup the transaction log and shrink it. If the DB is being backed up regularly and truncated on checkpoint, it shouldn't grow out of control; however, if you are doing a large number (size) of transactions between those intervals, it will grow until the next checkpoint. A: Right click on the database in Enterprise Manager > All Tasks > Shrink Database. A: DBCC SHRINKFILE. Here for 2005. Here for 2000. A: No one here said it, so I will: NEVER EVER shrink the transaction log. It is a bad idea from the SQL Server point of view. Keep the transaction log small by doing daily db backups and hourly (or less) transaction log backups. The transaction log backup interval depends on how busy your db is. A: Another thing you can try is to set the recovery mode to simple (if it is not already) for the database, which will keep the log files from growing as rapidly. We had this problem recently where our transaction log filled up and no more transactions were permitted. A combination of the shrink file which is in multiple answers and simple recovery mode made sure our log file stayed a reasonable size. 
A: Using Query Analyser: USE yourdatabase SELECT * FROM sysfiles You should find something similar to: FileID … 1 1 24264 -1 1280 1048578 0 yourdatabase_Data D:\MSSQL_Services\Data\yourdatabase_Data.MDF 2 0 128 -1 1280 66 0 yourdatabase_Log D:\MSSQL_Services\Data\yourdatabase_Log.LDF Check the file ID of the log file (it's 2 most of the time). Execute the CHECKPOINT command 2 or 3 times to write every page to the hard drive. Checkpoint GO Checkpoint GO Execute the following transactional command to truncate the log file to 1 MB DUMP TRAN yourdatabase WITH no_log DBCC SHRINKFILE(2,1) /* (FileID, the new size = 1 MB) */ A: Here is what I have been using BACKUP LOG <CatalogName> with TRUNCATE_ONLY DBCC SHRINKDATABASE (<CatalogName>, 1) use <CatalogName> go DBCC SHRINKFILE(<CatalogName_logName>,1) A: try sp_force_shrink_log, which you can find here: http://www.rectanglered.com/sqlserver.php
{ "language": "en", "url": "https://stackoverflow.com/questions/163098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I grab events from sub-controls on a user-control in a WinForms App? Is there any way for the main form to be able to intercept events firing on a subcontrol on a user control? I've got a custom user-control embedded in the main Form of my application. The control contains various subcontrols that manipulate data, which itself is displayed by other controls on the main form. What I'd like is if the main form could be somehow informed when the user changes subcontrols, so I could update the data and the corresponding display elsewhere. Right now, I am cheating. I have a delegate hooked up to the focus-leaving event of the subcontrols. This delegate changes a property of the user-control I'm not using elsewhere (in this case, CausesValidation). I then have a delegate defined on the main form for when the CausesValidation property of the user control changes, which then directs the app to update the data and display. A problem arises because I also have a delegate set up for when focus leaves the user-control, because I need to validate the fields in the user-control before I can allow the user to do anything else. However, if the user is just switching between subcontrols, I don't want to validate, because they might not be done editing. Basically, I want the data to update when the user switches subcontrols OR leaves the user control, but not validate. When the user leaves the control, I want to update AND validate. Right now, leaving the user-control causes validation to fire twice. A: The best model for this sort of thing will be creating custom events on your user control, and raising them at the appropriate times. Your scenario is fairly complex, but not unheard of. (I'm actually in a very similar mode on one of my current projects.) The way I approach it is that the user control is responsible for its own validation. 
I don't use CausesValidation; instead, at the appropriate user control point, I perform validation through an override of ValidateChildren(). (This usually happens when the user clicks "Save" or "Next" on the user control, for me.) Not being familiar with your user control UI, that may not be 100% the right approach for you. However, if you raise custom events (possibly with a custom EventArgs which specifies whether or not to perform validation), you should be able to get where you want to be. A: You will need to wire up the events you care about capturing inside your user control, and publish them through some custom event properties on the user control itself. A simple example would be wrapping a button click event: // CustomControl.cs // Assumes a Button 'myButton' has been added through the designer // we need a delegate definition to type our event public delegate void ButtonClickHandler(object sender, EventArgs e); // declare the public event that other classes can subscribe to public event ButtonClickHandler ButtonClickEvent; // wire up the internal button click event to trigger our custom event this.myButton.Click += new System.EventHandler(this.myButton_Click); public void myButton_Click(object sender, EventArgs e) { if (ButtonClickEvent != null) { ButtonClickEvent(sender, e); } } Then, in the Form that uses that control, you wire up the event like you would any other: // CustomForm.cs // Assumes a CustomControl 'myCustomControl' has been added through the designer this.myCustomControl.ButtonClickEvent += new ButtonClickHandler(this.myCustomControl_ButtonClickEvent); private void myCustomControl_ButtonClickEvent(object sender, EventArgs e) { // do something with the newly bubbled event } A: The best practice would be to expose events on the UserControl that bubble the events up to the parent form. I have gone ahead and put together an example for you. Here is a description of what this example provides. 
* *UserControl1 *Create a UserControl with TextBox1 *Register a public event on the UserControl called ControlChanged *Within the UserControl register an event handler for the TextBox1 TextChanged event *Within the TextChanged event handler I call the ControlChanged event to bubble to the parent form *Form1 *Drop an instance of UserControl1 on the designer *Register an event handler on UserControl1 for MouseLeave and for ControlChanged Here is a screenshot illustrating that the ControlChanged event that I defined on the UserControl is available through the UX in Visual Studio on the parent Windows form. A: In case someone is still wondering how to simulate event bubbling in WinForms, the method Application.AddMessageFilter is a good place to look at. Using this method you can install your own filter which monitors all messages being posted to the current thread's message queue. You should be aware that messages being sent (not posted) cannot be handled by this filter. Fortunately, most interesting events (like click events) are posted and not sent, and can therefore be monitored by this filter. A: I would like to chime in that, as described, it actually sounds like you're chasing a red herring. While it seems like you have a situation where the lack of event bubbling in WinForms is causing you trouble, the reality is that a poor architecture is forcing you into needing event bubbling when you shouldn't. If you can refactor/restructure your design such that the controls are working with a common data model (MVC/MVP are obvious choices) then you can simply apply common WinForms patterns like PropertyChanged events on the model to tell your main form and any other controls which consume that data to update themselves. In short, the other answers are reasonable in that they answer the question as asked. But from a code quality standpoint, I think the better answer is to separate your data from the UI.
{ "language": "en", "url": "https://stackoverflow.com/questions/163104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Can CLIENT-CERT auth-method be used with a JDBC realm within tomcat? The JDBC realm specifies a table structure for authentication which contains the columns defined by the attributes userNameCol and userCredCol. These correspond to user and password, which makes sense for FORM or BASIC auth-methods. They are interactive and require these two pieces from the client's user. * *What comes back from the certificate? *What would an example of the data stored in userNameCol and userCredCol look like? *Is there an alternative table structure for the realm in this case? PS - I'm using tomcat 5.5.x. A: JDBCRealm Supports CLIENT-CERT Yes, it can. However, there are a few quirks to watch out for. User Names The user name column should contain the certificate subject's distinguished name, as a character string. Unfortunately, the method Tomcat uses to obtain this string produces an implementation-dependent result, so it's possible that if you were to switch to a new security provider or even just upgrade your Java runtime, you might need to map your user names to a new form. You'll have to test your deployment to find out what format is used. Specifically, getName() is called on the Principal returned by X509Certificate.getSubjectDN() to obtain a String, which is used as the user name. If you read the documentation, you'll find that this is no longer the best approach. Authentication The simplest setup would be to load your trust anchors into Tomcat's trust store, which is configured in the "server.xml" file. With this setup, any client certificate chain that is rooted in one of your trusted CAs will be considered "authenticated," and rightly so: authentication means that an identity is known, and is distinct from authorization, which determines what that identity is allowed to do. Authorization Since anyone with a signed certificate will be authenticated, you need to set up roles in order to protect private resources in your application. 
This is done by setting up security constraints, associated with roles, in your "web.xml" file. Then, in your database, populate the "roles" table to grant trusted users extra roles. The relationship between the user table and the roles table works exactly as it would with FORM-based authorization, and should be utilized to grant appropriate permissions to users that you trust. A Note on Passwords The JDBCRealm will create a new Principal, which does carry a password, but unless your application downcasts this Principal to the Tomcat-specific implementation (GenericPrincipal), this property won't be visible to you, and it doesn't really matter what you put in that column. I recommend NULL. In other words, when using JDBCRealm with client-auth, the password field is ignored. This GenericPrincipal has a method to access an underlying principal, but unfortunately, the Principal from the certificate is not passed along; the JDBCRealm will set it to null; the only useful method in this scenario might be getName() (returning the subject DN in some possibly non-standard form). Table Structure and Content Use exactly the same table structure you would for a FORM-based JDBCRealm (or DatasourceRealm). The only difference will be in the content. The user name will be a text representation of the subject distinguished name, and the password will be NULL or some dummy value.
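For concreteness, a schema sketch along the lines of the conventional JDBCRealm tables from the Tomcat documentation is shown below. For CLIENT-CERT, the user_name column must be wide enough to hold a subject DN string, and user_pass can be left NULL as described above. The column sizes here are illustrative, not prescribed:

```sql
-- Sketch of the conventional JDBCRealm tables; for CLIENT-CERT the
-- user name holds the certificate subject's DN and the password is unused.
CREATE TABLE users (
  user_name VARCHAR(255) NOT NULL PRIMARY KEY, -- subject DN string
  user_pass VARCHAR(255) NULL                  -- ignored for CLIENT-CERT
);

CREATE TABLE user_roles (
  user_name VARCHAR(255) NOT NULL,
  role_name VARCHAR(50)  NOT NULL,
  PRIMARY KEY (user_name, role_name)
);
```

The table and column names must match whatever you configure in the realm's userTable, userNameCol, userCredCol, userRoleTable, and roleNameCol attributes.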
{ "language": "en", "url": "https://stackoverflow.com/questions/163113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Breakpoint not hooked up when debugging in VS.Net 2005 Been running into this problem lately... When debugging an app in VS.Net 2005, breakpoints are not connected. The error indicates that the compiled code is not the same as the running version and therefore there's a mismatch that causes the breakpoint to be disconnected. Cleaning the solution of all bin files and re-compiling doesn't help. It's not just happening on a single box or to a single person either. Added Note: This solution is in TFS for Source Control. If I delete my local TFS repository and get it from source control from scratch, SOMETIMES the problem goes away. I've also tried un-installing and re-installing Visual Studio. That also SOMETIMES helps. The fact that both of those work some of the time indicates that the problem isn't caused by either directly. A: Maybe this suggestion might help: * *While debugging in Visual Studio, click on Debug > Windows > Modules. The IDE will dock a Modules window, showing all the modules that have been loaded for your project. *Look for your project's DLL, and check the Symbol Status for it. *If it says Symbols Loaded, then you're golden. If it says something like Cannot find or open the PDB file, right-click on your module, select Load Symbols, and browse to the path of your PDB. I've found that it's sometimes necessary to: * *stop the debugger *close the IDE *close the hosting application *nuke the obj and bin folders *restart the IDE *rebuild the project *go through the Modules window again *Once you browse to the location of your PDB file, the Symbol Status should change to Symbols Loaded, and you should now be able to set and catch a breakpoint at your line in code. Source: The breakpoint will not currently be hit. No symbols have been loaded for this document. A: http://dpotter.net/Technical/2009/05/upgrading-to-ie8-breaks-debugging-with-visual-studio-2005/ A: In Options -> Debugging you can uncheck "require source files to exactly match the original version", which may help. 
A: Is the build configuration set to Release? Do you have a reference to an external DLL where the breakpoint is set? A: Are you creating a DLL project that is consumed by an external executable? Are you using .NET or COM? If you are using COM Interop with .NET, the DLL versions can sometimes be a problem when the executable loads the DLL. For instance, if your daily build cranks out an incrementing build number but your debug DLL has a smaller build number, the executable won't load the debug DLL. To fix this, you will need to scan the HKEY_CLASSES_ROOT\CLSID directory in your registry for the GUID/CLSID of your .NET/COM component. Under InprocServer32, delete entries with a higher version number than your debug DLL. Again, the above only applies to .NET + COM Interop DLLs. A: I've had a similar problem in the past. It was solved by closing Visual Studio, deleting the temporary ASP.NET generated assembly files for the project under "C:\WINDOWS\Microsoft.NET\Framework{framework version}\Temporary ASP.NET Files", and re-opening the project. Read the post here and the comments to resolve it. A: AviewAnew - had already done that at the request of the MS tech person. It didn't help to uncheck "require source files to exactly match the original version". Mike L - configuration is set to DEBUG and there are no external DLLs. Using all local projects except framework references. A: Are you sure the .pdb files are in the same folder as the executable you are running? Make sure the last modified dates of both files match, and that VS is attached to that exe (and no other). A: Do you have a post-build step that touches your binaries in any way? If so, this can confuse the debugger and make it look like your symbols don't match your exe/dll because of the incorrect size/timestamp. 
A: In the past I have sometimes found that switching off compiler optimisations can solve 'missing' breakpoints, as the optimiser had determined (correctly) that the code was not being called, and removed it from the compiled versions. This does sound like a different issue, but it might be worth making sure that optimisation is switched off in Debug mode. [Project / Properties, Build settings tab] A: Are you sure there are no Debug attributes on the code that prevent the code from being debugged, such as DebuggerHidden or DebuggerStepThrough, at any point in the application? A: Can you step through your code up to the line of the breakpoint instead of running and waiting for it to hit? Can you step through code at all?
{ "language": "en", "url": "https://stackoverflow.com/questions/163133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Where is the best place to re-learn graphics programming Thinking in regards to Silverlight, I would like to know where would be good places to go to get a refresher on 3D space, transforms, matrix manipulation, and all that good stuff. A: There's always The Bible. It is expensive and very heavy on the theory, so there's also the cheaper Bible Lite. As pointed out in some comments and additional answers, it is definitely worth noting that this book is now quite dated. However, in the context of the original question, there's not really been any change in the low-level principles of linear algebra in a seriously long time. If you are looking to learn about high-level graphics programming this may well not be the first book for you. But if you like to know about "the guts-of-the-machine" and the underlying maths -- perhaps you are the kind of person that thinks folk should learn C :-) -- then go nuts. A: It's not a place, but I've found 3D Programming for Windows by Charles Petzold excellent. It covers everything you ask about and is focused specifically on WPF/Silverlight. Of course Petzold (as usual) is able to communicate the important concepts beautifully. A: Think I may have found it myself. Was looking at: http://msdn.microsoft.com/en-us/library/cc189037(VS.95).aspx and http://www.c-sharpcorner.com/UploadFile/mgold/TransformswithGDIplus09142005064919AM/TransformswithGDIplus.aspx A: Free graphics algorithms can be found in the comp.graphics.algorithms FAQ A: As previously mentioned, you should really learn linear algebra; here are some great video lectures about it: MIT Linear Algebra Video Lectures. A: Any linear algebra textbook should provide the math refresher; there's a fairly good one available online at the Linear Algebra textbook home page. A: Personally I think that although the Bible (by Foley & van Dam, that is) was the greatest book for its time, it is now somewhat outdated. 
I would suggest 'Advanced Animation and Rendering Techniques' by Alan Watt and Mark Watt. The only problem with this book is that, although it gives you a good understanding of almost every broad aspect of CG, it assumes you have some familiarity with the field and does not explain everything all the way. You can always have a look in the bibliography and find in-depth articles and books about each subject you are interested in. If you want to go further once you have more understanding, or if you want to dive into the world of computer graphics and GPU programming, I suggest having a look at the three 'GPU Gems' books.
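As a tiny refresher on the matrix manipulation the question asks about, here is a minimal sketch of the core operation behind 3D transforms: multiplying a 4x4 homogeneous matrix by a point. The class and method names here (TransformDemo, apply) are invented for illustration, not taken from Silverlight's API:

```java
// Minimal refresher: apply a 4x4 homogeneous translation matrix to a
// 3D point, the operation underlying transforms in most graphics APIs.
public class TransformDemo {
    // Multiply a 4x4 matrix (row-major) by the column vector [x, y, z, 1].
    static double[] apply(double[][] m, double x, double y, double z) {
        double[] v = { x, y, z, 1.0 };
        double[] out = new double[4];
        for (int row = 0; row < 4; row++) {
            for (int col = 0; col < 4; col++) {
                out[row] += m[row][col] * v[col];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Translation by (5, -2, 3): the identity matrix with the offset
        // in the last column; the homogeneous w = 1 makes it take effect.
        double[][] translate = {
            { 1, 0, 0,  5 },
            { 0, 1, 0, -2 },
            { 0, 0, 1,  3 },
            { 0, 0, 0,  1 }
        };
        double[] p = apply(translate, 1, 1, 1);
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]); // 6.0, -1.0, 4.0
    }
}
```

Rotation, scaling, and perspective matrices plug into the same multiply, and composing transforms is just multiplying the matrices together, which is why the books above spend so much time on linear algebra.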
{ "language": "en", "url": "https://stackoverflow.com/questions/163146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Can you call Directory.GetFiles() with multiple filters? I am trying to use the Directory.GetFiles() method to retrieve a list of files of multiple types, such as mp3's and jpg's. I have tried both of the following with no luck: Directory.GetFiles("C:\\path", "*.mp3|*.jpg", SearchOption.AllDirectories); Directory.GetFiles("C:\\path", "*.mp3;*.jpg", SearchOption.AllDirectories); Is there a way to do this in one call? A: Nope. Try the following: List<string> _searchPatternList = new List<string>(); ... List<string> fileList = new List<string>(); foreach ( string ext in _searchPatternList ) { foreach ( string subFile in Directory.GetFiles( folderName, ext ) ) { fileList.Add( subFile ); } } // Sort alphabetically fileList.Sort(); // Add files to the file browser control foreach ( string fileName in fileList ) { ...; } Taken from: http://blogs.msdn.com/markda/archive/2006/04/20/580075.aspx A: How about this: private static string[] GetFiles(string sourceFolder, string filters, System.IO.SearchOption searchOption) { return filters.Split('|').SelectMany(filter => System.IO.Directory.GetFiles(sourceFolder, filter, searchOption)).ToArray(); } I found it here (in the comments): http://msdn.microsoft.com/en-us/library/wz42302f.aspx A: For .NET 4.0 and later, var files = Directory.EnumerateFiles("C:\\path", "*.*", SearchOption.AllDirectories) .Where(s => s.EndsWith(".mp3") || s.EndsWith(".jpg")); For earlier versions of .NET, var files = Directory.GetFiles("C:\\path", "*.*", SearchOption.AllDirectories) .Where(s => s.EndsWith(".mp3") || s.EndsWith(".jpg")); edit: Please read the comments. The improvement that Paul Farry suggests, and the memory/performance issue that Christian.K points out are both very important. A: I can't use the .Where method because I'm programming in .NET Framework 2.0 (Linq is only supported in .NET Framework 3.5+). The code below is not case sensitive (so .CaB or .cab will be listed too). 
string[] ext = new string[2] { "*.CAB", "*.MSU" }; foreach (string found in ext) { string[] extracted = Directory.GetFiles("C:\\test", found, System.IO.SearchOption.AllDirectories); foreach (string file in extracted) { Console.WriteLine(file); } } A: List<string> FileList = new List<string>(); DirectoryInfo di = new DirectoryInfo("C:\\DirName"); IEnumerable<FileInfo> fileList = di.GetFiles("*.*"); //Create the query IEnumerable<FileInfo> fileQuery = from file in fileList where (file.Extension.ToLower() == ".jpg" || file.Extension.ToLower() == ".png") orderby file.LastWriteTime select file; foreach (System.IO.FileInfo fi in fileQuery) { fi.Attributes = FileAttributes.Normal; FileList.Add(fi.FullName); } A: DirectoryInfo directory = new DirectoryInfo(Server.MapPath("~/Contents/")); //Using Union FileInfo[] files = directory.GetFiles("*.xlsx") .Union(directory .GetFiles("*.csv")) .ToArray(); A: In .NET 2.0 (no Linq): public static List<string> GetFilez(string path, System.IO.SearchOption opt, params string[] patterns) { List<string> filez = new List<string>(); foreach (string pattern in patterns) { filez.AddRange( System.IO.Directory.GetFiles(path, pattern, opt) ); } // filez.Sort(); // Optional return filez; // Optional: .ToArray() } Then use it: foreach (string fn in GetFilez(path , System.IO.SearchOption.AllDirectories , "*.xml", "*.xml.rels", "*.rels")) {} A: If you are using VB.NET (or imported the dependency into your C# project), there actually exists a convenience method that allows you to filter for multiple extensions: Microsoft.VisualBasic.FileIO.FileSystem.GetFiles("C:\\path", Microsoft.VisualBasic.FileIO.SearchOption.SearchAllSubDirectories, new string[] {"*.mp3", "*.jpg"}); In VB.NET this can be accessed through the My-namespace: My.Computer.FileSystem.GetFiles("C:\path", FileIO.SearchOption.SearchAllSubDirectories, {"*.mp3", "*.jpg"}) Unfortunately, these convenience methods don't support a lazily evaluated variant like Directory.EnumerateFiles() does.
A: If you have a large list of extensions to check you can use the following. I didn't want to create a lot of OR statements so I modified what lette wrote. string supportedExtensions = "*.jpg,*.gif,*.png,*.bmp,*.jpe,*.jpeg,*.wmf,*.emf,*.xbm,*.ico,*.eps,*.tif,*.tiff,*.g01,*.g02,*.g03,*.g04,*.g05,*.g06,*.g07,*.g08"; foreach (string imageFile in Directory.GetFiles(_tempDirectory, "*.*", SearchOption.AllDirectories).Where(s => supportedExtensions.Contains(Path.GetExtension(s).ToLower()))) { //do work here } A: The following function searches on multiple patterns, separated by commas. You can also specify an exclusion, e.g. "!web.config" will search for all files and exclude "web.config". Patterns can be mixed. private string[] FindFiles(string directory, string filters, SearchOption searchOption) { if (!Directory.Exists(directory)) return new string[] { }; var include = (from filter in filters.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries) where !string.IsNullOrEmpty(filter.Trim()) select filter.Trim()); var exclude = (from filter in include where filter.Contains(@"!") select filter); include = include.Except(exclude); if (include.Count() == 0) include = new string[] { "*" }; var rxfilters = from filter in exclude select string.Format("^{0}$", filter.Replace("!", "").Replace(".", @"\.").Replace("*", ".*").Replace("?", ".")); Regex regex = new Regex(string.Join("|", rxfilters.ToArray())); List<Thread> workers = new List<Thread>(); List<string> files = new List<string>(); foreach (string filter in include) { Thread worker = new Thread( new ThreadStart( delegate { string[] allfiles = Directory.GetFiles(directory, filter, searchOption); if (exclude.Count() > 0) { lock (files) files.AddRange(allfiles.Where(p => !regex.Match(p).Success)); } else { lock (files) files.AddRange(allfiles); } } )); workers.Add(worker); worker.Start(); } foreach (Thread worker in workers) { worker.Join(); } return files.ToArray(); } Usage: foreach (string file in
FindFiles(@"D:\628.2.11", @"!*.config, !*.js", SearchOption.AllDirectories)) { Console.WriteLine(file); } A: What about string[] filesPNG = Directory.GetFiles(path, "*.png"); string[] filesJPG = Directory.GetFiles(path, "*.jpg"); string[] filesJPEG = Directory.GetFiles(path, "*.jpeg"); int totalArraySizeAll = filesPNG.Length + filesJPG.Length + filesJPEG.Length; List<string> filesAll = new List<string>(totalArraySizeAll); filesAll.AddRange(filesPNG); filesAll.AddRange(filesJPG); filesAll.AddRange(filesJPEG); A: For var exts = new[] { "mp3", "jpg" }; you could: public IEnumerable<string> FilterFiles(string path, params string[] exts) { return Directory .EnumerateFiles(path, "*.*") .Where(file => exts.Any(x => file.EndsWith(x, StringComparison.OrdinalIgnoreCase))); } * *Don't forget the new .NET4 Directory.EnumerateFiles for a performance boost (What is the difference between Directory.EnumerateFiles vs Directory.GetFiles?) *"IgnoreCase" should be faster than "ToLower" (.EndsWith("aspx", StringComparison.OrdinalIgnoreCase) rather than .ToLower().EndsWith("aspx")) But the real benefit of EnumerateFiles shows up when you split up the filters and merge the results: public IEnumerable<string> FilterFiles(string path, params string[] exts) { return exts.Select(x => "*." + x) // turn into globs .SelectMany(x => Directory.EnumerateFiles(path, x) ); } It gets a bit faster if you don't have to turn them into globs (i.e. exts = new[] {"*.mp3", "*.jpg"} already). Performance evaluation based on the following LinqPad test (note: Perf just repeats the delegate 10000 times) https://gist.github.com/zaus/7454021 ( reposted and extended from 'duplicate' since that question specifically requested no LINQ: Multiple file-extensions searchPattern for System.IO.Directory.GetFiles ) A: Just found another way to do it. Still not one operation, but throwing it out to see what other people think about it.
private void getFiles(string path) { foreach (string s in Array.FindAll(Directory.GetFiles(path, "*", SearchOption.AllDirectories), predicate_FileMatch)) { Debug.Print(s); } } private bool predicate_FileMatch(string fileName) { if (fileName.EndsWith(".mp3")) return true; if (fileName.EndsWith(".jpg")) return true; return false; } A: I wonder why there are so many "solutions" posted? If my rookie understanding of how GetFiles works is right, there are only two options and any of the solutions above can be brought down to these: * *GetFiles, then filter: Fast, but a memory killer due to storing overhead until the filters are applied *Filter while GetFiles: Slower the more filters are set, but low memory usage as no overhead is stored. This is explained in one of the above posts with an impressive benchmark: each filter option causes a separate GetFiles operation, so the same part of the hard drive gets read several times. In my opinion Option 1) is better, but using SearchOption.AllDirectories on folders like C:\ would use huge amounts of memory. Therefore I would just make a recursive sub-method that goes through all subfolders using option 1). This should cause only 1 GetFiles operation on each folder and therefore be fast (Option 1), but use only a small amount of memory as the filters are applied after each subfolder's read -> overhead is deleted after each subfolder. Please correct me if I am wrong. I am, as I said, quite new to programming but want to gain deeper understanding of things to eventually become good at this :) A: Here is a simple and elegant way of getting filtered files: var allowedFileExtensions = ".csv,.txt"; var files = Directory.EnumerateFiles(@"C:\MyFolder", "*.*", SearchOption.TopDirectoryOnly) .Where(s => allowedFileExtensions.IndexOf(Path.GetExtension(s)) > -1).ToArray(); A: Nope... I believe you have to make as many calls as the file types you want.
I would create a function myself taking an array of strings with the extensions I need and then iterate on that array making all the necessary calls. That function would return a generic list of the files matching the extensions I'd sent. Hope it helps. A: /// <summary> /// Returns the names of files in the specified directories that match the specified patterns using LINQ /// </summary> /// <param name="srcDirs">The directories to search</param> /// <param name="searchPatterns">the list of search patterns</param> /// <param name="searchOption"></param> /// <returns>The list of files that match the specified pattern</returns> public static string[] GetFilesUsingLINQ(string[] srcDirs, string[] searchPatterns, SearchOption searchOption = SearchOption.AllDirectories) { var r = from dir in srcDirs from searchPattern in searchPatterns from f in Directory.GetFiles(dir, searchPattern, searchOption) select f; return r.ToArray(); } A: Make the extensions you want one string, i.e. ".mp3.jpg.wma.wmf", and then check if each file contains the extension you want. This works with .NET 2.0 as it does not use LINQ. string myExtensions=".jpg.mp3"; string[] files=System.IO.Directory.GetFiles(@"C:\myfolder"); foreach(string file in files) { if(myExtensions.ToLower().Contains(System.IO.Path.GetExtension(file).ToLower())) { //this file has passed, do something with this file } } The advantage with this approach is you can add or remove extensions without editing the code, i.e. to add png images, just write myExtensions=".jpg.mp3.png".
A: I had the same problem and couldn't find the right solution so I wrote a function called GetFiles: /// <summary> /// Get all files with a specific extension /// </summary> /// <param name="extensionsToCompare">string list of all the extensions</param> /// <param name="Location">string of the location</param> /// <returns>array of all the files with the specific extensions</returns> public string[] GetFiles(List<string> extensionsToCompare, string Location) { List<string> files = new List<string>(); foreach (string file in Directory.GetFiles(Location)) { if (extensionsToCompare.Contains(file.Substring(file.IndexOf('.')+1).ToLower())) files.Add(file); } files.Sort(); return files.ToArray(); } This function will call Directory.GetFiles() only one time. For example, call the function like this: string[] images = GetFiles(new List<string>{"jpg", "png", "gif"}, "imageFolder"); EDIT: To get one file with multiple extensions use this one: /// <summary> /// Get the file with a specific name and extension /// </summary> /// <param name="filename">the name of the file to find</param> /// <param name="extensionsToCompare">string list of all the extensions</param> /// <param name="Location">string of the location</param> /// <returns>file with the requested filename</returns> public string GetFile( string filename, List<string> extensionsToCompare, string Location) { foreach (string file in Directory.GetFiles(Location)) { if (extensionsToCompare.Contains(file.Substring(file.IndexOf('.') + 1).ToLower()) && file.Substring(Location.Length + 1, (file.IndexOf('.') - (Location.Length + 1))).ToLower() == filename) return file; } return ""; } For example, call the function like this: string image = GetFile("imagename", new List<string>{"jpg", "png", "gif"}, "imageFolder"); A: Using the GetFiles search pattern for filtering the extension is not safe!!
For instance, you have two files Test1.xls and Test2.xlsx and you want to filter out xls files using the search pattern *.xls, but GetFiles returns both Test1.xls and Test2.xlsx. I was not aware of this and got an error in a production environment when some temporary files were suddenly handled as valid files. The search pattern was *.txt and the temp files were named *.txt20181028_100753898. So the search pattern cannot be trusted; you have to add an extra check on the filenames as well. A: I know it's an old question, but with LINQ: (.NET40+) var files = Directory.GetFiles("path_to_files").Where(file => Regex.IsMatch(file, @"^.+\.(wav|mp3|txt)$")); A: There is also a decent solution which seems not to have any memory or performance overhead and is quite elegant: string[] filters = new[]{"*.jpg", "*.png", "*.gif"}; string[] filePaths = filters.SelectMany(f => Directory.GetFiles(basePath, f)).ToArray(); A: Another way to use Linq, but without having to return everything and filter on that in memory. var files = Directory.GetFiles("C:\\path", "*.mp3", SearchOption.AllDirectories).Union(Directory.GetFiles("C:\\path", "*.jpg", SearchOption.AllDirectories)); It's actually 2 calls to GetFiles(), but I think it's consistent with the spirit of the question and returns them in one enumerable.
A: Let var set = new HashSet<string>( new[] { ".mp3", ".jpg" }, StringComparer.OrdinalIgnoreCase); // ignore case var dir = new DirectoryInfo(path); Then dir.EnumerateFiles("*.*", SearchOption.AllDirectories) .Where(f => set.Contains(f.Extension)); or from file in dir.EnumerateFiles("*.*", SearchOption.AllDirectories) from ext in set // makes sense only if it's just IEnumerable<string> or similar where String.Equals(ext, file.Extension, StringComparison.OrdinalIgnoreCase) select file; A: Or you can just convert the string of extensions to String^ vector <string> extensions = { "*.mp4", "*.avi", "*.flv" }; for (int i = 0; i < extensions.size(); ++i) { String^ ext = gcnew String(extensions[i].c_str()); String^ path = "C:\\Users\\Eric\\Videos"; array<String^>^files = Directory::GetFiles(path,ext); Console::WriteLine(ext); cout << " " << (files->Length) << endl; } A: I don't know which solution is better, but I use this: String[] ext = "*.ext1|*.ext2".Split('|'); List<String> files = new List<String>(); foreach (String tmp in ext) { files.AddRange(Directory.GetFiles(dir, tmp, SearchOption.AllDirectories)); } A: You can add this to your project: public static class Collectables { public static List<System.IO.FileInfo> FilesViaPattern(this System.IO.DirectoryInfo fldr, string pattern) { var filter = pattern.Split(" "); return fldr.GetFiles( "*.*", System.IO.SearchOption.AllDirectories) .Where(l => filter.Any(k => l.Name.EndsWith(k))).ToList(); } } then use it anywhere like this: new System.IO.DirectoryInfo("c:\\test").FilesViaPattern("txt doc any.extension");
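A caveat worth consolidating from the answers above: the search-pattern matching also consults 8.3 short names, so a pattern like *.xls can return .xlsx files as well (as one answer reports). A safer route is to enumerate once and post-filter on the exact extension. This is a minimal sketch under that idea; the method and variable names are illustrative, not from any answer, and it assumes extensions are passed with their leading dot:

```csharp
using System;
using System.IO;
using System.Linq;

static class SafeSearch
{
    // Enumerates once, then compares the exact extension, which avoids the
    // 8.3 short-name problem where the pattern "*.xls" also matches ".xlsx".
    public static string[] GetFilesByExtension(string path, SearchOption option,
                                               params string[] extensions)
    {
        return Directory.EnumerateFiles(path, "*", option)
            .Where(f => extensions.Contains(Path.GetExtension(f),
                                            StringComparer.OrdinalIgnoreCase))
            .ToArray();
    }
}
```

Called as SafeSearch.GetFilesByExtension(@"C:\path", SearchOption.AllDirectories, ".mp3", ".jpg"); note the leading dots, because Path.GetExtension returns them.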
{ "language": "en", "url": "https://stackoverflow.com/questions/163162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "411" }
Q: IIS Connection Pool interrogation/leak tracking Per this helpful article I have confirmed I have a connection pool leak in some application on my IIS 6 server running W2k3. The tough part is that I'm serving 300 websites written by 700 developers from this server in 6 application pools, 50% of which are .NET 1.1 which doesn't even show connections in the CLR Data performance counter. I could watch connections grow on my end if everything were .NET 2.0+, but I'm even out of luck on that slim monitoring tool. My 300 websites connect to probably 100+ databases spread out between Oracle, SQLServer and outliers, so I cannot watch the connections from the database end either. Right now my best and only plan is to do a loose binary search for my worst offenders. I will kill application pools and slowly remove applications from them until I find which individual applications result in the most connections dropping when I kill their pool. But since this is a production box and I like continued employment, this could take weeks as a tracing method. Does anyone know of a way to interrogate the IIS connection pools to learn their origin or owner? Is there an MSMQ trigger I might be able to attach to when they are created? Anything silly I'm overlooking? Kevin (I'll include the error code to facilitate others finding your answers through search: Exception: System.InvalidOperationException Message: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.) A: Try starting with this first article from Bill Vaughn. A: Todd Denlinger wrote a fantastic class http://www.codeproject.com/KB/database/connectionmonitor.aspx which watches Sql Server connections and reports on ones that have not been properly disposed within a period of time. Wire it into your site, and it will let you know when there is a leak.
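The monitoring class mentioned above reports connections that were opened but never disposed. The code shape it catches can be sketched without a real database by tracking dispose calls on a stand-in type; the FakeConnection class below is purely illustrative (real code would use SqlConnection or OracleConnection, and the leak only matters because those types hold a pooled connection until disposed):

```csharp
using System;

// Stand-in for a pooled connection, so the leak pattern can be shown
// without a real database server.
class FakeConnection : IDisposable
{
    public static int OpenCount;            // connections currently "checked out"
    public FakeConnection() { OpenCount++; }
    public void Dispose() { OpenCount--; }
}

static class LeakDemo
{
    // Leaky shape: if work() throws, Dispose is never reached and the
    // connection stays checked out of the pool until finalization.
    public static void Leaky(Action work)
    {
        var conn = new FakeConnection();
        work();
        conn.Dispose();
    }

    // Safe shape: using guarantees Dispose on every path, including throws.
    public static void Safe(Action work)
    {
        using (var conn = new FakeConnection())
        {
            work();
        }
    }
}
```

Running both helpers with a throwing work delegate leaves OpenCount at 1 after Leaky but back at 0 after Safe, which is the same asymmetry a leak monitor observes on real pooled connections.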
{ "language": "en", "url": "https://stackoverflow.com/questions/163164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: LINQ to SQL Peculiarities I'm encountering some peculiarities with LINQ to SQL. With a relatively simple query, I want to select some fields, but have the date fields formatted as strings, which I first achieved like this: var list = dataContext.MyLists.Single(x => x.ID == myId); var items = from i in list.MyItems select new { i.ID, i.Sector, i.Description, CompleteDate = i.CompleteDate.HasValue ? i.CompleteDate.Value.ToShortDateString() : "", DueDate = i.DueDate.HasValue ? i.DueDate.Value.ToShortDateString() : "" }; Later on I tried the following query, which is exactly the same, except I'm querying straight from my dataContext, rather than an element in my first query: var items = from i in dataContext.MyLists select new { i.ID, i.Sector, i.Description, CompleteDate = i.CompleteDate.HasValue ? i.CompleteDate.Value.ToShortDateString() : "", DueDate = i.DueDate.HasValue ? i.DueDate.Value.ToShortDateString() : "" }; The first one runs fine, yet the second query yields a: Could not translate expression '...' into SQL and could not treat it as a local expression. If I remove the lines that Format the date, it works fine. If I remove the .HasValue check it also works fine, until there are null values. Any ideas? Anthony A: In the first query, you have already got the data back from the database by the time the second line runs (var items = ...). This means that the 2nd line runs at the client, where ToShortDateString can run quite happily. In the second query, because the select runs directly on an IQueryable collection (dataContext.MyLists), it attempts to translate the select into SQL for processing at the server, where ToShortDateString is not understood - hence the "Could Not Translate.." exception. To understand this a bit better, you really need to understand the difference between IQueryable and IEnumerable, and at which point a Linq To Sql query stops being IQueryable and becomes IEnumerable. There is plenty of stuff on the web about this. 
Hope this helps, Paul A: Just like the error message tells you, the difference is due to what can be done locally versus remotely while connecting to SQL. The Linq code has to be converted by Linq to SQL into a SQL command for the remote data pulls - anything that has to be done locally cannot be included. Once you've pulled it into a local object (in the first example), it is not using Linq to SQL anymore, just plain Linq. At that point you are free to do local manipulations on it. A: Maybe there was a copy and paste error or just a typo in your sample. But if not, this might be the problem... In the second query you are querying a collection of lists, whereas in the first query you were querying the items within a list. But you haven't adjusted the query to account for this difference. What you need might be this. Note the commented lines which did not appear in your second sample. var items = from aList in dataContext.MyLists from i in aList.MyItems // Access the items in a list where aList.ID == myId // Use only the single desired list select new { i.ID, i.Sector, i.Description, CompleteDate = i.CompleteDate.HasValue ? i.CompleteDate.Value.ToShortDateString() : "", DueDate = i.DueDate.HasValue ? i.DueDate.Value.ToShortDateString() : "" }; A: I'd do the SQL part without doing the formatting, then do the formatting on the client side: var items = list.MyItems.Select(item => new { item.ID, item.Sector, item.Description, item.CompleteDate, item.DueDate }) .AsEnumerable() // Don't do the next bit in the DB .Select(item => new { item.ID, item.Sector, item.Description, CompleteDate = FormatDate(item.CompleteDate), DueDate = FormatDate(item.DueDate) }); static string FormatDate(DateTime? date) { return date.HasValue ? date.Value.ToShortDateString() : ""; } A: ToShortDateString() is not supported by Linq to SQL http://msdn.microsoft.com/en-us/library/bb882657.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/163183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: VERY slow running regular expression when using large documents I need to convert inline CSS style attributes to their HTML tag equivalents. The solution I have works but runs VERY slowly using the Microsoft .Net Regex namespace and long documents (~40 pages of HTML). I've tried several variations but with no useful results. I've done a little wrapping around executing the expressions but in the end it's just the built-in regex Replace method that gets called. I'm sure I'm abusing the greediness of the regex but I'm not sure of a way around it to achieve what I want using a single regex. I want to be able to run the following unit tests: [Test] public void TestCleanReplacesFontWeightWithB() { string html = "<font style=\"font-weight:bold\">Bold Text</font>"; html = Q4.PrWorkflow.Helper.CleanFormatting(html); Assert.AreEqual("<b>Bold Text</b>", html); } [Test] public void TestCleanReplacesMultipleAttributesFontWeightWithB() { string html = "<font style=\"font-weight:bold; color: blue; \">Bold Text</font>"; html = Q4.PrWorkflow.Helper.CleanFormatting(html); Assert.AreEqual("<b>Bold Text</b>", html); } [Test] public void TestCleanReplaceAttributesBoldAndUnderlineWithHtml() { string html = "<span style=\"font-weight:bold; color: blue; text-decoration: underline; \">Bold Text</span>"; html = Q4.PrWorkflow.Helper.CleanFormatting(html); Assert.AreEqual("<u><b>Bold Text</b></u>", html); } [Test] public void TestCleanReplaceAttributesBoldUnderlineAndItalicWithHtml() { string html = "<span style=\"font-weight:bold; color: blue; font-style: italic; text-decoration: underline; \">Bold Text</span>"; html = Q4.PrWorkflow.Helper.CleanFormatting(html); Assert.AreEqual("<u><b><i>Bold Text</i></b></u>", html); } [Test] public void TestCleanReplacesFontWeightWithSpaceWithB() { string html = "<font size=\"10\" style=\"font-weight: bold\">Bold Text</font>"; html = Q4.PrWorkflow.Helper.CleanFormatting(html); Assert.AreEqual("<b>Bold Text</b>", html); } The regular expression I am
using to achieve this logic works but is VERY slow. The regex in the C# code looks like this: public static IReplacePattern IncludeInlineItalicToITag(ICleanUpHtmlFactory factory) { return factory.CreateReplacePattern("(<(span|font) .*?style=\".*?font-style:\\s*italic[^>]*>)(.*?)</\\2>", "$1<i>$3</i></$2>"); } public static IReplacePattern IncludeInlineBoldToBTag(ICleanUpHtmlFactory factory) { return factory.CreateReplacePattern("(<(span|font) .*?style=\".*?font-weight:\\s*bold[^>]*>)(.*?)</\\2>", "$1<b>$3</b></$2>"); } public static IReplacePattern IncludeInlineUnderlineToUTag(ICleanUpHtmlFactory factory) { return factory.CreateReplacePattern("(<(span|font) .*?style=\".*?text-decoration:\\s*underline[^>]*>)(.*?)</\\2>", "$1<u>$3</u></$2>"); } A: I believe the problem is that if it finds a span|font tag that has no style attribute defined, it will continue looking for one until the end of the document because of the .*?. I haven't tested it, but changing it to [^>]*? might improve performance. Make sure you apply that change to every .*? you have; even the one capturing the content between tags (use [^<]*? there), because if a file is not well-formed, it will capture up to the next closing tag. A: .NET regular expressions do not support recursive constructs. PCRE does, but that doesn't matter here. Consider <font style="font-weight: bold;"> text1 <font color="blue"> text2 </font> text3 </font> It would get converted into <b> text1 <font color="blue"> text2 </b> text3 </font> My suggestion would be to use a proper markup parser, and maybe use regexes on the values of the style tags. Edit: Scratch that. It seems .NET has a construct for balanced, recursive patterns, but not as powerful as those in PCRE/Perl. (?<N>content) would push N onto a stack if content matches. (?<-N>content) would pop N from the stack, if content matches. (?(N)yes|no) would match "yes" if N is on the stack, otherwise "no".
See http://weblogs.asp.net/whaggard/archive/2005/02/20/377025.aspx for details. A: Wild guess: I believe the cost comes from the alternation and the corresponding match. You might want to try to replace: "(<(span|font) .*?style=\".*?font-style:\\s*italic[^>]*>)(.*?)</\\2>", "$1<i>$3</i></$2>" with two separate expressions: "(<span .*?style=\".*?font-style:\\s*italic[^>]*>)(.*?)</span>", "$1<i>$2</i></span>" "(<font .*?style=\".*?font-style:\\s*italic[^>]*>)(.*?)</font>", "$1<i>$2</i></font>" Granted, that doubles the parsing of the file, but the regex being simpler, with less backtracking, it might be faster in practice. It is not very nice (repetition of code) but as long as it works... Funnily, I did something similar (I don't have the code at hand) to clean up HTML generated by a tool, simplifying it so that JavaHelp can understand it... It is one case where regexes against HTML are OK, because it is not a human making mistakes or changing little things which creates the HTML, but a process with well defined patterns. A: During testing I found strange behavior: when running the regex in a separate thread, it runs a lot faster. I have a SQL script that I was splitting into sections from GO to GO using a regex. When working on this script without a separate thread it took about 2 minutes, but when using multithreading it took only a few seconds.
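The negated-character-class advice from the first answer can be checked empirically. The benchmark sketch below uses illustrative, simplified patterns (not the question's exact ones): the input is full of tags that never match, which is the worst case for the lazy dot, since it rescans toward the end of the input from every `<font` occurrence, while `[^>]*` gives up at the first `>`:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Text.RegularExpressions;

class RegexCompare
{
    // Lazy dots can wander across tag boundaries; the negated classes cannot.
    public static readonly Regex LazyDot =
        new Regex("<font .*?style=\".*?font-weight:\\s*bold[^>]*>");
    public static readonly Regex NegatedClass =
        new Regex("<font [^>]*style=\"[^\"]*font-weight:\\s*bold[^>]*>");

    static void Main()
    {
        // Many tags, none with a style attribute: the non-matching case
        // that triggers the lazy dot's quadratic scanning behavior.
        string html = string.Concat(
            Enumerable.Repeat("<font size=\"3\">text</font>", 2000));

        var sw = Stopwatch.StartNew();
        LazyDot.IsMatch(html);
        Console.WriteLine($"lazy dot:      {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        NegatedClass.IsMatch(html);
        Console.WriteLine($"negated class: {sw.ElapsedMilliseconds} ms");
    }
}
```

The absolute timings depend on the machine; the point is only the gap between the two, which grows with document length.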
{ "language": "en", "url": "https://stackoverflow.com/questions/163184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IIS6: 503 errors and CPU spikes So, there is a horribly written site that I occasionally help out with that was originally written in classic ASP. It was then "ported" to ASP.NET by moving the global variables to the code behind and leaving the rest of the code in the aspx... It's a huge mess. On some pages, an occasional race condition seems to be triggered that causes IIS6 to die (returns 503 errors) and spikes the CPU to 100%. We set up some monitoring tools and recycle the apppool when this happens to keep the site stable, but this is just a bandaid. Does anyone know of any tools to get me pointed in the right direction for finding why this happens? Memory usage remains flat, so it's not a leaking reference issue. A: Usually the best place to start is the Http.sys log: HTTP.SYS error log - %windir%\System32\LogFiles\HTTPERR You can also check the event log and IIS log to see if you have any additional information in there.
{ "language": "en", "url": "https://stackoverflow.com/questions/163192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I use a String as a Stream in .Net? I need to call a method that accepts a stream argument. The method loads text into the stream, which would normally be a file. I'd like to simply populate a string with the contents of the stream, instead of writing it to a file. How do I do this? A: Use a MemoryStream with a StreamReader. Something like: using (MemoryStream ms = new MemoryStream()) using (StreamReader sr = new StreamReader(ms)) { // pass the memory stream to method ms.Seek(0, SeekOrigin.Begin); // added from itsmatt string s = sr.ReadToEnd(); } A: Use the StringWriter to act as a stream onto a string: StringBuilder sb = new StringBuilder(); StringWriter sw = new StringWriter(sb); CallYourMethodWhichWritesToYourStream(sw); return sb.ToString(); A: Look up the MemoryStream class A: MemoryStream ms = new MemoryStream(); YourFunc(ms); ms.Seek(0, SeekOrigin.Begin); StreamReader sr = new StreamReader(ms); string mystring = sr.ReadToEnd(); is one way to do it. A: You can do something like: string s = "Wahoo!"; int n = 452; using( Stream stream = new MemoryStream() ) { // Write to the stream byte[] bytes1 = UnicodeEncoding.Unicode.GetBytes(s); byte[] bytes2 = BitConverter.GetBytes(n); stream.Write(bytes1, 0, bytes1.Length); stream.Write(bytes2, 0, bytes2.Length); }
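The answers above cover the direction where the method writes into the stream. For the opposite direction, handing an existing string to code that reads from a stream, wrapping the string's bytes directly in a MemoryStream is a common one-liner; the encoding choice here is an assumption, and ReadAll is just a stand-in for whatever method accepts the stream:

```csharp
using System;
using System.IO;
using System.Text;

static class StringAsStream
{
    // Stand-in for any method that reads text from a Stream.
    public static string ReadAll(Stream s)
    {
        using (var reader = new StreamReader(s))
            return reader.ReadToEnd();
    }

    public static void Main()
    {
        // Wrap the string's bytes; the MemoryStream starts at position 0,
        // so no Seek is needed before reading.
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello, stream")))
        {
            Console.WriteLine(ReadAll(stream)); // prints "Hello, stream"
        }
    }
}
```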
{ "language": "en", "url": "https://stackoverflow.com/questions/163207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I test if a given BSP tree is optimal? I have a polygon soup of triangles that I would like to construct a BSP tree for. My current program simply constructs a BSP tree by inserting a random triangle from the model one at a time until all the triangles are consumed, then it checks the depth and breadth of the tree and remembers the best score it achieved (lowest depth, lowest breadth). By definition, the best depth would be log2(n) (or less if co-planar triangles are grouped?) where n is the number of triangles in my model, and the best breadth would be n (meaning no splitting has occurred). But, there are certain configurations of triangles for which this pinnacle would never be reached. Is there an efficient test for checking the quality of my BSP tree? Specifically, I'm trying to find a way for my program to know it should stop looking for a more optimal construction. A: Construction of an optimal tree is an NP-complete problem. Determining if a given tree is optimal is essentially the same problem. From this BSP faq: The problem is one of splitting versus tree balancing. These are mutually exclusive requirements. You should choose your strategy for building a good tree based on how you intend to use the tree. A: Randomly building BSP trees until you chance upon a good one will be really, really inefficient. Instead of choosing a tri at random to use as a split-plane, you want to try out several (maybe all of them, or maybe a random sampling) and pick one according to some heuristic. The heuristic is typically based on (a) how balanced the resulting child nodes would be, and (b) how many tris it would split. You can trade off performance and quality by considering a smaller or larger sampling of tris as candidate split-planes. But in the end, you can't hope to get a totally optimal tree for any real-world data so you might have to settle for 'good enough'. 
A: * *Try to pick planes that (could potentially) get split by the most planes as splitting planes. Splitting planes can't be split. *Try to pick a plane that has close to the same number of planes in front as in back. *Try to pick a plane that doesn't cause too many splits. *Try to pick a plane that is coplanar with a lot of other surfaces. You'll have to sample these criteria and come up with a scoring system to decide which one is most likely to be a good choice for a splitting plane. For example, the further off balance, the more score it loses. If it causes 20 splits, then the penalty is -5 * 20 (for example). Choose the one that scores best. You don't have to sample every polygon, just search for a pretty good one.
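The scoring idea in the last answer can be sketched concretely. All geometry is abstracted behind a classify callback (which side of the candidate's plane a triangle falls on), the split penalty of 5 is the example number from the answer, and the coplanar bonus weight is arbitrary:

```csharp
using System;
using System.Collections.Generic;

enum Side { Front, Back, Spanning, Coplanar }
class Triangle { }   // placeholder; real code carries vertex data

static class BspHeuristic
{
    // Scores a candidate split plane: classify every triangle, then
    // penalize imbalance (1 per triangle) and splits (5 each), and
    // give a small bonus for coplanar faces that get grouped for free.
    public static double Score(Triangle candidate, IReadOnlyList<Triangle> tris,
                               Func<Triangle, Triangle, Side> classify)
    {
        int front = 0, back = 0, splits = 0, coplanar = 0;
        foreach (var t in tris)
        {
            switch (classify(candidate, t))
            {
                case Side.Front:    front++;    break;
                case Side.Back:     back++;     break;
                case Side.Spanning: splits++;   break;
                default:            coplanar++; break;
            }
        }
        return -Math.Abs(front - back) - 5 * splits + 2 * coplanar;
    }
}
```

At each node you would score a random sample of candidate triangles and split on the highest scorer, rather than comparing whole randomly built trees.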
{ "language": "en", "url": "https://stackoverflow.com/questions/163225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Where can I find a List of Standard HTTP Header Values? I'm looking for all the current standard header values a web server would generally receive. An example would be things like "what will the header look like when coming from a Mac running OS X Leopard and Camino installed?" or "what will the header look like when coming from Fedora 9 running Firefox 3.0.1 versus SuSe running Konqueror?" PConroy gave an example from JQuery tending towards what I'm looking for. What I want though are the actual example headers. A: Did you try the RFC? It has all that information. Actually, when searching for information on any protocol or standard, try to search for the RFC first. Cheers. A: With regards to user-agent, that is entirely up to the creator of the application. See this semi tongue-in-cheek history of user-agent. In summary, there really isn't a canonical set of values. Microsoft based user-agents may change based on software installed on the local machine (version of .NET framework, etc). A: There is no set-in-stone list of user agent values. You can find lengthy lists (such as this one used by the JQuery browser plugin). Regarding other HTTP Headers, this wikipedia article is a good place to start. A: IANA keeps track of HTTP headers IANA is responsible for maintaining many of the codes and numbers contained in a variety of Internet protocols, enumerated below. We provide this service in coordination with the Internet Engineering Task Force (IETF). Which includes: Message Headers * *Permanent Message Header Field Names *Provisional Message Header Field Names Here's the exhaustive list which was originally based on RFC 4229 A: For the user agent, a quick google search pulled up this site. 
A: The list of HTTP headers is easily available on the W3 website: * *http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html PConroy also linked to the wikipedia page, which is more concise, and a little easier formatted: * *http://en.wikipedia.org/wiki/List_of_HTTP_headers However, the "User-Agent" header is a bad example, since there's no set response; the user-agent string is decided by the client so it can literally be anything. There's a very comprehensive List of User Agents available, but it's not necessarily going to cover any possible option, since even some toolbars and applications can modify the user-agent for Internet Explorer or other browsers. A: The chipmunk book from O'Reilly is good as is Chris Shiflett's HTTP reference. Oh, whoops, it's not a chipmunk. It's a thirteen-lined ground squirrel.
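Since HTTP header fields follow generic RFC 822 message syntax, you can experiment with a raw header block using Python's standard-library email parser. A small illustration; the User-Agent value is only an approximation of what an old Camino build on OS X sent, not an authoritative sample:

```python
from email.parser import Parser

# A raw request header block, roughly as a web server would receive it.
# The User-Agent shown is illustrative only -- real values vary freely.
raw = (
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en; "
    "rv:1.8.1.6) Gecko/20070809 Camino/1.5.1\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Language: en-us,en;q=0.5\r\n"
    "Accept-Encoding: gzip,deflate\r\n"
    "Connection: keep-alive\r\n"
)

# HTTP headers share RFC 822 field syntax, so the email parser handles them.
headers = Parser().parsestr(raw, headersonly=True)
for name, value in headers.items():
    print(f"{name}: {value}")
```

Header name lookup on the resulting message object is case-insensitive, which matches how HTTP header names are treated.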
{ "language": "en", "url": "https://stackoverflow.com/questions/163236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: SQL Server equivalent to Oracle's CREATE OR REPLACE VIEW In Oracle, I can re-create a view with a single statement, as shown here: CREATE OR REPLACE VIEW MY_VIEW AS SELECT SOME_FIELD FROM SOME_TABLE WHERE SOME_CONDITIONS As the syntax implies, this will drop the old view and re-create it with whatever definition I've given. Is there an equivalent in MSSQL (SQL Server 2005 or later) that will do the same thing? A: I typically use something like this: if exists (select * from dbo.sysobjects where id = object_id(N'dbo.MyView') and OBJECTPROPERTY(id, N'IsView') = 1) drop view dbo.MyView go create view dbo.MyView [...] A: As of SQL Server 2016 you have DROP TABLE IF EXISTS [foo]; MSDN source A: You can use 'IF EXISTS' to check if the view exists and drop if it does. IF EXISTS (SELECT TABLE_NAME FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'MyView') DROP VIEW MyView GO CREATE VIEW MyView AS .... GO A: For reference from SQL Server 2016 SP1+ you could use CREATE OR ALTER VIEW syntax. MSDN CREATE VIEW: CREATE [ OR ALTER ] VIEW [ schema_name . ] view_name [ (column [ ,...n ] ) ] [ WITH <view_attribute> [ ,...n ] ] AS select_statement [ WITH CHECK OPTION ] [ ; ] OR ALTER Conditionally alters the view only if it already exists. db<>fiddle demo A: It works fine for me on SQL Server 2017: USE MSSQLTipsDemo GO CREATE OR ALTER PROC CreateOrAlterDemo AS BEGIN SELECT TOP 10 * FROM [dbo].[CountryInfoNew] END GO https://www.mssqltips.com/sqlservertip/4640/new-create-or-alter-statement-in- A: You can use ALTER to update a view, but this is different than the Oracle command since it only works if the view already exists. Probably better off with DaveK's answer since that will always work. A: In SQL Server 2016 (or newer) you can use this: CREATE OR ALTER VIEW VW_NAMEOFVIEW AS ... 
In older versions of SQL Server you have to use something like DECLARE @script NVARCHAR(MAX) = N'VIEW [dbo].[VW_NAMEOFVIEW] AS ...'; IF NOT EXISTS(SELECT * FROM sys.views WHERE name = 'VW_NAMEOFVIEW') -- IF OBJECT_ID('[dbo].[VW_NAMEOFVIEW]') IS NOT NULL BEGIN EXEC('CREATE ' + @script) END ELSE BEGIN EXEC('ALTER ' + @script) END Or, if there are no dependencies on the view, you can just drop it and recreate: IF EXISTS(SELECT * FROM sys.views WHERE name = 'VW_NAMEOFVIEW') -- IF OBJECT_ID('[dbo].[VW_NAMEOFVIEW]') IS NOT NULL BEGIN DROP VIEW [VW_NAMEOFVIEW]; END CREATE VIEW [VW_NAMEOFVIEW] AS ... A: I use: IF OBJECT_ID('[dbo].[myView]') IS NOT NULL DROP VIEW [dbo].[myView] GO CREATE VIEW [dbo].[myView] AS ... Recently I added some utility procedures for this kind of stuff: CREATE PROCEDURE dbo.DropView @ASchema VARCHAR(100), @AView VARCHAR(100) AS BEGIN DECLARE @sql VARCHAR(1000); IF OBJECT_ID('[' + @ASchema + '].[' + @AView + ']') IS NOT NULL BEGIN SET @sql = 'DROP VIEW ' + '[' + @ASchema + '].[' + @AView + '] '; EXEC(@sql); END END So now I write EXEC dbo.DropView 'mySchema', 'myView' GO CREATE View myView ... GO I think it makes my change scripts a bit more readable. A: The solutions above, though they will get the job done, do so at the risk of dropping user permissions. I prefer to do my create-or-replace views and stored procedures as follows. IF NOT EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[vw_myView]')) EXEC sp_executesql N'CREATE VIEW [dbo].[vw_myView] AS SELECT ''This is a code stub which will be replaced by an Alter Statement'' as [code_stub]' GO ALTER VIEW [dbo].[vw_myView] AS SELECT 'This is a code which should be replaced by the real code for your view' as [real_code] GO
{ "language": "en", "url": "https://stackoverflow.com/questions/163246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "120" }
Q: On 32-bit CPUs, is an 'integer' type more efficient than a 'short' type? On a 32-bit CPU, an integer is 4 bytes and a short integer is 2 bytes. If I am writing a C/C++ application that uses many numeric values that will always fit within the provided range of a short integer, is it more efficient to use 4 byte integers or 2 byte integers? I have heard it suggested that 4 byte integers are more efficient as this fits the bandwidth of the bus from memory to the CPU. However, if I am adding together two short integers, would the CPU package both values in a single pass in parallel (thus spanning the 4 byte bandwidth of the bus)? A: If you're using "many" integer values, the bottleneck in your processing is liable to be bandwidth to memory. 16 bit integers pack more tightly into the data cache, and would therefore be a performance win. If you are number crunching on a very large amount of data, you should read What Every Programmer Should Know About Memory by Ulrich Drepper. Concentrate on chapter 6, about maximizing the efficiency of the data cache. A: A 32 bit CPU is a CPU that usually operates on 32 bit values internally, yet that does not mean that it is any slower when performing the same operation on a 8/16 bit value. x86 for example, still backward compatible up to the 8086, can operate on fractions of a register. That means even if a register is 32 bit wide, it can operate only on the first 16 or the first 8 bit of that register and there will be no slow down at all. This concept has even been adopted by x86_64, where the registers are 64 bit, yet they still can operate only on the first 32, 16, or 8 bit. Also x86 CPUs always load a whole cache line from memory, if not already in cache, and a cache line is bigger than 4 byte anyway (for 32 bit CPUs rather 8 or 16 bytes) and thus loading 2 byte from memory is equally fast as loading 4 byte from memory. 
If processing many values from memory, 16 bit values may actually be much faster than 32 bit values, since there are fewer memory transfers. If a cache line is 8 bytes, there are four 16 bit values per cache line, yet only two 32 bit values, thus when using 16 bit ints you have one memory access every four values, using 32 bit ints you have one every two values, resulting in twice as many transfers for processing a large int array. Other CPUs, like PPC for example, cannot process only a fraction of a register, they always process the full register. Yet these CPUs usually have special load operations that allow them to, e.g. load a 16 bit value from memory, expand it to 32 bit and write it to a register. Later on they have a special store operation that takes the value from the register and only stores the last 16 bit back to memory; both operations need only one CPU cycle, just like a 32 bit load/store would need, so there is no speed difference either. And since PPC can only perform arithmetic operations on registers (unlike x86, which can also operate on memory directly), this load/store procedure takes place anyway whether you use 32 bit ints or 16 bit ints. The only disadvantage, if you chain multiple operations on a 32 bit CPU that can only operate on full registers, is that the 32 bit result of the last operation may have to be "cut back" to 16 bit before the next operation is performed, otherwise the result may not be correct. Such a cut back is only a single CPU cycle, though (a simple AND operation), and compilers are very good at figuring out when such a cut back is really necessary and when leaving it out won't have any influence on the final result, so such a cut back is not performed after every instruction, it is only performed if really unavoidable. 
Some CPUs offer various "enhanced" instructions which make such a cut back unnecessary and I've seen plenty of code in my life, where I had expected such a cut back, yet looking at the generated assembly code, the compiler found a way to avoid it entirely. So if you expect a general rule here, I'll have to disappoint you. Neither can one say for sure that 16 bit operations are equally fast to 32 bit operations, nor can anyone say for sure that 32 bit operations will always be faster. It also depends on what exactly your code is doing with those numbers and how it is doing that. I've seen benchmarks where 32 bit operations were faster on certain 32 bit CPUs than the same code with 16 bit operations, however I have also seen the opposite be true. Even switching from one compiler to another one or upgrading your compiler version may already turn everything around again. I can only say the following: Whoever claims that working with shorts is significantly slower than working with ints should please provide sample source code for that claim and name the CPU and compiler used for testing, since I have never experienced anything like that within about the past 10 years. There may be some situations where working with ints is maybe 1-5% faster, yet anything below 10% is not "significant" and the question is, is it worth wasting twice the memory in some cases just because it may buy you 2% performance? I don't think so. A: It depends. If you are CPU bound, 32 bit operations on a 32 bit CPU will be faster than 16 bit. If you are memory bound (specifically if you have too many L2 cache misses), then use the smallest data you can squeeze into. You can find out which one you are using a profiler that will measure both CPU and L2 misses like Intel's VTune. You will run your app 2 times with the same load, and it will merge the 2 runs into one view of the hotspots in your app, and you can see for each line of code how many cycles were spent on that line. 
If at an expensive line of code, you see 0 cache misses, you are CPU bound. If you see tons of misses, you are memory bound. A: Don't listen to the advice, try it. This is probably going to depend heavily on the hardware/compiler that you're using. A quick test should make short work of this question. Probably less time to write the test than it is to write the question here. A: If you have a large array of numbers, then go with the smallest size that works. It will be more efficient to work with an array of 16 bit shorts than 32 bit ints since you get twice the cache density. The cost of any sign extension the CPU has to do to work with 16 bit values in 32 bit registers is trivially negligible compared to the cost of a cache miss. If you are simply using member variables in classes mixed with other data types then it is less clear cut as the padding requirements will likely remove any space saving benefit of the 16 bit values. A: Yes, you should definitely use a 32 bit integer on a 32 bit CPU, otherwise it may end up masking off the unused bits (i.e., it will always do the maths in 32 bits, then convert the answer to 16 bits) It won't do two 16 bit operations at once for you, but if you write the code yourself and you're sure it won't overflow, you can do it yourself. Edit: I should add that it also depends somewhat on your definition of "efficient". While it will be able to do 32-bit operations more quickly, you will of course use twice as much memory. If these are being used for intermediate calculations in an inner loop somewhere, then use 32-bit. If, however, you're reading this from disk, or even if you just have to pay for a cache miss, it may still work out better to use 16-bit integers. As with all optimizations, there's only one way to know: profile it. A: If you are operating on a large dataset, the biggest concern is memory footprint. 
A good model in this case is to assume that the CPU is infinitely fast, and spend your time worrying about how much data has to be moved to/from memory. In fact, CPUs are now so fast that it is sometimes more efficient to encode (e.g., compress) the data. That way, the CPU does (potentially much) more work (decoding/coding), but the memory bandwidth is substantially reduced. Thus, if your dataset is large, you are probably better off using 16 bit integers. If your list is sorted, you might design a coding scheme that involves differential or run-length encoding, which will reduce memory bandwidth even more. A: When you say 32bit, I'll assume you mean x86. 16 bit arithmetic is quite slow: the operand-size prefix makes decoding really slow. So don't make your temp variables short int or int16_t. However, x86 can efficiently load 16 and 8 bit integers into 32 or 64 bit registers. (movzx / movsx: zero and sign extension). So feel free to use short int for arrays and struct fields, but make sure you use int or long for your temp variables. However, if I am adding together two short integers, would the CPU package both values in a single pass in parallel (thus spanning the 4 byte bandwidth of the bus)? That is nonsense. load/store instructions interact with L1 cache, and the limiting factor is number of ops; width is irrelevant. e.g. on core2: 1 load and 1 store per cycle, regardless of width. L1 cache has a 128 or 256bit path to L2 cache. If loads are your bottleneck, one wide load which you split up with shifts or masks after loading can help. Or use SIMD to process data in parallel without unpacking after loading in parallel.
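The cache-density argument several of these answers make is easy to quantify. A sketch using Python's array module; it assumes a typical platform where a C short is 2 bytes, an int is 4 bytes, and a cache line is 64 bytes:

```python
from array import array

N = 1_000_000
shorts = array('h', [7]) * N  # 'h' = C signed short (2 bytes on typical platforms)
ints = array('i', [7]) * N    # 'i' = C signed int (4 bytes on typical platforms)

print(N * shorts.itemsize)  # 2000000
print(N * ints.itemsize)    # 4000000

# Sequentially scanning the short array touches half as many cache
# lines, i.e. half as many memory transfers:
CACHE_LINE = 64  # assumed cache-line size in bytes
print((N * shorts.itemsize) // CACHE_LINE)  # 31250
print((N * ints.itemsize) // CACHE_LINE)    # 62500
```

This only models footprint, not register-width effects; as the answers note, whether the smaller transfers win in practice depends on whether the workload is memory bound or CPU bound.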
{ "language": "en", "url": "https://stackoverflow.com/questions/163254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: What do you need to take into consideration when deciding between MySQL and Amazon's SimpleDB for a RoR app? I am just beginning to do research into the feasibility of using Amazon's SimpleDB service as the datastore for a RoR application I am planning to build. We will be using EC2 for the web server, and had planned to also use EC2 for the MySQL servers. But now the question is, why not use SimpleDB? The application will (if successful) need to be very scalable in terms of # of users supported, will need to maintain a simple and efficient code base, and will need to be reliable. I'm curious as to what the SO community's thoughts are on this. A: Well, the fact that SimpleDB doesn't use SQL, or even have tables, means that it's a completely different beast than MySQL and other SQL-based things (http://aws.amazon.com/simpledb/). There are no constraints, triggers, or joins. Good luck. Here's one way of getting it up and running: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1242 (via http://rubyforge.org/projects/aws-sdb/ ) I suppose if you're never going to need to query the data outside of Rails, then SimpleDB may prove to be OK. But as it's not a first-class supported DB, you're likely to run into bugs that are difficult to fix. I wouldn't want to run a production Rails app on a semi-beta backend. A: The Ruby SimpleDB library is not as complete as ActiveRecord (the default Rails DB adapter), so many of the features you're used to will not be there. On the plus side it's schemaless, scalable and works well with EC2. If you're going to do things like full text search in your app then SimpleDB might not be the best choice; stick with AR + Sphinx. A: To me this just feels like, "Hey there are these neat tools out there, I should go build a project using them," rather than actually needing to use these specific tools. Maybe I'm just being crabby but it feels like a classic case of premature optimization. 
You're trying to use an external service that costs money for an app that isn't even written yet and you don't say you've got a guaranteed audience or one that will necessarily scale to a level that warrants that. "The application will (if successful) need to be very scalable in terms of # of users supported", seriously, that describes half the Internet. It's the "if successful" part that's really the question. Just concentrate on building the application quickly and easily. The easiest way to do that is just use ROR as it is out-of-the-box so to speak. Pair it with a database, use ActiveRecord and get something built and attracting users. In fact, I'll go further and say that EC2 is rather expensive for always on servers. Deploy it over on Slicehost or another hosted solution and then move it to EC2 if you need to in order to support demand. A: I myself am very interested in this topic. Right now I'm on a cloud computing high so I'd say go with SimpleDB since it'll probably scale better in the sense that you'll have high availability, but that's just my thoughts as of the moment. Not from experience yet. Edit: It's true that SimpleDB has no normal features a "normal" database, but it should do the trick if you only need a simple CRUD layer to work against, which is my case A: There's a library called SimpleRecord that is a drop in replacement for ActiveRecord, but uses SimpleDB as its backend data store.
{ "language": "en", "url": "https://stackoverflow.com/questions/163275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Textarea overflow-x when a user copy-pastes into it? I have a textarea with overflow-x: auto; attributed to it. It works great when a user is typing text into the box by hand. When a user copy-pastes a line from a file that is wider than my textarea, however, the overflow-x property does not work; instead the textarea word-wraps the long line. Is there a way (maybe JavaScript) to make overflow-x work on copy-paste? Thanks. A: I'm not sure what you are trying to achieve - is it for the textarea to automatically expand when the user types text in? I couldn't create such a behavior using just HTML and CSS. You can set your textarea's wrap attribute to "off" (a non-standard but widely supported value), which will force a horizontal scrollbar when users type or paste in long lines. A: From the looks of it, it would seem that the text comes pre-wordwrapped from the editor. What editor are you using, and on which platform are you experiencing this behaviour?
{ "language": "en", "url": "https://stackoverflow.com/questions/163294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I ignore a directory in mod_rewrite? I'm trying to have the mod_rewrite rules skip the directory vip. I've tried a number of things as you can see below, but to no avail. # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / #RewriteRule ^vip$ - [PT] RewriteRule ^vip/.$ - [PT] #RewriteCond %{REQUEST_URI} !/vip RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress How do I get mod_rewrite to entirely ignore the /vip/ directory so that all requests pass directly to the folder? Update: As points of clarity: * *It's hosted on Dreamhost *The folders are within a WordPress directory *the /vip/ folder contains a WebDAV .htaccess, etc. (though I don't think this is important) A: In summary, the final solution is: ErrorDocument 401 /misc/myerror.html ErrorDocument 403 /misc/myerror.html # BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress I posted more on my site about the cause of this problem in my specific situation, involving WordPress and WebDAV on Dreamhost, which I expect many others are having. A: You mentioned you already have a .htaccess file in the directory you want to ignore - you can use RewriteEngine off in that .htaccess to stop use of mod_rewrite (if you're actually using mod_rewrite rules in that folder, though, this won't help, since it turns them off too). A: Try replacing this part of your code: RewriteRule ^vip/.$ - [PT] ...with the following: RewriteCond %{REQUEST_URI} !(vip) [NC] That should fix things up. A: RewriteCond %{REQUEST_URI} !^vip/ is the way to do that. A: In my case, the answer by brentonstrine (and I see matdumsa also had the same idea) was the right one... 
I wanted to up-vote their answers, but being new here, I have no "reputation", so I have to write a full answer, in order to emphasize what I think is the real key here. Several of these answers would successfully stop the WordPress index.php from being used ... but in many cases, the reason for doing this is that there is a real directory with real pages in it that you want to display directly, and the RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d lines already take care of that, so most of those solutions are a distraction in a case like mine. The key was brentonstrine's insight that the error was a secondary effect, caused by the password-protection inside the directory I was trying to display directly. By putting in the ErrorDocument 401 /err.txt ErrorDocument 403 /err.txt lines and creating error pages (I actually created err401.html and err403.html and made more informative error messages) I stopped the 404 response being generated when it couldn't find any page to display for 401 Authentication Required, and then the folder worked as expected... showing an Apache login dialog, then the contents of the folder, or on failure, my error 401 page. A: I’ve had the same issue using WordPress and found that the issue is linked to not having proper handlers for 401 and 403 errors. RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d These conditions are already supposed not to rewrite the URL of existing folders but they don’t do their job for password-protected folders. In my case, adding the following two lines to my root .htaccess fixed the problem: ErrorDocument 401 /misc/myerror.html ErrorDocument 403 /misc/myerror.html Of course, you need to create the /misc/myerror.html file. A: This works ... RewriteRule ^vip - [L,NC] But ensure it is the first rule after RewriteEngine on i.e. 
ErrorDocument 404 /page-not-found.html RewriteEngine on RewriteRule ^vip - [L,NC] AddType application/x-httpd-php .html .htm RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d etc A: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d This says if it's an existing file or a directory don't touch it. You should be able to access site.com/vip and no rewrite rule should take place. A: Try putting this before any other rules. RewriteRule ^vip - [L,NC] It will match any URI beginning vip. * *The - means do nothing. *The L means this should be last rule; ignore everything following. *The NC means no-case (so "VIP" is also matched). Note that it matches anything beginning vip. The expression ^vip$ would match vip but not vip/ or vip/index.html. The $ may have been your downfall. If you really want to do it right, you might want to go with ^vip(/|$) so you don't match vip-page.html A: The code you are adding, and all answers that are providing Rewrite rules/conditions are useless! The default WordPress code already does everything that you should need it to: RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] Those lines say "if it's NOT an existing file (-f) or directory (-d), pass it along to WordPress. Adding additional rules, not matter how specific or good they are, is redundant--you should already be covered by the WordPress rules! So why aren't they working??? The .htaccess in the vip directory is throwing an error. The exact same thing happens if you password protect a directory. Here is the solution: ErrorDocument 401 /err.txt ErrorDocument 403 /err.txt Insert those lines before the WordPress code, and then create /err.txt. This way, when it comes upon your WebDAV (or password protected directory) and fails, it will go to that file, and get caught by the existing default WordPress condition (RewriteCond %{REQUEST_FILENAME} !-f). 
A: I'm not sure if I understand your objective, but the following might do what you're after? RewriteRule ^/vip/(.*)$ /$1?%{QUERY_STRING} [L] This will cause a URL such as http://www.example.com/vip/fred.html to be rewritten without the /vip.
{ "language": "en", "url": "https://stackoverflow.com/questions/163302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Best Way To Get All Dates Between DateA and DateB I am using an asp:Calendar and I have an object that has a beginning date and an ending date. I need to get all the dates between these two dates and place them in an array so I can then render corresponding dates on the calendar with different CSS A: DateTime startDate; DateTime endDate; DateTime currentDate = startDate; List<DateTime> dates = new List<DateTime> (); while (true) { dates.Add (currentDate); if (currentDate.Equals (endDate)) break; currentDate = currentDate.AddDays (1); } It assumes that startDate is earlier than endDate; you get the results in the "dates" list A: IEnumerable<DateTime> RangeDays(DateTime RangeStart, DateTime RangeEnd) { DateTime EndDate = RangeEnd.Date; for (DateTime WorkDate = RangeStart.Date; WorkDate <= EndDate; WorkDate = WorkDate.AddDays(1)) { yield return WorkDate; } yield break; } Untested code... but should work. A: I voted up AlbertEin because he gave a good answer, but do you really need a collection to hold all the dates? When you are rendering the day, couldn't you just check if the date is within the specified range, and then render it differently? No need for a collection. Here's some code to demonstrate DateTime RangeStartDate, RangeEndDate; //Init as necessary DateTime CalendarStartDate, CalendarEndDate; //Init as necessary DateTime CurrentDate = CalendarStartDate; String CSSClass; while (CurrentDate != CalendarEndDate) { if(CurrentDate >= RangeStartDate && CurrentDate <= RangeEndDate) { CSSClass = "InRange"; } else { CSSClass = "OutOfRange"; } //Code for rendering calendar goes here CurrentDate = CurrentDate.AddDays(1); } A: // inclusive var allDates = Enumerable.Range(0, (endDate - startDate).Days + 1).Select(i => startDate.AddDays(i)); // exclusive var allDates = Enumerable.Range(1, (endDate - startDate).Days).Select(i => startDate.AddDays(i));
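The same inclusive-range idea as the Enumerable.Range answer, sketched in Python for comparison; it assumes the start date is not after the end date:

```python
from datetime import date, timedelta

def dates_between(start, end):
    """All dates from start to end, inclusive; assumes start <= end."""
    return [start + timedelta(days=i) for i in range((end - start).days + 1)]

span = dates_between(date(2008, 10, 1), date(2008, 10, 5))
print(len(span))          # 5
print(span[0], span[-1])  # 2008-10-01 2008-10-05
```

Dropping the `+ 1` (and starting the range at 1) gives the exclusive variant, mirroring the two LINQ one-liners.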
{ "language": "en", "url": "https://stackoverflow.com/questions/163311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Weird results using P4COM I'm using P4COM to communicate with our Perforce server. I have written a little utility to simplify our QA of what files have changed from one release to another. I have been using the P4COM interface from Delphi. So far so good. I thought it might be nice to allow users to view the diff between the two versions of the file from within my little utility rather than going back to p4v. So I print (get) the files at each revision using P4COM and the following commands print -o "E:\Development\TempProjects\p4Changes\temp\File_dispatch.pas#25" "//depot/mydepotpath/File_dispatch.pas"#25 and print -o "E:\Development\TempProjects\p4Changes\temp\File_dispatch.pas#26" "//depot/mydepotpath/File_dispatch.pas"#26 However, when I do this from my app using P4COM I seem to get random files (and they appear to be deleted ones). If I run the exact same command from the command line I get perfect results. Running both of these does return a file and correctly dumps it to disk where I want it, it's just not the file I've asked for. Any ideas? A: Could it be a backslash issue in the command string? This would work fine at the command line, but a single backslash may be being interpreted as an escape code by whatever language compiler you are using (if C or C++, then this will definitely cause a problem, and that may be happening under the hood on the P4COM side). Try using double backslashes and see if that fixes it. A: You're probably better off asking Perforce support about this, as this sounds like a bug in their software. As a sidenote: Why do you use p4v? (I hugely prefer p4win myself)
{ "language": "en", "url": "https://stackoverflow.com/questions/163313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Isolate a single column in a multi-dimensional array Say for example you just queried a database and you received this 2D array. $results = array( array('id' => 1, 'name' => 'red' , 'spin' => 1), array('id' => 2, 'name' => 'green', 'spin' => -1), array('id' => 3, 'name' => 'blue' , 'spin' => .5) ); I often find myself writing loops like this. foreach($results as $result) $names[] = $result['name']; My question is: does there exist a way to get this array $names without using a loop? Using callback functions counts as using a loop. Here is a more generic example of getting every field. foreach($results as $result) foreach($result as $key => $value) $fields[$key][] = $value; A: Starting with PHP 5.3, you can use this pretty call with a lambda function: $names = array_map(function ($v){ return $v['name']; }, $results); This will return the array sliced along the 'name' dimension. A: Simply put, no. You will need to use a loop or a callback function like array_walk. A: As of June 20th, PHP 5.5 has a new function, array_column For example: $records = array( array( 'id' => 2135, 'first_name' => 'John', 'last_name' => 'Doe' ), array( 'id' => 3245, 'first_name' => 'Sally', 'last_name' => 'Smith' ), array( 'id' => 5342, 'first_name' => 'Jane', 'last_name' => 'Jones' ), array( 'id' => 5623, 'first_name' => 'Peter', 'last_name' => 'Doe' ) ); $firstNames = array_column($records, 'first_name'); print_r($firstNames); Will return Array ( [0] => John [1] => Sally [2] => Jane [3] => Peter ) There are even more examples in the above mentioned link. A: I did more research on this and found that Ruby and Prototype both have a function that does this, called pluck. It's interesting that array_map has a second use that allows you to do the inverse of what I want to do here. I also found a PHP class someone is writing to emulate Prototype's manipulation of arrays. 
I'm going to do some more digging around and if I don't find anything else I'll work on a patch to submit to the internals@lists.php.net mailing list and see if they will add array_pluck. A: For those of you that cannot upgrade to PHP5.5 right now and need this function, here is an implementation of array_column. function array_column($array, $column){ $a2 = array(); array_map(function ($a1) use ($column, &$a2){ array_push($a2, $a1[$column]); }, $array); return $a2; } A: I voted @Devon's response up because there really isn't a way to do what you're asking with a built-in function. The best you can do is write your own: function array_column($array, $column) { $ret = array(); foreach ($array as $row) $ret[] = $row[$column]; return $ret; } A: If you are running a version of PHP before 5.5 and array_column(), you can use the official replacement in plain PHP: https://github.com/ramsey/array_column A: I think this will do what you want array_uintersect_uassoc You would have to do something like this $results = array( array('id' => 1, 'name' => 'red' , 'spin' => 1), array('id' => 2, 'name' => 'green', 'spin' => -1), array('id' => 3, 'name' => 'blue' , 'spin' => .5) ); $name = array_uintersect_uassoc( $results, array('name' => 'value') , 0, "cmpKey"); print_r($name); ////////////////////////////////////////////////// // FUNCTIONS ////////////////////////////////////////////////// function cmpKey($key1, $key2) { if ($key1 == $key2) { return 0; } else { return -1; } } However, I don't have access to PHP5 so I haven't tested this. 
A: You could do: $tmp = array_flip($names); $names = array_keys($tmp); A: This is a fast fallback implementation of array_column(): if(!function_exists('array_column')) { function array_column($array, $column_name) { return array_map(function($element) use ($column_name) { return $element[$column_name]; }, $array); } } A: Another alternative: function transpose(array $array): array { $out = array(); foreach ($array as $rowkey => $row) { foreach ($row as $colkey => $col) { $out[$colkey][$rowkey] = $col; } } return $out; } function filter_columns(array $arr, string ...$columns): array { return array_intersect_key($arr, array_flip($columns)); } Test: $results = array( array('id' => 1, 'name' => 'red' , 'spin' => 1), array('id' => 2, 'name' => 'green', 'spin' => -1), array('id' => 3, 'name' => 'blue' , 'spin' => .5) ); var_dump(filter_columns(transpose($results),'name')); var_dump(filter_columns(transpose($results),'id','name')); var_dump(filter_columns(transpose($results),'id','spin'));
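For readers comparing languages (one answer above notes that Ruby and Prototype have this operation built in), here is the same pluck-a-column operation and the question's generic every-field loop expressed in Python. This is purely an illustration, not part of the PHP answers:

```python
# Illustrative only: the question's "pluck a column" operation and the
# generic every-field loop, expressed in Python. The data mirrors the
# $results array from the question.
results = [
    {"id": 1, "name": "red",   "spin": 1},
    {"id": 2, "name": "green", "spin": -1},
    {"id": 3, "name": "blue",  "spin": 0.5},
]

# Equivalent of array_column($results, 'name')
names = [row["name"] for row in results]

# Equivalent of the nested foreach that collects every field
fields = {}
for row in results:
    for key, value in row.items():
        fields.setdefault(key, []).append(value)

print(names)         # ['red', 'green', 'blue']
print(fields["id"])  # [1, 2, 3]
```

The list comprehension plays the role of array_column, and setdefault plays the role of the nested foreach.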
{ "language": "en", "url": "https://stackoverflow.com/questions/163336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I read all feed items? I want to read all items of a feed in C#. The solutions I've found only cover the latest items, such as just the last 10 days. Does anybody have a good solution for this? A: Libraries for reading feeds typically read all the data in the feed, but feeds typically only contain recent data - you need a source of data that includes older items, not a better library for reading the data you have. Most entities publish feeds to allow people to track when new content is published, not to make all their data available in a more convenient machine-readable format. For this purpose, publishing recent data only makes sense as it saves on bandwidth. A: If you can tie into something like Google Reader, which archives old feed items (although I'm not sure whether it's a permanent archive or not), then perhaps you can accomplish this. A: Most RSS feeds are only written to deliver a relatively short period of time - 'all' items in a feed generally need you to have created your own archive over time. A: Extending thomas' answer, the two Google-related archives of feed data you can find are the official one: Google AJAX Feed API http://code.google.com/apis/ajaxfeeds/ which will limit you to 250 items, and the unofficial one: Google Reader API http://www.niallkennedy.com/blog/2005/12/google-reader-api.html which will give you unlimited (I think) items but you will need to work around their authentication (something with cookies) and pray they don't change or drop the API (as it is undocumented). A: I tried Google Reader, but their archive was incomplete. I know the people who run the blogs, so I just asked them for an export.
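The first answer's point is easy to demonstrate: parsing a feed yields exactly the items the publisher left in it, no more. A minimal sketch (in Python rather than C#, since the idea is language-neutral):

```python
# Illustrative sketch (Python rather than C#): parsing a feed yields
# exactly the <item> elements the publisher left in it -- no library
# can recover items that were already rotated out of the feed.
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <item><title>Post 3</title></item>
    <item><title>Post 2</title></item>
    <item><title>Post 1</title></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # ['Post 3', 'Post 2', 'Post 1']
```

Whatever reader library you use, older items have to come from an archive you (or someone else) maintained.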
{ "language": "en", "url": "https://stackoverflow.com/questions/163342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Subquery in an IN() clause causing error I'm on SQL Server 2005 and I am getting an error which I am pretty sure I should not be getting. Msg 512, Level 16, State 1, Procedure spGetSavedSearchesByAdminUser, Line 8 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. I am following example B on this MSDN link. My stored proc code is as follows. I can simplify it for the sake of this post if you request: ALTER PROCEDURE [dbo].[spGetSavedSearchesByAdminUser] @strUserName varchar(50) ,@bitQuickSearch bit = 0 AS BEGIN SELECT [intSearchID] ,strSearchTypeCode ,[strSearchName] FROM [tblAdminSearches] WHERE strUserName = @strUserName AND strSearchTypeCode IN ( CASE @bitQuickSearch WHEN 1 THEN 'Quick' ELSE (SELECT strSearchTypeCode FROM tblAdvanceSearchTypes) END ) ORDER BY strSearchName END I have checked that there is no datatype mismatch between the resultset from the subquery and the strSearchTypeCode the subquery result is compared with. I see no reason why this should not work. If you have any clues then please let me know. A: Try rearranging the query so that the boolean expression occurs inside the subselect, e.g. ALTER PROCEDURE [dbo].[spGetSavedSearchesByAdminUser] @strUserName varchar(50) ,@bitQuickSearch bit = 0 AS BEGIN SELECT [intSearchID] ,strSearchTypeCode ,[strSearchName] FROM [tblAdminSearches] WHERE strUserName = @strUserName AND strSearchTypeCode IN (SELECT strSearchTypeCode FROM tblAdvanceSearchTypes where @bitQuickSearch=0 UNION SELECT 'Quick' AS strSearchTypeCode WHERE @bitQuickSearch=1) ORDER BY strSearchName END A: I don't know that you can use the CASE statement inside of an IN clause like that.
I'd suggest rewriting that bit to: WHERE strUserName = @strUserName AND ( (@bitQuickSearch = 1 AND strSearchTypeCode = 'Quick') OR (strSearchTypeCode IN (SELECT strSearchTypeCode FROM tblAdvanceSearchTypes)) ) or, if you really like the style you got there: WHERE strUserName = @strUserName AND strSearchTypeCode IN ( SELECT CASE @bitQuickSearch WHEN 1 THEN 'Quick' ELSE strSearchTypeCode END FROM tblAdvanceSearchTypes ) In general, SQL Server should be smart enough to optimize away the table if @bitQuickSearch = 1. But I'd check the query plan just to be sure (trust, but verify). A: It seems to me that this SELECT: SELECT strSearchTypeCode FROM tblAdvanceSearchTypes returns multiple rows, and that is your problem. You can rewrite it to be: SELECT TOP 1 strSearchTypeCode FROM tblAdvanceSearchTypes
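The OR-based rewrite is easy to sanity-check on any SQL engine. The sketch below uses SQLite rather than SQL Server 2005 (so the @ parameters become ? placeholders), gates both branches on the flag so the result matches the original CASE intent, and uses made-up sample rows:

```python
# Sketch of the OR-based rewrite from the answers, checked against
# SQLite instead of SQL Server 2005. Table and column names come from
# the question; the sample rows are invented for the test.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tblAdminSearches (
        intSearchID INT, strSearchTypeCode TEXT,
        strSearchName TEXT, strUserName TEXT);
    CREATE TABLE tblAdvanceSearchTypes (strSearchTypeCode TEXT);
    INSERT INTO tblAdminSearches VALUES
        (1, 'Quick',   'q1', 'bob'),
        (2, 'Advance', 'a1', 'bob');
    INSERT INTO tblAdvanceSearchTypes VALUES ('Advance');
""")

def get_searches(user, quick):
    return con.execute("""
        SELECT intSearchID FROM tblAdminSearches
        WHERE strUserName = ?
          AND ( (? = 1 AND strSearchTypeCode = 'Quick')
             OR (? = 0 AND strSearchTypeCode IN
                  (SELECT strSearchTypeCode FROM tblAdvanceSearchTypes)) )
        ORDER BY strSearchName""", (user, quick, quick)).fetchall()

print(get_searches("bob", 1))  # [(1,)] -- only the 'Quick' search
print(get_searches("bob", 0))  # [(2,)] -- only the advanced-type searches
```

The same predicate shape drops into the stored procedure unchanged, with the ? placeholders replaced by @bitQuickSearch.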
{ "language": "en", "url": "https://stackoverflow.com/questions/163355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Regular expression to match URLs in Java I use RegexBuddy while working with regular expressions. From its library I copied the regular expression to match URLs. I tested it successfully within RegexBuddy. However, when I copied it as Java String flavor and pasted it into Java code, it does not work. The following class prints false: public class RegexFoo { public static void main(String[] args) { String regex = "\\b(https?|ftp|file)://[-A-Z0-9+&@#/%?=~_|!:,.;]*[-A-Z0-9+&@#/%=~_|]"; String text = "http://google.com"; System.out.println(IsMatch(text,regex)); } private static boolean IsMatch(String s, String pattern) { try { Pattern patt = Pattern.compile(pattern); Matcher matcher = patt.matcher(s); return matcher.matches(); } catch (RuntimeException e) { return false; } } } Does anyone know what I am doing wrong? A: I'll try a standard "Why are you doing it this way?" answer... Do you know about java.net.URL? URL url = new URL(stringURL); The above will throw a MalformedURLException if it can't parse the URL. A: The problem with all the suggested approaches: they are all validating. RegEx-based code is over-engineered: it will find only valid URLs! As a sample, it will ignore anything starting with "http://" and having non-ASCII characters inside. Even more: I have encountered 1-2-second processing times (single-threaded, dedicated) with the Java RegEx package (filtering Email addresses from text) for very small and simple sentences, nothing specific; possibly a bug in Java 6 RegEx... The simplest/fastest solution would be to use StringTokenizer to split the text into tokens, to remove tokens starting with "http://" etc., and to concatenate the tokens into text again. If you want to filter Emails from text (because later on you will do NLP stuff etc.) - just remove all tokens containing "@" inside. This is simple text where the RegEx of Java 6 fails. Try it in different variants of Java.
It takes about 1000 milliseconds per RegEx call, in a long-running, single-threaded test application: pattern = Pattern.compile("[A-Za-z0-9](([_\\.\\-]?[a-zA-Z0-9]+)*)@([A-Za-z0-9]+)(([\\.\\-]?[a-zA-Z0-9]+)*)\\.([A-Za-z]{2,})", Pattern.CASE_INSENSITIVE); "Avalanna is such a sweet little girl! It would b heartbreaking if cancer won. She's so precious! #BeliebersPrayForAvalanna"); "@AndySamuels31 Hahahahahahahahahhaha lol, you don't look like a girl hahahahhaahaha, you are... sexy."; Do not rely on regular expressions if you only need to filter words with "@", "http://", "ftp://", "mailto:"; it is huge engineering overhead. If you really want to use RegEx with Java, try Automaton. A: This works too: String regex = "\\b(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]"; Note: String regex = "<\\b(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]>"; // matches <http://google.com> String regex = "<^(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]>"; // does not match <http://google.com> So probably the first one is more useful for general use. A: In line with billjamesdev's answer, here is another approach to validate a URL without using a RegEx: from the Apache Commons Validator lib, look at class UrlValidator. Some example code: Construct a UrlValidator with valid schemes of "http" and "https". String[] schemes = {"http","https"}; UrlValidator urlValidator = new UrlValidator(schemes); if (urlValidator.isValid("ftp://foo.bar.com/")) { System.out.println("url is valid"); } else { System.out.println("url is invalid"); } prints "url is invalid" If instead the default constructor is used.
UrlValidator urlValidator = new UrlValidator(); if (urlValidator.isValid("ftp://foo.bar.com/")) { System.out.println("url is valid"); } else { System.out.println("url is invalid"); } prints out "url is valid" A: ((http?|https|ftp|file)://)?((W|w){3}.)?[a-zA-Z0-9]+\.[a-zA-Z]+ check here: https://www.freeformatter.com/java-regex-tester.html#ad-output It sorts out these entries correctly * *google.com *www.google.com *wwwgooglecom *ft. *Www.google.com *.ft *https://www.google.com *https:// *https://www. *https://google.com A: Try the following regex string instead. Your test was probably done in a case-sensitive manner. I have added the lowercase alphas as well as a proper string beginning placeholder. String regex = "^(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]"; This works too: String regex = "\\b(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]"; Note: String regex = "<\\b(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]>"; // matches <http://google.com> String regex = "<^(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*[-a-zA-Z0-9+&@#/%=~_|]>"; // does not match <http://google.com> A: The best way to do it now is: android.util.Patterns.WEB_URL.matcher(linkUrl).matches(); EDIT: Code of Patterns from https://github.com/android/platform_frameworks_base/blob/master/core/java/android/util/Patterns.java : /* * Copyright (C) 2007 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ package android.util; import java.util.regex.Matcher; import java.util.regex.Pattern; /** * Commonly used regular expression patterns. */ public class Patterns { /** * Regular expression to match all IANA top-level domains. * List accurate as of 2011/07/18. List taken from: * http://data.iana.org/TLD/tlds-alpha-by-domain.txt * This pattern is auto-generated by frameworks/ex/common/tools/make-iana-tld-pattern.py * * @deprecated Due to the recent profileration of gTLDs, this API is * expected to become out-of-date very quickly. Therefore it is now * deprecated. */ @Deprecated public static final String TOP_LEVEL_DOMAIN_STR = "((aero|arpa|asia|a[cdefgilmnoqrstuwxz])" + "|(biz|b[abdefghijmnorstvwyz])" + "|(cat|com|coop|c[acdfghiklmnoruvxyz])" + "|d[ejkmoz]" + "|(edu|e[cegrstu])" + "|f[ijkmor]" + "|(gov|g[abdefghilmnpqrstuwy])" + "|h[kmnrtu]" + "|(info|int|i[delmnoqrst])" + "|(jobs|j[emop])" + "|k[eghimnprwyz]" + "|l[abcikrstuvy]" + "|(mil|mobi|museum|m[acdeghklmnopqrstuvwxyz])" + "|(name|net|n[acefgilopruz])" + "|(org|om)" + "|(pro|p[aefghklmnrstwy])" + "|qa" + "|r[eosuw]" + "|s[abcdeghijklmnortuvyz]" + "|(tel|travel|t[cdfghjklmnoprtvwz])" + "|u[agksyz]" + "|v[aceginu]" + "|w[fs]" + 
"|(\u03b4\u03bf\u03ba\u03b9\u03bc\u03ae|\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435|\u0440\u0444|\u0441\u0440\u0431|\u05d8\u05e2\u05e1\u05d8|\u0622\u0632\u0645\u0627\u06cc\u0634\u06cc|\u0625\u062e\u062a\u0628\u0627\u0631|\u0627\u0644\u0627\u0631\u062f\u0646|\u0627\u0644\u062c\u0632\u0627\u0626\u0631|\u0627\u0644\u0633\u0639\u0648\u062f\u064a\u0629|\u0627\u0644\u0645\u063a\u0631\u0628|\u0627\u0645\u0627\u0631\u0627\u062a|\u0628\u06be\u0627\u0631\u062a|\u062a\u0648\u0646\u0633|\u0633\u0648\u0631\u064a\u0629|\u0641\u0644\u0633\u0637\u064a\u0646|\u0642\u0637\u0631|\u0645\u0635\u0631|\u092a\u0930\u0940\u0915\u094d\u0937\u093e|\u092d\u093e\u0930\u0924|\u09ad\u09be\u09b0\u09a4|\u0a2d\u0a3e\u0a30\u0a24|\u0aad\u0abe\u0ab0\u0aa4|\u0b87\u0ba8\u0bcd\u0ba4\u0bbf\u0baf\u0bbe|\u0b87\u0bb2\u0b99\u0bcd\u0b95\u0bc8|\u0b9a\u0bbf\u0b99\u0bcd\u0b95\u0baa\u0bcd\u0baa\u0bc2\u0bb0\u0bcd|\u0baa\u0bb0\u0bbf\u0b9f\u0bcd\u0b9a\u0bc8|\u0c2d\u0c3e\u0c30\u0c24\u0c4d|\u0dbd\u0d82\u0d9a\u0dcf|\u0e44\u0e17\u0e22|\u30c6\u30b9\u30c8|\u4e2d\u56fd|\u4e2d\u570b|\u53f0\u6e7e|\u53f0\u7063|\u65b0\u52a0\u5761|\u6d4b\u8bd5|\u6e2c\u8a66|\u9999\u6e2f|\ud14c\uc2a4\ud2b8|\ud55c\uad6d|xn\\-\\-0zwm56d|xn\\-\\-11b5bs3a9aj6g|xn\\-\\-3e0b707e|xn\\-\\-45brj9c|xn\\-\\-80akhbyknj4f|xn\\-\\-90a3ac|xn\\-\\-9t4b11yi5a|xn\\-\\-clchc0ea0b2g2a9gcd|xn\\-\\-deba0ad|xn\\-\\-fiqs8s|xn\\-\\-fiqz9s|xn\\-\\-fpcrj9c3d|xn\\-\\-fzc2c9e2c|xn\\-\\-g6w251d|xn\\-\\-gecrj9c|xn\\-\\-h2brj9c|xn\\-\\-hgbk6aj7f53bba|xn\\-\\-hlcj6aya9esc7a|xn\\-\\-j6w193g|xn\\-\\-jxalpdlp|xn\\-\\-kgbechtv|xn\\-\\-kprw13d|xn\\-\\-kpry57d|xn\\-\\-lgbbat1ad8j|xn\\-\\-mgbaam7a8h|xn\\-\\-mgbayh7gpa|xn\\-\\-mgbbh1a71e|xn\\-\\-mgbc0a9azcg|xn\\-\\-mgberp4a5d4ar|xn\\-\\-o3cw4h|xn\\-\\-ogbpf8fl|xn\\-\\-p1ai|xn\\-\\-pgbs0dh|xn\\-\\-s9brj9c|xn\\-\\-wgbh1c|xn\\-\\-wgbl6a|xn\\-\\-xkc2al3hye2a|xn\\-\\-xkc2dl3a5ee0h|xn\\-\\-yfro4i67o|xn\\-\\-ygbi2ammx|xn\\-\\-zckzah|xxx)" + "|y[et]" + "|z[amw])"; /** * Regular expression pattern to match all IANA top-level domains. 
* @deprecated This API is deprecated. See {@link #TOP_LEVEL_DOMAIN_STR}. */ @Deprecated public static final Pattern TOP_LEVEL_DOMAIN = Pattern.compile(TOP_LEVEL_DOMAIN_STR); /** * Regular expression to match all IANA top-level domains for WEB_URL. * List accurate as of 2011/07/18. List taken from: * http://data.iana.org/TLD/tlds-alpha-by-domain.txt * This pattern is auto-generated by frameworks/ex/common/tools/make-iana-tld-pattern.py * * @deprecated This API is deprecated. See {@link #TOP_LEVEL_DOMAIN_STR}. */ @Deprecated public static final String TOP_LEVEL_DOMAIN_STR_FOR_WEB_URL = "(?:" + "(?:aero|arpa|asia|a[cdefgilmnoqrstuwxz])" + "|(?:biz|b[abdefghijmnorstvwyz])" + "|(?:cat|com|coop|c[acdfghiklmnoruvxyz])" + "|d[ejkmoz]" + "|(?:edu|e[cegrstu])" + "|f[ijkmor]" + "|(?:gov|g[abdefghilmnpqrstuwy])" + "|h[kmnrtu]" + "|(?:info|int|i[delmnoqrst])" + "|(?:jobs|j[emop])" + "|k[eghimnprwyz]" + "|l[abcikrstuvy]" + "|(?:mil|mobi|museum|m[acdeghklmnopqrstuvwxyz])" + "|(?:name|net|n[acefgilopruz])" + "|(?:org|om)" + "|(?:pro|p[aefghklmnrstwy])" + "|qa" + "|r[eosuw]" + "|s[abcdeghijklmnortuvyz]" + "|(?:tel|travel|t[cdfghjklmnoprtvwz])" + "|u[agksyz]" + "|v[aceginu]" + "|w[fs]" + 
"|(?:\u03b4\u03bf\u03ba\u03b9\u03bc\u03ae|\u0438\u0441\u043f\u044b\u0442\u0430\u043d\u0438\u0435|\u0440\u0444|\u0441\u0440\u0431|\u05d8\u05e2\u05e1\u05d8|\u0622\u0632\u0645\u0627\u06cc\u0634\u06cc|\u0625\u062e\u062a\u0628\u0627\u0631|\u0627\u0644\u0627\u0631\u062f\u0646|\u0627\u0644\u062c\u0632\u0627\u0626\u0631|\u0627\u0644\u0633\u0639\u0648\u062f\u064a\u0629|\u0627\u0644\u0645\u063a\u0631\u0628|\u0627\u0645\u0627\u0631\u0627\u062a|\u0628\u06be\u0627\u0631\u062a|\u062a\u0648\u0646\u0633|\u0633\u0648\u0631\u064a\u0629|\u0641\u0644\u0633\u0637\u064a\u0646|\u0642\u0637\u0631|\u0645\u0635\u0631|\u092a\u0930\u0940\u0915\u094d\u0937\u093e|\u092d\u093e\u0930\u0924|\u09ad\u09be\u09b0\u09a4|\u0a2d\u0a3e\u0a30\u0a24|\u0aad\u0abe\u0ab0\u0aa4|\u0b87\u0ba8\u0bcd\u0ba4\u0bbf\u0baf\u0bbe|\u0b87\u0bb2\u0b99\u0bcd\u0b95\u0bc8|\u0b9a\u0bbf\u0b99\u0bcd\u0b95\u0baa\u0bcd\u0baa\u0bc2\u0bb0\u0bcd|\u0baa\u0bb0\u0bbf\u0b9f\u0bcd\u0b9a\u0bc8|\u0c2d\u0c3e\u0c30\u0c24\u0c4d|\u0dbd\u0d82\u0d9a\u0dcf|\u0e44\u0e17\u0e22|\u30c6\u30b9\u30c8|\u4e2d\u56fd|\u4e2d\u570b|\u53f0\u6e7e|\u53f0\u7063|\u65b0\u52a0\u5761|\u6d4b\u8bd5|\u6e2c\u8a66|\u9999\u6e2f|\ud14c\uc2a4\ud2b8|\ud55c\uad6d|xn\\-\\-0zwm56d|xn\\-\\-11b5bs3a9aj6g|xn\\-\\-3e0b707e|xn\\-\\-45brj9c|xn\\-\\-80akhbyknj4f|xn\\-\\-90a3ac|xn\\-\\-9t4b11yi5a|xn\\-\\-clchc0ea0b2g2a9gcd|xn\\-\\-deba0ad|xn\\-\\-fiqs8s|xn\\-\\-fiqz9s|xn\\-\\-fpcrj9c3d|xn\\-\\-fzc2c9e2c|xn\\-\\-g6w251d|xn\\-\\-gecrj9c|xn\\-\\-h2brj9c|xn\\-\\-hgbk6aj7f53bba|xn\\-\\-hlcj6aya9esc7a|xn\\-\\-j6w193g|xn\\-\\-jxalpdlp|xn\\-\\-kgbechtv|xn\\-\\-kprw13d|xn\\-\\-kpry57d|xn\\-\\-lgbbat1ad8j|xn\\-\\-mgbaam7a8h|xn\\-\\-mgbayh7gpa|xn\\-\\-mgbbh1a71e|xn\\-\\-mgbc0a9azcg|xn\\-\\-mgberp4a5d4ar|xn\\-\\-o3cw4h|xn\\-\\-ogbpf8fl|xn\\-\\-p1ai|xn\\-\\-pgbs0dh|xn\\-\\-s9brj9c|xn\\-\\-wgbh1c|xn\\-\\-wgbl6a|xn\\-\\-xkc2al3hye2a|xn\\-\\-xkc2dl3a5ee0h|xn\\-\\-yfro4i67o|xn\\-\\-ygbi2ammx|xn\\-\\-zckzah|xxx)" + "|y[et]" + "|z[amw]))"; /** * Good characters for Internationalized Resource Identifiers 
(IRI). * This comprises most common used Unicode characters allowed in IRI * as detailed in RFC 3987. * Specifically, those two byte Unicode characters are not included. */ public static final String GOOD_IRI_CHAR = "a-zA-Z0-9\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF"; public static final Pattern IP_ADDRESS = Pattern.compile( "((25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}|[1-9][0-9]|[1-9])\\.(25[0-5]|2[0-4]" + "[0-9]|[0-1][0-9]{2}|[1-9][0-9]|[1-9]|0)\\.(25[0-5]|2[0-4][0-9]|[0-1]" + "[0-9]{2}|[1-9][0-9]|[1-9]|0)\\.(25[0-5]|2[0-4][0-9]|[0-1][0-9]{2}" + "|[1-9][0-9]|[0-9]))"); /** * RFC 1035 Section 2.3.4 limits the labels to a maximum 63 octets. */ private static final String IRI = "[" + GOOD_IRI_CHAR + "]([" + GOOD_IRI_CHAR + "\\-]{0,61}[" + GOOD_IRI_CHAR + "]){0,1}"; private static final String GOOD_GTLD_CHAR = "a-zA-Z\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF"; private static final String GTLD = "[" + GOOD_GTLD_CHAR + "]{2,63}"; private static final String HOST_NAME = "(" + IRI + "\\.)+" + GTLD; public static final Pattern DOMAIN_NAME = Pattern.compile("(" + HOST_NAME + "|" + IP_ADDRESS + ")"); /** * Regular expression pattern to match most part of RFC 3987 * Internationalized URLs, aka IRIs. Commonly used Unicode characters are * added. */ public static final Pattern WEB_URL = Pattern.compile( "((?:(http|https|Http|Https|rtsp|Rtsp):\\/\\/(?:(?:[a-zA-Z0-9\\$\\-\\_\\.\\+\\!\\*\\'\\(\\)" + "\\,\\;\\?\\&\\=]|(?:\\%[a-fA-F0-9]{2})){1,64}(?:\\:(?:[a-zA-Z0-9\\$\\-\\_" + "\\.\\+\\!\\*\\'\\(\\)\\,\\;\\?\\&\\=]|(?:\\%[a-fA-F0-9]{2})){1,25})?\\@)?)?" + "(?:" + DOMAIN_NAME + ")" + "(?:\\:\\d{1,5})?)" // plus option port number + "(\\/(?:(?:[" + GOOD_IRI_CHAR + "\\;\\/\\?\\:\\@\\&\\=\\#\\~" // plus option query params + "\\-\\.\\+\\!\\*\\'\\(\\)\\,\\_])|(?:\\%[a-fA-F0-9]{2}))*)?" + "(?:\\b|$)"); // and finally, a word boundary or end of // input. 
This is to stop foo.sure from // matching as foo.su public static final Pattern EMAIL_ADDRESS = Pattern.compile( "[a-zA-Z0-9\\+\\.\\_\\%\\-\\+]{1,256}" + "\\@" + "[a-zA-Z0-9][a-zA-Z0-9\\-]{0,64}" + "(" + "\\." + "[a-zA-Z0-9][a-zA-Z0-9\\-]{0,25}" + ")+" ); /** * This pattern is intended for searching for things that look like they * might be phone numbers in arbitrary text, not for validating whether * something is in fact a phone number. It will miss many things that * are legitimate phone numbers. * * <p> The pattern matches the following: * <ul> * <li>Optionally, a + sign followed immediately by one or more digits. Spaces, dots, or dashes * may follow. * <li>Optionally, sets of digits in parentheses, separated by spaces, dots, or dashes. * <li>A string starting and ending with a digit, containing digits, spaces, dots, and/or dashes. * </ul> */ public static final Pattern PHONE = Pattern.compile( // sdd = space, dot, or dash "(\\+[0-9]+[\\- \\.]*)?" // +<digits><sdd>* + "(\\([0-9]+\\)[\\- \\.]*)?" // (<digits>)<sdd>* + "([0-9][0-9\\- \\.]+[0-9])"); // <digit><digit|sdd>+<digit> /** * Convenience method to take all of the non-null matching groups in a * regex Matcher and return them as a concatenated string. * * @param matcher The Matcher object from which grouped text will * be extracted * * @return A String comprising all of the non-null matched * groups concatenated together */ public static final String concatGroups(Matcher matcher) { StringBuilder b = new StringBuilder(); final int numGroups = matcher.groupCount(); for (int i = 1; i <= numGroups; i++) { String s = matcher.group(i); if (s != null) { b.append(s); } } return b.toString(); } /** * Convenience method to return only the digits and plus signs * in the matching string. 
* * @param matcher The Matcher object from which digits and plus will * be extracted * * @return A String comprising all of the digits and plus in * the match */ public static final String digitsAndPlusOnly(Matcher matcher) { StringBuilder buffer = new StringBuilder(); String matchingRegion = matcher.group(); for (int i = 0, size = matchingRegion.length(); i < size; i++) { char character = matchingRegion.charAt(i); if (character == '+' || Character.isDigit(character)) { buffer.append(character); } } return buffer.toString(); } /** * Do not create this static utility class. */ private Patterns() {} } A: When using regular expressions from RegexBuddy's library, make sure to use the same matching modes in your own code as the regex from the library. If you generate a source code snippet on the Use tab, RegexBuddy will automatically set the correct matching options in the source code snippet. If you copy/paste the regex, you have to do that yourself. In this case, as others pointed out, you missed the case insensitivity option. A: Here is a proposal for a URL parser regex that recognizes: * *Protocol *Host *Port *Path (Document/folder) *Get parameters ^(?>(?<protocol>[[:alpha:]]+(?>\:[[:alpha:]]+)*)\:\/\/)?(?<host>(?>[[:alnum:]]|[-_.])+)(?>\:(?<port>[[:digit:]]+))?(?<path>\/(?>[[:alnum:]]|[-_.\/])*)?(?>\?(?<request>(?>[[:alnum:]]+=[[:alnum:]]+)(?>\&(?>[[:alnum:]]+=[[:alnum:]]+))*))?$ This regex is able to parse a URL such as: jdbc:hsqldb:hsql://localhost:91/index. There can be many ways to engineer a URL regex, so the one I propose can be easily adapted to match more accurate URL grammars. It can be tested on the following page: https://regex101.com/r/Dy7HE0/5 Be aware that languages' native regex APIs (such as java.util.regex) don't support POSIX character classes such as [[:alnum:]] and [[:alpha:]]. Use \w and \d instead.
A: First, a regex example: regex = "((http|https)://)(www.)?" + "[a-zA-Z0-9@:%._\\+~#?&//=]{2,256}\\.[a-z]" + "{2,6}\\b([-a-zA-Z0-9@:%._\\+~#?&//=]*)" *The URL must start with either http or https and *then followed by :// and *then it must contain www. and *then followed by subdomain of length (2, 256) and *last part contains top level domain like .com, .org etc. In Java // Java program to check URL is valid or not // using Regular Expression import java.util.regex.*; class GFG { // Function to validate URL // using regular expression public static boolean isValidURL(String url) { // Regex to check valid URL String regex = "((http|https)://)(www.)?" + "[a-zA-Z0-9@:%._\\+~#?&//=]" + "{2,256}\\.[a-z]" + "{2,6}\\b([-a-zA-Z0-9@:%" + "._\\+~#?&//=]*)"; // Compile the ReGex Pattern p = Pattern.compile(regex); // If the string is empty // return false if (url == null) { return false; } // Find match between given string // and regular expression // using Pattern.matcher() Matcher m = p.matcher(url); // Return if the string // matched the ReGex return m.matches(); } // Driver code public static void main(String args[]) { String url = "https://www.superDev.org"; if (isValidURL(url) == true) { System.out.println("Yes"); } else System.out.println("NO"); } } In Python 3 # Python3 program to check # URL is valid or not # using regular expression import re # Function to validate URL # using regular expression def isValidURL(str): # Regex to check valid URL regex = ("((http|https)://)(www.)?"
+ "[a-zA-Z0-9@:%._\\+~#?&//=]" + "{2,256}\\.[a-z]" + "{2,6}\\b([-a-zA-Z0-9@:%" + "._\\+~#?&//=]*)") # Compile the ReGex p = re.compile(regex) # If the string is empty # return false if (str == None): return False # Return if the string # matched the ReGex if(re.search(p, str)): return True else: return False # Driver code # Test Case 1: url = "https://www.superDev.org" if(isValidURL(url) == True): print("Yes") else: print("No") A: This regular expression works for me: String regex = "(https?://|www\\.)[-a-zA-Z0-9+&@#/%?=~_|!:.;]*[-a-zA-Z0-9+&@#/%=~_|]";
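Whichever pattern you settle on, a small harness of positive and negative cases catches mistakes like the case-sensitivity bug in the question. A sketch using Python's re module (java.util.regex and re agree on the constructs used here, though the two engines are not identical in general):

```python
# Sketch: a positive/negative test harness around the mixed-case
# variant of the question's pattern, using Python's re module.
import re

pattern = re.compile(
    r"\b(https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:,.;]*"
    r"[-a-zA-Z0-9+&@#/%=~_|]"
)

cases = [
    ("http://google.com", True),
    ("https://example.com/path?q=1", True),
    ("ftp://files.example.org", True),
    ("not a url", False),
]
for text, expected in cases:
    assert bool(pattern.fullmatch(text)) == expected
print("all", len(cases), "cases behave as expected")
```

fullmatch plays the role of Java's Matcher.matches(); use search/find semantics instead if you want to extract URLs embedded in larger text.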
{ "language": "en", "url": "https://stackoverflow.com/questions/163360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "100" }
Q: How do I create an Excel chart that pulls data from multiple sheets? I have monthly sales figures stored in separate sheets. I would like to create a plot of sales for multiple products per month. Each product would be represented in a different colored line on the same chart with each month running along the x axis. What is the best way to create a single line chart that pulls from the same relative cells on multiple sheets? A: Use the Chart Wizard. On Step 2 of 4, there is a tab labeled "Series". There are 3 fields and a list box on this tab. The list box shows the different series you are already including on the chart. Each series has both a "Name" field and a "Values" field that is specific to that series. The final field is the "Category (X) axis labels" field, which is common to all series. Click on the "Add" button below the list box. This will add a blank series to your list box. Notice that the values for "Name" and for "Values" change when you highlight a series in the list box. Select your new series. There is an icon in each field on the right side. This icon allows you to select cells in the workbook to pull the data from. When you click it, the Wizard temporarily hides itself (except for the field you are working in) allowing you to interact with the workbook. Select the appropriate sheet in the workbook and then select the fields with the data you want to show in the chart. The button on the right of the field can be clicked to unhide the wizard. Hope that helps. EDIT: The above applies to 2003 and before. For 2007, when the chart is selected, you should be able to do a similar action using the "Select Data" option on the "Design" tab of the ribbon. This opens up a dialog box listing the Series for the chart. You can select the series just as you could in Excel 2003, but you must use the "Add" and "Edit" buttons to define custom series. A: Here's some code from Excel 2010 that may work. 
It has a couple specifics (like filtering bad-encode characters from titles) but it was designed to create multiple multi-series graphs from 4-dimensional data having both absolute and percentage-based data. Modify it how you like: Sub createAllGraphs() Const chartWidth As Integer = 260 Const chartHeight As Integer = 200 If Sheets.Count = 1 Then Sheets.Add , Sheets(1) Sheets(2).Name = "AllCharts" ElseIf Sheets("AllCharts").ChartObjects.Count > 0 Then Sheets("AllCharts").ChartObjects.Delete End If Dim c As Variant Dim c2 As Variant Dim cs As Object Set cs = Sheets("AllCharts") Dim s As Object Set s = Sheets(1) Dim i As Integer Dim chartX As Integer Dim chartY As Integer Dim r As Integer r = 2 Dim curA As String curA = s.Range("A" & r) Dim curB As String Dim curC As String Dim startR As Integer startR = 2 Dim lastTime As Boolean lastTime = False Do While s.Range("A" & r) <> "" If curC <> s.Range("C" & r) Then If r <> 2 Then seriesAdd: c.SeriesCollection.Add s.Range("D" & startR & ":E" & (r - 1)), , False, True c.SeriesCollection(c.SeriesCollection.Count).Name = Replace(s.Range("C" & startR), "Â", "") c.SeriesCollection(c.SeriesCollection.Count).XValues = "='" & s.Name & "'!$D$" & startR & ":$D$" & (r - 1) c.SeriesCollection(c.SeriesCollection.Count).Values = "='" & s.Name & "'!$E$" & startR & ":$E$" & (r - 1) c.SeriesCollection(c.SeriesCollection.Count).HasErrorBars = True c.SeriesCollection(c.SeriesCollection.Count).ErrorBars.Select c.SeriesCollection(c.SeriesCollection.Count).ErrorBar Direction:=xlY, Include:=xlBoth, Type:=xlCustom, Amount:="='" & s.Name & "'!$F$" & startR & ":$F$" & (r - 1), minusvalues:="='" & s.Name & "'!$F$" & startR & ":$F$" & (r - 1) c.SeriesCollection(c.SeriesCollection.Count).ErrorBar Direction:=xlX, Include:=xlBoth, Type:=xlFixedValue, Amount:=0 c2.SeriesCollection.Add s.Range("D" & startR & ":D" & (r - 1) & ",G" & startR & ":G" & (r - 1)), , False, True c2.SeriesCollection(c2.SeriesCollection.Count).Name = Replace(s.Range("C" & startR), 
"Â", "") c2.SeriesCollection(c2.SeriesCollection.Count).XValues = "='" & s.Name & "'!$D$" & startR & ":$D$" & (r - 1) c2.SeriesCollection(c2.SeriesCollection.Count).Values = "='" & s.Name & "'!$G$" & startR & ":$G$" & (r - 1) c2.SeriesCollection(c2.SeriesCollection.Count).HasErrorBars = True c2.SeriesCollection(c2.SeriesCollection.Count).ErrorBars.Select c2.SeriesCollection(c2.SeriesCollection.Count).ErrorBar Direction:=xlY, Include:=xlBoth, Type:=xlCustom, Amount:="='" & s.Name & "'!$H$" & startR & ":$H$" & (r - 1), minusvalues:="='" & s.Name & "'!$H$" & startR & ":$H$" & (r - 1) c2.SeriesCollection(c2.SeriesCollection.Count).ErrorBar Direction:=xlX, Include:=xlBoth, Type:=xlFixedValue, Amount:=0 If lastTime = True Then GoTo postLoop End If If curB <> s.Range("B" & r).Value Then If curA <> s.Range("A" & r).Value Then chartX = chartX + chartWidth * 2 chartY = 0 curA = s.Range("A" & r) End If Set c = cs.ChartObjects.Add(chartX, chartY, chartWidth, chartHeight) Set c = c.Chart c.ChartWizard , xlXYScatterSmooth, , , , , True, Replace(s.Range("B" & r), "Â", "") & " " & s.Range("A" & r), s.Range("D1"), s.Range("E1") Set c2 = cs.ChartObjects.Add(chartX + chartWidth, chartY, chartWidth, chartHeight) Set c2 = c2.Chart c2.ChartWizard , xlXYScatterSmooth, , , , , True, Replace(s.Range("B" & r), "Â", "") & " " & s.Range("A" & r) & " (%)", s.Range("D1"), s.Range("G1") chartY = chartY + chartHeight curB = s.Range("B" & r) curC = s.Range("C" & r) End If curC = s.Range("C" & r) startR = r End If If s.Range("A" & r) <> "" Then oneMoreTime = False ' end the loop for real this time r = r + 1 Loop lastTime = True GoTo seriesAdd postLoop: cs.Activate End Sub A: 2007 is more powerful with ribbon..:=) To add new series in chart do: Select Chart, then click Design in Chart Tools on the ribbon, On the Design ribbon, select "Select Data" in Data Group, Then you will see the button for Add to add new series. Hope that will help.
{ "language": "en", "url": "https://stackoverflow.com/questions/163363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to set up VS2008 for efficient C++ development Normally I program in C# but have been forced to do some work in C++. It seems that the integration with Visual Studio (2008) is really poor compared to C# but I was wondering if there are any good tools, plugins or configurations that can improve the situation. Another post pointed out the program Visual Assist X, which at least helps with some things such as refactoring (though it is a bit expensive for me). My major problem is, though, that the compile errors give little clue about what is wrong and I spend most of my time figuring out what I did wrong. It just feels like it is possible to statically check for a lot more errors than VS does out of the box. And why doesn't it provide the blue underlines as with C#, that shouldn't be too hard?! I realize that half the problem is just the fact that I am new to C++ but I really feel that it can be unreasonably hard to get a program to compile. Are there any tools of this sort out there or are my demands too high? A: I think there are two possibilities: 1) either you're trying out C++ stuff that is waaay over your knowledge (and consequently, you don't know what you did wrong and how to interpret error messages), 2) you have too high expectations. A hint: many subsequent errors are caused by the first error. When I get a huge list of errors, I usually correct just the first error and recompile. You'd be amazed how much garbage (in terms of error messages) a missing delimiter or type declaration could produce :) It is difficult to syntactically analyze a C++ program before compilation mainly for two reasons: 1) the C++ grammar is context-dependent, 2) templates are Turing-complete (think of them as a functional programming language with a weird syntax). A: My suggestions: * *If you want more features like you get in C#, get VisualAssist X, and learn how to use it. It isn't free but it can save you a lot of time.
*Set your warning level high (this will initially generate more compile errors but as you fix them, you'll get a feel for common mistakes). *Set warnings as errors so you don't get in the habit of ignoring warnings. *To understand compile errors, use Google (don't waste your time with the help system) to search on warning/error numbers (they look like this: C4127). *Avoid templates until you get your code compiling without errors using the above methods. If you don't know templates well, study! Get some books, do some tutorials and start small. Template compile errors are notoriously hard to figure out. Visual C++ 2008 has much better error messages than previous versions but it's still hard. *If you start doing templates in earnest, get a wide-screen monitor (maybe even two) to make reading the verbose errors easier. A: +1 for Visual Assist, maybe not now - but when you turn the hobby into a profession you will need it. In my experience, the diagnostics are already much better than in VC6, but you will need to "learn" their true meaning as part of learning the IDE. Static checking of C++ is much more complicated than C#, due to the build model, and the incredibly more complex language. PC-Lint (best together with Visual Lint to integrate it into the IDE) is the canonical static analysis tool. Not cheap either, though... The C++ standard sometimes reads like scripture, but without a trained preacher to interpret it. One excellent interpreter is Marshall Cline with his C++ FAQ. Note that the online FAQ, while extensive, covers much less than the book. What helped me a lot in understanding complex error messages is trying to reproduce the problem in a smaller environment - but then, there was no internet back then...
{ "language": "en", "url": "https://stackoverflow.com/questions/163364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I make a C++ macro behave like a function? Let's say that for some reason you need to write a macro: MACRO(X,Y). (Let's assume there's a good reason you can't use an inline function.) You want this macro to emulate a call to a function with no return value. Example 1: This should work as expected. if (x > y) MACRO(x, y); do_something(); Example 2: This should not result in a compiler error. if (x > y) MACRO(x, y); else MACRO(y - x, x - y); Example 3: This should not compile. do_something(); MACRO(x, y) do_something(); The naïve way to write the macro is like this: #define MACRO(X,Y) \ cout << "1st arg is:" << (X) << endl; \ cout << "2nd arg is:" << (Y) << endl; \ cout << "Sum is:" << ((X)+(Y)) << endl; This is a very bad solution which fails all three examples, and I shouldn't need to explain why. Ignore what the macro actually does, that's not the point. Now, the way I most often see macros written is to enclose them in curly braces, like this: #define MACRO(X,Y) \ { \ cout << "1st arg is:" << (X) << endl; \ cout << "2nd arg is:" << (Y) << endl; \ cout << "Sum is:" << ((X)+(Y)) << endl; \ } This solves example 1, because the macro is in one statement block. But example 2 is broken because we put a semicolon after the call to the macro. This makes the compiler think the semicolon is a statement by itself, which means the else statement doesn't correspond to any if statement! And lastly, example 3 compiles OK, even though there is no semicolon, because a code block doesn't need a semicolon. Is there a way to write a macro so that it passes all three examples? Note: I am submitting my own answer as part of the accepted way of sharing a tip, but if anyone has a better solution feel free to post it here, it may get more votes than my method. 
:) A: There is a rather clever solution: #define MACRO(X,Y) \ do { \ cout << "1st arg is:" << (X) << endl; \ cout << "2nd arg is:" << (Y) << endl; \ cout << "Sum is:" << ((X)+(Y)) << endl; \ } while (0) Now you have a single block-level statement, which must be followed by a semicolon. This behaves as expected and desired in all three examples. A: Macros should generally be avoided; prefer inline functions to them at all times. Any compiler worth its salt should be capable of inlining a small function as if it were a macro, and an inline function will respect namespaces and other scopes, as well as evaluating all the arguments once. If it must be a macro, a while loop (already suggested) will work, or you can try the comma operator: #define MACRO(X,Y) \ ( \ (cout << "1st arg is:" << (X) << endl), \ (cout << "2nd arg is:" << (Y) << endl), \ (cout << "3rd arg is:" << ((X) + (Y)) << endl), \ (void)0 \ ) The (void)0 causes the statement to evaluate to one of void type, and the use of commas rather than semicolons allows it to be used inside a statement, rather than only as a standalone. I would still recommend an inline function for a host of reasons, the least of which being scope and the fact that MACRO(a++, b++) will increment a and b twice. A: Create a block using #define MACRO(...) do { ... } while(false) Do not add a ; after the while(false) A: Your answer suffers from the multiple-evaluation problem, so (eg) macro( read_int(file1), read_int(file2) ); will do something unexpected and probably unwanted. A: I know you said "ignore what the macro does", but people will find this question by searching based on the title, so I think discussion of further techniques to emulate functions with macros are warranted. 
Closest I know of is: #define MACRO(X,Y) \ do { \ auto MACRO_tmp_1 = (X); \ auto MACRO_tmp_2 = (Y); \ using std::cout; \ using std::endl; \ cout << "1st arg is:" << (MACRO_tmp_1) << endl; \ cout << "2nd arg is:" << (MACRO_tmp_2) << endl; \ cout << "Sum is:" << (MACRO_tmp_1 + MACRO_tmp_2) << endl; \ } while(0) This does the following: * *Works correctly in each of the stated contexts. *Evaluates each of its arguments exactly once, which is a guaranteed feature of a function call (assuming in both cases no exceptions in any of those expressions). *Acts on any types, by use of "auto" from C++0x. This is not yet standard C++, but there's no other way to get the tmp variables necessitated by the single-evaluation rule. *Doesn't require the caller to have imported names from namespace std, which the original macro does, but a function would not. However, it still differs from a function in that: * *In some invalid uses it may give different compiler errors or warnings. *It goes wrong if X or Y contain uses of 'MACRO_tmp_1' or 'MACRO_tmp_2' from the surrounding scope. *Related to the namespace std thing: a function uses its own lexical context to look up names, whereas a macro uses the context of its call site. There's no way to write a macro that behaves like a function in this respect. *It can't be used as the return expression of a void function, which a void expression (such as the comma solution) can. This is even more of an issue when the desired return type is not void, especially when used as an lvalue. But the comma solution can't include using declarations, because they're statements, so pick one or use the ({ ... }) GNU extension. A: Here is an answer coming right from the libc6! Taking a look at /usr/include/x86_64-linux-gnu/bits/byteswap.h, I found the trick you were looking for. A few critics of previous solutions: * *Kip's solution does not permit evaluating to an expression, which is in the end often needed. 
*coppro's solution does not permit assigning a variable as the expressions are separate, but can evaluate to an expression. *Steve Jessop's solution uses the C++11 auto keyword, that's fine, but feel free to use the known/expected type instead. The trick is to use both the (expr,expr) construct and a {} scope: #define MACRO(X,Y) \ ( \ { \ register int __x = static_cast<int>(X), __y = static_cast<int>(Y); \ std::cout << "1st arg is:" << __x << std::endl; \ std::cout << "2nd arg is:" << __y << std::endl; \ std::cout << "Sum is:" << (__x + __y) << std::endl; \ __x + __y; \ } \ ) Note the use of the register keyword; it's only a hint to the compiler. The X and Y macro parameters are (already) surrounded in parentheses and cast to an expected type. This solution works properly with pre- and post-increment as parameters are evaluated only once. For the example's sake, even though not requested, I added the __x + __y; statement, which makes the whole block evaluate to that precise expression. It's safer to use void(); if you want to make sure the macro won't evaluate to an expression, thus being illegal where an rvalue is expected. 
However, the solution is not ISO C++ compliant, as g++ -pedantic will complain: warning: ISO C++ forbids braced-groups within expressions [-pedantic] In order to give some rest to g++, use (__extension__ OLD_WHOLE_MACRO_CONTENT_HERE) so that the new definition reads: #define MACRO(X,Y) \ (__extension__ ( \ { \ register int __x = static_cast<int>(X), __y = static_cast<int>(Y); \ std::cout << "1st arg is:" << __x << std::endl; \ std::cout << "2nd arg is:" << __y << std::endl; \ std::cout << "Sum is:" << (__x + __y) << std::endl; \ __x + __y; \ } \ )) In order to improve my solution even a bit more, let's use the __typeof__ keyword, as seen in MIN and MAX in C: #define MACRO(X,Y) \ (__extension__ ( \ { \ __typeof__(X) __x = (X); \ __typeof__(Y) __y = (Y); \ std::cout << "1st arg is:" << __x << std::endl; \ std::cout << "2nd arg is:" << __y << std::endl; \ std::cout << "Sum is:" << (__x + __y) << std::endl; \ __x + __y; \ } \ )) Now the compiler will determine the appropriate type. This too is a gcc extension. Note the removal of the register keyword, as it would cause the following warning when used with a class type: warning: address requested for ‘__x’, which is declared ‘register’ [-Wextra] A: C++11 brought us lambdas, which can be incredibly useful in this situation (the generic auto parameters used below require C++14): #define MACRO(X,Y) \ [&](auto x_, auto y_) { \ cout << "1st arg is:" << x_ << endl; \ cout << "2nd arg is:" << y_ << endl; \ cout << "Sum is:" << (x_ + y_) << endl; \ }((X), (Y)) You keep the generative power of macros, but have a comfy scope from which you can return whatever you want (including void). Additionally, the issue of evaluating macro parameters multiple times is avoided. A: As others have mentioned, you should avoid macros whenever possible. They are dangerous in the presence of side effects if the macro arguments are evaluated more than once. If you know the type of the arguments (or can use the C++0x auto feature), you could use temporaries to enforce single evaluation. 
Another problem: the order in which multiple evaluations happen may not be what you expect! Consider this code: #include <iostream> using namespace std; int foo( int & i ) { return i *= 10; } int bar( int & i ) { return i *= 100; } #define BADMACRO( X, Y ) do { \ cout << "X=" << (X) << ", Y=" << (Y) << ", X+Y=" << ((X)+(Y)) << endl; \ } while (0) #define MACRO( X, Y ) do { \ int x = X; int y = Y; \ cout << "X=" << x << ", Y=" << y << ", X+Y=" << ( x + y ) << endl; \ } while (0) int main() { int a = 1; int b = 1; BADMACRO( foo(a), bar(b) ); a = 1; b = 1; MACRO( foo(a), bar(b) ); return 0; } And its output as compiled and run on my machine: X=100, Y=10000, X+Y=110 X=10, Y=100, X+Y=110 A: If you're willing to adopt the practice of always using curly braces in your if statements, your macro would simply be missing the last semicolon: #define MACRO(X,Y) \ cout << "1st arg is:" << (X) << endl; \ cout << "2nd arg is:" << (Y) << endl; \ cout << "Sum is:" << ((X)+(Y)) << endl Example 1: (compiles) if (x > y) { MACRO(x, y); } do_something(); Example 2: (compiles) if (x > y) { MACRO(x, y); } else { MACRO(y - x, x - y); } Example 3: (doesn't compile) do_something(); MACRO(x, y) do_something();
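Putting the two key ideas from the answers above into one compilable sketch: the do { ... } while (0) wrapper (so the macro behaves as a single statement) and the auto temporaries (so each argument is evaluated exactly once). The macro body is simplified to arithmetic instead of cout so its effect can be checked; all names here are illustrative, not taken from the original answers.

```cpp
#include <cassert>

// The do/while(0) wrapper turns the body into one statement that
// requires a trailing semicolon, so if/else nesting works as in
// Examples 1-3. The 'auto' temporaries bind each argument once,
// so side effects in the arguments happen exactly once.
#define SUM_MACRO(X, Y, OUT)        \
    do {                            \
        auto tmp_x_ = (X);          \
        auto tmp_y_ = (Y);          \
        (OUT) = tmp_x_ + tmp_y_;    \
    } while (0)

static int eval_count = 0;
static int next_val() { return ++eval_count; }

static int demo(int x, int y) {
    int out = 0;
    if (x > y)
        SUM_MACRO(x, y, out);  // Example 1: another statement may follow
    else
        SUM_MACRO(y, x, out);  // Example 2: the dangling else still binds
    return out;
}
```

Omitting the semicolon after a SUM_MACRO call fails to compile, which matches Example 3.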
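The lambda answer above can also return a value rather than just printing. Here is a sketch assuming a C++14 compiler (for the generic auto parameters); the names are illustrative, not from the original answer:

```cpp
#include <cassert>

// An immediately-invoked lambda gives the macro a real scope and a
// return value. Each argument is passed to a lambda parameter, so it
// is evaluated exactly once at the call site.
#define ADD_MACRO(X, Y)     \
    [](auto x_, auto y_) {  \
        return x_ + y_;     \
    }((X), (Y))

static int call_count = 0;
static int bump() { return ++call_count; }
```

Because the result is an ordinary expression, ADD_MACRO(2, 3) can sit anywhere a function call could, including on the right-hand side of an assignment.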
{ "language": "en", "url": "https://stackoverflow.com/questions/163365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Error Updating a record I get a mysql error: #update (ActiveRecord::StatementInvalid) "Mysql::Error: #HY000Got error 139 from storage engine: When trying to update a text field on a record with a string of length 1429 characters, any ideas on how to track down the problem? Below is the stacktrace. from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract_adapter.rb:147:in `log' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/mysql_adapter.rb:299:in `execute' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:167:in `update_sql' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/mysql_adapter.rb:314:in `update_sql' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:49:in `update_without_query_dirty' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/query_cache.rb:19:in `update' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/base.rb:2481:in `update_without_lock' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/locking/optimistic.rb:70:in `update_without_dirty' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/dirty.rb:137:in `update_without_callbacks' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/callbacks.rb:234:in `update_without_timestamps' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/timestamp.rb:38:in `update' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/base.rb:2472:in `create_or_update_without_callbacks' from 
/var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/callbacks.rb:207:in `create_or_update' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/base.rb:2200:in `save_without_validation' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/validations.rb:901:in `save_without_dirty' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/dirty.rb:75:in `save_without_transactions' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/transactions.rb:106:in `save' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/connection_adapters/abstract/database_statements.rb:66:in `transaction' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/transactions.rb:79:in `transaction' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/transactions.rb:98:in `transaction' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/transactions.rb:106:in `save' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/transactions.rb:118:in `rollback_active_record_state!' from /var/www/releases/20081002155111/vendor/rails/activerecord/lib/active_record/transactions.rb:106:in `save' A: When you say a text field, is it of type VARCHAR, or TEXT? If it's the former then you cannot store a string larger than 255 chars (possibly less with UTF-8 overhead) in that column. If it's the latter, you'd better post your schema definition so people can assist you further. A: Maybe it's this bug: #1030 - Got error 139 from storage engine, but it would help if you'd post the query which should come directly after the error message. A: It seemed to be a very weird MySQL error, where the text was being truncated to 256 characters (for a text type) and throwing the above error if the string was 1000 characters or more. 
Modifying the table column to be text again fixed the issue, or it just fixed itself... I'm still not sure. Update: Changing the table type to MyISAM fixed this problem
{ "language": "en", "url": "https://stackoverflow.com/questions/163367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Type mismatch for Class Generics I have the following code that won't compile and although there is a way to make it compile I want to understand why it isn't compiling. Can someone enlighten me as to specifically why I get the error message I will post at the end please? public class Test { public static void main(String args[]) { Test t = new Test(); t.testT(null); } public <T extends Test> void testT(Class<T> type) { Class<T> testType = type == null ? Test.class : type; //Error here System.out.println(testType); } } Type mismatch: cannot convert from Class<capture#1-of ? extends Test> to Class<T> By casting Test.class to Class<T> this compiles with an Unchecked cast warning and runs perfectly. A: Suppose I extend Test: public class SubTest extends Test { public static void main(String args[]) { Test t = new Test(); t.testT(SubTest.class); } } Now, when I invoke testT, the type parameter <T> is SubTest, which means the variable testType is a Class<SubTest>. Test.class is of type Class<Test>, which is not assignable to a variable of type Class<SubTest>. Declaring the variable testType as a Class<? extends Test> is the right solution; casting to Class<T> is hiding a real problem. A: The reason is that Test.class is of the type Class<Test>. You cannot assign a reference of type Class<Test> to a variable of type Class<T> as they are not the same thing. This, however, works: Class<? extends Test> testType = type == null ? Test.class : type; The wildcard allows both Class<T> and Class<Test> references to be assigned to testType. There is a ton of information about Java generics behavior at Angelika Langer Java Generics FAQ. I'll provide an example based on some of the information there that uses the Number class hierarchy in Java's core API. 
Consider the following method: public <T extends Number> void testNumber(final Class<T> type) This is to allow the following statements to compile successfully: testNumber(Integer.class); testNumber(Number.class); But the following won't compile: testNumber(String.class); Now consider these statements: Class<Number> numberClass = Number.class; Class<Integer> integerClass = numberClass; The second line fails to compile and produces this error: Type mismatch: cannot convert from Class<Number> to Class<Integer>. But Integer extends Number, so why does it fail? Look at these next two statements to see why: Number anumber = new Long(0); Integer another = anumber; It is pretty easy to see why the 2nd line doesn't compile here. You can't assign an instance of Number to a variable of type Integer because there is no way to guarantee that the Number instance is of a compatible type. In this example the Number is actually a Long, which certainly can't be assigned to an Integer. In fact, the error is also a type mismatch: Type mismatch: cannot convert from Number to Integer. The rule is that an instance cannot be assigned to a variable whose type is a subclass of the instance's type, as there is no guarantee that it is compatible. Generics behave in a similar manner. In the generic method signature, T is just a placeholder to indicate what the method allows to the compiler. When the compiler encounters testNumber(Integer.class) it essentially replaces T with Integer. Wildcards add additional flexibility, as the following will compile: Class<? extends Number> wildcard = numberClass; Since Class<? extends Number> indicates any type that is a Number or a subclass of Number, this is perfectly legal and potentially useful in many circumstances. 
public class Test { public static void main(String args[]) { Test t = new Test(); t.testT(null); } public <T extends Test> void testT(Class<T> type) { Class<T> testClass = Test.class; System.out.println(testClass); } } Test.java:10: incompatible types found : java.lang.Class<Test> required: java.lang.Class<T> Class<T> testClass = Test.class;
{ "language": "en", "url": "https://stackoverflow.com/questions/163382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Logging conventions What conventions do you use for log categories in log4j or similar libraries? Usually you see class names as categories, but have you used other systems? What about log levels? What levels do you use and in which case? Update: as some of you replied, there is no 'right' answer. I'm just looking for what different conventions people use as a possible source of inspiration. A: I have 3 levels: errors, warnings and a verbose log telling whatever the program is doing at a given time. I use class+function as a context. A: I agree with Vaibhav's answer: you have to know why you are logging. * *for internal technical debug information, log4j or any other library is fine (provided its usage does not artificially augment the cyclomatic complexity of the functions) *for cross-cutting logging (across the whole code base), an Aspect-Oriented approach is better suited *for monitoring, you enter a whole other level of logging, namely KPIs, with the need to record this information through a publication bus (like TIBCO for instance) to some kind of database. So for internal logging only, we follow a pretty standard approach: * *severe for any error that may compromise the program *info for following the internal progression *fine for some sub-step details The granularity (for classical internal logging) is the main class, the one in charge of the main steps of the process. A: We have had extensive debates about this over the years and the only thing we all agree on is that there is no perfect answer! What we have settled on is using a top level category name to differentiate between broad categories: e.g. 'Operation' relates to anything the user might care about, 'Internal' relates to things only the developer will care about, 'Audit' is used for tracking interesting events. Beyond that we try to limit the number of categories, since we find no-one ever turns them on/off at the more detailed level. 
So instead of class names we try to group them into the functional area, eg. Query, Updates, etc. A: The logging depends on your requirement. If you are making a log which simply keeps tabs on whether there have been any problems (such as logging exceptions), then you may only require Class and Function name. However, if you have a functional requirement for creating and audit trail of sorts, then the logging has to be taken to a whole different level of detail. A: We have debug logs which are class + method. We also have specific logs for certain actions, e.g., connection received on a socket. These are what I call 'Fact Logs' or 'Audit Trail Logs', they log a single type of thing. Of course recently I just stick these into a database because the facts you are capturing can be quite a lot more complex than a string of text, they can include state at a particular time. I.e., you roll your own audit trail recording mechanism for each audit you require. When debugging we will set the package/class we are debugging to DEBUG in log4j, whilst leaving the rootlogger at ERROR, and we will have a debugging log file for that which hopefully leaves out all the gumpf logging from other areas of the application. But there isn't really a 'right way' to do these things. A combination of mechanisms seems good but it depends on what you want to log.
{ "language": "en", "url": "https://stackoverflow.com/questions/163385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is there a way to get types/names of an unknown db query without executing it? I have a web application where users enter arbitrary sql queries for later batch processing. We want to validate the syntax of the query without actually executing it. Some of the queries will take a long time, which is why we don't want to execute them. I'm using Oracle's dbms_sql.parse to do this. However, I now have a situation where I need to know the number and type of the result set columns. Is there a way to do this without actually executing the query? That is, to have Oracle parse the query and tell me what result datatypes/names will be returned when the query is actually executed? I'm using Oracle 10g and it's a Java 1.5/Servlet 2.4 application. Edit: The users who enter the queries are already users on the database. They authenticate to my app with their database credentials and the queries are executed using those credentials. Therefore they can't put in any query that they couldn't run by just connecting with sqlplus. A: You should be able to prepare a SQL query to validate the syntax and get result set metadata. Preparing a query should not execute it. import java.sql.*; . . . Connection conn; . . . PreparedStatement ps = conn.prepareStatement("SELECT * FROM foo"); ResultSetMetaData rsmd = ps.getMetaData(); int numberOfColumns = rsmd.getColumnCount(); Then you can get metadata about each column of the result set. A: If you want to do this strictly through PL/SQL then you could do the following (note the cursor must be opened before parsing and closed afterwards): DECLARE lv_stat varchar2(100) := 'select blah blah blah'; lv_cur INTEGER; lv_col_cnt INTEGER; lv_desc DBMS_SQL.desc_tab; BEGIN lv_cur := DBMS_SQL.OPEN_CURSOR; DBMS_SQL.parse(lv_cur,lv_stat,DBMS_SQL.NATIVE); DBMS_SQL.describe_columns(lv_cur,lv_col_cnt,lv_desc); FOR ndx in lv_desc.FIRST .. lv_desc.LAST LOOP DBMS_OUTPUT.PUT_LINE(lv_desc(ndx).col_name ||' '||lv_desc(ndx).col_type); END LOOP; DBMS_SQL.CLOSE_CURSOR(lv_cur); END; the DBMS_SQL.desc_tab contains pretty much all that you would need to know about the columns.
{ "language": "en", "url": "https://stackoverflow.com/questions/163389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Microsoft Access - SQL - Internal Foreign Key Does MS Access 2007 support internal foreign keys within the same table? A: Yes. Create the table with the hierarchy. id - autonumber - primary key parent_id - number value Go to the relationships screen. Add the hierarchy table twice. Connect the id and the parent_id fields. Enforce referential integrity. A: Yes it does. Under database tools and relationships you need to show 2 copies of the self-referencing table. It will name the second copy Table_1. Then you setup a relationship between the primary key in "table" and the foreign key column(s) in "Table_1". A: Yes it does and unlike many more capable SQLs (e.g. SQL Server) you can also use CASCADE referential actions on FKs within the same table, which is nice.
{ "language": "en", "url": "https://stackoverflow.com/questions/163392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Database Design Issues with relationships I'm working on an upgrade for an existing database that was designed without any of the code to implement the design being considered. Now I've hit a brick wall in terms of implementing the database design in code. I'm not certain whether it's a problem with the design of the database or if I'm simply not seeing the correct way to do what needs to be done. The basic logic stipulates the following: * *Users access the online trainings by way of Seats. Users can have multiple Seats. *Seats are purchased by companies and have a many-to-many relationship with Products. *A Product has a many-to-many relationship with Modules. *A Module has a many-to-many relationship with Lessons. *Lessons are what the end users access for their training. *To muddy the waters, for one reason or another some Users have multiple Seats that contain the same Products. *Certification takes place on a per Product basis, not on a per Seat basis. *Users have a many-to-many relationship with lessons that stores their current status or score for the lesson. *Users certify for a Product when they complete all of the Lessons in all of the Modules for the Product. *It is also significant to know when all Lessons for a particular Module are completed by a User. *Some Seats will be for ReCertification, meaning that Users that previously certified for a Product can sign up and take a recertification exam. *Due to Rule 11, Users can and will have multiple Certification records. *Edit: When a User completes a Lesson (scores better than 80%) then the User has (according to the current business logic) completed the Lesson for all Products and all Seats that contain the Lesson. The trouble that I keep running into with the current design and the business logic as I've more or less described it is that I can't find a way to effectively tie whether a user has certified for a particular product and seat vs. when they have not. 
I keep hitting snags trying to establish which Products under which Seats have been certified for the User and which haven't. Part of the problem is that if they are currently registered for multiple of the same Product under different Seats, then I have to count the Product only once. Below is a copy of the portion of the schema that's involved. Any suggestions on how to improve the design or draw the association in code would be appreciated. In case it matters, this site is built on the LAMPP stack. You can view the relevant portion of the database schema here: http://lpsoftware.com/problem_db_structure.png A: What you're looking for is relational division. It is not implemented directly in SQL, but it can be done. Search Google for other examples. A: After a quick look at the schema I think one of the things you can do is create a 'to_be_certified' table. Populate it with user_id, product_id and seat_id when a product is assigned to a seat (when product_seat_rtab is populated). On adding a record to the certification_rtab table, delete the corresponding record in the 'to_be_certified' table. This will give you easy access to all the products which are certified for a user and the ones that are not. To get rid of duplicate product_ids, you can group by product_id. A: You need to make changes to the lessonstatus_rtab table: CREATE TABLE lessonstatus_rtab ( user_id INT NOT NULL, seat_id INT NOT NULL, lesson_id INT NOT NULL REFERENCES lesson_rtab, accessdate TIMESTAMP, score NUMERIC(5,2) NOT NULL DEFAULT 0, PRIMARY KEY (user_id, seat_id, lesson_id), FOREIGN KEY (user_id, seat_id) REFERENCES user_seat_rtab (user_id, seat_id) ); Then you can query, for each product that a user has a seat for, whether he is certified. This presumes that the number of lessons he has scored, say, 50% or higher is the same as the number of lessons in all modules for the product. 
SELECT p.name, us.user_id, us.seat_id, COUNT(l.id) = COUNT(lu.lesson_id) AS is_certified FROM user_seat_rtab AS us JOIN seat_rtab AS s ON (s.id = us.seat_id) JOIN product_seat_rtab AS ps ON (ps.seat_id = s.id) JOIN product_rtab AS p ON (p.id = ps.product_id) JOIN product_module_rtab AS pm ON (pm.product_id = p.id) JOIN module_rtab AS m ON (m.id = pm.module_id) JOIN module_lesson_rtab AS ml ON (ml.module_id = m.id) JOIN lesson_rtab AS l ON (l.id = ml.lesson_id) LEFT OUTER JOIN lessonstatus_rtab AS lu ON (lu.lesson_id = l.id AND lu.user_id = us.user_id AND lu.seat_id = us.seat_id AND lu.score > 0.50) GROUP BY p.id, us.user_id, us.seat_id; A: UPDATE: I have considered this issue further, including whether it would work better to simply remove the user_seat_rtab table and then use the equivalent certification_rtab table (probably renamed) to hold all of the information regarding the status of a user's seat. This way there is a direct relationship established between a User, their Seat, each Product within the Seat, and whether the User has certified for the particular Product and Seat. So I would apply the following changes to the schema posted with the question: DROP TABLE user_seat_rtab; RENAME TABLE certification_rtab TO something_different; An alternative to further normalize this new structure would be to do something like this (as a single ALTER TABLE statement): ALTER TABLE user_seat_rtab DROP PRIMARY KEY, ADD COLUMN product_id int(10) unsigned NOT NULL, ADD CONSTRAINT pk_user_seat_product PRIMARY KEY (user_id, seat_id, product_id), ADD CONSTRAINT fk_product_user_seat FOREIGN KEY (product_id) REFERENCES product_rtab(id) ON DELETE RESTRICT; I'm not really certain whether this would solve the problem or if it will just change the nature of the problem slightly while introducing new ones. So, does anyone have any other criticisms or suggestions?
{ "language": "en", "url": "https://stackoverflow.com/questions/163400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Enum inside a JSP Is there a way to use Enum values inside a JSP without using scriptlets? e.g. package com.example; public enum Direction { ASC, DESC } so in the JSP I want to do something like this <c:if test="${foo.direction ==<% com.example.Direction.ASC %>}">... A: It can be done like this I guess <c:set var="ASC" value="<%=Direction.ASC%>"/> <c:if test="${foo.direction == ASC}"></c:if> The advantage is that when we refactor, it will be reflected here too. A: You could implement the web-friendly text for a direction within the enum as a field: <%@ page import="com.example.Direction" %> ... <p>Direction is <%=foo.direction.getFriendlyName()%></p> <% if (foo.direction == Direction.ASC) { %> <p>That means you're going to heaven!</p> <% } %> but that mixes the view and the model, although for simple uses it can be view-independent ("Ascending", "Descending", etc). Unless you don't like putting straight Java into your JSP pages, even when used for basic things like comparisons. A: You can simply check against the enum value as a string: <c:if test="${foo.direction == 'ASC'}">...
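As a language-neutral illustration of why the last answer works (the EL comparison effectively compares against the enum constant's name), here is the same trade-off sketched with Python's enum module; the names mirror the question and are otherwise hypothetical:

```python
from enum import Enum

class Direction(Enum):
    ASC = "ASC"
    DESC = "DESC"

direction = Direction.ASC

# Comparing against the symbolic constant survives refactoring:
by_constant = direction is Direction.ASC
# Comparing by name is the analogue of ${foo.direction == 'ASC'} --
# concise, but it silently breaks if the constant is ever renamed.
by_name = direction.name == "ASC"
print(by_constant, by_name)  # True True
```

That is the same tension the first answer points out: the string form is shorter, while the `<c:set>` form keeps refactoring safe.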
{ "language": "en", "url": "https://stackoverflow.com/questions/163407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: ASPX Page Compilation Fails We’re developing a web-based application that allows administrators to upload plug-ins. All plug-ins are stored in a special folder outside the app root (say, C:\Plugins) and are dynamically loaded via Assembly.LoadFrom(). This works just fine for the most part: WebControls in the plug-ins are instantiated and loaded, custom classes function as expected, etc. We’re using a custom VirtualPathProvider to get resources out of these plug-ins. So, to get an embedded ASPX file, you’d simply request something like “/MySite/embeddedResource/?Assembly=MyPlugin&Resource=MyPage.aspx”. And that works fine, too: the embedded ASPX file compiles and is served up like a normal page. The problem, however, comes in when an embedded .aspx file (inside of a dynamically loaded plugin) references a class inside that same plug-in assembly. We get compilation errors like, “cannot find type or assembly MyPlugin.” This is odd because, clearly, it’s pulling the .aspx file out of MyPlugin; so how can it not find it? So, I’m hoping you can help me out with this. The plugin would look like this: MyPlugin.dll: * *InternalHelperClass.cs *MyPage.aspx (resource with no .cs file) When MyPage.aspx contains something like, “<%= InternalHelperClass.WriteHelloWorld() %>”, the compilation fails. How can we get this to work? UPDATE: We have tried using fully qualified names. No difference. It is impossible to step through - it is a compilation error when you go to the aspx page. Namespaces would not be an issue in this case (since it was from an external plugin dll). UPDATE2: Joel, I think you are onto something. Unfortunately, editing the web.config to include these assemblies isn't part of the design. Basically, we want the plugins to be completely dynamic - drop them in a folder, restart the app, and be ready to go. A: Assembly.LoadFrom is dynamic (late bound), which means the type is not available at compilation time; therefore references to its contained classes are invalid.
You need to specifically reference the assembly so it's included as part of the compilation of the *.aspx class. You may find some of the source code here helpful, and I recommend giving the Managed Extensibility Framework a go because it may have already solved this issue. Update: I have found what I think is the answer to your problem. While this won't work in an ASP.NET 1.1 project, it would for 2.0+. They have restructured the building pipeline to use a BuildProvider which can be specified in the configuration file (web.config). Though you have to write your own build provider, you can make one that automatically references all the assemblies in the Plugins folder prior to compilation. Here's information on the configuration and here's what you need to subclass to do it. Here's an out-of-date copy of the source code for Mono's PageBuildProvider, you'll need to check the latest implementation for ASP.NET from MS's shared source, copy it, and extend it with your custom assembly reference because the class is unfortunately sealed (but it doesn't look terribly complex). A: Have you tried fully-qualifying the namespace for the helper class? Is it public? Is it in the same assembly? Perhaps it's in another assembly that has to be loaded as well. Try stepping into the code and examining the type of InternalHelperClass. The newer "website" methods of compilation often add namespaces that you were not expecting. e.g. the class for a web page has the namespace ASP.MyWebPage. And sometimes namespaces are added based on which folder they reside in. A: I've never tried to do this, but I suspect that your ASPX page, though it is loaded from your plugin assembly, is being compiled in an ASP.NET environment that has no reference to your plugin assembly - which would explain why the fully qualified name doesn't work. Have you tried adding a reference to MyPlugin.dll to the compilation/assemblies tag in your web.config file?
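The failure mode diagnosed in these answers is not unique to ASP.NET. As a loose analogy only (Python rather than .NET, with purely illustrative names), dynamically loading a "plugin" module does not by itself make it visible to code compiled later; it must be registered where the resolver can find it, which is the role the assembly reference plays here:

```python
import sys
import types

# Build a "plugin" module outside the normal import path, the way
# Assembly.LoadFrom pulls in an assembly outside the app's references.
plugin = types.ModuleType("myplugin")
exec("def write_hello_world():\n    return 'Hello World'", plugin.__dict__)

# Code compiled later cannot resolve the name yet -- the analogue of
# the "cannot find type or assembly MyPlugin" compilation error.
try:
    exec("import myplugin")
    resolved_early = True
except ImportError:
    resolved_early = False

# Registering the module (the 'add it to the references' step) fixes it.
sys.modules["myplugin"] = plugin
import myplugin
print(resolved_early, myplugin.write_hello_world())  # False Hello World
```

The loaded code exists in memory either way; what differs is whether the compiler/importer consulted a registry that knows about it, which is exactly what the custom BuildProvider suggestion automates.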
{ "language": "en", "url": "https://stackoverflow.com/questions/163412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Printing to a pdf printer programmatically I am trying to print an existing file to PDF programmatically in Visual Basic 2008. Our current relevant assets are: Visual Studio 2008 Professional Adobe Acrobat Professional 8.0 I thought about getting an SDK like iTextSharp, but it seems like overkill for what I am trying to do especially since we have the full version of Adobe. Is there a relatively simple bit of code to print to a PDF printer (and of course assign it to print to a specific location) or will it require the use of another library to print to PDF? I want to print a previously created document to a pdf file. In this case it's a .snp file that I want to make into a .pdf file, but I think the logic would be the same for any file type. I just tried the above shell execute, and it will not perform the way I want it to, as it prompts me as to where I want to print and still does not print where I want it to (multiple locations), which is crucial as we create a lot of the same named PDF files (with different data within the PDF and placed in corresponding client folders) The current process is: * *Go to \\report server\client1 *create pdf files of all the snp documents in the folder by hand *copy the pdf to \\website reports\client1 *then repeat for all 100+ clients This takes roughly two hours to complete and verify. I know this can be done better but I have only been here three months and there were other pressing concerns that were a lot more immediate. I also was not expecting something that looks this trivial to be that hard to code. A: This is how I do it in VBScript. Might not be very useful for you but might get you started. You need to have a PDF maker (Adobe Acrobat) as a printer named "Adobe PDF".
'PDF_WILDCARD = "*.pdf" 'PrnName = "Adobe PDF" Sub PrintToPDF(ReportName As String, TempPath As String, _ OutputName As String, OutputDir As String, _ Optional RPTOrientation As Integer = 1) Dim rpt As Report Dim NewFileName As String, TempFileName As String '--- Printer Set Up --- DoCmd.OpenReport ReportName, View:=acViewPreview, WindowMode:=acHidden Set rpt = Reports(ReportName) Set rpt.Printer = Application.Printers(PrnName) 'Set up orientation If RPTOrientation = 1 Then rpt.Printer.Orientation = acPRORPortrait Else rpt.Printer.Orientation = acPRORLandscape End If '--- Print --- 'Print (open) and close the actual report without saving changes DoCmd.OpenReport ReportName, View:=acViewNormal, WindowMode:=acHidden ' Wait until file is fully created Call waitForFile(TempPath, ReportName & PDF_EXT) 'DoCmd.Close acReport, ReportName, acSaveNo DoCmd.Close acReport, ReportName TempFileName = TempPath & ReportName & PDF_EXT 'default pdf file name NewFileName = OutputDir & OutputName & PDF_EXT 'new file name 'Trap errors caused by COM interface On Error GoTo Err_File FileCopy TempFileName, NewFileName 'Delete all PDFs in the TempPath '(which is why you should assign it to a pdf directory) On Error GoTo Err_File Kill TempPath & PDF_WILDCARD Exit_pdfTest: Set rpt = Nothing Exit Sub Err_File: ' Error-handling routine while copying file Select Case Err.Number ' Evaluate error number. Case 53, 70 ' "Permission denied" and "File Not Found" msgs ' Wait 3 seconds. 
Debug.Print "Error " & Err.Number & ": " & Err.Description & vbCr & "Please wait a few seconds and click OK", vbInformation, "Copy File Command" Call sleep(2, False) Resume Case Else MsgBox Err.Number & ": " & Err.Description Resume Exit_pdfTest End Select Resume End Sub Sub waitForFile(ByVal pathName As String, ByVal tempfile As String) With Application.FileSearch .NewSearch .LookIn = pathName .SearchSubFolders = True .filename = tempfile .MatchTextExactly = True '.FileType = msoFileTypeAllFiles End With Do While True With Application.FileSearch If .Execute() > 0 Then Exit Do End If End With Loop End Sub Public Sub sleep(seconds As Single, EventEnable As Boolean) On Error GoTo errSleep Dim oldTimer As Single oldTimer = Timer Do While (Timer - oldTimer) < seconds If EventEnable Then DoEvents Loop errSleep: Err.Clear End Sub A: The big takeaway point here is that PDF IS HARD. If there is anything you can do to avoid creating or editing PDF documents directly, I strongly advise that you do so. It sounds like what you actually want is a batch SNP to PDF converter. You can probably do this with an off-the-shelf product, without even opening Visual Studio at all. Somebody mentioned Adobe Distiller Server -- check your docs for Acrobat, I know it comes with basic Distiller, and you may be able to set up Distiller to run in a similar mode, where it watches Directory A and spits out PDF versions of any files that show up in Directory B. An alternative: since you're working with Access snapshots, you might be better off writing a VBA script that iterates through all the SNPs in a directory and prints them to the installed PDF printer. ETA: if you need to specify the output of the PDF printer, that might be harder. I'd suggest having the PDF distiller configured to output to a temp directory, so you can print one, move the result, then print another, and so on. A: What you want to do is find a good free PDF Printer driver. 
These are installed as printers, but instead of printing to a physical device, render the printer commands as a PDF. Then, you can either ShellExecute as stated above, or use the built-in .NET PrintDocument, referring to the PDF "printer" by name. I found a couple free ones, including products from Primo and BullZip (free use limited to 10 users) pretty quickly. It looks like SNP files are Microsoft Access Snapshots. You will have to look for a command line interface to either Access or the Snapshot Viewer that will let you specify the printer destination. I also saw that there is an ActiveX control included in the SnapshotViewer download. You could try using that in your program to load the snp file, and then tell it where to print it to, if it supports that functionality. A: PDFforge offers PDFCreator. It will create PDFs from any program that is able to print, even existing programs. Note that it's based on GhostScript, so maybe not a good fit to your Acrobat license. Have you looked into Adobe Distiller Server? You can generate PostScript files using any printer driver and have it translated into PDF. (Actually, PDFCreator does a similar thing.) A: I had the same challenge. The solution I chose was to buy a component called PDFTron. It has an API to send pdf documents to a printer from an unattended service. I posted some information in my blog about that. Take a look! How to print a PDF file programmatically??? A: Try using ShellExecute with the Print Verb. Here is a blog I found with Google. http://www.vbforums.com/showthread.php?t=508684 A: If you are trying to hand-generate the PDF (with an SDK or a PDF printer driver), it's not very easy. The PDF format reference is available from Adobe. The problem is that the file is a mix of ASCII and tables that have binary offsets within the file to reference objects. It is an interesting format, and very extensible, but it is difficult to write a simple file. It's doable if you need to.
I looked at the examples in the Adobe PDF reference, hand typed them in and worked them over till I could get them to work as I needed. If you will be doing this a lot it might be worth it, otherwise look at an SDK. A: I encountered a similar problem in a C# ASP.NET app. My solution was to fire a LaTeX compiler at the command line with some generated code. It's not exactly a simple solution but it generates some really beautiful .pdfs. A: Imports System.Drawing.Printing Imports System.Reflection Imports System.Runtime.InteropServices Public Class Form1 Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load Dim pkInstalledPrinters As String ' Find all printers installed For Each pkInstalledPrinters In _ PrinterSettings.InstalledPrinters printList.Items.Add(pkInstalledPrinters) Next pkInstalledPrinters ' Set the combo to the first printer in the list If printList.Items.Count > 0 Then printList.SelectedItem = 0 End If End Sub Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Try Dim pathToExecutable As String = "AcroRd32.exe" Dim sReport = " " 'pdf file that you want to print 'Dim SPrinter = "HP9F77AW (HP Officejet 7610 series)" 'Name Of printer Dim SPrinter As String SPrinter = printList.SelectedItem 'MessageBox.Show(SPrinter) Dim starter As New ProcessStartInfo(pathToExecutable, "/t """ + sReport + """ """ + SPrinter + """") Dim Process As New Process() Process.StartInfo = starter Process.Start() Process.WaitForExit(10000) Process.Kill() Process.Close() Catch ex As Exception MessageBox.Show(ex.Message) 'just in case if something goes wrong then we can suppress the programm and investigate End Try End Sub End Class A: Similar to other answers, but much simpler. I finally got it down to 4 lines of code, no external libraries (although you must have Adobe Acrobat installed and configured as Default for PDF). 
Dim psi As New ProcessStartInfo psi.FileName = "C:\Users\User\file_to_print.pdf" psi.Verb = "print" Process.Start(psi) This will open the file, print it with default settings and then close. Adapted from this C# answer
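Since the manual process described in the question is just a loop over client folders, the surrounding automation can be scripted even before the PDF step itself is solved. A hedged sketch in Python (the convert_to_pdf stub stands in for whatever actually drives the PDF printer; all names here are made up for illustration):

```python
import shutil
from pathlib import Path

def convert_to_pdf(snp: Path) -> Path:
    """Stand-in for the real SNP -> PDF step (e.g. shelling out to a
    PDF printer); here it just writes a placeholder file beside the .snp."""
    pdf = snp.with_suffix(".pdf")
    pdf.write_bytes(b"%PDF-1.4 placeholder")
    return pdf

def publish_reports(report_root: Path, web_root: Path) -> int:
    """For every client folder under report_root, convert each .snp and
    copy the resulting .pdf into the matching folder under web_root."""
    published = 0
    for client_dir in sorted(p for p in report_root.iterdir() if p.is_dir()):
        dest = web_root / client_dir.name
        dest.mkdir(parents=True, exist_ok=True)
        for snp in sorted(client_dir.glob("*.snp")):
            pdf = convert_to_pdf(snp)
            shutil.copy2(pdf, dest / pdf.name)
            published += 1
    return published
```

With a real converter plugged into the stub, one unattended run would replace the per-client print-and-copy-by-hand step for all 100+ clients.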
{ "language": "en", "url": "https://stackoverflow.com/questions/163420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Garbage with pointers in a class, C++ I am using Borland Builder C++. I have a memory leak and I know it must be because of this class I created, but I am not sure how to fix it. Please look at my code-- any ideas would be greatly appreciated! Here's the .h file: #ifndef HeaderH #define HeaderH #include <vcl.h> #include <string> using std::string; class Header { public: //File Header char FileTitle[31]; char OriginatorName[16]; //Image Header char ImageDateTime[15]; char ImageCordsRep[2]; char ImageGeoLocation[61]; Header(double latitude, double longitude, double altitude, double heading); ~Header(); void SetHeader(char * date, char * time, double location[4][2]); private: void ConvertToDegMinSec (double angle, AnsiString & s, bool IsLongitude); AnsiString ImageDate; AnsiString ImageTime; AnsiString Latitude_d; AnsiString Longitude_d; double Latitude; double Longitude; double Heading; double Altitude; }; And here is some of the .cpp file: void Header::SetHeader(char * date, char * time, double location[4][2]){ //File Header strcpy(FileTitle,"Cannon Powershot A640"); strcpy(OriginatorName,"Camera Operator"); //Image Header //Image Date and Time ImageDate = AnsiString(date); ImageTime = AnsiString(time); AnsiString secstr = AnsiString(ImageTime.SubString(7,2)); AnsiString rounder = AnsiString(ImageDate.SubString(10,1)); int seconds = secstr.ToInt(); //Round off seconds - will this be necessary with format hh:mm:ss in text file?
if (rounder.ToInt() > 4) { seconds++; } AnsiString dateTime = ImageDate.SubString(7,4)+ ImageDate.SubString(4,2) + ImageDate.SubString(1,2) + ImageTime.SubString(1,2) + ImageTime.SubString(4,2) + AnsiString(seconds); strcpy(ImageDateTime,dateTime.c_str()); //Image Coordinates Representation strcpy(ImageCordsRep,"G"); //Image Geographic Location AnsiString lat; AnsiString lon; AnsiString locationlat_d; AnsiString locationlon_d; AnsiString corner; for (int i = 0; i < 4; i++){ ConvertToDegMinSec(location[i][0],lat,false); ConvertToDegMinSec(location[i][1],lon,true); if(location[i][0] < 0){ locationlat_d = 'S'; ConvertToDegMinSec(-location[i][0],lat,false); }else if(location[i][0] > 0){ locationlat_d = 'N'; }else locationlat_d = ' '; if(location[i][1] < 0){ locationlon_d = 'W'; ConvertToDegMinSec(-location[i][1],lon,true); }else if(location[i][1] > 0){ locationlon_d = 'E'; }else locationlon_d = ' '; corner += lat + locationlat_d + lon + locationlon_d; } strcpy(ImageGeoLocation,corner.c_str()); } Now when I use the class in main, basically I just create a pointer: Header * header = new Header; header->SetHeader(t[5],t[6],corners->location); char * imageLocation = header->ImageGeoLocation; //do something with imageLocation delete header; Where corners->location is a string from another class, and t[5] and t[6] are both strings. The problem is that imageLocation doesn't contain what is expected, and often just garbage. I have read a lot about memory leaks and pointers, but I am still very new to programming and some of it is quite confusing. Any suggestions would be fabulous!! A: I'm afraid there are a number of issues here. For starters char ImageCordsRep[1]; doesn't work ... a string is always null terminated, so when you do strcpy(ImageCordsRep,"G"); you are overflowing the buffer. It would also be good practice to terminate all those string buffers with a null in your constructor, so they are always valid strings. 
Even better would be to use a string class instead of the char arrays, or at least use 'strncpy' to prevent buffer overruns if the incoming strings are larger than you expect. A: Your memory leak is in main; you are making a pointer with new, but not subsequently calling delete. If you wish to just create an object of type Header that will be destroyed when main exits, just declare it as "Header header;" If you wish to create a persistent pointer, you should use new as you do, but be sure to delete header; at some point prior to the program ending. A: Is your problem that ImageGeoLocation is trash or that you have a memory leak? If your code is written as such: Header * header = new Header; header->SetHeader(t[5],t[6],corners->location); char * imageLocation = header->ImageGeoLocation; delete header; printf("ImageLocation is %s", imageLocation); Then your problem isn't a memory leak, but that you are deleting the memory out from under imageLocation. ImageLocation is just a pointer and doesn't actually contain data, it just points to it. So if you delete the data, then the pointer is pointing to trash. If that isn't the case, then debug your SetHeader method. Is ImageGeoLocation getting populated with data as you expect? If it is, then imageLocation must point to valid data unless there is some omitted code that is damaging ImageGeoLocation later on. A memory watch window looking at ImageGeoLocation can help since you will be able to step through your code and see which line actually changes ImageGeoLocation where you don't expect. A: I changed strcpy() to strncpy() and it solved my problem. A: Something else... Be careful not to use imageLocation after the header object is deleted. It's often better to copy the string from the object instead of getting a pointer to it. It could be OK in this case depending on the rest of the code.
Header * header = new Header; header->SetHeader(t[5],t[6],corners->location); char * imageLocation = header->ImageGeoLocation; A: Thank you, Torlack, and others for replying so quickly. Basically, imageLocation gets populated fine, unless I have other code before it. For example, I have this string list, which basically contains file names. AnsiString fileType ("*.jpg"); AnsiString path = f + fileType; WIN32_FIND_DATA fd; HANDLE hFindJpg = FindFirstFile(path.c_str(),&fd); //Find all images in folder TStringList * imageNames = new TStringList; if (hFindJpg != INVALID_HANDLE_VALUE) { do{ if(!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)){ image = AnsiString(fd.cFileName); imageNames->Add(image); jpgFileCount++; } }while(FindNextFile(hFindJpg,&fd)); }else ShowMessage ("Cannot find images."); FindClose(hFindJpg); Now when I try to refer to an image from the list directly before, I get the name of the image put inside imageLocation. //char * imageLocation = header->ImageGeoLocation; //as expected Image1->Picture->LoadFromFile(imageNames->Strings[j]); char * imageLocation = header->ImageGeoLocation; //puts name of jpg file in imageLocation
{ "language": "en", "url": "https://stackoverflow.com/questions/163432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Are nulls in a relational database okay? There's a school of thought that null values should not be allowed in a relational database. That is, a table's attribute (column) should not allow null values. Coming from a software development background, I really don't understand this. It seems that if null is valid within the context of the attribute, then it should be allowed. This is very common in Java where object references are often null. Not having extensive database experience, I wonder if I'm missing something here. A: There is another alternative to using "N/A" or "N/K" or the empty string - a separate table. E.g. if we may or may not know a customer's phone number: CREATE TABLE Customer (ID int PRIMARY KEY, Name varchar(100) NOT NULL, Address varchar(200) NOT NULL); CREATE TABLE CustomerPhone (ID int PRIMARY KEY, Phone varchar(20) NOT NULL, CONSTRAINT FK_CustomerPhone_Customer FOREIGN KEY (ID) REFERENCES Customer (ID)); If we don't know the phone number we just don't add a row to the second table. A: Don't underestimate the complexity you create by making a field NULLable. For example, the following where clause looks like it will match all rows (bits can only be 1 or 0, right?) where bitfield in (1,0) But if the bitfield is NULLable, it will miss some. Or take the following query: select * from mytable where id not in (select id from excludetable) Now if the excludetable contains a null and a 1, this translates to: select * from mytable where id <> NULL and id <> 1 But "id <> NULL" is never true for any value of id, so this will never return any rows. This catches even experienced database developers by surprise. Given that most people can be caught off-guard by NULL, I try to avoid it when I can. A: Nulls are negatively viewed from the perspective of database normalization. The idea being that if a value can be nothing, then you really should split that out into another sparse table such that you don't require rows for items which have no value.
It's an effort to make sure all data is valid and valued. In some cases having a null field is useful, though, especially when you want to avoid yet another join for performance reasons (although this shouldn't be an issue if the database engine is set up properly, except in extraordinarily high-performance scenarios.) -Adam A: I would say that Nulls should definitely be used. There is no other right way to represent lack of data. For example, it would be wrong to use an empty string to represent a missing address line, or it would be wrong to use 0 to represent a missing age data item. Because both an empty string and 0 are data. Null is the best way to represent such a scenario. A: This is a huge can of worms, because NULL can mean so many things: * *No date of death because the person is still alive. *No cell phone number because we don't know what it is or even if it exists. *No social security number because that person is known to not have one. Some of these can be avoided by normalisation, some of them can be avoided by the presence of a value in that column ("N/A"), some of them can be mitigated by having a separate column to explain the presence of the NULL ("N/K", "N/A" etc). It's also a can of worms because the SQL syntax needed to find them is different to that of non-null values, it's difficult to join on them, and they are generally not included in index entries. Because of the former reason you're going to find cases where a null is unavoidable. Because of the latter reason you should still do your best to minimise the number of them. Regardless, always use NOT NULL constraints to guard against nulls where a value is required. A: The main issue with nulls is that they have special semantics that can produce unexpected results with comparisons, aggregates and joins. * *Nothing is ever equal to null, and nothing is ever not equal to, greater than or less than null, so you have to set nulls to a placeholder value if you want to do any bulk comparison.
*This is also a problem on composite keys that might be used in a join. Where the natural key includes a nullable column you might want to consider using a synthetic key. *Nulls can drop out of counts, which may not be the semantics you desire. *Nulls in a column that you can join against will eliminate rows from an inner join. In general this is probably desired behaviour, but it can lay elephant traps for people doing reporting. There are quite a few other subtleties to nulls. Joe Celko's SQL for Smarties has a whole chapter on the subject and is a good book and worth reading anyway. Some examples of places where nulls are a good solution are: * *Optional relationships where a joined entity may or may not be present. Null is the only way to represent an optional relationship on a foreign key column. *Columns that you may wish to use to null to drop out of counts. *Optional numeric (e.g. currency) values that may or may not be present. There is no effective placeholder value for 'not recorded' in number systems (particularly where zero is a legal value), so null is really the only good way to do this. Some examples of places where you might want to avoid using nulls because they are likely to cause subtle bugs. * *'Not Recorded' values on code fields with a FK against a reference table. Use a placeholder value, so you (or some random business analyst down the track) don't inadvertently drop rows out of result sets when doing a query against the database. *Description fields where nothing has been entered - null string ('') works fine for this. This saves having to treat the nulls as a special case. *Optional columns on a reporting or data warehouse system. For this situation, make a placeholder row for 'Not Recorded' in the dimension and join against that. This simplifies querying and plays nicely with ad-hoc reporting tools. Again, Celko's book is a good treatment of the subject. 
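The comparison pitfalls listed above, including the NOT IN trap from an earlier answer, are easy to reproduce. A small sketch using Python's built-in sqlite3 module (SQLite follows the same three-valued logic; the table names echo the earlier answer and are otherwise arbitrary):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.executescript("""
CREATE TABLE mytable (id INT);
CREATE TABLE excludetable (id INT);
INSERT INTO mytable VALUES (1), (2), (3);
INSERT INTO excludetable VALUES (1), (NULL);
""")

# Nothing is ever equal (or unequal) to NULL, so once the subquery
# yields a NULL, "id NOT IN (...)" can never evaluate to true.
trapped = cur.execute(
    "SELECT id FROM mytable WHERE id NOT IN (SELECT id FROM excludetable)"
).fetchall()
print(trapped)  # [] -- not the (2,) and (3,) a reader might expect

# Filtering the NULLs out of the subquery restores the intuitive result.
fixed = cur.execute(
    "SELECT id FROM mytable WHERE id NOT IN "
    "(SELECT id FROM excludetable WHERE id IS NOT NULL) ORDER BY id"
).fetchall()
print(fixed)    # [(2,), (3,)]
```

The empty first result is the "never return any rows" behaviour described earlier, and the IS NOT NULL filter is the standard defensive fix.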
A: Best thing to know about Normal Forms is that they are guides and guides should not be doggedly adhered to. When the world of academia clashes with the actual world you seldom find many surviving warriors of academia. The answer to this question is that it's OK to use nulls. Just evaluate your situation and decide if you want them to show up in the table or collapse the data into another related table if you feel your ratio of null values to actual values is too high. As a friend is fond of saying, "Don't let the perfect be the enemy of the good". Think Voltaire also said that. 8) A: One argument against nulls is that they don't have a well-defined interpretation. If a field is null, that could be interpreted as any of the following: * *The value is "Nothing" or "Empty set" *There is no value that makes sense for that field. *The value is unknown. *The value hasn't been entered yet. *The value is an empty string (for databases that don't distinguish between nulls and empty strings). *Some application-specific meaning (e.g., "If the value is null, then use a default value.") *An error has occurred, causing the field to have a null value when it really shouldn't.
Or you can have staging database tables for data acquisition that continues until all the data is obtained before you populate the actual database tables. This is a lot of extra work. A: To a database, null translates to "I don't have a value for this". Which means that (interestingly), a boolean column that allows nulls is perfectly acceptable, and appears in many database schemas. In contrast, if you have a boolean in your code that can have a value of 'true', 'false' or 'undefined', you're likely to see your code wind up on thedailywtf sooner or later :) So yes, if you need to allow for the possibility of a field not having any value at all, then allowing nulls on the column is perfectly acceptable. It's significantly better than the potential alternatives (empty strings, zero, etc) A: Nulls can be hard to work with, but they make sense in some cases. Suppose you have an invoice table with a column "PaidDate" which has a date value. What do you put in that column before the invoice has been paid (assuming you don't know beforehand when it will be paid)? It can't be an empty string, because that's not a valid date. It doesn't make sense to give it an arbitrary date (e.g. 1/1/1900) because that date simply isn't correct. It seems the only reasonable value is NULL, because it does not have a value. Working with nulls in a database has a few challenges, but databases handle them well. The real problems are when you load nulls from your database into your application code. That's where I've found that things are more difficult. For example, in .NET, a date in a strongly typed dataset (mimicking your DB structure) is a value type and cannot be null. So you have to build workarounds. Avoid nulls when you can, but don't rule them out because they have valid uses. A: I think you're confusing Conceptual Data Modeling with Physical Data Modeling. 
In CDMs, if an object has an optional field, you should subtype the object and create a new object for when that field is not null. That's the theory in CDMs. In the physical world we make all sorts of compromises for the real world. In the real world NULLS are more than fine, they are essential. A: I agree with many of the answers above and also believe that NULL can be used, where appropriate, in a normalized schema design - particularly where you may wish to avoid using some kind of "magic number" or default value which, in turn, could be misleading! Ultimately though, I think usage of null needs to be well thought out (rather than by default) to avoid some of the assumptions listed in the answers above, particularly where NULL might be assumed to be 'nothing' or 'empty', 'unknown' or the 'value hasn't been entered yet'. A: Null means no value, while 0 doesn't: if you see a 0 you don't know the meaning; if you see a null you know it is a missing value. I think nulls are much clearer; 0 and '' are confusing since they don't clearly show the intent of the value stored. A: Don't take my words as sarcasm; I mean it. Unless you are working with toy databases, NULLs are inevitable and in the real world we cannot avoid NULL values. Just for example: how can you have a first name, middle name, and last name for every person? (Middle name and last name are optional, and in that case NULLs are there for you.) And how can you have a fax, business phone, and office phone for everybody in the blog list? NULLS are fine, and you have to handle them properly when retrieving them. In SQL Server 2008 there is a concept of Sparse columns where you can avoid the space taken for NULLs also. Don't confuse NULLs with Zeros and any other value. People do that and say it is right. Thanks Naveen A: It depends. As long as you understand why you are allowing NULLs in the database (the choice needs to be made on a per-column basis) AND how you will interpret, ignore or otherwise deal with them, they are fine.
For instance, a column like NUM_CHILDREN - what do you do if you don't know the answer? It should be NULL. In my mind, there is no other best option for this column's design (even if you have a flag to determine whether the NUM_CHILDREN column is valid, you still have to have a value in this column). On the other hand, if you don't allow NULLs and have special reserved values for certain cases (instead of flags), like -1 for number of children when it is really unknown, you have to address these in a similar way, in terms of conventions, documentation, etc. So, ultimately, the issues have to be addressed with conventions, documentation and consistency. The alternative, as apparently espoused by Adam Davis in the above answer, of normalizing the columns out to sparse (or not so sparse, in the case of the NUM_CHILDREN example or any example where most of the data has known values) tables, while able to eliminate all NULLs, is non-workable in general practice. In many cases where an attribute is unknown, it makes little sense to join to another table for each and every column which could allow NULLs in a simpler design. The overhead of joins and the space requirements for the primary keys make little sense in the real world. This brings to mind the way duplicate rows can be eliminated by adding a cardinality column: while it theoretically solves the problem of not having a unique key, in practice that is sometimes impossible - for instance, in large-scale data. The purists are then quick to suggest a surrogate PK instead, yet the idea that a meaningless surrogate can form part of a tuple (row) in a relation (table) is laughable from the point of view of relational theory. A: There are several different objections to the use of NULL. Some of the objections are based on database theory. In theory, there is no difference between theory and practice. In practice, there is. It is true that a fully normalized database can get along without NULLs at all.
Any place where a data value has to be left out is a place where an entire row can be left out with no loss of information. In practice, decomposing tables to this extent serves no great useful purpose, and the programming needed to perform simple CRUD operations on the database becomes more tedious and error-prone, rather than less. There are places where the use of NULLs can cause problems: essentially these revolve around the following question: what does missing data really mean? All a NULL really conveys is that there is no value stored in a given field. But the inferences application programmers draw from missing data are sometimes incorrect, and that causes a lot of problems. Data can be missing from a location for a variety of reasons. Here are a few: * *The data is inapplicable in this context, e.g. spouse's first name for a single person. *The user of a data entry form left a field blank, and the application does not require an entry in the field. *The data is copied to the database from some other database or file, and there was missing data in the source. *There is an optional relationship encoded in a foreign key. *An empty string was stored in an Oracle database. Here are some guidelines about when to avoid NULLs: If in the course of normal expected programming, query writers have to write a lot of ISNULL, NVL, COALESCE, or similar code in order to substitute a valid value for the NULL. Sometimes, it's better to make the substitution at store time, provided what's being stored is "reality". If counts are likely to be off because rows containing a NULL were counted. Often, this can be obviated by just selecting count(MyField) instead of count(*). Here is one place where you by golly better get used to NULLs, and program accordingly: whenever you start using outer joins, like LEFT JOIN and RIGHT JOIN. The whole point behind an outer join, as distinct from an inner join, is to get rows when some matching data is missing.
The missing data will be given as NULLs. My bottom line: don't dismiss theory without understanding it. But learn when to depart from theory as well as how to follow it. A: Null markers are fine. Really, they are. A: One gotcha if you are using an Oracle database: if you save an empty string to a CHAR type column then Oracle will coerce the value to be NULL without asking. So it can be quite difficult to avoid NULL values in string columns in Oracle. If you are using NULL values, learn to use the SQL function COALESCE, especially with string values. You can then prevent NULL values propagating into your programming language. For example, imagine a person having a FirstName, MiddleName and FamilyName but you want to return a single field; SELECT FullName = COALESCE(FirstName + ' ', '') + COALESCE(MiddleName + ' ', '') + COALESCE(FamilyName, '') FROM Person If you don't use COALESCE and any column contains a NULL value, you get NULL returned. A: Technically, nulls are illegal in the relational math on which the relational database is based. So from a purely technical, semantic relational model point of view, no, they are not okay. In the real world, denormalization and some violations of the model are okay. But, in general, nulls are an indicator that you should look at your overall design more closely. I am always very wary of nulls and try to normalize them out whenever I can. But that doesn't mean that they aren't the best choice sometimes. But I would definitely lean to the side of "no nulls" unless you are really sure that having the nulls is better in your particular database. A: NULL rocks. If it wasn't necessary in some cases, SQL would not have IS NULL and IS NOT NULL as special-case operators. NULL is the root of the conceptual universal, all else is NOT NULL. Use NULLs freely, whenever it may be possible for a data value to be absent but not missed. Default values can only compensate for NULL if they are absolutely correct all of the time.
For example, if I have a single-bit field "IsReady", it may make perfect sense for this field to have a default value of false with NULL not allowed, but this implicitly asserts that we know the thing is not ready, when in fact we may have no such knowledge. Chances are, in a workflow scenario, the person who decides ready-or-not just hasn't had the chance to enter their opinion yet, so a default of false could actually be dangerous, leading them to overlook a decision that appears to have been made but was in fact only defaulted. As an aside, and in reference to the middle-initial example, my father had no middle name, therefore his middle initial would be NULL - not blank, space, or asterisk - except in the Army, where his middle initial was NMI = No Middle Initial. How silly was that? A: While technically NULLs are ok as a field value, they are quite frequently frowned upon. Depending on how data is written to your database, it is possible (and common) to end up with an empty string value in the field as opposed to a NULL. So, any query that has this field as part of the WHERE clause would need to handle both scenarios, which means needless keystrokes. A: My controversial opinion for the day - the default of allowing NULLs in database columns was probably the worst universally accepted design decision in all of RDBMS land. Every vendor does it, and it's wrong. NULLs are fine in certain, specific, well-thought-out instances, but the idea that you have to explicitly disallow NULLs for every column makes negligent nullability way more common than it should be. A: There is nothing wrong with using NULL for data fields. You have to be careful when setting keys to null. Primary keys should never be NULL. Foreign keys can be null, but you have to be careful not to create orphan records. If something is "non-existent" then you should use NULL instead of an empty string or other kind of flag.
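Several of the pitfalls mentioned in these answers (an empty string is not NULL, COUNT skips NULLs, COALESCE substitutes a value at query time) are easy to demonstrate. Here is a small sketch using Python's built-in sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (first TEXT, middle TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [("Ann", "B"), ("Carl", None), ("Dee", "")])  # NULL vs empty string

# COUNT(*) counts rows; COUNT(col) skips NULLs but does count empty strings.
rows, middles = conn.execute(
    "SELECT COUNT(*), COUNT(middle) FROM person").fetchone()
print(rows, middles)  # 3 2

# COALESCE substitutes a value for NULL at query time.
full = [r[0] for r in conn.execute(
    "SELECT first || COALESCE(' ' || middle, '') FROM person")]
print(full)  # ['Ann B', 'Carl', 'Dee '] - note the trailing space for ''
```

The last line shows the empty-string trap in miniature: "Dee" gets a trailing space because '' is a real value, not a NULL, so COALESCE leaves it alone.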
A: Instead of writing up all the issues of NULL, and tristate vs boolean logic, etc. - I'll offer this pithy advice: * *Don't allow NULL in your columns until you find yourself adding a magic value to represent missing or incomplete data. *Since you're asking this question, you should be very careful in how you approach NULL. There are a lot of non-obvious pitfalls to it. When in doubt, don't use NULL. A: Personally, I think that nulls should only be used when you are using the field as a foreign key to another table, to symbolize that this record doesn't link to anything in the other table. Other than that, I find that null values are actually very troublesome when programming application logic. Because there is no direct representation of a database null in most programming languages for many data types, it ends up creating a lot of application code to deal with the meaning of these null values. When a DB encounters a null integer and tries, for instance, to add a value of 1 to it (aka null + 1), the database will return null, as that is how the logic is defined. However, when a programming language tries to add null and 1, it will usually throw an exception. So, your code ends up littered with checks of what to do when the value is null, which often just equates to converting to 0 for numbers, empty string for text, and some null date (1900/1/1?) for date fields. A: I think the question comes down to what you interpret a value of NULL to signify. Yes, there are many interpretations for a NULL value; however, some of them posted here should never be used. The true meaning of NULL is determined by the context of your application and should never mean more than one thing. For example, one suggestion was that NULL on a date-of-birth field would indicate the person is still alive. This is dangerous. In all simplicity, define NULL and stick to it. I use it to mean "the value in this field is unknown at this time". It means that and ONLY that.
If you need it to mean something else AS WELL, then you need to re-examine your data model. A: It all comes down to normalization versus ease of use and performance issues. If you are going to stick to complete normalization rules you are going to end up writing stuff that looks like: Select c.id, c.lastname,....... from customer c left join customerphonenumber cpn on c.id = cpn.customerid left join customeraddress ca on c.id = ca.customerid left join customerphonenumber2 cpn2 on c.id = cpn2.customerid etc, etc, etc A: It seems that if null is valid within the context of the attribute, then it should be allowed. But what does null mean? That's the rub. It's "no value", but there are a dozen different reasons there might be no value there, and "null" doesn't give you any clue which one it means in this case. (Not set yet, not applicable to this instance, not applicable to this type, not known, not knowable, not found, error, program bug, ...) This is very common in Java, where object references are often null. There's a school of thought that says null references are bad there, too. Same problem: what does null mean? IIRC, Java has both "null" and "uninitialized" (though no syntax for the latter). So Gosling realized the folly of using "null" for every kind of "no value". But why stop with just two? A: As an analyst/programmer with 30 years' experience I'll just say NULLs should be taken out back and put out of their misery. -1, 01/01/0001, 12/31/9999 and ? will all suffice just as well without the mind-distorting code needed to cope with these nasty NULLs. A: It's absolutely fine to use null. A: Related question: How do I enforce data integrity rules in my database? I initially started with many small tables with almost zero nullable fields. Then I learned about the LINQ to SQL IsDiscriminator property and that LINQ to SQL only supports single table inheritance. Therefore I re-engineered it as a single table with lots of nullable fields.
{ "language": "en", "url": "https://stackoverflow.com/questions/163434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: Can I pass parameter from different page in JavaScript? In PHP it is easy to pass parameter using $_GET[] and $_POST[]. Is there something like that in JavaScript? I hope I can pass parameters from addresses or forms. A: window.location.href contains the current page's URL. You can append your parameters to a page's URL after a "?" (i.e., a querystring), and have the javascript on that page parse them. Lots more information and examples on googlable pages like this one. A: You can use java script to set hidden form fields. When the form is posted the data will be transmitted to the server. A: If you're talking about passing parameters from one page to the next using purely client-side code, your best bet is probably to construct a normal URL query string, and append it on to the URL of the page you're navigating to when setting document.location. On the target page, your client side code will have to parse document.location.href to get the individual URL parameters. Some Javascript libraries have helper functions to do this sort of parsing for you. Prototype, for example, can easily convert hash objects to and from URL query strings. A: You have access to the query string from within javascript as well, so that should help. A: What Ken said is the equivalent of POST variables. For passing values via GET you can append a querystring key to a link using javascript.
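To make the parsing step concrete, here is a rough sketch of the querystring approach the answers describe. parseQueryString is an invented helper name, not a built-in; modern browsers also ship URLSearchParams, which does the same job:

```javascript
// Hypothetical helper: pull key/value pairs out of everything after "?".
function parseQueryString(url) {
  const query = url.split('?')[1] || '';
  const params = {};
  for (const pair of query.split('&')) {
    if (!pair) continue;
    const [key, value] = pair.split('=');
    params[decodeURIComponent(key)] = decodeURIComponent(value || '');
  }
  return params;
}

// Sending page:   location.href = 'target.html?user=bob&page=2';
// Receiving page: const params = parseQueryString(window.location.href);
console.log(parseQueryString('target.html?user=bob&page=2').user); // "bob"
```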
{ "language": "en", "url": "https://stackoverflow.com/questions/163444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Should the Model make service calls to get data? We are building a website using the MVC pattern. So far all the pages we built used models which had to operate on reference data (which is cached when the website loads for the first time). But now we have reached the stage of the flow where we have to deal with transactional data (which is specific to that flow). Till now we created model classes by giving them all the data, since it was all cached already. But now that we have to deal with transactional data, should we do the same thing, where we get all the data upfront and create a model object, or should we make the model class get the data by making service calls? A: If you're truly using MVC, then your controller should intercept the particular action that should be taken, invoke any data-related requests, and shove the data into your model objects so that the model can then be placed into the view. There is very little benefit to having the model populate itself from a database, because you already have a controller which can do the job in a more cohesive manner. A: In true MVC, the model is responsible for updating itself in reaction to an instruction from the controller. As such, yes: the Model, and only the Model, should make service calls. A: The disadvantage of the first approach is that the data that is fetched upfront might never be used. So we went with the second approach, where the model gets the data. To decouple the model and the service calls we used an interface. Alternatives are welcome. A: Model objects are built through queries to the database. That's the general approach. Model objects can be built through web service requests to other servers and databases. That's almost the same thing. If -- for some performance tuning -- you pre-build all the model objects, fine. That's a special case. I prefer to use an ORM layer to handle object caching, so I don't prefetch anything. Rather, it stays in the ORM cache.
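The interface-based decoupling the third answer mentions might look roughly like this. All names here are invented for illustration (the question doesn't name a language or its actual types), so treat it as a sketch, not the poster's design:

```java
// The model depends on a small service interface rather than a concrete
// service, so transactional data is fetched on demand and the service can
// be stubbed out in tests.
interface OrderService {
    String fetchStatus(int orderId);
}

class OrderModel {
    private final OrderService service; // injected, not constructed here
    private final int orderId;
    private String status;              // loaded lazily

    OrderModel(int orderId, OrderService service) {
        this.orderId = orderId;
        this.service = service;
    }

    String getStatus() {
        if (status == null) {
            status = service.fetchStatus(orderId); // fetch only when needed
        }
        return status;
    }
}

public class Demo {
    public static void main(String[] args) {
        OrderService stub = id -> "PENDING"; // test double for the real call
        OrderModel model = new OrderModel(42, stub);
        System.out.println(model.getStatus()); // PENDING
    }
}
```

The lazy getStatus() call addresses the "fetched upfront but never used" concern: nothing is loaded until a view actually asks for it.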
{ "language": "en", "url": "https://stackoverflow.com/questions/163451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why doesn't my ListView display List or Details items? Using C# .NET 2.0, I have an owner-drawn ListView where I'm overriding the OnDrawColumnHeader, OnDrawItem and OnDrawSubItem events. If I set the View property to Details at design-time, everything works beautifully and I can switch the View property and all view modes display as they should (I'm not using Tile view). However, if I start in any other View, both the List and Details views are blank. I know you'll probably want to see code, but there's a lot of it, so I'm hesitant to post that much, but can if necessary. I'm more curious if someone has seen this before, and/or might have an inkling of how to fix it. The View property will be a user-saved setting, so I won't always be able to start in Details view by default. A: Either SubItems are not added, or you didn't add any columns. That's my initial feeling. A: The WinForms ListView is mostly a layer of abstraction on top of the actual Windows control, so there are aspects of its behaviour for which, well, "counterintuitive" is a polite way of putting it. I have a vague recollection, from back in my days as a Delphi developer, that when you are owner-drawing a ListView, the subitems of the control aren't actually populated unless your ListView is in "Details" mode when you load the items. Things to try ... force the WinForms control to recreate the underlying Windows handle after you change the display style. If memory serves, DestroyHandle() is the method you want. ... assuming you have a "Refresh" in your application to reload the data, do things work properly when you refresh after changing the display style? ... if all else fails, beg, borrow or steal a copy of Charles Petzold's classic on Windows programming. A: If you configured it correctly using the designer, just go into the generated designer code and see what code was emitted by Visual Studio to get it to work right. Then just emulate that code.
A: Without your code there is not much that can be said, but DrawColumnHeader is only called when the OwnerDraw property is set to true. I'm not sure whether it is automatically set to true or false depending on the View property, but it is worth a try. So make sure OwnerDraw is set to true before launching your application.
{ "language": "en", "url": "https://stackoverflow.com/questions/163472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Stop MSVC++ debug errors from blocking the current process? Any failed ASSERT statements on Windows cause the below debug message to appear and freeze the application's execution. I realise this is expected behaviour, but it runs periodically on a headless machine, so instead of the unit tests failing, they wait on user input indefinitely. Is there a registry key or compiler flag I can use to prevent this message box from requesting user input whilst still allowing the test to fail under ASSERT? Basically, I want to do this without modifying any code, just changing compiler or Windows options. Thanks! Microsoft Visual C++ Debug Library ASSERT http://img519.imageshack.us/img519/853/snapshotbu1.png A: I think this is a dialog shown by _CrtDbgReport for reports of type _CRT_ASSERT. With _CrtSetReportHook, you can tailor that behavior for your entire application. (i.e. it requires one local change) In particular, you can continue execution after a failed assertion, thus ignoring it. A: From MSDN about the ASSERT macro: In an MFC ISAPI application, an assertion in debug mode will bring up a modal dialog box (ASSERT dialog boxes are now modal by default); this will interrupt or hang the execution. To suppress modal assertion dialogs, add the following lines to your project source file (projectname.cpp): // For custom assert and trace handling with WebDbg #ifdef _DEBUG CDebugReportHook g_ReportHook; #endif Once you have done this, you can use the WebDbg tool (WebDbg.exe) to see the assertions. A: In a unit-test context, it is often good to convert ASSERTs (actually _CrtDbgReport calls) into some exception, typically a std::exception, that contains some informative text. This tends to wend its way out to the unit test's output log as a fail. That's just what you want: a failed ASSERT should be a failed unit test. Do that by throwing in your report-hook function, as installed via _CrtSetReportHook().
{ "language": "en", "url": "https://stackoverflow.com/questions/163484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Generate a WSDL without a web server I would like to generate a WSDL file from a C++ ATL web service without using a web server. I would like to generate it as part of the Visual Studio build or as a post-build event. I found a program (CmdHelper) that does this for .NET assemblies but it doesn't seem to work for what I need. Any ideas? A: The Microsoft SOAP Toolkit comes with a WSDL generator, which will generate a WSDL file from a COM component. We use that where I work, and it seems to do the job. We haven't tried to integrate it into our build process - we've always run the tool by hand when we need to update the WSDL, and we check the generated WSDL into version control. I see that Microsoft has deprecated this product, so there may be newer alternatives out there, but it works fine for us.
{ "language": "en", "url": "https://stackoverflow.com/questions/163487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: State and time transcending logic and program flow? Wondering if it would ever be useful to index every possible state of an application using some reference keys... Meaning, say we have a program that starts and has only so many possible outcomes, say 8, but each outcome is attained by stepping through many more logic states, and everything in between each branch is considered to be a state and is mapped to a key. It could take a lot of memory in large programs, but if we could access a key directly (the key could be based on time or depth of logic), then we could instantly traverse through any of the possible situations without having to start the whole process over again with fresh data. Think of it like a tree where the nodes with no children are final outcomes, and every branch between a node and its parents or children is a 'state', each one keyed differently. So while there are only 8 leaves, or final outcomes of the process, there could be many 'states' depending on how deep the logic goes down the tree before running out of children. Maybe for simulations, but it would take a ton of memory. A: This would not be possible to solve for a general program. The halting problem proves that it is impossible to determine whether a program will halt. The problem of determining whether a given state is possible is reducible to the halting problem, thus not solvable either. A: I think this approach would be totally intractable for, well, anything. As a search problem, it's too big. If we assume that each state can lead to 10 outcomes (though I think this number is really low), then to look just 20 steps ahead, we now have to keep track of 10^20 possibilities. And remember that every step in a loop counts as a branch point.
So if we have code that looks like this: for (int i=0; i < 100; i++) some_function(); Then the number of possible states is (number of branches inside some_function) ^ 100 A: While Josh is right that you can't answer the most liberal version of this problem due to the ambiguity of it, you can answer it if you place some limitations on your scenario. There is a big difference between the state of your program and the state of, say, business entities. For example, say you have a workflow-oriented application that is defined by a DFA (state machine). You actually could then associate a given point in that DFA with an id of some sort. So yes, it's tractable, but not without restrictions. A: This is done on the function level; it's a technique called memoization. A: Research Kripke structures and modal logic. This is an approach taken in modelling programmes. I forget what the classic systems that use this approach are. A: Ryan, the answer is definitively YES. Contrary to the first answer, the halting problem does not prove anything here. In fact, Ryan, what you're suggesting shows that the halting problem does not apply to real digital computers, and I've used this very example as a proof of it before. In a deterministic digital system (i.e. a program running on real digital hardware), the number of possible states is finite, and therefore all possible states are enumerable. The exact amount of memory required for the hash would be: (2)*(program state size)*(number of initial states) The initial state would be your hash key, and final state would be the hash value, and then you'd have a key/value pair for each initial state. For an operating system, the "program state size" is 2^(total gigabits of memory across all system devices). Obviously, such a large, general-purpose program would require an impractical amount of memory to hash, and would not be useful anyway, since the system is self-referencing/irreducibly complex (i.e.
next user input depends on previous system output). Explanation: This is highly useful, because if you index every possible initial state and associate it with the terminating state, you would effectively bring the running time of ANY PROGRAM to zero! And by zero I mean a very fast O(1) running time -- the time it takes to look up the terminating state (if it terminates). Running a program, starting from each of all possible states, will provide a kind of state map showing cycles. The halting problem is therefore solved, because there are only three (actually four, collapsing to three) possibilities given any possible initial state: * *The program will reenter a previously encountered state (since the initial state), before exhausting all possible states, and therefore logically loops forever. *The program reaches a state identified as "terminating" before it has a chance to reenter a previously encountered state or exhaust all possible states (since the initial state). *The simplest program will start from an initial state, will enter all possible states exactly once, and then has no choice but to (3) halt or (4) reenter a previously encountered state and loop forever. for (int i = 0; true; i++); //i will reach max-value, roll over back to zero, at which point it will have reentered the initial state So, basically, your index could be described like this: * *For each initial state, there is exactly one or zero terminating states. In other words, for each initial state, the program either reaches a terminating state or reenters a state already encountered since the initial state and cycles endlessly. So, for any program running on deterministic digital hardware, it is absolutely possible (but often not practical) to determine all its states and whether it halts or loops forever.
* *The practicality depends solely on how many valid initial states you have (which you can reduce drastically with input constraints), and how feasible it is to take the time to run the program for each of them to termination and store the resulting state in the hash table. Besides forcing any program's running time to be an O(1) operation, other uses of capturing state include the save-state function in game console emulators and the hibernate feature of computers (although not a perfect restoration of state, since some system memory must be used for the code that restores the state, and some memory may never be stored (e.g. GPU memory)). What this proves is that any program can be represented by a hash table. Any program can be represented by an initial-to-final state-transition map. All programs can be simplified to one big function with a massive memory footprint!
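The finite-state argument above can be sketched on a toy machine. In this illustrative Python sketch (the transition function is invented), the state space is finite and the transitions deterministic, so classifying every start state as halting or looping is itself a terminating computation:

```python
N = 16          # toy state space: integers 0..N-1
HALT = None     # sentinel for termination

def step(s):
    """Deterministic transition: even states halve toward 0 (which halts),
    odd states jump to (3*s + 1) mod N."""
    if s == 0:
        return HALT
    return s // 2 if s % 2 == 0 else (3 * s + 1) % N

def classify(start):
    """Run from `start` until the machine halts or revisits a state.
    Guaranteed to terminate because the state space is finite."""
    seen = set()
    s = start
    while s is not HALT:
        if s in seen:
            return "loops"
        seen.add(s)
        s = step(s)
    return "halts"

# The 'index of every initial state' the question asks about:
table = {s: classify(s) for s in range(N)}
print(sorted(s for s in range(N) if table[s] == "halts"))
# [0, 3, 5, 6, 7, 9, 10, 12, 14, 15]
```

Once `table` is built, "running" the machine from any start state is an O(1) lookup, which is exactly the trade the answer describes: pay once in time and memory, answer halting questions for free afterwards.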
{ "language": "en", "url": "https://stackoverflow.com/questions/163492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Running a Ruby Program as a Windows Service? Is it possible to run a ruby application as a Windows Service? I see that there is a related question which discusses running a Java Application as a Windows Service, how can you do this with a Ruby application? A: Here is a code template to do firedeamon :) ##################################################################### # runneur.rb : service which run (continuously) a process # 'do only one simple thing, but do it well' ##################################################################### # Usage: # .... duplicate this file : it will be the core-service.... # .... modify constantes in beginning of this script.... # .... modify stop_sub_process() at end of this script for clean stop of sub-application.. # # > ruby runneur.rb install foo ; foo==name of service, # > ruby runneur.rb uninstall foo # > type d:\deamon.log" ; runneur traces # > type d:\d.log ; service traces # ##################################################################### class String; def to_dos() self.tr('/','\\') end end class String; def from_dos() self.tr('\\','/') end end rubyexe="d:/usr/Ruby/ruby19/bin/rubyw.exe".to_dos # example with spawn of a ruby process... SERVICE_SCRIPT="D:/usr/Ruby/local/text.rb" SERVICE_DIR="D:/usr/Ruby/local".to_dos SERVICE_LOG="d:/d.log".to_dos # log of stdout/stderr of sub-process RUNNEUR_LOG="d:/deamon.log" # log of runneur LCMD=[rubyexe,SERVICE_SCRIPT] # service will do system('ruby text.rb') SLEEP_INTER_RUN=4 # at each dead of sub-process, wait n seconds before rerun ################### Installation / Desintallation ################### if ARGV[0] require 'win32/service' include Win32 name= ""+(ARGV[1] || $0.split('.')[0]) if ARGV[0]=="install" path = "#{File.dirname(File.expand_path($0))}/#{$0}".tr('/', '\\') cmd = rubyexe + " " + path print "Service #{name} installed with\n cmd=#{cmd} ? " ; rep=$stdin.gets.chomp exit! 
if rep !~ /[yo]/i Service.new( :service_name => name, :display_name => name, :description => "Run of #{File.basename(SERVICE_SCRIPT.from_dos)} at #{SERVICE_DIR}", :binary_path_name => cmd, :start_type => Service::AUTO_START, :service_type => Service::WIN32_OWN_PROCESS | Service::INTERACTIVE_PROCESS ) puts "Service #{name} installed" Service.start(name, nil) sleep(3) while Service.status(name).current_state != 'running' puts 'One moment...' + Service.status(name).current_state sleep 1 end while Service.status(name).current_state != 'running' puts ' One moment...' + Service.status(name).current_state sleep 1 end puts 'Service ' + name+ ' started' elsif ARGV[0]=="desinstall" || ARGV[0]=="uninstall" if Service.status(name).current_state != 'stopped' Service.stop(name) while Service.status(name).current_state != 'stopped' puts 'One moment...' + Service.status(name).current_state sleep 1 end end Service.delete(name) puts "Service #{name} stopped and uninstalled" else puts "Usage:\n > ruby #{$0} install|desinstall [service-name]" end exit! end ################################################################# # service runneur : service code ################################################################# require 'win32/daemon' include Win32 Thread.abort_on_exception=true class Daemon def initialize @state='stopped' super log("******************** Runneur #{File.basename(SERVICE_SCRIPT)} Service start ***********************") end def log(*t) txt= block_given?() ? 
(yield() rescue '?') : t.join(" ") File.open(RUNNEUR_LOG, "a"){ |f| f.puts "%26s | %s" % [Time.now,txt] } rescue nil end def service_pause #put activity in pause @state='pause' stop_sub_process log { "service is paused" } end def service_resume #quit activity from pause @state='run' log { "service is resumes" } end def service_interrogate # respond to quistion status log { "service is interogate" } end def service_shutdown # stop activities before shutdown log { "service is stoped for shutdown" } end def service_init log { "service is starting" } end def service_main @state='run' while running? begin if @state=='run' log { "starting subprocess #{LCMD.join(' ')} in #{SERVICE_DIR}" } @pid=::Process.spawn(*LCMD,{ chdir: SERVICE_DIR, out: SERVICE_LOG, err: :out }) log { "sub-process is running : #{@pid}" } a=::Process.waitpid(@pid) @pid=nil log { "sub-process is dead (#{a.inspect})" } sleep(SLEEP_INTER_RUN) if @state=='run' else sleep 3 log { "service is sleeping" } if @state!='run' end rescue Exception => e log { e.to_s + " " + e.backtrace.join("\n ")} sleep 4 end end end def service_stop @state='stopped' stop_sub_process log { "service is stoped" } exit! end def stop_sub_process ::Process.kill("KILL",@pid) if @pid @pid=nil end end Daemon.mainloop A: Check out the following library: Win32Utils. You can create a simple service that you can start/stop/restart at your leisure. I'm currently using it to manage a Mongrel instance for a Windows hosted Rails app and it works flawlessly. A: When trying the Win32Utils one really need to studie the doc and look over the net before finding some simple working example. 
This seems to work today 2008-10-02: gem install win32-service Update 2012-11-20: According to https://stackoverflow.com/users/1374569/paul the register_bar.rb should now be Service.create( :service_name => 'some_service', :host => nil, :service_type => Service::WIN32_OWN_PROCESS, :description => 'A custom service I wrote just for fun', :start_type => Service::AUTO_START, :error_control => Service::ERROR_NORMAL, :binary_path_name => 'c:\usr\ruby\bin\rubyw.exe -C c:\tmp\ bar.rb', :load_order_group => 'Network', :dependencies => ['W32Time','Schedule'], :display_name => 'This is some service' ) bar.rb creates the application/daemon LOG_FILE = 'C:\\test.log' begin require "rubygems" require 'win32/daemon' include Win32 class DemoDaemon < Daemon def service_main while running? sleep 10 File.open("c:\\test.log", "a"){ |f| f.puts "Service is running #{Time.now}" } end end def service_stop File.open("c:\\test.log", "a"){ |f| f.puts "***Service stopped #{Time.now}" } exit! end end DemoDaemon.mainloop rescue Exception => err File.open(LOG_FILE,'a+'){ |f| f.puts " ***Daemon failure #{Time.now} err=#{err} " } raise end bar.rb is the service, but we must create and register it first!
this can be done with sc create some_service but if we are going to use ruby and win32utils we should do a register_bar.rb require "rubygems" require "win32/service" include Win32 # Create a new service Service.create('some_service', nil, :service_type => Service::WIN32_OWN_PROCESS, :description => 'A custom service I wrote just for fun', :start_type => Service::AUTO_START, :error_control => Service::ERROR_NORMAL, :binary_path_name => 'c:\usr\ruby\bin\rubyw.exe -C c:\tmp\ bar.rb', :load_order_group => 'Network', :dependencies => ['W32Time','Schedule'], :display_name => 'This is some service' ) Note, there is a space between c:\tmp\ bar.rb in 'c:\usr\ruby\bin\rubyw.exe -C c:\tmp\ bar.rb' Run ruby register_bar.rb and now one can start the service either from the Windows service control panel or sc start some_service and watch c:\test.log be filled with Service is running Thu Oct 02 22:06:47 +0200 2008 To keep things simple while getting something to work, it is easier to unregister the service and create a new one instead of modifying an existing one: unregister_bar.rb require "rubygems" require "win32/service" include Win32 Service.delete("some_service") Credits to the people http://rubypane.blogspot.com/2008/05/windows-service-using-win32-service-and_29.html http://rubyforge.org/docman/view.php/85/595/service.html
{ "language": "en", "url": "https://stackoverflow.com/questions/163497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: C#, ASP.NET - NullReferenceException - Object reference not set to an instance of an object Definition of variables in use: Guid fldProId = (Guid)ffdPro.GetProperty("FieldId"); string fldProValue = (string)ffdPro.GetProperty("FieldValue"); FormFieldDef fmProFldDef = new FormFieldDef(); fmProFldDef.Key = fldProId; fmProFldDef.Retrieve(); string fldProName = (string)fmProFldDef.GetProperty("FieldName"); string fldProType = (string)fmProFldDef.GetProperty("FieldType"); Lines giving the problem (specifically line 4 (hTxtBox.Text = ...)): if (fldProType.ToLower() == "textbox") { Label hTxtBox = (Label)findControl(fldProName); hTxtBox.Text = fldProValue; } All data is gathered from the database correctly; however, the label goes screwy. Any ideas? A: Are you sure that findControl is returning a value? Is hTxtBox.Text a property that does any computation in its setter that could be throwing the NullReferenceException? A: findControl is returning a null value. It could be that the particular Label is not a direct child of the current page, i.e., inside an UpdatePanel or some other control so that the actual name of the control is different than the name applied (and thus it can't find it). For example, if it is named "name", the actual name may be ctl0$content$name because it is nested inside another control on the page. You don't really give enough information about the context for me to give you a better answer. A: Looks like fmProFldDef's FieldName property is screwy. Did you verify that it's getting the hTxtBox's client Id? A: This line is returning null: Label hTxtBox = (Label)findControl(fldProName); It may be a result of "FieldName" not existing (thus this line returning null, then null being used in the lookup) string fldProName = (string)fmProFldDef.GetProperty("FieldName"); or the text within FieldName not representing a form field. A: FindControl might not be able to see the textbox - is it in a databound control (e.g. ListView, FormView, etc.)?
{ "language": "en", "url": "https://stackoverflow.com/questions/163507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Accounting Software Design Patterns Are there any good resources (books, authoritative guides, etc.) for design patterns or other best practices for software that includes financial accounting features? Specifically, where is good information about handling issues like the following: * *Internal representations of money quantities *Internal representations of accounts, journals, and other records *Reconciling inconsistencies (either automatically or via user action) *Handling ends of accounting periods (daily, weekly, monthly) *Designing UIs and printed financial reports that make sense to businesspeople Note: "Authoritative" or otherwise widely-accepted information is what we're looking for here. Otherwise, this will just turns into a big list of anecdotes of all the things people have tried, making the topic very subjective. A: For dealing with currencies, remember that you need to always remember not just what currency the amount was entered in, but also what time it was entered, and what the rate of each currency was at that time. Also, accountants are not forgiving when it comes to "inaccuracies" in amounts. If an amount is entered, you have to store it as it was entered, and not convert it first, because afterwards you won't be able to guarantee that you can get back the entered amount just like it was entered. These may sound like obvious things, but people do sin against them in the real world. A: A while ago when I was assigned to work on such a system, I found this link in the Martin Fowler website: Martin Fowler - Accounting Patterns It contais some patterns for accounting software, such as accounting entries, transactions and adjustments. The architecture he describes is based on events. Never read it entirely, as the system I work on was already in the middle of its development stage and I couldn't change the design. Hope it helps. 
A: I can recommend Patterns of Enterprise Application Architecture and Analysis Patterns: Reusable Object Models, both by Martin Fowler; they give software architectural patterns for common problems. A: I find the Data Model Resource book to be a good source of inspiration for modeling business structures. Apache Ofbiz ERP was built around the concepts in this book. A: Martin Fowler's Analysis Patterns covers some of those topics. A: I would have the following structural classes: * *Account - Represents a financial account. eg. Cash, Sale, Expense; *Category - The category the Account belongs to. eg. Asset, Expenses, Revenues; *Mutation - Represents a financial entry of an account. *Transaction - Contains a collection of mutations. *Money - A composite class using a Currency object and storing the amount as a long integer; When I approached the design initially I kept thinking about the Decorator and Builder Patterns. Tax calculation can use the Strategy Pattern. The Observer Pattern can be used to veto a Transaction. A: FOR UI / REPORTING: Look into Crystal Reports and Business Objects. Both are used at my place of employment in the Investment Accounting department. We use other stuff for the internals here (JD Edwards) but I can't really go into much detail other than 'yeah, it does that'
{ "language": "en", "url": "https://stackoverflow.com/questions/163517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Set ASP.Net version using WiX I am creating an installer for an ASP.Net website using WiX. How do you set the ASP.Net version in IIS using WiX? A: Don't forget to enable ASP 2.0 on the server <iis:WebServiceExtension Id="ExtensionASP2" Group="ASP.NET v2.0.50727" Allow="yes" File="[NETFRAMEWORK20INSTALLROOTDIR]aspnet_isapi.dll" Description="ASP.NET v2.0.50727"/> Here is the sof-question A: My answer is basically the same as others seen here; I just wanted to offer people another example. Given the number of file extensions that ASP.NET handles, and that the list changes in each version, I think the most reliable solution is to run aspnet_regiis at the end of the installation. This does mean though, that I don't have any support for rollback or uninstallation. I you're creating a new application in IIS, it doesn't really matter because it will be deleted by Wix. If you're modifying an existing application, perhaps you could find out from the registry what version of ASP.NET is configured, and run that version's aspnet_regiis to undo your changes. The following uses Wix 3.5. <Fragment> <!-- Use the properties in Wix instead of doing your own registry search. --> <PropertyRef Id="IISMAJORVERSION"/> <PropertyRef Id="NETFRAMEWORK40FULL"/> <PropertyRef Id="NETFRAMEWORK40FULLINSTALLROOTDIR"/> <!-- The code I'm using is intended for IIS6 and above, and it needs .NET 4 to be installed. --> <Condition Message="This application requires the .NET Framework 4.0. Please install the required version of the .NET Framework, then run this installer again."> <![CDATA[Installed OR (NETFRAMEWORK40FULL)]]> </Condition> <Condition Message="This application requires Windows Server 2003 and Internet Information Services 6.0 or better."> <![CDATA[Installed OR (VersionNT >= 502)]]> </Condition> <!-- Populates the command line for CAQuietExec. IISWEBSITEID and IISVDIRNAME could be set to default values, passed in by the user, or set in your installer's UI. 
--> <CustomAction Id="ConfigureIis60AspNetCommand" Property="ConfigureIis60AspNet" Execute="immediate" Value="&quot;[NETFRAMEWORK40FULLINSTALLROOTDIR]aspnet_regiis.exe&quot; -norestart -s &quot;W3SVC/[IISWEBSITEID]/ROOT/[IISVDIRNAME]&quot;" /> <CustomAction Id="ConfigureIis60AspNet" BinaryKey="WixCA" DllEntry="CAQuietExec" Execute="deferred" Return="check" Impersonate="no"/> <InstallExecuteSequence> <Custom Action="ConfigureIis60AspNetCommand" After="CostFinalize"/> <!-- Runs the aspnet_regiis command immediately after Wix configures IIS. The condition shown here assumes you have a selectable feature in your installer with the ID "WebAppFeature" that contains your web components. The command will not be run if that feature is not being installed, or if IIS is not version 6. It *will* run if the application is being repaired. SKIPCONFIGUREIIS is a property defined by Wix that causes it to skip the IIS configuration. --> <Custom Action="ConfigureIis60AspNet" After="ConfigureIIs" Overridable="yes"> <![CDATA[((&WebAppFeature = 3) OR (REINSTALL AND (!WebAppFeature = 3))) AND (NOT SKIPCONFIGUREIIS) AND (IISMAJORVERSION = "#6")]]> </Custom> </InstallExecuteSequence> <UI> <ProgressText Action="ConfigureIis60AspNetCommand" >Configuring ASP.NET</ProgressText> <ProgressText Action="ConfigureIis60AspNet" >Configuring ASP.NET</ProgressText> </UI> </Fragment> A: We use this: First determine the .Net framework root directory from the registry: <Property Id="FRAMEWORKROOT"> <RegistrySearch Id="FrameworkRootDir" Root="HKLM" Key="SOFTWARE\Microsoft\.NETFramework" Type="directory" Name="InstallRoot" /> </Property> Then, inside the component that installs your website in IIS: <!-- Create and configure the virtual directory and application. 
--> <Component Id='WebVirtualDirComponent' Guid='{GUID}' Permanent='no'> <iis:WebVirtualDir Id='WebVirtualDir' Alias='YourAlias' Directory='InstallDir' WebSite='DefaultWebSite' DirProperties='DirProperties'> <iis:WebApplication Id='WebApplication' Name='YourAppName' WebAppPool='AppPool'> <!-- Required to run the application under the .net 2.0 framework --> <iis:WebApplicationExtension Extension="config" CheckPath="yes" Script="yes" Executable="[FRAMEWORKROOT]v2.0.50727\aspnet_isapi.dll" Verbs="GET,HEAD,POST" /> <iis:WebApplicationExtension Extension="resx" CheckPath="yes" Script="yes" Executable="[FRAMEWORKROOT]v2.0.50727\aspnet_isapi.dll" Verbs="GET,HEAD,POST" /> <iis:WebApplicationExtension Extension="svc" CheckPath="no" Script="yes" Executable="[FRAMEWORKROOT]v2.0.50727\aspnet_isapi.dll" Verbs="GET,HEAD,POST" /> </iis:WebApplication> </iis:WebVirtualDir> </Component> For an x64 installer (THIS IS IMPORTANT) Add Win64='yes' to the registry search, because the 32 bits environment on a 64 bits machine has a different registry hive (and a different frameworkroot) <RegistrySearch Id="FrameworkRootDir" Root="HKLM" Key="SOFTWARE\Microsoft\.NETFramework" Type="directory" Name="InstallRoot" Win64='yes' /> A: Here is what worked for me after wrestling with it: <Property Id="FRAMEWORKBASEPATH"> <RegistrySearch Id="FindFrameworkDir" Root="HKLM" Key="SOFTWARE\Microsoft\.NETFramework" Name="InstallRoot" Type="raw"/> </Property> <Property Id="ASPNETREGIIS" > <DirectorySearch Path="[FRAMEWORKBASEPATH]" Depth="4" Id="FindAspNetRegIis"> <FileSearch Name="aspnet_regiis.exe" MinVersion="2.0.5"/> </DirectorySearch> </Property> <CustomAction Id="MakeWepApp20" Directory="TARGETDIR" ExeCommand="[ASPNETREGIIS] -norestart -s W3SVC/[WEBSITEID]/ROOT/[VIRTUALDIR]" Return="check"/> <InstallExecuteSequence> <Custom Action="MakeWepApp20" After="InstallFinalize">ASPNETREGIIS AND NOT Installed</Custom> </InstallExecuteSequence> [WEBSITEID] and [VIRTUALDIR] are properties you have to define 
yourself. [VIRTUALDIR] is only necessary if you are setting the ASP.NET version for an application rather than an entire website. The sequencing of the custom action is critical. Executing it before InstallFinalize will cause it to fail because the web application isn't available until after that. Thanks to Chris Burrows for a proper example of finding the aspnet_regiis executable (Google "Using WIX to Secure a Connection String"). jb A: I found a different way by using the WiX WebApplicationExtension. You can check out the full solution here and here. I like Wix so far, but man does it takes a lot of digging to find what you are looking for. A: This is a bit simpler. I don’t know if this works on updating an existing AppPool, but works for creating an APP Pool and setting the .NET version. <iis:WebServiceExtension Id="AMS_AppPool" Name="AccountManagementSVC1" Identity="other" ManagedPipelineMode="integrated" ManagedRuntimeVersion="v4.0" User="AMS_AppPoolUser" RecycleMinutes="120" /> A: * *First find the correct .NET version folder. Use DirectorySearch/FileSearch to perform search. *Use the above path to call aspnet_regiis.exe and set the version for the webapp from a custom action. aspnet_regiis.exe -s W3SVC/1/ROOT/SampleApp1
{ "language": "en", "url": "https://stackoverflow.com/questions/163531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Restoring a SQL Server Express Edition Database Is there a way to take a backup file from SQL Server Express Edition and restore it into a standard SQL Server database? I tried to do it from Management Studio but it didn't recognize the file format. A: If you have a database .bak file you can move it from SQL Server 2005 Express to SQL Server. Just make sure that you check the option to overwrite the existing DB. A: Yes. It should work using the standard tools. What versions of the databases are you working with?
{ "language": "en", "url": "https://stackoverflow.com/questions/163534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is best method to find a ASP.Net control using jQuery? In implementing my first significant script using jquery I needed to find a specific web-control on the page. Since I work with DotNetNuke, there is no guaranteeing the controls ClientID since the container control may change from site to site. I ended up using an attribute selector that looks for an ID that ends with the control's server ID. $("select[id$='cboPanes']") This seems like it might not be the best method. Is there another way to do this? @Roosteronacid - While I am getting the controls I want, I try to follow the idioms for a given technology/language. When I program in C#, I try to do it in the way that best takes advantage of C# features. As this is my first effort at really using jQuery, and since this will be used by 10's of thousands of users, I want to make sure I am creating code that is also a good example for others. @toohool - that would definitely work, but unfortunately I need to keep the javascript in separate files for performance reasons. You can't really take advantage of caching very well if you inline the javascript since each "page" is dynamically generated. I would end up sending the same javascript to the client over and over again just because other content on the page changed.
A: $("#<%= cboPanes.ClientID %>") This will dynamically inject the DOM ID of the control. Of course, this means your JS has to be in an ASPX file, not in an external JS file. A: One thing that I have done in the past (in JavaScript not jQuery), above my JavaScript imports, is output the dynamic controls' IDs similar to what toohool recommends and assign them to variables that I reference in my script imports. Something like this should allow you to take advantage of caching and still enable you to have the exact client IDs: <head> <script type="text/javascript"> var cboPanesID = '<%= cboPanes.ClientID %>'; </script> <!-- this JS import references the cboPanesID variable declared above --> <script src="jquery.plugin.js"></script> </head> A: Use a marker class on the control, and select that via jQuery. A: Other than being a bit more expensive, performance-wise, I can't see anything wrong with using that selector. After all, you are getting the controls you want to access.
{ "language": "en", "url": "https://stackoverflow.com/questions/163535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I write output to the console from a custom MSBuild task? I'm trying to debug an MSBuild task, and I know there is some way to write to the MSBuild log from within a custom task but I forget how. A: The base Task class has a Log property you can use: Log.LogMessage("My message"); A: For unit testing purposes, I wrap the logger around a helper class public static void Log(ITask task, string message, MessageImportance importance) { try { BuildMessageEventArgs args = new BuildMessageEventArgs(message, string.Empty, task.ToString(), importance); task.BuildEngine.LogMessageEvent(args); } catch (NullReferenceException) { // Don't throw as task and BuildEngine will be null in unit test. } } Nowadays I'd probably convert that into an extension method for convenience.
{ "language": "en", "url": "https://stackoverflow.com/questions/163537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: C# - What does the Assert() method do? Is it still useful? I am debugging with breakpoints and I came across the Assert call. I thought it was only for unit tests. What does it do beyond a breakpoint? Since I can set a breakpoint, why should I use Assert? A: Assertions feature heavily in Design by Contract (DbC), which as I understand was introduced/endorsed by Bertrand Meyer in Object-Oriented Software Construction (1997). An important feature is that they mustn't produce side-effects; for example, you can handle an exception or take a different course of action with an if statement (defensive programming). Assertions are used to check the pre/post conditions of the contract, the client/supplier relationship - the client must ensure that the pre-conditions of the supplier are met, e.g. sends £5, and the supplier must ensure the post-conditions are met, e.g. delivers 12 roses. (Just a simple explanation of client/supplier - they can accept less and deliver more, but that's beyond Assertions). C# also introduces Trace.Assert(), which can be used for release code. To answer the question: yes, they are still useful, but they can add complexity to code, hurt readability, and add time and difficulty to maintenance. Should we still use them? Yes. Will we all use them? Probably not, or not to the extent Meyer describes. (Even the OU Java course that I learnt this technique on only showed simple examples, and the rest of their code didn't enforce the DbC assertion rules on most of the code, but was assumed to assure program correctness!) A: You should use it for times when you don't want to have to breakpoint every little line of code to check variables, but you do want to get some sort of feedback if certain situations are present, for example: Debug.Assert(someObject != null, "someObject is null!
this could totally be a bug!"); A: The way I think of it is Debug.Assert is a way to establish a contract about how a method is supposed to be called, focusing on specifics about the values of a parameter (instead of just the type). For example, if you are not supposed to send a null in the second parameter, you add the Assert around that parameter to tell the consumer not to do that. It prevents someone from using your code in a boneheaded way. But it also allows that boneheaded way to go through to production and not give the nasty message to a customer (assuming you build a Release build). A: In a debug compilation, Assert takes in a Boolean condition as a parameter, and shows the error dialog if the condition is false. The program proceeds without any interruption if the condition is true. If you compile in Release, all Debug.Assert's are automatically left out. A: Assert also gives you another opportunity to chuckle at Microsoft's UI design skills. I mean: a dialog with three buttons Abort, Retry, Ignore, and an explanation of how to interpret them in the title bar! A: First of all, the Assert() method is available for the Trace and Debug classes. Debug.Assert() executes only in Debug mode. Trace.Assert() executes in both Debug and Release mode. Here is an example: int i = 1 + 3; // Debug.Assert method in Debug mode fails, since i == 4 Debug.Assert(i == 3); Debug.WriteLine(i == 3, "i is equal to 3"); // Trace.Assert method in Release mode is not failing. Trace.Assert(i == 4); Trace.WriteLine(i == 4, "i is equal to 4"); Console.WriteLine("Press a key to continue..."); Console.ReadLine(); Run this code in Debug mode and then in Release mode. You will notice that in Debug mode the Debug.Assert statement fails and you get a message box showing the current stack trace of the application. This is not happening in Release mode since the Trace.Assert() condition is true (i == 4).
The WriteLine() method simply gives you an option of logging the information to the Visual Studio output window. A: From Code Complete, 8 Defensive Programming, 8.2 Assertions: An assertion is code that’s used during development—usually a routine or macro—that allows a program to check itself as it runs. When an assertion is true, that means everything is operating as expected. When it’s false, that means it has detected an unexpected error in the code. For example, if the system assumes that a customer-information file will never have more than 50,000 records, the program might contain an assertion that the number of records is less than or equal to 50,000. As long as the number of records is less than or equal to 50,000, the assertion will be silent. If it encounters more than 50,000 records, however, it will loudly “assert” that there is an error in the program. Assertions are especially useful in large, complicated programs and in high-reliability programs. They enable programmers to more quickly flush out mismatched interface assumptions, errors that creep in when the code is modified, and so on. An assertion usually takes two arguments: a boolean expression that describes the assumption that’s supposed to be true and a message to display if it isn’t. (…) Normally, you don’t want users to see assertion messages in production code; assertions are primarily for use during development and maintenance. Assertions are normally compiled into the code at development time and compiled out of the code for production. During development, assertions flush out contradictory assumptions, unexpected conditions, bad values passed to routines, and so on. During production, they are compiled out of the code so that the assertions don’t degrade system performance. A: Assert allows you to assert that a condition (post or pre) applies in your code. It's a way of documenting your intentions and having the debugger inform you with a dialog if your intention is not met.
Unlike a breakpoint, the Assert goes with your code and can be used to add additional detail about your intention. A: Assert can help you give separate messaging behavior between testing and release. For example, Debug.Assert(x > 2) will only trigger a break if you are running a "debug" build, not a release build. There's a full example of this behavior here
{ "language": "en", "url": "https://stackoverflow.com/questions/163538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "184" }
Q: How do I pass a string into subprocess.Popen (using the stdin argument)? If I do the following: import subprocess from cStringIO import StringIO subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0] I get: Traceback (most recent call last): File "<stdin>", line 1, in ? File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__ (p2cread, p2cwrite, File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles p2cread = stdin.fileno() AttributeError: 'cStringIO.StringI' object has no attribute 'fileno' Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this? A: """ Ex: Dialog (2-way) with a Popen() """ p = subprocess.Popen('Your Command Here', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=PIPE, shell=True, bufsize=0) p.stdin.write('START\n') out = p.stdout.readline() while out: line = out line = line.rstrip("\n") if "WHATEVER1" in line: pr = 1 p.stdin.write('DO 1\n') out = p.stdout.readline() continue if "WHATEVER2" in line: pr = 2 p.stdin.write('DO 2\n') out = p.stdout.readline() continue """ .......... """ out = p.stdout.readline() p.wait() A: On Python 3.7+ do this: my_data = "whatever you want\nshould match this f" subprocess.run(["grep", "f"], text=True, input=my_data) and you'll probably want to add capture_output=True to get the output of running the command as a string. On older versions of Python, replace text=True with universal_newlines=True: subprocess.run(["grep", "f"], universal_newlines=True, input=my_data) A: Beware that Popen.communicate(input=s) may give you trouble if s is too big, because apparently the parent process will buffer it before forking the child subprocess, meaning it needs "twice as much" used memory at that point (at least according to the "under the hood" explanation and linked documentation found here).
In my particular case, s was a generator that was first fully expanded and only then written to stdin, so the parent process was huge right before the child was spawned, and no memory was left to fork it: File "/opt/local/stow/python-2.7.2/lib/python2.7/subprocess.py", line 1130, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory A: I figured out this workaround: >>> p = subprocess.Popen(['grep','f'],stdout=subprocess.PIPE,stdin=subprocess.PIPE) >>> p.stdin.write(b'one\ntwo\nthree\nfour\nfive\nsix\n') #expects a bytes type object >>> p.communicate()[0] 'four\nfive\n' >>> p.stdin.close() Is there a better one? A: Popen.communicate() documentation: Note that if you want to send data to the process’s stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too. Replacing os.popen* pipe = os.popen(cmd, 'w', bufsize) # ==> pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin Warning Use communicate() rather than stdin.write(), stdout.read() or stderr.read() to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process. So your example could be written as follows: from subprocess import Popen, PIPE, STDOUT p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT) grep_stdout = p.communicate(input=b'one\ntwo\nthree\nfour\nfive\nsix\n')[0] print(grep_stdout.decode()) # -> four # -> five # -> On Python 3.5+ (3.6+ for encoding), you could use subprocess.run, to pass input as a string to an external command and get its exit status, and its output as a string back in one call: #!/usr/bin/env python3 from subprocess import run, PIPE p = run(['grep', 'f'], stdout=PIPE, input='one\ntwo\nthree\nfour\nfive\nsix\n', encoding='ascii') print(p.returncode) # -> 0 print(p.stdout) # -> four # -> five # -> A: There's a beautiful solution if you're using Python 3.4 or better.
Use the input argument instead of the stdin argument, which accepts a bytes argument: output_bytes = subprocess.check_output( ["sed", "s/foo/bar/"], input=b"foo", ) This works for check_output and run, but not call or check_call for some reason. In Python 3.7+, you can also add text=True to make check_output take a string as input and return a string (instead of bytes): output_string = subprocess.check_output( ["sed", "s/foo/bar/"], input="foo", text=True, ) A: This is overkill for grep, but through my journeys I've learned about the Linux command expect, and the python library pexpect * *expect: dialogue with interactive programs *pexpect: Python module for spawning child applications; controlling them; and responding to expected patterns in their output. import pexpect child = pexpect.spawn('grep f', timeout=10) child.sendline('text to match') print(child.before) Working with interactive shell applications like ftp is trivial with pexpect import pexpect child = pexpect.spawn ('ftp ftp.openbsd.org') child.expect ('Name .*: ') child.sendline ('anonymous') child.expect ('Password:') child.sendline ('noah@example.com') child.expect ('ftp> ') child.sendline ('ls /pub/OpenBSD/') child.expect ('ftp> ') print child.before # Print the result of the ls command. child.interact() # Give control of the child to the user. 
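For a strictly request/reply dialog like the one in the first answer, the standard library alone is enough; here is a minimal sketch, with the caveat that the line-based child process is a hypothetical stand-in built on the current interpreter so the example is self-contained:

```python
import subprocess
import sys

# Hypothetical line-oriented child: replies "ACK <line>" to each line it reads.
child_src = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    sys.stdout.write('ACK ' + line)\n"
    "    sys.stdout.flush()\n"
)

p = subprocess.Popen(
    [sys.executable, "-c", child_src],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    text=True, bufsize=1,  # line-buffered text-mode pipes (Python 3.7+)
)

p.stdin.write("START\n")     # send one request...
p.stdin.flush()
reply = p.stdout.readline()  # ...and read its reply before sending the next
p.stdin.close()              # EOF tells the child to exit
p.wait()
print(reply)  # ACK START
```

Reading each reply before sending the next request is what keeps this pattern clear of the pipe-buffer deadlock that the Popen documentation warns about; for anything more free-form (prompts, timeouts, pattern matching), pexpect remains the better tool.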
A: I'm a bit surprised nobody suggested creating a pipe, which is in my opinion the far simplest way to pass a string to stdin of a subprocess: read, write = os.pipe() os.write(write, "stdin input here") os.close(write) subprocess.check_call(['your-command'], stdin=read) A: p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT) p.stdin.write('one\n') time.sleep(0.5) p.stdin.write('two\n') time.sleep(0.5) p.stdin.write('three\n') time.sleep(0.5) testresult = p.communicate()[0] time.sleep(0.5) print(testresult) A: I am using python3 and found out that you need to encode your string before you can pass it into stdin: p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=PIPE) out, err = p.communicate(input='one\ntwo\nthree\nfour\nfive\nsix\n'.encode()) print(out) A: Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen I'm afraid not. The pipe is a low-level OS concept, so it absolutely requires a file object that is represented by an OS-level file descriptor. Your workaround is the right one. A: from subprocess import Popen, PIPE from tempfile import SpooledTemporaryFile as tempfile f = tempfile() f.write('one\ntwo\nthree\nfour\nfive\nsix\n') f.seek(0) print Popen(['/bin/grep','f'],stdout=PIPE,stdin=f).stdout.read() f.close()
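On Python 3, the os.pipe() approach above needs two adjustments: os.write() takes bytes, and the write end must be closed so the child sees EOF. A sketch under those assumptions (the child command here is a stand-in built on the current interpreter rather than grep, and pre-writing the data only works while it fits in the OS pipe buffer, typically around 64 KiB):

```python
import os
import subprocess
import sys

# Stand-in child that uppercases its stdin -- any stdin-reading command
# (grep, sed, ...) could take its place; this keeps the sketch portable.
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"]

read_fd, write_fd = os.pipe()
os.write(write_fd, b"stdin input here\n")  # os.write() takes bytes on Python 3
os.close(write_fd)                         # close so the child sees EOF

result = subprocess.run(child, stdin=read_fd, capture_output=True, text=True)
os.close(read_fd)
print(result.stdout)  # STDIN INPUT HERE
```

subprocess accepts a raw file descriptor for stdin, so no file-like wrapper is needed; for data larger than the pipe buffer you would have to write from a separate thread, at which point communicate() or run(input=...) is the simpler choice.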
{ "language": "en", "url": "https://stackoverflow.com/questions/163542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "337" }
Q: Is there a maximum number of characters that can be written using a StreamWriter? Is there a maximum number of characters that can be written to a file using a StreamWriter? Or is there a maximum number of characters that WriteLine() can output? I am trying to write some data to a file but all of the data does not seem to make it. This is the current state of my code: StreamWriter sw = new StreamWriter(pathToFile); foreach (GridViewRow record in gv_Records.Rows) { string recordInfo = "recordInformation"; sw.WriteLine(recordInfo); } A: Be sure you wrap your StreamWriter in a using-block, or are careful about your explicit management of the resource's lifetime. using (StreamWriter writer = new StreamWriter(@"somefile.txt")) { // ... writer.WriteLine(largeAmountsOfData); // ... } A: Are you calling StreamWriter.Close() or Flush()? A: Make sure that you are calling .Flush()
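The answers are pointing at buffered I/O rather than any length limit, and the effect is easy to reproduce outside .NET. A small Python sketch (the close call plays the role of C#'s using/Dispose; the exact on-disk size before closing depends on buffer sizes, so treat it as illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "records.txt")

f = open(path, "w")
f.write("x" * 100000)
size_before_close = os.path.getsize(path)  # typically short: the tail is still buffered
f.close()                                  # flush + close, like Dispose() in C#
size_after_close = os.path.getsize(path)   # now the full 100000 characters

print(size_before_close, size_after_close)
```

The using block in C# gives the same guarantee: the writer is flushed and closed even if an exception is thrown mid-loop.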
{ "language": "en", "url": "https://stackoverflow.com/questions/163550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: JDK/JRE source code with matching JSSE (SSL) source code and matching runnable JDK / JRE? I have seen Where to find Java 6 JSSE/JCE Source Code? and asked the question myself How to get JRE/JDK with matching source? but I don't think either of these was specific enough to get the answer I was really after, so I'm going to try a way more specific version of the question. Basically the problem that I am trying to solve is that I would like to be able to use my Eclipse debugger on Windows and step into the Java SSL classes (JSSE) to help me debug SSL issues as well as to just understand the SSL process better. BTW I am familiar with (and use) the javax.net.debug=ssl|all system property to get SSL tracing and, while this is very helpful, I'd still like to be able to step through that pesky code. So what I think I specifically need is: * *An executable JRE / JDK implementation (not wanting to build one)... *That runs on my Windows platform (XP)... *That includes source... *And that source includes the SSL "bits" (JSSE, etc.)... *And ideally the SSL implementation is Sun's or the OpenJDK version. I think the closest thing (as noted in PW's answer StackOverflow: 87106) is the OpenJDK source openjdk-6-src-b12-28_aug_2008.tar.gz found at OpenJDK 6 Source Release, but I'm not sure there's a matching executable JDK / JRE for that that would run on Windows. A: You can get the source code of JSSE lib (Open JDK implementation) here - http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/4d6c03fb1039/src/share/classes/sun/security/ssl Steps to create a source jar file for attaching to an IDE for debugging. * *Go a little above in the directory structure i.e. to http://hg.openjdk.java.net/jdk8u/jdk8u-dev/jdk/file/4d6c03fb1039/src/share/classes/ repo. *Download the source package by clicking on the "zip" or "gz" links that you see at the left pane. *But this package is huge and contains thousands of *.java files. You do not normally want all of these to just debug jsse.jar code. 
*So better copy only the sun.security.rsa , sun.security.ssl , sun.security.provider & com.sun.net.ssl packages to a new folder (lets say jsse-code) on your computer. *Go to that folder from command line & create the source jar on your own. e.g. jar -cvf jsse-src.jar * *You are done. You now have your jsse source lib that you can attach to your preferred IDE (e.g. - Eclipse) to debug the JSSE code. Thanks Ayas A: I used the OpenJDK download for Java 6: http://download.java.net/openjdk/jdk6/ To debug the JSSE/SSL code, I used the classes found in the sun.security.ssl and sun.security.ec packages and created a new library. Unfortunately, just having a library with all the source wasn't enough for me. I couldn't figure out how to get my IDE (Netbeans) to step into the JSSE code. Instead, it was calling the JSSE bundled with my JDK. As a workaround, I ended up refactoring the ssl and ec packages into a new "Provider". Here's what I did: * *Renamed the SunJSSE class to SSLProvider and replaced all references to "SunJSSE" in the code. *Refactored sun.security.ssl and sun.security.ec into 2 new packages: javaxt.ssl and javaxt.ec *Find/Replace all references to the original package names in the code. For example, in the SSLProvider.java class, replace "sun.security.ssl.SSLContextImpl" with "javaxt.ssl.SSLContextImpl". Once I had a new security provider, I could reference it explicitly in my code. 
Example: java.security.Provider provider = new javaxt.ssl.SSLProvider(); java.security.Security.addProvider(provider); SSLContext sslc = SSLContext.getInstance("TLS", "SSLProvider"); By explicitly setting the security provider, I can now drop breakpoints and throw out print statements to my heart's content :-) If anyone is interested, I have posted a zip archive of the "SSLProvider" source here: http://www.javaxt.com/download/?/jsse/SSLProvider.zip A: You can get the source code of JSSE lib (Open JDK implementation) from its mercurial repository following these steps to create a source zip file for attaching to an IDE for debugging. * *Get your java build version (in my case the build is 1.8.0_181-b13) java -version Probably you will get a result like this: java version "1.8.0_181" Java(TM) SE Runtime Environment (build 1.8.0_181-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode) *Now we can find the node for our version in this the repository. In my case my tag will be jdk8u181-b13 because my build is 1.8.0_181-b13 and its node will be 0cb452d66676 remember that the java version is jdk8u. We can download the source package by clicking on the "zip" or "gz" links that you see at the left pane and manually repack it as a zip. Or select only the packages you need. *In this example I will download all the packages under the directory classes. To this end, replace the version jdk8u and node 0cb452d66676 in this script to download the source code, and repack it as a src zip file. version=jdk8u node=0cb452d66676 mkdir ~/temp cd ~/temp wget http://hg.openjdk.java.net/$version/$version/jdk/archive/$node.zip/src/share/classes/ unzip $node.zip -d $version-$node cd jdk-$node/src/share/classes/ zip -r $version-$node-src.zip . *Add the source to your IDE and happy coding. Notice: In this repository, the available versions are: * *jdk6 *jdk7 *jdk7u *jdk8 *jdk8u *jdk9 *jdk10 A: The Sun implementation is not open source as far as I know. 
You can download an open source JCE here: http://www.bouncycastle.org/java.html A: As a matter of fact, the SSL implementation is included in the OpenJDK sources, but for some reason not in the standard source Zip file. I have no clue why. I don't know where one would normally fetch the OpenJDK sources; I got them on Debian via apt-get source openjdk-6. The SSL implementation sources are in jdk/src/share/classes/javax/net/ssl. A: The JSSE source code for the Sun Java releases was formerly available via the Sun Community Source License Program. Since the Oracle takeover it appears to have disappeared, but I hope I'm wrong about that. A: I ended up doing the following on Mac OS X High Sierra 10.13.4 running eclipse luna and javac 1.8.0_171 On a ubuntu machine also running open jdk and javac 1.8.0_171 apt-get install openjdk-8-source cd tmp unzip /usr/lib/jvm/openjdk-8/src.zip "sun/security/*" zip -r jsse-src sun I didn't include the com/sun/net/ssl stuff but was good enough in my case. I then copied jsse-src.zip to the mac at /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/ and pointed eclipse to that. A: Following up on this, you can download OpenJDK: https://adoptopenjdk.net/ and match it up exactly to the source code from https://github.com/AdoptOpenJDK/openjdk-jdk8u Currently u172-b11 is the latest version, but they are in sync and will work on all platforms.
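The repack-only-the-packages-you-need step described in several answers above is easy to script. A sketch in Python (the package list and entry paths are illustrative, following the source layout the answers describe):

```python
import zipfile

# Packages worth keeping for JSSE debugging, per the answers above.
WANTED = ("sun/security/ssl", "sun/security/rsa",
          "sun/security/provider", "com/sun/net/ssl")

def repack_sources(full_src_zip, out_zip, wanted=WANTED):
    """Copy only the SSL-related .java entries from a full OpenJDK source
    zip into a small zip suitable for attaching to an IDE."""
    kept = 0
    with zipfile.ZipFile(full_src_zip) as src, \
         zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as dst:
        for name in src.namelist():
            if name.endswith(".java") and any(p in name for p in wanted):
                dst.writestr(name, src.read(name))
                kept += 1
    return kept
```

Point your IDE's "attach source" dialog at the resulting zip, just as with the hand-built jsse-src.jar above.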
{ "language": "en", "url": "https://stackoverflow.com/questions/163552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Wiki style text formatting I'm looking for some kind of text-parser for ASP.NET that can make HTML from some style of text that uses a special format. Like in wikis there is some special syntax for headings and such. I have tried to look on Google, but I did not find anything for .NET. Does anyone know about a library for .NET that can parse the text to HTML wiki-style? It doesn't have to be the same syntax as a wiki. If not, what would be the best way to design such a system yourself? Thanks in advance A: how about the Markdown that StackOverflow uses? http://daringfireball.net/projects/markdown/ from their home page: Thus, “Markdown” is two things: (1) a plain text formatting syntax; and (2) a software tool, written in Perl, that converts the plain text formatting to HTML. A: For the server side, you can use the Markdown.Net library from Milan Negovan : http://www.aspnetresources.com/blog/markdown_announced.aspx A: Markdown is great - very intuitive syntax, and you have WMD - this terrific editing tool that I'm typing into now. A: I would like to strongly recommend Textile over Markdown. Textile.NET should do what you want. Why? I like Textile's syntax better, and I think it's easier for users to learn and use. There's no single large reason - just a lot of small things. In Markdown you can do *italics* and **bold** easily, but the syntax seems arbitrary. Compare to the equivalent syntax in Textile for _italics_ and *bold*, which mirrors the conventional way to indicate those modifiers in plain text formats. Or for another example, in Textile you make an ordered list by prefixing each item with an '#'. In Markdown, you prefix it with "n.", where n is any integer. Markdown is trying to imitate the syntax people use in flat text files when writing lists (which is nice), but it means that this Markdown code: 3. Test1 2. Test2 1. Test3 Is rendered as this: * *Test1 *Test2 *Test3 Basically, Markdown asks you for a number, which it then ignores. 
That seems inelegant to me, although I couldn't explain why precisely. Textile also does tables (and wish a nicely compact syntax). Markdown doesn't. There's a few other minor points, but I think that covers most of it. :)
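On the roll-your-own part of the question: the core of such a system is usually a line-by-line pass of regular-expression substitutions. A toy sketch with Textile-ish syntax (illustrative only — a real converter also needs escaping and block-level state):

```python
import re

def wiki_to_html(text):
    """Toy wiki-to-HTML converter: *bold*, _italics_, and h1..h6 headings.
    Illustrative only -- a real one needs escaping and block-level parsing."""
    html_lines = []
    for line in text.splitlines():
        line = re.sub(r"\*(.+?)\*", r"<strong>\1</strong>", line)
        line = re.sub(r"_(.+?)_", r"<em>\1</em>", line)
        m = re.match(r"h([1-6])\.\s*(.*)", line)
        if m:
            line = "<h{0}>{1}</h{0}>".format(m.group(1), m.group(2))
        html_lines.append(line)
    return "\n".join(html_lines)

print(wiki_to_html("h1. Title\nThis is *bold* and _italic_."))
```

This is essentially how the early Markdown and Textile implementations worked: an ordered cascade of substitutions, which is why edge cases (nesting, escaping) are where the real effort goes.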
{ "language": "en", "url": "https://stackoverflow.com/questions/163562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Javascript Date() constructor doesn't work I have an issue - The javascript Date("mm-dd-yyyy") constructor doesn't work for FF. It works fine for IE. * *IE : new Date("04-02-2008") => "Wed Apr 2 00:00:00 EDT 2008" *FF2 : new Date("04-02-2008") => Invalid Date So let's try another constructor. Trying this constructor Date("yyyy", "mm", "dd") * *IE : new Date("2008", "04", "02"); => "Fri May 2 00:00:00 EDT 2008" *FF : new Date("2008", "04", "02"); => "Fri May 2 00:00:00 EDT 2008" *IE : new Date("2008", "03", "02"); => "Wed Apr 2 00:00:00 EDT 2008" *FF : new Date("2008", "03", "02"); => "Wed Apr 2 00:00:00 EDT 2008" So the Date("yyyy", "mm", "dd") constructor uses an index of 0 to represent January. Has anyone dealt with this? There must be a better way than subtracting 1 from the months. A: It is the definition of the Date object to use values 0-11 for the month field. I believe that the constructor using a String is system-dependent (not to mention locale/timezone dependent) so you are probably better off using the constructor where you specify year/month/day as separate parameters. BTW, in Firefox, new Date("04/02/2008"); works fine for me - it will interpret slashes, but not hyphens. I think this proves my point that using a String to construct a Date object is problematic. Use explicit values for month/day/year instead: new Date(2008, 3, 2); A: Using var theDate = new Date(myDate[0],myDate[1]-1,myDate[2]); Is fine, but it shows some strange behaviors when month and day values are erroneous. Try casting a date where both myDate[1]-1 and myDate[2] have values of 55. Javascript still returns a date, though the input is obviously not correct. I would have preferred javascript to return an error in such a case. A: @Frank: you are right. When you need to validate a date, var theDate = new Date(myDate[0],myDate[1]-1,myDate[2]); will not work. What happens is that it keeps on adding the extra parameter. 
For example: new Date("2012", "11", "57") // Date {Sat Jan 26 2013 00:00:00 GMT+0530 (IST)} Date object takes the extra days (57-31=26) and adds it to the date we created. Or if we try constructing a date object with: new Date("2012", "11", "57", "57") //Date {Mon Jan 28 2013 09:00:00 GMT+0530 (IST)} an extra 2 days and 9 hours (57=24+24+9) are added. A: Nice trick indeed, which I just found out the hard way (by thinking it through). But I used a more natural date string with hyphens :-) var myDateArray = "2008-03-02".split("-"); var theDate = new Date(myDateArray[0],myDateArray[1]-1,myDateArray[2]); alert(theDate); A: You're quite right, month is indicated as an index, so January is month number 0 and December is month number 11 ... -- and there is no work-around as it is stated clearly in the ECMA-script definition, though simple tricks commonly will work: var myDate = "2008,03,02".split(","); var theDate = new Date(myDate[0],myDate[1]-1,myDate[2]); alert(theDate); A: Bold statement. This might have your interest: JavaScript Pretty Date.
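For contrast with the rollover behavior discussed above: Python's datetime (where months are 1-based) rejects impossible dates outright, which is exactly the validating behavior @Frank was asking for. A small sketch:

```python
from datetime import datetime

def parse_strict(s):
    """Parse 'mm-dd-yyyy', raising ValueError on impossible dates
    instead of silently rolling them over like JavaScript's Date."""
    return datetime.strptime(s, "%m-%d-%Y").date()

print(parse_strict("04-02-2008"))   # 2008-04-02
try:
    parse_strict("11-57-2012")      # day 57 is rejected, not rolled into January
except ValueError:
    print("invalid date")
```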
{ "language": "en", "url": "https://stackoverflow.com/questions/163563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Is it possible to stop a ColdFusion Request? I have a Flex application that calls a function which searches a large document collection. Depending on the search term, the user may want to stop the request from flex. I'd like to not only stop the flex application from expecting the request, but also stop the CFC request. Is this possible? What's the best approach for doing this? A: I don't think there is a direct way to stop a page call externally. According to the docs, only the thread itself and its parent can abort a given thread. However, you could set a flag for a given thread in a shared scope. Let's say you call a method that starts some background processing. It generates a unique thread ID and returns it to the caller. The thread looks for a flag in (for example) the application scope that tells it to stop. It checks at each substep of the background process. It could abort at any point that flag is thrown. To throw the flag, add an abort method that takes the name of the thread that is to be aborted, along with sufficient security to make sure a 3rd party can't just start killing off threads. A: To add onto Ben Doom's answer, I'm including some example code of a way this can be accomplished. There are multiple ways of naming, organizing and calling the code below, but hopefully it is helpful. At some point during request start, store information about the process in shared scope and return an ID to the client. Here are example functions that could be used on page or remote requests. 
<cffunction name="createProcess" output="false"> <cfset var id = createUUID()> <cfset application.processInfo[id] = { progress = 0, kill = false }> <cfreturn id /> </cffunction> Client can then check progress by polling server, or submit request to kill process <cffunction name="getProcessProgress" output="false"> <cfargument name="processID" required="true"> <cfreturn application.processInfo[arguments.processID].progress /> </cffunction> <cffunction name="killProcess" output="false"> <cfargument name="processID" required="true"> <cfset application.processInfo[arguments.processID].kill = true /> </cffunction> The actual server-side process in question can then hit a function, for example during a loop, to check whether it should abort processing and cleanup any work as appropriate. <cffunction name="shouldKillProcess" output="false"> <cfargument name="processID" required="true"> <cfreturn application.processInfo[arguments.processID].kill /> </cffunction> A: If you are using ColdFusion 8 you can make use of the <cfthread> tag. You can spawn the search process off on its own thread and then use the remote call to terminate the search thread as needed. * *Livedoc page for cfthread *Using threads in ColdFusion A: You can programmatically end requests with either <cfabort/> or <cfsetting requesttimeout="0"/> - but that's on the CF server side of things, which I don't think is what you're asking? Ending it remotely... well, if you have FusionReactor it might be possible to contact that using Flex and have it interrupt the request for you. (You can certainly try to end requests within FusionReactor, but whether or not Flex can actually ask FR to stop it... you'd have to ask that on the FR mailing list if there's a way to do that.) Possibly an alternative solution is to try and architect the search so that it works over multiple requests, but how feasible that is will depend on exactly what you're searching.
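The application-scope kill flag above is an instance of the general cooperative-cancellation pattern. The same idea sketched in Python, with a threading.Event standing in for the shared-scope flag (names are illustrative):

```python
import threading

def background_search(stop_event, steps, results):
    """Stand-in for the long search: it checks the kill flag between
    substeps, just like the shouldKillProcess() check above."""
    for i in range(steps):
        if stop_event.is_set():
            results.append("aborted at step %d" % i)
            return
        results.append(i)  # one unit of search work

stop = threading.Event()
out = []
worker = threading.Thread(target=background_search, args=(stop, 5, out))
worker.start()
worker.join()
print(out)   # [0, 1, 2, 3, 4] -- ran to completion

stop.set()   # the "killProcess" call
out2 = []
worker2 = threading.Thread(target=background_search, args=(stop, 5, out2))
worker2.start()
worker2.join()
print(out2)  # ['aborted at step 0']
```

The worker is never forcibly terminated; it notices the flag at a safe checkpoint and cleans up itself, which is why the pattern is robust where hard kills are not.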
{ "language": "en", "url": "https://stackoverflow.com/questions/163569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Asp.net formatting lists of grouped data I have an ASP.NET page where I query a list of URLs and the groups in the URLs. In the code-behind I loop through each group and create a group header and then list all of the links. Something like this: Group 1 * Link 1 * Link 2 * Link 3 Group 2 * Link 1 * Link 2 * Link 3 Now that I have a lot of links, this creates one long list on the page and you have to scroll down. What are the best suggestions for formatting this data using multiple columns or other layout options so it looks better on a single page rather than one long list? Any example code would be great . . . A: For ASP.NET 3.5 you could use the ListView control. A nice tutorial for grouping can be found here. If you are using ASP.NET 1.x or 2.0 you can try the DataList control (check the RepeatColumns and RepeatDirection properties). The ListView is more powerful.
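Whichever control ends up rendering it, the header-plus-links shape is an ordinary group-by over (group, link) rows. For instance, in Python terms (illustrative, mirroring the Group 1/Group 2 sketch above):

```python
from itertools import groupby
from operator import itemgetter

links = [
    ("Group 1", "Link 1"), ("Group 1", "Link 2"),
    ("Group 2", "Link 1"), ("Group 2", "Link 3"),
]

def group_links(rows):
    """Fold (group, link) rows into {group: [links]}.  Rows are sorted
    first because groupby only merges adjacent keys."""
    rows = sorted(rows, key=itemgetter(0))
    return {k: [link for _, link in g]
            for k, g in groupby(rows, key=itemgetter(0))}

print(group_links(links))
# {'Group 1': ['Link 1', 'Link 2'], 'Group 2': ['Link 1', 'Link 3']}
```

Once the data is in that shape, the multi-column layout is purely a presentation concern (RepeatColumns/RepeatDirection in the DataList, or a grouped ListView template).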
{ "language": "en", "url": "https://stackoverflow.com/questions/163581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to deploy minified Javascript files to a web server without changing the file's Timestamp? We have several hundred javascript files in our app that are currently being served uncompressed. One piece of our solution to gain a little more client performance is to minify our javascript files. I've created an automated solution to do this on the build; however, when these new files are deployed, the files' timestamp that determines if they will be resent to the client will be changed. This means that on every future release, all the javascript files will have a new timestamp, and our clients will redownload ALL the minified javascript files again, thus defeating the performance aspect of minification. Is this an issue anyone else has encountered? What was your solution? Do you have separate non-minified and minified javascript files used in your projects, and don't perform the minification on the build? We have other solutions in mind (like only looking for the actual changed files in the source control repository), but this is one question for which I wanted to find out what others are doing. A: You are going to have to determine which files have actually changed. Or just don't worry about it and enjoy the improvement you gain with the minified files. Clients may not hold the files in cache for very long anyway, so unless you're updating the files very, very frequently, there is likely to be little gain with trying to manage the caching behavior. A: You could write a script that checks the CRC or MD5 hash of each file from the source and target folders, and only perform the overwrite if the file has changed. This would preserve the timestamps for files which have not changed, giving you the desired caching behaviour. Similarly, you could record the previous timestamp, do the overwrite, then use the touch command (assuming these files are on a unix system) to set the timestamp back to its original value.
The first option is probably better than just blindly setting the timestamp to the same value all the time, because that might mean that some clients don't pick up a modified JS file for a while because the server claims it hasn't changed. A: You could roll these script files out over a period of time so that no single user request takes an inordinately long time. But seriously, how much javascript are we talking about? Is a one-time re-download of the script associated with your page really that big a deal? Think of how much effort you're going to have to go through to get this done and weigh that against the benefit. A: Have a read of http://www.thinkvitamin.com/features/webapps/serving-javascript-fast by Cal Henderson of Flickr fame. Hopefully you will find it useful.
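A sketch of that hash-comparison deployment in Python (assumes a flat directory of built files; names and layout are illustrative):

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path):
    """Hash a file in chunks so large bundles aren't read into memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def deploy(src_dir, dst_dir):
    """Copy only files whose content changed; untouched files keep their
    old timestamp, so client caches stay warm. Returns the copied names."""
    copied = []
    for src in sorted(Path(src_dir).iterdir()):
        dst = Path(dst_dir) / src.name
        if dst.exists() and file_digest(src) == file_digest(dst):
            continue            # identical content: leave the timestamp alone
        shutil.copy2(src, dst)  # copy2 carries the build's mtime across
        copied.append(src.name)
    return copied
```

Because unchanged files are never rewritten, their Last-Modified headers stay stable and conditional GETs keep returning 304s.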
{ "language": "en", "url": "https://stackoverflow.com/questions/163583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Bash autocompletion in Emacs shell-mode In the GNOME Terminal, Bash does smart auto-completion. For example apt-get in<TAB> becomes apt-get install In Emacs shell-mode, this auto-completion doesn't work, even after I explicitly source /etc/bash_completion. The above example sticks as in or auto-completes with a filename in the current directory rather than a valid apt-get command option. Presumably, this is because Emacs is intercepting the Tab key-press. How do I enable smart auto-completion in shell-mode? A: Please consider another mode, M-x term, like I did when I hit this problem in 2011. At the time I tried to gather all the efforts around the Internet to make shell work with Bash completion, including this question. But since discovering the term-mode alternative I haven't even wanted to try eshell. It is a full terminal emulator, so you can run interactive programs inside it, like Midnight Commander. Or switch to zsh completion so you won't lose time on Emacs configuration. You get TAB completion in bash for free. But more importantly, you get full Readline power, like incremental or prefixed command search. To make this setup more convenient, check my .inputrc, .bashrc, .emacs. Essential part of .inputrc: # I like this! set editing-mode emacs # Don't strip characters to 7 bits when reading. set input-meta on # Allow iso-latin1 characters to be inserted rather than converted to # prefix-meta sequences. set convert-meta off # Display characters with the eighth bit set directly rather than as # meta-prefixed characters. set output-meta on # Ignore hidden files. set match-hidden-files off # Ignore case (on/off). set completion-ignore-case on set completion-query-items 100 # First tab suggests ambiguous variants. set show-all-if-ambiguous on # Replace common prefix with ... set completion-prefix-display-length 1 set skip-completed-text off # If set to 'on', completed directory names have a slash appended. The default is 'on'. 
set mark-directories on set mark-symlinked-directories on # If set to 'on', a character denoting a file's type is appended to the # filename when listing possible completions. The default is 'off'. set visible-stats on set horizontal-scroll-mode off $if Bash "\C-x\C-e": edit-and-execute-command $endif # Define my favorite Emacs key bindings. "\C-@": set-mark "\C-w": kill-region "\M-w": copy-region-as-kill # Ctrl+Left/Right to move by whole words. "\e[1;5C": forward-word "\e[1;5D": backward-word # Same with Shift pressed. "\e[1;6C": forward-word "\e[1;6D": backward-word # Ctrl+Backspace/Delete to delete whole words. "\e[3;5~": kill-word "\C-_": backward-kill-word # UP/DOWN filter history by typed string as prefix. "\e[A": history-search-backward "\C-p": history-search-backward "\eOA": history-search-backward "\e[B": history-search-forward "\C-n": history-search-forward "\eOB": history-search-forward # Bind 'Shift+TAB' to complete as in Python TAB was need for another purpose. "\e[Z": complete # Cycling possible completion forward and backward in place. "\e[1;3C": menu-complete # M-Right "\e[1;3D": menu-complete-backward # M-Left "\e[1;5I": menu-complete # C-TAB .bashrc (YEA! 
There is dabbrev in Bash from any word in ~/.bash_history): set -o emacs if [[ $- == *i* ]]; then bind '"\e/": dabbrev-expand' bind '"\ee": edit-and-execute-command' fi .emacs to make navigation comfortable in term buffer: (setq term-buffer-maximum-size (lsh 1 14)) (eval-after-load 'term '(progn (defun my-term-send-delete-word-forward () (interactive) (term-send-raw-string "\ed")) (defun my-term-send-delete-word-backward () (interactive) (term-send-raw-string "\e\C-h")) (define-key term-raw-map [C-delete] 'my-term-send-delete-word-forward) (define-key term-raw-map [C-backspace] 'my-term-send-delete-word-backward) (defun my-term-send-forward-word () (interactive) (term-send-raw-string "\ef")) (defun my-term-send-backward-word () (interactive) (term-send-raw-string "\eb")) (define-key term-raw-map [C-left] 'my-term-send-backward-word) (define-key term-raw-map [C-right] 'my-term-send-forward-word) (defun my-term-send-m-right () (interactive) (term-send-raw-string "\e[1;3C")) (defun my-term-send-m-left () (interactive) (term-send-raw-string "\e[1;3D")) (define-key term-raw-map [M-right] 'my-term-send-m-right) (define-key term-raw-map [M-left] 'my-term-send-m-left) )) (defun my-term-mode-hook () (goto-address-mode 1)) (add-hook 'term-mode-hook #'my-term-mode-hook) As any usual commands as C-x o aren't working in terminal emulation mode I extended keymap with: (unless (ignore-errors (require 'ido) (ido-mode 1) (global-set-key [?\s-d] #'ido-dired) (global-set-key [?\s-f] #'ido-find-file) t) (global-set-key [?\s-d] #'dired) (global-set-key [?\s-f] #'find-file)) (defun my--kill-this-buffer-maybe-switch-to-next () "Kill current buffer. Switch to next buffer if previous command was switching to next buffer or this command itself allowing sequential closing of uninteresting buffers." 
(interactive) (let ( (cmd last-command) ) (kill-buffer (current-buffer)) (when (memq cmd (list 'next-buffer this-command)) (next-buffer)))) (global-set-key [s-delete] 'my--kill-this-buffer-maybe-switch-to-next) (defun my--backward-other-window () (interactive) (other-window -1)) (global-set-key [s-up] #'my--backward-other-window) (global-set-key [s-down] #'other-window) (global-set-key [s-tab] 'other-window) Note that I use super key so term-raw-map and possibly any other keymap don't conflict with my key bindings. To make super key from left Win key I use .xmodmaprc: ! To load this config run: ! $ xmodmap .xmodmaprc ! Win key. clear mod3 clear mod4 keycode 133 = Super_L keycode 134 = Hyper_R add mod3 = Super_L add mod4 = Hyper_R You just should remember 2 commands: C-c C-j - to enter to normal Emacs editing mode (for copying or grepping in buffer text), C-c C-k - to return to terminal emulation mode. Mouse selection and Shift-Insert work as in xterm. A: Like Matli said, it is not an easy task, since bash is started with --noediting and TAB is bound to comint-dynamic-complete. One could possibly rebind TAB to self-insert-command in shell-comand-hook with local-set-key and make shell-mode not start with --noediting by M-x customize-variable RET explicit-bash-args, but I suspect that it will not sit well with all other editing. You might want to try term-mode, but it has another set of problems, because some of the other regular keybindings are overtaken by term-mode. EDIT: By other regular keybidings being overtaken by term-mode, I mean all but C-c which becomes the escape to be able to switch buffers. So instead of C-x k to kill the buffer you'd have to C-c C-x k. Or to switch to another buffer 'C-c C-x o' or 'C-c C-x 2' A: I know this post is over 11 years old now. But I have created a function to give native shell completion in Emacs. 
It just sends a tab key to the underlying process and intercepts the output, so it is the exact same as you would get in the shell itself. https://github.com/CeleritasCelery/emacs-native-shell-complete A: In the emacs shell, it's actually emacs doing the auto-completion, not bash. If the shell and emacs are out of sync (e.g. by using pushd, popd or some bash user function that changes the shell's current directory), then auto-completion stops working. To fix this, just type 'dirs' into the shell and things get back in sync. I also have the following in my .emacs: (global-set-key "\M-\r" 'shell-resync-dirs) Then just hitting Esc-return resyncs the auto-completion. A: I don't know the answer to this. But the reason that it doesn't work as you expect is probably because the completion in emacs shells is handled by emacs internally (by the comint-dynamic-complete function), and doesn't have those smart completion functions built-in. I'm afraid it is not an easy thing to fix. Edit: njsf's suggestion of using term-mode is probably as good as it gets. Start it with M-x term It is included in the standard emacs distribution (and in emacs21-common or emacs22-common on Ubuntu and Debian at least). A: I know this question is three years old, but it's something that I've also been interested in solving. A Web search directed me to a piece of elisp that makes Emacs use bash for completion in shell mode. It works for me, in any case. Check it out at https://github.com/szermatt/emacs-bash-completion . A: I use Prelude and when I hit Meta+Tab it completes for me. Also, Ctrl+i seems to do the same thing. A: I use helm mode. It has this functionality (after pressing "TAB"): A: I make no claims to being an emacs expert but this should solve your problem: Create: ~/.emacs Add to it: (require 'shell-command) (shell-command-completion-mode) Emacs takes over the shell so BASH settings don't carry through. This will set auto completion for EMACS itself.
{ "language": "en", "url": "https://stackoverflow.com/questions/163591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "99" }
Q: Apache sockets not closing? I have a web application written using CherryPy, which is run locally on 127.0.0.1:4321. We use mod-rewrite and mod-proxy to have Apache act as a reverse proxy; Apache also handles our SSL encryption and may eventually be used to transfer all of our static content. This all works just fine for small workloads. However, I recently used urllib2 to write a stress-testing script that would simulate a workload of 100 clients. After some time, each client gets a 503 error from Apache, indicating that Apache cannot connect to 127.0.0.1:4321. CherryPy is functioning properly, but my Apache error log reveals lines like the following: [Thu Oct 02 12:55:44 2008] [error] (OS 10048)Only one usage of each socket address (protocol/network address/port) is normally permitted. : proxy: HTTP: attempt to connect to 127.0.0.1:4321 (*) failed Googling for this error reveals that Apache has probably run out of socket file descriptors. Since I only have 100 clients running, this implies that the connections are not being closed, either between my urllib2 connection and Apache (I am definitely calling .close() on the return value of urlopen), or between Apache and CherryPy. I've confirmed that my urllib2 request is sending an HTTP Connection: close header, although Apache is configured with KeepAlive On if that matters. In case it matters, I'm using Python 2.5, Apache 2.2, CherryPy 3.0.3, and the server is running on Windows Server 2003. So what's my next step to stop this problem? A: SetEnv proxy-nokeepalive 1 would probably tell you right away if the problem is keepalive between Apache and CP. See the mod_proxy docs for more info. A: You might run the netstat command and see if you have a bunch of sockets in the TIME_WAIT state. Depending on your MaxUserPort setting you might be severely limited in the number of ports available to use. 
In addition the TcpTimedWaitDelay is usually set to 240 seconds so any sockets that are used cannot be reused for four minutes. There's more good information here --> http://smallvoid.com/article/winnt-tcpip-max-limit.html
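If keepalive between Apache and the backend turns out to be the culprit, the relevant directives would look something like the sketch below (not a drop-in config — where each directive belongs depends on your existing vhost layout, and `SetEnv proxy-nokeepalive 1` is the narrower fix since global `KeepAlive Off` also affects client connections):

```apache
# Disable keepalive only on proxied backend connections
SetEnv proxy-nokeepalive 1

# Or, more bluntly, disable keepalive everywhere
KeepAlive Off
```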
{ "language": "en", "url": "https://stackoverflow.com/questions/163603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What am I doing wrong when using RAND() in MS SQL Server 2005? I'm trying to select a random 10% sampling from a small table. I thought I'd just use the RAND() function and select those rows where the random number is less than 0.10: SELECT * FROM SomeTable WHERE SomeColumn='SomeCondition' AND RAND() < 0.10 But I soon discovered that RAND() always returns the same number! Reminds me of this xkcd cartoon. OK, no problem, the RAND function takes a seed value. I will be running this query periodically, and I want it to give different results if I run it on a different day, so I seed it with a combination of the date and a unique row ID: SELECT * FROM SomeTable WHERE SomeColumn='SomeCondition' AND RAND(CAST(GETDATE() AS INTEGER) + RowID) < 0.10 I still don't get any results! When I show the random numbers returned by RAND, I discover that they're all within a narrow range. It appears that getting a random number from RAND requires you to use a random seed. If I had a random seed in the first place, I wouldn't need a random number! I've seen the previous discussions related to this problem: SQL Server Random Sort How to request a random row in SQL? They don't help me. TABLESAMPLE works at the page level, which is great for a big table but not for a small one, and it looks like it applies prior to the WHERE clause. TOP with NEWID doesn't work because I don't know ahead of time how many rows I want. Anybody have a solution, or at least a hint? Edit: Thanks to AlexCuse for a solution which works for my particular case. Now to the larger question, how to make RAND behave?
A: This type of approach (shown by ΤΖΩΤΖΙΟΥ) will not guarantee a 10% sampling. It will only give you all rows where RAND() is evaluated to < .10, which will not be consistent. Something like select top 10 percent * from MyTable order by NEWID() will do the trick. edit: there is not really a good way to make RAND behave.
This is what I've used in the past (kludge alert - it kills you not being able to use Rand() in a UDF) CREATE VIEW RandView AS SELECT RAND() AS Val GO CREATE FUNCTION RandomFloat() RETURNS FLOAT AS BEGIN RETURN (SELECT Val FROM RandView) END Then you just have select blah, dbo.RandomFloat() from table in your query. A: If your table has a column (perhaps even the rowid column) that is numeric in the general sense, like integer, floating point or SQL numeric, please try the following: SELECT * FROM SomeTable WHERE SomeColumn='SomeCondition' AND 0*rowid+RAND() < 0.10 In order to evaluate RAND() once for every row, not once at the start of your query. The query optimizer is to blame. Perhaps there is another way, but I believe this will work for you. A: This seems to work: select * from SomeTable where rand(0*SomeTableID + cast(cast(newid() as binary(4)) as int)) <= 0.10 A: Did you see this question? How do I return random numbers as a column in SQL Server 2005? Adam posted a UDF you can use in place of Rand() that works much better. A: This seems to work SELECT TOP 10 PERCENT * FROM schema.MyTable ORDER BY NEWID()
{ "language": "en", "url": "https://stackoverflow.com/questions/163604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: AJAX and the Browser Back Button I run a browser based game at www.darknovagames.com. Recently, I've been working on reformatting the site with CSS, trying to get all of its pages to validate against the HTML standard. I've been toying with this idea of having the navigation menu on the left AJAX the pages in (rather than taking the user to a separate page each time, requiring a reload of the title and nav bar, which almost never change), and I know that if I do so, I will probably break the Forward/Back buttons in the browser. My question I guess is, should I go ahead and AJAX the site, thus requiring the user to use the site's navigation to play the game, or should I leave the site as it currently stands, and use standard hyperlinks and things for navigation? The reason I ask I guess is that I built a forums system into the site, and a lot of times I would want to link say to a particular topic within the forums. I'm also open to suggestions. Is there a standard (preferably without traditional frames) way to make only the body area of the site reload, while still changing the URL so that users can bookmark and forward/back, etc? That could potentially solve my problem as well. I'm just asking for the best solution here, not an answer to a specific question. ^_^ Thanks
A: Check out reallysimplehistory. The wiki hasn't been updated for 10 months, but I was just at the Ajax Experience 2008 and saw a presentation by Brian Dillard on it. He says the 0.8 code is on his hard drive. Hopefully, it will be downloadable soon.
A: Another solution: AJAX Pagination & Back Button This seems to be the best one out there; works with jQuery & MooTools.
A: Use AJAX for portions of the page that need to update, not the entire thing. For that you should use templates. When you want to still preserve the back button for your various state changes on the page, combine them with # anchors to alter the URL (without forcing the browser to issue another GET).
For example, gmail's looks like this: mail.google.com/#inbox/message-1234 everything past the # was a page state change that happened via ajax. If I press Back, I'll go to the inbox again (again, without another browser GET) A: If you're going to enable AJAX, don't do it at the expense of having accessible URLs to every significant page on your site. This is the backbone of a navigable site that people can use. When you shovel all your functionality into AJAX calls and callbacks, you're basically forcing your users into a single path to access the features and content that they want -- which is totally against how the web is meant to function. People rely on the address bar and the back button. If you override all your links so that your site is essentially a single page that only updates through AJAX, you're limiting your users' ability to navigate your site and find what they need. It also stops your users from being able to share what they find (which, that's part of the point, right?). Think about a user's mental map of your site. If they know they came in through the home page, then they went to search for something, then they landed on a games page, then they started playing a particular game, that's four distinct units of action that the user took. They might have done a few other smaller, more insignificant actions on each of these pages -- but these are the main units. When they click the Back button, they should expect to go back through the path they came in on. If you are loading all these pages through AJAX calls, you're providing a site whose functionality runs contrary to what the user expects. Break your site out into every significant function (ie, search, home, profiles, games -- it'll be dictated by what your site is all about). Anywhere you link to these pages, do it through a regular link and a static URL. AJAX is fine. But the art of it is knowing when to use it and when not to. 
If you keep to the model I've sketched out above, your users will appreciate it. A: There are numerous ways the solve this problem using funky Javascript techniques, often involving iframes, but I think in this situation you need to question why you're using AJAX. Is it actually going to make the site any easier to use for the user? It sounds to me like you're using it cos you think its cool (which in itself isn't always a bad thing) not because it will actually add any value to your visitors. From any normal website, normal hyperlinked documents are nearly always the right thing for the primary navigation. Its what people expect and I wouldn't recommend you go around breaking those expectations based on some fancy technology. AJAX is awesome and allows you to do many great things, changing a websites navigation is not one of them. Well done for picking up on this problem though, theres a lot of sites out there that just go ahead with AJAX and don't even think about this! A: Try this simple & lightweight PathJS lib. It allows to bind listeners directly to anchors. Example: Path.map("#/page").to(function(){ alert('page!'); }); A: AJAX is not the best solution for navigation for exactly the reason you describe. The hit for reloading the header and navbar is minimal compared to the hassle of breaking the browser's navigation UI. A more appropriate example of AJAX would be to allow users to play the game in the main window while they can browse through a list of other content in a nav pane. You could load additional items in the nav pane via AJAX without disturbing the gameplay. A: I’d stick with simple hyperlinks. Your page furniture shouldn’t account for a big portion of the HTML, so it’s not a big win excluding it from page requests. Making each resource addressable (i.e. a URL for each bit of content a user might be interested in) is a key design feature of the web. It means caching can work, and means users can share bookmarks. 
It makes Google work, as well as social bookmarking sites. Shaving a couple of HTML bytes off subsequent page changes isn’t worth the effort, in my opinion.
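The #-anchor technique described in the answers above can be sketched as a minimal hash router. This is illustrative only — the state names and the render step are made up, not taken from any answer:

```javascript
// Minimal hash-based routing sketch: parse the URL fragment into a state
// object, so Back/Forward and bookmarks keep working with AJAX-loaded content.
function parseHash(hash) {
  // "#inbox/message-1234" -> { view: "inbox", id: "message-1234" }
  var parts = hash.replace(/^#/, "").split("/");
  return { view: parts[0] || "home", id: parts[1] || null };
}

function applyState(state) {
  // In a real page this would AJAX the matching content into the body area;
  // here it just describes what would be loaded.
  return "loading view '" + state.view + "'" +
         (state.id ? ", item '" + state.id + "'" : "");
}

// Wire it up in a browser; guarded so the pure functions above stay testable.
if (typeof window !== "undefined") {
  window.onhashchange = function () {
    applyState(parseHash(window.location.hash));
  };
}
```

Navigation links then point at `#inbox/message-1234`-style URLs, so every state the user can reach is bookmarkable and Back/Forward just fire `hashchange`.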
{ "language": "en", "url": "https://stackoverflow.com/questions/163610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Changing the DefaultValue of a property on an inherited .net control In .net, I have an inherited control: public class CustomComboBox : ComboBox I simply want to change the default value of the DropDownStyle property to another value (ComboBoxStyle.DropDownList) besides the default one specified in the base class (ComboBoxStyle.DropDown). One might think that you can just add the constructor: public CustomComboBox() { this.DropDownStyle = ComboBoxStyle.DropDownList; } However, this approach will confuse the Visual Studio Designer. When designing the custom Control in Visual Studio, if you select ComboBoxStyle.DropDown for the DropDownStyle it thinks that the property you selected is still the default value (from the [DefaultValue()] in the base ComboBox class), so it doesn't add a customComboBox.DropDownStyle = ComboBoxStyle.DropDown line to the Designer.cs file. And confusingly enough, you find that the screen does not behave as intended once run. Well, you can't override the DropDownStyle property since it is not virtual, but you could do: [DefaultValue(typeof(ComboBoxStyle), "DropDownList")] public new ComboBoxStyle DropDownStyle { set { base.DropDownStyle = value; } get { return base.DropDownStyle; } } but then you will run into trouble from the nuances of using "new" declarations. I've tried it and it doesn't seem to work right, as the Visual Studio Designer gets confused by this approach also and forces ComboBoxStyle.DropDown (the default for the base class). Is there any other way to do this? Sorry for the verbose question; it is hard to describe in detail.
A: This looks like it works: public class CustomComboBox : ComboBox { public CustomComboBox() { base.DropDownStyle = ComboBoxStyle.DropDownList; } [DefaultValue(ComboBoxStyle.DropDownList)] public new ComboBoxStyle DropDownStyle { set { base.DropDownStyle = value; Invalidate(); } get { return base.DropDownStyle; } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/163611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Making email addresses safe from bots on a webpage? When placing email addresses on a webpage do you place them as text like this: joe.somebody@company.com or use a clever trick to try and fool the email address harvester bots? For example: HTML Escape Characters: &#106;&#111;&#101;&#46;&#115;&#111;&#109;&#101;&#98;&#111;&#100;&#121;&#64;&#99;&#111;&#109;&#112;&#97;&#110;&#121;&#46;&#99;&#111;&#109; Javascript Decrypter: function XOR_Crypt(EmailAddress) { Result = new String(); for (var i = 0; i < EmailAddress.length; i++) { Result += String.fromCharCode(EmailAddress.charCodeAt(i) ^ 128); } document.write(Result); } XOR_Crypt("êïå®óïíåâïäùÀãïíðáîù®ãïí"); Human Decode: joe.somebodyNOSPAM@company.com joe.somebody AT company.com What do you use or do you even bother? A: This is the method I used, with a server-side include, e.g. <!--#include file="emailObfuscator.include" --> where emailObfuscator.include contains the following: <!-- // http://lists.evolt.org/archive/Week-of-Mon-20040202/154813.html --> <script type="text/javascript"> function gen_mail_to_link(lhs,rhs,subject) { document.write("<a href=\"mailto"); document.write(":" + lhs + "@"); document.write(rhs + "?subject=" + subject + "\">" + lhs + "@" + rhs + "<\/a>"); } </script> To include an address, I use JavaScript: <script type="text/javascript"> gen_mail_to_link('john.doe','example.com','Feedback about your site...'); </script> <noscript> <em>Email address protected by JavaScript. Activate JavaScript to see the email.</em> </noscript> Because I have been getting email via Gmail since 2005, spam is pretty much a non-issue. So, I can't speak of how effective this method is. You might want to read this study (although it's old) that produced this graph: A: Have a look at this way, pretty clever and using css. 
CSS span.reverse { unicode-bidi: bidi-override; direction: rtl; } HTML <span class="reverse">moc.rehtrebttam@retsambew</span> The CSS above will then override the reading direction and present the text to the user in the correct order. Hope it helps Cheers A: I know my answer won't be liked by many but please consider the points outlined here before thumbing down. Anything easily machine readable will be easily machine readable by the spammers. Even though their actions seem stupid to us, they're not stupid people. They're innovative and resourceful. They do not just use bots to harvest e-mails, they have a plethora of methods at their disposal and in addition to that, they simply pay for good fresh lists of e-mails. What it means is, that they got thousands of black-hat hackers worldwide to execute their jobs. People ready to code malware that scrape the screens of other peoples' browsers which eventually renders any method you're trying to achieve useless. This thread has already been read by 10+ such people and they're laughing at us. Some of them may be even bored to tears to find out we cannot put up a new challenge to them. Keep in mind that you're not eventually trying to save your time but the time of others. Because of this, please consider spending some extra time here. There is no easy-to-execute magic bullet that would work. If you work in a company that publishes 100 peoples' e-mails on the site and you can reduce 1 spam e-mail per day per person, we're talking about 36500 spam emails a year. If deleting such e-mail takes 5 seconds on average, we're talking about 50 working hours yearly. Not to mention the reduced amount of annoyance. So, why not spend a few hours on this? It's not only you and the people who receive the e-mail that consider time an asset. Therefore, you must find a way to obfuscate the e-mail addresses in such way, that it doesn't pay off to crack it. 
If you use some widely used method to obfuscate the e-mails, it really pays off to crack it, since as a result the cracker will get their hands on thousands, if not tens or hundreds of thousands, of fresh e-mails. And for them, they will get money. So, go ahead and code your own method. This is a rare case where reinventing the wheel really pays off. Use a method that is not machine readable and one which will preferably require some user interaction without sacrificing the user experience. I spent some 20 minutes to code up an example of what I mean. In the example, I used KnockoutJS simply because I like it and I know you won't probably use it yourself. But it's irrelevant anyway. It's a custom solution which is not widely used. Cracking it won't pose a reward, since the method would only work on a single page in the vast internet. Here's the fiddle: http://jsfiddle.net/hzaw6/ The below code is not meant to be an example of good code, but just a quick sample of code which makes it very hard for a machine to figure out we even handle e-mails in here. And even if it could be done, it's not gonna pay off to execute on a large scale. And yes, I do know it doesn't work on IE <= 8 because of 'Unable to get property 'attributes' of undefined or null reference', but I simply don't care because it's just a demo of the method, not an actual implementation, and not intended to be used in production as it is. Feel free to code your own which is cooler, technically more solid, etc. Oh, and never ever ever name something mail or email in HTML or JavaScript. It's just way too easy to scrape the DOM and the window object for anything named mail or email and check if it contains something that matches an e-mail. This is why you don't want any variables that would ever contain an e-mail in its full form, and this is also why you want the user to interact with the page before you assign such variables.
If your JavaScript object model contains any e-mail addresses on DOM ready state, you're exposing them to the spammers. The HTML: <div data-bind="foreach: contacts"> <div class="contact"> <div> <h5 data-bind="text: firstName + ' ' + lastName + ' / ' + department"></h5> <ul> <li>Phone: <span data-bind="text: phone"></span></li> <li><a href="#999" data-bind="click:$root.reveal">E-mail</a> <span data-bind="visible: $root.msgMeToThis() != ''"><input class="merged" data-bind="value: mPrefix" readonly="readonly" /><span data-bind="text: '@' + domain"></span></span></li> </ul> </div> </div> </div> The JS function ViewModel(){ var self = this; self.contacts = ko.observableArray([ { firstName:'John', mPrefix: 'john.doe', domain: 'domain.com', lastName: 'Doe', department: 'Sales', phone: '+358 12 345 6789' }, { firstName:'Joe', mPrefix: 'joe.w', domain: 'wonder.com', lastName: 'Wonder', department: 'Time wasting', phone: '+358 98 765 4321' }, { firstName:'Mike', mPrefix: 'yo', domain: 'rappin.com', lastName: 'Rophone', department: 'Audio', phone: '+358 11 222 3333' } ]); self.msgMeToThis = ko.observable(''); self.reveal = function(m, e){ var name = e.target.attributes.href.value; name = name.replace('#', ''); self.msgMeToThis(name); }; } var viewModel = new ViewModel(); ko.applyBindings(viewModel);
A: You can try to hide characters using HTML entities in hexadecimal (ex: &#x40; for @). This is a convenient solution, as a correct browser will translate it, and you can have a normal link. The drawback is that a bot can theoretically translate it, but it's a bit unusual. I use this to protect my e-mail on my blog. Another solution is to use JavaScript to assemble part of the address and decode the address on the fly. The drawback is that a JavaScript-disabled browser won't show your address. The most effective solution is to use an image, but it's a pain for the user to have to copy the address by hand.
Your solution is pretty good, as you only add a drawback (writing manually the @) only for user that have javascript disabled. You can also be more secure with : onclick="this.href='mailto:' + 'admin' + '&#x40;' + 'domain.com'" A: One of my favorite methods is to obfuscate the email address using php, a classic example is to convert the characters to HEX values like so: function myobfiscate($emailaddress){ $email= $emailaddress; $length = strlen($email); for ($i = 0; $i < $length; $i++){ $obfuscatedEmail .= "&#" . ord($email[$i]).";"; } echo $obfuscatedEmail; } And then in my markup I'll simply call it as follows: <a href="mailto:<?php echo myobfiscate('someone@somewhere.com'); ?>" title="Email me!"><?php echo myobfiscate('someone@somewhere.com');?> </a> Then examine your source, you'll be pleasantly surprised! A: I wouldn't bother -- it is fighting the SPAM war at the wrong level. Particularly for company web sites I think it makes things look very unprofessional if you have anything other than the straight text on the page with a mailto hyperlink. There is so much spam flying around that you need good filtering anyway, and any bot is going end up understanding all the common tricks anyway. A: HTML: <a href="#" class="--mailto--john--domain--com-- other classes goes here" /> JavaScript, using jQuery: // match all a-elements with "--mailto--" somehere in the class property $("a[class*='--mailto--']").each(function () { /* for each of those elements use a regular expression to pull out the data you need to construct a valid e-mail adress */ var validEmailAdress = this.className.match(); $(this).click(function () { window.location = validEmailAdress; }); }); A: Not my idea originally but I can't find the author: <a href="mailto:coxntact@domainx.com" onmouseover="this.href=this.href.replace(/x/g,'');">link</a> Add as many x's as you like. It works perfectly to read, copy and paste, and can't be read by a bot. 
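The x-stuffing trick above can also be done without an inline handler; here is a sketch (the `stuffed-mail` class name is made up for illustration, and note the obvious caveat: the padding character must not occur in the real address):

```javascript
// Strip the padding character from an x-stuffed address.
// Caveat: every "x" is removed, so the real address must not contain one.
function unstuff(address) {
  return address.replace(/x/g, "");
}

// Browser wiring, guarded so the pure function above stays testable.
if (typeof document !== "undefined") {
  var link = document.querySelector("a.stuffed-mail"); // hypothetical class
  if (link) {
    link.addEventListener("mouseover", function () {
      this.href = "mailto:" + unstuff(this.href.replace(/^mailto:/, ""));
    }, { once: true });
  }
}
```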
A: Spambots won't interpret this, because it is a lesser-known method :) First, define the CSS: #email:before { content: "admin"; } #email:after { content: "@example.com"; } Now, wherever you want to display your email, simply insert the following HTML: <div id="email"></div> And tada!
A: I generally don't bother. I used to be on a mailing list that got several thousand spams every day. Our spam filter (SpamAssassin) let maybe 1 or 2 a day through. With filters this good, why make it difficult for legitimate people to contact you?
A: I use a very simple combination of CSS and jQuery which displays the email address correctly to the user and also works when the anchor is clicked or hovered: HTML: <a href="mailto:me@example.spam" id="lnkMail">moc.elpmaxe@em</a> CSS: #lnkMail { unicode-bidi: bidi-override; direction: rtl; } jQuery: $('#lnkMail').hover(function(){ // here you can use whatever replace you want var newHref = $(this).attr('href').replace('spam', 'com'); $(this).attr('href', newHref); }); Here is a working example.
A: I don't bother. You'll only annoy sophisticated users and confuse unsophisticated users. As others have said, Gmail provides very effective spam filters for a personal/small business domain, and corporate filters are generally also very good.
A: Any method of hiding email addresses is only good until bot programmers discover the "encoding" and implement a decryption algorithm. The JavaScript option won't work for long, because there are a lot of crawlers interpreting JavaScript. There's no answer, imho.
A: One easy solution is to use HTML entities instead of actual characters.
For example, the "me@example.com" will be converted into : <a href="&#109;&#97;&#105;&#108;&#116;&#111;&#58;&#109;&#101;&#64;&#101;&#120;&#97;&#109;&#112;&#108;&#101;&#46;&#99;&#111;&#109;">email me</A> A: !- Adding this for reference, don't know how outdated the information might be, but it tells about a few simple solutions that don't require the use of any scripting After searching for this myself i came across this page but also these pages: http://nadeausoftware.com/articles/2007/05/stop_spammer_email_harvesters_obfuscating_email_addresses try reversing the emailadress Example plain HTML: <bdo dir="rtl">moc.elpmaxe@nosrep</bdo> Result : person@example.com The same effect using CSS CSS: .reverse { unicode-bidi:bidi-override; direction:rtl; } HTML: <span class="reverse">moc.elpmaxe@nosrep</span> Result : person@example.com Combining this with any of earlier mentioned methods may even make it more effective A: A response of mine on a similar question: I use a very simple combination of CSS and jQuery which displays the email address correctly to the user and also works when the anchor is clicked: HTML: <a href="mailto:me@example.spam" id="lnkMail">moc.elpmaxe@em</a> CSS: #lnkMail { unicode-bidi: bidi-override; direction: rtl; } jQuery: $('#lnkMail').hover(function(){ // here you can use whatever replace you want var newHref = $(this).attr('href').replace('spam', 'com'); $(this).attr('href', newHref); }); Here is a working example. A: Here is my working version: Create somewhere a container with a fallback text: <div id="knock_knock">Activate JavaScript, please.</div> And add at the bottom of the DOM (w.r.t. 
the rendering) the following snippet: <script> (function(d,id,lhs,rhs){ d.getElementById(id).innerHTML = "<a rel=\"nofollow\" href=\"mailto"+":"+lhs+"@"+rhs+"\">"+"Mail"+"<\/a>"; })(window.document, "knock_knock", "your.name", "example.com"); </script> It adds the generated hyperlink to the specified container: <div id="knock_knock"><a rel="nofollow" href="your.name@example.com">Mail</a></div> In addition here is a minified version: <script>(function(d,i,l,r){d.getElementById(i).innerHTML="<a rel=\"nofollow\" href=\"mailto"+":"+l+"@"+r+"\">"+"Mail"+"<\/a>";})(window.document,"knock_knock","your.name","example.com");</script> A: A neat trick is to have a div with the word Contact and reveal the email address only when the user moves the mouse over it. E-mail can be Base64-encoded for extra protection. Here's how: <div id="contacts">Contacts</div> <script> document.querySelector("#contacts").addEventListener("mouseover", (event) => { // Base64-encode your email and provide it as argument to atob() event.target.textContent = atob('aW5mb0BjbGV2ZXJpbmcuZWU=') }); </script> A: Invent your own crazy email address obfuscation scheme. Doesn't matter what it is, really, as long as it's not too similar to any of the commonly known methods. The problem is that there really isn't a good solution to this, they're all either relatively simple to bypass, or rather irritating for the user. If any one method becomes prevalent, then someone will find a way around it. So rather than looking for the One True email address obfuscation technique, come up with your own. Count on the fact that these bot authors don't care enough about your site to sit around writing a thing to bypass your slightly crazy rendering-text-with-css-and-element-borders or your completely bizarre, easily-cracked javascript encryption. It doesn't matter if it's trivial, nobody will bother trying to bypass it just so they can spam you. 
A: The only safest way is of course not to put the email address onto web page in the first place. A: Use a contact form instead. Put all of your email addresses into a database and create an HTML form (subject, body, from ...) that submits the contents of the email that the user fills out in the form (along with an id or name that is used to lookup that person's email address in your database) to a server side script that then sends an email to the specified person. At no time is the email address exposed. You will probably want to implement some form of CAPTCHA to deter spambots as well. A: There are probably bots that recognize the [at] and other disguises as @ symbol. So this is not a really effective method. Sure you could use some encodings like URL encode or HTML character references (or both): // PHP example // encodes every character using URL encoding (%hh) function foo($str) { $retVal = ''; $length = strlen($str); for ($i=0; $i<$length; $i++) $retVal.=sprintf('%%%X', ord($str[$i])); return $retVal; } // encodes every character into HTML character references (&#xhh;) function bar($str) { $retVal = ''; $length = strlen($str); for ($i=0; $i<$length; $i++) $retVal.=sprintf('&#x%X;', ord($str[$i])); return $retVal; } $email = 'user@example.com'; echo '<a href="'.bar('mailto:?to=' . foo(','.$email.'')).'">mail me</a>'; // output // <a href="&#x6D;&#x61;&#x69;&#x6C;&#x74;&#x6F;&#x3A;&#x3F;&#x74;&#x6F;&#x3D;&#x25;&#x32;&#x43;&#x25;&#x37;&#x35;&#x25;&#x37;&#x33;&#x25;&#x36;&#x35;&#x25;&#x37;&#x32;&#x25;&#x34;&#x30;&#x25;&#x36;&#x35;&#x25;&#x37;&#x38;&#x25;&#x36;&#x31;&#x25;&#x36;&#x44;&#x25;&#x37;&#x30;&#x25;&#x36;&#x43;&#x25;&#x36;&#x35;&#x25;&#x32;&#x45;&#x25;&#x36;&#x33;&#x25;&#x36;&#x46;&#x25;&#x36;&#x44;">mail me</a> But as it is legal to use them, every browser/e-mail client should handle these encodings too. A: One possibility would be to use isTrusted property (Javascript). 
The isTrusted read-only property of the Event interface is a Boolean that is true when the event was generated by a user action, and false when the event was created or modified by a script or dispatched via EventTarget.dispatchEvent(). eg in your case: getEmail() { if (event.isTrusted) { /* The event is trusted */ return 'your-email@domain.com'; } else { /* The event is not trusted */ return 'chuck@norris.com'; } } ⚠ IE isn't compatible ! Read more from doc: https://developer.mozilla.org/en-US/docs/Web/API/Event/isTrusted A: You can protect your email address with reCAPTCHA, they offer a free service so people have to enter a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) to see your email: https://www.google.com/recaptcha/admin#mailhide A: I've written an encoder (source) that uses all kinds of parsing tricks that I could think of (different kinds of HTML entities, URL encoding, comments, multiline attributes, soft hyphens, non-obvious structure of mailto: URL, etc) It doesn't stop all harvesters, but OTOH it's completely standards-compliant and transparent to the users. Another IMHO good approach (which you can use in addition to tricky encoding) is along lines of: <a href="mailto:userhatestogetspam@example.com" onclick="this.href=this.href.replace(/hatestogetspam/,'')"> A: I think the only foolproof method you can have is creating a Contact Me page that is a form that submits to a script that sends to your email address. That way, your address is never exposed to the public at all. This may be undesirable for some reason, but I think it's a pretty good solution. It often irks me when I'm forced to copy/paste someone's email address from their site to my mail client and send them a message; I'd rather do it right through a form on their site. Also, this approach allows you to have anonymous comments sent to you, etc. Just be sure to protect your form using some kind of anti-bot scheme, such as a captcha. 
There are plenty of them discussed here on SO.
A: Working with content and attr in CSS: .cryptedmail:after { content: attr(data-name) "@" attr(data-domain) "." attr(data-tld); } <a href="#" class="cryptedmail" data-name="info" data-domain="example" data-tld="org" onclick="window.location.href = 'mailto:' + this.dataset.name + '@' + this.dataset.domain + '.' + this.dataset.tld; return false;"></a> When JavaScript is disabled, just the click event will not work; the email is still displayed. Another interesting approach (at least without a click event) would be to make use of the right-to-left mark to override the writing direction. More about this: https://en.wikipedia.org/wiki/Right-to-left_mark
A: If you have PHP support, you can do something like this: <img src="scriptname.php"> And the scriptname.php: <?php header("Content-type: image/png"); // Your email address which will be shown in the image $email = "you@yourdomain.com"; $length = (strlen($email)*8); $im = @ImageCreate($length, 20) or die("Cannot create a new GD image stream"); $background_color = ImageColorAllocate($im, 255, 255, 255); // White: 255,255,255 $text_color = ImageColorAllocate($im, 55, 103, 122); imagestring($im, 3, 5, 2, $email, $text_color); imagepng($im); ?>
A: I make mine whateverDOC@whatever.com and then next to it I write "Remove the capital letters".
A: Another, possibly unique, technique might be to use multiple images and a few plain-text letters to display the address. That might confuse the bots.
A: Gmail, which is free, has an awesome spam filter. If you don't want to use Gmail directly you could send the email to Gmail and use Gmail forwarding to send it back to you after it has gone through their spam filter.
In a more complex situation, when you need to show a @business.com address, you could show public@business.com and have all this mail forwarded to a Gmail account which then forwards it back to the real@business.com. I guess it's not a direct solution to your question but it might help. Gmail being free and having such a good spam filter makes using it a very wise choice IMHO. I receive about 100 spam messages per day in my Gmail account but I can't remember the last time one of them got to my inbox. To sum up, use a good spam filter, whether Gmail or another. Having the user retype or modify the email address that is shown is like using DRM to protect against piracy. Putting the burden on the "good" guy shouldn't be the way to go about doing anything. :) A: Does it work if I right-click on the link and choose "copy URL"? If not, it's very much not an ideal situation (I very seldom click on a mailto link, preferring to copy the email address and paste it into my mail application or wherever else I need it at a specific point in time). I used to be fairly paranoid protecting my mail address on-line (UseNet, web and the like), but these days I suspect more "possible targets for spam" are actually generated by matching local-parts to domains programmatically. I base this on having, on occasion, gone through my mail server logs. There tend to be quite a few delivery attempts to non-existing addresses (including truncated versions of spam-bait I dangled on UseNet back in the late 90s, when address-scraping was very prevalent). A: First I would make sure the email address only shows when you have JavaScript enabled. This way, there is no plain text that can be read without JavaScript. Secondly, a way of implementing a safe feature is by staying away from the <button> tag. This tag needs text inserted between the tags, which makes it computer-readable. Instead try the <input type="button"> with a JavaScript handler for an onClick.
Then use all of the techniques mentioned by others to implement a safe email notation. One other option is to have a button with "Click to see email address". Once clicked, this changes into a coded email (the characters in HTML codes). On another click this redirects to the 'mailto:email' function. An uncoded version of the last idea, with selectable and non-selectable email addresses: <html> <body> <script type="text/javascript"> var e1="@domain"; var e2="me"; var e3=".extension"; var email_link="mailto:"+e2+e1+e3; </script> <input type="text" onClick="window.open(email_link);" value="Click for mail"/> <input type="text" onClick="this.value=e2+e1+e3;" value="Click for mail-address"/> <input type="button" onClick="window.open(email_link);" value="Click for mail"/> <input type="button" onClick="this.value=e2+e1+e3;" value="Click for mail-address"/> </body></html> See if this is something you would want and combine it with others' ideas. You can never be too sure. A: For your own email address I'd recommend not worrying about it too much. If you have a need to make your email address available to thousands of users then I would recommend either using a Gmail address (vanilla or via Google Apps) or using a high quality spam filter. However, when displaying other users' email addresses on your website I think some level of due diligence is required. Luckily, a blogger named Silvan Mühlemann has done all the difficult work for you. He tested out different methods of obfuscation over a period of 1.5 years and determined the best ones; most of them involve CSS or JavaScript tricks that allow the address to be presented correctly in the browser but will confuse automated scrapers.
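One concrete example of the kind of JavaScript trick such a survey covers is ROT13 encoding (offered here as an illustrative sketch, not necessarily one of Mühlemann's tested methods): the page stores the rotated address, and a small script rotates it back on the client, so scrapers that don't execute JavaScript only ever see gibberish.

```javascript
// Rotate each ASCII letter 13 places; applying it twice restores the input.
// Non-letters (@, ., digits) pass through unchanged.
function rot13(s) {
  return s.replace(/[a-zA-Z]/g, function (c) {
    var base = c <= 'Z' ? 65 : 97; // uppercase vs lowercase alphabet start
    return String.fromCharCode((c.charCodeAt(0) - base + 13) % 26 + base);
  });
}

var encoded = rot13('me@example.com'); // what the HTML would carry
console.log(encoded);                  // -> zr@rknzcyr.pbz
console.log(rot13(encoded));           // decodes back to me@example.com
```

On a real page you would keep only the encoded form in the markup and call rot13() in an onclick handler to build the mailto: href, much as several answers here do with their own encodings.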
A: After using so many techniques I found an easy and very friendly way: the bots search for the @ symbol, and recently they also search for [at] and its variations, so I use 2 techniques * *I write my email on an image (like the domain-tools sites do) and it works perfectly, or *I replace the symbol (@) with an image of it, and the image alt will be alt="@", so the bot will find an image while any human will see it as a normal address; if he copies it he will copy the email and the job is done. So the code will be <p>myname<img src="http://www.traidnt.net/vb/images/mail2.gif" width="11" height="9" alt="@" />domain.com</p> A: What about HTML character entities? joe&#064;mail.com outputs joe@mail.com A: And my function. I've created it looking at answers placed in this topic. function antiboteEmail($email) { $html = ''; $email = strrev($email); $randId = rand(1, 500); $html .= '<span id="addr-'.$randId.'" class="addr">[turn javascript on to see the e-mail]</span>'; $html .= <<<EOD <script> $(document).ready(function(){ var addr = "$email"; addr = addr.split("").reverse().join(""); $("#addr-$randId").html("<a href=\"mailto:" + addr + "\">" + addr + " </a>"); }); </script> EOD; return $html; } It uses two methods: right-to-left direction and JavaScript insertion. A: Option 1 : Split email address into multiple parts and create an array in JavaScript out of these parts. Next join these parts in the correct order and use the .innerHTML property to add the email address to the web page. <span id="email"> </span> // blank tag <script> var parts = ["info", "XXXXabc", "com", "&#46;", "&#64;"]; var email = parts[0] + parts[4] + parts[1] + parts[3] + parts[2]; document.getElementById("email").innerHTML=email; </script> Option 2 : Use an image instead of email text Image creator website from text : http://www.chxo.com/labelgen/ Option 3 : We can use AT instead of "@" and DOT instead of " . " i.e : info(AT)XXXabc(DOT)com A: I don't like JavaScript and HTML to be mixed, that's why I use this solution.
It works fine for me, for now. Idea: you could make it more complicated by providing encrypted information in the data-attributes and decrypt it within the JS. This is simply done by replacing letters or just reversing them. HTML: <span class="generate-email" data-part1="john" data-part2="gmail" data-part3="com">placeholder</span> JS: $(function() { $('.generate-email').each(function() { var that = $(this); that.html( that.data('part1') + '@' + that.data('part2') + '.' + that.data('part3') ); }); }); Try it: http://jsfiddle.net/x6g9L817/ A: A script that saves email addresses to png files would be a secure solution ( if you have enough space and you are allowed to embed images in your page ) A: This is what we use (VB.NET): Dim rxEmailLink As New Regex("<a\b[^>]*mailto:\b[^>]*>(.*?)</a>") Dim m As Match = rxEmailLink.Match(Html) While m.Success Dim strEntireLinkOrig As String = m.Value Dim strEntireLink As String = strEntireLinkOrig strEntireLink = strEntireLink.Replace("'", """") ' replace any single quotes with double quotes to make sure the javascript is well formed Dim rxLink As New Regex("(<a\b[^>]*mailto:)([\w.\-_^@]*@[\w.\-_^@]*)(\b[^>]*?)>(.*?)</a>") Dim rxLinkMatch As Match = rxLink.Match(strEntireLink) Dim strReplace As String = String.Format("<script language=""JavaScript"">document.write('{0}{1}{2}>{3}</a>');</script>", _ RandomlyChopStringJS(rxLinkMatch.Groups(1).ToString), _ ConvertToAsciiHex(rxLinkMatch.Groups(2).ToString), _ rxLinkMatch.Groups(3), _ ConvertToHtmlEntites(rxLinkMatch.Groups(4).ToString)) Result = Result.Replace(strEntireLinkOrig, strReplace) m = m.NextMatch() End While and Public Function RandomlyChopStringJS(ByVal s As String) As String Dim intChop As Integer = Int(6 * Rnd()) + 1 Dim intCount As Integer = 0 RandomlyChopStringJS = "" If Not s Is Nothing AndAlso Len(s) > 0 Then For Each c As Char In s.ToCharArray() If intCount = intChop Then RandomlyChopStringJS &= "'+'" intChop = Int(6 * Rnd()) + 1 intCount = 0 End If 
RandomlyChopStringJS &= c intCount += 1 Next End If End Function We override Render and run the outgoing HTML through this before it goes out the door. This renders email addresses that render normally in a browser, but look like this in the source: <script language="JavaScript">document.write('<a '+'clas'+'s='+'"Mail'+'Link'+'" hr'+'ef'+'="ma'+'ilto:%69%6E%66%6F%40%62%69%63%75%73%61%2E%6F%72%67">&#105;&#110;&#102;&#111;&#64;&#98;&#105;&#99;&#117;&#115;&#97;&#46;&#111;&#114;&#103;</a>');</script> Obviously not foolproof, but hopefully cuts down on a certain amount of harvesting without making things hard for the visitor. A: It depends on what exactly your needs are. For most sites with which I work, I have found it far more useful to put in a "contact me/us" form which sends an email from the system to whomever needs to be contacted. I know that this isn't exactly the solution that you are seeking but it does completely protect against harvesting and so far I have never seen spam sent through a form like that. It will happen but it is very rare and you are never harvested. This also gives you a chance to log the messages before sending them, giving you an extra level of protection against losing a contact, if you so desire. A: Spam bots will have their own JavaScript and CSS engines over time, so I think you shouldn't look in this direction. A: Here is a simple jQuery solution to this problem: <script type="text/javascript"> $(document).ready(function() { str1="mailto:"; str2="info"; str3="@test.com"; $("#email_a").attr("href", str1+str2+str3); }); </script> <a href="#" id="email_a"><img src="sample.png"/></a> A: I like ofaurax's answer best but I would modify it to this for a little more hidden email: onclick="p1='admin'; p2='domain.com'; this.href='mailto:' + p1 + '&#x40;' + p2" A: I just have to provide another answer. I just came up with something fun to play with. I found out that in many common character tables, the letters @ and a-z reappear more than once.
You can map the original characters to the new mappings and make it harder for spam bots to figure out what the e-mail is. If you loop through the string, and get the character code of a letter, then add 65248 to it and build a html entity based on the number, you come up with a human readable e-mail address. var str = 'john.doe@email.com'; str = str.toLowerCase().replace(/[\.@a-z]/gi, function(match, position, str){ var num = str.charCodeAt(position); return ('&#' + (num + 65248) + ';'); }); Here is a working fiddle: http://jsfiddle.net/EhtSC/8/ You can improve this approach by creating a more complete set of mappings between characters that look the same. But if you copy/paste the e-mail to notepad, for example, you get a lot of boxes. To overcome some of the user experience issues, I created the e-mail as a link. When you click it, it remaps the characters back to their originals. To improve this, you can create more complex character mappings if you like. If you can find several characters that can be used, for example, in the place of 'a', why not randomly map to those. Probably not the most secure approach ever but I really had fun playing around with it :D A: I just coded the following. Don't know if it's good but it's better than just writing the email in plain text.
Many robots will be fooled but not all of them. <script type="text/javascript"> $(function () { setTimeout(function () { var m = ['com', '.', 'domain', '@', 'info', ':', 'mailto'].reverse().join(''); /* Set the contact email url for each "contact us" link. */ $('.contactUsLink').prop("href", m); }, 200); }); </script> If the robot solves this then there's no need to add more "simple logic" code like "if (1 == 1 ? '@' : '')" or adding the array elements in another order, since the robot just evals the code anyway. A: Font Awesome works! <link rel="stylesheet" href="path/to/font-awesome/css/font-awesome.min.css"> <p>myemail<i class="fa fa-at" aria-hidden="true"></i>mydomain.com</p> http://fontawesome.io/ A: Hidden Base64 solution. I think it does not matter if you put an email address in a :before/:after pseudo-element or split it into reverse-written data attributes ... Spambots are clever and analyze parsed webpages. This solution is interactive. The user has to click "show" to get a base64-decoded email address which can be copied and/or is clickable.
// search for [data-b64mail] attributes document.querySelectorAll('[data-b64mail]').forEach(el => { // set "show" link el.innerHTML = '<span style="text-decoration:underline;cursor:pointer">show</span>'; // set click event to all elements el.addEventListener('click', function (e) { let cT = e.currentTarget; // show address cT.innerHTML = atob(cT.getAttribute('data-b64mail')); // set mailto on a tags if (cT.tagName === 'A') cT.setAttribute('href', 'mailto:' + atob(cT.getAttribute('data-b64mail'))); }); }); // get base64 encoded string console.log(btoa('mail@example.org')); <p>E-mail (span): <span data-b64mail="bWFpbEBleGFtcGxlLm9yZw=="></span></p> <p>E-mail (link): <a href="#" data-b64mail="bWFpbEBleGFtcGxlLm9yZw=="></a></p> A: Another option; I prefer Font Awesome icons. FA implementation: <link rel="stylesheet" href="path/to/font-awesome/css/font-awesome.min.css"> Mail Address: <a href="mailto:info@uploadimage.club"><span class="label">info<i class="fa fa-at"></i>uploadimage.club</span></a>
{ "language": "en", "url": "https://stackoverflow.com/questions/163628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Which SVG toolkit would you recommend to use in Java? As a follow-up to another question, I was wondering what would be the best way to use SVG in a Java project. A: The Apache Batik project is an open source SVG renderer written in Java. You can pass it an SVG file, or create a document programmatically via a DOM-style API accessible from Java code. A: Besides Batik there is also SVG Salamander. Personally I prefer Salamander, though it doesn't support all SVG features, e.g. Gaussian blurring.
{ "language": "en", "url": "https://stackoverflow.com/questions/163632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to get access to the Websphere 6.1 ant tasks from vanilla ant (not ws_ant) I guess I need to know what I need in the classpath (what jar) in order to execute WebSphere 6.1 ant tasks. If someone can provide an example that would be perfect. A: The actual WebSphere Ant tasks are defined in wsanttasks.jar. A possible path for Linux systems is /opt/IBM/WebSphere/AppServer/lib/wsanttasks.jar However I doubt that you will be successful just by including that, as I do remember trying it once and it failed because of dependencies. However, it is not impossible to do, as ws_ant is just a wrapper script which adds all the required classpaths and calls the built-in Ant. So if you have time to look into the ws_ant script you will be able to get all the required classpaths. By the way, is there any special reason why you want to avoid ws_ant? That will surely make your life simpler. A: For WebSphere 6.1, you can use the jar com.ibm.ws.runtime_6.1.0.jar to access the ant tasks. On Windows, the jar is located in the plugins directory (for me this is: C:\Program Files\IBM\WebSphere\AppServer\plugins). A: WebSphere's ant tasks are screwed up and they invoke wsadmin.bat; you can do that yourself.
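Putting the answers above together, a build.xml fragment along these lines should wire the tasks into a vanilla Ant run. Treat it as a sketch: the jar locations come from the answers, but the task name and classname below (wsInstallApp / com.ibm.websphere.ant.tasks.InstallApplication) are assumptions that may differ between WebSphere versions and fixpacks, and extra jars from the ws_ant classpath may still be needed.

```xml
<!-- Assumed install root; adjust for your machine -->
<property name="was.home" value="C:/Program Files/IBM/WebSphere/AppServer"/>

<!-- Define one WebSphere task against the jars mentioned in the answers -->
<taskdef name="wsInstallApp"
         classname="com.ibm.websphere.ant.tasks.InstallApplication">
  <classpath>
    <fileset dir="${was.home}/plugins" includes="com.ibm.ws.runtime_6.1.0.jar"/>
    <fileset dir="${was.home}/lib" includes="wsanttasks.jar"/>
  </classpath>
</taskdef>
```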
{ "language": "en", "url": "https://stackoverflow.com/questions/163646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Lightweight .NET debugger? I frequently need to debug .NET binaries on test machines (by test-machine, I mean that the machine doesn't have Visual Studio installed on it, it's frequently re-imaged, it's not the same machine that I do my development on, etc). I love the Visual Studio debugger, but it's not practical for me to install Visual Studio on a freshly imaged test-machine just to debug an assertion or crash (the install takes way too long, the footprint is too large, etc). I'd really like a quickly installed program that could break into a running process, let me specify the location of symbols/source code, and let me jump right into debugging. For native binaries, windbg works great, but I haven't found anything similar for managed binaries. Any recommendations? (As a side note, I am aware of Visual Studio's remote debugging capabilities, but for some reason it never seems to work consistently for me... I often have connection issues.) A: I've finally found extensions for Windbg that do just what I wanted: Sosex.dll, which lets me use windbg to debug managed applications with very minimal installation required. I've used it for more than a year now, and it's worked, without fault, for every debugging scenario I've encountered. A: There are always mdbg and cordbg, but I would suggest digging more into why remote debugging doesn't work consistently. VS2005/8 seem a lot more reliable than earlier versions here (though I primarily do unmanaged) and it saves you from having to have the symbols accessible on the target machine. A: Version 2 of ILSpy contains a debugger. And it works awesomely! It is still in very early stages, but it has helped me several times. Just watch out for bugs! :) A: For a bit nicer interface than MDbg or cordbg take a look at DbgCLR - a cut-down version of the Visual Studio debugger (at least it looks like one) that handles only managed code.
It comes with the .NET Framework (I'm not sure if it's in the runtime or if you need the Framework SDK): * *http://msdn.microsoft.com/en-us/library/7zxbks7z(VS.85).aspx Note that cordbg is deprecated in favor of MDbg (even though MDbg doesn't have all of cordbg's features): * *http://blogs.msdn.com/jmstall/archive/2005/11/07/views_on_cordbg_and_mdbg.aspx And in looking back at MDbg while writing this post, I found that there's a GUI wrapper available for MDbg (which I haven't tried): * *http://blogs.msdn.com/jmstall/archive/2005/02/04/367506.aspx A: Use dnSpy. dnSpy is a debugger and .NET assembly editor. You can use it to edit and debug assemblies even if you don't have any source code available. It's so wonderful. Very small and lightweight. No installation or configuration needed. Its interface is exactly like Visual Studio. Even its shortcuts are the same as VS. Features: Debugger * *Debug .NET Framework, .NET Core and Unity game assemblies, no source code required *Set breakpoints and step into any assembly *Locals, watch, autos windows *Variables windows support saving variables (e.g. decrypted byte arrays) to disk or viewing them in the hex editor (memory window) *Object IDs *Multiple processes can be debugged at the same time *Break on module load *Tracepoints and conditional breakpoints *Export/import breakpoints and tracepoints *Call stack, threads, modules, processes windows *Break on thrown exceptions (1st chance) *Variables windows support evaluating C# / Visual Basic expressions *Dynamic modules can be debugged (but not dynamic methods due to CLR limitations) *Output window logs various debugging events, and it shows timestamps by default :) *Assemblies that decrypt themselves at runtime can be debugged; dnSpy will use the in-memory image. You can also force dnSpy to always use in-memory images instead of disk files.
*Public API, you can write an extension or use the C# Interactive window to control the debugger Assembly Editor * *All metadata can be edited *Edit methods and classes in C# or Visual Basic with IntelliSense, no source code required *Add new methods, classes or members in C# or Visual Basic *IL editor for low level IL method body editing *Low level metadata tables can be edited. This uses the hex editor internally. Hex Editor * *Click on an address in the decompiled code to go to its IL code in the hex editor *Reverse of above, press F12 in an IL body in the hex editor to go to the decompiled code or other high level representation of the bits. It's great to find out which statement a patch modified. *Highlights .NET metadata structures and PE structures *Tooltips show more info about the selected .NET metadata / PE field *Go to position, file, RVA *Go to .NET metadata token, method body, #Blob / #Strings / #US heap offset or #GUID heap index *Follow references (Ctrl+F12) Other * *BAML decompiler *Blue, light and dark themes (and a dark high contrast theme) *Bookmarks *C# Interactive window can be used to script dnSpy *Search assemblies for classes, methods, strings etc *Analyze class and method usage, find callers etc *Multiple tabs and tab groups *References are highlighted, use Tab / Shift+Tab to move to next reference *Go to entry point and module initializer commands *Go to metadata token or metadata row commands *Code tooltips (C# and Visual Basic) *Export to project A: You could check out MDbg: http://blogs.msdn.com/jmstall/archive/2006/11/22/mdbg-sample-2-1.aspx. It looks like it comes with the .NET 3.5 SDK at least (and it's probably included with 2.0+). Windbg has managed extensions (called SOS I believe), though I don't know if they allow source-level debugging. A: Have you tried using Cracked.NET? It's a runtime debugging and scripting tool that gives you access to the internals of any .NET desktop application running on your computer.
A: Maybe you can try using Live Tuning combined with an Ocf Server? It's not a debugger per se, but it's pretty easy to get a connection between an app and Live Tuning. Like, literally 3 lines of code. Then you have access to all the variables you choose to publish. I found it useful when trying to debug my programs without having access to the decompiled code or a real debugger. You can't really have breakpoints, but it turns out there are sometimes better ways to debug.
{ "language": "en", "url": "https://stackoverflow.com/questions/163647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: What are the best options for NAT port forwarding? I'd like to make it easy for users to forward a port on their NAT to their local machine for my C++ app. I'd like to make this work on OSX & Windows. Linux would be a great bonus, but Linux users are probably more comfortable forwarding ports manually, so it is less of a concern. LGPL type code is OK, but I can't use anything that is straight GPL. I'd love to hear any thoughts or experiences anyone has had in this area, but a few specific questions come to mind: * *Is there a recognized best library for UPNP? The MiniUPNP client looks like it might work, but is there anything else out there? *What about Bonjour? Can I rely on it for OSX computers? *All the big bittorrent apps have to deal with this, so is there an existing survey of how they do it? What about Skype? A: MiniUPNP is used by at least one bittorrent client (Transmission) and should work fine. A: Bonjour on both OS X and Windows can be used to do port mappings with routers that support uPNP or NAT-PMP. I haven't used the API (DNSServiceNATPortMappingCreate) but I have successfully published wide-area services on both Windows and OS X behind a NAT-PMP router. I'm not sure if your Windows users will want to install Bonjour (although they may already have it if they use iTunes or Safari) to use your app but on OS X support shouldn't be an issue.
{ "language": "en", "url": "https://stackoverflow.com/questions/163654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Reading Comma Delimited File and Putting Data in ListView - C# Alright, I'm trying to read a comma delimited file and then put that into a ListView (or any grid, really). I have the delimiting part of the job taken care of, with the fields of the file being put into a multidimensional string array. The problem is trying to get it into the ListView. It appears that there isn't a real way of adding columns or items dynamically, since each column and item needs to be manually declared. This poses a problem, because I need the ListView to be as large as the file is, whose size isn't set. It could be huge one time, and small another. Any help with this would be appreciated. In response to Jeffrey's answer. I would do exactly that, but the problem that I'm running into is a basic one. How can I create these objects without naming them? Noobie question, but a problem for me, sadly. This is what I have so far. int x = 0; int y = 0; while (y < linenum) { while (x < width) { ListViewItem listViewItem1 = new ListViewItem(list[y,x]); x++; } y++; x = 0; } What should I do for the name of listViewItem1? A: Just loop through each of the arrays that you've created and create a new ListViewItem object (there is a constructor that takes an array of strings, I believe). Then pass the ListViewItem to the ListView.Items.Add() method. A: You can load a csv file with ado.net and bind it to a datagrid's data source. Or you could use LINQ to XML to parse the file and bind those results to a datagrid's data source property. A: I would use the FileHelpers Library to read in the CSV file and then DataBind the collection to the ListView. Use the DelimitedClassBuilder to dynamically create columns with the typeof(string) equal to the number of columns in your source file. Load your CSV file into a DataTable using the RecordClass that you created and then set the ListView.DataSource to the DataTable. A: Linq To CSV A: Is there a reason you can't use a DataTable?
Use the DataSource member off of it. Also, I hope you are using the String.Split function, and not manually parsing... ~S
{ "language": "en", "url": "https://stackoverflow.com/questions/163662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I change the default *.elog log file name for an interpreted Specman session? I want to be able to specify the file name stem for the log file in a Specman test. I need to hard-code the main *.elog filename so that I don't get variance between tests and confuse the post-processing scripts. Is there a constraint or command line I can pass into Specman? A: You can also use the Specman command "set log" or use the following code. extend sys { run() is also { specman("set log specman.elog"); }; }; A: You can control the *.elog filename with the -log switch.
{ "language": "en", "url": "https://stackoverflow.com/questions/163678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ajax versus Frames In light of how ajax is actually used by most sites today, why is ajax embraced while frames are still regarded as a bad idea? A: AJAX, from where I'm sitting, is a sort of grand tradeoff. You are breaking things in the "document" model of the interwebs so that your site can behave more like an "application." If a site is using AJAX well, they will break the document model in subtle ways that add something of value to the application. The "vote" link isn't really a link, but it gives you a cool animation and updates the question's status asynchronously. Frames break just as much, if not more, of the document model (bookmarks, scrolling, copy-and-paste, etc) but without as much of the benefit. Frames also insert whatever decorations my OS/Window manager happens to be using, so they look pretty ugly. AJAX, if done correctly, also breaks better for people using screen readers, text-based browsers, etc. A: The big problems with frames are that it's possible to deep-link to the frames page outside of the frameset, and that bookmarking rarely works as expected. There are of course fixes for all these things, but they simply make an already not-very-nice system even clunkier and more complicated. Ajax, as I have stated elsewhere, is more about bringing modern javascript to the mainstream and making it acceptable again than it is about using the xmlhttp object (which is really what the term AJAX means). Once you have a site on which javascript use is accepted and even expected, there's a lot more interesting stuff you can do with it. A: With Ajax you can put all your logic in javascript code. That way you can create or use a javascript library that does not depend on your page. If you use an iframe, you now have to deal with a hidden control and most of your javascript code has to know about the iframe. Also, pages work better with search engines if they don't have frames. A: Ajax gives you more granular control.
You can update an individual element in a page, where Frames give you control of blocks that aren't even really in the same document. A: Here are two simple answers: 1) Just using the term AJAX is cool and makes your project sound more "Web 2.0". Frames is not sexy. In fact, in web terms, frames are the antithesis of sexy. 2) AJAX is forward looking even if used in non-standard or poorly supported ways. It is less likely, IMHO, to break moving forward compared to frames which is backward looking even if in the same manner. A: Ajax and frames are completely different from an accessibility standpoint (they're also completely different full stop). Frames offer very little positive effect but bring with them a host of negative issues. Ajax on the other hand makes the user interface more dynamic without compromising usability in most cases.
{ "language": "en", "url": "https://stackoverflow.com/questions/163704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Where is the Attic in Subversion (Tortoise)? Whoops, I need some info from a file I deleted, a while ago. In CVS I would just go to the ATTIC to find it, how do I find a file in SVN without having to go back to a revision where it existed (especially annoying since I have no idea really when I deleted it -- one week ago, two weeks ago...) A: Browse the SVN Log of the directory it was in, find the revision where you deleted it. In the bottom pane, right click the file, and choose the option "Save Revision To..". To help you find which revision you deleted it in, look for the icon of a doc with an X in the lower left of it in the Actions column of Show Log. A: The "attic" in CVS is more of an implementation detail. The file can't be deleted completely from the repository, since the file history is in the ",v" file itself, so CVS moves it aside. Subversion uses a more sophisticated repository storage mechanism where files don't need to be moved aside in this way. I don't think there's an easy way to query for the most recent revision where a file existed, but you should be able to find it easily enough using "svn ls -r rev". In this case rev can be any one of the things Subversion accepts to indicate a revision - a number, a date, etc. Just go back in history until you find it, then step forward until you find the last revision where it existed. Update: @AviewAnew has a good idea about checking the log of the directory where the file existed. Since a file delete is really a change to the directory that contains it, it should be easy to find where the file disappeared this way. A: svn log --verbose will show you what you deleted. Then you can do an svn copy --revision <last_revision_with_deleted_file> to get a working copy of the deleted file. This shouldn't be any harder than getting a deleted file from CVS.
{ "language": "en", "url": "https://stackoverflow.com/questions/163707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Taking thrift files from an API, and building the .NET dll file I can't figure out how to compile thrift files for C#. I've read, "thrift files which can then be compiled down to language-specific interfaces for a wide variety of different programming platforms (Java, PHP, C/C++, Cocoa, Perl, C#, Ruby, etc.)." I was looking here: http://www.markhneedham.com/blog/2008/08/29/c-thrift-examples/ and it's like I have to compile the compiler, and then build the C# version from the thrift files provided. Any ideas on how to do this? A: Yes, that's right, you first compile the Win32 compiler using a Cygwin environment and then in turn use that compiler to create Thrift language interfaces. A: You can download the Thrift compiler executable from http://thrift.apache.org/download/ A: I was searching for your question and I found this useful topic. Hope this helps :)
{ "language": "en", "url": "https://stackoverflow.com/questions/163711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Which Java profiler is better: JProfiler or YourKit? Which profiler is better for general purpose profiling and heap analysis? 90% of our apps are standalone command line programs with substantial database and numeric processing. The other 10% are webapps/servlet container apps (with very little JSP and NO SCRIPTLETS!). Target user would be a Sr Software Engineer with 5-10 years of industry experience. We need support only for Sun JDK 5. As of writing this question (2008-10-02), JProfiler was at 5.1.4 and YourKit was at 7.5. Looks like YourKit 8.0 will be released soon. A: I've used JProbe, OptimizeIt, and YourKit all extensively and they're all capable tools. Of the 3, my all around favorite is YourKit. The one killer feature in JProbe is the ability to move from a perf snapshot to annotated source (with counts and timings). I found that to be exceptionally useful. A: None of the tools other than JXInsight perform real database transaction analysis: http://www.jinspired.com/products/jxinsight/concurrency.html http://www.jinspired.com/products/jxinsight/olapvsoltp.html JXInsight's Probes technology is also the only one that could even run in production, considering that we outperform the NetBeans profiler by 20x and YourKit by 100x in SPECjvm2008 benchmarks. http://blog.jinspired.com/?p=272 I am the architect of JXInsight, so of course I am completely biased, but at the same time I am probably more qualified than most in the Java industry to make such a claim since I have devoted the last 8 years to performance analysis for some of the most demanding Java/J2EE applications in production. I should point out that JXInsight is designed for software performance engineers and not just for the occasional ad hoc profiling session.
We have more than 4000+ system properties to configure the runtime and 600+ technology extension libraries, so it might be overkill unless one has a complex problem to solve and/or unless using the same tool across development, test, and production is paramount. Kind regards, William A: With Java 7 Update 40, Oracle included Java Mission Control (originally part of the JRockit JDK) - a very powerful performance tuning tool which is able to compete with YourKit/JProfiler. Take a look and be surprised. A: I have used both and my vote now is definitely JProfiler (in the current version 6) as it is easier to use and has a lot of useful additional features. In previous releases YourKit had some advantages with larger snapshots, but this is gone now. A: Definitely YourKit ... It was able to open a 4 GB heap dump with just 1 GB of heap used, while JProfiler with the same heap allocation crashed! A: I've used both JProfiler 4 and YourKit 7.5, and YourKit wins hands down. It's so much less invasive than JProfiler, in that I'll happily run production servers with the YourKit agent installed, which I would never do with JProfiler. Also, the analysis tool that comes with YourKit is more intuitive (in my opinion), making it easier to get to the root cause of problems. A: I've used YourKit and it is a very nice profiler, the best I've ever used in Java (I've used a variety of others over the years). That being said, I've never used JProfiler, so I can't give a direct comparison. A: If you're on JDK >= 1.6_07 you might also want to look at jvisualvm, which comes bundled. A: Having used both JProfiler and YourKit recently, I find that YourKit is far superior for memory problem analysis and strongly prefer JProfiler for performance analysis. YourKit's memory analysis seems much easier and more intuitive. For performance analysis, I have been unsuccessful in resolving any performance issue I have tried to tackle with YourKit.
JProfiler shows more accurate and concise information for performance analysis, with the exact number of method invocations and percent of time spent in each method. I have yet to find this in YourKit. It seems YourKit just gives sampling information, which is not accurate unless you are measuring thousands of invocations. A: For quick and dirty profiling of command-line programs, JIP works really well. A: Been using JProfiler for years and very happy with it. IntelliJ seems to switch their recommendation back and forth between YourKit and JProfiler, so I would guess their feature sets are similar. I believe they both have trial versions. A: DISCLAIMER : Alternate answer. They have various products for production monitoring/profiling, UNLIKE the other, mostly development-time tools : http://www.jinspired.com/products/jxinsight/ This post on theserverside on JDBInsight : http://www.theserverside.com/news/thread.tss?thread_id=13488 DISCLAIMER : I am NOT associated with this company at any level. A: I have used YourKit. I have not used JProfiler. I have used OptimizeIt before. I have a very good opinion of YourKit. It is very stable, with a good GUI and a good feature list. One unique feature I have noticed is CPU profiling with and without wait time (like I/O waits) included. It is also priced very reasonably (about $1100 for 5 licenses, I think) A: YourKit is great. You might also want to check out the profiler built into NetBeans--it's pretty cool. A: YourKit. It's low overhead, stable, easy to install on the JVM to be profiled (just one DLL) and powerful. For analyzing heap dumps it's the only profiler that comes close to the Eclipse Memory Analyzer. A: I've only used JProfiler (and some JProbe). As far as I can tell, one limitation of YourKit is that they don't appear to support JDK 1.4.2. That's not an issue for many people, but it might be.
A: +1 for YourKit --- using 7.0 on dev boxes on Windows; haven't used JProfiler for a while -- cannot comment since they might have improved in the meantime. A: Just as an aside, you may want to consider the NetBeans profiler -- it's pretty good. But I've not used either of the two you mentioned. A: I am using JProfiler and find it overall OK. Its "dynamic instrumentation" feature is terribly biased toward small methods, though. A: I am using the TPTP profiler. The best feature is that it can be integrated very easily into Eclipse, but the bad thing is that it makes Eclipse run slower. A: Definitely YourKit. It is the most intuitive and stable!
{ "language": "en", "url": "https://stackoverflow.com/questions/163722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: maven-buildnumber-plugin I use the maven-buildnumber-plugin to generate my version number for JAR/WAR/EAR packages. So when doing a compile I'll get, for example, ${project.version}-${buildNumber}, because it is set to this value. But when using mvn deploy, just ${project.version} is the filename; the same when I set it in pom.xml to XX ${buildNumber} -- then the filename is file-XXX ${buildNumber} (<- not the content of buildNumber, but the literal text ${buildNumber}). What am I doing wrong? I also want to have the files installed with ${project.version} ${buildNumber}. Thanks for any help, Markus A: Not 100% sure I follow your question, but I had a problem getting a build number in my WAR manifest. The discussion here helped me out. I had to create a global property called build.version <properties> <build.version>${project.version}-r${buildNumber}</build.version> </properties> and use that instead of using ${buildNumber} directly. Hopefully that'll be some help with your problem.
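For reference, a minimal sketch of the kind of pom.xml wiring being discussed (plugin coordinates per the Codehaus buildnumber-maven-plugin; the finalName element shown here is an assumption about where the asker's version string is being set):

```xml
<build>
  <!-- finalName controls the name of the packaged file under target/ -->
  <finalName>${project.artifactId}-${build.version}</finalName>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>buildnumber-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>validate</phase>
          <goals><goal>create</goal></goals>
        </execution>
      </executions>
      <!-- the plugin reads the revision from SCM by default, so an <scm>
           section must be configured (or a timestamp format used instead) -->
    </plugin>
  </plugins>
</build>

<properties>
  <!-- the indirection suggested in the answer above -->
  <build.version>${project.version}-r${buildNumber}</build.version>
</properties>
```

One caveat worth noting: finalName only affects the file produced under target/. On mvn install and mvn deploy, Maven still names the artifact artifactId-version in the repository regardless of finalName, which would explain the symptom described in the question.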
{ "language": "en", "url": "https://stackoverflow.com/questions/163727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Recommended .NET Class for a collection of unique integers? What would you recommend for a class that needs to keep a list of unique integers? I'm going to want to Add() integers to the collection and also check for existence, e.g. Contains(). Would be nice to also get them in a list as a string for display, e.g. "1, 5, 10, 21". A: In my testing, I have found that a Dictionary with a dummy value is faster than a HashSet when dealing with very large sets of data (100,000+ in my case). I expect this is because the Dictionary allows you to set an initial capacity, but I don't really know. In the case you're describing, I would probably use the Dictionary if I was expecting a very large set of numbers, and then (or as I was adding to the Dictionary, depending on the intent) iterate over it using a string builder, to create the output string. A: HashSet: The HashSet<T> class provides high-performance set operations. A set is a collection that contains no duplicate elements, and whose elements are in no particular order... The capacity of a HashSet<T> object is the number of elements that the object can hold. A HashSet<T> object's capacity automatically increases as elements are added to the object. The HashSet<T> class is based on the model of mathematical sets and provides high-performance set operations similar to accessing the keys of the Dictionary<TKey, TValue> or Hashtable collections. In simple terms, the HashSet<T> class can be thought of as a Dictionary<TKey, TValue> collection without values. A HashSet<T> collection is not sorted and cannot contain duplicate elements... A: If you can't use .NET 3.5, then you can't use HashSet. If that's the case, it's easy to roll your own based on the Dictionary structure. public class Set<T> { private class Unit { ... no behavior } private Dictionary<T, Unit> d; .... } Unit is intended to be a type with exactly one value. It doesn't matter what you map elements to, just use the keys to know what's in your set.
The operations you asked for in the question are straightforward to implement. A: You could inherit a class from KeyedCollection. This way your key can be the value itself, and you can override ToString so that you get your desired output. This could give you the behaviour you want/need. Note: this answer was for the Framework 2.0 part of the question.
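A minimal .NET 3.5 sketch of the HashSet<T> approach, including the display string the asker wanted:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var numbers = new HashSet<int>();

        // Add returns false if the value is already present,
        // so duplicates are rejected silently.
        numbers.Add(1);
        numbers.Add(5);
        numbers.Add(10);
        numbers.Add(21);
        numbers.Add(5);                          // ignored: already in the set

        Console.WriteLine(numbers.Contains(10)); // True
        Console.WriteLine(numbers.Count);        // 4

        // HashSet has no defined order, so sort first if the
        // display string should be stable.
        var sorted = new List<int>(numbers);
        sorted.Sort();
        Console.WriteLine(string.Join(", ",
            sorted.ConvertAll(n => n.ToString()).ToArray())); // 1, 5, 10, 21
    }
}
```

The ToArray() call is there because the .NET 3.5 overload of string.Join takes a string[]; on .NET 4 and later, string.Join(", ", sorted) works directly.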
{ "language": "en", "url": "https://stackoverflow.com/questions/163732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: ASP.NET project size Are there any known issues around how many "pages" are in an ASP.NET project? Does the size of the DLL created by the project matter at all? My existing project is about 150 pages and the DLL is only around 3MB but it has increased from about 50 pages and 0.5 MB recently A: Scott Hanselman got on the subject two years ago. The absolute limit is your system memory. The bigger the project the more memory it will use. Surely you can stay within [50-200] projects in a solution. If you find your Visual Studio taking more memory than expected, start thinking about breaking your projects up. In my opinion the best thing to do is to make the project size a design requirement. EDIT : I'm not really answering your question. To make it clear: the more elements (pages, projects, images, references, etc.) you have in your VS2003, VS2005 or VS2008 solution, the more memory Visual Studio will take. A: I haven't run into any issues with sites up to hundreds of pages in size. How big of a site are we talking about? Are there built-in redundancies between pages which can be resolved by having a single page take parameters? A: Often times it makes sense to break down subsections of a larger site or project into multiple ASP.NET web projects. This makes the site way more modular. Additionally, if you have common controls that are used throughout the site, that often is cause for a project that has just those controls. To address the question directly, however, I've managed projects that have many, many pages and have had no issue... aside from it becoming "fun" to browse through and develop against. A: Technically, you can have hundreds (I'd venture to say thousands) of 'pages' in a web application project. What Russell said though, to break it into modules, would potentially make it easier to maintain. Is there a limit? No. Should you organize your project so that it's easier to manage? Yes.
A: There aren't concerns unless you are trying to debug it (or turn off batch compilation). Or if you have a bunch of folders. Each folder will have one DLL for all the files with batch compilation turned on. So if you break things up into a bunch of folders, that could cause fragmentation problems in the process's memory. Otherwise, no issues for lots of pages.
{ "language": "en", "url": "https://stackoverflow.com/questions/163740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Autoconf test for JNI include dir I'm working on a configuration script for a JNI wrapper. One of the configuration parameters is the path to jni.h. What's a good quick-and-dirty Autoconf test for whether this parameter is set correctly for C++ compilation? You can assume you're running on Linux and g++ is available. Alternatively, is there a way to get javah (or a supporting tool) to give me this path directly? A: Then there is the easy way: http://www.gnu.org/software/autoconf-archive/ax_jni_include_dir.html Sometimes it is best to just use the standard recipes. A: Checking for headers is easy; just use AC_CHECK_HEADER. If it's in a weird place (i.e., one the compiler doesn't know about), it's entirely reasonable to expect users to set CPPFLAGS. The hard part is actually locating libjvm. You typically don't want to link with this; but you may want to default to a location to dlopen it from if JAVA_HOME is not set at run time. But I don't have a better solution than requiring that JAVA_HOME be set at configure time. There's just too much variation in how this stuff is deployed across various OSes (even just Linux distributions). This is what I do: AC_CHECK_HEADER([jni.h], [have_jni=yes]) AC_ARG_VAR([JAVA_HOME], [Java Runtime Environment (JRE) location]) AC_ARG_ENABLE([java-feature], [AC_HELP_STRING([--disable-java-feature], [disable Java feature])]) case $target_cpu in x86_64) JVM_ARCH=amd64 ;; i?86) JVM_ARCH=i386 ;; *) JVM_ARCH=$target_cpu ;; esac AC_SUBST([JVM_ARCH]) AS_IF([test X$enable_java_feature != Xno], [AS_IF([test X$have_jni != Xyes], [AC_MSG_FAILURE([The Java Native Interface is required for Java feature.])]) AS_IF([test -z "$JAVA_HOME"], [AC_MSG_WARN([JAVA_HOME has not been set.
JAVA_HOME must be set at run time to locate libjvm.])], [save_LDFLAGS=$LDFLAGS LDFLAGS="-L$JAVA_HOME/lib/$JVM_ARCH/client -L$JAVA_HOME/lib/$JVM_ARCH/server $LDFLAGS" AC_CHECK_LIB([jvm], [JNI_CreateJavaVM], [LIBS=$LIBS], [AC_MSG_WARN([no libjvm found at JAVA_HOME])]) LDFLAGS=$save_LDFLAGS ])]) A: FYI - the patch below against the latest ax_jni_include_dir.m4 works for me on Macos 11.1. --- a/m4/ax_jni_include_dir.m4 +++ b/m4/ax_jni_include_dir.m4 @@ -73,13 +73,19 @@ fi case "$host_os" in darwin*) # Apple Java headers are inside the Xcode bundle. - macos_version=$(sw_vers -productVersion | sed -n -e 's/^@<:@0-9@:>@ *.\(@<:@0-9@:>@*\).@<:@0-9@:>@*/\1/p') - if @<:@ "$macos_version" -gt "7" @:>@; then - _JTOPDIR="$(xcrun --show-sdk-path)/System/Library/Frameworks/JavaVM.framework" - _JINC="$_JTOPDIR/Headers" + major_macos_version=$(sw_vers -productVersion | sed -n -e 's/^\(@<:@0-9@:>@*\).@<:@0-9@:>@*.@<:@0-9@:>@*/\1/p') + if @<:@ "$major_macos_version" -gt "10" @:>@; then + _JTOPDIR="$(/usr/libexec/java_home)" + _JINC="$_JTOPDIR/include" else - _JTOPDIR="/System/Library/Frameworks/JavaVM.framework" - _JINC="$_JTOPDIR/Headers" + macos_version=$(sw_vers -productVersion | sed -n -e 's/^@<:@0-9@:>@*.\(@<:@0-9@:>@*\).@<:@0-9@:>@*/\1/p') + if @<:@ "$macos_version" -gt "7" @:>@; then + _JTOPDIR="$(xcrun --show-sdk-path)/System/Library/Frameworks/JavaVM.framework" + _JINC="$_JTOPDIR/Headers" + else + _JTOPDIR="/System/Library/Frameworks/JavaVM.framework" + _JINC="$_JTOPDIR/Headers" + fi fi ;; *) _JINC="$_JTOPDIR/include";;
{ "language": "en", "url": "https://stackoverflow.com/questions/163747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: COM Registration and the GAC I have a web project, a C# library project, and a web setup project in Visual Studio 2005. The web project needs the C# library, which needs to be registered in the GAC. This is simple: just add the GAC folder to the setup project and drop the primary output of the C# library in there. The C# library also needs to be registered for COM interop. I select the primary output in the GAC folder and change the Register property to vsdrpCOM. I build the setup project and run it, but the DLL never gets registered for COM. This is not really a surprise to me. I have always had to add an installer class which had a custom action which used RegistrationServices.RegisterAssembly to properly register my DLLs for COM. So I apply this workaround, which I have accepted for years, to my situation. Now I find that custom actions assigned to primary output in the GAC folder of a setup project prevent the setup project from even building. I have always felt like I was hacking things to get .NET and COM to play nice with setup and deployment projects. What is the proper way to solve my problem? A: To register the assemblies with COM use: regasm /codebase. See: http://msdn.microsoft.com/en-us/library/tzat5yw6(VS.80).aspx for details. Your method for installing into the GAC seems fine to me. A: WiX will allow you to install your COM objects and assemblies in the GAC with little fuss: WIX A: I don't know about the "proper" way, but my team solves this issue with batch files. We call the file something like "cycleCOM.bat" and we run it after a build when we need to update the GAC and Component Services.
(I personally use a keyboard launcher so I can trigger this with a few keypresses) This is a super-simplified view of the batch file: REM Unregister anything currently in component services RemoveCom+.vbs C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\regsvcs /u "C:\source\bin\FooCode.dll" REM Remove from GAC "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" /uf FooCode REM Register in component services C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\regsvcs "C:\source\bin\FooCode.dll" REM Add to GAC "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\gacutil" /if "C:\source\bin\FooCode.dll" The RemoveCom+.vbs file shuts down anything running in component services. Its code is: set cat = CreateObject ("COMAdmin.COMAdminCatalog") Set apps = cat.GetCollection("Applications") bFound = false apps.Populate lNumApps = apps.Count ' Enumerate through applications looking for AppName. Dim app For I = lNumApps - 1 to 0 step -1 Set app = apps.Item(I) If app.Name = "FooCode" Then cat.ShutdownApplication ("FooCode") apps.Remove(I) apps.SaveChanges End If Next We have multiple versions of this script for each version of our application that we work on locally, and the scripts make it easy (well, as easy as it can get) to keep the GAC and COM+ in sync with the version of code being edited.
{ "language": "en", "url": "https://stackoverflow.com/questions/163748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to use boost::bind in C++/CLI to bind a member of a managed class I am using boost::signal in a native C++ class, and now I am writing a .NET wrapper in C++/CLI, so that I can expose the native C++ callbacks as .NET events. When I try to use boost::bind to take the address of a member function of my managed class, I get compiler error C3374, saying I cannot take the address of a member function unless I am creating a delegate instance. Does anyone know how to bind a member function of a managed class using boost::bind? For clarification, the following sample code causes compiler error C3374: #include <boost/bind.hpp> public ref class Managed { public: Managed() { boost::bind(&Managed::OnSomeEvent, this); } void OnSomeEvent(void) { } }; A: After googling some more, I finally found a nice blog post about how to do this. The code in that post was a little more than I needed, but the main nugget was to use a global free function that takes an argument of the managed this pointer wrapped in a gcroot<> template. See SomeEventProxy(...) in the code below for an example. This function then turns around and calls the managed member I was trying to bind. My solution appears below for future reference.
#include <msclr/marshal.h> #include <boost/bind.hpp> #include <boost/signal.hpp> #include <iostream> #using <mscorlib.dll> using namespace System; using namespace msclr::interop; typedef boost::signal<void (void)> ChangedSignal; typedef boost::signal<void (void)>::slot_function_type ChangedSignalCB; typedef boost::signals::connection Callback; class Native { public: void ChangeIt() { changed(); } Callback RegisterCallback(ChangedSignalCB Subscriber) { return changed.connect(Subscriber); } void UnregisterCallback(Callback CB) { changed.disconnect(CB); } private: ChangedSignal changed; }; delegate void ChangeHandler(void); public ref class Managed { public: Managed(Native* Nat); ~Managed(); void OnSomeEvent(void); event ChangeHandler^ OnChange; private: Native* native; Callback* callback; }; void SomeEventProxy(gcroot<Managed^> This) { This->OnSomeEvent(); } Managed::Managed(Native* Nat) : native(Nat) { native = Nat; callback = new Callback; *callback = native->RegisterCallback(boost::bind( SomeEventProxy, gcroot<Managed^>(this) ) ); } Managed::~Managed() { native->UnregisterCallback(*callback); delete callback; } void Managed::OnSomeEvent(void) { OnChange(); } void OnChanged(void) { Console::WriteLine("Got it!"); } int main(array<System::String ^> ^args) { Native* native = new Native; Managed^ managed = gcnew Managed(native); managed->OnChange += gcnew ChangeHandler(OnChanged); native->ChangeIt(); delete native; return 0; } A: While your answer works, it exposes some of your implementation to the world (Managed::OnSomeEvent). 
If you don't want people to be able to raise the OnChange event willy-nilly by invoking OnSomeEvent(), you can update your Managed class as follows (based on this advice): public delegate void ChangeHandler(void); typedef void (__stdcall *ChangeCallback)(void); public ref class Managed { public: Managed(Native* Nat); ~Managed(); event ChangeHandler^ OnChange; private: void OnSomeEvent(void); Native* native; Callback* callback; GCHandle gch; }; Managed::Managed(Native* Nat) : native(Nat) { callback = new Callback; ChangeHandler^ handler = gcnew ChangeHandler( this, &Managed::OnSomeEvent ); gch = GCHandle::Alloc( handler ); System::IntPtr ip = Marshal::GetFunctionPointerForDelegate( handler ); ChangeCallback cbFunc = static_cast<ChangeCallback>( ip.ToPointer() ); *callback = native->RegisterCallback(boost::bind<void>( cbFunc ) ); } Managed::~Managed() { native->UnregisterCallback(*callback); delete callback; if ( gch.IsAllocated ) { gch.Free(); } } void Managed::OnSomeEvent(void) { OnChange(); } Note the alternate bind<R>() form that's used.
{ "language": "en", "url": "https://stackoverflow.com/questions/163757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Generic GDI+ Error I have a Form being launched from another form on a different thread. Most of the time it works perfectly, but I get the below error from time to time. Can anyone help? at System.Drawing.Bitmap..ctor(Int32 width, Int32 height, PixelFormat format) at System.Drawing.Bitmap..ctor(Int32 width, Int32 height) at System.Drawing.Icon.ToBitmap() at System.Windows.Forms.ThreadExceptionDialog..ctor(Exception t) at System.Windows.Forms.Application.ThreadContext.OnThreadException(Exception t) at System.Windows.Forms.Control.WndProcException(Exception e) at System.Windows.Forms.Control.ControlNativeWindow.OnThreadException(Exception e) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Form.ShowDialog(IWin32Window owner) at System.Windows.Forms.Form.ShowDialog() A: The user has to be able to see multiple open accounts simultaneously, right? So you need multiple instances of a form? Unless I'm misreading something, I don't think you need threads for this scenario, and I think you are just introducing yourself to a world of hurt (like these exceptions) as a result. Assuming your account form is called AccountForm, I'd do this instead: Dim acctForm As New AccountForm() acctForm.Show() (Of course you'll have your own logic for that ... ) I might even put it in the ShowForm method so that I could just update my caller thusly: ShowForm() And be done. 
Now all of this assumes that you've encapsulated the AccountForm nicely so that each instance has its own data, and they don't share anything between instances. Using threads for this is not only overkill, but likely to introduce bugs like the exception at the top. And my experience in debugging multi-threaded WinForms apps has shown that these bugs are often very difficult to replicate, and extremely tricky to find and fix. Oftentimes, the best fix is to not multithread unless you absolutely, positively have to. A: Can you elaborate what you are trying to do here? If you are trying to show a Form from a different thread than the UI thread then refer to this question: My form doesn't properly display when it is launched from another thread A: The application is an Explorer-Type customer management system. An account form is launched from the "Main" explorer form on a separate background thread. We do this because the user needs to be able to have multiple accounts open at the same time. We launch the form using this code: Thread = New Thread(AddressOf ShowForm) Thread.SetApartmentState(ApartmentState.STA) Thread.IsBackground = True
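In C# terms, the approach suggested above -- one UI thread, many non-modal form instances -- might look like the following sketch (AccountForm and statusLabel are hypothetical names standing in for the asker's types):

```csharp
// Each account gets its own non-modal instance, all on the UI thread --
// no second message loop, no cross-thread form access, no STA thread setup.
private void ShowForm(string accountId)
{
    var acctForm = new AccountForm(accountId);  // AccountForm is hypothetical
    acctForm.Show();                            // non-modal: several can be open at once
}

// If a background worker genuinely must update a form, marshal the call
// back to the UI thread instead of touching the control directly:
private void UpdateStatus(string text)
{
    if (statusLabel.InvokeRequired)
        statusLabel.Invoke(new Action<string>(UpdateStatus), text);
    else
        statusLabel.Text = text;
}
```

The InvokeRequired/Invoke pattern is the standard WinForms way to keep worker threads from touching controls owned by the UI thread, which is the class of bug behind "cannot access a disposed object" errors like the one in the stack trace.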
{ "language": "en", "url": "https://stackoverflow.com/questions/163760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SetURL method of QuickTime object undefined? I have a hidden embedded QuickTime object on my page that I'm trying to control via JavaScript, but it's not working. The object looks like this: <object id="myPlayer" data="" type="audio/mpeg" pluginspage="http://www.apple.com/quicktime/download" width="0" height="0"> <param name="autoPlay" value="false" /> <param name="controller" value="false" /> <param name="enablejavascript" value="true" /> </object> There is nothing in the data parameter because at render time, I don't know the URL that's going to be loaded. I set it like this: var player = document.getElementById("myPlayer"); player.SetURL(url); The audio will later be played back with: player.Play(); Firefox 3.0.3 produces no error in the JavaScript console, but no playback occurs when Play() is called. Safari 3.0.4 produces the following error in the console: "Value undefined (result of expression player.SetURL) is not object." Internet Explorer 7.0.5730.11 gives the following extremely helpful error message: "Unspecified error." I have QuickTime version 7.4 installed on my machine. Apple's documentation says that SetURL() is correct, so why does it not work? A: Try giving the object element some width and height (1px by 1px) and make it visible within the viewport when you attempt to communicate with the plugin via JavaScript. I've noticed that if the plugin area is not visible on screen it's unresponsive to JS commands. This might explain why this isn't working for you in IE. Safari and Opera should work, but Firefox will definitely require the Netscape-style embed element, and really you should provide both. Additionally, once you have both, you need to ascertain which element (the object versus the embed) to address in which browser. A: I don't know the QuickTime API, but this might be worth a shot: player.attributes.getNamedItem('data').value = 'http://yoururlhere'; A: The page you linked to doesn't mention a 'data' attribute.
They have an EMBED and PARAM within an OBJECT, with the EMBED's 'src' attribute having the url, but I don't see an EMBED in what you posted.
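A sketch of the object-versus-embed detection the first answer describes. This assumes the fallback embed element is given a name with an "_embed" suffix -- that suffix is just a convention invented for this example, not something QuickTime requires:

```javascript
// Grab whichever element actually hosts the plugin in this browser:
// IE typically exposes the API on the <object>, Gecko-based browsers
// on the <embed>.
function getPlayer(id) {
    var obj = document.getElementById(id);
    // If the element already exposes the QuickTime API, use it.
    // (Old IE may report plugin methods with typeof "unknown" rather
    // than "function", hence the loose check.)
    if (obj && typeof obj.SetURL !== "undefined") {
        return obj;
    }
    // Otherwise fall back to an <embed name="myPlayer_embed" ...>.
    return document[id + "_embed"] || null;
}

var player = getPlayer("myPlayer");
if (player) {
    player.SetURL(url);
    player.Play();
}
```

Whichever element is found, the SetURL/Play calls then go to the one the current browser actually wired to the plugin.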
{ "language": "en", "url": "https://stackoverflow.com/questions/163761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a good iTunes coverflow-type control for WPF? I am currently using Telerik's carousel control, but it is lacking many features and is buggy. Is there a good control out there that looks like the cover flow control in iTunes? A: ElementFlow control is inside the CodePlex project called FluidKit - can be downloaded from here
I did look at the other ones currently mentioned in the other answers though, here are some comments on them (in no particular order): FluidKit's ElementFlow * *Open source, I used the latest source code, but did not try out any patches *Animation was smooth *Transition didn't feel very refined, the pictures clip each other in an odd way *Didn't seem geared for showing a handful of element's on the screen at once, it tries to show everything, and from some of the discussion comments, apparently isn't virtualized *After adding some images to the demo through the provided button, a large portion of them couldn't seem to get selected *Doesn't have reflections Mindscape CoverFlow * *Commercial *Animation was smooth *Doesn't "popup" selected item, feels very 2D *Has reflections DevExpress Carousel * *Commercial *No online demo and I didn't try to obtain the trial, looks polished though Telerik Carousel * *Commercial *Animation was smooth *The transition wasn't as pleasing to me, the new picture passed through the old one *Doesn't have reflections Xceed Cardflow 3D * *Commercial (professional edition only) *Animation was smooth, if you went quickly it would show blank cards speeding by and then fade in the actual data on the cards when you slowed down *Supports flipping the selected item, like in iTunes *Has reflections A: For more details about the control - ElementFlow control at Pavan's blog A: Mindscape now provide a commercial WPF Coverflow control as part of their WPF Elements control pack that might be useful also. A: http://www.telerik.com/products/wpf/carousel.aspx http://www.devexpress.com/Products/NET/Controls/WPF/Carousel/dependency_properties.xml Both of these are FAR more versatile than your average cover flow clone (though they can easily just do that too if you want). 
I'd recommend Telerik well above DevExpress as WPF is still a relatively immature technology and DevExpress are very poor at keeping up with the tech game (they only JUST released a VS2010-supporting version of their DXperience suite despite promising it "just around the corner" since the start of January, while Telerik, ComponentOne etc all keep up with current tech. Not good enough for enterprise).
{ "language": "en", "url": "https://stackoverflow.com/questions/163775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }